In this printing of the thesis, there are no images. All images for the thesis will be provided with ‘long descriptions’ for accessibility reasons and these will be incorporated into the text.
The appendices are also not included.
The images and appendices are all available online in the master version of the thesis at http://www.sunriseresearch.org/WebContentAccessibility/AccessibilityPrinciples/00-title-page.html
Table of contents, figures and tables
Glossary (definitions of terms from thesis)
Summary of thesis (1000 words)
Timeline of author’s contributions
School of Mathematical and Geospatial Sciences, Science, Engineering and Technology Portfolio, RMIT University.
[Month and year when the thesis is submitted for the degree.]
Except where due acknowledgement has been made, the work is that of the candidate alone. The work has not been submitted previously, in whole or in part, to qualify for any other academic award. The content of the thesis is the result of work that has been carried out since the official commencement date of the approved research program. No editorial work has been carried out by a third party and ethics procedures and guidelines have been followed.
This research has had special assistance from a number of sources. In combination, they have made it possible for the work to be undertaken in an integrated and supportive environment. The early analysis of accessible Web Content Development (Appendix 8) was supported financially by a number of Australian and international agencies. The research was also supported during two periods at the University of Tsukuba in Japan, where the author was a very grateful Visiting Research Scientist.
Some documents based on the research were co-authored in collaboration with members of the IMS Global Learning Consortium, the Dublin Core DC Accessibility Working Group, ISO/IEC JTC1 SC36, and members of the INCITS V2 Working Group and MMI-DC Accessibility and Multilingual Workshops (see Appendices 1 and 2). The author was a working member of all these working groups and is grateful for the environment they created.
Description of assistance
IMS Australia: participation in the IMS Web Content Accessibility work was supported by DEST.
IMS Global Learning Consortium Accessibility Working Group - in particular, Jutta Treviranus, Madeleine Rothberg, Cathleen Barstow, Andy Heath, Hazel Kennedy, Anastasia Cheetham, David Weinkauf, Mark Norton, Alex Jackl and Martyn Cooper.
Martin Ford, Martin Ford Consultancy, with whom the author undertook accessibility and metadata standards work in Europe.
University of Melbourne, Department of Information Systems, for a grant to work on WebCT's accessibility, for accommodation, and for a friendly environment in which to work. All were essential and appreciated.
Oregon State University...
Particular thanks to John Gardner and others for their help with the difficult topics of haptic representations, mathematics, science etc.
Very special thanks to Charles McCathieNevile for his encouragement, sharp critique, friendship and, of course, his expert advice.
La Trobe University, Department of Computing and Mathematical Sciences, for a position as an Adjunct Associate Professor and making it easy to do research.
University of Tsukuba, for wonderful periods in which to work, to learn about the Japanese way of life, and to develop an interest in further research on distributed resources.
Behzad Kateli, Sophie Lissonnet, James Munro and Sarah Pulis, former students who have been very helpful throughout the research and offered useful technical advice and personal support.
my very wonderful, tolerant and supportive family.
Table 1 - Table of acknowledgements
AccessForAll: Metadata for User-centred, Inclusive Access to Digital Resources

Table of Contents
Images and Tables
Table of tables
Thesis Summary
Abbreviations and Web resources
Glossary of terms
Chapter 1: Preamble
    An outdated view of accessibility and the Web
    A new approach to accessibility for an updated Web
    The significance of accessibility
    A metadata approach
Chapter 2: Introduction
    Accessibility and Disability
    Models of disability
Chapter 4: Universal design
    The early history of ...
    Separation of Structure ...
    The WAI Requirements
    WAI Compliance and ...
    Special resources for people with disabilities
    ... the W3C Approach
    The UK Disability Rights Commission Report
Chapter 5: Other routes ...
    Accessible code and ...
    A Practical Approach
    Post-production services and libraries
Chapter 6: Metadata
    Definitions of metadata
    Formal Definition of DC and WCAG 2.0
Chapter 8: User needs
    Profiles of user needs
    User needs as a ...
Chapter 9: Resource ...
    Primary and equivalent alternative resources (or components)
    A universal remote console
    The URC specifications
Chapter 10: Match and ...
    The value of metadata
    ... for Bibliographic Records
    Proof of concept
Chapter 12: Conclusion
Citations
Figure ???: Map of Signatures and Ratifications of UN Convention A/RES/61/106 as of 10 December 2007 (UN Enable)
Figure ???: ...
Figure ???: ...
Figure ???: Australian Prime Minister's Website (Pandora, 2007)
Figure ???: The metadata as viewed in a Safari browser (Pandora, 2007)
Figure ???: The metadata as viewed in a Safari browser (Pandora, 2007)
Figure ???: Diagram of Web 2.0 (O'Reilly, 2005)
Figure ???: The simple AccessForAll model that provides individual users with resources that match their accessibility needs and preferences
Figure ???: System development (Burstein, 2002, p. 153)
Figure ???: New York Times Online (2005)
Figure ???: Accessibility ...
Figure ???: Zoot Suit (Moock, 2005)
Figure ???: UK Government Accounting Web Page
Figure ???: Demo of two pages - sight vs sound differences (HFI-chocolate)
Figure ???: Disabilities pie chart (Microsoft, 2003a)
Figure ???: Likelihood of difficulties (Microsoft, 2003b)
Figure ???: Likelihood of difficulties by population (Microsoft, 2003b)
Figure ???: Difficulties by severity (Microsoft, 2003c)
Figure ???: Difficulties by age (Microsoft, 2003c)
Figure ???: Aging population (Microsoft, 2003c)
Figure ???: WCAG
Figure 12: ATAG-WCAG-UAAG
Figure ???: The wider context for accessibility (Kelly et al, 2005, p. 8)
Figure ???: A tangram (Kelly, 2006)
Figure ???: A progressive set of images showing how (RDF or other) tagging of content can be used to separate content from tags; the tags themselves can then be tagged, or sorted in multiple ways
Figure ???: DC metadata as grammar (1) (Baker, 2000)
Figure ???: DC metadata as grammar (2) (Baker, 2000)
Figure ???: DCMI Resource Model (Powell et al, 2007)
Figure ???: DCMI Description Set Model (Powell et al, 2007)
Figure ???: DCMI Vocabulary Model (Powell et al, 2007)
Figure ???: The Singapore Framework (Nilsson, 2007)
Figure ???: A tag cloud (Library Thing)
Figure ???: Topic maps
Figure ???: Topic maps as an ontology framework
Figure ???: Two fragments of the Semantic Web
Figure ???: Front page of the Age newspaper on 9/11/2007 in Safari and Opera Mini, showing headlines so phone users can easily select what to read or look at
Figure ???: Accessibility Abstract model (Pulis, 2008)
Figure ???: AccessForAll structure and vocabulary (image from the AccessForAll specifications [IMS Accessibility])
Figure ???: Access Extensibility Statement (Jackl, 2003)
Figure ???: Diagram showing the cycle of searches and the role of the AccessForAll server
Figure ???: A typical set of user needs and preferences showing the default and the user's individual choices
Figure ???: What do we need to know about an object for accessibility?
Figure ???: Multiple instantiations of a single Web page (HFI-testing)
Figure ???: IMS structure for accessibility metadata (from section 2.3, page 7, AccMD IM; Norton, 2004)
Figure ???: A user with a voice-controlled URC and a seated user employing a touch-controlled URC (Gottfried)
Figure ???: A wheelchair user struggling to reach an ATM (HREOC)
Figure ???: As the items are adjusted for matching to the user's PNP, their DRD more closely matches the ...
Figure ???: A pyramid based on the Howel model of accessibility
Figure ???: The reuse of components in the 48,084 pages on the tested section of the La Trobe Web site, from the La Trobe Website audit (Nevile, 2004)
Figure ???: The behaviours for interoperability using AccLIP and AccMD in TILE (AccMD IM)
Figure ???: An AccessForAll process diagram
Figure ???: The modified section of the original diagram, with a separate filtering service shown
Figure ???: Four FRBR entities associated with two resources and their possible relationships (Morozumi et al, 2006)
Figure ???: The Globe federated search model using ProLearn Query Language (Ternier et al, 2008)
Figure ???: The point of loss of information in the LOM -> DC translation process (Johnston et al)
Figure ???: A possible structure of a future metadata standardization framework, from Mikael Nilsson
Figure ???: ABC Video on Demand
Figure ???: Thesis ...
Tables
    table of acknowledgements
    table of contents
    table of images and tables
    the plan to make WCAG testable
    table of services
    AccessForAll structure and vocabulary
    6.2.1 Display Preference Set
    6.2.2 Screen reader Preference Set
    6.2.9 Screen Enhancement Generic Preference Set
    A typical set of user needs and preferences showing the default and the user's individual choices
    IMS structure for accessibility metadata
    The behaviours for interoperability using AccLIP and AccMD in TILE
The first decade of international effort to make the Web accessible has not achieved its goal, and a different approach is needed. To be more inclusive, the Web needs published resources to be described so that they can be tailored to the needs and preferences of individual users, and it needs resources to be continuously improvable in response to a wide range of needs and preferences. Both requirements call for the management of resources, which can be achieved with metadata. Specifying metadata to achieve such a goal is complex, given requirements that had not themselves previously been determined.
Accessibility is a term often used to describe property rights or other aspects of the availability of resources or services. In this thesis, the term means the capability of individuals to access digital resources in perceptual modes that are appropriate for them at the time.
Ensuring the accessibility of the Web has been a major concern of the World Wide Web Consortium (W3C) for a decade: those responsible for inventing the Web recognised early that features such as the graphical user interface, which attracted so many to the Web, were simultaneously alienating many from it, because those users could not perceive content in the form in which most of it is provided. For nearly a decade, the Web has acted as a publishing medium, and efforts to make the publications accessible have been based on a set of guidelines developed by international committees of experts led by the W3C. The guidelines have acted as specifications for developers.
More recently, the Web has become less of a one-way publications medium and, now known as Web 2.0, it is an interactive space in which resources become ‘live’ objects capable of reformation and reforming other resources.
What this thesis offers is an argument for an ongoing, process-oriented approach to the accessibility of resources, one that supports continuous improvement of any given resource, not necessarily by its author, not necessarily by design or with knowledge of the original resource, and by contributors who may be distributed globally. It argues that the current dependence on production guidelines, and on post-production evaluation of resources as either universally accessible or not, does not adequately provide either the accessibility necessary for individuals or the continuous, evolutionary improvement possible within what is defined as a Web 2.0 environment. It argues that a distributed, social-networking view of the Web as interactive, combined with a social model of disability and given the management tools of machine-readable, interoperable AccessForAll metadata as developed, can support continuous improvement of the accessibility of the Web with less effort on the part of individual developers and better results for individual users.
This thesis argues that metadata is essential and integral to any shift to an ongoing, process-oriented approach to accessibility. It is at the core of the research in as much as it provides essential infrastructure for a new approach to accessibility.
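The core idea of the summary, matching a resource's description against a user's stated needs and preferences, can be illustrated with a small sketch. The property names and values below are simplified illustrations, not the actual IMS/ISO AccessForAll bindings:

```python
# A minimal sketch of AccessForAll-style matching: a user's Personal Needs
# and Preferences (PNP) statement is compared against each resource's
# Digital Resource Description (DRD). The dictionary keys and values here
# are invented for illustration, not the real specification vocabularies.

def matches(pnp, drd):
    """Return True if the resource description satisfies every stated need."""
    return all(drd.get(need) == wanted for need, wanted in pnp.items())

def select_resources(pnp, resources):
    """Filter a pool of described resources down to those the user can use."""
    return [r["id"] for r in resources if matches(pnp, r["description"])]

# A user who needs captions and cannot use audio-only content.
user_pnp = {"captions": True, "audio_only": False}

resource_pool = [
    {"id": "video-1", "description": {"captions": False, "audio_only": False}},
    {"id": "video-2", "description": {"captions": True, "audio_only": False}},
    {"id": "podcast", "description": {"captions": False, "audio_only": True}},
]

print(select_resources(user_pnp, resource_pool))  # ['video-2']
```

The point of the sketch is that neither the resource nor the user changes: the match is made by comparing two pieces of metadata, which is why the thesis treats metadata as the essential infrastructure.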
ABC Video On Demand http://www.abc.net.au/vod/news/
AccLIP BPG, IMS Learner Information Package Accessibility for LIP Best Practice Guide - http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_bestv1p0.html
AccLIP Binding, IMS Learner Information Package Accessibility for LIP XML Binding - http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_bindv1p0.html
AccLIP IM, IMS Learner Information Package Accessibility for LIP Information Model - http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_infov1p0.html
AccLIP Conf, IMS Learner Information Package Accessibility for LIP Conformance Specification - http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_confv1p0.html
AccLIP UC, IMS Learner Information Package Accessibility for LIP Use Cases - http://www.imsglobal.org/accessibility/acclipv1p0/imsacclip_usecasesv1p0.html
AccMD Overview, IMS AccessForAll Meta-data Overview http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_oviewv1p0.html
AccMD IM, IMS AccessForAll Meta-data Information Model http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_infov1p0.html
AccMD Binding, IMS AccessForAll Meta-data XML Binding http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_bindv1p0.html
AccMD BPG, IMS AccessForAll Meta-data Best Practice Guide http://www.imsglobal.org/accessibility/accmdv1p0/imsaccmd_bestv1p0.html
AGLS, AGLS Metadata Standard, Standards Australia 5044 http://www.agls.gov.au/
Alt-i-lab 2005 http://www.imsglobal.org/altilab
APH, American Printing House for the Blind http://www.aph.org/louis.htm
APLR, CEN APLR, http://www.cen-aplr.org
ATAG, Treviranus, J., McCathieNevile, C., Jacobs, I., & Richards, J., (Eds), (2000). Authoring Tool Accessibility Guidelines 1.0 http://www.w3.org/TR/WAI-AUTOOLS/
ATRC, Adaptive Technology Resource Center http://atrc.utoronto.ca/
AVCC The Australian Vice-Chancellors' Committee http://www.avcc.edu.au/
CC/PP, World Wide Web Consortium's Composite Capabilities and Personal Preferences specifications http://www.w3.org/Mobile/CCPP/
CEN/ISSS Learning Technologies Workshop http://www.cen.eu/cenorm/businessdomains/businessdomains/isss/activity/wslt.asp
CNIB, Canadian National Institute for the Blind http://www.cnib.ca/library/visunet/
Cornell University Library http://www.library.cornell.edu/iris/research/index.html
CSS, Cascading Style Sheets http://www.w3.org/TR/REC-CSS2/
CWIS Internet Scout http://scout.wisc.edu/Projects/CWIS/
DCMI, Dublin Core Metadata Initiative http://dublincore.org/
DCMI Access, Dublin Core Metadata Initiative Accessibility Working Group http://dublincore.org/groups/access/
DCMI Terms, Dublin Core Metadata Initiative Terms http://dublincore.org/documents/dcmi-terms/ (retrieved January 13, 2005)
DCMI DCAM, Dublin Core Abstract Model, http://dublincore.org/documents/abstract-model/
DDS, Dewey Decimal Classification System, http://www.oclc.org/dewey/
DRC, Disability Rights Commission (UK) http://www.drc-gb.org/
DRD, ISO standard for Digital Resource Description (FCD 24751-3, Individualized Adaptability and Accessibility in E-learning, Education and Training, Part 3: Access For All Digital Resource Description), online at http://jtc1sc36.org/doc/36N1141.pdf
EdNA, Educational Network of Australia http://www.edna.edu.au/
Fluid Drag-and-Drop http://wiki.fluidproject.org/display/fluid/Drag+and+Drop+Design+Pattern
FRBR Functional Requirements for Bibliographic Records Final Report. http://www.ifla.org/VII/s13/frbr/frbr.pdf
GEM Gateway to Educational Materials http://www.learningcommons.org/educators/library/gem.php
Google Desktop http://desktop.google.com/
Google Similar Pages http://www.googleguide.com/similar_pages.html
HFI, Human Factors International http://www.humanfactors.com/
HREOC, Human Rights and Equal Opportunity Commission of the Australian Federal Government http://www.hreoc.gov.au/
HTML 4.01, HyperText Markup Language. Raggett, D., Le Hors, & A., Jacobs, I., (Eds), (1999). HTML 4.01 Specification http://www.w3.org/TR/html4/
HTTP, Hypertext Transfer Protocol -- HTTP/1.1. Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., & Berners-Lee, T., (Eds), (1999). http://tools.ietf.org/html/rfc2616
IEEE 1484.12.1 - 2002 Standard for Learning Object Metadata: http://ltsc.ieee.org
IEEE/LOM, IEEE Learning Technology Standards Committee .http://ltsc.ieee.org/wg12/20020612-Final-LOM-Draft.html or http://ltsc.ieee.org/wg12/files/LOM_1484_12_1_v1_Final_Draft.pdf
IMS Accessibility http://www.imsglobal.org/accessibility/
IMS AccLIP, IMS Learner Information Package Accessibility for LIP http://www.imsglobal.org/accessibility/index.html#acclip
IMS AccMD, IMS AccessForAll Meta-data Specification http://www.imsglobal.org/accessibility/index.html#accmd
IMS AG, IMS Accessibility Guidelines for Education http://www.imsglobal.org/accessibility/index.html#accguide
IMS GLC, IMS Global Learning Consortium http://www.imsglobal.org/
INCITS V2 community http://v2.incits.org/
Inclusion UK http://inclusion.uwe.ac.uk/
International Academy of Digital Arts and Sciences http://www.iadas.net/
ISO coordinate ref system see http://www.isotc211.org/
ISO 2788 standard (http://www.ontopia.net/topicmaps/materials/tm-vs-thesauri.html#iso-2788)
ISO/IEC JTC1 SC36 http://jtc1sc36.org/
ISO/IEC JTC1 SC35 WG8 User Interfaces for Remote Interaction http://www.open-std.org/JTC1/sc35/wg8/
LMS Angel http://www.angellearning.com/
MMI-DC, European Committee for Standardization Meta-Data (Dublin Core) Workshop http://www.cenorm.be/isss/mmi-dc/
MathML, Mathematics Markup Language http://www.w3.org/Math/
METS, Metadata Encoding and Transmission Standard, http://www.loc.gov/standards/mets/
MRC UNC, Metadata Research Center, University of North Carolina at Chapel Hill http://ils.unc.edu/mrc/
MRP UCB, Metadata Research Program (formerly OASIS), University of California, Berkeley http://metadata.sims.berkeley.edu/index.html
NCD, US National Council on Disability http://www.ncd.gov/
NLS, National Library Service for the Blind and Physically Handicapped, Library of Congress http://lcweb.loc.gov/nls/
NIST, National Institute of Standards and Technology http://www.nist.gov/
OAI, Open Archives Initiative http://www.openarchives.org/
OCLC, Online Computer Library Center http://www.oclc.org
Open University, UK, http://www.open.ac.uk/
OZeWAI 2004 Conference http://www.OZeWAI.org/2004/
OZeWAI 2007 Conference http://www.OZeWAI.org/2007/
PDF, Portable Document Format http://www.iso.org/iso/catalogue_detail?csnumber=38920
PNP, ISO standard text of FCD 24751-2, Individualized Adaptability and Accessibility in E-learning, Education and Training, Part 2: Access For All Personal Needs and Preferences Statement http://jtc1sc36.org/doc/36N1140.pdf
RDF, Resource Description Framework. http://www.w3.org/RDF/
RNIB, Royal National Institute for the Blind. http://www.rnib.org.uk/
RSS, Really Simple Syndication or RDF Site Summary, http://web.resource.org/rss/1.0/spec
s.508, Section 508 of the US Rehabilitation Act http://www.section508.gov/
SAKAI, SAKAI Collaboration and Learning Environment for Education http://sakaiproject.org/
SALT, Specifications for Accessible Learning Technologies http://ncam.wgbh.org/salt/
SC36, ISO JTC1 SC36, Learning, Education and Training standards http://jtc1sc36.org/ or http://www.iso.org/iso/en/stdsdevelopment/tc/tclist/TechnicalCommitteeDetailPage.TechnicalCommitteeDetail?COMMID=4997
SGML Standard Generalized Markup Language ISO 8879 http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=16387
SMIL Synchronised Multimedia Integration Language http://www.w3.org/TR/REC-smil/
STEVE Museum http://www.steve.museum/
STSN Speech-to-Text Services Network http://www.stsn.org/
SVG, World Wide Web Consortium's Scalable Vector Graphics http://www.w3.org/Graphics/SVG/
SWAP, Smart Web Accessibility Platform http://www.ubaccess.com/swap.html
SWG-A, ISO/IEC JTC1 SWG-A http://www.jtc1access.org/
TPB, Talboks- och Punktskriftsbiblioteket, Sweden http://www.tpb.se/
Testlab, a European project http://www.svb.nl/project/testlab/testlab.htm
TextHelp Systems Inc. http://www.texthelp.com/
The Library of Congress National Library Service for the Blind and Physically Handicapped (NLS). The Union Catalogue (BPHP) and the file of In-Process Publications (BPHI) can both be searched via the NLS website (see http://lcweb.loc.gov/nls/).
TILE, The Inclusive Learning Exchange http://www.barrierfree.ca/tile/
Topic Maps http://www.topicmaps.org/
UAAG, World Wide Web Consortium WAI's User Agent Accessibility Guidelines http://www.w3.org/TR/WAI-USERAGENT/
UML, Unified Modeling Language http://www.uml.org/
UN Enable http://www.un.org/disabilities/
University of Toronto, http://www.utoronto.ca/
URI, Uniform Resource Identifier http://labs.apache.org/webarch/uri/
W3C, World Wide Web Consortium, http://www.w3c.org/
WAI, World Wide Web Consortium's Web Accessibility Initiative http://www.w3c.org/WAI/
WCAG, Chisholm, W., Vanderheiden, G. and Jacobs, I. (1999). Web Content Accessibility Guidelines Version 1.0 http://www.w3.org/TR/WAI-WEBCONTENT/
WCAG-2 Web Content Accessibility Guidelines Version 2.0 Caldwell, B., Chisholm, W., Vanderheiden, G. and White, J. (2004). http://www.w3.org/TR/WCAG20/
WCAG WG http://www.w3.org/WAI/GL/
WGBH/NCAM, The Carl and Ruth Shapiro Family National Center for Accessible Media http://ncam.wgbh.org/
Webby award winners http://www.webbyawards.com/
WG7, Working Group 7 of ISO JTC1 SC36, Learning, Education and Training http://jtc1sc36.org/ or http://www.iso.org/iso/en/stdsdevelopment/tc/tclist/TechnicalCommitteeDetailPage.TechnicalCommitteeDetail?COMMID=4997
WSG, Web Standards Group http://webstandardsgroup.org/
WSIS World Summit on the Information Society http://www.itu.int/wsis/
XML, World Wide Web Consortium's Extensible Markup Language (http://www.w3.org/TR/REC-xml/)
a successful matching of information and communications to a user's needs and preferences to enable the user to interact with and perceive the intellectual content of the information or communications. This includes being able to use whatever assistive technologies or devices that are reasonably involved in the situation and that conform to suitably chosen standards.
people with ...
doing what is reasonably required to ensure accessibility for the maximum number of people individually
'metadata', from section 1.3 of the AGLS Metadata usage guide (revised edition):
“Metadata is just a new term for something that has been around for as long as humans have been writing. It is the Internet-age term for information that librarians traditionally have put into catalogues and archivists into archival control systems. The term ‘meta’ comes from a Greek word that denotes ‘alongside, with, after, next’. More recent Latin and English usage would employ ‘meta’ to denote something transcendental, or beyond nature. Metadata, then, can be thought of as data about other data. Although there are many varied uses for metadata, the term is commonly used to refer to descriptive information about online resources, generally called ‘resource discovery metadata’.
Resource discovery metadata is information in a structured format that describes a resource or a collection of resources. A metadata record, then, consists of a set of properties, or elements, which characterise resources and which are used to describe a resource. For example, a metadata system common in libraries – the library catalogue – contains a set of metadata records with elements that describe a book or other library item: author, title, date of creation or publication, subject coverage, and the call number specifying location of the item on the shelf.”
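The catalogue-record idea in the quotation can be sketched in a few lines. The element names follow the quoted library example (author, title, date, subject, call number); the values are, of course, invented:

```python
# A minimal resource-discovery metadata record, modelled on the library
# catalogue example quoted above: a structured set of elements (properties)
# that together describe a single item. All values are invented.
book_record = {
    "author": "Example, A. N.",
    "title": "An Invented Introduction to Metadata",
    "date": "1999",
    "subject": ["metadata", "cataloguing"],
    "call_number": "025.3 EXA",
}

# Because the record is structured, each element can be read, indexed,
# or searched independently of the resource it describes.
for element, value in book_record.items():
    print(f"{element}: {value}")
```

The same structure underlies any metadata system: agree on a set of element names, then supply a value for each element for each resource described.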
things, including services and objects
digital information and communication, including information that points to non-digital information
United Nations Convention on the Rights of Persons with Disabilities, Article 2, Definitions
“For the purposes of the present Convention:
"Communication" includes languages, display of text, Braille, tactile communication, large print, accessible multimedia as well as written, audio, plain-language, human-reader and augmentative and alternative modes, means and formats of communication, including accessible information and communication technology;
"Language" includes spoken and signed languages and other forms of non spoken languages;
"Discrimination on the basis of disability" means any distinction, exclusion or restriction on the basis of disability which has the purpose or effect of impairing or nullifying the recognition, enjoyment or exercise, on an equal basis with others, of all human rights and fundamental freedoms in the political, economic, social, cultural, civil or any other field. It includes all forms of discrimination, including denial of reasonable accommodation;
"Reasonable accommodation" means necessary and appropriate modification and adjustments not imposing a disproportionate or undue burden, where needed in a particular case, to ensure to persons with disabilities the enjoyment or exercise on an equal basis with others of all human rights and fundamental freedoms;
"Universal design" means the design of products, environments, programmes and services to be usable by all people, to the greatest extent possible, without the need for adaptation or specialized design. "Universal design" shall not exclude assistive devices for particular groups of persons with disabilities where this is needed.” (UN, 2006)
While there are many organisations related to accessibility, too many even to name, some have played a significant role in shaping the Web since its inception. These are identified here because they also provide many online resources, and any understanding of the 'literature' of Web accessibility, or of metadata relating to it, relies on familiarity with their work.
The W3C's approach has evolved over time, but it is currently understood as promoting 'universal design'. This idea was fundamental to WCAG 1.0 and is maintained in the forthcoming WCAG 2.0 guidelines for the creation of content for the Web. WCAG is complemented by guidelines for authoring tools that reinforce the principles in the content guidelines, and the W3C also offers guidelines for browser developers. Significantly, the guidelines are also implemented by the W3C in its own work via the Protocols and Formats Working Group, which monitors all W3C developments from an accessibility perspective.
The W3C entered the accessibility field shortly after the Web started to take a significant place in the information world, at the instigation of its Director and especially of Professor James Miller, the W3C lead for Technology and Society at the time. The W3C established a new activity, known as the Web Accessibility Initiative (WAI), with funding from international sources. From the beginning, although the W3C is essentially a members' consortium, in the case of the WAI all activities have been undertaken openly (all mailing lists, for example, are open to the public at all times) and the experts depend upon input from many sources for their work.
Over the years, through its energetic outreach program, the W3C/WAI activity has done more than develop standards. It publishes a range of materials that aim to help those concerned with accessibility to work on it in their own context.
The Trace Research & Development Center is a part of the College of Engineering, University of Wisconsin-Madison. Founded in 1971, Trace has been a pioneer in the field of technology and disability.
Trace Center Mission Statement:
To prevent the barriers and capitalize on the opportunities presented by current and emerging information and telecommunication technologies, in order to create a world that is as accessible and usable as possible for as many people as possible. ...
Trace developed the first set of accessibility guidelines for Web content, as well as the Unified Web Access Guidelines, which became the basis for the World Wide Web Consortium's Web Content Accessibility Guidelines 1.0 [TRACE].
Wendy Chisholm, who originally worked at Trace, was a leading staff member of WAI for many years and author of a number of the accessibility guidelines and other documents.
The Adaptive Technology Resource Centre is at the University of Toronto. It advances information technology that is accessible to all through research, development, education, proactive design consultation and direct service. The Director of ATRC, Professor Jutta Treviranus, has been significant in the standards work in many fora and the group has contributed the main work on the ATAG. They are also largely responsible for initiating the work for the AccessForAll approach to accessibility and the technical development associated with it.
The Carl and Ruth Shapiro Family National Center for Accessible Media is part of WGBH, one of the larger public broadcasting companies in the USA. Henry Becton, Jr., President of WGBH, is quoted on the WGBH Web site as saying that:
WGBH productions are seen and heard across the United States and Canada. In fact, we produce more of the PBS prime-time and Web lineup than any other station. Home video and podcasts, teaching tools for schools and home-schooling, services for people with hearing or vision impairments ... we're always looking for new ways to serve you! (WGBH About, 2007)
With respect to people with disabilities, the site offers the following:
People who are deaf, hard-of-hearing, blind, or visually impaired like to watch television as much as anyone else. It just wasn't all that useful for them ... until WGBH invented TV captioning and video descriptions.
Public television was first to open these doors. WGBH is working to bring media access to all of television, as well as to the Web, movie theaters, and more (WGBH Access, 2007).
NCAM is a major vehicle for these activities within the media context and its Research Director, Madeleine Rothberg, has been a significant researcher and author in the work that supports AccessForAll in a range of such contexts.
In addition to the organisations involved in the research and development that led to the AccessForAll approach and standards, there are the standards bodies themselves, which have not only published standards but also initiated work that made the standards' development possible. In many cases, standards are determined by 'standards' bodies that are, as in the case of the International Organization for Standardization [ISO], federations of national bodies whose members ultimately have the power to make laws with respect to the specifications.
W3C's role in the standards world is often described as different from, say, that of ISO because of the structure of the organisation and the processes used to develop specifications for recommendation (de facto standards). W3C membership is open to any organisation and is tiered so that larger, better-funded organisations contribute considerably more funding than smaller or not-for-profit ones. The work processes are defined by the W3C so that working groups are open, consult widely, and prepare documents that are voted on by members and then recommended, or otherwise, by the Director of the W3C, Sir Tim Berners-Lee. They are published as recommendations but usually referred to as standards and, certainly in the case of the accessibility guidelines, are de facto standards. In many countries, including Australia, they have been adopted into local laws in one way or another.
ISO collaborates with its partners, the International Electrotechnical Commission [IEC] and the International Telecommunication Union [ITU-T], particularly in the field of information and communication technology international standardization.
ISO makes clear on its Web site that it is
a global network that identifies what International Standards are required by business, government and society, develops them in partnership with the sectors that will put them to use, adopts them by transparent procedures based on national input and delivers them to be implemented worldwide (ISO in brief, 2006).
ISO federates 157 national standards bodies from around the world. ISO members appoint national delegations to standards committees. In all, there are some 50,000 experts contributing annually to the work of the Organization. When ISO International Standards are published, they are available to be adopted as national standards by ISO members and translated into a range of languages.
The Joint Technical Committee 1 of ISO/IEC is for standardization in the field of information technology. At the beginning of April 2007, it had 2068 published ISO standards related to the Technical Committee and its Sub-Committees; 538 published ISO standards under its direct responsibility; 31 participating countries; 44 observer countries; at least 14 other ISO and IEC committees and at least 22 international organizations in liaison (JTC1, 2007).
JTC1 SC36 WG7 is the working group for Culture, Language and Human-functioning Activities within Sub-Committee 36 for IT for Learning, Education and Training. It is this working group that has developed the AccessForAll standards for ISO. Co-editors for these standards come from Australia (Liddy Nevile), Canada (Jutta Treviranus) and the United Kingdom (Andy Heath), but there have been major contributions from others in the form of reviews, suggestions, discussion and support.
The IMS Global Learning Consortium [IMS] describes itself as having more than 50 Contributing Members and affiliates from every sector of the global learning community. They include hardware and software vendors, educational institutions, publishers, government agencies, systems integrators, multimedia content providers, and other consortia. IMS claims to provide "a neutral forum in which members work together to advocate the use of technology to support and transform education and learning" (IMS, 2007).
A joint project between WGBH/NCAM and IMS initiated the work on AccessForAll with a Specifications for Accessible Learning Technologies (SALT) Grant in December 2000. Anastasia Cheetham, Andy Heath, Jutta Treviranus, Liddy Nevile, Madeleine Rothberg, Martyn Cooper and David Wienkauf were particularly prominent in this work.
The Web site describes the Dublin Core Metadata Initiative as
an open organization engaged in the development of interoperable online metadata standards that support a broad range of purposes and business models. DCMI's activities include work on architecture and modeling, discussions and collaborative work in DCMI Communities and DCMI Task Groups, annual conferences and workshops, standards liaison, and educational efforts to promote widespread acceptance of metadata standards and practices (DCMI, 2007).
The DCMI Accessibility Community has been working formally on Dublin Core metadata for accessibility purposes since 2001. While the early work focused on how metadata might be used to make explicit the characteristics of resources as they related to the W3C WCAG, this goal has been realised in the AccessForAll work. The DCMI Accessibility Community has been working in close collaboration with the IMS and ISO efforts but it has engaged the metadata community, and therefore those working primarily in a wider context than education, especially including government and libraries. The author has been chairperson of the DCMI Accessibility community since its inception.
The European Committee for Standardization [CEN] was founded in 1961 by the national standards bodies of the European Economic Community and European Free Trade Association countries. CEN is a forum for the development of voluntary technical standards to promote free trade, the safety of workers and consumers, interoperability of networks, environmental protection, exploitation of research and development programmes, and public procurement (CEN, 2007).
A number of CEN committees have been involved in the development of AccessForAll, either in the form of contributed funding as for the MMI-DC, or in their independent review of the development of AccessForAll and how it will work in their context if it is adopted by the other standards bodies. Significant in this work have been Martyn Cooper, Martin Ford, Andy Heath, and Liddy Nevile who have all worked on CEN projects in recent years. The context for this work has included but not been limited to education.
There are a number of other standards bodies and regional associations that have considered the work in depth and contributed in some way. In fact, in early 2007, IMS versions of the specifications had been downloaded 28,082 times and the related guidelines more than 176,505 times (Rothberg, 2007). CanCore has published the CanCore Guidelines for the "Access for All" Digital Resource Description Metadata Elements (Friesen, 2006) following an interview with Jutta Treviranus in which she discusses the specifications (Friesen, 2005).
The Centre for Educational Technology and Interoperability Standards [CETIS] in the UK provides a national research and development service to UK Higher and Post-16 Education sectors, funded by the Joint Information Systems Committee. CETIS has published some summary documents about the IMS AccMD, IMS AccLIP and IMS Guidelines.
By March 31, 2008, there were 126 signatories to the United Nations Convention, 71 signatories to the Optional Protocol, 18 ratifications of the Convention and 11 ratifications of the Optional Protocol. Australia had signed the Convention but not ratified it (UN Enable, 2008). In an information age, everyone should have, one way or another, an equal right to information if they are to participate equally. The general aim of the new United Nations convention is to ensure that people with disabilities are treated as inclusively as the other groups of people identified in earlier conventions. In particular, this convention calls for inclusive access to information and communications for people with disabilities, and specifies a number of situations in which these rights must be enforced, including work, entertainment, health, politics and more (UN, 2006).
The idea that inclusive treatment of people eliminates the need for special considerations for people with disabilities is at the heart of the research reported in this thesis. It is derived from what has been defined as the social model of disability (Oliver, 1990b). First, it attends to the limits on people's ability to participate in society rather than to any medically-defined 'defect' they may be considered to have. Second, it equally supports able-bodied people who, for one reason or another, cannot participate fully.
The social model of disability spreads responsibility for inclusion across the community. This research aims to enable continuous, distributed, community effort to make the World Wide Web inclusive.
For a decade, effort to make the Web accessible has focused on following, or otherwise, a set of guidelines that have come to be treated as specifications. These guidelines have proven inadequate to ensure accessibility for all, because the universal accessibility model on which they depend is flawed. Recent estimates of the accessibility of the Web are as low as 3% (e-Government Unit, UK Cabinet Office, 2005).
If a user is blind, eyes-busy or using a small screen, instructions for getting from one place to another presented as a map may be imperceptible, while a text version that can be read aloud and heard would be perceptible. Providing a text description of travel routes is thus an example of an accessibility improvement for a map. Managing the map and the new version so that the latter is associated with the map, and discoverable at the same time as the map, is what catalogue records, or metadata, can do for digital objects.
The research advocates a process to support ongoing, incremental improvement of accessibility. This depends upon efficient management and description of distributed resources and their improvements, so they can be matched to people's individual needs and preferences. The research elaborates what is called AccessForAll metadata (Nevile & Treviranus, 2006), a framework for describing resources and resource components. AccessForAll metadata provides a common language for such descriptions so that they can be shared, so they will interoperate across description protocols, and so they can be used by computers to match resources automatically to users' needs and preferences. AccessForAll metadata includes provision for a common way of describing people's needs and preferences.
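The matching idea can be illustrated with a small sketch. The field names below (access_modes, supplements, and so on) are illustrative only, loosely inspired by the AccessForAll vocabulary; they are not the normative element names of any of the specifications discussed here.

```python
# Illustrative sketch of AccessForAll-style matching: a user's needs and
# a resource's characteristics are described in the same vocabulary, so
# software can pair them automatically. Field names are hypothetical.

def match_resource(user_needs, resource, alternatives):
    """Return the resource if it meets the user's required access modes;
    otherwise look for a described alternative that does."""
    required = set(user_needs["required_access_modes"])  # e.g. {"textual"}
    if required <= set(resource["access_modes"]):
        return resource
    for alt in alternatives:
        # An alternative supplements a primary resource and offers
        # content in different access modes.
        if alt["supplements"] == resource["id"] and required <= set(alt["access_modes"]):
            return alt
    return None  # no suitable version has been described (yet)

# A visual map with a separately authored textual travel description:
map_resource = {"id": "map-42", "access_modes": ["visual"]}
text_directions = {"id": "map-42-text", "supplements": "map-42",
                   "access_modes": ["textual"]}
blind_user = {"required_access_modes": ["textual"]}

chosen = match_resource(blind_user, map_resource, [text_directions])
# For this user, the textual alternative is selected in place of the map.
```

The point of the sketch is that neither the map's author nor the alternative's author needs to know about the user; the shared descriptive language does the work.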
The research distinguishes the context in which earlier accessibility work took place. In what might be thought of as a 'Web 1.0' environment, one-way publishing was the dominant activity. In the current 'Web 2.0' environment, interactive publication happens across the Web in unpredictable ways, even where authors and publishers provide well-structured, cohesive Web sites. Most people are learning to 'Google' and approach information from a range of perspectives and directions, often coming into resources through what is effectively a back door, taking from resources what is of interest and disregarding or discarding the rest. The research also relies upon the interactivity and energy available from what is known as social networking within the Web 2.0 environment (Flickr, YouTube, LibraryThing, Facebook, etc.). It exploits new technologies to solve an old problem and to share the responsibility for the problem well beyond the practices, knowledge, and expertise of the original resource authors.
The research is not limited to classic 'Web pages', but includes access to all resources, including services, that are digitally addressed. AccessForAll metadata already describes digital resources and is being extended to describe a wider range of objects including events and places (ISO/IEC JTC1 SC36, 2008). Descriptions of the accessibility of those physical places and events will be Web addressable, so access to those places will be 'on the Web'.
The United Nations publishes a map (Figure 1) that shows involvement in the United Nations (UN) Convention for the Rights of People with Disabilities. As of June 2008, more than eighteen months after the Convention was adopted by the UN, Australia had only signed the convention but not ratified it. Unless it is ratified by the Australian government, it has no legal status in Australia. On the other hand, Australians have been involved for many years in international efforts with W3C, ISO, IMS GLC, CEN, and others to ensure that information technology and digital resources are accessible to everyone. They have actively participated in the work of the World Wide Web Consortium [W3C] and others to curb the alienating effects of new multimedia technologies on the Web.
The recent United Nations convention on the rights of people with disabilities clearly states that accessibility is a matter of human rights. In the 21st century, it will be increasingly difficult to conceive of achieving rights of access to education, employment health care and equal opportunities without ensuring accessible technology (Roe, 2007).
Making the Web accessible to everyone has proven more difficult than anticipated. While Roe (2007) considers the value of accessibility to be far-reaching, Constantine (2006) summarises the unfortunate reality; much as one might like to make the Web accessible, it is not accessible and is not likely to become so unless something very effective becomes central to operations and organisations.
At the Museums and the Web 2006 conference, one word had the power to abruptly silence a lively discussion among multimedia developers: accessibility. When the topic was introduced during lunchtime conversation to a table of museum web designers, the initial silence was followed by a flurry of defensive complaints. Many pointed out that the lack of knowledgeable staff and funding resources prevented their museum from addressing the “special” needs of the online disabled community beyond alternative-text descriptions. Others feared that embracing accessibility in multimedia meant greater restrictions on their creativity. A few brave designers admitted they do not pay attention to the guidelines for accessibility because the Web Content Accessibility Guidelines (WCAG) 1.0 standards are dense with incomprehensible technical specifications that do not apply to their media design efforts. Most importantly, only one institution had an accessibility policy in place that mandated a minimum level of access for online disabled visitors. Conversations with developers of multimedia for museums about accessibility were equally restrained. Developers frequently blamed the authoring tools for the lack of support for accessible multimedia development. Other vendors simply dismissed the subject or admitted their lack of knowledge of the topic. Only one developer asked for advice on how to improve the accessibility of their learning applications (Constantine, 2006).
Roe (2007) elaborates the extent of the problem:
About 15% of Europeans report difficulties performing daily life activities due to some form of disability. With the demographic change towards an ageing population, this figure will significantly increase in the coming years. Older people are often confronted with multiple minor disabilities which can prevent them from enjoying the benefits that technology offers. As a result, people with disabilities are one of the largest groups at risk of exclusion within the Information Society in Europe.
It is estimated that only 10% of persons over 65 years of age use internet compared with 65% of people aged between 16-24. This restricts their possibilities of buying cheaper products, booking trips on line or having access to relevant information, including social and health services. Furthermore, accessibility barriers in products and devices prevents older people and people with disabilities from fully enjoying digital TV, using mobile phones and accessing remote services having a direct impact in the quality of their daily lives.
Moreover, the employment rate of people with disabilities is 20% lower than the average population. Accessible technologies can play a key role in improving this situation, making the difference for individuals with disabilities between being unemployed and enjoying full employment between being a tax payer or recipient of social benefits (Roe, 2007).
People with disabilities who are alienated by inaccessibility are regarded by Australian law (HREOC, 2002) as discriminated against. They are able to claim damages from those who discriminate against them if all relevant conditions are satisfied. This means Australia recognizes a general right, and it is incumbent on a victim to prove, within the legal system, that they have unreasonably suffered from discrimination. Although such a course has been used, reported cases are rare: as with other cases likely to provoke negative publicity, they would normally be settled out of court where possible, and so not publicly reported. Such a legal situation does not operate as a major threat to large organisations, especially as the damages awarded so far have not been substantial, e.g. Maguire v Sydney Organising Committee for the Olympic Games (HREOC, 1999).
Accessibility efforts in many cases aim to make a single resource universally accessible to everyone. Universal accessibility involves providing the same resource in many forms so that people with disabilities can use the full range of perceptions to access it across all platforms, fixed and mobile, standard and adaptive. Universal accessibility is distinguished from individual accessibility or accessibility to an individual user. Many resources are individually accessible while not universally accessible and many universally accessible resources (as defined by the standards in use) are not accessible by some individual users (Chapter 4).
Reinforcing the disinclination to worry about accessibility is the common belief that it costs a lot to make resources universally accessible (Steenhout, 2008). Frequently, it is left to a semi-technical person in a relatively insignificant position within an organisation or operation to champion accessibility as best they can. Anecdotally, they frequently report that all was going well until the resource was about to be released; then the marketing manager or some other more significant participant chose to add a particular feature and not be constrained by accessibility concerns. (In the 1990s, Nevile was responsible for the accessibility of the original design of two major government portals, the Victorian Better Health Channel and the Victorian Education Channel. In both cases, late requests for change threatened the integrity of the sites but, in the end, the earlier accessibility work made it easy to avoid any ill-effects of the changes.)
Economic factors are, therefore, important in the context of accessibility. Many believe that accessibility means more expense when resources are being developed and more resources to be supplied to the range of users. It is true that making an inaccessible resource accessible can take considerable effort, expertise and expense and, even then, is not always possible. On the other hand, some publishers are finding that by making accessibility a priority, they actually gain financially through cost savings (Jackson, 2004; Chapter 3).
Practicality is important. It has long been known that it is not always possible to make an inaccessible resource accessible without having to compromise some of the characteristics of the resource, depending on what sort of resource it is. If designers provide an attractive 'look and feel' for a Web site, for example, it may not be possible to have exactly that look and satisfy all the accessibility specifications. Additionally, those who are experts in accessibility are not usually designers but more often technical people. In practice, a designer who works within the accessibility constraints is able to design creatively and avoid the accessibility pitfalls.
One common reason that resources are not accessible is that they depend on a software application that does not render the content, or does not control or display it, in ways that make it accessible to everyone. Many people with disabilities use specialised equipment or software to gain access to content. Many people use mobile phones, and others use screens with content projected onto them, or printers, or old computers. Sometimes the content creator takes the end user into account. Unfortunately, this often means they arbitrarily anticipate, for example, that the resource will be printed on local-standard sized paper, in which case they fix the electronic version to match the way they expect it to appear on paper. This does not always work for the paper version, because local standards differ, and neither does it work for the digital version, because screen sizes and windows are rarely appropriate for it. In cases where users have unusual needs or preferences, such as a need to change the font size or reverse the colours of the background and foreground, it is unlikely the necessary changes can be made unless the digital version of the fixed print version is very well encoded for accessibility. The World Wide Web Consortium [W3C] has developed a technology that allows a single resource to be presented in a variety of ways, depending on the medium, and explicitly allows the user a form of presentation that overrides any made available by the publisher of the resource or the browser software [Cascading Style Sheets, CSS].
Many think of the Web as 'homepages' or Web sites. This is not sufficient. A Web page may contain links to documents that reside in databases, open or closed, and those 'documents' might be simply some application-free content, or complex combinations of multimedia objects, even dynamically assembled for the individual user and locked into specific applications. The Web Accessibility Initiative [WAI] is the arm of W3C that focuses on accessibility for the Web. WAI distinguishes between two classes of software used in this context: authoring tools and user agents. The classes include software that does very different things according to what it is being used to author or access, which can range from literature to computer code, images to tactile objects. Authoring tools should both produce accessible content and be accessible, according to the relevant WAI guidelines [Authoring Tool Accessibility Guidelines, ATAG]. User agents are the software applications used to access the content. They should also be accessible and do the right thing with the content so that it is rendered in an accessible way [User Agent Accessibility Guidelines, UAAG]. (User agents are often known as Web browsers but they can take many forms.)
The WAI set of guidelines, originally three for authoring tools, user agents and content [Web Content Accessibility Guidelines, WCAG], has been in constant development or revision for more than a decade (Chapter 4). The guidelines have been adopted in many countries and used by developers all around the world. Despite this enormous effort, the Web is far from accessible to everyone (Chapters 3, 4). The underlying principle for the guidelines has continued to be universal access, achieved by having a single resource that can be used by everyone.
In recent years, total dependence on the WAI work and its derivatives (such as Section 508 of the US Rehabilitation Act) has been re-examined and a range of post-production solutions are being proposed. In particular, methods have been developed that support increasing the accessibility of a resource by a third party unconnected with the original publisher. ubAccess, for example, developed a service (SWAP) that could assist people with dyslexia who were having problems with resources, without reference to the original creator of the resource. In a similar way, a service called Accessmonkey gives access to resources that would otherwise be inaccessible to some people, again without reference to the original author of the resource (Bigham & Ladner, 2007).
In 2008, more and more such services are emerging. What is significant is not simply their number. It is that they represent a significant shift in thinking about accessibility. If resources are not going to be created universally accessible, or found in a universally accessible form, and it is unlikely there will be a significant change in this situation, it makes sense to think more about what can be done post-production.
Going a little further, the FLUID project aims to develop interchangeable user interface components that will be able to interpret and present content in ways that are accessible to individual users (2007). This will depend on content that is not confined to a specific interface or application, but free to be adopted and adapted by standards-conformant applications and interfaces, and thus accessible to all who use them.
The original use of the World Wide Web was to enable a few people scattered around the world to work together on shared files located on their own computers, to make them discoverable using a Uniform/Universal Resource Identifier [URI], and to access them using the HyperText Transfer Protocol [HTTP]. The early use of the Web was for collaborative development. In the first decade of widespread use of the Web as an information and communication technology, the main activity was the publication of resources. This involved the use of HTML encoded files that offer embedded links, embedded multimedia resources and may have had cascading stylesheets [HTML 4.01] and, often, relied on third party HTTP or Web servers to deliver those files to users. Now, as is recognised by the new name 'Web 2.0' (see below), all sorts of interactive, collaborative and shared activities are being undertaken using a wide range of technologies.
The research establishes that the dominant model of accessibility work is still grounded in the early Web, a network of static documents that may be updated but are usually from a single source. In this thesis, the term Web 1.0 is used to designate the Web as it was commonly used in its first decade (1995-2005). O'Reilly (2005) used the version terminology to differentiate between the uses of the Web to draw attention to more recent developments in the way people use the Web. Of course, it should be noted that the Web does not, in fact, have versions (O'Reilly, 2005) and this terminology is more about how it is used than what it can do.
Web 1.0 work assumes editorial control over publishing, even where the authors come from a single organisation and the task is shared among a number of people. In such cases, in fact, many organisations impose style guides (or the equivalent) on authors and/or provide templates within which those authors have constrained scope for their content. In such circumstances, it might be possible to force adherence to certain style standards, as it was in the earlier days when documents to be printed were encoded in the Standard Generalized Markup Language [SGML] (the predecessor of HTML). The model also assumes that users of Web resources will interact with them as their author intended, but this is increasingly proving not to be the case as people use search engines, dynamic feeds from within Web sites, and so on.
A side-effect of Web 1.0 work is that many people still do not recognise that they can use standard Web pages and Web authoring tools in almost exactly the same way as they use non-standardised proprietary office tools, including to format, print, exchange and manage documents. Many people are still using office tools that do not take advantage of the accessibility possible with available technologies. Organisations in which proprietary office tools are used form sub-cultures around those tools, and participants develop materials (resources) that suit the particular software. They are often not aware that their resources could be created and managed just as easily while being far more flexible and interoperable, not only between software systems but also across a range of modalities (on paper, on individual screens, as presentations on large screens, read aloud, etc.). Proprietary interests and competition have encouraged proprietary developers to distinguish their software by adding features, often regardless of the inaccessibility simultaneously introduced by those features (Nevile, personal observations).
At the time of writing, there is a worldwide debate on the wisdom of adopting the Microsoft specification Office Open XML as an International Organization for Standardization [ISO] standard for documents. One reason is the problem of accessibility that may flow from that decision (Krempl, 2008). Portable Document Format [PDF], another proprietary format, has long been a problem for accessibility and continues to be so, despite being an ISO standard (W3C PDF, 2001).
The research establishes that the historic view of accessibility is no longer effective. The complexity of satisfying the original guidelines is shown to be out of the range of most developers. There are too many techniques involved; they are not explicit; they cannot always be tested with certainty; they do not completely cover even chosen use cases and are not intended to cover all user requirements; they are contradictory in some cases; they have not been applied systematically, and anyway, they do not apply to all potential information and communications. All of these claims are documented in this thesis.
This thesis is not alone in making the claims above: there are many authors and developers both writing and acting; some people have started work on post-production and even post-delivery reparation of resources lacking in accessibility, and others are proposing new ways of thinking about accessibility. Their work is considered in detail in the research.
What this thesis offers is an argument in favour of an on-going process approach to the accessibility of resources, one that supports continuous improvement of any given resource: not necessarily by the author of the resource, not necessarily by design or with knowledge of the original resource, and possibly by contributors who are distributed globally. It argues that the current dependence on production guidelines and post-production evaluation of resources as either universally accessible or not does not adequately provide for either the accessibility necessary for individuals or the continuous, evolutionary improvement possible within what is defined as a Web 2.0 environment. It argues that a distributed, social-networking view of the Web as interactive, combined with a social model of disability and the management tools of machine-readable, interoperable AccessForAll metadata as developed, can support continuous improvement of the accessibility of the Web with less effort on the part of individual developers and better results for individual users.
As outlined above, there are a number of ways to make resources accessible. Relying solely on authors to 'do the right thing' by following the universal accessibility approach has generally failed to make resources universally accessible (Chapter 4) but many resources are nevertheless suitable for individual users, if only they can find them. Similarly, most resources that are universally accessible are not discoverable as such.
In Europe, there have been moves to apply metadata to resources (to catalogue them) declaring their accessibility in terms of conformance with various available specifications: the UK government has mandated certain provisions (BSI, 2006; Sloan, 2005; Appendix 6) and the European Committee for Standardization [CEN] supported a later-abandoned project, led by EuroAccessibility, for an accessibility conformance mark for use in all European countries (RNIB, 2003). There have also been reservations about such an approach (Phipps et al, 2005). The current research challenges the wisdom of that practice. As there are often legal implications in holding resources that are not accessible, even where there is no economic incentive that might bias evaluations, it is hard to know which evaluations to trust. It is also very hard to evaluate accessibility accurately. One reason is that only some of the criteria can be tested against absolute standards; most depend upon human judgment. This causes problems because many people can manage the technical tests, using automatic tools, but do not realise they also have to do human-based user testing, and when they do, they lack the knowledge, resources and expertise to do it properly. To rectify this situation, those developing specifications, such as the World Wide Web Consortium's Web Accessibility Initiative, are endeavouring to make all specifications testable against absolute values. Unfortunately, to achieve this, they appear to be compromising some of the specifications (Hudson & Weakley, 2007) and end up having to ignore the needs of important communities of users, such as those with cognitive disabilities (Moss, 2006; WCAG 2.0, 2008a).
Metadata that merely identifies resources that have been marked as accessible is not particularly reliable and anyway, as is shown below (Chapter 4), conformance with the best-known guidelines does not necessarily mean a resource is universally accessible. Certainly, such metadata does not say if the resource is optimised for any particular individual user seeking it. More specific metadata is required if it is to be useful to the individual user. This has been recognised by the authors of the WCAG guidelines and there is provision in the forthcoming version of WCAG for metadata as a result of the AccessForAll work (W3C WCAG 2.0, 2008a).
If resources are to be made more accessible post-production, they will need to be discoverable prior to being delivered and found to be inaccessible, and any missing or supplementary components, or services to adapt them, will also need to be discoverable. Resource descriptions, like catalogue records, can usefully contain descriptions of the accessibility characteristics of resources without any need to declare whether the resource is or is not universally accessible. Such a description is known as AccessForAll metadata and is discussed in detail below (Chapter 7). AccessForAll metadata has been adopted by four major standards bodies. First, the IMS Global Learning Consortium [IMS GLC] adopted it for the education sector. Then Sub-Committee 36 of the Joint Technical Committee of the International Organization for Standardization and the International Electrotechnical Commission [ISO/IEC JTC1 SC36] adopted it, again for the education sector. The Dublin Core Metadata Initiative [DCMI] is adopting it for general metadata, for all sectors, and most recently, Standards Australia has adopted it for the AGLS Metadata Standard [AGLS], for all Australian resources.
This thesis describes the background, theories, design and development of the metadata, as documented in the various published or forthcoming standards, and work associated with its adoption by various stakeholders.
In addition to metadata that describes the accessibility characteristics of resources, it is necessary to define metadata that describes the accessibility needs and preferences of users. 'AccessForAll' metadata is best used to match resources to users' needs and preferences, automatically where possible. Determining how such a match might be achieved in a distributed environment is a continuing interest of the author and colleagues in Japan, especially in as much as it relates to the use of the Functional Requirements for Bibliographic Records [FRBR], OpenURL (Hammond & Van de Sompel, 2003), and possibly GLIMIRs (Weibel, 2008a). This highlights the significance of the metadata as defined, the potential matches, and the ways in which AccessForAll metadata contributes to the accessibility process.
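The matching idea described above can be illustrated with a small sketch. Note that the property names and values below (access_modes, adaptations, needs) are invented for illustration only; they are not the vocabulary of any published AccessForAll specification, and a real matching service would work over far richer descriptions.

```python
# Illustrative sketch only: a resource description, a user profile, and a
# matching function. All property names and values here are hypothetical,
# not taken from any published AccessForAll schema.

def matches(resource, profile):
    """Return True if the resource, or one of its described adaptations,
    satisfies every stated need in the user's profile."""
    candidates = [resource] + resource.get("adaptations", [])
    for need in profile.get("needs", []):
        if not any(need in c.get("access_modes", []) for c in candidates):
            return False
    return True

# A hypothetical resource description: a video with a captioned adaptation.
video = {
    "access_modes": ["visual", "auditory"],
    "adaptations": [{"access_modes": ["visual", "textual"]}],  # captions
}

# A hypothetical profile for a user who needs text in place of audio.
deaf_user = {"needs": ["textual"]}

print(matches(video, deaf_user))  # True: the captioned adaptation serves
```

The point of the sketch is that neither the resource nor the user is labelled 'accessible' or 'disabled'; the description of characteristics on one side and needs on the other is what makes an automatic match possible.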
Usability is well established as a criterion for the utility of a resource (Nielsen, 2008). A flexible approach including usability in a loose sort of 'tangram' model could significantly improve the Web's accessibility (Kelly et al, 2006, Kelly et al, 2008). The AccessForAll metadata enables the management of resources in such a process with adaptability for personal needs and preferences for a better result.
Understanding and significance of accessibility
Understanding accessibility is not easy given the huge number of different contexts and requirements possible. In addition, there are many definitions.
For the purposes of the research, accessibility is defined as a successful matching of information and communications to an individual user's needs and preferences to enable that user to interact with and perceive the intellectual content of the information or communications. This includes being able to use whatever assistive technologies or devices are reasonably involved in the situation and that conform to suitably chosen standards. Explanations of the more detailed characteristics of accessibility are considered in Chapter 3.
The literature reveals two significant things: a current common approach to accessibility that is significantly reliant on universal accessibility, as promoted by the World Wide Web Consortium [W3C], and a significant failure of that approach to make a sufficient difference.
Almost one in five Australians has a disability, and the proportion is growing. The full and independent participation by people with disabilities in web-based communication and information delivery makes good business and marketing sense, as well as being consistent with our society's obligations to remove discrimination and promote human rights. (HREOC, 2002)
In 2005, estimates of accessibility were as low as 3% (e-Government Unit, UK Cabinet Office, 2005), even for important public information. In 2008, despite the introduction of quite stringent provisions regarding the accessibility of government sites, SiteMorse (2008) published figures reporting that only 11.3% of the UK government websites surveyed passed the WCAG AA test that is now mandated for such sites (Cabinet Office, 2008). (The sites were tested only with automated tests, so the results are only indicative of 'universal accessibility'.) Those with access needs in Europe are estimated at 10-15% of the population, and the number is increasing as the population ages (European Commission Report Number DG INFSO/B2 COCOMO4, p. 14).
Microsoft Corporation commissioned research that suggests the benefits of accessibility will be enjoyed by 64% of all Web users (Forrester Inc., 2004). In 2004, the United Kingdom's Disability Rights Commission [DRC] reported on the accessibility of 1,000 UK Web sites (DRC, 2004). They showed that 81% of Web sites failed to meet minimum standards for Web access for people with disabilities. Later, at a press conference, the DRC claimed that even sites considered prima facie to be demonstrating good practice, in fact failed to satisfy minimum standards when fully tested by the DRC. These reports have been endorsed by the United Nations' Global Audit of Web Accessibility (Nomensa, 2006).
Brian Kelly (2008) commented:
What we can’t say is that the Web sites which fail the automated tests are necessarily inaccessible to people with disabilities. And we also can’t say that the Web sites which pass the automated tests are necessarily accessible to people with disabilities.
The lack of accessibility solutions leads to the need for a new, comprehensive process for accessibility that includes the use of metadata to facilitate discovery and delivery of digital resources that are accessible to individuals according to their particular needs and preferences at the time of delivery. When a user has a constraint that renders information inaccessible to them, they are deemed to have a need: for example, when a highly mobile person using a telephone cannot use a small-scale map because it cannot be displayed on their tiny, low-resolution screen, or when a blind person requires Braille. User preferences are less crucial responses to constraints for the individual user. It should be noted that some users have very specific needs that must be satisfied, whereas other users may be satisfied by any from a range of preferences.
The more information is mapped and rendered discoverable, not only by subject but also by accessibility criteria, the more easily and frequently inaccessible information for the individual user can be replaced or augmented by information that is accessible to them. This, in turn, means less damage when an individual author or publisher of information fails to make their information accessible. This is important because, as is shown (see Chapters 2, 5), making resources universally accessible is burdensome, unlikely to happen, and does not guarantee that the information presented will, in fact, be accessible to a particular individual user. It also means that distributed resources need to be managed so they can be augmented or reformed by components that are not originally a part of them or not intended to be associated with them. This can be done with suitable metadata.
Widespread mapping of information depends upon the interoperability of individual mappings or, in another dimension, the potential for combining distributed information maps in a single search source. The ancient technique of creating atlases from a collection of maps is exemplary in this sense (Ashdowne et al, 2000). Being able to relate a location on one map to the same location on another map is achieved easily when latitude and longitude are represented in a common way, or when the name of a location is either represented in a common way, such as in a shared language, or able to be related via a thesaurus or the equivalent.
Atlases would not be useful if every map were developed according to different forms of representation; the standardisation of representations enables the accumulation of maps to form the universal atlas. In the same way, the widespread mapping of accessible resources on the Web is achieved by the use of a common form of representation so that searches can be performed across collections of resources. Interoperability is typically said to function at three levels: structure, syntax and semantics (Weibel, 1997). Nevile & Mason (2005) argue that it does not operate at all unless there is also system-wide adoption (see Chapter 12).
The AccessForAll team (the AfA team) worked to exploit the use of metadata in the discovery and construction of digital information in a way that could increase Web accessibility on a worldwide scale. The outcome is a set of specifications (now forthcoming as standards) that can be used to enable the production of an atlas of accessible versions of information so that individual users everywhere can find something that will serve their purposes in a way that is independent of their choice of device, location, language and representational form. There are several ways in which this work needs to be followed by other work: to enable a similar selection of user interface components (see FLUID) and perhaps certification of organisations and systems that provide the new service, or at least those that enable it by providing useful metadata (see Chapter 12).
The AfA work takes advantage of the growing number of situations in which metadata is the management tool for digital objects and services, and for people's needs and preferences with respect to them, so that suitable resources can be discovered by users wherever the resources are well described. AfA philosophy includes, in addition, that resources should be able to be decomposed and re-formed, using metadata, to make them accessible to users with varying devices, locations, languages, and representation needs and preferences. Chapter 11 expands on some significant, if not widespread, adoption of this method. AfA metadata can be used immediately to manage resources within a shared, closed environment such as the original one established at the University of Toronto, where the AccessForAll approach originated. There is, however, greater potential for it, such as its use in a distributed environment. Exactly how to do this is proving a challenge, but the problem is closely aligned with the problems being considered by the W3C working group developing POWDER (W3C POWDER, 2008) and hopefully will soon be overcome.
Nevile has worked on many AccessForAll and other accessibility projects as the metadata researcher.
In the research, the basic computer science task of classification in first normal form (IBM, 2005), that is, classification in a functionally unambiguous way, is abstracted into the domain of accessibility according to theoretical principles developed in the last decade by the metadata communities. Implementers and developers work to classify objects unambiguously when building databases and thesauri. The field of metadata, concerned with how to express such classifications and make them interoperable, evolved from the librarian's discipline of cataloguing, inheriting many principles but explicitly rejecting or adapting others, and adding some new ones. The role of technology, and hence the syntax and structure of the classifications, is significant in metadata work, whereas semantics were the focus of the earlier library work.
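The notion of first normal form, that is, storing only atomic, unambiguous values rather than repeating groups, can be shown with a short sketch. The records below are invented purely for illustration.

```python
# A record that violates first normal form: one field holds a repeating
# group of values, so classification and querying on it are ambiguous.
# (The record and its values are hypothetical.)
non_1nf = {"resource": "map-01", "formats": "PDF; HTML; Braille"}

# The same information in first normal form: one atomic value per row,
# so each (resource, format) pair is classified unambiguously.
first_nf = [
    {"resource": non_1nf["resource"], "format": f.strip()}
    for f in non_1nf["formats"].split(";")
]

print(first_nf[0])  # {'resource': 'map-01', 'format': 'PDF'}
```

The abstraction the research makes is analogous: accessibility characteristics of resources are recorded as unambiguous, atomic statements so that they can be matched and combined reliably.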
Metadata research is looking for a means of fixing semantics within a framework of vocabularies that are not aligned, using technology that is evolving, and looking for appropriate means for declaring the semantics in interoperable ways. Such research is being performed in a number of leading universities around the world (Metadata Research Center, University of North Carolina (MRC UNC); Metadata Research Project, University of California (Berkeley); Cornell University Library; etc.).
At the Metadata Research Center, School of Information and Library Science, University of North Carolina at Chapel Hill, a number of projects for developing metadata for specific domains have been funded and undertaken as research [MRC UNC]. A typical example is provided by the KEE-MP project:
The goal of the Knowledge Exploration Environment for Motion Pictures (KEE-MP) project is to design and develop a prototype web system that will enable aggregation, integration, and exploration of diverse forms of discourse for film.
The main research components of the project are:
• Identification and categorization of descriptive information produced by the film discourse community.
• Development of processes and principles for working with high-level content descriptions (e.g., of form, genre, theme, style) in metadata frameworks and thesauri (or ontologies).
• Prototyping of a system for user testing and experimentation. (MRC UNC, 2008)
Such research does not depend upon standard research techniques (see Chapter 2), but nor is it development in the usual sense. While the direct output may be a prototype product, the research is about metadata. Some of what is learned is inevitably what is not supported by metadata as it is used, and how effective the evolving principles are, and what could improve them. It also touches upon the effectiveness of the evolving principles of technical accessibility development and ways to improve it. The work of these projects is demanding and necessarily involves a number of people.
Metadata research projects, as shown above, often involve a multi-disciplinary team including both developers and researchers. In as much as the research requires the use of new technologies, and these need to be built and tested, developers are often essential to the work. In addition, there is usually a need for subject experts, who can contribute not just bare information but advice on the structure of the knowledge of the domain and how it is used. Finally, it is usually important to have someone who is able to cross the disciplines, to understand how they interact in the circumstances.
The Assistive Technology Resource Center [ATRC] at the University of Toronto has a proud record of research and development. In the field of accessibility, they have significant achievements and, specifically, were leaders in the use of database technologies to adapt resources to users’ individual needs, with their product ‘The Inclusive Learning Exchange’ [TILE].
While there is a close connection between database management of resources and metadata, they are not the same. Database developers and researchers work on such aspects as the speed with which the data can be manipulated by an application, the amount of data, and so on. Metadata specialists are customers for this work; their concern is more the semantic value of descriptions of the resources, so that people, as well as machines, can use the descriptions. Database specialists think in terms of the needs of the computational systems; metadata experts think about the substance of the resources and of the discipline, and thus its ontological principles. Metadata specialists do not specialise in the discipline so much as in how to manage its resources, and often learn this by working in a number of different contexts, thus abstracting metadata principles that they can bring to bear in new situations. It is this final activity that forms the research being reported.
TILE is a database application in which certain ‘fields’ or what programmers think of as tokens, prompt certain responses from a computational system. Metadata is the result of an abstraction of such a process. Metadata is to do with the underlying model for such databases – how should the database be constructed to group resources, what triggers should it respond to, what inputs does it need, and so on. In this context, it can be helpful to think of the abstract work as developing a metadata schema such as the abstract model for AccessForAll metadata (Chapter 7).
In the AccessForAll interdisciplinary metadata team, there have been seven major players: Jutta Treviranus, Anastasia Cheetham and David Weinberg, in particular, from the Assistive Technology Resource Center [ATRC] at the University of Toronto, Canada (University of Toronto, Canada); Madeleine Rothberg from WGBH National Center for Accessible Media in Boston, USA (WGBH/NCAM); Liddy Nevile from La Trobe University, Australia; and Andy Heath from the University of Sheffield (now at the Open University) and Martyn Cooper from the Open University, United Kingdom (Open University, UK).
All in the team have been involved in accessibility work for a number of years, but from different perspectives. Nevile is clearly the metadata researcher in the team, while Cheetham and Weinberg are responsible for the development of the prototype TILE, Heath is an expert in programming, and Rothberg, Treviranus and Cooper are responsible for major accessibility projects in education. Treviranus is the outstanding accessibility expert: Director of the ATRC and its numerous projects, a leader in the field of disability work involving technology, and Chair of the W3C Authoring Tools Accessibility Guidelines Working Group [ATAG WG], among other things.
The AccessForAll work has been undertaken in a number of contexts (as explained below) but always with the core team leading the efforts. The group emerged from the work being undertaken by the IMS Global Learning Consortium [IMS GLC] when they adopted the ATRC model, and has moved to other contexts, as explained below. Nevile, the Chair of the DCMI Accessibility Working Group (now the Accessibility Community), is responsible for AccessForAll finding its way into the DCMI world of metadata and has been responsible for developing the Accessibility Application Profile (or Module) for DCMI and all the schema and documentation required for an international technical standard (DCMI Access).
Nevile is the primary DC 'metadata' person in the AccessForAll team (Appendix 1 and 2) but also working to enable the AccessForAll principles to operate across the various metadata 'platforms'. The aim of her research is to find a way to enable the AccessForAll approach in a variety of formats with the greatest possible potential for interoperability between those formats. As always, those leading in this work are involved in many overlapping and, at times, conflicting communities (Figure ???). Consequently, this work has not been undertaken in a purely 'scientific' way - it has to satisfy practical considerations as well.
This thesis argues that metadata is an enabling technology that should be central and integral to any shift to an AccessForAll approach to accessibility. It is at the core of the research in as much as it provides essential infrastructure for such a new approach to accessibility. From the beginning, Nevile's involvement has been based on questions that have arisen in the Dublin Core Metadata Initiative context, motivated by earlier development work, and focused on what role metadata can play in accessibility.
This research establishes that careful metadata work is essential if metadata is to provide the infrastructure for AccessForAll practices that can make the Web more accessible. With respect to metadata, the research challenges the structure, the syntax and the semantics of the AccessForAll work. It includes:
• analysis of the problems of interoperability between two different types of metadata (IEEE Learning Object Metadata [LOM] and Dublin Core);
• the creation of a suitable alternative structure for AccessForAll metadata, based on the Dublin Core Abstract Model [DCAM], that is interoperable with other Dublin Core metadata and thus also the Semantic Web (a significant emergent technology in the Web 2.0 environment);
• alternative semantics for AccessForAll metadata that are compatible (without loss) with the original LOM-based model but conformant with the DC structure as defined in the DCAM; and
• a syntactic representation that is interoperable with LOM, DC and Semantic Web expressions of AccessForAll metadata.
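The structural claim in the list above, that a DCAM-conformant description can travel into Semantic Web systems, can be sketched by expressing one accessibility statement as property-value triples. The property URI and resource identifiers below are invented for illustration; they are not the terms of the published AccessForAll or Dublin Core schemas.

```python
# One hypothetical accessibility statement expressed two ways.
# All names and URIs here are illustrative, not drawn from real schemas.

# A flat, record-style description, as a LOM-like structure might hold it.
record = {
    "resource": "http://example.org/video-7",
    "hasAdaptation": "http://example.org/video-7/captions",
}

# The same statement as a (subject, property, value) triple, the shape
# the Dublin Core Abstract Model shares with RDF.
triples = [(
    record["resource"],
    "http://example.org/terms/hasAdaptation",
    record["hasAdaptation"],
)]

# Serialised as N-Triples, the simplest RDF syntax, it can be loaded
# into any Semantic Web store.
ntriples = "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in triples)
print(ntriples)
```

The design point is that nothing about the statement changes between the two forms; only the structure does, which is why a DCAM-based structure buys interoperability without loss of semantics.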
With respect to accessibility, based on estimates of the current accessibility of the Web, the research challenges the theoretical foundations of previous work. It adopts a new base to support inclusion and the UN Convention for the Rights of People with Disabilities (UN, 2006). It includes:
• a review and interpretation of available statistics to determine the need for improved accessibility of the Web;
• a review and interpretation of available standards and specifications currently in use;
• evaluation and interpretation of reports of the effectiveness of current accessibility efforts;
• articulation of a new theoretical model for metadata use to increase the accessibility of the Web;
• face-to-face workshops in Europe, Asia and Australia to seek consensus for proposals; and
• AccessForAll metadata standards development.
It considers the following questions among others:
1. What constitutes accessibility? in what context? for whom?
2. How effective are current accessibility strategies?
3. What is wrong with current strategies?
4. What is necessary to enable better access?
5. What other strategies could be used?
6. What are the major components of best accessibility practices?
7. How are such practices enabled?
The thesis provides the only comprehensive documentation of the principles and products that support the AccessForAll metadata approach to accessibility. The model, the standards, and other products of the research are published elsewhere and, increasingly, implemented and further researched.
In this Preamble, the scene has been set for the substantive work that follows. The development of a new way of working on the problem of accessibility has been shown to be not just a response to the lack of real success with previous methods, but also a response to the changing technological context in which this work takes place. Metadata research has matured in the last ten years, and metadata development has led to its adoption for resource management within digital systems. In addition, the earlier understanding of disability according to a deficit model has been replaced by a social, inclusive model that avoids distinctions between people with physical or other medical disabilities and the general public, assuming that everyone, at times, is disabled either by their circumstances or by temporary or permanent human impairment.
In the next chapter, many of the components considered in the research are defined in greater detail and the research is described.
The first decade of international effort to make the Web accessible has not achieved its goal, and a different approach is needed. In order to be more inclusive, the Web needs published resources to be described so that they can be tailored to the needs and preferences of individual users; resources need to be continuously improvable according to a wide range of needs and preferences, and thus there is a need for management of resources, which can be achieved with metadata. The specification of metadata to achieve such a goal is complex, given the requirements, which themselves had not previously been determined.
This thesis asserts that the low level of accessibility of the Web justifies a new approach to accessibility, and that the most appropriate is a comprehensive process approach that brings together a number of strategies for use according to the circumstances and context. In particular, it should be possible to continuously improve the accessibility of resources, and for this to be done by third parties, independently of the original author; this, in turn, depends on the availability of metadata to manage the process. (Metadata's role in management is not new but is perhaps not as well known as its use for discovery.) The research responds to the need (documented in Chapter 4 of the thesis) for an effective new approach to accessibility.
The general aim of work in the accessibility field is to help make the information era inclusive. Inclusive is a term used in this context to refer to a particular approach to people with disabilities and to the disabilities themselves. People with accessibility needs are not homogeneous, and many of them do not have long-term disabilities: what they need now may not be of interest to them in different circumstances or at other times. Accessibility is also a special term in this context, designating a relationship between a human (or machine) and an information resource. Both terms are defined in Chapter 3.
The research started with a close examination and analysis of current accessibility processes and tools, and moved on to develop a new approach intended to complement previous accessibility work, addressing the problem of how to develop metadata to support a more process-oriented approach to accessibility. Co-editing of international specifications and standards for accessibility metadata, known as AccessForAll [AfA] metadata, was undertaken simultaneously with the research to determine metadata recommendations for a Dublin Core Metadata Application Profile module (see Chapter 7).
Actively promoting accessibility is taken to mean being inclusive. The term inclusive is used for operations and organisations that follow appropriate practices to promote accessibility of the Web and accommodate many improvements in a constantly widening range of contexts. The new process work suggests a 'quality of practice' approach to the process of content and service production that will support incremental but continuous improvement in the accessibility of the Web and thus inclusion in the digital information era.
In this section, there are brief introductions to the major terms and concepts of the research. These are further refined in later chapters.
The UN Convention on the Rights of Persons with Disabilities and its Optional Protocol (UN, 2006) calls for equity in access to information and communications. In this thesis, the information and communications of concern are those that are digital and electronic and the terms are used both as nouns and as verbs: people need access to hardware and software to create, store, and deliver digital files as well as to the intellectual content of the files themselves. Collectively, these constitute what is called 'the Web' in this thesis; the Web of digital information and communications.
In particular, the Web is not simply the pages encoded in HyperText Markup Language [HTML 4.01]. While such a page might provide the 'glue', it is clear that the information and communication enabled by it is most likely to be made available in a wide range of forms. A typical and simple example of an HTML-encoded page was provided by a temporary 'homepage' of a newly elected Australian Prime Minister (Figure ???).
Figure ???: Australian Prime Minister's Website (Pandora, 2007)
On this very small Web page (Figure ???), there are six links that put the user in contact with other 'pages', as we might call them. To contact the Prime Minister, one does not send email, which would be easily accessible, but instead receives another page with a form on it. The form shields the Prime Minister from direct email from the user but introduces an accessibility issue: many forms within standard HTML pages are not what is here defined as accessible.
Links are provided on the Prime Minister's page to three sources of information that explain privacy, copyright and the site itself. One link directs the user to the archive of the previous Web site. This is a substantial source of information and when contact is made, it reveals files in a range of formats. This archive is provided by the National Library of Australia and before choosing a version, the user can see metadata associated with the archive describing the formats of files involved. (Interestingly, the note does not necessarily display properly even on a very standard user agent such as Safari, a standard browser for Apple Macintosh computers (see Figure ???).)
Figure ??? The metadata as viewed in a Safari browser (Pandora, 2007).
Only when the 'correct' font size is used is the full note legible:
Figure ??? The metadata as viewed in a Safari browser (Pandora, 2007).
Figure ??? shows the range of applications necessary to access just what is on the first page of the archive but then, each page of that archive is likely to point to yet more resources. All of these resources, the hardware and software needed to use them, form what in the research is defined to be 'the Web'.
In 2004, Tim O'Reilly described the Web using a new term that has since become a model for describing recent versions of evolved products that in fact have no formal versions. Later he said of it (2005):
The concept of "Web 2.0" began with a conference brainstorming session between O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Could it be that the dot-com collapse marked some kind of turning point for the web, such that a call to action such as "Web 2.0" might make sense? We agreed that it did, and so the Web 2.0 Conference was born.
In the year and a half since, the term "Web 2.0" has clearly taken hold, with more than 9.5 million citations in Google. But there's still a huge amount of disagreement about just what Web 2.0 means, with some people decrying it as a meaningless marketing buzzword, and others accepting it as the new conventional wisdom.
A significant aspect of the Web as envisioned is that it is a platform:
Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. You can visualize Web 2.0 as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.
O'Reilly (2005) offered the following diagram (Figure ???) from a brain-storming session to help others visualize this 'new' Web.
Figure ??? shows many interactive 'spaces' (grey) as part of the Web. This means that users do not just receive information and communications but they initiate or respond to them as well. For this, they need a range of competencies (orange). The Web, as it is now, has a number of features (pink).
Web 2.0, the current Web, is vastly different from the world of paper publications, most notably in its interactivity and the fluid nature of the information it carries.
In November 2005, Dan Saffer described Web 2.0 in terms of the experiences associated with it and with an image:
On the conservative side of this experience continuum, we'll still have familiar Websites, like blogs, homepages, marketing and communication sites, the big content providers (in one form or another), search engines, and so on. These are structured experiences. Their form and content are determined mainly by their designers and creators.
In the middle of the continuum, we'll have rich, desktop-like applications that have migrated to the Web, thanks to Ajax, Flex, Flash, Laszlo, and whatever else comes along. These will be traditional desktop applications like word processing, spreadsheets, and email. But the more interesting will be Internet-native, those built to take advantage of the strengths of the Internet: collective actions and data (e.g. Amazon's "People who bought this also bought..."), social communities across wide distances (Yahoo Groups), aggregation of many sources of data, near real-time access to timely data (stock quotes, news), and easy publishing of content from one to many (blogs, Flickr).
The experiences here in the middle of the continuum are semi-structured in that they specify the types of experiences you can have with them, but users supply the content (such as it is).
On the far side of the continuum are the unstructured experiences: a glut of new services, many of which won't have Websites to visit at all. We'll see loose collections of application parts, content, and data that don't exist anywhere really, yet can be located, used, reused, fixed, and remixed.
The content you'll search for and use might reside on an individual computer, a mobile phone, even traffic sensors along a remote highway. But you probably won't need to know where these loose bits live; your tools will know.
These unstructured bits won't be useful without the tools and the knowledge necessary to make sense of them, sort of how an HTML file doesn't make much sense without a browser to view it. Indeed, many of them will be inaccessible or hidden if you don't have the right tools (Saffer, 2005).
As Saffer says,
There's been a lot of talk about the technology of Web 2.0, but only a little about the impact these technologies will have on user experience. Everyone wants to tell you what Web 2.0 means, but how will it feel? What will it be like for users? (Saffer, 2005)
This idea of versions of the Web is clearly abhorrent to some, as its continuous evolution is considered by them to be one of its virtues (Borland, 2007), but the significance of the changes in the Web is not denied. These comments are made at a time when there is already talk of Web 3.0. If Web 3.0 represents anything, according to Borland:
Web 1.0 refers to the first generation of the commercial Internet, dominated by content that was only marginally interactive. Web 2.0, characterized by features such as tagging, social networks, and user- created taxonomies of content called "folksonomies," added a new layer of interactivity, represented by sites such as Flickr, Del.icio.us, and Wikipedia.
Analysts, researchers, and pundits have subsequently argued over what, if anything, would deserve to be called "3.0." Definitions have ranged from widespread mobile broadband access to a Web full of on-demand software services. A much-read article in the New York Times last November clarified the debate, however. In it, John Markoff defined Web 3.0 as a set of technologies that offer efficient new ways to help computers organize and draw conclusions from online data, and that definition has since dominated discussions at conferences, on blogs, and among entrepreneurs (Borland, 2007, page 1).
The research necessarily involved recognising and predicting changes at least to prepare for them. As William Gibson wrote, “the future is here, it is just unevenly distributed.” (wikipedia William Gibson, 2006). It is no longer sufficient to work on an outdated model that involves merely electronic publication of traditional materials; the materials have changed and will continue to do so. As the research shows, the evolution of the Web offers both new challenges and new opportunities. Howell (2008) warns:
We need to keep our eyes on web trends and recognise trends that actually help to improve disabled people’s experience of the web. Arguably, personalisation is a trend that actually helps as its focus is on sites’ best possible performance for every user and is a great deal more effective than the ‘one site for all’ approach.
As part of the process, there is a substantial shift from a focus solely on the production of information and communications to a wider focus inclusive of post-production activities and consumer contributions.
The United Nations Convention (2006) refers to many kinds of digital resources and their location and use without using the word 'Web' despite the recent revolution caused by the development of what is known as the Web, or World Wide Web. Standards Australia, for example, in its 2008 draft metadata standard has included metadata for objects that are not digital, in the following:
This document is an entry point for those wishing to implement the AGLS Metadata Standard for the online description of online or offline resources (section 1.1 of the draft "AGLS Metadata Standard Part 2: Usage Guide", not yet public).
The aim of the AGLS Metadata Standard is to ensure that users searching the Australian information space on the World Wide Web (including intranets and extranets) have fast and efficient access to descriptions of many different resources. AGLS metadata should enable users to locate the resources they need without having to possess a detailed knowledge of where the resources are located or who is responsible for them (section 1.5 of the draft "AGLS Metadata Standard Part 2: Usage Guide", not yet public).
Computer operating systems are now being designed with the user interface driven by metadata in ways that extend the familiar interface of the 'Web' to personal computers and the files within them (for example, Sugar on the XO computer (Derndorfer, 2008), and the Google desktop (http://desktop.google.com/)).
For this research, the 'Web' is defined as all digitally addressable resources without necessarily distinguishing between the applications or formats in which they are developed, stored, delivered or used by others. This, according to the man credited with the invention of the World Wide Web, Sir Tim Berners-Lee, is 'the Web' and as it develops it achieves more diversified characteristics:
The Semantic Web is an evolving extension of the World Wide Web in which web content can be expressed not only in natural language, but also in a format that can be read and used by software agents, thus permitting them to find, share and integrate information more easily. It derives from W3C director Sir Tim Berners-Lee's vision of the Web as a universal medium for data, information, and knowledge exchange (wikipedia Semantic Web, 2007).
The essential feature of the Web, then, is that the resource can be addressed; that is, it has a Uniform Resource Identifier [URI] that allows it to be found. (Such an identifier need not be persistent, that is, consistent even for dynamically created content; the resource need not be maintained in any particular state, might be constantly changing, and may not even have continuity.)
Brown and Gerrard (2006) argue that broadband access to the Web makes it easier to create accessible content. This is in line with other expectations for the future; as the technology improves, the opportunities should improve.
It is unlikely that more than 3% of the resources on the Web are accessible (as defined in the research, see Chapter 3). In other words, even if a user has appropriate equipment and has received a resource, the chance that they will be able to perceive the intellectual content of that resource is extremely low if they have special needs. It may be that they have a medically recognised disability such as being blind and the resource is only available as an image of a poem on a tombstone. If so, they may have no idea what it is or what it says. They may have a constructed disability, as a result of driving a car in a foreign country and using their phone to get location instructions in a language they understand. The social model of disability (Oliver, 1990b) rejects definitions of disability as characteristics of humans and instead adopts the perspective of the human as being disabled by the circumstances, natural or constructed, physical or otherwise (Chapter 3).
(In this thesis, disabilities of a medical nature are described as permanent disabilities. It is recognised that such disabilities naturally increase with age and usually are experienced by all who live long enough.)
The research concerns the accessibility of the Web. Accessibility in this context is a match between a person's perceptual abilities and information or communication technologies and artefacts. Many people have special needs to enable this match, not the least people with long-term disabilities. As the UN Convention says:
Persons with disabilities include those who have long-term physical, mental, intellectual or sensory impairments which in interaction with various barriers may hinder their full and effective participation in society on an equal basis with others (UN, 2006, Article 1).
The use of the term accessibility in this research distinguishes between access as considered in this thesis and access as used to describe possession of facilities for connection to the Web or having the necessary legal rights to use resources. This kind of access is, of course, crucial to any user who is dependent on the Web. Such access is often dependent upon socio-economic factors, levels of education, regional and wider factors relating to communications availability and quality, or any of a number of similar factors. It may also depend upon such things as intellectual property, state or private censorship, and so on. The AccessForAll approach advocated in this thesis is only concerned with access as it relates to users who, for whatever reason, cannot access Web resources, including services, when they are in possession of facilities that should be adequate; in other words, when they cannot access what they already have access to.
This is not an exhaustive definition and will be further elaborated (Chapter 3), but it is significant that accessibility in this thesis explicitly includes people with what are medically defined as disabilities.
The most common definitions of metadata in the library communities from where it emerged in the context of the Web, suggest it is an agreed format for the creation of machine-readable descriptions of digital resources that can be used for, among other things, the discovery of those resources (wikipedia metadata, 2008; University of Queensland Library, 2008; UK Office for Library and Information Networking, 2008; W3C Technology and Society Domain, 2008, etc.).
The term metadata is usually applied to such descriptions when they are, in themselves, to be treated as resources whereas other descriptions of the same resources might be field names in a database containing those resources. Meta-metadata is metadata about a metadata resource. This 'first-class object' characteristic of metadata also supports the interoperability of the descriptions, and it is this quality that is often thought of as distinctive of metadata.
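As a purely illustrative sketch of this definition (the element names and values below are invented for the example and are not drawn from any of the standards discussed in this thesis), a machine-readable description of a resource can be modelled as a small set of labelled statements that software can use for discovery:

```python
# Illustrative sketch only: resources described by simple machine-readable
# records, loosely in the style of library metadata. The element names and
# values are hypothetical examples, not normative metadata.

records = [
    {
        "identifier": "http://example.org/poem-image.png",
        "title": "Photograph of a poem on a tombstone",
        "format": "image/png",
        "language": "en",
    },
    {
        "identifier": "http://example.org/poem-text.html",
        "title": "Transcription of a poem on a tombstone",
        "format": "text/html",
        "language": "en",
    },
]

def discover(records, **criteria):
    """Return the records whose elements match every given criterion."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

# A software agent can discover the textual version without knowing where
# it is stored or who is responsible for it.
print(discover(records, format="text/html")[0]["identifier"])
```

The point of the sketch is only that the description is separate from the resource itself and is interoperable: any agent that understands the agreed element names can perform the discovery.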
Metadata, as defined in the research, is used to
provide a reference for implementations that require interoperability of the
products of the implementation. As such, metadata is an abstraction from what
is used by implementers.
There is a detailed discussion of metadata in Chapter 6. This discussion will explain more about the multiple uses of metadata and how it comes to be central in the present work.
AccessForAll [AfA] accessibility depends on metadata for descriptions of the accessibility characteristics of resources. These descriptions enable content providers to create and offer resources that can be adapted to individual needs and preferences. Thus they can minimise the mismatch between people who, especially but not exclusively, have special needs due to medically recognised disabilities, and resources published within what is here defined as the Web. This is explained further in Chapter 7.
AccessForAll accessibility is based on the use of metadata. By adding AfA metadata to resources and resource components, new services are enabled that support just-in-time, as well as just-in-case, accessibility. Metadata describing individual people's accessibility needs and preferences is matched with descriptions of the accessibility characteristics of resources until the individual user is able to access a resource that satisfies their needs and preferences (Figure ???).
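The matching process just described can be sketched in outline. The property names below ("cannot_use", "prefers", "modes") are hypothetical simplifications invented for this illustration; the published AfA specifications define their own vocabularies:

```python
# Illustrative sketch of AccessForAll-style matching: a description of a
# user's needs and preferences is compared with descriptions of the
# accessibility characteristics of resources. Property names are
# hypothetical, not those of the published AfA specifications.

user_profile = {
    # The user cannot use visual content and prefers text alternatives.
    "cannot_use": {"visual"},
    "prefers": {"text"},
}

resources = [
    {"id": "lecture-video", "modes": {"visual", "auditory"}},
    {"id": "lecture-transcript", "modes": {"text"}},
]

def matches(profile, resource):
    """A resource matches if it offers none of the modes the user cannot
    use, and offers at least one mode the user prefers."""
    usable = not (resource["modes"] & profile["cannot_use"])
    preferred = bool(resource["modes"] & profile["prefers"])
    return usable and preferred

accessible = [r["id"] for r in resources if matches(user_profile, r)]
print(accessible)  # the transcript matches; the video does not
```

The sketch shows why both sides of the match need metadata: without a description of the resource's modes, and of the user's needs and preferences, no such comparison is possible.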
It cannot be proven that the Web will become more accessible, but this research shows that there are resources available that could be immediately transformed to take advantage of AfA metadata, making the Web more accessible. AfA includes new specifications for
the classification of resources. Initially, these were for education only. They
were further developed as an ISO/IEC JTC1 multi-part education standard
(ISO/IEC 24751:2008). The continuing aim is to see their application broaden to resources
across all domains, including being adopted by other standards bodies (already
adopted in Australia as part of the Australia-wide AGLS Metadata Standard [AGLS]). The semantics
of the metadata specifications are not the focus of the research, but rather the form in which the metadata is defined, so that it can play the role considered a critical component of AfA accessibility. The specifications
are to be
published for free and will be available from the various standards bodies' Web
sites (IMS GLC; ISO/IEC JTC1; AGLS).
The research provides evidence that there is already
metadata available that could be transformed to match the new standards, and
that other suitable data could be generated automatically from existing data
(see Chapter 7). Currently, such data is not
available for use by those with accessibility needs, so
individual users cannot discover, in anticipation of the receipt
of resources, if they will be able to access them. If the required
descriptive data were available, individual users would be able to use it and
thus the Web would be more accessible for individual users, as explained later.
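The kind of transformation suggested here can be illustrated with a sketch. The field names on both sides are hypothetical, invented for the example rather than taken from the published standards; the point is only that accessibility statements can be derived mechanically from descriptive data that already exists:

```python
# Hypothetical sketch: deriving AfA-style accessibility statements from
# existing descriptive metadata. Field names on both sides are invented
# for illustration; the published standards define their own terms.

def derive_access_metadata(record):
    """Map existing format/caption fields to simple access-mode statements."""
    modes = set()
    fmt = record.get("format", "")
    if fmt.startswith("video/"):
        modes.update({"visual", "auditory"})
    elif fmt.startswith("audio/"):
        modes.add("auditory")
    elif fmt.startswith(("text/", "application/pdf")):
        modes.add("textual")

    adaptations = set()
    if record.get("captions") == "yes":
        adaptations.add("captions")  # a text alternative for the audio

    return {"access_modes": sorted(modes),
            "adaptations": sorted(adaptations)}

# An existing record already states the file format and whether captions
# exist; the accessibility statements follow from it automatically.
legacy = {"format": "video/mp4", "captions": "yes"}
print(derive_access_metadata(legacy))
```

A batch process of this kind, run over existing catalogues, is the sense in which suitable data "could be generated automatically from existing data".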
In many disciplines, those working within the narrow discourse of a particular discipline, or part of it, tend to use words which can have other or broader meanings in other contexts. The definitions offered above are not exhaustive but are necessary for the reading that follows. The research was confined to a small section of information systems work and does face the problem that some of the terms, such as accessibility, are easily understood in a general sense by everyone, and so their particular use in this
context can be confusing. What follows further defines the limited scope
of the research and the context and methods used in the research.
A significant problem for people with special needs, and their content providers, is that there are often intellectual property issues associated with the materials, especially when they are transformed for access by users. In many jurisdictions, there are special rights for people with certain recognised disabilities and these can involve complicated intellectual property rules. This is completely beyond the scope of this research, which focuses on how such materials can be made discoverable and interoperable, seen as a precursor to any work that needs to take place to allow such interaction.
The research includes a detailed analysis of the common
approach to accessibility based on the World Wide Web Consortium's Web
Accessibility Initiative's specifications and the techniques employed to
achieve it. This is
contained in a Web site that provides a set of
practices and their explanation (Appendix ???).
This Web site was developed and used for some time as the basis for a
university's accessibility strategy. (The work has not been maintained and so
is not in continued use.) The research is not about how to make Web resources
accessible, so the Web
site is not the research, but the research builds on this detailed work.
The research is not about the techniques used to make
digital information accessible to people with disabilities
although unless there is information that is
accessible to them, the metadata framework
proposed cannot help. It is about how, when information is
identified as of interest, a user with particular needs at the time and in the
context in which they find themselves, can have the intellectual content of the resource that was originally discovered presented in a way that is matched to
their needs and preferences. If necessary, this includes having components of
the original intellectual content replaced or supplemented by the same
information in other modes, or having it transformed. The research contributes the potential for this to be done, not the components themselves.
A challenging attribute of digital information is the increasing mobility of people, who expect information to be available and expect to be able to use all sorts of devices to access it. As they travel from one country to another, users expect to continue to gain information in their language of choice, even though, for instance, it is about places where different languages are spoken. Sometimes users expect to get location-based, or location-specific, information.
The context in which a user is operating is fundamental to
the type and range of needs and preferences they will have (Kelly et al, 2008).
The research embraces what is known as Web 2.0. In this Web world, an evolutionary progression from the original Web, which was created by the technique of referenced resources and distributed publishing, users interact
with resources and services that are made available by others, often with no
knowledge of their source. (Discussion of the new environment and the way it
operates is within scope as it provides the context for the work (see Preamble and Chapter 6).)
It should be noted that the W3C WAI currently considers some Web content out of scope, at this stage, in terms of some of its accessibility work (W3C WCAG 2.0).
If a resource contains some components that are
inaccessible to a user,
it will need to have those components
replaced or supplemented for the user. It is outside the scope of the current
research to deal with the problem of discovery of those components or the
services that might be used for the transformation. The problem is considered
not to be peculiar to accessibility so
much as a problem related to modified on-going searching when resources
that are discovered prove inadequate. This has been researched recently at the
University of Tsukuba, in Japan (Morozumi et al, 2006). It is an on-going topic
closely related to new work on what are called GLIMIRS
(sic) from the Online Computer Library Center [OCLC]
in the US (Weibel,
2008a). Understanding of the problem is, however, in scope.
Out of scope also is any requirement to engage with the adoption of AfA by industry. Adoption by standards bodies depends upon processes that engage the industry in formal ways, so adoption by such a body is considered to include adoption by industry. Implementation is, on the other hand, not always ensured by the existence of standards. At the time of writing, before publication of the standards, there are already significant implementations of the AfA standards. These are discussed in Chapter 14.
The research reported is not a traditional empirical study
of a static situation. Rather, it is research to determine how, in a fast
changing world, metadata
about resources can be used to ensure
maximum accessibility for individual users of those resources.
John Seely Brown (1998) differentiated between what he
thought of as two main kinds of research, sustaining and pioneering.
Sustaining research, he thought, is aimed at analysis and evaluation of
existing conditions. The problem for researchers in fast-changing fields is
that often, by the time sustaining research is reported, the circumstances have
changed. As the original circumstances cannot be reproduced, the research
would need to be interpreted into a
different context to be useful and in some fields, this cannot happen. In the
case of pioneering research, the work is successfully implemented or, perhaps
more often, forgotten. This is the sort of work in which many technology
focused researchers are engaged: they follow what are traditional research
practices to a point, but their work is evaluated differently and they need to
engage with and accept different types of evaluation.
The 'best' technology is not always the one that becomes the accepted technology, as in the case of video standards and the Beta standard (Weiner, 2005, p. 311). In the current technology environment, acceptance is crucial because it is the mass acceptance and frequency and extent of its use, what is often called 'the network effect', that makes the technology what it is. In the case of metadata, without mass acceptance there is usually nothing of particular value that can be claimed.
Pioneering research is what Seely Brown (op. cit.) argued was the main and most valued output from Xerox PARC in the 1980s. Staff at that institution
developed some of the most significant ideas that have been incorporated into
computers over the last 25 years. They were researchers but also inventors -
people who had to know the needs, the problems, the context, et cetera
and then invent something that might be useful. Their work has been tested not
by an evaluation of their research methodologies, or how closely they followed
the methodology they adopted, but rather by how useful and effective their work
has become. (“PARC is celebrated for such innovations as laser printing,
distributed computing and Ethernet, the graphical user interface (GUI),
object-oriented programming, and ubiquitous computing.” (Palo Alto Research Center, Inc., 2008).)
Within the field of pharmacology, development is combined with empirical research before the work is released onto the market or used with humans. In the case of Web 2.0 developments, the release of a product, idea or specification is watched for adoption, and it is only in hindsight that its 'effectiveness' is determined, and then by popularity. This is not always satisfactory. Experience has shown us that substantial reliance can be misplaced on technologies that do not solve the problem for which they were designed. The Apple Newton, WebTV and the IBM PC Junior are just a few of the technologies that have been launched with great fanfare but failed within a short time. Many of the features of these technologies are still around, but in other forms (ComputerWorld, 2008).
It is essential that the intrinsic value of the technology is accounted for. In the field of accessibility, almost all effort has focused on a single set of guidelines (WCAG) with what this thesis argues are less than satisfactory results. It is important to evaluate AccessForAll accessibility to ensure this does not happen again. For whatever interest there is in the idea of AccessForAll metadata, there is still a need for research to discover how to create a suitable awareness of the context for the work and the value of the work. This means developing a strong understanding of the theoretical and practical issues related to accessibility, including practical considerations to do with professional development of resource developers and system developers, and the administrative processes and people that usually determine what these developers will be funded to do. It also involves the reading and writing of critical reviews of other work. In particular, while there is little doubt of the potential benefit to users with disabilities, it is not at all clear how to work with the prototyped AfA ideas to make them mainstream in the wider world, both in the world outside the educational domain and in the world of mixed metadata schemas (correct use of this word would be schemata but common usage accepts schemas).
Although implementations of the AfA profiles have begun to appear (at the time of writing, four major implementers have developed systems using the profiles: University of Toronto (TILE), Industry Canada (WebForAll), Internet Scout (CWIS), and Angel (LMS), among others (DC-Accessibility Wiki, 2008)), there is more work to be done in developing ways to enable distributed discovery of suitable accessible resource components for users and to build the architecture that can take maximum advantage of the AfA approach.
Both of these developments are outside the scope of this present work but they,
too, are explained by, and therefore in some ways enabled by, the research.
Many use the expression 'research and development' to
differentiate between research and development. Development work is so
characterised without regard for the processes involved in achieving it. One is
reminded of Mitchel Resnick's story of Alexandra whose project to build a
marble-machine was rejected as not scientific until the process was carefully
examined when she was awarded a first prize for the best science project
(Resnick, 2006). In some fields, research is not just about writing a report,
it is also about repeatedly designing, creating, testing, evaluating and
reviewing something in an iterative process, often towards an unknown result
but according to a set of goals. These are also important processes for design.
Such processes benefit from rigorous scrutiny that can be attracted in a
variety of ways, including by being undertaken in a context where there are
strong stakeholders with highly motivated interests to protect. There is no
getting away from the value of well-researched and documented work.
In "Design Experiments: Theoretical and Methodological Challenges in Creating Complex Interventions in Classroom Settings", Ann Brown (1992) describes the problem of undertaking research in a dynamic classroom. She was, at the time, already an accomplished experimental researcher, but argued that it was not possible or appropriate to undertake experimental research in a changing classroom. The problems referred to were related to the complexity of research closely associated with development in a dynamic context. In the case of AccessForAll, the context was not fixed and so did not afford research of the kind associated with numerical analysis but rather, called for clear documentation of the problems to be solved, the context, the possibilities and the implications of the proposed solution. This thesis provides that documentation.
Brown argued that she needed to develop methodologies that would analyse what was happening in the changing classrooms and provide useful information for others wishing to replicate the model and results in other classrooms. This thesis provides analysis of relevant aspects of accessibility work to provide useful information for those wishing to use metadata to increase the accessibility of the Web.
Problem-solving and learning are similar activities. Educationalists aim to improve learning environments; accessibility specialists aim to improve accessibility problem-solving environments. They want better practices and better understanding and evaluation of those practices.
In "Design-based research: An emerging paradigm for educational inquiry", the position is summarised as follows:
The authors argue that design-based research, which blends empirical educational research with the theory-driven design of learning environments, is an important methodology for understanding how, when, and why educational innovations work in practice. Design-based researchers’ innovations embody specific theoretical claims about teaching and learning, and help us understand the relationships among educational theory, designed artifact, and practice. Design is central in efforts to foster learning, create usable knowledge, and advance theories of learning and teaching in complex settings. Design-based research also may contribute to the growth of human capacity for subsequent educational reform (DBRC and D.-B. R. Collective, 2003).
The complexity of the accessibility work is not unlike that of education; everything is constantly changing, including the technology, the skills and practices of developers, the jurisdictional contexts in which accessibility is involved and the laws governing it within those contexts, and the political environment in which people are making decisions about how to implement, or otherwise, accessibility. There are also a number of players, all of whom have different agendas, priorities and constraints, despite their declaration of a shared interest in increasing the accessibility of the Web for all.
The Australian Research Council funded the Clever Recordkeeping Metadata (CRKM) Linkage Project in 2003-2005 (ARC, 2007). It was a major metadata research project for Australia and so the research methods used are of interest. The project reported:
The research methodology was designed within an action-research framework where a close alignment between the practical development of tools and active reflection on each stage of their development iteratively informs both the further development of the tools and also identifies challenges and issues to be addressed in an ongoing fashion.
The research involved the initial development of a proof-of-concept prototype to demonstrate that metadata re-use is possible and illustrate the business utility of recordkeeping metadata. From that initial proof of concept, the project intended to develop a more robust demonstrator available for wider dissemination.
First Iteration: Development of Proof of Concept Prototype
The first iteration of the CRKM Project investigated a simple solution for demonstrating the automated capture and re-use of recordkeeping metadata. The expectation was that this initial investigation would expose the complex network of issues to be addressed in order to achieve metadata interoperability and automate the movement of recordkeeping metadata between systems, along with enabling researchers to develop skills and understandings of the existing technologies that support metadata translation and transformation. (CRKM, 2007)
At the end of the three year project, the key findings were:
There are significant barriers to interoperability within our current metadata standards and in our current records management and archival control frameworks.
Translation beyond a web services environment into a fully realised service oriented architecture is outstripping implementation reality, with current technology constraints illustrating that truly service oriented implementations are really things of the future
Our community has an opportunity to evolve towards that future via web services used initially to wrap legacy systems to achieve data interoperability as we progressively move towards decomposing and re-engineering recordkeeping functionality as services and creating appropriate business process and rules infrastructure. (CRKM, 2007)
The project demonstrated the use of an established computer systems development methodology in the metadata context. The closely coupled iterative review/development process underlies the current research reported in this thesis. In this case, the multiple reviews by multiple stakeholders significantly influenced the development of the final metadata, as shown in Figure ???
In the introduction to a paper describing the research methodology for the CRKM project, Evans & Rouche (2006) claim:
Archival systems, like other information systems, are undergoing radical change as the impacts of digital and network technologies on recordkeeping and archival processes are grappled with. Accustomed to dealing with mature systems and technology, the field of archival science is at a point where archival research needs to encompass methods that investigate how emerging theories are operationalized through systems development. Systems development research methods allow exploration of the interface between theory and practice, including their interplay with technology. Not only do such methods serve to advance archival practice, but they also serve to validate the theoretical concepts under investigation, challenge their assumptions, expose their limitations, and produce refinements in the light of new insights arising from the study of their implementation. (p. 315)
'Accessibility systems' might well be substituted for 'archival systems' in this text. Engagement with the development of AccessForAll metadata enabled accessibility research that "needs to encompass methods that investigate how emerging theories are operationalized through systems development". In the case of the CRKM project, the researchers were interested in discovering how schemas played a role in the archival context so they would know how to build a metadata registry that uses such schemas (p. 316). In the present accessibility research, the focus is on how, and which, metadata schemas can improve the accessibility potential of the Web, so that metadata schemas can be developed for use in content discovery, matching and delivery systems.
The purpose of such a registry of metadata schemas is to act as a data collection and analysis tool to support comparative studies of the descriptive schemas. (p. 317)
The CRKM registry was to provide content for use in a harmonisation of schemas to inform a standardisation process. In the words of the researchers:
With no existing blueprint for such a registry, the first task of the research team was to conceptualise the system and establish its requirements. In so doing several key questions are raised including: – What are the salient features of metadata schemas that need to be documented for the purposes outlined above? How are these realised as elements? ... In order to address these questions, the research team looked at utilizing systems development as an exploratory research approach. (p. 317)
Systems development as a research method is well-established in information systems literature. Evans & Rouche cite Nunamaker et al (1990-91) as arguing for "inclusion of systems development as a pivotal part of ‘a multimethodological approach to IS research’" (Evans & Rouche, 2006, p. 318) but say it is not well-established in archive research literature. They go on to say:
Burstein elaborates on the process for such a systems development research approach, suggesting three major iterative stages – concept building, system building and system evaluation [Figure ???]. The concept building phase involves the identification and development of the research questions and investigation of the system requirements and functionality, incorporating relevant ideas and approaches from other disciplines. The system building phase involves constructing the system using systems development techniques and the systems evaluation phase involves analysing and assessing the system.
Burstein is further quoted as saying (Evans & Rouche, 2006, p. 320):
The major difference between this approach as a research method and conventional systems development is that the major emphasis is on the concept that the system has to illustrate, and not so much on the quality of the system implementation. At the beginning of such a project the implementation has to be justified in terms of whether there is another existing system capable of demonstrating the features of the concept under investigation. The evaluation stage of the systems development method is also different from the testing of a commercial system. It has to be done from the perspective of the research questions set up during the concept-building stage, and the functionality of the system is very much a secondary issue. (Burstein, 2002, p. 153)
Evans and Rouche argue that, when using systems development as a research method, researchers must at all times be motivated by the research questions, whereas conventional systems development is usually motivated by practicality (Evans & Rouche, 2006, p. 320). They also remark that in commercial development the requirements are specified in advance, whereas in research the problem is to determine appropriate requirements; for the former, clear specification and implementation can work, but for the latter an incremental and iterative approach is necessary, especially where what is sought is an understanding of the issues with respect to the specifications. This describes one of the major goals of the present research. Here, the focus is not the enumeration of the best elements to describe the needs and preferences of users and the accessibility requirements of resources; that is the work of the developers doing the development part of the work. The research is about what makes for the best way to prescribe those elements, their structure, syntax and semantics, and the schemas that will be most useful and interoperable across a number of types of metadata systems. The metadata research is grounded in the accessibility context but must share principles with metadata in general.
Finally, Evans & Rouche (Evans & Rouche, 2006, p. 334) argue that the interplay between theory and practice is crucial to archival systems research. Similarly, it is crucial to accessibility metadata systems research.
The research has resulted in the first significant description of AccessForAll metadata and how it can be used. It has justified the development of such an approach to accessibility, and shown how the actual metadata schema could be developed. This has involved a wide range of research activities, as shown below.
To investigate how effective accessibility efforts were in a typical organisation, the author was involved in the auditing of a major university Web site (Nevile, 2004). The process was significantly simplified by the combination of several available tools and, in the process, produced descriptions (metadata) of the accessibility characteristics of the 48,084 pages reviewed.
To facilitate the use of the WCAG specifications by content developers, the Accessible Content Development Web site (Appendix 8) was built. The aim was to provide a fast look-up site accessible by topic and focus, rather than the lengthy, integrated approaches required at the time by anyone using the W3C Web Content Accessibility Guidelines [WCAG]. As a result of doing this work, the author gained a more structured view of the difficulties being tackled by developers in practice. This complemented previous work in which the author had, on many occasions, been consulted with respect to building accessible sites or to ascertain the accessibility or otherwise of sites, and many times commissioned to repair the sites.
To develop an automatic conversion of MathML-encoded mathematics into Braille, a major Braille project was undertaken. The first task was to understand the problems, then to see what partial solutions were available, and then to develop a prototype service to convert mathematics texts to Braille. In this case, there was no need to survey anyone to determine the size of the problem or the satisfaction available from existing solutions: the picture was patently bleak for the few Braille users interested in mathematics and, in particular, the text was required by a Melbourne University student for his study program. Ultimately, the research was grounded in computer science, where it is common to have a prototype as the outcome with an accompanying document that explains the theoretical aspects and implications of the prototype. In this case, the prototype encoding work was undertaken by a student who was supervised by the author, who herself managed or personally did much of the other work in the project (Munro, 2007; Nevile et al, 2005).
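The kind of conversion the project prototyped can be illustrated with a toy sketch. Assuming a hypothetical `linearise` function and only a tiny subset of MathML elements, it shows the sort of intermediate linearisation step a MathML-to-Braille pipeline might begin with; it is not the project's actual code, and a real Braille transcription (for example, into Nemeth code) involves far more than this.

```python
# Illustrative sketch only: linearising a small MathML subset as a first
# step toward Braille transcription. The handling of each element is a
# simplified assumption, not the project's actual implementation.
import xml.etree.ElementTree as ET

def linearise(node):
    """Flatten a MathML element tree into a linear token string."""
    tag = node.tag.split('}')[-1]          # strip any namespace prefix
    children = [linearise(c) for c in node]
    if tag in ('mi', 'mn', 'mo'):          # identifiers, numbers, operators
        return (node.text or '').strip()
    if tag == 'mfrac':                     # fraction: numerator over denominator
        return f"({children[0]})/({children[1]})"
    if tag == 'msup':                      # superscript: base and exponent
        return f"{children[0]}^({children[1]})"
    return ''.join(children)               # mrow, math, etc.: concatenate

mathml = ("<math><mfrac><mrow><mi>x</mi><mo>+</mo><mn>1</mn></mrow>"
          "<mn>2</mn></mfrac></math>")
print(linearise(ET.fromstring(mathml)))   # (x+1)/(2)
```

A production service would then map the linear form onto Braille cells; the value of the intermediate form is that it makes the structure of the mathematics explicit and testable before any Braille-specific encoding is attempted.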
To gain an insight into formal empirical research documenting specific problems with the W3C WAI Web Content Accessibility Guidelines, the author studied the UK Disability Rights Commission's review of 1000 Web sites. This was the first major review of Web sites that evaluated the WCAG's effectiveness. Many of the findings have more recently been substantiated in other work (see Chapter 4) and they have been anecdotally reported by the author and others for many years.
To discover the likelihood that, if AccessForAll metadata were developed, it would be possible to apply it automatically to resources of interest for their accessibility, using their existing metadata descriptions, the author considered the available material documenting the existence of such resources and their metadata by gathering information about metadata from major suppliers of accessible resources (Chapter 7).
To establish how resource descriptions might work in a distributed environment, the author studied the Functional Requirements for Bibliographic Records (FRBR) and associated work and tried to determine how resources should be described so that other resources with the same content, but represented in different modes or with other variations, might be discoverable. This work was undertaken with Japanese colleagues who, at the time, were trying to learn from FRBR and the OpenURI work. The author is more inclined to think that a new approach to resource description to be known as GLIMIRs may, in fact, prove more useful in this context (see above).
To ensure that AfA would be interoperable with other metadata systems, and DC metadata in particular, the author studied the emerging DC abstract data model. To do this, the author worked with data models expressed in formal notation (Unified Modeling Language [UML]). In doing this, the author discovered the ambiguity of the DC Abstract Model as first expressed and became involved in work to clarify the model (Pulis & Nevile, 2006). Eventually, the DC model was expressed in UML and the model proposed for the DC implementation of AccessForAll metadata was matched to that model. There is a strong feeling emerging that unless data models are matched, the metadata cannot losslessly interoperate.
There are many major players in the field of accessibility. These stakeholders had to be won over: there is really no other way that technologies such as metadata schemas proliferate on the Web, and if they do not proliferate, the technologies are not useful, as explained above. 'Winning over' bodies that use technologies often means providing a strong technical solution as well as compelling reasons (in implementers' eyes) for adoption of those technologies. In the case of accessibility metadata, the technical difficulties are substantial. As explained in the section on metadata, there are many kinds of metadata and yet they share a goal of interoperability, essential if adoption is to scale and essential if it is to work across institutions, sectors, or otherwise beyond the confines of a single environment. The problems related to interoperability are considered later (Chapter 11) but they are not the only ones: metadata is frequently required to work well both locally and globally, meaning that it has to be useful in the local context and work across contexts. This tension between local and global is at the heart of the technical challenges for adoption when diverse stakeholders are involved, and so too are the political challenges, as the following example shows.
At the time the AfA work was being undertaken, there was a major review of accessibility being undertaken by the ISO/IEC JTC1. A Special Working Group [SWG-A] was formed to do three things: to determine the needs of people with disabilities with respect to digital resources, to audit existing laws, regulations and standards that affect these, and to identify the gaps. Concerned that this was merely a commercially-motivated use of a standards community to minimise the need for accessibility standards compliance, the author asked to know the affiliations of the people represented in the Working Group. Most were employed by one of the few major international technology companies although they were present as national body representatives. There were very few representatives of disability or other interest groups. In fact, when the author asked if the people present could identify their affiliations, it took an hour of debate before this was allowed.
Not only was the author uneasy about the disproportionate commercial representation, but it emerged that the agenda was constantly under pressure to do more than the stated research work, and to try to influence the development of new regulations that were seen to threaten the major technology companies. Although heavy resistance to the 'commercial' interests was provided by a minority, and in the end the work was limited in scope to the original proposals, it showed just how much effort is available from commercial interests when they want to protect their established practices. Given that many of the companies represented in the SWG-A are also participants in consortia such as W3C, IMS GLC, etc, it is indicative of what was potentially constraining the AfA work of the author and others. More recently, this has again been demonstrated by the effort of Microsoft to have their proprietary document standard OOXML approved as an international standard. In that case, there have been legal cases about the problems of representation and decision-making (McDougall,
In design experiments, or research using design experiments (often simply called design research), it is a feature of the process that the goals and aspirations of those involved are considered and catered for. In fact, as the work evolves, the goals of the various parties are to be revisited as the work changes according to the circumstances and as the research enlightens the design of the experiments.
The current research is not about researchers testing a hypothesis on a randomly selected group of subjects; the stakeholders and the designers interact regularly and advantage is taken of this to guide the design. The practical aspects are constantly revised according to newly emerging theoretical principles and the new practical aspects lead to revised theories. The goals do not change but the ways of achieving them are not held immutable.
In the work reported here, considerable interaction occurred between the author as researcher and the author, colleagues and other stakeholders engaged in the design process. This was especially exemplified in the various voting procedures that moved the work through the relevant standards bodies. These formal processes take place at regular intervals and demand scrutiny of the work by a range of people and then votes of support for continued work. Challenges to the work, when they occur, generally promote the work in ways that lead to revisiting of decisions and revision of the theoretical position being relied upon at the time. Such challenges also provide insight for the researcher into the problems and solutions being considered.
In particular, the author sat between two major metadata camps. Those involved in the IMS GLC have experience mainly with relational databases and LOM metadata, which is very 'hierarchical'. On the other hand, the DC community is biased towards 'flat' metadata, which inevitably influenced the author, given her role as Chair of the DC Accessibility Working Group (later the DC Accessibility Community) and membership of the Advisory Board of DCMI. This was, indeed, an uncomfortable position because the educational community that was driving the work initially is deeply engaged in the LOM approach, even though many others working in education are not. The IMS GLC's interests were towards an outcome that would suit them but, as the author saw it, would risk even further fragmentation of the total set of resources available to education, and so not serve the real goal, which was to increase the accessibility of the Web (of resources).
The author was wrestling all that time with the problem of interoperability of the LOM and the DC educational community's metadata, a difficulty that has been present since the first educational application profile was proposed nearly a decade ago. The interoperability is necessary given that, for example, government resources might be used in educational settings, and if their metadata could not be cross-walked (see Chapter 6) from one scheme to the other, the descriptions of the government resources would not be useful to educationalists, which seems ridiculous. One way to ease the problem would have been to develop a standard that exactly suited both metadata systems, and that might have been possible, but there was insufficient technical expertise available to achieve that goal, so the best that could be done in the circumstances became the modified goal. This was achieved and it is possible to cross-walk between the various metadata standards so that it does not matter so much which is used, because the data of the metadata descriptions can be shared without loss.
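The cross-walking described above can be sketched very simply. The element names and the mapping below are illustrative assumptions, not the actual LOM or DC term sets; the point is that an explicit mapping lets the data of a hierarchical record be carried into a flat record without loss, for the mapped elements at least.

```python
# A minimal sketch of a metadata cross-walk between a nested, LOM-like
# record and flat, DC-like key/value pairs. The element names and the
# mapping are hypothetical, chosen only to illustrate the principle.
LOM_TO_DC = {
    ("general", "title"): "dc:title",
    ("general", "language"): "dc:language",
    ("lifecycle", "contribute"): "dc:creator",
}

def crosswalk(lom_record):
    """Map a hierarchical record onto a flat DC-style dictionary."""
    dc = {}
    for (category, element), dc_term in LOM_TO_DC.items():
        value = lom_record.get(category, {}).get(element)
        if value is not None:              # carry across only populated elements
            dc[dc_term] = value
    return dc

lom = {"general": {"title": "Intro to Braille", "language": "en"},
       "lifecycle": {"contribute": "L. Nevile"}}
print(crosswalk(lom))
```

In practice the difficulty is not the mechanics but agreeing the mapping itself: elements whose semantics do not line up exactly between the two schemes cannot be carried across without loss, which is why matched data models matter.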
The design work reported has been progressively adopted and has now become part of the Australian standard for all public resources on the Web (as the AGLS Metadata Standard) and, by virtue of being an ISO standard, an educational standard for Australia. This can be taken as an indication of its having proven satisfactory to a considerable number of people. Only actual implementation and use will prove it to have been truly successful, because it will need to proliferate to the extent that it becomes useful.
Implementations are discussed further in Chapter
The research establishes that, given an understanding of the field of accessibility, the context for it, and frustration with the lack of success and the results of recent research, it is evident that for all the good intentions, there has been poor implementation of accessibility techniques. Universal design is not a sufficient strategy even if it is applied, and a narrow focus on specifications for authoring of Web content alone will not produce the desired results. This means there is a need for a new approach. By using a range of existing and emerging standards and introducing metadata to describe user needs and preferences, it is possible to match them to resource characteristics, also described in metadata. By adding this possibility, without compromising interoperability of metadata or stakeholder interests, and by attracting implementation, individual access needs and preferences should be able to be satisfied. This AccessForAll approach places emphasis on the accessibility of the Web for individuals, and draws upon many standards working together. It does not depend upon universally accessible resources but includes them.
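The matching at the heart of the AccessForAll approach can be sketched roughly as follows. The field names ("needs", "modalities", "adaptations") are hypothetical stand-ins, not the actual AfA element set; the sketch shows only the principle of comparing a user's declared needs with a resource's described characteristics.

```python
# A hedged sketch of AccessForAll-style matching: a user's declared needs
# (metadata about the user) are compared with a resource's description
# (metadata about the resource). Field names are illustrative assumptions.
def matches(user, resource):
    """A resource suits a user if every stated need is met either by the
    resource's native modalities or by an available adaptation."""
    available = (set(resource.get("modalities", []))
                 | set(resource.get("adaptations", [])))
    return all(need in available for need in user.get("needs", []))

user = {"needs": ["captions"]}
video = {"modalities": ["visual", "auditory"], "adaptations": ["captions"]}
print(matches(user, video))   # True
```

Note that the resource itself need not be universally accessible: the video above suits this user only because a captioned adaptation is described in its metadata, which is precisely the shift of emphasis from universally accessible resources to accessible experiences for individuals.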
The following chapters report on:
• the last ten years' efforts to define disability and thus accessibility (Chapter 3);
• the development of universal accessibility techniques for making the content of the emerging Web accessible (Chapter 4);
• what success or otherwise has resulted from the universal accessibility approach and responses to this state (Chapter 5);
• an understanding and definition of metadata and its potential role in a networked, digital world (Chapter 6);
• early investigations and efforts in the use and likely availability of metadata to support accessibility of resources (Chapter 7);
• a new use of metadata to describe individual users' needs and preferences with respect to resources in ways that are useful to people with special needs for effective perception of their content (Chapter 8);
• a more traditional use of metadata to describe resources in ways that are useful to people with special needs for effective perception of the intellectual content of the resource (Chapter 9);
• an extended use of metadata to provide a means of managing digital resource components for matching of compositions of those resources in ways that are effective for individual users (Chapter 10);
• the definition of effective interoperability and the need for technical interoperability of AccessForAll metadata if its implementation is to become a reality (Chapter 11); and then
• the conclusion (Chapter 12).
In this chapter, the term 'accessibility' is considered in some detail. Most people assume they know what it means because they assume they can imagine what it is like to have such disabilities as blindness, and they seem to assume also that the functional problems for people with disabilities are easily defined and even, perhaps, soluble. This chapter shows that these assumptions are not helpful. It also asserts that it is inappropriate to think of disabilities as fixed qualities of people rather than changing characteristics of contexts and activities.
"As the web becomes a major communications medium, the data on it must be made more accessible."
They were, as so many now realise, talking about why they were working on search engines, and most particularly Google, the now famous entrance to the Web. Their sentiments were similar to those of many others, especially those working to ensure that everyone gets access to information on the Web. Back in 1999, Lawrence and Giles were quoting figures such as 800 million publicly indexed pages, 6 terabytes of text data and 3 million servers, while estimating that this amounted to only about 16% of what was actually available. They were lamenting that much of what people possibly wanted to find was not indexed by anyone.
Tim Berners-Lee is reputed to have said some time ago that, "The power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect" [WAI]. This now famous quotation represented Berners-Lee reacting to the disturbing news that even when a resource was available on the Web and could be found, and was able to be delivered to a particular user, it was not necessarily in a form that made access to the content of the resource available to that user. His reference was to the sensory access that was in some cases limited by a user's permanent disabilities.
Accessibility and disability as terms have been in tension for a long time. The term "accessibility" is ambiguous as access can be of many types, including that dependent upon economic conditions, intellectual property rights, telecommunications services, etc. Disability communities are often quick to promote a particular view or perspective of the effects of the disability on users that avoid labeling people and instead concentrate on positive aspects of their lives. Members of the deaf community in Melbourne, Australia, often ask to be referred to as that, members of a deaf community, and they assert that their communication in sign language is itself not appropriately described by reference to a medical condition so much as the use of a non-English language. They expect to be treated in the same way as other non-English speaking people (comments based on private communications and personal experience). In different countries, the names for disabilities or even their presence are changed for political reasons. At times, it seems, it is good to avoid labeling people by their disabilities and better to promote people's abilities to avoid referring to their disabilities. At other times, however, the disabilities are referred to in order to draw attention to them: the context and goals are often determinants of which definition is used.
Somehow, it seems that it is the community of people with vision disabilities who are the most active and effective in gaining funding for work on accessibility of the Web. They have the advantage that most people in the community think they know a little bit about vision impairment; they think they can imagine what it would be like to have such a problem even if their image of what it is like does not in any way match reality. Everyone is also very likely to suffer from such an impairment themselves, especially, as is often said, if they live long enough!
Vision impairment is not a quality of a person; it is a condition of a person in a context: everyone has a vision impairment sometimes. When driving a car and trying to find a new location, we find drivers looking at printed maps and looking at the road, or worse, looking at the road and getting directions on a mobile phone screen. This is what is called an 'eyes busy' situation: where driving should completely occupy the eyes, they are instead being shared across tasks. Effectively, the person has a vision impairment either with respect to watching the road, or to reading the map or using the phone. Additionally, of course, the person also has a control impairment: their hands cannot perform well at two tasks at the same time. Disabilities are relative to contexts and activities.
Other disabilities are even harder to understand and recognise. Cognitive impairment is not usually expected to be associated with people who are performing well in the community, but universities are beginning to find that a number of their otherwise capable students have dyslexia, for example (Morgan, 2000). Statistics vary enormously as dyslexia is not clearly defined and thus not easily quantified, but it may be reasonable to assume every classroom has at least one dyslexic student. Being clever and being dyslexic can easily go together (Lloyd, 2007), it seems, as the disability is relative to reading. In the case of learning Japanese, a character-based language as opposed to languages written with the Roman alphabet, there is some chance that dyslexia will not be relevant or is even a positive ability (Asthana, 2006).
A difficulty associated with working to support people with disabilities is, then, discovering who needs assistance and what assistance they need. In part this is due to our reluctance, for good reason, to label people by naming a disability. It is partly due to the reluctance of some people to identify as having a disability, to self-identify, and partly due to the ignorance of many people that they do, in fact, have a disability in a given situation. In everyday life, for most things, people overcome whatever small inadequacies they have and are unaware of the process. Many people simply do what they can do well and don't bother with what they can't do so well. In most situations this works. The problems arise when people are required to do something they can't do well.
The workplace is one context in which tolerance for disabilities is critical: people are often required to perform tasks that compromise their abilities. Accessing civil rights is another: being able to vote, being able to access government services, being able to buy tickets to the Olympics Games, are just a few activities to which all citizens have an equal right of participation.
To repeat and misuse what Lawrence and Giles (1999) said, "As the web becomes a major communications medium, the data on it must be made more accessible." It becomes more important to ensure that not only those who have naturally taken to the new technologies, but everyone, can access what they need using the new medium.
So disability and accessibility have a context: the question becomes, in the presence of this major communications medium, when are people denied access? The answer is found in a variety of ways, as shown below, and it is as variable as the ways of describing disabilities or abilities, as will be seen. It is not simplified by an approach that aims to use medical pathology terms but it is easier to work with when it is described in terms of required functionality.
In addition, we would like the report to use the World Health Organization’s (WHO) new standard definition of disability, The International Classification of Functioning, Disability and Health (ICF - May 2001), and avoid the use of expressions such as “handicapped, demented and less skilled people”. This new definition emphasizes that disabled people’s functioning in a specific domain is an interactive process between their health condition, activities and the contextual factors. It is a radical departure from the earlier versions, which focused substantially on the medical and individual aspects of disability. The social model of disability suggests that disability is not entirely an attribute of an individual, but rather a complex social and environmental construct largely imposed by societal attitudes and the limitations of the human-made environment. Consequently, any process of amelioration and inclusion requires social action, and it is the collective responsibility of society at large to make the environmental and attitudinal changes necessary for their full participation in all areas of life (WS-SMH, 2003, p.10).
As stated in Wikipedia (2008):
The social model of disability is often based on a distinction between the terms 'impairment' and 'disability.' Impairment is used to refer to the actual attributes (or loss of attributes) of a person, whether in terms of limbs, organs or mechanisms, including psychological. Disability is used to refer to the restrictions caused by society when it does not give equivalent attention and accommodation to the needs of individuals with impairments.
The 'social model of disability' was first proposed by Michael Oliver in 1983 and explained further in later work, particularly in 1990:
There are two fundamental points that need to be made about the individual model of disability. Firstly, it locates the 'problem' of disability within the individual and secondly it sees the causes of this problem as stemming from the functional limitations or psychological losses which are assumed to arise from disability. These two points are underpinned by what might be called 'the personal tragedy theory of disability' which suggests that disability is some terrible chance event which occurs at random to unfortunate individuals. Of course, nothing could be further from the truth.
The genesis, development and articulation of the social model of disability by disabled people themselves is a rejection of all of these fundamentals (Oliver 1990a). It does not deny the problem of disability but locates it squarely within society. It is not individual limitations, of whatever kind, which are the cause of the problem but society's failure to provide appropriate services and adequately ensure the needs of disabled people are fully taken into account in its social organisation. Further, the consequences of this failure does not simply and randomly fall on individuals but systematically upon disabled people as a group who experience this failure as discrimination institutionalised throughout society. (Oliver, 1990b)
Oliver argues that by using a social model, one can understand disability as something that can be dealt with at a social level, that it is not merely about non-normal characteristics of individuals but rather the ways in which society functions. Social efforts including adjustments can, according to Oliver's theory, remove a disability.
Liz Crow (1995), on the other hand, argues that exclusively treating disability as a social problem restricts the ability of the person with disabilities and that some awareness of impairment in the medical sense is essential. She says that it is not that impairment does not exist but rather how it is interpreted that is important. She argues for awareness on the part of the person with disabilities and for them to consider their medical needs, which is not to accept other people's interpretations that imply inferiority.
A major use of the social model is the development of inclusive practices. Inclusion aims to consider all people equally and to avoid creating disabilities by providing for the needs of all people. To achieve this in education, for example, communities have worked on attitudes and practices that value everyone equally and so provide for everyone equally. Inclusion UK is a consortium of four organisations supporting inclusion in education. On their Web sites [Inclusion UK], they describe their work. The Centre for Studies on Inclusive Education provides details about their publications [CSIE]. On their Web site they show the process approach they advocate for inclusion in education:
The Index takes the social model of disability as its starting point, builds on good practice, and then organises the Index work around a cycle of activities which guide schools through the stages of preparation, investigation, development and review. (Booth & Ainscow, 2000)
The Index was widely distributed in the UK education system and has been updated. Of interest in this thesis is the approach taken by the authors. Inclusion is not treated as a fixed quality of a location but rather as a set of practices. The authors advocate a continuous cycle of development and review.
In this thesis, the social model of disability is adopted with the aim of making the Web an inclusive information space, with continual improvement based on an on-going cycle of development and review of Web resources.
In the mid-1980s, long before the Web became popular, there were communities of people with disabilities (in the medical sense) who had already been using computers for some time. The technology of the time allowed for text activities online and these presented few problems for assistive technologies: people with hearing disabilities were often assisted by their use of teletype machines and other print technologies that allowed them to communicate using what were otherwise typically sound, or image and sound, technologies, such as telephones and televisions; people with sight disabilities were able to use computers to enlarge script, to have it read aloud to them, and to produce Braille. (The author worked with such technologies for three years, from 1983-6, for Barson Research.)
In 1993, Mosaic was released as the first major mouse-driven graphical interface to the Web.
The Web's popularity exploded with Mosaic, which made it accessible to the novice user. This explosion started in earnest during 1993, a year in which Web traffic over the Internet increased by 300,000% (Wikipedia Computing Timeline, 2008).
A significant aspect of the Web that made it instantly attractive to the masses was its ability to include mouse-controlled images, sounds, and multi-media in general.
Unfortunately, the very technology that has opened the door to unprecedented access also harbors the possibility for the very opposite. Just as there are enabling and disabling conditions in the physical environment, so are there conditions associated with digital technology that result in the inclusion or exclusion of certain people. Technology that is not universally designed, without consideration for the full spectrum of human (dis)abilities, is likely to contain access barriers for people with print disabilities (Schmetzke, 2001).
Often it is the same technology that increased the inclusion of people with disabilities prior to the Web's emergence, and it can still be used in ways that enable people: Miles Hilton-Barber, a blind man, recently co-piloted a small plane half-way around the world (The Age, 2007).
A typical and simple illustration of what became a problem for some people is the use of the 'mouse' and cursor. People with sight disabilities rarely use mice because they do not get the instant feedback that endears mice to people with good sight. The cursor, driven by the mouse, floats over the structure of a screen representation, freed from the serial flow of text. This freedom is just what makes the mouse-cursor combination useful to people using sight and useless to people who cannot see it: they cannot tell where it is, and there is no coordinate system that can convey to people who cannot see what is offered to the person who watches the cursor. Recently, the Fluid project has developed a drag-and-drop user interface component intended to make such direct-manipulation interactions accessible, for example from the keyboard.
Mouse-cursor users move the screen content under the cursor, by using other screen controls, and move the cursor over the screen. Many people who cannot see the cursor move about the screen by using keystrokes for such functions as 'line-up', 'line-down', 'move-left' and 'move-right'. On arrival at a 'screen' destination, they need information about where they are and what it is that they are capable of acting on. In the case of the Web, this is often a hyperlink. In the beginning it was almost always, and is still too often, labeled "click here". For the sighted person, the surrounding context, including the layout of the objects on the screen, will probably tell them what is likely to happen if they do, indeed, click there. The person who cannot see the screen, and so does not know the context for the hyperlink, is often confused as to what will happen if they click. Worse, experience soon teaches them that if they click, they may well be taken somewhere they did not anticipate and it might be very hard to find their way back. This is because the easy recovery technique of simply pressing the back button does not work when the link in fact spawned a new window, and that window does not have a 'previous' window. If they do find the previous location, how do they know which hyperlink to click when there are several choices all similarly labeled? How do they know if a link relates to the writing before it or the writing after it, without access to the screen to see how the links are arranged graphically? Perhaps there is a pull-down menu of links.
It is not hard to understand that without labeling of links, without certainty about the relationship between a link and a description of the choices available, the user does not have satisfactory access to the content that will be available if the link is activated.
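The contrast can be sketched in HTML. This is an illustrative fragment only; the file name and link text are invented for the example:

```html
<!-- Inaccessible: read out of context, the link text carries no
     information about where the link leads -->
<p>For the full conference program, <a href="program.html">click here</a>.</p>

<!-- Accessible: the link text itself names the destination, so a screen
     reader announcing links in isolation still conveys their purpose -->
<p>See the <a href="program.html">full conference program</a>.</p>
```

In the second form, a user who navigates a page by tabbing from link to link, or who asks their screen reader for a list of all links, hears "full conference program" rather than an undifferentiated series of "click here"s.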
Further, if what is offered as a resource is a video without captions and a transcript, a deaf person is unlikely to have satisfactory access to its content. Without a tactile version or long description of a diagram, a blind person is not likely to have satisfactory access to chemical content they may need. Without access to the content in a language understood by the user, there will be no access. Without content that is free of sarcasm, irony and literary allusion, a person with dyslexia is unlikely to have adequate access.
For all these reasons, the Web Content Accessibility Guidelines authors have worked on the aspects of access which are important to people who find themselves having access difficulties with Web content. For many years now, the Web Content Accessibility Guidelines Working Group [WCAG WG] has been trying to find ways of alleviating these difficulties. Typically, the WCAG WG identifies something that can be done to help, describes the requirement for the user in the WCAG, and passes the priorities on to the developers of Web languages and specifications, within W3C and elsewhere, so that the required capabilities are incorporated into new languages and specifications for the Web. A typical example is provided by the development of Scalable Vector Graphics [SVG].
A detailed explanation of what accessibility means in practice and how it is achieved is available in a hyperlecture (Appendix 8).
In 1998, writing on the W3C WAI Interest Group mailing list, Harvey Bingham forwarded the following from Ephraim P. Glinert:
Folks: I would like to draw your attention to a new research focus on the topic of UNIVERSAL ACCESS jointly sponsored by the HCI and KCS programs within the Information and Intelligent Systems (IIS) Division of CISE.
The word "access" implies the ability to find, manipulate and use information in an efficient and comprehensive manner. A primary objective of the HCI/KCS research focus on universal access is to empower people with disabilities so that they are able to participate as first class citizens in the emerging information society. But more than that, the research focus will benefit the nation as a whole, by advancing computer technology so that all people can possess the skills needed to fully harness the power of computing to enrich and make their lives more productive within a tightly knit "national family" whose members communicate naturally and painlessly through the sharing of (multimodal) information (Bingham, 1998).
Bingham's message was focused on what should happen, not how, and it has taken until now to find technology that will enable that dream.
It has been noted that the research is advocating an inclusive Web. This means more than merely solving problems for those with medical conditions that lead to a lack of access to resources. Internationalisation, for example, is treated as an issue of accessibility alongside location dependence and independence.
The Australian Government, in 2008, established a Social Inclusion Board and has a Minister responsible for social inclusion (Stephens, 2008). The Minister, prior to election, said:
Let me be clear: our social inclusion initiatives will not be about welfare – they will be an investment strategy to join social policy to economic policy to the benefit of both. For this reason, our Social Inclusion Unit and Board will be made up of serious economic and social thinkers, not just welfare representatives. This won’t be a memorial to good intentions – it will be about action and hard-headed economics. (Gillard, 2007)
About 15% of Europeans report difficulties performing daily life activities due to some form of disability. With the demographic change towards an ageing population, this figure will significantly increase in the coming years. Older people are often confronted with multiple minor disabilities which can prevent them from enjoying the benefits that technology offers. As a result, people with disabilities are one of the largest groups at risk of exclusion within the Information Society in Europe.
It is estimated that only 10% of persons over 65 years of age use internet compared with 65% of people aged between 16-24. This restricts their possibilities of buying cheaper products, booking trips on line or having access to relevant information, including social and health services. Furthermore, accessibility barriers in products and devices prevents older people and people with disabilities from fully enjoying digital TV, using mobile phones and accessing remote services having a direct impact in the quality of their daily lives.
Moreover, the employment rate of people with disabilities is 20% lower than the average population. Accessible technologies can play a key role in improving this situation, making the difference for individuals with disabilities between being unemployed and enjoying full employment between being a tax payer or recipient of social benefits.
The recent United Nations convention on the rights of people with disabilities clearly states that accessibility is a matter of human rights. In the 21st century, it will be increasingly difficult to conceive of achieving rights of access to education, employment health care and equal opportunities without ensuring accessible technology (Reding, 2007).
In 2008, a new European Commission IST Specific Support Action project called WAI-AGE commenced with the goal of increasing accessibility of the Web for the elderly as well as for people with disabilities in European Union Member States [WAI-AGE].
In the Report of the CEN ISSS MMI-DC (W15) Workshop on Metadata for Accessibility, Nevile and Ford (2004) considered multilinguality, and all it encompasses, at the same time as other accessibility issues. The report notes:
The European Union's official languages have recently increased from eleven to twenty. The linguistic combinations will increase from one hundred and ten to two hundred and ten. ... many Europeans have difficulties when using the Internet (p. 4).
and, in more detail, with respect to multilingualism:
Languages have inherent qualities: many of these are linguistic but others are cultural. Obviously, metaphors based on regionally or culturally specific analogies do not necessarily translate into other languages. What is often not realised is that there are other qualities that affect language use: there are different ways of describing time, location, people's identities, and more. Conversations across language boundaries are endlessly surprising; the provision of multiple-language versions of content and translation of content are almost always problematic. But within languages there are also problems: levels of facility with complexity of languages and limitations of languages are two examples. Not everyone is capable of understanding the same form of representation in any given language, yet we know this is not just a matter of literacy learning; for some it is to do with how well they have learned to read and for others it is to do with constraints imposed on them by such disabilities as dyslexia and disnumeracy. Those dependent upon Braille, for example, can find that their language does not yet have ways of representing information which is easily represented in other languages. (p.7)
Further work on the problem of lack of access due to language barriers was reported by Morosumi, Nevile and Sugimoto (2007). The immediate problem related to the lack of access to English research literature available on the Web:
There are at least three major groups of readers with language-skill problems who want access to intellectually stimulating and specialist English texts:
• people with domain expertise who lack sufficient English reading skills to access the English literature in their field of interest;
• people with domain expertise who need translations of English literature; and
• people with dyslexia.
We consider the problem for second-language readers, translators (particularly automated ones) and people with dyslexia to be similar: In all cases it is important to have plain English without distracting or confusing metaphors, or complicated language constructions such as the subjunctive mood or passive voice.
So it is necessary to be aware that cultural and linguistic considerations can necessitate functional accessibility requirements for information users.
Location can be very relevant to accessibility: location dependent information is very useful but it might need to be supplied in a language that is not associated with the location, e.g. for travelers. In such a case, location independence can be very important. Just because one is in Greece does not mean that one is thinking of what is on in the local cinema; a parent might be interested in what film a child is proposing to see at the local cinema in their absence. Whereas most efforts to work with location currently involve finding ways to be sensitive to the location, it is necessary to also be sensitive to the user's needs irrespective of their location.
Location changes can also cause mismatch problems, when assistive technology settings, the behaviour of user agents, or other circumstances change in some way with the move.
Contexts often account for the special needs and preferences of users. If a user is in a noisy location, they will probably not be able to benefit from audio output whereas a user in a very quiet location may not be welcome to start using voice input. Content needs can also change because of device changes and these are at times associated with location changes. So sometimes context influences will be predictable according to the location and sometimes they will be temporary and personal, or independent of location.
The location changes might be small or large. When the changes are from one country to another, such as for a traveler moving from Italy to France, it is likely that the changes will involve language changes. When location changes are triggered by movement from one room in a house to another, it is quite likely the difference will be device changes and this may mean changes in means of control of the access device. ...
We can also imagine the same person moving from their personal laptop computer to the one in their family's office, expecting the office machine to change to their needs and preferences after it has accommodated other members of the family with different needs and preferences. We cannot imagine users wanting to set up their needs and preferences every time they make such location changes. In fact, there are many people who would not be capable of determining their own needs and preferences, and for these people, having the changes made automatically might be the most important benefit.
When the location is fixed in one sense, as is the case in a train, but varied in a global sense, because the train moves, relative and absolute location descriptions become necessary (Nevile & Ford, 2006).
... we need a way to be precise about the locations so that we can ease the burden of adapting the devices to the user. This in turn means being able to specify a particular location with precision and in three dimensions. It also means being able to describe dynamic locations, such as inside a moving car or train. These may be relative locations. It also means being able to associate the user's personal profile for that device with that user's profile of needs and preferences. There is a need then for flexible, interoperable, machine-readable descriptions of locations for those cases in which they are determinants of the suitability of user profiles.
There is therefore a requirement for both location-dependent and location-independent profiling. The aim in both cases is the same, stability for the user and thus a personal sense of location-independent accessibility, but one depends upon not being affected by a change in location and the other upon being affected by it. The location-independence is thus as viewed from the user's perspective (Nevile & Ford, 2006).
Sometimes, a person's lack of access is more of a temporal problem: if an activity is taking place in one part of the world but welcoming online participants, it can be a matter of where people are located that determines the accessibility of the activity. It is not possible for everyone to participate in everything while keeping sufficient sleep and daytime schedules in their local area. This location-based temporal factor means, for many people, difficulties in participating in the educational, research, entertainment and financial opportunities that support international equity. This and other issues are considered further in 'Location and Access: Issues Enabling Accessibility of Information' (Nevile & Ford, 2006).
So, again, there are functional accessibility requirements that can flow simply from where one is at the time.
Some types of information present particular problems of accessibility. Mathematics has depended upon graphical representation to make it quickly accessible to mathematicians. They learn the symbolism and write and interpret the mathematics with agility if they can see it.
Blind mathematicians have enormous difficulties: they have to work with both the mathematical concepts and the very difficult encoding that represents the mathematical content but is cumbersome and increases their cognitive load enormously. W3C has developed the Mathematical Markup Language [MathML] for expressing mathematics, for both presentation (graphically) and manipulation, so that appropriate software can display mathematics on the screen as one expects to see it, but can also enable cutting-and-pasting of sections of mathematics, as one does with text in a word processor.
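As a minimal sketch of what such encoding looks like, the following presentation MathML expresses the fraction (a + b)/2. The structural markup, rather than the visual rendering, is what transformation software such as Braille translators relies on:

```xml
<!-- Presentation MathML for (a + b) / 2.  The markup records the
     structure explicitly: a fraction (mfrac) whose numerator is the
     grouped row a + b and whose denominator is the number 2.  A visual
     renderer lays this out graphically, while other software can
     transform the same structure to speech or to a Braille
     mathematics code. -->
<math xmlns="http://www.w3.org/1998/Math/MathML">
  <mfrac>
    <mrow>
      <mi>a</mi><mo>+</mo><mi>b</mi>
    </mrow>
    <mn>2</mn>
  </mfrac>
</math>
```

Because the grouping of the numerator is explicit in the markup, a non-visual rendering does not have to guess where the numerator ends, as it would if the expression were supplied only as an image or as flat text.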
Although the problem has been largely solved for the sighted mathematician, it remains a problem for the mathematician who wants to use Braille. The author and others have worked on the development of transformation services that will enable blind Braille users to access mathematics that is encoded correctly in MathML (W3C WCAG 2.0, 2004; Smith, 2004; BraMaNet, 2008).
Spatial information, now commonly available in multi-media forms, offers a special challenge to those who want everyone to be able to enjoy their information. Not only is there the standard range of problems, such as how a blind person gets access to the information in a map (an image), or how they participate in an interactive walk-through of a building, but there is also the special nature of the information to consider. For professionals, the problem is usually different from the one faced by everyday users. Experts who work in areas such as the spatial sciences can usually work with text and make sense of it: databases containing numbers are useful representations of the information and can be interpreted with standard database techniques, so blind people, for example, can learn to use these alternative formats. But people who are not blind, but for now have their eyes busy, do not have this training. Not everyone who can see reads a map well, as we know. Some people like to picture the information about the route to the beach by thinking of the landmarks, others by using the compass, and still others perhaps by remembering the names of streets or the number of them. Maps allow such people to read off what works for them, in most cases. But now that people are walking around with hand-held devices, and the maps are often very small, or they need the information without having to look, we have to find ways for speech output devices to represent the information. We have to work on the variety of ways in which people might understand spatial information, to find new representations that will work for them. This is a known current challenge, and the field of multi-media cartography is engaged with it (Nevile & Ford, 2006).
There are now a growing number of cybercartographers who are trying to re-invent cartography in the era of digital information (Taylor, 2006). Their focus is on what people can do with digital information and how this might lead to new forms of maps. In a similar way, there is work to be done to see how people with disabilities might benefit from the transition to digital data.
In order to decide what to read and when, especially when reading a newspaper, most users with visual abilities look for headings of sections and then choose what is of interest. In publications where this is to happen, headlines play a significant role in the overall presentation of the content. Where the headings are clearly such, the visual reader scans the headings and can even get clues as to their relative importance, usually from their size. A page from the New York Times provides a good example (Figure ???).
Where adaptive or assistive technologies provide additional help for users, such as by providing an overview of the content of the page, the structure can be marked for presentation in other ways, as illustrated by Human Factors International (Figure ???). On the left is a browser-generated table of contents from a Web page laid out using correct HTML heading structure, and on the right, a blank browser-generated table of contents from the same page that was marked up but using paragraphs and 'direct format' font elements to produce "headings" that were to be identified only by font size.
Figure ???: accessibility pages http://www.humanfactors.com/downloads/markup.asp accessed 15/1/2005
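The distinction Human Factors International demonstrates can be sketched in HTML. This is an illustrative fragment only, with invented heading text; the `font` element shown was common practice in the period described, though later deprecated:

```html
<!-- "Headings" conveyed only by font size: a browser or screen reader
     building a table of contents finds no headings at all on the page -->
<p><font size="6"><b>Annual Report</b></font></p>
<p><font size="4"><b>Financial Summary</b></font></p>

<!-- Structural markup: the ranking (h1 above h2) is machine-readable,
     so a table of contents can be generated automatically and a screen
     reader user can jump from heading to heading -->
<h1>Annual Report</h1>
<h2>Financial Summary</h2>
```

Both versions can be styled to look identical on screen, which is why the failure is invisible to a sighted author checking the page visually; only the second version carries the structure that assistive technologies depend on.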
The following example of accessibility available on the Web (Figure ???) is a Macromedia Flash movie with closed captioning, played in Real Player (version 7 or later required), and accompanied by a text transcript. It is made available in this form with a number of redundant pieces to ensure the necessary combinations are available to be assembled according to needs: the Flash movie, the captions, the file that synchronises them with the movie, and the transcript. The last will be useful to anyone who wants to access the content using Braille, or who cannot hear what is being played audibly, or even just someone who cannot keep up with the pace of the movie.
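In Real Player's case, the synchronisation file was typically a SMIL presentation. The following is a sketch only, with hypothetical file names and layout values, of how SMIL pairs a movie with a timed caption stream:

```xml
<!-- A SMIL presentation (file names hypothetical) that plays a video
     and a timed text stream of captions in parallel.  The layout
     reserves a region below the video for the caption text, and the
     <par> element instructs the player to render both media together. -->
<smil>
  <head>
    <layout>
      <root-layout width="320" height="280"/>
      <region id="video" top="0" height="240"/>
      <region id="captions" top="240" height="40"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="movie.rm" region="video"/>
      <textstream src="captions.rt" region="captions"/>
    </par>
  </body>
</smil>
```

Keeping the movie, the captions and the synchronisation file as separate pieces is what allows the combinations described above to be assembled according to need: the same movie can be served with or without captions, and the caption text remains available on its own as a transcript source.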
Bob Regan (2005), Macromedia's erstwhile accessibility expert, pointed to what he described as the first and still relevant example of accessible Flash (WGBH NCAM, 2005) made by the WGBH National Center for Accessible Media [WGBH NCAM], Figure ???.
Figure ???: Zoot Suit (Moock, 2005)
It offers captions for the video, and detailed variations according to the access device being used (see Appendix 3 for complete code). The Web 'page' contains a set of instructions to the browser to determine what software is available and, based on the response, to retrieve and activate certain components. This is, in fact, a simple example of what has since been developed further into the AccessForAll approach.
UK Government Accounting offers an interesting collection of information at its site (Figure ???). The information is available as PDFs to be printed but also in electronic form so that additional features can be made available. Among other things, as they say:
The electronic version of Government Accounting 2000 enhances the print version by including a keyword search, hyper-links to related sections, pop-up definitions for Glossary terms, and easy-to-use navigation through the pages. The product now includes the ability to personalise font sizes as required. ... (UK Government, 2000)
Human Factors International (HFI), based in the US, has a very good demonstration of a page in an inaccessible and then accessible form that are different when rendered aurally although apparently the same when viewed visually (Figure ???).
The inaccessible Web page illustrated in the first column is representative of much current practice on the Internet. Graphics were used for some of the text, and tables were used to provide layout. Clear blank images were used to help stabilize the layout. HTML structural syntax was ignored, and the page HTML is invalid.
The accessible page illustrated in the right column is constructed using text for all text elements and a single image for the one graphic needed. Standard HTML elements were used to construct the page: headings, paragraphs and definition lists in this case. Additional information was also coded into the page for the listener. The page was validated against the HTML 4.01 standard.
Although the pages appear visually to be much the same, they are very different for a screen reader.
HFI provide two audible renderings in mp3 format (others are also available):
A simple way to render an inaccessible page accessible is to provide a reading of the page. This would not solve all accessibility problems for all potential users, but it may solve them for many. Thus, by providing a sound file of a reading of the text and a description of the image, or even a text file where the text is transformable, the content of the page could be made available to a large number of potential users who might otherwise not be able to access it. As this page does not appear to have links, such a simple solution would be useful, but only if the user could find the alternative version they want. This means the new file, wherever located, should be described and entered in the same catalogue of resources as the original, as an alternative for the original, and so be discoverable by a user with the need for a non-visual version. The alternative approach to dealing with an inaccessible page, working to make it universally accessible, requires the cooperation of the page owner and, unfortunately, often considerable skill, if it is possible at all.
Captions are familiar to many in the form of sub-titles for films, and becoming more common in other circumstances.
Closed Captioning: Closed captions are all white uppercase (captial [sic]) letters encased in a black box. A decoder or television with a decoder chip is necessary to view them.
Open Captioning: (subtitling). The captions are "burned" onto the videotape and are always visble [sic] -- no decoder is needed. A wide variety of fonts is available for open-captioning allowing the use of upper and lowercase letters with descenders. The options for caption placement are great, permitting location anywhere on the screen. Open Captions are usually white letters with a black rim or drop shadow. The Captioned Media Program requires Open Captioning. ...
Open Captioning covers many nuances and subtleties. The Guidelines are the key to making knowledge, entertainment and information accessible to the deaf and hard of hearing, to those that are seeking to improve their reading and other literacy skills, and to those that are learning to speak English as a second language (US Department of Education, 2005).
In particular, captions provide an excellent example of the many accessibility techniques that make resources more accessible and useful in general. That is, like curb-cuts, they make a huge difference to some but are then found to have many other uses for the general population.
It is important to many users that content is properly structured. The most obvious failure is when a major heading is simply rendered in large or coloured print, and a less important one in a smaller font size. Structure is correctly conveyed when the headings are marked as such, showing their ranking as 1, 2, etc.
One way to fix this problem is to reform the original page using the correct markup for the headings, but one does not always have access to the original: the owner may not be interested, it may be difficult to contact them, or it may be impossible for some other reason. Providing a simple list of the contents, with links to specific parts of the page, can be done by annotation of the original page, where the annotations are stored elsewhere and then applied to the page upon retrieval, before it is served to the user (Kateli, 2006). A less ambitious supplement would be a plain list of the contents, so that at least the user would know what to look for. Either way, the supplementary content needs to be discovered and associated with the original content, whether by the user's agent, the content server, or otherwise.
For many years, Microsoft showed its skepticism about universal accessibility, including by its lack of effort to make its Internet Explorer browser UAAG conformant. In 2003, however, Microsoft commissioned a study in the US to get some indication of who might need assistance with accessing information if they are to use computers or other electronic devices (Microsoft, 2008). The working-age population in the US (18 to 64 years) was found to divide into groups with severe, mild, and minimal or no difficulties resulting from disabilities, in the following proportions: 25% with severe, 37% with mild, and 37% with minimal or no difficulties (Figure ???).
Figure ???: Disabilities piechart (Microsoft, 2003a)
Further, they found (Figure ???) that:
Visual, dexterity, and hearing difficulties and impairments are the most common types of difficulties or impairments among working-age adults:
• Approximately one in four (27%) have a visual difficulty or impairment.
• One in four (26%) have a dexterity difficulty or impairment.
• One in five (21%) have a hearing difficulty or impairment.
Somewhat fewer working-age adults have a cognitive difficulty or impairment (20%) and very few (4%) have a speech difficulty or impairment.
... For the top three difficulties and impairments:
• 16% (27.4 million) of working-age adults have a mild visual difficulty or impairment, and 11% (18.5 million) of working-age adults have a severe visual difficulty or impairment.
• 19% (31.7 million) of working-age adults have a mild dexterity difficulty or impairment, and 7% (12.0 million) of working-age adults have a severe dexterity difficulty or impairment.
• 19% (32.0 million) of working-age adults have a mild hearing difficulty or impairment, and 3% (4.3 million) of working-age adults have a severe hearing difficulty or impairment (Microsoft, 2003b).
or as shown (Figure ???):
Figure ???: Likelihood of difficulties (Microsoft, 2003b)
These findings show that the majority of working-age adults are likely to benefit from the use of accessible technology. As shown in the chart in Figure [???], 60% (101.4 million) of working-age adults are likely or very likely to benefit from the use of accessible technology.
The chart in Figure [???] also shows the percentages of working-age adults who are likely or very likely to benefit from the use of accessible technology due to a range of mild to severe difficulties and impairments:
• 38% (64.2 million) of working-age adults are likely to benefit from the use of accessible technology due to mild difficulties and impairments.
• 22% (37.2 million) of working-age adults are very likely to benefit from the use of accessible technology due to severe difficulties and impairments.
• 40% (67.6 million) of working-age adults are not likely to benefit due to no or minimal difficulties or impairments (Microsoft, 2003b).
or as shown in Figure ???:
The report states:
The fact that a large percentage of working-age adults have difficulties or impairments of varying degrees may surprise many people. However, this study uniquely identifies individuals who are not measured in other studies as "disabled" but who do experience difficulty in performing daily tasks and could benefit from the use of accessible technology.
Note that many or most of the individuals who have mild difficulties and impairments do not self-identify as having an impairment or disability. In fact, the difficulties they have are not likely to be noticeable to many of their colleagues. (Microsoft, 2003b)
Three more sets of figures provide the incentive to think carefully about accessibility in the general population:
Figure ???: Difficulties by age (Microsoft, 2003c)
Figure ???: Aging population (Microsoft, 2003c)
Together, Figures ???, ??? and ??? paint a grim picture for the US. There is clearly a worrying trend towards much higher proportions of the community being much older than at present, and therefore more likely to be at risk of disability.
There is every reason to assume the figures will be similar in Australia.
In summary, the Microsoft report claims:
In the United States, 60% (101.4 million) of working-age adults who range from 18 to 64 years old are likely or very likely to benefit from the use of accessible technology due to difficulties and impairments that may impact computer use. Among current US computer users who range from 18 to 64 years old, 57% (74.2 million) are likely or very likely to benefit from the use of accessible technology due to difficulties and impairments that may impact computer use. (Microsoft, 2003d)
This points to the fact that not all those who could benefit from computer use do use computers. There are many reasons for this but, as publishing becomes increasingly electronic and younger people adopt the technology, the evidence above suggests there is going to be an increasing problem unless accessibility is also rapidly increased.
While Microsoft was working to convince itself, or otherwise, of the need to pay attention to accessibility issues, Texthelp Systems Inc. took a different slant, having developed a solution for at least a high proportion of those with disabilities. They claim:
In the US and Canada there are:
people with literacy problems (source: U.S. Nat'l Literacy Survey 1992)
10-15% of the population with a learning disability (source: National Institutes of Health)
18% of the population over age 5 for whom English is a second language (US Census Bureau 2002)
13+% of children aged 3-21 who receive special education (source: www.nces.ed.gov)
12% of the Canadian population with some type of disability (source: Statistics Canada)
22% of Canadians who are functioning at the lowest literacy level (source: Statistics Canada)
as justification for their product BrowseAloud. BrowseAloud is a service that can be offered by a Web site to provide streamed reading aloud of the content of the site, assuming it is properly constructed.
In 2006, the US National Council on Disability released a policy paper that explores key trends in information and communication technology, and highlights the potential opportunities and problems these trends present for people with disabilities. It suggests some strategies to maximize opportunities and avoid potential problems and barriers. In particular,
The following are some emerging technology trends that are causing accessibility problems.
• Devices will continue to get more complex to operate before they get simpler. This is already a problem for mainstream users, but even more of a problem for individuals with cognitive disabilities and people who have cognitive decline due to aging.
• Increased use of digital controls (e.g., push buttons used in combination with displays, touch screens, etc.) is creating problems for individuals with blindness, cognitive and other disabilities.
• The shrinking size of products is creating problems for people with physical and visual disabilities.
• The trend toward closed systems, for digital rights management or security reasons, is preventing individuals from adapting devices to make them accessible, or from attaching assistive technology so they can access the devices.
• Increasing use of automated self-service devices, especially in unattended locations, is posing problems for some, and absolute barriers for others.
• The decrease of face-to-face interaction, and increase in e-business, e-government, e-learning, e-shopping, etc., is resulting in a growing portion of our everyday world and services becoming inaccessible to those who are unable to access these Internet-based places and services. (NCD, 2006)
The report points out that technology in common use changes fast and unpredictably with the result that "assistive technology developers cannot keep pace". They cite convergence and competitive differences as having "a negative effect on interoperability between AT and mainstream technology where standards and requirements are often weak or nonexistent". The rapid increase in the number of aging people who have naturally increasing disabilities is, of course, always a concern.
On a more positive note, the NCD report summary lists a number of technological advances and says:
These technical advances will provide a number of opportunities for improvement in the daily lives of individuals with disabilities, including work, education, travel, entertainment, healthcare, and independent living.
It is becoming much easier to make mainstream products more accessible. The increasing flexibility and adaptability that technology advances bring to mainstream products will make it more practical and cost effective to build accessibility directly into these products, often in ways that increase their mass market appeal. (NCD, 2006)
In 1998, the US Federal Government legislated in favour of the accessibility of digital resources, including applications, when the federal government is procuring content, systems or services [s508]. As the largest employer of people with disabilities in the US, the Federal Government is also responsible for social security (income replacement), including for people with disabilities. There may have been some connection between the two, because it is clearly better in a number of ways for the US Federal Government to offer useful employment to citizens with disabilities than to have to support them all on disability pensions.
Fairfax in Australia, however, has offered a similarly striking economic reason for being concerned about accessibility. In 2003, they redeveloped their Web site with accessibility in mind, and the result was a saving of an estimated $1,000,000 per year in transmission costs. In a 2004 presentation to the Web Standards Group [WSG], Brett Jackson, Creative Director of Fairfax Digital, reported the following about Fairfax's major move to the XHTML/CSS platform:
Who we are
• Fairfax Digital
o 40 sites
o 5 or 6 key destinations
o smh.com.au, theage.com.au, drive.com.au, mycareer.com.au, domain.com.au, afr.com.au
§ 135 million PI's [page impressions] per month
§ 6 mill uv's [unique visitors]
§ The leading News sites in Australia
§ 3 to 4 minute average session times
What we did
• moved our biggest sites across in a 6 month timeframe
• the smoothest rollout we have ever experienced
• will save a million $ in bandwidth a year
Where we're at now
• First major AUS publisher to make the move to CSS/xhtml
• started publishing in css/xhtml in nov 2003
• will move all sites across in the next 6-9 month (Jackson, 2004)
In 2003, a surprisingly high proportion of the Webby award winners (organised by the International Academy of Digital Arts and Sciences) were found to have accessible sites despite their multimedia attractions. In the opinion of Bob Regan, the accessibility expert for Macromedia, the vendors of Dreamweaver and Authorware, the Webby winners had accessible sites not so much because they were concerned about accessibility as because they were keen to use the latest, smartest techniques, and these inevitably led to increased accessibility (Regan, 2004).
The Authoring Tool Accessibility Guidelines [ATAG] can be used as functional requirements for the accessibility of authoring tools of all kinds. The underlying belief is that if the tools are designed to promote accessible products, authors of resources will, simply by using the tools, make their products accessible, even inadvertently. The author, who was involved in the development of ATAG, asserts that if those who are so concerned about training their authors in accessibility were to save the money and time involved and instead buy them better authoring tools, more might be achieved for the same outlay.
Work on making computer text 'accessible' had started at least by the early 1990s, and the processes being advocated then are the basis of what is used today. The term accessible has already been described. Here, the history of the effort is presented briefly. Then the emergence of the W3C and, later, in 1997, of the Web Accessibility Initiative is described insofar as the history is relevant. What are now known as the Web Accessibility Initiative's guidelines for accessible content, published by W3C, started life before either the Web or W3C was significant in the field. They, like so many other things, have historical roots that may help explain why they are as they are. The work of those responsible for authoring and recommending the guidelines, the W3C WAI Working Groups, is considered insofar as it is relevant, and then the guidelines themselves are introduced.
The significance of the guidelines in this context is not how comprehensive or effective they are, but rather how they are determined and the role they play in stimulating technology development by allowing for the generalisation of specific accessibility problems.
In 1994, in the abstract to "Document processing based on architectural forms with ICADD as an example", the authors wrote:
ICADD (International Committee for Accessible Document Design) is committed to making printed materials accessible to people with print disabilities, eg. people who are blind, partially sighted, or otherwise reading impaired. The initiative for the establishment of ICADD was taken at the World Congress of Technology in 1991. (Harbo et al, 1994)
Earlier in the article they describe the mission of ICADD as:
The ambition of ICADD is that documents should be made available for people with print disabilities at the same time as and at no greater cost than they are made available to people who can access the documents in traditional ways (usually by reading them on pages of paper). This ambition presents a significant technological challenge.
ICADD has identified the SGML standard as an important tool in reaching their ambitious goals, and has designed a DTD that supports production of both "traditional" documents and of documents intended for people with print disabilities (eg. in braille form, or in electronic forms that support speech synthesis).
It should be noted that the proposed way of making the materials available was to use SGML, the predecessor of HTML, which was the first and has remained the main markup language for the Web.
After WWW94, Dan Connolly (1994) reported his participation and recorded with respect to a discussion session chaired by Dave Raggett:
One interesting development is that right now, HTML is compatible with disabled-access publishing techniques; i.e. blind people can read HTML documents. We must be careful that we don't lose this feature by adding too many visual presentation features to HTML.
It might be noted that this early conference was held before the World Wide Web Consortium was formed. Yuri Rubinsky was at that early conference at CERN. As an ICADD pioneer, he had been involved in making sure that SGML could be used for more than standard text representations, and he and his colleagues did not want their work to be lost in the context of the new, fast-emerging technology, the Web. A year later, at WWW4 in Boston in December 1995, Mike Paciello, another ICADD pioneer, offered a workshop called "Web Accessibility for the Disabled".
Meanwhile, the World Wide Web Consortium [W3C] was being formed with host offices in Boston, Tokyo and Sophia-Antipolis in France. It came into existence in late 1994. Within a short time, the American academies were working on what was then called the National Information Infrastructure (NII). It was a time of great expectations for the new technologies. In a report published in August 1997, the American National Academies called for work to ensure that the new technologies were accessible to everyone:
It is time to seek new paradigms for how people and computers interact, the committee said. Current computer systems, which arose from models conceived in the 1960s and 1970s, are based on the concept of a single user typing at a computing terminal. These systems have limitations, however. For example, using many applications simultaneously can be awkward, and inefficiency can ensue when multiple users with different abilities and equipment try to access and work on the same documents at the same time. No single solution will meet the needs of everyone, so a major research effort is needed to give users multiple options for sending and receiving information to and from a communication network. The prospects are exciting because of recent advances in several relevant technologies that will allow people to use more technologies more easily.
"This is a time when tremendous creativity is required to take advantage of the vast array of new technologies coming forth, such as virtual reality systems and speech recognition, eye-tracking, and touch-sensitive technologies," said steering committee chair Alan Biermann, chair of the Levine Science Research Center at Duke University, Chapel Hill, N.C. "But the point remains that we are still using a mouse to point and click. Although a gloriously successful technology, pointing and clicking is not the last word in interface technology."
The report encourages both government and industry to invest in research on the components needed to develop computing and communication networks that are easy to use. Applying studies of human and organizational behaviors to lay the groundwork for building better systems will be very important to these efforts. New component designs also should take into account the varied needs of users. People with different physical and cognitive capacities are obvious audiences, but others would benefit as well. Communication devices that recognize users' voices would help both the visually impaired as well as people driving cars, for example. It is time to acknowledge that usability can be improved for everyone, not just those with special needs.
The report draws from a late 1996 workshop that convened experts in computing and communications technology, the social sciences, design, and special-needs populations such as people with disabilities, low incomes or education, minorities, and those who don't speak English (National Academies, 1997).
It should be noted that the steering committee included Gerhard Fischer and Gregg Vanderheiden, both already champions of the need for accessibility of electronic media.
The Committee wrote about research as helping with universal access:
This will complement government policies that address economic and other aspects of universal access. Federal agencies should encourage universal access to the NII by supporting research and requiring adequate development and testing of systems purchased for use at public service facilities (National Academies, 1997).
Very soon after this report was released, in October 1997, a press release was issued by the American National Science Foundation. What follows is from the archived version of it:
The National Science Foundation, with cooperation from the Department of Education's National Institute for Disability and Rehabilitation Research, has made a three-year, $952,856 award to the World Wide Web Consortium's Web Accessibility Initiative to ensure information on the Web is more widely accessible to people with disabilities.
Information technology plays an increasingly important role in nearly every part of our lives through its impact on work, commerce, scientific and engineering research, education, and social interactions. However, information technology designed for the "typical" user may inadvertently create barriers for people with disabilities, effectively excluding them from education, employment and civic participation. Approximately 500 to 750 million people worldwide have disabilities, said Gary Strong, NSF program director for interactive systems.
The World Wide Web, fast becoming the "de facto" repository of preference for on-line information, currently presents many barriers for people with disabilities.
The World Wide Web Consortium (W3C), created in 1994 to develop common protocols that enhance the interoperability and promote the evolution of the World Wide Web, is working to ensure that this evolution removes -- rather than reinforces -- accessibility barriers.
National Science Foundation and Department of Education grants will help create an international program office which will coordinate five activities for Web accessibility: data formats and protocols; guidelines for browsers, authoring tools and content creators; rating and certification; research and advanced development; and educational outreach. The office is also funded by the TIDE Programme under the European Commission, by industry sponsorships and endorsed by disability organizations in a number of countries.
"I commend the National Science Foundation, the Department of Education and the W3C for continuing their efforts to make the World Wide Web accessible to people with disabilities," said President Clinton. "The Web has the potential to be one of technology's greatest creators of opportunity -- bringing the resources of the world directly to all people. But this can only be done if the Web is designed in a way that enables everyone to use it. My administration is committed to working with the W3C and its members to make this innovative project a success" (NSF, 2007).
Things had moved very quickly behind the scenes. W3C had worked through its academic staff to gain the NSF's support for the project and politically manoeuvred the launch into the public arena with the support of a newly appointed W3C Director and the President of the US.
Mike Paciello describes the history thus:
The World Wide Web Consortium (W3C) has consolidated previously written accessibility guidelines from a range of organisations (Lazzaro, 1998). Principally this work was initiated by Mike Paciello, George Kerscher and Yuri Rubinsky, who co-founded the International Committee for Accessible Document Design (ICADD). ICADD established standards for accessible electronic information (ISO 12083 and ICADD-22), the forerunners of the WAI guidelines. While Mike Paciello was the Executive Director of the Yuri Rubinsky Insight Foundation from 1996-1999, he was responsible for developing and launching the Web Accessibility Initiative (Paciello ???).
Sadly, Yuri Rubinsky died early in 1996. Gregg Vanderheiden became the Co-Chair of the Web Content Accessibility Guidelines Working Group, and Mike Paciello, long expected to become the director of the W3C initiative, went elsewhere when Judy Brewer was appointed to that position.
Another significant player in this history was Jutta Treviranus. She had been working with Yuri Rubinsky at the University of Toronto and quickly emerged, with her colleague Jan Richards, as an expert who could lead the development of guidelines for the creation of good authoring tools. In a paper entitled "Nimble Document Navigation Using Alternative Access Tools", presented at WWW6 in 1997, she argued that:
Due to the evolution of the computer user interface and the digital document, users of screen readers face three major unmet challenges:
1. obtaining an overview and determining the more specific structure of the document,
2. orienting and moving to desired sections of the document or interface, and
3. obtaining translations of graphically presented information (i.e., animation, video, graphics).
She further stated that:
These challenges can be addressed by modifying the following:
• the access tool (i.e., screen reader, screen magnifier, Braille display),
• the browser,
• the authoring tools (e.g., HTML, SGML, plug-in, Java, VRML authoring tools),
• the HTML specifications, HTML extensions, Style Sheets,
• the individual documents, and
• the operating system (Treviranus, 1997).
Treviranus was already the Chair of the Authoring Tools Accessibility Working Group for W3C, and has been ever since. Clearly, the principles of the ICADD developments were on their way into the W3C guidelines.
With the appointment of Wendy Chisholm as a staff member at W3C, the work of TRACE, the Wisconsin-based laboratory of Gregg Vanderheiden (co-chair of the WCAG Working Group) and Chisholm's former employer, contributed significantly to the foundation of W3C's WAI. Judy Brewer, the Director of W3C responsible for WAI, was not herself an expert in content accessibility at the time, but was strong in disability advocacy.
The W3C guidelines were already crawling by the time they entered the W3C process.
W3C WAI inherited, from ICADD's ISO 12083 and later standards, an architecture of documents in which a Document Type Definition (DTD) was used to describe the structure of the document in a common language, or a language that could be mapped to a common terminology, while the style applied to those structural objects could be set any number of times by a designer. Presentation could, and should, be separated from content, as the slogan goes.
ICADD is aware that it is unrealistic to expect document producers and publishers to use the ICADD DTD directly for production and storage. Instead a "document architecture" has been developed that permits relatively easy conversion of SGML documents in practically any DTD to documents that conform to the ICADD DTD for easy production of accessible versions of the documents. ...
The approach of ICADD is interesting, not least because it illustrates that document portability and exchange in SGML can be achieved by other means than standardizing on a single DTD in the exchange domain. In ICADD, portability is achieved by specifying mappings onto a standardized DTD. (Harbo et al, 1994)
This is an important article for its explanation of how, given an architecture for markup, a single application can be used to read the markup and present the content in different ways according to instructions about how to present each type of content. This was the state of the art in 1994.
The article further explains:
The relatively new international HyTime standard (ISO 10744) introduced the notion of architectural forms. With architectural forms, SGML elements can be classified by means of #FIXED attributes as belonging to some class. In HyTime, architectural forms are used as a basis for processing hypermedia documents, but their use is not limited to that.
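A minimal sketch of the mechanism described in the quotation is given below; the element name is hypothetical, although SDAFORM is the attribute name associated with ICADD's SGML Document Access work. A publisher's DTD attaches a #FIXED attribute to each of its own elements, classifying it in ICADD terms:

```sgml
<!-- A publisher's own element, declared in its native DTD ... -->
<!ELEMENT chaptitle - - (#PCDATA)>
<!-- ... carries a fixed attribute that classifies it, architectural-form
     style, as an ICADD-level first-rank heading. A generic converter
     reads only the SDAFORM values, and so can map documents from
     practically any conforming DTD onto the ICADD tag set. -->
<!ATTLIST chaptitle  SDAFORM  CDATA  #FIXED "h1">
```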
With good foresight, the authors note the good and bad features of ICADD and then, in their conclusion, say:
Still, the approach chosen by ICADD does seem to be a good one, despite its lack of full generality. The problem that ICADD faces is not only technical, it is also political and organisational. Improving access through the use of the ICADD intermediate format will only happen if information owners and publishers choose to support it; ICADD depends on the DTD developers to specify the mapping onto the ICADD tag set. By using architectural forms for the specification, ICADD reduces the perceived complexity of specification development; at the same time, by having the specification be physically part of the DTD, this development is stipulated to be an integrated part of the DTD development itself, thus presumably increasing the chances of support from the DTD developers.
What they said of ICADD seems to have accurately predicted what would happen to Web content markup in the next decade. What is now obvious is that the influence of the early solutions and players was to prove dominant, and the SGML solutions would be, in some ways, taken for granted, and possibly even act as a constraint in the future.
It was but a short step to take the ICADD architecture into the Web world, as happened with the introduction of styles, machine-readable specifications for the presentation of structural elements in a Web page. HyperText Markup Language (HTML) was the same kind of language as SGML, although far simpler, and, like SGML, referred to a DTD. What had happened in the move from the early use of computers to the Web was the introduction of the extensive use of multimedia, particularly graphics, and so HTML needed to be adjusted with element attributes that would stem the flow away from accessibility. The challenge became not one of maintaining the mono-media qualities that Connolly noted, but of finding ways to support the proliferation of media without compromising accessibility.
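The separation slogan translates directly into the style mechanism. In the hypothetical fragment below, the markup records only what each piece of content is, while a stylesheet, which can be swapped without touching the document, records how it should look:

```html
<!-- Structure only: what each piece of content *is*. -->
<h1>Annual Report</h1>
<p class="summary">Revenue grew in all divisions.</p>

<!-- Presentation only: a large-print or high-contrast stylesheet
     can be substituted here without editing the document itself. -->
<style>
  h1        { font-size: 2em; color: navy; }
  p.summary { font-style: italic; }
</style>
```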
A simple example is provided by the tag that shows where the inclusion of an image is required. The <img> tag needed an attribute that would provide those who could not see the image with some idea of what it contained. Adding the alt attribute achieved this. Later, the longdesc attribute went further, pointing to a full explanation of the image.
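In markup, the two mechanisms look like this (the file names are hypothetical):

```html
<!-- alt gives a short text equivalent, rendered when the image is not;
     longdesc points to a separate page carrying a full description. -->
<img src="sales-chart.png"
     alt="Bar chart of sales by quarter"
     longdesc="sales-chart-description.html">
```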
The idea was that the HTML DTD would specify the structural elements that should be used and the content would be interpreted, according to the provided styles, by the user agent, or 'browser' as it came to be known. What went wrong was that the browser developers were able to exploit this new technology to their advantage by offering browsers that could do more than any other: competition among the browser developers led to constant fragmentation of the standard as they offered both new elements and new ways of using them. The browser battles continue although a decade later, for a variety of reasons, some browsers are appearing that adhere to the current standards.
But determining what the code should do at that level was not the only work of W3C WAI. The jointly-funded activity was to:
create an international program office which will coordinate five activities for Web accessibility:
• data formats and protocols;
• guidelines for browsers, authoring tools and content creators;
• rating and certification;
• research and advanced development; and
• educational outreach (NSF, 1997)
As the Web gained popularity, it acquired more and more users for whom it was inaccessible. As Tim Berners-Lee pointed out in an early presentation of the Web (Connolly, 1994), it had gone from being the communication medium for a lot of geeks who were content with text to a mass medium, and in the process lost some of its most endearing qualities, including the equity of participation that characterised the early Web.
WAI was positioned, then, to receive supplications from all sorts of users who were finding the Web inaccessible, or from people acting on their behalf. As an open activity, anyone could (and can) join the WAI Interest Group mailing list and voice an opinion. This has been happening for more than ten years and the list of problems is very long. In that time, the obvious problems were identified early, and the more difficult ones, such as the identifiable problems of people with dyslexia and dyscalculia, have emerged more recently. Many have been reported repeatedly. They are generally classified into three types, problems to do with content, user agents and authoring tools, and are channelled towards the three working groups responsible for those areas.
The Working Groups are more focused than the Interest Group and now have charters describing their goals, processes and achievement points that help them prepare a recommendation for the Director of the W3C. Essentially, what they do is gather requirements and describe those requirements in generic terminology, aiming to make their recommendations vendor and technology independent and future proof.
The Working Groups consist of experts who do what experts do, generalise and specialise. One might say, then, that the WAI Working Groups are chartered to determine the relevant specialisations for consideration and to generalise from them to define guidelines for accessibility.
The guidelines serve a number of purposes but a clear and specific use of them is to ensure that all W3C recommended "data formats and protocols" contribute to accessibility. The guidelines have themselves assumed the role of data formats and protocols: they have been promoted to content creators in their raw form, and this has required considerable support effort which might have been avoided had they been subsumed into the formal data formats and protocols and those been the focus of promotion. This is what happened with HTML: the last version of HTML was amended to include the identified accessibility features, which now appear as attributes within HTML 4.01. XHTML, the reformulation of HTML in Extensible Markup Language (XML), soon succeeded HTML as a Recommendation from the Director of W3C, and with its introduction more accessibility features were introduced. Despite the W3C Director's recommendation that people should not continue to use HTML, it is still used extensively.
(It is the author's opinion that in many institutions, money that might have been spent pressing for better and cheaper authoring tools, and promoting the replacement of old tools, has instead gone into training creators to use the now deprecated HTML in accessible ways. This is a tractable although difficult problem. Teaching content developers to use XML frightens most of them, and so it is not even attempted, even though with the right tools it can be done almost without the author noticing. The Authoring Tool Accessibility Guidelines, which have not been taken as seriously as the content guidelines, are designed to help make authoring tools that are both usable by people with disabilities and capable of producing content that is usable by people with disabilities. The point so often missed is that if authors use conforming tools, instead of the many non-conforming ones, they can produce very accessible content 'unconsciously', without needing to know very much. The author believes this would make a much bigger difference than the prevailing approach of trying to make all content developers accessibility-skilled while they work with bad tools and raw markup. The result is that HTML continues to be used in its raw form and little has been achieved in the way of increased accessibility of the Web. This, despite the reality that the move from HTML to XHTML requires very little effort beyond using what was HTML 4.01 correctly, referring to the right DTD, and writing the tags in lower case!)
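The mechanical nature of that move can be illustrated with a minimal sketch (the file names and content here are hypothetical): an HTML 4.01 document becomes XHTML once the appropriate DTD is referenced, the tags are written in lower case, and every element is closed.

```html
<!-- Old habit (HTML): <P>An image: <IMG SRC="logo.gif" ALT="Company logo"> -->
<!-- The same content as XHTML 1.0: correct DTD, lower-case, well-formed -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head><title>Example page</title></head>
  <body>
    <p>An image: <img src="logo.gif" alt="Company logo" /></p>
  </body>
</html>
```

Nothing in the visible content changes; the difference is entirely in the markup discipline.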
W3C is a technical standards organisation and its work is devoted to technical specifications. Whereas another type of organisation concerned about accessibility might have worked on developer practices and on what should be encouraged within the industry and developer community, possibly backed by 'ISO 9001'-style certification, W3C has stuck to specifying technical output and has been remarkably successful in doing so. The result is that many countries, in adopting legal support for accessibility, have also relied on the WCAG specifications, sadly almost always without reference to the authoring tool or user agent specifications.
Conformance with general guidelines is not easily verified, and so the WCAG generalities have to be reduced to specifics in each particular case in order to be tested. The Working Groups responsible for the generalisations support this process by producing specific examples to clarify what they mean, but of course these do not fit every situation and so are often not relevant or helpful. In general, the problem is that all these things are subject to interpretation by people with more or less expertise and personal bias. The Working Groups endeavour to write their recommendations in unambiguous language but, of course, this is not really possible. The result is that conformance is not an absolute quality.
Conformance with formats and protocols is simpler. It is a machine-determinable state, although its effectiveness depends upon the formats and protocols having correctly captured the requirements. As the range of problems that users may have is infinite, the guidelines and the formats and protocols redefined from them cannot be expected to cover every possibility for inaccessibility. There are also many requirements that are not capable of such formal definition.
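The machine-determinable character of format conformance can be sketched in a few lines of Python. This is only an illustration of a single check, the `alt` attribute required by WCAG 1.0 Checkpoint 1.1, and not a model of a real checker such as Bobby or LIFT, which test far more than this.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Count img elements that lack an alt attribute entirely."""

    def __init__(self):
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs; an empty alt="" still counts
        # as present, since the checkpoint requires the attribute, not text.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1

checker = AltTextChecker()
checker.feed('<p><img src="a.gif" alt="logo"><img src="b.gif"></p>')
print(checker.missing_alt)  # 1
```

The limits of such checks are exactly those the DRC investigation later documented: a machine can detect the absence of the attribute, but not whether its text is meaningful.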
Given the problems with accessibility, many developers have tried to avoid the problem by offering a 'text-only' version of their content. A major problem with this approach has been that the pages often get 'out of synch', with text-only pages not being updated with sufficient frequency.
But many people with disabilities do not want to be treated as such: they want to be able to participate in the world equally with others, so they want to know what others are being given by a resource. They want an inclusive solution. They may prefer the idea of a universal resource, a one-size-fits-all solution that includes them. The Chair of the British Standards Institution's committee on Web Accessibility, Julie Howell (2008), considers this issue and asks whether it is equality of service or equality of Web sites that matters most.
The early objection to the text-only alternative on the part of developers disappeared when site management was handed over to software systems capable of producing both versions from a single authoring of the content. This relies on a shift from client software being responsible for the correct rendering of the resource to authoring/serving software providing the appropriate components. So-called 'dynamic' sites respond to client requests by combining components to suit the user.
The motivation for accessibility often arises in a community of users rather than creators and so it is common to find a third party creating an accessible version of a resource or part of the content of a resource. The production of closed captions for films is usually the activity of a third party, as is the foreign language dubbing of the spoken sound tracks. ubAccess has developed a service that transforms content for people with dyslexia. A number of Braille translation services operate in different countries to cater for the different Braille languages, and online systems such as Babelfish help with translation services.
The opportunity to work with third party augmentations and conversions of content is realised by a shift from universal design to flexible composition. Universal design has the creator responsible for the various forms of the content while flexible composition allows for distributed authoring. The server, in the latter case, brings together the required forms, determined by reference to a user's needs and preferences.
For flexible, distributed resource composition, metadata descriptions are needed both of the user's needs and preferences and of the content pieces available for construction of the resource. The Inclusive Learning Exchange [TILE] demonstrates this: TILE uses the AccLIP and AccMD metadata profiles to match resources to users' needs, with the capability to provide captions, transcripts, signage, different formats and more to suit those needs.
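A matching process of the kind TILE performs can be sketched in a few lines of Python. The record structures and field names below are illustrative assumptions only, not the actual AccLIP or AccMD vocabularies.

```python
# Hypothetical, simplified stand-ins for AccLIP (user needs and preferences)
# and AccMD (resource description) records.
user_preferences = {
    "captions": True,      # the user has asked for captions
    "transcript": False,   # but not for a transcript
}

resource = {
    "primary": "lecture.mp4",
    "alternatives": {
        "captions": "lecture.srt",
        "audio-description": "lecture-ad.mp3",
    },
}

def compose(resource, prefs):
    """Return the primary resource plus any alternatives the user has requested."""
    wanted = [name for name, needed in prefs.items() if needed]
    delivery = [resource["primary"]]
    delivery += [resource["alternatives"][n] for n in wanted
                 if n in resource["alternatives"]]
    return delivery

print(compose(resource, user_preferences))  # ['lecture.mp4', 'lecture.srt']
```

The point of the sketch is the division of labour: the creator describes what forms exist, the user describes what forms are needed, and the server performs the composition.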
Flexible composition satisfies the requirements for the users, allows for more participation in the content production which is a boon for developers, and demands more of server technology. As noted elsewhere, this is suitable for increasing accessibility but also has the benefit that it limits the transfer of content that will not be of use to the recipient. This technique also saves on requirements for client capabilities which is useful as devices multiply and become smaller. Economically too, it seems to be a better way to go (Jackson, 2004).
In summary, the history of the text-only page has shown some trends:
from universal design to flexible composition
from client responsibility for resource rendering to server responsibility
from centralised authoring to distributed authoring
from code-cutting designers to applications-supported designers
from creator-controlled content forms to user-demanded content forms
At the time of writing, the authoritative version of the WCAG is "Web Content Accessibility Guidelines 1.0, W3C Recommendation 5-May-1999" [WCAG]. There is a new version under development for which the idea of universal design is maintained. The role of WCAG is still to support the developers as they choose what markup to use (of course, many of them are oblivious of the choices and their implications) and then to check that all is well.
The role of the authoring tool and user agent guidelines is to ensure that content, hopefully WCAG-conformant, will be usable and fully functional.
There is no sense in which one would want to 'fault' the work of WAI in the area of accessibility. Like others, they have struggled to deal with an enormous and growing problem and everyone has contributed all they can to help the cause. Nevertheless, it is clear that the work of WAI alone cannot make the Web accessible. Much has been written about the achievements of the universal access approach; here it is not the topic but rather the context for the current work, and increasing the effectiveness of the WAI work is a major goal.
On 27/3/03, the UK Disabilities Rights Commission [DRC] issued a press release announcing its "First DRC Formal Investigation to focus on web access". They planned to investigate 1000 Web sites "for their ability to be accessed by Britain’s 8.5 million disabled people". They said that "A key aim of the investigation will be to identify recurrent barriers to web access and to help site owners and developers recognise and avoid them."
Significantly, this testing would not just be done by people evaluating the Web sites against a set of specifications, but they would also involve 50 disabled people in in-depth testing of a representative sample of the sites, testing in their case for practical usability. They claimed that, "This work will help clarify the relationship between a site’s compliance with standards and its practical usability for disabled people." Bert Massie, Chairman of the DRC, said: “The DRC wants to see a society where all disabled people can participate fully as equal citizens and this formal investigation into web accessibility is an important step towards that goal.”
The DRC has legal power. As Mr Massie said: “Organisations which offer goods and services on the Web already have a legal duty to make their sites accessible. The DRC is committed to enforcing these obligations but it is also determined to help site owners and developers tackle the barriers to inclusive web design.” (DRC, 2003)
On 30 April 2003, Accessify carried the following report of the briefing for the DRC project:
I had always thought that despite being labelled a 'formal' investigation, it would not carry any real legal implications, and thankfully (for many people) this was indeed the case. The term formal means that the DRC can carry out two types of investigation - a named party or a general investigation, and it's the latter that's taking place (a named party investigation would only apply to an organisation that is repeatedly 'offending' and is put under investigation). ...
So, it isn't a 'naming and shaming' exercise. What exactly does it entail then? Well, the format is basically this - 1,000 web sites hosted in Great Britain are going to be tested using automated testing tools such as Bobby and LIFT. From that initial 1,000 a further 100 sites will undergo more rigorous testing with the help of 50 people with a varying range of disabilities, varying technical knowledge and all kinds of assistive devices. This is not going to be centralised, so it will be interesting to see how the consistency is maintained. However, some of the testing will be filmed (the usual usability kind of set-up) and a whole raft of data is going to need to be pulled together in some kind of presentable format. I don't envy Helen Petrie who has the task of co-ordinating this!
The aim is to go beyond the simple testing for accessibility (although those original 1,000 sites will only have the automated tests) - the notion put forward is "Accessibility for Usability" ... which to these ears sounds like another term for 'Universal Design' or 'Design For All'. I'm not sure I appreciate the differences, if indeed there are any. It's certainly true that getting a Bobby Level AAA pass does not automatically make your site accessible, and it certainly doesn't assure usability. The interesting thing about this study, in my opinion, is how clear the correlation is between sites that pass the automated Bobby tests and their actual usability as determined by the testers. Will a site that has passed the tests with flying colours be more usable? I suspect that the answer will usually be yes. After all, if you have taken time and effort to make a site accessible, the chances are you have a good idea about the usability aspect. We will see ... (Accessify, 2003a)
Beyond establishing the proposed methodology, the DRC project leader claimed that they would:
Develop concept of “Accessibility for Usability” (Accessify, 2003b)
A year later, after the report was released, OUT-LAW published an article about it (2004):
Egg.com and Oxfam.org.uk were among just five websites praised for their excellent accessibility...
City University London tested 1,000 UK-based sites on behalf of the DRC... Its findings, released yesterday, confirmed what many already suspected: very few sites are accessible to the disabled – albeit an inaccessible site presents a risk of legal action under the UK's Disability Discrimination Act.
However, while the report did not "name and shame" the 808 sites that failed to reach a minimum standard of accessibility in automated tests, City University has today revealed five "examples of excellence" from its study:
• egg.com (Internet bank)
• oxfam.org.uk (charity)
• sisonline.org (spinal injuries voluntary organisation)
• copac.ac.uk (on-line catalogues of research libraries)
• whoohoo.co.uk (comedy dialect translator)
Helen Petrie, Professor of Human Computer Interaction Design at City University, said: “The Spinal Injuries Scotland site highlights how an accessible website can be created on a small budget and still be lively and colourful. Additionally, Egg’s site shows larger firms can embrace accessibility without compromising their corporate image or losing any sophistication from their e-services.”
Despite these examples of excellence, the overwhelming majority of websites were difficult, and at times impossible, for people with disabilities to access.
Petrie added: “Web developers need to use the Web Accessibility Initiative (WAI) guidelines as well as involve disabled users to ensure web sites are usable for these groups.” ...
In its automated tests, City University checked for technical compliance with the World Wide Web Consortium (W3C) guidelines. ...
Following the report from the DRC, co-written by City University, the W3C issued a statement "to address potential misunderstandings about W3C's [Web Accessibility Initiative or WAI] Guidelines introduced by certain interpretations of the data."
This was not, however, a rejection of the DRC's study. In fact, the W3C has confirmed that it welcomes the UK research. The potential misunderstanding came from the fact that, while 1,000 sites underwent automated tests, City University put 100 of these sites to further testing by a disabled user group.
That group identified 585 accessibility and usability problems; but the DRC commented that 45 per cent of these were not violations of any of the 65 checkpoints listed in the W3C's Web Content Accessibility Guidelines, or WCAG.
The report was based on Version 1.0 of the WCAG – a version which has been around since 1999. The W3C was keen to point out that the WCAG is only one of three sets of accessibility guidelines recognised as international standards, all prepared under the auspices of the W3C's Web Accessibility Initiative. ...
The W3C explained that in fact its WAI package addresses 95 per cent of the problems highlighted by the DRC report. However, both the W3C and the DRC are keen to point out that they are working towards a common goal: to make websites more accessible to the disabled.
OUT-LAW spoke to Judy Brewer, the W3C's Web Accessibility Initiative Domain Leader. The Web Content Accessibility Guidelines Working Group is currently working on Version 2.0 of the WCAG which she hopes will be finalised next year, possibly in the first quarter.
"We will be looking at the comments from the DRC report in our work on Version 2.0," explained Brewer. "We have always said that user testing of accessibility features is important when conducting comprehensive testing of web site accessibility."
She acknowledged that the way Version 1.0 is written means that it can sometimes be difficult to tell whether various checkpoints are satisfied. The plan, it seems, is to retain some concept of priority or conformance levels, with criteria included which will make it easier for web developers to know that they have met them.
This change of style should help: another recent study, by web-testing specialist SciVisum, found that 40 per cent of a sample of more than 100 UK sites claiming to be accessible do not meet the WAI checkpoints for which they claim compliance. Brewer said this is not unusual: "We noticed that over-claiming a site's accessibility by as much as a-level-and-a-half is not uncommon." So Version 2.0 should be more precisely testable.
The reason for the W3C statement on the DRC findings was, said Brewer, to minimise the risk that the public might interpret the findings as implying that they cannot rely on the guidelines.
City University's Professor Petrie told OUT-LAW: "Our report strongly recommends using the WCAG guidelines supplemented by user testing – which is a recommendation made by W3C." She added that the University's data is "completely at W3C's disposal" for its continuing work on WCAG Version 2.0.
Both the W3C and the DRC are keen to point out that developers should follow the guidelines for site design – WCAG Version 1.0 – but they should not follow these in isolation: user testing, they both agree, is very, very important. (Out-Law, 2004)
OUT-LAW's commentary is interesting because it takes a critical position on the report and comments on its relationship to the W3C WCAG Versions 1 and 2. These comments will be considered in more detail in following chapters.
The DRC Report foreword by the Commission's Chairman, Bert Massie, states:
This report demonstrates that most websites are inaccessible to many disabled people and fail to satisfy even the most basic standards for accessibility recommended by the World Wide Web Consortium. It is also clear that compliance with the technical guidelines and the use of automated tests are only the first steps towards accessibility: there can be no substitute for involving disabled people themselves in design and testing, and for ensuring that disabled users have the best advice and information available about how to use assistive technology, as well as the access features provided by Web browsers and computer operating systems. (DRC, 2004b, p. v)
The report authors tend to use the term 'inclusive design' rather than universal design.
They comment that:
Despite the obligations created by the DDA, domestic research suggests that compliance, let alone the achievement of best practice on accessibility, has been rare. The Royal National Institute of the Blind (RNIB) published a report in August 2000 on 17 websites, in which it concluded that the performance of high street stores and banks was “extremely disappointing”. A separate report in September 2002 from the University of Bath described the level of compliance by United Kingdom universities with website industry guidance as “disappointing” [Kelly, 2002]; and in November 2002, a report into 20 key “flagship” government websites found that 75% were “in need of immediate attention in one area or another” [Interactive Bureau, 2002]. Recent audits of the UK’s most popular airline and newspaper websites conducted by AbilityNet reported that none reached Priority 1 level conformance and only one had responded positively to a request to make a public commitment to accessibility (DRC, 2004b, p. 4).
They further confirmed that the introduction of the guidelines and the local legislation had not succeeded in achieving accessibility of Web sites, this time reporting on the state of affairs in the UK:
It is the purpose of this report to describe the process and results of that investigation, and to do so with particular regard to the relationship between formal accessibility guidance (such as that produced by the WAI) and the actual accessibility and usability of a site as experienced by disabled users. From that analysis, the report draws practical conclusions for the future development of website accessibility and usability, and makes recommendations directed at the Government, at disabled people and their organisations, at designers and providers of assistive technology, at the developers of automated accessibility checking tools, at designers of operating systems and browsers, at website developers, and at website commissioners and owners. In this way, it is the intention of this report to help realise the potential of the Web to play a leading part in the future full participation of all disabled people in society as equal citizens. (DRC, 2004b, p. 5)
The overall finding includes the comment that compliance with the WAI guidelines does not ensure accessibility. Finding 2 contains the sub-point 2.2:
Compliance with the Guidelines published by the Web Accessibility Initiative is a necessary but not sufficient condition for ensuring that sites are practically accessible and usable by disabled people. As many as 45% of the problems experienced by the user group were not a violation of any Checkpoint, and would not have been detected without user testing. (DRC, 2004b, p. 12)
The report goes on to describe many things that could be done by humans including training of web content providers and web users, proactive efforts by people with front-line responsibility such as librarians and more.
Finding 5 states:
Nearly half (45%) of the problems encountered by disabled users when attempting to navigate websites cannot be attributed to explicit violations of the Web Accessibility Initiative Checkpoints. Although some of these arise from shortcomings in the assistive technology used, most reflect the limitations of the Checkpoints themselves as a comprehensive interpretation of the intent of the Guidelines. (DRC, 2004b, p. 17)
The level of compliance with the guidelines was amazingly low, even given the common perception that compliance levels are not high:
• Of 1000 pages tested, 81% [failed] even the lowest level of compliance as tested by automatic testing tools, which can only detect some kinds of non-compliance, so clearly fewer than 19% would be even Level 1 compliant.
• Of the 1000, only six pages passed the automated testing for Levels 1 and 2, indicating that fewer than six would be Level 2 compliant. In fact, only two of the original 1000 passed this phase of testing when they were manually checked.
• No pages were found to be Level 3 compliant. (DRC, 2004b, pp. 22-23)
In addition to the proportion of home pages that potentially passed at each level of Guideline compliance, analyses were also conducted to discover the numbers of Checkpoint violations on home pages. Two measures were investigated. The first was the number of different Checkpoints that were violated on a home page. The second was the instances of violations that occurred on a home page. For example, on a particular home page there may be violations of two Checkpoints: failure to provide ALT text for images (Checkpoint 1.1) and failure to identify row and column headers in tables (Checkpoint 5.1). In this case, the number of Checkpoint violations is two. However, if there are 10 images that lack ALT text and three tables with a total of 22 headers, then the instances of violations is 32. This example illustrates how violations of a small number of Checkpoints can easily produce a large number of instances of violations, a factor borne out by the data. (DRC, 2004b, p. 23)
Analysis of the instances of Checkpoint violations revealed approximately 108 points per page where a disabled user might encounter a barrier to access. These violations range from design features that make further use of the website impossible, to those that only cause minor irritation. It should also be noted that not all the potential barriers will affect every user, as many relate to specific impairment groups, and a particular user may not explore the entire page. Nonetheless, over 100 violations of the Checkpoints per page show the scale of the obstacles impeding disabled people’s use of websites. (DRC, 2004b, p. 24)
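The two measures described in the quoted passages can be sketched as follows, using the report's own worked figures (the checkpoint identifiers are WCAG 1.0's; the numbers are from the example above).

```python
from collections import Counter

# Each reported problem is tagged with the Checkpoint it violates, following
# the worked example: 10 images lacking ALT text (Checkpoint 1.1) and
# 22 table headers not identified (Checkpoint 5.1).
problems = ["1.1"] * 10 + ["5.1"] * 22

per_checkpoint = Counter(problems)

checkpoints_violated = len(per_checkpoint)          # different Checkpoints violated
violation_instances = sum(per_checkpoint.values())  # instances of violations

print(checkpoints_violated, violation_instances)  # 2 32
```

The asymmetry between the two measures is the point: a handful of violated Checkpoints can generate over a hundred instances per page, which is what the DRC data showed.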
The report contains many statistics about the speed with which the users were able to complete tasks in what is generally understood as usability testing. It showed, in the end, that usable sites were usable, regardless of disability needs.
On page 31, there is some explanation of the results:
The user evaluations revealed 585 accessibility and usability problems. 55% of these problems related to Checkpoints, but 45% were not a violation of any Checkpoint and could therefore have been present on any WAI-conformant site regardless of rating. On the other hand, violations of just eight Checkpoints accounted for as many as 82% of the reported problems that were in fact covered by the Checkpoints, and 45% of the total number of problems (DRC, 2004b, p. 31).
After providing the details, the report continues:
Only three of these eight Checkpoints were Priority 1. The remaining five Checkpoints, representing 63% of problems accounted for by Checkpoint violations (or 34% of all problems), were not classified by the Guidelines as Priority 1, and so could have been encountered on any Priority 1-conformant site.
Further expert inspection of 20 sites within the sample confirmed the limitations of automatic testing tools. 69% of the Checkpoint related problems (38% of all problems) would not have been detected without manual checking of warnings, yet 95% of warning reports checked revealed no actual Checkpoint violation.
Since automatic checks alone do not predict users’ actual performance and experience, and since the great majority of problems that the users had when performing their tasks could not be detected automatically, it is evident that automated tests alone are insufficient to ensure that websites are accessible and usable for disabled people. Clearly, it is essential that designers also perform the manual checks suggested by the tools. However, the evidence shows that, even if this is undertaken diligently, many serious usability problems are likely to go undetected.
This leads to the inescapable conclusion that many of the problems encountered by users are of a nature that designers alone cannot be expected to recognise and remedy. These problems can only be resolved by including disabled users directly in the design and evaluation of websites. (DRC, 2004b, p. 33)
The final statement here is most important. It is the main thesis of the DRC Report that usability testing involving people with disabilities is essential to the accurate testing of content.
An important finding of the report was the extremely low level of accessibility of resources. The report explains:
The low rate of expertise identified, the lack of involvement of disabled people in the design and testing processes, and the relatively low use even of automatic testing tools contribute to an environment which makes the currently poor state of Web accessibility inevitable. (DRC, 2004b, p. 38)
What is significant here is that there is such a low rate of universal or, as Petrie says, inclusive accessibility. While this should not be taken as a sign of failure of those accessibility goals, it does suggest that there is a great need for more to be done, and that it is unlikely to be done by the original content creators. This means that third party support should be enabled, and that is dependent on protocols to enable it.
In a sense, the Report places responsibility on the users:
Disabled people need better advice about the assistive technology available so that they can make informed decisions about what best meets their individual needs, and better training in how to use the most suitable technology so they can get the best out of it. (DRC, 2004b, p. 39)
While this is a possible conclusion, it is asserted that the conclusion could equally have been that a better method of ensuring user satisfaction should be developed. There is a general emphasis on responsibility and training in many commentaries on accessibility. Many examples of calls for training of creators, for example, are similar to those within this report but perhaps this responsibility is misplaced. It is interesting to note also that the Report advocates more trust of users to select what they need and want (possibly represented by assistants).
If money is to be spent, the use of better authoring tools may prove cheaper than the training being advocated. And if users need to be served better, perhaps removing the need for them to translate their own needs into assistive technologies is somewhat more attractive.
It is hard to deny the conclusion that:
There is a need to increase the availability of affordable individual expert assessments, but this must be complemented by appropriate signposting to such qualified specialist organisations. That implies a requirement for the education of those who have prime responsibility for assessing the more general assistive technology needs of disabled people (such as occupational therapists, rehabilitation staff, special educational needs coordinators, and Job Centre Plus staff), and of those who are likely to provide advice and training to disabled people (for example, librarians, advisers in information bureaux, as well as professional information and computer technology trainers and assistants). (DRC, 2004b, p. 39)
But the question might be what role these assistants should be trained to play. Perhaps training them to help users complete a simple questionnaire about their needs and preferences would be the easiest and most effective use of training time. Of course, this would only be possible if applications acted on those needs and preferences, and this does mean server improvements. (Note that this issue is considered in some detail in chapter ....)
The report suggests the very practical step of:
The development of on-line user communities and the consequent development by users of their own mutual support arrangements will usefully supplement individual assessments of this sort. (DRC, 2004b, p. 39)
But again this advice is based on a narrow definition of users that was the subject of the report, namely those with disabilities, as made clear in the following extract:
The investigation had three main purposes:
To evaluate systematically the extent to which the current design of websites accessed through the Internet facilitates or hinders use by disabled people in England, Scotland and Wales
To analyse the reasons for any recurrent barriers identified by the evaluation, including a provisional assessment of any technical and commercial considerations that are presently discouraging inclusive design
To recommend further work which will contribute towards enabling disabled people to enjoy full access to, and use of, the Web. (DRC, 2004b, p. 46)
The Report's definition of users is implicitly limited by the scope of the Report. Accessibility, in general, is a far broader issue with a wider scope. There is no way that user groups of the kind suggested by the Report could cater for all the situations that account for inaccessibility: the combinations of needs are too various, identification of classes of needs would be too difficult, and the individual differences in needs and preferences would be lost in the process.
Petrie, the author of the DRC report, and others say:
Indeed, accessibility is often defined as conformance to WCAG 1.0 (e.g. [HTML Writers Guild]). However, the WAI’s definition of accessibility makes it much closer to usability: content is accessible when it may be used by someone with a disability [W3C. Web Accessibility Initiative Glossary] (emphasis added). Therefore the appropriate test for whether a Web site is accessible is whether disabled people can use it, not whether it conforms to WCAG or other guidelines. (Kelly et al, 2005, p. 4)
Thatcher expresses this nicely when he states that accessibility is not “in” a Web site; it is experiential and environmental, and it depends on the interaction of the content with the user agent, the assistive technology and the user. (Kelly et al, 2005, p. 4)
Kelly et al (2005) argue that the DRC report and other evidence show that there is not yet a good solution to the accessibility problem, but that the solution clearly does not rest in a set of technical authoring guidelines. In fact, they list factors that need to be taken into account in the determination of accessibility:
• The intended purpose of the Web site or resource (what are the typical tasks that user groups might be expected to perform when using the site? What is the intended user experience?)
• The intended audience – their level of knowledge both of the subject(s) addressed by the resource, and of Web browsing and assistive technology.
• The intended usage environment (e.g. can any assumptions be made about the range of browsers and assistive technologies that the target audience is likely to be using?)
• The role in overall delivery of services and information (are there pre-existing non-Web means of delivering the same services?)
• The intended lifecycle of the resource (e.g. when will it be upgraded/redesigned? Is it expected to be evolvable?) (Kelly et al, 2005, p. 6)
They argue that priorities must be set for each context and that
This process should create a framework for effective application of the WCAG without fear that conformance with specific checkpoints may be unachievable or inappropriate. (Kelly et al, 2005, p. 7)
They provide an image of the wider context:
Figure ???: The wider context for accessibility (Kelly et al, 2005, p. 8)
This framework offers one way of thinking about the problems. But only a year later many of the same authors offered what they call the 'tangram' approach (Chapter 5). It should be noted that the proposed AccessForAll approach assumes an operational framework that can include any and all of these contextual issues.
By 2008, it was an open question whether WCAG should be the foundation of legislation for accessibility. This does not detract from its role as a standard for developers, but it suggests it is not a one-stop solution. Kelly (2008), in particular, has been outspoken about this. In reporting on the UKOLN-organised Accessibility Summit II event on A User-Focussed Approach to Web Accessibility, he said:
The participants at the meeting agreed on the need “to call on the public sector to rethink policy and guidelines on accessibility of the web to people with a disability”. As David Sloan, Research Assistant at the School of Computing at the University of Dundee and co-founder of the summit, reported in an article published in the E-Government Bulletin, “the meeting unanimously agreed the WCAG were inadequate”.
In the next chapter, other ways of approaching accessibility are considered.
This chapter considers the shift from placing all responsibility for accessibility on the resource developer, who must produce a single resource (with multiple components if necessary) that is accessible to all according to the WCAG specifications, to a situation in which responsibility is distributed among many, including the creator, the server and the user. It adopts the concept of ongoing inclusive practices, shows that there is a significant shift in current thinking to support this, and provides evidence of projects that support this view.
Van Assche et al (2006) stated the general problem succinctly in terms of e-learning as follows:
The main concern for Accessibility Interoperability is to shift the focus from design for disabilities to design for all. At present, accessibility is very much design for access to single objects. A more holistic approach to accessibility of equipment, services and learning opportunities could benefit all users, not only persons with special needs. The WAI guidelines cover syntactical accessibility, making it easy to test automatically whether a web page conforms to accessibility requirements. However, stimulating "design to the test" does not improve accessibility to learning. In addition, semantic and procedural aspects of electronic communication must be taken into consideration.
The main challenge is to stimulate the creation of alternatives instead of just having "cosmetic" transformations of digital resources. It is a stakeholder concern that some of the national legislation (e.g. Section 508 in the US) might block the development of more appropriate standards for accessibility of learning technologies. There is also, according to the community of experts, a danger of premature standardisation.
1. To improve accessibility to learning opportunities we should develop profiles and guides for the learning, education and training domain that would help us to gain more from a number of existing specifications, e.g. W3C's guidelines, a number of IMS specifications, etc. We should also develop guidelines on how to provide alternative representations of learning resources and exploit the interactive capabilities of e-learning tools to ensure accessibility. Web services could enhance the accessibility capabilities of a number of technologies. Last but not least, we need to strengthen awareness of accessibility issues in the e-learning community.
2. To ensure accessibility interoperability among different learning technologies, accessibility information should be embedded in all learning technologies.
The authors of "Developing A Holistic Approach For E-Learning Accessibility" (Kelly, Phipps & Swift, 2004) point to surveys of accessibility of higher educational sites undertaken in the UK before the DRC Report and comment that the findings are similarly not good:
These findings seem depressing, particularly in light of the publicity given to the SENDA legislation across the community, the activities of support bodies such as TechDis and UKOLN and the level of awareness and support for WAI activities across the UK Higher Education sector. (Kelly, Phipps & Swift, 2004)
But the thrust of the 'holistic approach' paper is that there is more to accessibility than a technical analysis of conformance with WAI guidelines, and that such things as blended learning may provide better solutions. Blended learning is learning that is not only technology-based but includes physical objects and the role of people such as assistants, perhaps family members. Jutta Treviranus, on the other hand, in her keynote address at the 2004 OZeWAI Conference [OZeWAI 2004], emphasised that there is an effort in Canada to use the technology, to exploit its artificiality, and to let it provide for people according to their needs and preferences in ways that humans in the physical world often cannot (Treviranus & Roberts, 2006). This position does not deny the possibility of human and physical help, but it does make strong demands on the technology for those situations in which it is involved.
There is no reason to follow one approach or the other; rather, it is important to be aware of both. Within educational contexts in the UK, the 'SENDA' legislation requires reasonable accommodations to be made to promote inclusive learning. Kelly et al (2005) argue this is done by adopting a holistic approach to accessibility. Where learning is being undertaken in an online environment, the technology should be operating at its highest level of support for accessibility, as is required in Canada.
Kelly et al (2005) raise expectations of responsibility and performance for teachers, parents and institutions; the Disability Rights Commission expects the support communities to take a greater role (2004a); and Treviranus claims the AccessForAll approach asks more of the technology. Kelly et al argue for standing back from online life and including other aspects of life, while Treviranus argues that what is needed is standing back from the original resource and providing what it contains in a form the user can access. Kelly et al do this offline; AccessForAll requires the server to do it. In essence, they share the holistic model, although they differ in their dependence on computers because they are working in different contexts. Another point of view on their perspectives, and those of the DRC, W3C and others, asks what burdens they place on the humans involved, and how well those humans can respond.
Kelly et al (2005) expose their limited scope in the statement:
In our holistic approach to accessible e-learning we feel there is a need to provide accessible learning experiences, and not necessarily an accessible e-learning experience.
but the point they make is valid in a wider context.
By 2006, Kelly and colleagues (Kelly et al, 2006) were moving away from what they described as their earlier absolute solution to what they refer to as their tangram metaphor, with multiple possibilities for satisfaction. They argued that the W3C tests provide a good base for accessibility but do not solve the problems and cannot: there are too many other factors involved.
Figure ???: a tangram (Kelly, 2006)
In a more recent exercise, Kelly and Brown (2007) proposed Accessibility 2.0 and called for greater variety being incorporated into the provision of accessibility.
In referring to the Australian legislative context for discrimination, Michael Bourk says:
In many ways people with disabilities represent different cultural groups. It is important to develop an understanding of different world views in attempting to negotiate policies that accommodate their requirements as citizens and consumers. The discrimination legislation is written from a rights perspective that considers the differences between impairment, disability and handicap. Confusion over the three terms and their application abounds among policy makers and service providers. Impairment refers to a temporary or permanent physical or intellectual condition. Disability is the restrictive effect on personal task performance that the surrounding environment places on people with impairments as a result of unaccommodating design or restricting structures. Handicaps are the negative social implications that occur from disabling environments. Instead of focusing on the limitations of physical or intellectual impairments, a rights model of disability places the emphasis on the disabling effects of an unaccommodating environment that may reduce social status. People may never lose their impairments but their disabilities and handicaps may be reduced with more accommodating environments designed with and for them. (Bourk, 1998)
The Commissioner accepted Telstra’s claim that it had no obligation to provide a new service as stated in s.24 of the Disability Discrimination Act. However, Wilson also accepted the counsel for the complainants [sic] argument that they were not seeking a new service but access to the existing service that formed Telstra's USO:
In my opinion, the services provided by the respondent are the provision of access to a telecommunications service. It is unreal for the respondent to say that the services are the provision of products (that is the network, telephone line and T200) it supplies, rather than the purpose for which the products are supplied, that is, communication over the network. The emphasis in the objects of the Telecommunications Act (s.3(a)(ii)) on the telephone service being "reasonably accessible to all people in Australia" must be taken to include people with a profound hearing disability. (HREOC, 1995)
In other words, says Bourk, the case establishes it is the service not the objects that must be accessible. He says:
[The Commissioner]'s statement identifies the telephone service primarily as a social phenomenon and not a technological or even a market commodity. Once a social context is used as the defining environment in which the standard telephone service operates, it is difficult to dispute the claim that all does not include people with a disability. In addition part of the service includes the point of access in the same way that a retail shop front door is a point of access for a customer to a shop. Consequently, the disputed service is not a new or changed service but another mode of access to the existing service. It is the reference of access to an existing service that has particular relevance to the IT industry. (Bourk, 1998)
Bourk was writing as a student of Tom Worthington, an Australian expert in accessibility and an expert witness in the Maguire v SOCOG accessibility case (HREOC, 1999). Bourk makes two points of interest: that accessibility is a quality of a service, and that the need for attention arises not merely because some people have medical disabilities. Both ideas are fundamental to the work being reported and of particular relevance in Australia.
In fact, the guidance notes for Australian regulations that extend the Australian Disability Discrimination Act say:
There is a need for much more effort to encourage the implementation of accessible web design; access to the Worldwide Web for people with disabilities can be readily achieved if good design practices are followed. A complaint of disability discrimination is unlikely to succeed if accessibility has been considered at the design stage and reasonable steps have been taken to provide access. (HREOC, 2002)
While Australian legislation, for example, follows others in using WCAG as the standard specifications for Web content encoding, it is clear that the test of accessibility is not just conformance to the guidelines.
Kelly et al (2005) point out that the W3C Guidelines do not claim to be the arbiters of accessibility but it is clear from most work in the field that they are often used this way. With respect to the W3C position, Kelly et al argue:
The only way to judge the accessibility of an institution is to assess it holistically and not judge it by a single method of delivery. (Kelly et al, 2005)
The summary of the US National Council on Disability's "Over the Horizon: Potential Impact of Emerging Trends in Information and Communication Technology on Disability Policy and Practice" concludes with the comment that:
"Pull" regulations (i.e., regulations that create markets and reward accessibility) generally work better than "push" regulations (i.e., regulations requiring conformance with regulatory standards), but both have a place in the development of public policies that bring about access and full inclusion for people with disabilities. Neither type of regulation works if it is not enforced. Enforcement provides a level playing field and a reward, rather than a lost opportunity, for those companies that work to make their products accessible. For enforcement to work, there must be accessibility standards that are testable and products that are tested against them. (NCD, 2006)
The AccessForAll framework developed for descriptions of accessibility needs and preferences and of resource characteristics enables the development of tests (of descriptions of resources) that are far more objective and testable than the WCAG criteria. The latter have been shown to be both frequently misjudged and abused when negative results are likely to have adverse ramifications. In addition, the WCAG criteria are not able to guarantee what they aim to achieve even if they are correctly evaluated. The AccessForAll framework does no more than identify the objective characteristics of resources.
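The logic of such matching, the comparison of a user's stated needs and preferences with objective descriptions of a resource and its alternatives, can be sketched briefly. The property names below are invented for illustration and do not reproduce the actual AccessForAll vocabulary.

```python
# Illustrative sketch of AccessForAll-style matching: a user's declared
# needs are compared with objective descriptions of a resource and its
# alternatives. Property names are hypothetical, not the real vocabulary.

def matches(user_needs, resource_description):
    """Return True if the resource satisfies every stated need."""
    for feature in user_needs.get("required_features", []):
        if feature not in resource_description.get("features", []):
            return False
    for hazard in resource_description.get("hazards", []):
        if hazard in user_needs.get("avoid", []):
            return False
    return True

def select_resource(user_needs, original, alternatives):
    """Prefer the original; fall back to the first suitable alternative."""
    if matches(user_needs, original):
        return original
    for alt in alternatives:
        if matches(user_needs, alt):
            return alt
    return None  # nothing suitable: report, rather than guess

needs = {"required_features": ["captions"], "avoid": ["flashing"]}
video = {"features": [], "hazards": []}
captioned = {"features": ["captions"], "hazards": []}
assert select_resource(needs, video, [captioned]) is captioned
```

Note that the test here is purely of objective descriptions: the sketch makes no judgement about whether either resource is "accessible" in the abstract, only whether it suits this user.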
Given the widespread faith in universal design and the low levels of achievement, any repair to a resource is likely to be made 'retrospectively'. This is not a well-defined technical term but rather one that has simply become part of the vernacular of those working in accessibility.
In "Evaluation and Enhancement of Web Content Accessibility for Persons with Disabilities", Xiaoming Zeng (2004) considered a number of surveys of accessibility of Web sites, showing that in those studies the same sort of results were obtained as in the DRC example. He pointed out that in the case of the study by Flowers, Bray and Algozzine (1999), "Their findings indicated that 73% of the universities’ special education homepages had accessibility errors, yet, with minimal revisions, 83% of those errors [were] correctable" (Zeng, 2004, p. 25).
Overwhelmingly, in most of the cases cited by Zeng, the problems were associated with failure to give a text label to images or giving one that was not appropriate. He points, however, to an exception:
Romano’s study showed that the top 250 websites of Fortune-listed companies are virtually inaccessible to many persons with disabilities. Of the 250 sites investigated, 181 of them had at least one major problem (priority 1) that would essentially keep the disabled from being able to use the site. While the study’s findings make it clear that even the best companies are not following WCAG guidelines, most of the problems blocking access to the websites could be easily identified and corrected with better evaluation methods (Zeng, 2004, p. 27).
The difference between a site that contains images lacking proper description and a site that cannot be used at all is, of course, huge. Zeng's contention is that good reporting on evaluations would make it easy for site owners to correct the defects (Zeng, 2004, p. 36).
He goes on to work on numerical representations of accessibility, develops his own, and later argues that with suitable software the major flaws in pages can often be corrected 'on the fly' to make sites accessible in the broad sense, even if images may still lack a description. Zeng's contribution is to provide a way of moving beyond the accessible/inaccessible dichotomy which, as he argues, can cause a huge site to fail the test of accessibility when only one tag is missing, while a smaller site can pass and yet be quite unusable. He limits the scope of his numerical evaluation to those features of accessibility that can be reliably tested automatically (Zeng, 2004, p. 38).
Zeng argues for a numerical value for accessibility mainly for the convenience and machine-processable properties it would have, but he states:
A quantitative numerical score would allow assessment of change in web accessibility over time as well as comparison between websites or between groups of websites. Instead of an absolute measure of accessibility that categorizes websites only as accessible or inaccessible, an assessment using the metric would be able to answer the fundamental scientific question: more or less accessible, compared to what? (Zeng, 2004, p. 38)
He cites a number of other benefits such a number might have, but he fails to convince the reader that a number would provide useful information for making a site more accessible. If one is to judge a site, perhaps his system would give a fairer evaluation than the existing and generally used checklist provided by WCAG, which is what people usually refer to for the dichotomous evaluation. He calls his metric the 'Web Accessibility Barrier' score (Zeng, 2004, p. 45).
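The general idea behind a graded metric of this kind, as opposed to a pass/fail verdict, can be illustrated with a sketch. The weights and violation categories below are invented for illustration; they are not Zeng's actual Web Accessibility Barrier formula.

```python
# Hypothetical weighted accessibility score, illustrating the general idea
# behind graded metrics such as Zeng's WAB score: barriers are weighted by
# severity and normalised by the number of opportunities to fail, so one
# missing label on a huge site no longer means outright "inaccessible".
# Weights and categories are invented, not Zeng's formula.

SEVERITY = {"priority1": 3.0, "priority2": 2.0, "priority3": 1.0}

def barrier_score(violations, checked_elements):
    """violations: list of (priority, count); checked_elements: total tested.
    Returns 0.0 for no detected barriers; higher means more barriers."""
    if checked_elements == 0:
        return 0.0
    weighted = sum(SEVERITY[p] * n for p, n in violations)
    return weighted / checked_elements

# A large site with one minor flaw scores far better than a small site
# riddled with priority-1 barriers, answering "more or less accessible,
# compared to what?":
large = barrier_score([("priority3", 1)], 500)
small = barrier_score([("priority1", 10)], 40)
assert large < small
```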
This approach is in line with that taken by the EuroAccessibility group, leading to a smaller group's work towards a quality mark. Again, the quality mark approach, however constructed, does not seem to contribute to accessibility for the user, or access to resources that would be accessible to the user if not to everyone. It might act as a motivation for content developers to be more careful about the accessibility of their resources, but it is also a source of revenue for the few organisations certified to evaluate sites (in some cases those who proposed the certification scheme), and so there has been deep suspicion about it.
In April 2004, the EuroAccessibility Workshop was held in Copenhagen and came up with an annotated draft of the original WCAG that attempted to make it testable (EuroAccessibility, 2004).
Earlier, in Paris in 2003, the following press release was issued by the EuroAccessibility group:
Twenty three (23) European organisations from twelve (12) countries working in the field of Web Accessibility, together with the W3C/WAI (Web Accessibility Initiative), on Monday, April 28, 2003 have signed a Memorandum of Understanding (MoU) for the creation of the EuroAccessibility Project. The MoU sets out governing principles for their co-operation towards the goal of establishing a harmonised set of support services over Europe, which would include a common evaluation methodology, technical assistance, and a European certification authority for Web accessibility (EuroAccessibility, 2003).
One of the things they did was try to make WCAG testable: the plan from the April 30 2004 meeting was as follows:
Take an original WCAG guideline
Guideline 1. Provide equivalent alternatives to auditory and visual content.
and the original WCAG checkpoints
1.1 Provide a text equivalent for every non-text element (e.g., via "alt", "longdesc", or in element content). This includes: images, graphical representations of text (including symbols), image map regions, animations (e.g., animated GIFs), applets and programmatic objects, ascii art, frames, scripts, images used as list bullets, spacers, graphical buttons, sounds (played with or without user interaction), stand-alone audio files, audio tracks of video, and video.
and provide Clarification Points
A text equivalent (or reference to a text equivalent) must be directly associated with the element being described (via "alt", "longdesc", or from within the content of the element itself). It is not acceptable for a text equivalent to be provided in any other manner, e.g. an image being described by an adjacent paragraph.
and testable statements
Statement 1.1.1: All IMG elements must be given an 'alt' attribute.
Statement 1.1.2: The appropriate value for the text alternative given to each IMG element depends on the use of the image. etc.
and provide a list of terms used for a glossary.
Table ???: The unfinished plan to make WCAG testable (EuroAccessibility, 2003)
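Statement 1.1.1 above is, by design, mechanically testable, while Statement 1.1.2 still requires human judgement. One possible implementation of the mechanical check, using a standard HTML parser, might look like the following sketch.

```python
# Sketch of a mechanical check for Statement 1.1.1 ("All IMG elements
# must be given an 'alt' attribute") using Python's standard-library
# HTML parser. Whether each present alt value is *appropriate*
# (Statement 1.1.2) cannot be decided this way.

from html.parser import HTMLParser

class ImgAltChecker(HTMLParser):
    """Collects the attributes of IMG elements lacking an 'alt' attribute."""
    def __init__(self):
        super().__init__()
        self.missing = []
    def handle_starttag(self, tag, attrs):
        if tag == "img" and "alt" not in dict(attrs):
            self.missing.append(attrs)

checker = ImgAltChecker()
checker.feed('<p><img src="logo.png" alt="Company logo">'
             '<img src="spacer.gif"></p>')
assert len(checker.missing) == 1  # the spacer image fails Statement 1.1.1
```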
There is no evidence that the group managed to go much further than developing the statements before the group was disbanded due to lack of funding. In the current context, it is interesting to note that the group was trying to find ways to insist that, within a resource, any necessary alternatives should be identified. This is also considered important for AccessForAll, and both share the use of what might be called metadata. In the former case, metadata would be embedded in the resource; in the latter it can be independent of it. The practical difference is that the EuroAccessibility approach would not support third-party, distributed or asynchronous annotation as easily as does the AccessForAll approach. Nor would it support the continuous improvement of the resource by the addition of accessible components, which is a major aspect of the AccessForAll approach.
The two approaches differ fundamentally, however, in that the EuroAccessibility approach was intended to make a judgmental statement about the resource whereas the AccessForAll approach strictly avoids that.
In their original press release, the EuroAccessibility group stated that:
• the W3C/WAI guidelines, which address accessibility of Web sites, browsers and media players, and authoring tools, may be promoted and implemented differently in different countries,
• there is no harmonised methodology for their application and for assessing the quality of Web sites,
• several "labels" are emerging over Europe,
• governmental organisations express the need of a guarantee of quality concerning Web accessibility,
• the Council Resolution on "eAccessibility" - improving the access of people with disabilities to the Knowledge Based Society (doc. 5165/03), under section II, paragraph 2, letter a, calls on the member states and invites the Commission "to consider the provision of an "eAccessibility mark" for goods and services which comply with relevant standards for eAccessibility.
Consequently, the signatories want to join their efforts in order to:
• co-ordinate with W3C/WAI to develop testing methodology based on the W3C/WAI Web Content Accessibility Guidelines,
• set up a common certification methodology,
• create an Accessibility Quality Mark based on common rules,
• develop an harmonised set of supporting services over Europe, based on a network, set up regional consulting desks,
• disseminate good practices,
• establish a certification authority for Web Accessibility (EuroAccessibility, 2004)
There was a division of labour among the various members of the EuroAccessibility group and a small group received funding to pursue their ideas as the CEN/ISSS WS/WAC developing a "CWA on Specifications for a Complete European Web Accessibility Certification Scheme and a Quality Mark" (CEN/ISSS WS/WAC, 2006).
There was, at the time, significant concern with respect to the EuroAccessibility work after the group had split. It was suspected that the motivation for the work was not simply improving the accessibility of the Web, but also the creation of an industry, in circumstances when there was doubt about the value of such an industry and fear it might actually stifle better work. Such concerns were notified informally to the CEN process (ref) and discussed informally in many other contexts.
In 2003, there was a general struggle with the question of what could be done to improve the accessibility of the Web. This was documented in a note to the Australian Standards Sub-Committee IT-019 (Appendix 9). One of the major concerns was that the kind of metadata being proposed at the time was mainly focused on compliance with WCAG and therefore generally not reliable. One idea was to use EARL, a technology that would record in the metadata when it was made and by whom, or by what (in the case of automatic software evaluations).
Given the problems with accessibility, a number of those concerned began to accept the failures of creators and to think about what could be done given an inaccessible resource. While many resource developers were still being encouraged to make their own resources more accessible, and hopefully this will continue, the problem of how to repair resources without access to the original files and servers became important.
Currently, there is a substantial 'industry' engaged in the production of what are called 'alternate formats' or 'alternative formats'. These are materials that have been published in an inaccessible form and are converted for particular users, for example, for students at a university. The need for this work is probably increasing as the number of students requiring special versions of content increases. It is not, however, what this research refers to as post-production enhancement of the accessibility of resources, although it could be.
The conversion of resources into alternate formats is usually done on a case-by-case basis, and is subject to copyright in many cases. For this latter reason, it is not within the focus of the research, because copyright law limits this activity to cases where a student is registered as having a medical or permanent disability and therefore qualifies for special resource conversions. When a resource is converted, it is supposed to be registered with Copyright Australia if it is otherwise subject to copyright, but it seems from anecdotal evidence presented in 2007 (ref is La Trobe conf) that many resources are not properly registered because it is a cumbersome process and those responsible try to avoid engaging in it. What happens when a resource is so registered is that it becomes discoverable for other users with permission to use such a resource in an alternate format. This means that there is metadata about the alternative, and so it could be described, or its existing description probably could be converted, to provide AccessForAll metadata; but even if this were so, it would only be available for some users.
What is of particular interest in the research is the development of automatic conversions or adaptations that can be used by anyone, following the principles of inclusion. This means resources that have additional formats made available post-production, in a way that makes the alternatives available to all who might need them. Sometimes this happens on the fly, whereby the original resource is converted on request, and sometimes the alternative is stored in some static form and made available for users who need it some time in the future. The difference is not relevant for the purposes of the research: it is either a service or a resource that is being offered to the user, but the result is the same. What is important is that the service or new resource is discoverable, and the research argues this is possible if it is described using appropriate metadata.
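Whether an alternative is generated on the fly or stored statically, what matters for inclusion is that it is discoverable. A minimal sketch of such discovery metadata follows; the field names and URIs are invented for illustration.

```python
# Sketch: a registry maps an original resource's identifier to metadata
# records describing the alternatives (or adaptation services) available
# for it, so that anyone who needs an alternative can discover it.
# Field names and URIs are hypothetical, invented for illustration.

registry = {}

def register_alternative(original_uri, alt_uri, adaptation_type):
    """Record that an alternative of the given type exists for a resource."""
    registry.setdefault(original_uri, []).append(
        {"location": alt_uri, "adaptationType": adaptation_type})

def find_alternatives(original_uri, wanted_type):
    """Discover alternatives of the wanted type for an original resource."""
    return [r for r in registry.get(original_uri, [])
            if r["adaptationType"] == wanted_type]

register_alternative("http://example.org/lecture.mp4",
                     "http://example.org/lecture-captions.vtt", "captions")
assert find_alternatives("http://example.org/lecture.mp4", "captions")
```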
There will be a list of these with a brief description of the services or resources they offer.
Richard Ladner ...
Vision Australia ...
This chapter has shifted the focus from universal accessibility of individual resources to accessibility for individual users, based on a combination of effort including both human and machine input.
IBM launched on Tuesday an application that seeks to harness the power and time of Internet users around the globe to make the Web more accessible to the visually impaired....
Using the new IBM software, users can report these problems to a central database and ask for additional descriptive text to be added to a site. Other Internet users who want to contribute can then check the database, select one of the submitted problems and "start fixing it" by adding text labels. The additional information isn't incorporated into the original site's HTML code but into a metadata file that is loaded each time a visually impaired user subsequently visits the site.
(Martyn Williams, "IBM software enhances Web accessibility for the blind", ITWorldCanada, Friday, July 11, 2008; retrieved July 11, 2008)
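The external-annotation idea described in the extract, labels stored in a metadata file keyed to a page rather than written into its HTML, can be sketched as follows. The structure is invented for illustration and does not reproduce IBM's actual format.

```python
# Sketch of external annotation: community-contributed labels live in a
# metadata store keyed by page URL and element identifier, and are merged
# in when the page is fetched, leaving the original HTML untouched.
# The data structure is hypothetical, not IBM's format.

annotations = {
    "http://example.org/": {
        "img#chart1": {"alt": "Bar chart of quarterly sales"},
    }
}

def annotations_for(page_url):
    """Return the community-supplied labels to overlay on this page."""
    return annotations.get(page_url, {})

overlay = annotations_for("http://example.org/")
assert overlay["img#chart1"]["alt"].startswith("Bar chart")
```

The key design point is the one the article makes: the repair is metadata about the resource, maintained by third parties, rather than a change to the resource itself.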
In this chapter, the term metadata is defined. Metadata is central to the research, and its definition and operation are essential to understanding the thesis. There is extensive consideration of emerging mapping technologies because the evolving Web is composed of increasingly smaller (atomic) components, and discovery and use of these is essential to the AccessForAll metadata approach at the core of the research. There are a number of ways of building a metadata profile of a resource, and as the technology in this process is the very technology to be exploited by the research, some of the possibilities are included in this chapter, such as Topic Maps and the Resource Description Framework [RDF].
In the home, we put our clothes away and remember which drawer holds what and assume that, if we're not wearing the clothes, they will be in the drawers or in the wash. We know which drawer to go to for our socks.
In the office, we put documents in files in drawers and number them so we can look up the number, or name, and find the file and thus the document.
In the digital world, we have invisible digital objects so we write labels for them and look through the labels to find the object we want.
If we label our digital objects in the same way, even using the same grammar, we can attach a lot of different labels to the same object and still find what we want.
If we have rules for organising the labels, we can use the labels to sort and organise the objects.
Then we can connect objects to each other by referring to the labels, even without looking at the objects themselves.
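The drawer-and-label analogy above can be sketched in code: the objects themselves stay opaque, and all finding and relating happens through the labels. This is a minimal illustration only; all names and values here are invented.

```python
# Objects are opaque blobs; we never inspect them directly.
objects = {
    "obj-1": b"\x89PNG...",   # an image, say
    "obj-2": b"%PDF-...",     # a document
}

# Labels share one "grammar": each is a set of (property, value) pairs.
labels = {
    "obj-1": {"title": "Garden photo", "format": "image/png"},
    "obj-2": {"title": "Garden report", "format": "application/pdf"},
}

def find(prop, value):
    """Find objects by looking only at the labels, never at the objects."""
    return [oid for oid, rec in labels.items() if rec.get(prop) == value]

print(find("format", "image/png"))  # ['obj-1']
```

Because the labels follow common rules, they can also be sorted, organised and connected to one another without the objects ever being opened.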
W3C says that, "Metadata is machine understandable information for the web" (W3C Metadata Activity).
The Dublin Core Metadata Initiative's description in plain English includes:
Metadata has been with us since the first librarian made a list of the items on a shelf of handwritten scrolls. The term "meta" comes from a Greek word that denotes "alongside, with, after, next." More recent Latin and English usage would employ "meta" to denote something transcendental, or beyond nature. Metadata, then, can be thought of as data about other data. It is the Internet-age term for information that librarians traditionally have put into catalogs, and it most commonly refers to descriptive information about Web resources.
A metadata record consists of a set of attributes, or elements, necessary to describe the resource in question. For example, a metadata system common in libraries -- the library catalog -- contains a set of metadata records with elements that describe a book or other library item: author, title, date of creation or publication, subject coverage, and the call number specifying location of the item on the shelf.
The linkage between a metadata record and the resource it describes may take one of two forms:
1. elements may be contained in a record separate from the item, as in the case of the library's catalog record; or
2. the metadata may be embedded in the resource itself.
Examples of embedded metadata that is carried along with the resource itself include the Cataloging In Publication (CIP) data printed on the verso of a book's title page; or the TEI header in an electronic text. Many metadata standards in use today, including the Dublin Core standard, do not prescribe either type of linkage, leaving the decision to each particular implementation (DCMI Usage Guide).
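The two linkage forms can be sketched side by side: a record held separately from the item, and metadata embedded in the resource itself (here as HTML meta tags, in the style once used for DC metadata). The identifiers, names and values are invented for illustration.

```python
from html.parser import HTMLParser

# Form 1: a separate catalogue record, linked to the item by identifier.
catalogue = {
    "book-42": {"creator": "A. Author", "title": "On Metadata"},
}

# Form 2: metadata embedded in the resource itself.
resource = """<html><head>
<meta name="DC.creator" content="A. Author">
<meta name="DC.title" content="On Metadata">
</head><body>...</body></html>"""

class MetaReader(HTMLParser):
    """Extract DC.* meta tags from an HTML document."""
    def __init__(self):
        super().__init__()
        self.record = {}
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").startswith("DC."):
            self.record[a["name"][3:]] = a["content"]

reader = MetaReader()
reader.feed(resource)
# Both forms yield the same description of the item.
print(reader.record == catalogue["book-42"])  # True
```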
The draft guidelines for the use of the forthcoming AGLS Metadata Standard for Australia say:
Metadata is a term for something that has been around for as long as humans have been writing. It is the Internet-age term for information that librarians traditionally have put into catalogues and archivists into archival control systems. The term ‘meta’ comes from a Greek word that denotes ‘alongside, with, after, next’. Metadata is data about other data. Although there are many varied uses for metadata, the term refers to descriptive information about resources, generally called ‘resource discovery metadata’. 1.3 in "AGLS Metadata Standard Part 2: Usage Guide" draft - not available to public yet..
and, significantly, continues:
The properties in the sets of DCMI and AGLS Metadata Terms form the current AGLS Metadata Standard. AGLS can be used for describing both online (ie web pages or other networked resources) and offline resources (eg books, museum objects, paintings, paper files etc). AGLS is intended to describe more than information resources – it is also designed to describe services and organisations. in 1.4. "AGLS Metadata Standard Part 2: Usage Guide" draft - not available to public yet...
In describing the Content Standard for Digital Geospatial Metadata, the Clinton administration's Federal Geographic Data Committee said:
The objectives of the standard are to provide a common set of terminology and definitions for the documentation of digital geospatial data. The standard establishes the names of data elements and compound elements (groups of data elements) to be used for these purposes, the definitions of these compound elements and data elements, and information about the values that are to be provided for the data elements (FGDC 1998).
They go on to add:
The standard was developed from the perspective of defining the information required by a prospective user to determine the availability of a set of geospatial data, to determine the fitness [of] the set of geospatial data for an intended use, to determine the means of accessing the set of geospatial data, and to successfully transfer the set of geospatial data. As such, the standard establishes the names of data elements and compound elements to be used for these purposes, the definitions of these data elements and compound elements, and information about the values that are to be provided for the data elements. The standard does not specify the means by which this information is organized in a computer system or in a data transfer, nor the means by which this information is transmitted, communicated, or presented to the user.
There are many definitions of metadata but generally they share two characteristics: they are about "a common set of terminology and definitions" and they have a shared structure for that language. Although metadata is analogous to catalogue and other filing descriptions, the name usually indicates that it is recorded and used electronically.
One difficulty in the use of the term is that it is, correctly, a plural noun but as that is awkward and not usually recognised in common practice, it will herein be treated as a singular noun, following the practice described by Murtha Baca, Head, Getty Standards Program, in her introduction to a book about metadata written by Getty staff and others:
Note: The authors of this publication are well aware that the noun "metadata" (like the noun "data") is plural, and should take plural verb forms. We have opted to treat it as a singular noun, as in everyday speech, in order to avoid awkward locutions (Baca, 1998).
Another difficulty is the frequency with which the word 'mapping' is used. The author wishes to write about mapping but is aware of its use in the context of 'metadata mapping', where it usually denotes the relating of one metadata scheme to another. It is also used in the expression 'metadata application profile' (MAP), where it means a particular set of metadata rules and, more specifically, where it is used by the DCMI for a set of metadata rules where those rules are a combination of rules from other sets.
Yet another difficulty is a quality of good metadata: one man's metadata can be another's data. The characteristic of metadata being referred to here is what is known as its 'first class' nature: any metadata can be either the data about some other data or itself the subject of other metadata. This is exemplified by the work of the Open Archives Initiative [OAI], which developed a standard for describing metadata so that it can be 'harvested'.
In "Metadata Principles and Practicalities" (Weibel et al, 2002), the authors comment that:
The global scope of the Web URI name space means that each data element in an element set can be represented by a globally addressable name (its URI). Invariant global identifiers make machine processing of metadata across languages and applications far easier, but may impose unnatural constraints in a given context.
Identifiers such as URIs are not convenient as labels to be read by people, especially when such labels are in a language or character set other than the natural language of a given application. People prefer to read simple strings that have meaning in their own language. Particular tools and applications can use different presentation labels within their systems to make the labels more understandable and useful in a given linguistic, cultural, or domain context (Weibel et al, 2002).
In fact, although it is often hoped that metadata will be human-readable, the more useful it becomes to computers, the more it seems to become unreadable to humans. In large part this is because it is encoded in languages that require the reader to know what is the encoding and what is the metadata, but it is perhaps also an artifact of how it is presented.
Atlases are useful collections of maps, traditionally collected from a range of cartographers (Ashdowne et al, 2000). Such a collection makes more sense, and is more useful if the conventions for representation used in each map are the same. The way of writing metadata descriptions and terms should be defined in an open way so they can be interpreted by machines and people.
In the research, metadata is used to denote structured descriptions of resources that are organised in a common way and use a common language.
When collecting descriptive metadata for discovery, one usually has a database or repository, and specifications for the structure of the data to be stored in that repository that make it possible to 'publish' the data in a consistent way. In order to share metadata between repositories, it is necessary to have the same structure for all metadata; but to make one's own metadata most useful locally, those who develop such metadata tend to want idiosyncratic structures that suit their local purposes. Local specificity and global shareability (interoperability) are thus competing interests. Sharing the metadata means that more people can use it, whereas local specificity makes it more valuable in the immediate context, where it is usually engaged with more frequently, and where the cost is often borne.
One of the features of good metadata is that it is suitable for use in a simple way but that it can handle complexity. Another is that it operates widely on the dimension of locally-specific to globally-interoperable (Figure ???).
The Dublin Core Metadata Element Set (more recently known as the DC Terms) provides an excellent example of how this might be achieved. It is a formal definition of the way in which descriptive information about a resource can be organised. It has a core set of elements that have been found to be extremely useful in describing almost every type of resource on the Web. Elements can be qualified in various ways for greater precision. In addition, selected elements can be combined with others in what is called an 'application profile' to create a new set for a given purpose. Metadata is considered to be 'Dublin Core' metadata if it conforms to the formal definition; there is no requirement on the number of elements used beyond a unique identifier for the resource being described. DC metadata can be expressed in a range of computer languages.
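A minimal sketch of such a record, assuming DCMES element names and invented values, shows both the unique-identifier requirement and a qualified element:

```python
# A Dublin Core-style record: any subset of elements may be used,
# but the resource must have a unique identifier.
record = {
    "identifier": "http://example.org/resources/1",  # required
    "title": "Introduction to Web Accessibility",
    "creator": "J. Smith",
    "date": "2005-03-01",
    # A qualified element for greater precision (qualifier after the dot):
    "date.modified": "2006-07-15",
}

def is_valid_dc(rec):
    """Minimal conformance check: a unique identifier must be present."""
    return bool(rec.get("identifier"))

print(is_valid_dc(record))  # True
```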
Originally, DC metadata was used in HTML tags in simply encoded resources. The choice of meaning for the so-called core elements was, to a certain extent, arbitrary, based on a pragmatic approach to the high cost of quality metadata and, mostly, on the experience of cataloguers in the bibliographic world. Some of the definitions were arrived at as a sort of compromise and were fairly loosely defined, even where experienced cataloguers knew there were problems being hidden within the definitions.
Over the last decade, the definitions and supporting documentation have been slowly improved, always with the need to ensure that this will not alienate existing systems.
Currently, the DC terms are defined as follows:
Each term is specified with the following minimal set of attributes:
Name:
The unique token assigned to the term.
URI:
The Uniform Resource Identifier used to uniquely identify a term.
Label:
The human-readable label assigned to the term.
Definition:
A statement that represents the concept and essential nature of the term.
Type of Term:
The type of term, such as Element or Encoding Scheme, as described in the DCMI Grammatical Principles.
Status:
Status assigned to term by the DCMI Usage Board, as described in the DCMI Usage Board Process.
Date Issued:
Date on which a term was first declared.
Where applicable, the following attributes provide additional information about a term:
Comment:
Additional information about the term or its application.
See:
A link to authoritative documentation.
References:
A citation or URL of a resource referenced in the Definition or Comment.
Refines:
A reference to a term refined by an Element Refinement.
Qualifies:
A reference to a term qualified by an Encoding Scheme.
Broader Than:
A reference from a more general to a more specific Vocabulary Term.
Narrower Than:
A reference from a more specific to a more general Vocabulary Term. (DC Terms)
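The attribute set above is, in effect, a data structure for term declarations. A sketch of it as a Python dataclass follows; the field names paraphrase the DCMI attribute labels, and the example values for 'title' are taken from the DC terms documentation as the author recalls them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DCTerm:
    name: str                         # unique token assigned to the term
    uri: str                          # URI identifying the term
    label: str                        # human-readable label
    definition: str                   # concept and essential nature
    type_of_term: str                 # e.g. "Element" or "Encoding Scheme"
    status: str                       # assigned by the DCMI Usage Board
    date_issued: str                  # date the term was first declared
    comment: Optional[str] = None     # additional information (optional)
    see: Optional[str] = None         # link to documentation (optional)

title = DCTerm(
    name="title",
    uri="http://purl.org/dc/terms/title",
    label="Title",
    definition="A name given to the resource.",
    type_of_term="Element",
    status="recommended",
    date_issued="1999-07-02",
)
print(title.label)  # Title
```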
Despite the aim of having strict adherence to the original definitions of the DC terms, it became difficult to deal with the many moves to expand, qualify and otherwise change the DC terms. Doggedly sticking to the original documentation without further explanation and improved interoperability was proving a threat to the utility of DC metadata as the technology developed. In 2000, Thomas Baker described the grammar of the DCMES in an attempt to make it clear how it manages extensibility of elements (Figure ???).
Figure ???: DC metadata as grammar (1) (Baker, 2000)
with an example (Figure ???).
Figure ???: DC metadata as grammar (2) (Baker, 2000)
In 1999, a meeting about how to use DC metadata in educational portals was convened at Kattamingga in Australia (by the author as part of the work to develop the metadata for Victoria's new education portal). At this meeting, educationalists discussed the suitability of the DC terms to provide for descriptions of learning resources. The international group agreed that there were some extra things they wanted to use and that if there were a way of 'regularising' these, interoperability between educational catalogues (repositories) would be improved. The meeting was attended by some of the leading cataloguers of educational Web resources at the time (e.g. Stuart Sutton and Nancy Morgan from the University of Washington's GEM Project and John Mason from EdNA) and one of the two directors of the Dublin Core Metadata Initiative, Stu Weibel.
Ad hoc rules for extensions and alterations of terms were suggested on the spot by the Director of the DCMI, Stu Weibel, who said that all qualifications should:
• not redefine terms,
• not duplicate terms, and
• follow the dumb-down rule. (author's notes)
In addition, there was the idea that certain communities would find particular terms useful and that the DCMI should provide for their inclusion, perhaps as a second layer of terms. Significantly, this was the first formal application profile. An application profile was understood to be a metadata profile, conformant to DC principles, but suited to the needs of the local or domain-specific community using it. The development led to the formation of working groups for communities of interest within the DCMI structure, and the Education Working Group was soon followed by others, such as the Government Working Group, which likewise developed an application profile. Many years later, the term 'audience', originally suggested at the Kattamingga meeting, was added to the core set of DC terms. (For sentimental reasons, perhaps, the core is still usually referred to as having 15 elements despite the addition of the audience element.)
In 2000, Rachel Heery and others (Heery et al, 2000) wrote what has become a seminal article on application profiles and they are now established within DC practice. The essence of an application profile is that it allows for the mixing of metadata terms from different schema: the constraint on it is that it should not, itself, define new metadata terms but must derive them from existing schema. When this is not possible because the community in fact wants a new term, this is achieved by the community defining that term in a new name space and then referring to it, alongside other terms used in the application profile.
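The mixing that Heery et al describe can be sketched as follows: the profile draws terms from existing schemas and, where a genuinely new term is wanted, refers to it in a new namespace rather than redefining anything. The 'ex' namespace and its term are invented for illustration.

```python
NAMESPACES = {
    "dc": "http://purl.org/dc/terms/",
    "ex": "http://example.org/terms/",   # hypothetical local namespace
}

# The profile declares which terms it uses and where each comes from;
# it defines no terms of its own.
profile = ["dc:title", "dc:creator", "dc:audience", "ex:readingLevel"]

def expand(term):
    """Resolve a prefixed term to its full URI."""
    prefix, name = term.split(":")
    return NAMESPACES[prefix] + name

for term in profile:
    print(term, "->", expand(term))
```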
The DCMI glossary of 2006 offered the following:
In DCMI usage, an application profile is a declaration of the metadata terms an organization, information resource, application, or user community uses in its metadata. In a broader sense, it includes the set of metadata elements, policies, and guidelines defined for a particular application or implementation. The elements may be from one or more element sets, thus allowing a given application to meet its functional requirements by using metadata elements from several element sets including locally defined sets. For example, a given application might choose a specific subset of the Dublin Core elements that meets its needs, or may include elements from the Dublin Core, another element set, and several locally defined elements, all combined in a single schema. An application profile is not considered complete without documentation that defines the policies and best practices appropriate to the application. (DCMI Glossary-A)
In an attempt to further clarify the Dublin Core approach to metadata, the DCMI Architecture Working Group published two diagrams and some description of them in March 2005. Version 1.0 of what is known as the Abstract Model [DCMI AM] emerged after six months of interaction and consideration by that Working Group in an open forum.
It should be noted that its authors, Powell et al., stated that: “the UML modeling used here shows the abstract model but is not intended to form a suitable basis for the development of DCMI software applications”. Elsewhere, software developers were, however, explicitly stated to be one of the three target audiences for the DCAM, the other two being developers of syntax encoding guidelines and of application profiles.
That Abstract Model was a substantial step towards making it easier for implementers to model the DC metadata but it still did not solve all the problems. In 2006, a funded effort to provide an abstract model was commissioned by the DCMI. This produced a more formal graphical representation (Figures ??? - ???).
That model did not adhere to the strict rules for such diagrams set by the Unified Modeling Language (UML) and was not as easy to interpret as had been hoped. Several papers were presented at the DC 2006 Conference (Palacios et al, 2006; Pulis & Nevile, 2006), in which authors argued for a yet better model to be represented in strict UML form, pointing to a number of inconsistencies in the then current model. A new one was commissioned in 2007. At the DC 2007 Conference, Mikael Nilsson (2007) presented a formal version that is to be known as the Singapore Framework (Figure ???).
Having more precisely defined models enables profile developers to be more certain about what they need to do. This is important and the lack of a clear model, to a large extent, explains many of the difficulties faced in the accessibility metadata work at that time.
DCMES can thus be seen as providing a three-dimensional mapping of the characteristics of Web resources:
with the facility for application profiles that contain combinations of these.
As some might see it, DC is providing for infinitely extensible, n-dimensional mapping of resources.
In general, the maps of metadata are not read so much as used in the discovery or identification process. But mapping in this sense is analogous to mapping as we commonly think of it in the cartographic sense. There are rules for the co-ordinates (the descriptions) of resources, and there are structural rules, known in the information world as taxonomies, that act as topologies. The browse structure of a Web site allows one to zoom in and out on details, and map intersections and location finders are common.
Web 2.0 is not a new Web but it is a world in which resources are distributed and combined in many ways at the instigation of both the publisher and the user. It is not possible to limit the ways in which this will be done and it is not yet clear how to 'freeze' or later reconstruct any given instantiation of a resource. (Arguably, Web 3.0 will be a Web in which this is done by machines (Garshol, 2004).)
There is another aspect of Web 2.0 that is relevant to the work in accessibility. Social interaction on the Web is being generated in many cases by what is known as 'tagging' of resources. These resources are often very small, atomic, objects such as an image, or a small piece of text, or a sound file. While these objects have been on the Web since the beginning, in general they have been published within composite resources where the components have not been separately identified and they have rarely been described in metadata. The move is towards what is known as microformats:
a set of simple open data format standards that many (including Technorati) are actively developing and implementing for more/better structured blogging and web microcontent publishing in general. (Microformats)
Associated with this move is the departure of many Web users from Web site visits to the use of 'back doors' into information stores. So many people use Google and its equivalents to find what they want and then 'click' their way into the middle of Web sites that the time has come to think seriously about the role of Web sites. Blogs and wikis as publishing models are increasingly becoming the source of information for many people. The increasing availability of atomic objects, or objects in what is becoming known as micro-formats, is expected to increase the accessibility of the Web.
With respect to taxonomies, Lars Marius Garshol has the following to say:
The term taxonomy has been widely used and abused to the point that when something is referred to as a taxonomy it can be just about anything, though usually it will mean some sort of abstract structure. ... In this paper we will use taxonomy to mean a subject-based classification that arranges the terms in the controlled vocabulary into a hierarchy without doing anything further, though in real life you will find the term "taxonomy" applied to more complex structures as well. ...
Note that the taxonomy helps users by describing the subjects; from the point of view of metadata there is really no difference between a simple controlled vocabulary and a taxonomy. The metadata only relates objects to subjects, whereas here we have arranged the subjects in a hierarchy. So a taxonomy describes the subjects being used for classification, but is not itself metadata; it can be used in metadata, however (Garshol, 2004).
He then points out that at one level higher, there are thesauri that usually provide preferred terms, wider and narrower terms. Of these he says:
Thesauri basically take taxonomies as described above and extend them to make them better able to describe the world by not only allowing subjects to be arranged in a hierarchy, but also allowing other statements to be made about the subjects (Garshol, 2004).
The ISO 2788 standard for thesauri provides for more details and helps make thesauri more useful for information discovery. The extra qualifiers are similar to those used in metadata definition, such as scope note, use, top term and related term.
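A small sketch shows how these thesaurus relationships serve discovery: a non-preferred term is mapped to its preferred form ('use'), and narrower terms widen a query. The terms and hierarchy are invented for illustration.

```python
broader = {                      # term -> its broader term
    "screen reader": "assistive technology",
    "braille display": "assistive technology",
}
use = {"screenreader": "screen reader"}  # non-preferred -> preferred

def narrower(term):
    """All terms whose broader term is the given one."""
    return [t for t, b in broader.items() if b == term]

def expand_query(term):
    """Map to the preferred term, then include all narrower terms."""
    term = use.get(term, term)
    return [term] + narrower(term)

print(expand_query("assistive technology"))
# ['assistive technology', 'screen reader', 'braille display']
```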
'Tagging' has become a feature of what many people think of as Web 2.0, the social information space where users contribute to content. This is often done simply by adding some 'tags', or freely chosen labels, to others' content. For example, a user may visit a site and then send a tag referring to that site to a tag repository, organised by a service such as del.icio.us or Digg. Typically such tags have values chosen freely by the user, so they may vary enormously for one concept, just as the concepts associated with a single tag can vary widely. In 2006, the STEVE Museum's Jennifer Trant (2006) reported that museum visitors who viewed paintings on a site were prone to submit one tag but a completely different one when they re-visited the same painting remotely or searched for it via a digital image.

As indicated below, tags are generally displayed in what might be called a graphical form, for example, in tag clouds. With the increasingly graphical representation of metadata, including tags, metadata maps are starting to emerge. These can be used in a variety of ways, as considered below. Usually, the words associated with tags do not come from traditional formal thesauri, as in the case of more structured metadata, but inform what are called folksonomies. These are, in fact, ontologies, but with very different characteristics from the more traditional library subject terms and generally without structure; that is, users typically add tags with subject, author, format, etc., all mixed in together. This is not necessary, however, and some users are precise in their use of tags, even encoding them to relate to standard DC Terms (Johnston, 2006).
In response to the increased use of tags on sites, the author started a community within the Dublin Core Metadata Initiative that is concerned with the relationship between standard metadata and tagging (DC Social Tagging). It is not yet known if tagging is merely a fashion or here to stay as a robust way of getting user-generated metadata but it is of interest to see how users use words, and so might help in the selection of terms for standard thesauri. It is also hoped that the energy available for tagging in the wider community can be harnessed to provide much needed accessibility metadata in the future.
Rel-Tag is one of several MicroFormats. By adding rel="tag" to a hyperlink, a page indicates that the destination of that hyperlink is an author-designated "tag" (or keyword/subject) of the current page. (Microformats-2)
Tags are described on the Microformats Web site as follows:
rel="tag" hyperlinks are intended to be visible links on pages and posts. This is in stark contrast to meta keywords (which were invisible and typically never revealed to readers), and thus is at least somewhat more resilient to the problems which plagued meta keywords.
Making tag hyperlinks visible has the additional benefit of making it more obvious to readers if a page is abusing tag links, and thus providing more peer pressure for better behavior. It also makes it more obvious to authors, who may not always be aware what invisible metadata is being generated on their behalf. (Microformats-2)
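Reading rel-tag links out of a page can be sketched with a small parser. By the microformat's convention the tag value is the last path segment of the link's URL; the page fragment here is invented.

```python
from html.parser import HTMLParser

class RelTagParser(HTMLParser):
    """Collect the tags declared by rel="tag" hyperlinks on a page."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "a" and "tag" in a.get("rel", "").split():
            # The tag value is the last path segment of the URL.
            self.tags.append(a["href"].rstrip("/").rsplit("/", 1)[-1])

page = ('<p>Posted under <a href="http://example.org/tags/accessibility"'
        ' rel="tag">accessibility</a></p>')
p = RelTagParser()
p.feed(page)
print(p.tags)  # ['accessibility']
```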
Typically, tags are gathered and presented in a variety of ways including in tag piles as shown in an extract from an author cloud:
Tag clouds have no specific structure (see Figure ??? above). They tend to be simply piles of words, in no particular order except perhaps either alphabetical or temporal, with more popular terms displayed in larger font than less popular ones. Other systems use the graphical representation to show relationships between terms used, displaying the underlying structure in hierarchical, or other maps. Sometimes this is done explicitly, as in the case of the subject terms used in the Dewey Decimal System [DDS], for example, or implicitly, as done with the DC terms, in an abstract model that is completed for any set of actual terms.
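The tag cloud rendering just described can be sketched directly: terms are piled in alphabetical order with font size scaled by popularity. The tags and the point-size range are invented for illustration.

```python
from collections import Counter

tags = ["metadata", "accessibility", "metadata", "web",
        "metadata", "accessibility", "rdf"]
counts = Counter(tags)

MIN_PT, MAX_PT = 10, 28
lo, hi = min(counts.values()), max(counts.values())

def font_size(n):
    """Scale a tag's count linearly into the font-size range."""
    if hi == lo:
        return MIN_PT
    return MIN_PT + (MAX_PT - MIN_PT) * (n - lo) // (hi - lo)

# Alphabetical pile, more popular terms in larger type.
for tag in sorted(counts):
    print(f"{tag}: {font_size(counts[tag])}pt")
```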
Organization schemes like ontologies are conceptual; they reflect the ways we think. To convert these conceptual schemes into a format that a software application can process we need more concrete representations... (Lombardi, 2003).
The simplicity with which tags can be associated with content, and simultaneously find their way into a metadata repository, suggests that this might provide a way to capture metadata for accessibility, particularly for popular sites with a large number of visitors. The energy that is apparently available for the tagging process is also of interest: can it be harnessed to produce accessibility metadata about resources?
Lars Marius Garshol describes several types of content organising schemes:
Data Model - A description of data that consists of all entities represented in a data structure or database and the relationships that exist among them. It is more concrete than an ontology but more abstract than a database dictionary (the physical representation).
Resource Description Framework (RDF) - a W3C standard XML framework for describing and interchanging metadata. The simple format of resources, properties, and statements allows RDF to describe robust metadata, such as ontological structures. As opposed to Topic Maps, RDF is more decentralized because the XML is usually stored along with the resources.
Topic Maps - An ISO standard for describing knowledge structures and associating them with information resources. The topics, associations, and occurrences that comprise topic maps allow them to describe complex structures such as ontologies. They are usually implemented using XML (XML Topic Maps, or XTM). As opposed to RDF, Topic Maps are more centralized because all information is contained in the map rather than associated with the resources (Garshol, 2002)
When XML is introduced into an organization it is usually used for one of two purposes: either to structure the organization's documents or to make that organization's applications talk to other applications. These are both useful ways of using XML, but they will not help anyone find the information they are looking for. What changes with the introduction of XML is that the document processes become more controllable and can be automated to a greater degree than before, while applications can now communicate internally and externally. But the big picture, something that collects the key concepts in the organization's information and ties it all together, is nowhere to be found.
This is where topic maps come in. With topic maps you create an index of information which resides outside that information, as shown in the diagram above. The topic map (the cloud at the top) describes the information in the documents (the little rectangles) and the databases (the little "cans") by linking into them using URIs (the lines).
The topic map takes the key concepts described in the databases and documents and relates them together independently of what is said about them in the information being indexed. ...
The result is an information structure that breaks out of the traditional hierarchical straightjacket that we have gotten used to squeezing our information into. A topic map usually contains several overlapping hierarchies which are rich with semantic cross-links like "Part X is critical to procedure V." This makes information much easier to find because you no longer act as the designers expected you to; there are multiple redundant navigation paths that will lead you to the same answer. You can even use searches to jump to a good starting point for navigation (Garshol, 2002).
Topic maps need not be just for describing the content of the resource, such as the subject of the resource. They could be used to describe the accessibility characteristics of that content.
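The suggestion above can be sketched in topic-map style: topics, associations between them, and occurrences linking topics to resource URIs, here turned to grouping resources by accessibility characteristic. All data is invented for illustration.

```python
topics = {"captioned-video", "audio-description", "video"}

# Associations relate topics to each other, independently of resources.
associations = [("captioned-video", "is-a", "video"),
                ("audio-description", "supplements", "video")]

# Occurrences link a topic to the resources it indexes, by URI.
occurrences = {
    "captioned-video": ["http://example.org/lecture1.html"],
    "audio-description": ["http://example.org/lecture1.html",
                          "http://example.org/tour.html"],
}

def resources_with(characteristic):
    """Find resources via the map, without inspecting the resources."""
    return occurrences.get(characteristic, [])

print(resources_with("audio-description"))
```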
Faceted classification, according to Garshol, was first developed by S.R. Ranganathan in the 1930s,
and works by identifying a number of facets into which the terms are divided. The facets can be thought of as different axes along which documents can be classified, and each facet contains a number of terms. How the terms within each facet are described varies, though in general a thesaurus-like structure is used, and usually a term is only allowed to belong to a single facet ...
In faceted classification the idea is to classify documents by picking one term from each facet to describe the document along all the different axes. This would then describe the document from many different perspectives (Garshol, 2004).
In Ranganathan's case, he picked five axes. There has been significant work on faceted classification and recently it has been demonstrated as a powerful and useful way to use metadata. Again, this technology could be used to present accessible versions of resources to different communities of users.
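A brief sketch of faceted classification: one term is picked from each facet, so every document is described along all the axes at once, and searches can combine any subset of axes. The facets, terms and documents are invented for illustration.

```python
facets = {
    "subject": {"mathematics", "history"},
    "medium": {"text", "video"},
    "audience": {"school", "university"},
}

# Each document takes one term from each facet.
documents = {
    "doc-1": {"subject": "mathematics", "medium": "video",
              "audience": "school"},
    "doc-2": {"subject": "history", "medium": "text",
              "audience": "school"},
}

def search(**criteria):
    """Match documents along any subset of the axes."""
    return [d for d, terms in documents.items()
            if all(terms.get(f) == t for f, t in criteria.items())]

print(search(audience="school", medium="video"))  # ['doc-1']
```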
Garshol's list of classification systems includes categories, taxonomies, thesauri, facets and then ontologies. He argues that as we progress through the list we are getting more expressive power with which to describe objects for their discovery. Of ontologies, he says:
With ontologies the creator of the subject description language is allowed to define the language at will. Ontologies in computer science came out of artificial intelligence, and have generally been closely associated with logical inferencing and similar techniques, but have recently begun to be applied to information retrieval.
He goes on in this article to describe topic maps as an ontology framework for information retrieval and to show that topic maps have a very rich structure for information about an object that is also quite likely to be interoperable. As his example he gives:
The colours in the figure have significance as shown: one colour indicates the names of topics, another the occurrences of a topic, others show the association types, and the remaining colours could be read as the scope of topic names, or the type of topic.
Ontopia's Omnigator is a tool that allows the user to click on any topic name and have it become the 'centre of the universe' with its connections surrounding it. This makes interactive navigation around the graphical maps very simple and intuitive, and seamless across topic maps encoded differently [Ontopia]. The same idea could be used to group resources with particular accessibility characteristics.
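The navigation idea can be sketched with a minimal topic-map-like structure; the topics and association types below are invented, following the "Part X is critical to procedure V" pattern mentioned earlier:

```python
# A tiny topic-map-like structure: typed associations between topics.
associations = [
    ("part-x", "is critical to", "procedure-v"),
    ("procedure-v", "is described in", "manual-3"),
    ("manual-3", "has accessible alternative", "manual-3-audio"),
]

def neighbours(topic):
    """Everything directly connected to a topic, with the association
    type, regardless of which end of the association the topic is on."""
    found = []
    for a, assoc_type, b in associations:
        if a == topic:
            found.append((assoc_type, b))
        elif b == topic:
            found.append((assoc_type, a))
    return found

# Making "procedure-v" the 'centre of the universe', Omnigator-style:
print(neighbours("procedure-v"))
# -> [('is critical to', 'part-x'), ('is described in', 'manual-3')]
```

Any topic, including an accessible alternative such as "manual-3-audio", can become the centre, which is what makes interactive navigation around such maps simple.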
In a similar way, the Resource Description Framework (RDF) provides a very flexible way of mapping resources. RDF requires the description of properties of resources to be strictly in the form:
resource ----- relationship ----- property
subject ---- predicate ---- object
http://dublincore.org ---- has title ---- Dublin Core Metadata Initiative
The theory is that if all the properties are so described, it will be easy to make logical connections between them. Currently, RDF is implemented in XML, as that is the language of most common use today, but the framework is independent of the encoding. RDF maps, like other good metadata systems, are interoperable and extensible. An example of RDF maps and how they interoperate is provided in Figures ??? and ???.
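The ease of combining maps that follow the strict triple form can be illustrated with a small sketch in plain Python, where set union stands in for a real RDF store; the title triple comes from the example above, and the other triples are invented for illustration:

```python
# Two map fragments, each a set of (subject, predicate, object) triples.
map_a = {
    ("http://dublincore.org", "has title", "Dublin Core Metadata Initiative"),
    ("http://dublincore.org", "publishes", "DCMI Metadata Terms"),  # invented
}
map_b = {
    ("http://dublincore.org", "has title", "Dublin Core Metadata Initiative"),
    ("DCMI Metadata Terms", "includes term", "accessibility"),  # invented
}

# Because both maps state their properties in the same strict
# subject-predicate-object form, combining them is just set union:
# matching triples overlay, everything else is kept.
merged = map_a | map_b
assert len(merged) == 3  # the shared triple appears only once
```

This is the overlay principle behind combining the two map fragments: matching entities fuse, and the rest of each map is carried across unchanged.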
Figure ??? The two map fragments in Figure ??? as combined simply by overlaying the matching entities Q-colour-code #r23g67b98i to form the greater map (Nevile & Lissonnet, 2003).
One of the features of graphical maps that is of interest to those with vision disabilities, and to many programmers, is that graphical programming is, when undertaken with the right tools, simultaneously graphical and textual. This is the same as for traditional geospatial maps, where databases often hold the data and can be interrogated by users who choose not to work graphically (sometimes just because of the complexity and sheer size of the graphical representation). This is also typical of the way CAD designers work. RDF or Semantic Web maps are also of interest because of their potential to automatically make connections with alternative forms of the same or similar content.
The progression through the various metadata technologies provides an insight into the possibilities that can be exploited if there is suitable AccessForAll metadata available for resources.
The interoperability of metadata is considered one of its strengths and, in its short recent history, it has led many institutional digital libraries to share their metadata and so develop what operate as united libraries, following a range of organisational and technical models. Where sets of metadata are to be combined for some purpose, such as integration, and the metadata sets are not based on the same standards, it is often possible to map them both to a third set of metadata terms so they can be shared, even if with some loss of information.
The mapping can be loss-less when the two systems are fully compatible, but often this is not the case and some compromises are made. Dublin Core metadata, for example, follows the flat model of one property for each metadata statement; all properties can be repeated, and none but the identifier of the resource is mandatory. IEEE Learning Object Metadata (LOM), on the other hand, is deeply hierarchical: a property can have sub-properties and the sub-properties can have their own sub-properties. Mapping from LOM to DC metadata cannot, in general, be done without loss at this stage, although some mapping from LOM to DC is possible when RDF encoding principles are applied (which is not the general case).
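The difference between the two models, and the loss involved in crosswalking, can be sketched in Python; the record and its element names are invented for illustration and are not actual LOM or DC terms:

```python
# A hierarchical, LOM-like record: properties with nested sub-properties.
lom_like = {
    "general": {
        "title": "Introductory Braille Music",
        "language": "en",
    },
    "educational": {
        "intendedEndUserRole": {"role": "learner", "source": "LOMv1.0"},
    },
}

def crosswalk(record, path=()):
    """Flatten nested sub-properties into repeatable flat statements.

    The hierarchy is collapsed into dotted names, so the structural
    relationships between sub-properties are lost in the flat form:
    that is the 'loss' a DC-style target cannot avoid.
    """
    statements = []
    for name, value in record.items():
        if isinstance(value, dict):
            statements.extend(crosswalk(value, path + (name,)))
        else:
            statements.append((".".join(path + (name,)), value))
    return statements

for prop, val in crosswalk(lom_like):
    print(prop, "=", val)
```

Every leaf value survives, but a consumer of the flat statements can no longer tell which sub-properties originally belonged together under one parent element.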
It may be, as some would suggest (Vickery, 2008), that the most important thing in the Web today is the facility to find things. Metadata, of one sort or another, is essential to this process, and hence its significance in Web research and development worldwide. It is not a new topic, but it is attracting unprecedented attention, and its technical complexity has grown significantly. In the next chapter, there is a discussion of yet more specificity about metadata, this time accessibility metadata.
Given the Dublin Core as a huge base for international, cross-domain metadata, it seemed obvious when the research started on accessibility and metadata that the two should be combined. First, it is necessary to establish that there is a reasonable chance that there will be metadata about the accessibility of resources. It is also important to know if there are, as proposed, Web services that adapt resources for users. Finally, in this chapter, the relevant pre-history of the Dublin Core's role in AccessForAll accessibility metadata work is explained.
There is always, in the minds of metadata experts, experience that shows that metadata is expensive to produce and very often inaccurate. For this reason, when proposing a new use or context for metadata, it is important to be sure that it is necessary, not overly complicated, and likely to be created and used. This section locates the current research in a world that is already partially prepared for it. Showing that there is a substantial amount of discoverable material in a range of formats suitable for people with varied needs and preferences is important if there is to be further work on finding a way to describe those needs and preferences and the resources that might satisfy them. Thus, the quantity of discoverable material is indicated within this section. In addition, unless the new descriptions can be used alongside those already in use, that is, unless existing descriptions are interoperable with the new ones, there is not much point in undertaking the research. What follows shows that there is sufficient material and provides a base against which the new metadata should be interoperable.
In the UK, the Royal National Institute for the Blind (RNIB) has developed and maintained the National Union Catalogue of Alternative Formats.
Ann Chapman, in "Library services for visually impaired people: a manual of best practice" (2000), states that only 5% of the 100,000 new British titles published each year were converted into alternative formats. She points out that these formats were created by a range of individuals and organisations and made available in a number of different ways and places. "In 1989 R.N.I.B. began the process of computerising its card catalogues, thereby creating the National Union Catalogue of Alternative Formats (NUCAF)". Prior to this date, the RNIB had a catalogue of its own conversions and for five years prior to the establishment of the NUCAF, was spasmodically collecting catalogue records from others.
As part of the Department for Culture, Media and Sport funded programme to improve library and information services to visually impaired people, the role of NUCAF was reviewed in 1999 (Chapman, 1999). The review concluded that a national database of resources in alternative formats was an essential tool in service provision and that, while NUCAF in its present form had limitations, particularly in respect of access, it did provide a good basis for a more comprehensive database of resources.
It further recommended that the new database should primarily cover the output and holdings of the specialist non-commercial sector, and that collaborative agreements with existing databases and union catalogues should be developed to cover the commercial sector publications. The review pointed out that, "In addition to libraries, a range of agencies (doctors, dentists and health professionals, banks, advice centres, electricity, gas and water companies, tourist offices, schools and academic institutions, government departments, and service providers of various kinds) would either use the database or refer people to it. Currently visually impaired people and those working to support them are restricted to a few narrow avenues of access to NUCAF. The new database will be designed to be far more widely accessible to end users and library staff." To achieve this, it was recommended that the national database should be held on a web-based system, supported by CD-ROM and electronic file versions.
Eventually, as a result of various funding opportunities and projects carried out in a number of places, NUCAF was merged into a new service called REVEAL. In "Project One part A: The future role of NUCAF and a technical specification of the metadata requirements", Chapman (1999) reported "The national database should where possible use national and international standards. It should use the UKMARC format and conform to AACR2. Current RNIB subject indexing should be used for subject indexing, and LCSH entries retained where they exist in the records for the original items. A single set of headings for fiction genre/form should replace the existing ones. A full set of the data elements required has been identified."
These were found to be:
- Bibliographical details (title, author(s), publisher, date of publication, edition, series, and subject)
- Search support (subject indexing, fiction genre and form indexing, target audience, format type)
- Decision support (annotation or content summary, target audience, series and character information, serial frequency, abridgement notes, narrator or cast notes for audio materials, format type and level, number of units comprising the title, serial holdings information; also desirable: sample passages, serials article indexing)
- Support for inter-library lending and loans (holdings, locations, loan status)
- Support for sale and hire (availability status and charge, producer/hirer/retailer)
- Support for production selection (statement of intention to produce, format, producer, copyright permission details)
- Record format
- Subject indexing
- Genre indexing
While NUCAF had catalogue records for many items, they were only items converted for the benefit of users with vision disabilities and they did not include representations in all formats or modes of access. Initially, they did not include commercially produced formats and they were expected to be catalogued only so they could be discovered, as was typical of the understanding of the use of metadata at the time (1999). The MARC21 007 fields provide for quite specific information about the form of tactile representation of information such as that it is contracted Literary Braille or 'spanner short form scoring' of music.
At 2.4, Chapman points out that the existing NUCAF's "only clearly defined objectives are those that relate to stock management and production management at the RNIB. It is therefore difficult for it to satisfactorily address functions outside the RNIB". She asserted that, given the difficulties associated with copyright with respect to the transformation of information into alternative formats, the new database would need to do more. She did not think of computers at that time as being able to automatically decompose information resources and recompose them to suit the needs and preferences of users. Her final recommendations included that, "The database must provide data rich bibliographic records".
At the time, the Library was the UK's most comprehensive collection of material on the subject of visual impairment. The resultant REVEALWEB, at the beginning of 2006, boasted 100,000 resources in accessible formats (RevealWeb, 2006). This is indicative of the quantity of material that could be made available for use by people with vision disabilities, and therefore by all others who, for one reason or another, are not using their eyes as they might to view content.
REVEALWEB's formats are:
- Braille Music (based on the same six dots as traditional Braille letters, but with separate symbols for each note, key, tempo and duration)
- Moon (a line-based tactile code in which many of the letters are simplified versions of the printed alphabet; it is easier to learn than Braille and helps many older people continue to enjoy reading for themselves)
- Braille with Print
- Moon and Print
- Tactile maps and diagrams (produced by either photocopying or printing onto heat-sensitive 'swell' paper)
- Audio cassettes, 2 track (often produced with the author or an actor reading the printed word)
- Audio cassettes, 4 track (which need special equipment for playback)
- Talking Books, 8 track (digital audio files on CD)
- CD-ROMs, spoken word
- DAISY (DTB) format (Digital Accessible Information System, which enables navigation)
- Electronic text files
- Electronic Braille music files
- Electronic Braille files
- Large Print
- Audio-described videos. (RevealWeb, 2006)
Given the size of this collection of well-described, discoverable materials, it is important that any new metadata descriptions are interoperable with this list. There is every indication that these resources are described with standard metadata and therefore could be used by an AccessForAll service.
The USA also has a union catalogue maintained by the Library of Congress National Library Service for the Blind and Physically Handicapped (NLS). The Union Catalogue (BPHP) and the file of In-Process Publications (BPHI) can both be searched via the NLS Web site [NLS].
Indicative statistics for the NLS (according to those posted on 2005-01-11) are:
Each year it distributes 23 million books and magazines to a readership of more than 759,000 individuals who cannot read regular print for visual or physical reasons. NLS functions as the largest and frequently only source of recreational and information reading materials and services for a segment of the population who cannot readily use the print materials of public libraries. The NLS International Union Catalog contains 382,000 titles in 22 million copies. (NLS, 2002)
The formats available appear to be press Braille, digital Braille (Web-Braille), audio cassettes, large print text, digital text, maps (tactile), electronic resource, music (Braille), music (large print), and sound recordings (NLS, 2006).
In a fact sheet, NLS explains: "Currently, this service includes the acquisition, production, and distribution of Braille and recorded books and magazines, necessary playback equipment, catalogs and other publications, and publicity and marketing materials" and that, "One of the primary reasons for instituting a national program was to obviate the inevitable difficulty and high cost for individual libraries to acquire books in special formats" (NLS About, 2006). In a sense, this is the same motivation as is being suggested in this thesis for the development of a metadata standard for AccessForAll materials.
The Library of Congress uses standard metadata, and this collection of resources is therefore evidence that there are alternatives available for immediate use by people with disabilities and that they are already described by suitable metadata. They could therefore be used by an AccessForAll service.
NCAM, the National Center for Accessible Media at WGBH in Boston, has developed software and techniques for making media of all sorts available to all people. As part of this process, it has developed a clever way of distributing captions and descriptions (known as MOPIX) to theatre and cinema goers. Currently there are more than 300 films available with captions and descriptions (MOPIX, 2006).
The American Printing House for the Blind [APH] currently hosts the Louis Database of Accessible Materials for People who are Blind or Visually Impaired. The Louis Database contains over 145,000 titles of accessible materials, in braille, large print, sound recordings and computer files, from over 200 agencies throughout the United States. The database can be searched via the database Web site and there is a link to the NLS Web site and union catalogue database.
The Canadian National Institute for the Blind operates a number of services, including online access to its library collection via VISUCAT. The library collection contains over 45,000 titles, with materials in braille, print braille, audio, electronic text and descriptive video. Access to the catalogue is via a telnet connection. Library clients can search VISUCAT, check on titles currently on loan to them, and reserve titles.
Vision Australia has a new major project that will augment the work already done by a number of organisations to provide people with vision disabilities with services for better accessibility.
The relevant organisations clearly have a lot of resources to offer and many of these already have standard metadata describing them. It can be assumed that if such resources can be used more frequently and discovered more generally, it is likely that their value will increase and more of them will be made available.
There are two kinds of content adaptation service: those that select and assemble the components of a resource to fit a given specification, and those that transform the components in some way, such as converting text into Braille. As well as static, or held, content, there are services for creating accessible content, some of which work on-the-fly and others which can be used asynchronously.
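The distinction between the two kinds of service can be sketched as follows; the function names, formats and fallback logic are invented for illustration, and a real service would call an actual converter rather than relabel the content:

```python
def select_component(components, required_format):
    """First kind of service: pick an existing held component
    that already fits the given specification."""
    for c in components:
        if c["format"] == required_format:
            return c
    return None

def transform_component(component, target_format):
    """Second kind of service: transform a component, e.g. text into
    Braille. A real service would invoke a converter; this placeholder
    only relabels the content."""
    return {"format": target_format, "content": component["content"]}

# A resource held as a text component and an audio component:
resource = [
    {"format": "text", "content": "Chapter 1 ..."},
    {"format": "audio", "content": "chapter1.mp3"},
]

chosen = select_component(resource, "braille")
if chosen is None:  # nothing held in Braille: fall back to transformation
    chosen = transform_component(select_component(resource, "text"), "braille")
print(chosen["format"])  # -> braille
```

The fallback shows how the two kinds combine in practice: selection from held content where possible, transformation (on-the-fly or asynchronous) otherwise.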
For some time the Speech-to-Text Services Network [STSN] has been making accessible content alternatives for content that cannot be used by people with hearing disabilities. They describe their three real-time speech-to-text services according to the technology used to process incoming speech:
The STSN has a table that shows differences and similarities among their services. This table also makes clear the sort of services that are valued by people with hearing disabilities. Some of these are relevant in the current context because they represent services that some people will use when they cannot access auditory information.
- Steno machine-based Stenography Systems (CART), e.g. Info Link CART: verbatim, or near-verbatim, translation, i.e., word-for-word, by a typist who is a trained court reporter.
- Laptop-based Speed Typing Systems: meaning-for-meaning translation, i.e., "all the meaning in fewer words", by a typist who is trained in the specific system.
- Automatic Speech Recognition Systems (ASR): communication access usefulness is determined by the ASR software error rate, the reader's error tolerance, the skill of the speaker, etc.; the speech is re-voiced by a trained "shadow" speaker.
Table ???: Services offered by the Speech-to-Text Services Network (STSN 2006)
As is apparent from the table, human services are provided to render the content accessible to those who are not able to hear it in its original form. Such services exist alongside new ones being developed, like those offered and proposed by ubAccess, particularly SWAP, which will use computers to perform 'intelligent actions' on inaccessible content.
ubAccess has developed a wizard, the Semantic Web Accessibility Platform (SWAP), that can transform a given Web page to have characteristics that will suit users with special needs. As this service depends upon knowing the users' needs, it is appropriate to consider it as an example of the type of service that will be enabled by the AccessForAll approach to accessibility.
There are many services that are built into content servers that could be described as adapting content, or components of aggregate content, into suitable composites for users. In general, these are driven by the device and software requirements. The materials delivered to a telephone by a standards compliant browser will at least attempt to adapt the resource for that device. For example, the Opera browser can present the user with a newspaper page in a way that makes sense to someone with a very small screen, as shown in Figure ???. Opera has recently released a browser for general use that contains a screen reader.
The early DC accessibility work is relevant because it cleared the way for the AccessForAll approach that has become the main work of that group.
The Dublin Core Accessibility Working Group was founded in 2001 to investigate the use of metadata in accessibility work (DC Accessibility Working Group, 2001). There was a follow-up joint Meeting of W3C WAI Interest Group and IMS Accessibility Working Group, Melbourne, November 2001 (WAI-IG, 2001). The aim, at the time, was to be proactive in setting an accessibility agenda for content developers by bringing their attention to the need for accessibility, as much as to provide functional metadata. Some time later, a Director of DCMI, Eric Miller, strongly defended this position at a DCMI Advisory Committee (as it was then) meeting and there was general support within that Committee for the work.
Over a number of years the following efforts to find a way to define accessibility metadata were promulgated.
The early work on the AccessForAll approach has been described. Now the special requirements for Dublin Core metadata are considered.
The 'rules' for DC metadata have always been that metadata terms must comply with the Dublin Core model. The fact that the model was not expressed in an unambiguous way until late in 2007 made this process very difficult. Once the accessibility work left the fold of DC and was led elsewhere, based on another type of metadata, the best that could be done was to ensure that the new metadata matched the DC model as closely as possible, and that it was at least possible to cross-walk without loss from one system to the other.
Given the changing nature of the DC model, there were many iterations of the AfA metadata in an attempt to match the model but they always seemed to fail to do this. Once the model became stable, it was possible to determine the requirements once and for all and the most recent version of the abstract model of the DC AfA metadata appears to do this.
This model and the associated vocabularies have not been formally adopted by DC, which requires the approval of the DC Usage Board, but it has been informally accepted as now matching the rules. Achieving this status required input to the DC process of definition of that abstract model, as well as the development of this one to match it (Chapter ???; Pulis & Nevile, 2006).
In 2007, Andy Powell had the following to say in the context of educational metadata:
so what does history teach us? Why are we where we are now? I would argue that the "effort aimed at distilling semantics & simplifying them through delivering sufficient consensus across a significant community of practice" essentially failed. It failed because the approaches reached thru that consensus cost more to implement than the benefits they realise in the context of the original use-case (resource discovery on the Web).
When was the last time you found something because it had been described using DC?
What history tells us is that DC is too complex for the 'simple' resource discovery scenarios envisaged when the initiative started. Those scenarios now tend to be catered for by full-text indexing and social tagging of one form or another. At the same time DC is not complex enough for the scenarios typically found in digital libraries, scholarly communication, elearning, commerce and the like.
Yes, the DCMI Abstract Model tends to move us more towards the latter. Yes, explicitly modelling the entities in the world that we want to describe is more complex than not doing so.
Complex but necessary. All IMHO of course.(Powell, 2007)
In a sense, the metadata being proposed for accessibility is very complex but it is meant to be used differently in different circumstances. The typical use of it is with a single term (Dublin Core or other) where the values identify limitations to the perception mode for the content. The research shows that this information alone will make a huge difference to discoverability for a user. Then, when a resource is made or catalogued by experts and designed to satisfy an accessibility problem, those who have developed it can use their expertise to give maximum value, and exposure, to the resource.
The final stages of development of the WCAG 2.0 specifications were under way as the AfA metadata was being finalised as an ISO standard. Convincing the W3C Working Group responsible for WCAG 2.0 to include a requirement for AfA metadata would have made all WCAG 2.0 conformant resources suitable for adaptation according to AfA principles. For a number of reasons this was not possible, not the least being that the WCAG authors were not prepared to simultaneously allow that a resource might be less than conformant to the rest of WCAG and yet 'legitimately' be described by metadata as specified by WCAG. They did consider it important to allow for the use of metadata, however, especially to identify an alternative resource that could be used when that alternative had special features to make it more useful than a standard, conformant resource, and the original was already WCAG conformant. Given the inclusion of this as a technique, there is, of course, no reason why a developer should not provide full AfA metadata and, if there are tools that make this easy, it might happen. Such tools are promised for demonstration at the September 2008 Dublin Core conference in Germany.
In this chapter, the availability of resources that will already have metadata is investigated for two reasons: first, if there are in fact no alternative components that are accessible to people with disabilities, there will be nothing to find; and secondly, if such accessible components do exist, it is important that they are organised and described in electronic catalogues capable of providing metadata, even if that metadata needs to be transformed to comply with interoperable standards.
Given that universal design alone is not able to cater for the needs and preferences of all users, even when the principles embodied in the WCAG specifications are complied with (and, as has been shown, they often are not), it was timely when a complementary approach was suggested by the ATRC at the University of Toronto. A prototype system (TILE) that could match resource components to user-determined requirements was already operating when the work was shared with the IMS Global Learning Consortium (IMS GLC). At this time, the author was working with the IMS Global Learning Consortium for IMS Australia (Australian Department of Education, Science and Technology). The first task had been a set of guidelines for educators about accessibility (Barstow and Rothberg, 2002). These guidelines were developed at about the same time as the Accessible Content Development section (Appendix 8), and both showed the inadequacy of the then current work. The adoption of a complementary approach that would take into account the needs and preferences of individual users might make a difference.
In fact, the AccessForAll approach was a significant development and only the beginning of a chain of developments that has most recently led to the FLUID Project, a major user-interface architecture re-design project directed from the ATRC.
The first part of this chapter is a modified version of a paper for the 2005 Dublin Core Conference (Nevile, 2005b). It presents the case for a private (anonymous) personal profile of accessibility needs and preferences expressed in a Dublin Core format. It introduces the idea that this profile, identified only by a URI, is motivated by a desired relationship between a user and a resource or service. It assumes a new Dublin Core term DC:Adaptability (since renamed back to DC:Accessibility) and argues that, without any reference to disabilities, personal needs and preferences, including those symptomatic of common physical and cognitive disabilities, context or location, can be described in a common vocabulary to be matched by resource and service capabilities.
As explained above, everyone, at some time or another, is disabled by the circumstances in which they find themselves and most people, as they age, will experience disabilities more often. Most people will find their disabilities vary according to the circumstances in which they are operating. Disability, in this sense, is a description of a poor relationship between a person and their immediate operational requirements.
Similarly, it is inappropriate and inaccurate to attribute descriptions of disabilities, which are descriptions of relationships, to named people. At the same time, it is efficient to recognize that many relationships are similar and that when involved in a user-resource relationship, many people will want to use the same description of that relationship. For instance, many blind people trying to access a Web page with images will want to use similar profiles of non-visual relationships between a user and a resource.
The existence of a machine-readable profile of a disability relationship can be used, by suitable applications, to match users with resources and services they can use. This process involves a description of a user's immediate needs and preferences being matched with a description of the components of a resource or service until there is no disability. This may involve the replacement, augmentation or transformation of components of the resource or service, such as changes of sensory modality. The user's descriptions of their needs and preferences, often called their profiles, will be used according to the context or circumstances and may differ according to the occasion. For convenience, a user will want to store and refer to such profiles rather than to create them afresh every time one is required. In some cases, they will depend upon profiles created for them by others and, in such cases, may be especially dependent on their being stored and available at all times.
An accessibility profile for use by a blind person attempting to read a newspaper online will be very similar to that for a person driving a car wanting to access Google News: both users will want vision-free access to the resource. Both users will need alternatives to visual content contained in the primary resource they seek and both will want to control their access to that resource using non-visual techniques. It is unlikely that either of them will want to see the 'Google ads' that would normally accompany the content on a screen presentation. A simple description of the relationship with the resource they seek will be non-visual. The description of the characteristics of this relationship, the user's needs and preferences profile, should be simply expressed in machine-readable form and available to any resource publisher. It can be identified by its URI and does not need to contain any information about any individual or community of people. It is, in fact, a description of functional requirements and could be known simply as non-visual functional profile "x".
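How such a non-visual functional profile might drive the matching process can be sketched as follows; the property names and components are invented for illustration and are not the actual AccessForAll vocabulary:

```python
# The anonymous "non-visual functional profile x": it describes a
# required relationship, not a person or a disability.
profile_x = {"avoid-modality": "visual"}

# Component metadata for a newspaper resource (invented example).
components = [
    {"id": "news-page",  "modality": "visual"},
    {"id": "news-audio", "modality": "auditory", "adaptation-of": "news-page"},
    {"id": "google-ads", "modality": "visual"},
]

def deliver(profile, components):
    """Replace components in the avoided modality with adaptations of
    them, and drop those for which no adaptation exists."""
    avoided = profile["avoid-modality"]
    adaptations = {c["adaptation-of"]: c
                   for c in components if "adaptation-of" in c}
    delivered = []
    for c in components:
        if "adaptation-of" in c:
            continue  # delivered only as a replacement, never directly
        if c["modality"] == avoided:
            replacement = adaptations.get(c["id"])
            if replacement is not None:
                delivered.append(replacement)  # e.g. audio for the page
            # no replacement: the component is simply not delivered,
            # like the ads neither user wants
        else:
            delivered.append(c)
    return delivered

print([c["id"] for c in deliver(profile_x, components)])  # -> ['news-audio']
```

The same profile serves the blind reader and the driver alike, which is the point: the description is of the required relationship, identified by its URI, not of any individual.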
A more complicated example occurs where, for whatever reason, there is a need for a visual relationship but the objects being viewed need to be larger than they might be when used on a stand-alone desktop computer. Such a case occurs frequently when resources are displayed on a large screen before a large audience. For this to be an accessible relationship, it does not need to be non-visual but there are some qualifications to be made to the visual qualities: the text and images need to be enlarged. Exactly how large the text should be will usually be decided by the author in a situation where the details of the relationship are well-known, as for the large audience, but should always be available for customisation where individuals may have special needs.
Flexibility of the kind required in this case means there needs to be a common way of describing the range of sizes of text and images so that the correct accessible relationship can be indicated by the user. Responses to the description of the relationship in such a case may depend upon the transformability of the resource components: scalable vector graphics will be easily transformed to suit such requirements, and text that is to be presented according to cascading styles should be suitably transformable but, if it contains tables, there will be more complicated considerations.
In some cases, it is not a transformation of available components that is required so much as their replacement or augmentation. Such a case exists where a non-auditory relationship is required with, for example, a movie. Then, a text transcription of the background sounds might need to be supplied with captions for all speech. These may all need to be synchronized with the visual content. Where the only problem with the aural content is likely to be the choice of language, captions might be required but the background sounds will not be a problem.
The provision of resources and services that ensure the correct accessible relationship for a user depends upon the existence of many components all with special accessibility characteristics. Captions for films are usually made by organizations known as caption houses: caption houses specialize in making captions but not films. Signing for people who use sign languages is usually done by specialists in that field; videos of signing that might be needed to complete an accessible relationship are likely to come from a source other than the original publisher of the resource.
In other words, the components that may be required to complete an accessible relationship with a resource or service are often distributed and may be the result of cumulative authoring. All that is necessary is that the components are available just-in-time for delivery to the user. Very often, as is obvious from the examples already given, they may be combined in different ways for different user/resource relationships. This means it is most convenient to not fix them to a particular relationship with any one resource, but to maintain them separately and make available the necessary metadata for them to be discovered and fetched when needed. The same metadata can be used to identify a need for more components in anticipation of a demand for them.
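The just-in-time discovery of distributed components can be illustrated with a small sketch. The catalogue, its field names and the URIs below are all invented for the example; AccessForAll itself specifies only the metadata vocabulary, not a discovery service.

```python
# Illustrative sketch of discovering distributed alternative components
# (e.g. captions made by a caption house) via their metadata, just-in-time.

CATALOGUE = [
    {"uri": "http://example.org/film-42/captions-en",
     "isAlternativeTo": "http://example.org/film-42",
     "modality": "visual-text", "language": "en"},
    {"uri": "http://example.org/film-42/audio-description",
     "isAlternativeTo": "http://example.org/film-42",
     "modality": "auditory", "language": "en"},
]

def discover_alternatives(primary_uri, required_modality, catalogue=CATALOGUE):
    """Return URIs of alternatives to a primary resource that are in the
    modality the user requires."""
    return [rec["uri"] for rec in catalogue
            if rec["isAlternativeTo"] == primary_uri
            and rec["modality"] == required_modality]

print(discover_alternatives("http://example.org/film-42", "visual-text"))
```

The same catalogue queries could be used to identify gaps, that is, to anticipate demand for components that do not yet exist.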
The definition of accessibility implied here is that the relationship between the user and the resource is one that enables the user to make sensory and cognitive contact with the content of the resource. This is expected to occur at the time of accessing the resource or, in other words, to be achieved just-in-time. This is the definition being advocated as the AccessForAll definition of accessibility.
In addition to the availability of the necessary components to satisfy the relationship required by the user, there is a requirement for the metadata that will be used to arrange the final composition of the resource. There is also, of course, a need for a way of communicating the requirements, or the metadata. The vocabularies and common specifications for their description are the topic of this chapter. W3C has been working on similar issues in its Device Independence Working Group, whose focus is on what it calls the Composite Capability/Preference Profiles specifications [CCPP].
Organising the possibilities for resource relationship descriptions means ensuring that the characteristics are uniquely described. Such organisation is common but can take some time to determine. Fortunately, the ATRC has been eliciting needs and preferences from people with disabilities for some time, and has reliably determined the best way to 'divide up' the characteristics for user needs and preferences profiles.
There are three sensory modalities universally recognized as relevant to the current human-computer relationship: visual, auditory and tactile. Smell and haptic modalities are not yet often included. There are many possible variations of the modalities and their roles can be important: requirements for auditory input and output may relate to a user's context rather than to the user. In a library, one may be able to listen with headphones but be asked not to use voice input; in a car, general auditory output may be acceptable and voice input may be essential. While input and output are useful distinctions to make in some cases, in the case of accessibility the ATRC uses three classes: display, control and content characteristics (TILE). As this classification is not itself a subject of this research, the practice of the ATRC was simply adopted.
For people who use adaptive technologies with special settings, describing their control needs and preferences may mean providing information about the settings for their personal adaptive technology, especially when that requires something like an on-screen keyboard to be activated by a head-pointer. In the case of an on-screen keyboard, the display characteristics of the resource also need to be adapted to allow for the loss of screen space for display purposes. In addition, there may be requirements for other display characteristics, and there may be separate needs for content adjustment. Particularly for users for whom settings are crucial to their engagement with resources, needs and preferences need to be distinguished. If a need cannot be fulfilled, their preference for what to compromise can make all the difference. For others, if flexibility is possible, it can mean greater satisfaction. For accessibility reasons, it is essential that the user's profile always overrides all other profiles, as is the case with cascading style sheets (W3C, 1999).
As the requirements can conflict in combination, determining a structure for their representation that allows for them to be described fully and unambiguously is essential. For this reason, descriptions of needs and preferences for display, control and content characteristics need to be separated. The needs and preferences need to be easily describable, so it is essential that if there are no special needs, nothing needs to be described, but that when there is a need, there is a hierarchy of details that are easily understood and registered.
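One way the override rule could operate is sketched below: settings cascade from a context or corporate profile to the individual's profile, with the user's own values always winning, as with cascading style sheets. The profile layout follows the display/control/content separation described above, with a 'usage' value marking a need versus a preference; the particular keys are illustrative only.

```python
# Sketch of profile resolution: the user's entries always override any
# context-supplied entries, category by category.

def resolve(context_profile, user_profile):
    """Merge two profiles; the user's entries override the context's."""
    merged = {}
    for category in ("display", "control", "content"):
        merged[category] = dict(context_profile.get(category, {}))
        merged[category].update(user_profile.get(category, {}))  # user wins
    return merged

context = {"display": {"fontSize": {"value": "14pt", "usage": "preferred"}}}
user = {"display": {"fontSize": {"value": "24pt", "usage": "required"}},
        "control": {"onscreenKeyboard": {"value": "true", "usage": "required"}}}
merged = resolve(context, user)
print(merged["display"]["fontSize"])   # the user's required value survives
```

A user with no special needs simply supplies empty categories, so nothing needs to be described.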
In addition to the three categories described and their details, there is an over-riding quality that is essential in the human-computer context. Usability is not a technical quality but it can be the most significant quality when user resource interactions are required. It is not included as a technical characteristic of AccessForAll but it must be considered. Figure ??? shows the classes of characteristics proposed by the ATRC for AccessForAll for digital resources.
Figure ???: AccessForAll structure and vocabulary (image from AccessForAll Specifications, [IMS Accessibility]).
Where there can be no effective visual relationship with resources and services, all visual displays need to be presented in some other modality. Often the choice is for auditory presentation of the visual content but it may be for tactile displays such as Braille or other tactile forms. Where the adaptive technology does not change the modality but changes the characteristics of the display, as in the case where screen-enhancing software is being used, the requirements for the desired display may involve object sizes, colour, or placement on the screen. The requirements can be very detailed and vary depending on the circumstances. Changes in the modality of content, as occur when a screen reader renders visual content (text) as auditory content, may depend upon it being possible to transform the content in this way. This in turn will depend upon the form of the original content: it can be transformed easily unless there is formatting, for example, that interferes with the process. The ‘transformability' of the text will need to be described if it is relevant to the user's relationship with the text.
Tables ???, ??? and ??? show the potential characteristics (attributes), how many there may be and what kind of values are expected.
Attribute | Occurrences
screen reader preference set | Zero or one per Display Preference Set
screen enhancement preference set | Zero or one per Display Preference Set
Table ???: 6.2.1 Display Preference Set (Treviranus et al, 2005)
Attribute | Occurrences
screen reader generic preference set | Zero or one per Screen Reader Preference Set
application preference set | Zero or one per Screen Reader Preference Set
Table ???: 6.2.2 Screen Reader Preference Set (Treviranus et al, 2005)
Attribute | Occurrences
font face preference set | Zero or one per Screen Enhancement Generic Preference Set
font size preference | Zero or one per Screen Enhancement Generic Preference Set
foreground color preference | Zero or one per Screen Enhancement Generic Preference Set
background color preference | Zero or one per Screen Enhancement Generic Preference Set
Table ???: 6.2.9 Screen Enhancement Generic Preference Set (Treviranus et al, 2005)
Not all users control their systems using the typical mouse and keyboard combination. In some cases, they use assistive technologies that effectively replace these devices without any adjustment but in others they use technologies that require special configuration. An on-screen keyboard will use screen space that will have to be denied to the resource or service. Any resource or service that cannot accommodate this loss of screen space, for example because it demands a full-screen display for all controls to be available, will not be suitable for use in some circumstances.
It is necessary to be able to capture what is specific to proprietary devices and systems as well as what is generic to types of systems and devices. It is also necessary to be aware of possible developments so there is room for extensions. A typical example of the definition of these needs is as shown:
text reading highlight generic preference set: a collection of data elements that states a user's preferences regarding how to configure a text reading and highlighting system that are common to all text readers/highlighters, regardless of vendor
text reading highlight preference set: a collection of data elements that states a user's preferences regarding how to configure a text reading and highlighting system (Treviranus et al, 2005)
These definitions have been represented in a structured hierarchy so that it is easy for users or their assistants to provide only as much detail as is necessary. Nevertheless, due to the complexity of dealing with the multitude of possible needs, the vocabulary is very large.
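The hierarchy's "only as much detail as necessary" property can be shown with a small sketch. The element names below paraphrase the text-reading/highlighting definitions given earlier; the exact binding names in the IMS and ISO specifications differ, so this is illustrative only.

```python
# Sketch of the hierarchical vocabulary: an absent or empty set simply means
# "no special requirements", so a user records only what they need.

profile = {
    "control": {
        "textReadingHighlight": {
            # generic settings, common to all text readers/highlighters
            "generic": {"speechRate": 180, "highlightColor": "yellow"},
            # vendor-specific settings would go in an application set
            "applications": {},
        }
    }
}

def has_requirement(profile, *path):
    """True if the profile says anything at this point in the hierarchy."""
    node = profile
    for key in path:
        if key not in node:
            return False
        node = node[key]
    return bool(node)

print(has_requirement(profile, "control", "textReadingHighlight", "generic"))
print(has_requirement(profile, "display"))
```

A matching system can thus walk the hierarchy top-down and stop as soon as a branch is empty.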
The relationship between a user and a resource or service will also be accessible only if the content is perceptible by the user. Perception in this sense includes the case where a dyslexic person needs more than the usual image-based content because they cannot process a text-heavy resource; or where a person with neurological damage, such as a stroke victim, cannot manage a screen that is too 'busy'; or where a blind person is working with an explanation that is based on an example that is useful only to people with vision. It is often the case that the original content has to be supplemented, perhaps with the availability of a dictionary or captions, or replaced by different content that achieves the same outcome but in a different way. Information about the resource that indicates that it contains such alternative content, or the location of such content that is available externally, is needed to determine if the user will be able to form an accessible relationship with it in terms of perception.
The original IMS GLC approach was to add the AccessForAll element into the established hierarchy of the IMS Learning Resource Meta-Data Information Model Version 1.2.1 Final Specification (2001).
Whereas the DCMI metadata model provides several ways in which a DC metadata set can be extended whenever necessary, the LOM requires the extensions to be determined in advance:
In particular, most elements have <application> and <param> elements that allow additional parameters to be defined for a particular accessibility application. In addition, the binding provides for arbitrary extensions. See the Binding Guide document for more details. In general, these extension methods are provided as placeholders for future revisions of this specification. Both the <display> and <control> elements provide for sub-elements named <futureTechnology> which are intended to allow new technology approaches to be included (Jackl, 2003, Sec.4.1 Extensibility Statement).
Figure ??? shows the structure of the extension mechanisms in LOM.
Not only is the model hierarchical (see Figure ???) but so was the thinking. If one has thought for a long time with a particular model, and is obliged to implement systems in a hierarchical environment, it is very difficult to think otherwise. This problem was acute for some time for the group working on AfA metadata, but it led to very lively activity as the participants struggled to make the AfA work as interoperable as they could, trying to accommodate both the hierarchical LOM model and the 'flat' DC model. The challenge was formidable but it led to very thorough efforts and what all parties in the end agreed was at least an elegant solution. There was considerable confidence that in the end it could, indeed, be implemented in both LOM and DC without loss, even if this did depend on a cross-walk from one to the other.
User needs and preference profiles are of no use if they are not available when they are needed.
Web-4-All uses a smart card to provide a portable set of user needs and preferences for adaptive devices and software available within a device. These cards were designed to make it easy for users of computers distributed throughout Canada and for those managing the computers. The computers are fitted with suitable adaptive technology and a card reader. By inserting or extracting the cards, users can set up the computers, use them, and then leave them in a basic state for other users, without the need for a technician.
The paper presented at the 2005 Dublin Core conference argued that if the resource or service's capacity to adapt to different user needs and preferences is described in a Dublin Core element, the individual user's needs and preferences also should be described in Dublin Core format (Nevile, 2005b). It proposed a resource that contains information about a user's needs and preferences; what in some contexts is being called the user's Personal Needs and Preferences (PNP) and a metadata record of that resource.
It reiterated the argument that in order to match a resource or service to a user to achieve accessibility, there is no need to identify the user. All that is required is machine-readable information about their needs and preferences.
The need for this paper lay in the fact that the more common use of DC metadata was to describe an object. Previous attempts to encourage the DCMI to extend its way of working to include descriptions of people (Nevile & Lissonnet, 2004), even though such descriptions were often made in practice, were still being resisted. It was later discovered that this was because the person is usually not the resource being described but the author of it, and so descriptions of the person are not really properties of the resource. In some cases, however, it has been of interest to describe people using DC-style metadata, for example where an organisation uses software that manages DC metadata and so could use it to manage metadata about the people in the organisation as well.
In the case of the AccessForAll situation, the person is deliberately not being described. In fact, the description is a profile of their functional needs and preferences relative to a context. This was, at the time, very contentious, particularly as the author did not at first well understand the historic problem that was worrying the experts. It was also difficult because, as explained earlier, some of the decisions made in the formation of the initial set of DC terms were made in the knowledge that they could lead to difficulties later on, and this was a typical case of what could highlight the problems with the early DC models. In addition, of course, there were potential problems with the model being used at the time for the semantics of the user needs and preferences profiles that would raise the hierarchical versus flat metadata issues. (In the end, the DC model has moved closer to that of the Semantic Web and there is less emphasis on this issue because it is no longer relevant in the way that it was.)
The paper presented a way of thinking about some of the problems.
An application profile for user accessibility needs and preferences that satisfies the requirements needs to contain one vital element; the DC:identity of the information (resource) expressed as a URI. This URI must, therefore, point to the user's accessibility needs and preferences information which should be in a machine-readable form. Users may like to think of profiles as being associated with certain contexts, for instance the lecture theatre version, or the JAWS lap-top version, and in such a case the profile could be named. So we could find DC:title being used for this. The application profile may contain more DC elements, such as DC:subject, DC:description, DC:creator, etc. None of these need identify the user for or by whom the AccessForAll information will be used. On the other hand, they may clarify who could take advantage of the profile: for instance, all students in a lecture theatre will probably share the need for large print on the overhead screen. This could be explained in a DC:description element. It may be of interest to know who developed the user needs and preferences profile, so DC:creator could be used to indicate this. The date of a profile might be significant when new versions of adaptive software are released so DC:date may be useful. (Nevile, 2005b)
In general, the paper argued, a profile will be for a single person, sometimes from within a class of people, such as someone using JAWS with the default settings, for instance. The profile could cater for a combination of users, however, with a combination of needs and preferences, even asking for redundant components so that everyone in the group has what they need. It is very common for a person with a disability to be working with someone who has different needs. In fact, some users' needs include a person who can assist them. This may or may not mean they have special functional requirements for the resources they want to access.
When a system is to be used simultaneously by two users who point to different profiles, it may depend on the circumstances how this is to be handled. If they are to share a screen, their needs will have to be harmonized. If they are working on the same application but separately, as when two remote users share a chat session, their individual needs should be accommodated. When the two users are, for example, a corporate group for whom there is a corporate set of ‘needs and preferences' that conflict with the individual's essential needs and preferences, the latter should be matched in preference to the former.
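One plausible reading of "harmonised" needs for two users sharing a screen is sketched below: required settings from both users are combined, and where the same setting is required with different values the pair cannot be harmonised and the content must be provided redundantly. The specifications do not prescribe an algorithm, so this is an illustration of the idea only.

```python
# Sketch of harmonising two flat {setting: value} requirement maps for a
# shared display, reporting conflicts that would force redundant delivery.

def harmonise(profile_a, profile_b):
    combined, conflicts = dict(profile_a), []
    for setting, value in profile_b.items():
        if setting in combined and combined[setting] != value:
            conflicts.append(setting)      # cannot satisfy both on one screen
        else:
            combined[setting] = value
    return combined, conflicts

a = {"captions": "on", "fontSize": "24pt"}
b = {"captions": "on", "fontSize": "12pt"}
combined, conflicts = harmonise(a, b)
print(conflicts)
```

For two users working separately (for example, in a shared chat session), no harmonisation is needed and each profile is applied to that user's own rendering.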
Table ??? shows a typical set of user needs and preferences that might be used as a default set for some users with some specific values indicated.
By rendering the user's needs and preferences profile as a resource, problems associated with the politically unpopular activity of labeling people by disabilities can be avoided. The technical problem that a single person will be associated with a number of AccessForAll profiles is also avoided, as they can point at different times to any of a range of profiles. In addition, where there is a need for many users to share a profile, as with students in a lecture theatre, this is easily achieved. This approach was difficult to reconcile with the DC rules for profiles but, in 2007, just when it became very important to solve the problem if metadata was to be included in the forthcoming WCAG Version 2.0 (W3C WCAG 2.0, 2008a), a W3C Working Group considering a similar problem released the first version of the POWDER protocol. The POWDER protocol provides a way of exchanging metadata about a resource but it also defines a collection of metadata as a resource, in that case establishing the useful term 'description resource' (W3C POWDER, 2008). This seems very appropriate.
A system working on the match, to ensure accessibility, will read the AccessForAll profile selected by the user (or user group) and use that information to test the metadata of potential components for the resource or service to be delivered. In the absence of an AccessForAll profile, systems will have to assume that a user has no special needs to constrain their relationship with resources and services at that time.
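The delivery-time check just described can be sketched as follows. The field names are illustrative; the point is the default behaviour when no profile is supplied.

```python
# Sketch of the delivery-time test: no AccessForAll profile means no
# constraints; otherwise every requirement in the profile must be satisfied
# by the resource's metadata.

def accessible_for(profile, resource_metadata):
    """Return True if the resource can satisfy every requirement in the
    profile, or if no profile was supplied."""
    if profile is None:
        return True          # no special needs declared for this session
    return all(resource_metadata.get(key) == value
               for key, value in profile.items())

resource = {"hasCaptions": True, "hasAudioDescription": False}
print(accessible_for(None, resource))
print(accessible_for({"hasCaptions": True}, resource))
print(accessible_for({"hasAudioDescription": True}, resource))
```

A failing test would trigger the search for equivalent alternative components rather than outright rejection of the resource.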
At this point it should be noted that while the user's PNP is described by a metadata record, it is itself metadata in another sense. The value of this is that it can be used in conjunction with resource metadata in the matching process for accessibility.
The vocabularies for the metadata to be associated with the resource or service and with the user's needs and preferences for accessibility have been carefully matched in the AccessForAll profiles. Other technical device information might also need to be conveyed to the resource server but it is expected to be covered by the work of the W3C Device Independence Working Group or others using CC/PP.
For every preference, a usage value is required to indicate whether the user must have the setting, must not have it, or merely prefers it. Flashing content, for example, can be dangerous for some users, and content consisting of nothing but graphics will be useless to a blind person unless they have a friend available to describe it to them.
As the values of the descriptive elements are what is matched once the elements have been matched, it is important that there is a standard vocabulary available to be used for those values. This can occur in several ways: a recommended form such as yyyy-mm-dd or mm-dd-yyyy, an encoding conformant to some set standard, such as Getty colour schemes, or what is called a controlled vocabulary - a set of words with definitions. All these rules need to be available to any matching software. It is very often possible to adopt existing standard vocabularies, as has been done throughout the AccessForAll profiles. There is, for example, an existing vocabulary for settings for dynamic Braille displays; AccessForAll has no reason to redefine it.
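The three kinds of value rule named above can be sketched as validation checks. The particular vocabularies and patterns below are invented for illustration; real profiles would point to the agreed standard for each element.

```python
# Minimal sketch of validating element values against three kinds of rule:
# a recommended form (date), an encoding scheme (colour), and a controlled
# vocabulary (an invented Braille-grade vocabulary).

import re

RULES = {
    "date": lambda v: bool(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v)),
    "colour": lambda v: bool(re.fullmatch(r"#[0-9a-fA-F]{6}", v)),   # encoding
    "brailleGrade": lambda v: v in {"grade1", "grade2"},  # controlled vocabulary
}

def valid(element, value):
    rule = RULES.get(element)
    return rule(value) if rule else True   # unconstrained elements pass

print(valid("date", "2005-09-12"), valid("date", "12-09-2005"))
print(valid("brailleGrade", "grade2"), valid("brailleGrade", "grade9"))
```

Matching software on both sides of the exchange must apply the same rules, which is why the rules themselves must be published and stable.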
In this chapter, the redefinition of accessibility that assumes all people have accessibility needs, or alternatively that these are just part of the environment, suggests a way in which users of digital resources might record their needs and preferences in the three areas of concern: display, control and content. In the next chapter, the matching characteristics of resources that users might access are examined.
The AccessForAll specifications are intended to address mismatches between resources and user needs caused by any number of circumstances including requirements related to client devices, environments, language proficiency or abilities. They support the matching of users and resources despite [some universal accessibility] short-comings in resources. These profiles allow for finer than usual detail with respect to embedded objects and for the replacement of objects where the originals are not suitable on a case-by-case basis. The AccessForAll specifications are not judgmental but informative; their purpose is not to point out flaws in content objects but to facilitate the discovery and use of the most appropriate content for each user (Jackl, 2004).
The AccessForAll specifications are part of the AccessForAll Framework. They do not specify what does or does not qualify as an accessible Web page but are designed to enable a matching process that, at best, can get functional specifications from an individual user and compose and deliver a version of a requested resource that meets those specifications. It depends upon other specifications (such as WCAG) for the accessible design of the components and services it uses.
Having a common language to describe the user's needs and preferences and a resource's accessibility characteristics is essential to this process. That is why the resource descriptions proposed below so closely match the descriptions of the needs and preferences of individual users. It is not essential, however, for there to be a matching process for there to be value in having a good description of the accessibility characteristics of a resource. In the discovery and selection processes, a user can take advantage of such a description and at least be forewarned about the resource.
It should be clear that, as AccessForAll does not specify the functional characteristics of Web content, but rather the specifications for the description of those characteristics, it is not intended to support any claims of conformance of resources to other standards and specifically, not conformance to the WCAG specifications. On the other hand, the WCAG specifications might well be used to determine the characteristics of the resource, such as if the text is well-constructed, or if images have correct alternatives. AccessForAll specifications are only concerned with metadata.
The AccessForAll (AfA) way of organising metadata has to take into account that most resources are thought of as having a set form with modifications for accessibility purposes. This is not an inclusive way of thinking of resources, and it is not what is emerging as the model on the Web. Given the technology, resources are being formed at the time of delivery, according to the delivery mechanisms available and the point of delivery, often resulting in many very different manifestations. AfA is designed to contribute to, in fact take advantage of, that process.
Matching users and resources involves not only the user's needs and preferences from a personal perspective, but also accommodations for their access devices. Figure ??? shows a single Web page rendered by 10 different access devices, not including any that don't produce visual displays of any kind:
AfA metadata is also designed to facilitate the just-in-time adaptation of resources to make them accessible for individuals. This process depends on metadata being available so it can be used to manage the substitution, complementing or adaptation of a resource or some of its components.
Given that most resource publishers do not know much about accessibility, and have been shown to not do much about it, it is assumed they will not be very careful about what metadata they contribute to resources, if any. For this reason, there has been an effort to find the minimum that makes a difference and is easy to write, with the hope that those who do more about accessibility, either making better resources or fixing others, are more inclined and better informed about what metadata to use. In cases where a resource contains or is intimately linked to alternatives, such as where there is an equivalent resource like a text caption for an image, the metadata for the resource should indicate this and provide metadata for both versions of that component. It is handy for one component to be referred to as the 'primary' component and for the other as the 'equivalent alternative'.
Equivalent alternative resources are of two types: supplementary and non-supplementary. A supplementary alternative resource is meant to augment or supplement the primary resource, while a non-supplementary alternative resource is meant to substitute for the primary resource. Although in most cases the primary and equivalent alternative resources will be separate, a primary resource may contain a supplementary alternative resource. For example, a primary video could have text captions included. In this case the resource would be classified as primary containing an equivalent supplement. A primary resource can never contain, within itself, a non-supplementary resource (Jackl, 2004, Sec. 3.2.1 Equivalent Alternative Resource Meta-data).
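The supplementary/non-supplementary distinction quoted above can be recorded in a simple data model. The classes below are invented for illustration; the specifications define a metadata vocabulary, not an object model.

```python
# Sketch of primary resources and their equivalent alternatives. A
# supplementary alternative (e.g. captions) may be embedded in its primary;
# a non-supplementary alternative (e.g. a transcript) substitutes for it
# and is never embedded within it.

from dataclasses import dataclass, field

@dataclass
class Resource:
    uri: str
    contains: list = field(default_factory=list)   # embedded components

@dataclass
class Alternative:
    uri: str
    for_primary: str        # URI of the primary resource
    supplementary: bool     # True: augments the primary; False: replaces it

video = Resource("http://example.org/video-7")
captions = Alternative("http://example.org/video-7#captions",
                       for_primary=video.uri, supplementary=True)
transcript = Alternative("http://example.org/transcript-7",
                         for_primary=video.uri, supplementary=False)

# Only the supplementary alternative is embedded in the primary.
video.contains.append(captions.uri)
print(video.contains)
```

Metadata for the primary resource would then record both the embedded supplement and the location of the external substitute.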
The AfA metadata is tightly specified and very detailed. This is not done in ignorance of the practicalities of metadata, which suggest it should be very light-weight, easy to create, and so on. It is this way because people with disabilities have special needs. They use technologies that are built specially for them, which means for a small market, given the range of different devices they need. This does not mean that the market for standardised accessibility metadata is small - it can be shared across all the different adaptive technologies and beyond them to great benefit. It means rather that it is very important to be very precise about the metadata and to maintain its stability very carefully, so that adaptive technology device and software developers can be assured of the stability of the functional requirements for metadata and thus the reliable availability of that metadata. There is not the usual room for tolerance when not having something means no access to information for someone. Thus, the threshold for interoperability is high in this context.
The personal access systems used by people with disabilities can be seen as unique external systems that need to interoperate with the system delivering the resource. These personal access systems must interoperate with many different delivery systems. The personal access systems must also adjust frequently to updates or modifications in an array of delivery systems. For these reasons it is important that the delivery systems tightly adhere to a common set of specifications with information relevant to accessibility. To promote interoperability this information should be found in a known consistent place, stated using a consistent vocabulary and structured in a consistent way. To support this critical interoperability the AccessForAll specifications offer less flexibility in implementation than other specifications (Jackl, 2004, Sec. 3.2.4 The Importance of Interoperability for Accessibility).
The WCAG architecture treats resources as single entities despite the fact that it may take a number of files to form a Web page. This is not how resources are understood in AfA architecture:
Content can be considered either atomic or aggregate. An atomic resource is a stand-alone resource with no dependencies on other content. For example, a JPEG image would be considered an atomic resource. An aggregate resource, however, is dependent on other content in that it consists not only of its own content but also embeds other pieces of content within itself via a reference or meta-data. For example, an HTML document referencing one or more JPEG images would be considered an aggregate resource. The use and behavior of AccessForAll Meta-data for atomic content is straightforward. .... For aggregate content, the required system behavior is slightly more complex but it still involves matching. In other words, if the primary resource is an aggregate resource, then the system will have to determine whether or not the primary resource contains atomic content that will not pass the matching test. If so, it will examine the inaccessible atomic resources to determine which resources require equivalents. This means a primary resource must define its modalities as inclusive of those of its content dependencies (Barstow & Rothberg, 2002).
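The atomic/aggregate behaviour quoted above amounts to a recursive test: an aggregate passes only if it and every resource it embeds either passes directly or has a passing equivalent. The tree representation below is an invented illustration of that behaviour.

```python
# Sketch of recursive matching over an aggregate resource. A resource is
# {"uri": ..., "modality": ..., "embeds": [...]}; `profile` is the set of
# modalities the user can use; `alternatives` maps a uri to the modalities
# its known equivalents offer.

def passes(resource, profile, alternatives):
    ok = (resource["modality"] in profile
          or bool(profile & alternatives.get(resource["uri"], set())))
    return ok and all(passes(child, profile, alternatives)
                      for child in resource.get("embeds", []))

page = {"uri": "page", "modality": "visual-text",
        "embeds": [{"uri": "img1", "modality": "visual-image"}]}
alts_full = {"page": {"auditory"}, "img1": {"auditory"}}
alts_partial = {"page": {"auditory"}}

print(passes(page, {"auditory"}, alts_full))
print(passes(page, {"auditory"}, alts_partial))
```

When the test fails, the system knows exactly which embedded resources require equivalents, as the quotation describes.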
As the required metadata is quite detailed, there may be some concern about who will produce it. Even where the metadata is created by a well-intentioned party, there may be a question about how reliable it is. Fortunately there are a number of applications available that help with the description process and even do some of it automatically.
There are a number of tools for the authoring of metadata but in the accessibility context, there are tools for assessing accessibility that also produce metadata. Many of these produce their reports in a language called Evaluation and Report Language [EARL]. EARL provides a way to encode metadata such as AfA metadata. EARL requires all statements to be identified with a time and the person or agent making them. This makes it easier to identify the source of the description for trust purposes. EARL statements are generally intended to convey information about compliance to some stated standard or specification. This information is typical of what is needed for accessibility. An example is an EARL statement that includes information about the transformability of text determined by reference to the relevant WCAG provisions.
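The shape of such an EARL statement can be sketched as follows. The sketch assumes the core EARL vocabulary (Assertion, assertedBy, subject, test, result); the dictionary encoding and the example URIs are illustrative, not a normative EARL serialisation.

```python
# Sketch of an EARL-style assertion: the asserter and timestamp that EARL
# requires make the source of the claim traceable for trust purposes.
from datetime import datetime, timezone

def make_assertion(asserted_by, subject, test, outcome):
    """Build an EARL-like statement about a resource's compliance with a
    stated standard or specification."""
    return {
        "type": "Assertion",
        "assertedBy": asserted_by,   # the person or software agent
        "subject": subject,          # the resource being described
        "test": test,                # e.g. a relevant WCAG provision
        "result": {
            "outcome": outcome,      # e.g. "passed" or "failed"
            "date": datetime.now(timezone.utc).isoformat(),
        },
    }

# Hypothetical agent and resource URIs, for illustration only:
a = make_assertion(
    asserted_by="https://example.org/agents/checker-1",
    subject="https://example.org/page.html",
    test="WCAG 1.0 checkpoint 3.1",
    outcome="passed",
)
print(a["result"]["outcome"])
```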
The original IMS AccessForAll specifications were very closely based on the specifications developed by the ATRC for TILE. These were subjected to rigorous scrutiny because of the need to satisfy the other stakeholders involved but the attributes of interest were assumed to have been well-identified by the ATRC. As those specifications were advanced through the ISO/IEC process, they were subjected to scrutiny and some modifications were made. These modifications are not important here, in the sense that they concern details of attributes that can be adapted and adopted within the framework. What is important here is how the framework operates and how the specifications work.
Just as the user will want to define three classes of attributes of personal needs and preferences, there are three classes of attributes of digital resource to be described using AfA metadata. They are the control, display and content characteristics.
Figure ???: IMS structure for accessibility metadata (from Section 2.3, p. 7, AccMD; Norton, 2004)
As can be seen in Figure ???, the original structure of this metadata was deeply hierarchical. Somehow, it needed also to be represented as 'flat' Dublin Core metadata. This was achieved by using the DC structures, although the two forms are interoperable only with the assistance of crosswalks. 'Depth', in Dublin Core metadata, is achieved by having qualifications of elements that comply with DC rules for such qualifiers. DC qualifiers constrain either the element itself or the potential values of those elements, by providing, for example, an encoding scheme or a controlled vocabulary. To achieve this in Dublin Core form, it was necessary to reconsider some of the elements, so the final DC version is not merely a flattened version of the hierarchical IMS model.
This is most easily shown by a sample mapping from one form to the other. In the case of the LOM version, to indicate that a resource is a text alternative to an image, the following encoding would be used:
alternative >> alternative resource content description >> altToVisual >> textDescription >> French, caption
while the same information would be conveyed using the DC version, by:
isTextDescriptionFor: URI of the original component being made accessible; caption
While both systems can provide the same information, it can be seen that the DC model leaves the language (French) independent of the type of resource (caption), so these two properties need to be allied, whereas in the IMS model both pieces of information are specific to the textDescription of the altToVisual of the alternative resource content description of the alternative.
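A toy crosswalk between the two forms can be sketched as follows. The flat property name and the path test are illustrative assumptions; the actual mappings between the hierarchies are given in Appendix 7.

```python
# Sketch of a crosswalk from the hierarchical (LOM-style) path to a flat
# DC-style statement. The property name "isTextDescriptionFor" follows
# the example in the text; other names here are assumed for illustration.

HIER = ["alternative", "alternative resource content description",
        "altToVisual", "textDescription"]

def to_flat(hier_path, original_uri, language, resource_type):
    """Flatten the deep path into one refined DC-style property whose
    value identifies the original component being made accessible."""
    if hier_path[-2:] == ["altToVisual", "textDescription"]:
        prop = "isTextDescriptionFor"
    else:
        raise ValueError("no mapping defined for this path")
    # Language and resource type travel as independent properties in the
    # DC form, rather than being nested inside the path:
    return {prop: original_uri, "language": language, "type": resource_type}

flat = to_flat(HIER, "http://example.org/image.jpg", "fr", "caption")
print(flat)
```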
The research involved finding a way to do this for all the information, satisfying both the requirements for IMS GLC and the ISO/IEC metadata definition, and for the DCMI community. Based on the hierarchical model of the IMS version, an equivalent version was developed according to the DC model. This meant ensuring that all of the deeply embedded information in the one model was also available in the shallow format of the other. The two hierarchies (Appendix 7) allow for all the information that is available for an IMS profile to also be available in a DC profile. So long as this is done correctly, that is, so long as the DC rules for elements and application profiles are observed, the metadata can be encoded in a number of ways, particularly in HTML, XML and RDF(XML).
The DC rules state:
At the time of the ratification of this document, the DCMI recognizes two broad classes of qualifiers:
Element Refinement. These qualifiers make the meaning of an element narrower or more specific. A refined element shares the meaning of the unqualified element, but with a more restricted scope. A client that does not understand a specific element refinement term should be able to ignore the qualifier and treat the metadata value as if it were an unqualified (broader) element. The definitions of element refinement terms for qualifiers must be publicly available.
Encoding Scheme. These qualifiers identify schemes that aid in the interpretation of an element value. These schemes include controlled vocabularies and formal notations or parsing rules. A value expressed using an encoding scheme will thus be a token selected from a controlled vocabulary (e.g., a term from a classification system or set of subject headings) or a string formatted in accordance with a formal notation (e.g., "2000-01-01" as the standard expression of a date). If an encoding scheme is not understood by a client or agent, the value may still be useful to a human reader. The definitive description of an encoding scheme for qualifiers must be clearly identified and available for public use (DCMI, 2000)
Originally, qualifiers of elements were explicitly declared with a syntax of the type DC:<term>:<qualifier> but now they are simply used as terms, as in DC.<Qualifier>. This does not mean they do not follow the rules but, once their status is established, they are used alone. That a term is a qualification of another is of significance when the metadata is being transformed for some purpose: a qualified term's value must make sense as the unqualified term's value, according to what is called the dumb-down rule. This often introduces some loss of specificity, but at least means that the information can be transferred without being lost altogether. It also accommodates what might otherwise be hierarchically structured information.
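The dumb-down rule can be sketched as follows. The refinement table is an assumed example (dcterms:created does refine dc:date; the AfA refinement shown is illustrative only).

```python
# Sketch of the dumb-down rule: a client that does not understand a
# qualifier drops it and treats the value as belonging to the broader,
# unqualified element, losing specificity but not the value itself.

REFINES = {
    "created": "date",                    # dcterms:created refines dc:date
    "isTextDescriptionFor": "relation",   # assumed AfA refinement
}

def dumb_down(statements):
    """Map each (term, value) pair onto the unqualified element the term
    refines; unknown terms pass through unchanged."""
    return [(REFINES.get(term, term), value) for term, value in statements]

record = [("created", "2000-01-01"),
          ("isTextDescriptionFor", "http://example.org/image.jpg")]
print(dumb_down(record))
```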
What it is hoped that the DC version of the AccessForAll principles can do, and that would not be likely with other forms of metadata, is to provide a way for all resources to be classified and made available with accessibility metadata. DC metadata is used in many countries to describe government information, in libraries and museums around the world, within software applications such as Photoshop and MS Word, widely in education, and by international agencies such as the Food and Agriculture Organisation (FAO). There is an inordinate amount of DC metadata in existence. If that can be both harvested for use in accessibility, and interwoven with accessibility metadata, the hope is that the vast quantities will make the difference that only quantity can make: the network effect will become a possibility. This can be achieved by using the correct form. The example above shows the alternative captions in French. If the captions are, in fact, from a collection of resources for French speaking people, and the collection is described as using the French language, this would imply the captions are French. This information might not be available otherwise when the language of the captions is being questioned. (This is an example of how the use of the Semantic Web and Topic Maps can help with accessibility, as shown in Chapter 6.)
One of the significant outstanding challenges for the metadata work is how to use these new specifications when it is not clear what the alternatives are and so a search is required to locate suitable alternatives. It is envisaged that the specification of display and control characteristics will not be a problem beyond the existence or otherwise of the necessary metadata but finding suitable alternative content may be a challenge. TILE has so far only worked with content developed with accessibility in mind and so can guarantee the availability of the necessary combinations of components.
Typical problems for the discovery of suitable content are exemplified by two scenarios:
• There is a film of the play Hamlet with XXX and YYY as the lead actors. Those who cannot see the film but can hear it will require a description of the action but those who cannot hear it will need a description of the sound effects and the dialogue.
The dialogue has been documented in the past (by Shakespeare) so a text copy with the appropriate control and display qualities will satisfy their needs but it may need to be synchronised with the action in the film, so there will be a need for a synchronisation file (a Synchronised Multimedia Integration Language (SMIL) file, for instance). If this is not available, at least having access to the dialogue should satisfy many users’ needs but if the user is trying to work on the relationship between actions and dialogue in the play, they will need the synchronisation file. If the film does not follow the Shakespearean script, then there may be an issue with finding a text version of the film’s dialogue. Again, depending on the immediate user’s purpose, this may or may not matter.
It has been suggested that the work defining the functional requirements for bibliographic records [FRBR] provides some guidance as to how the appropriate alternative content might be located (Morozumi et al, 2006).
• There is a Web site that contains resources for students working on economic modeling. The Web site contains a number of diagrams that are integral to the text available and yet cannot be viewed by a blind student undertaking the course. Her university has a policy that requires all materials to be accessible to all students and in cases where this is not immediately true, allows the university staff to create the necessary alternative content within 24 hours of receiving a request for it. It so happens that the diagrams in the course materials were taken from another source where they were used differently from in the course: in the former case they were used to demonstrate economic trends and in the latter to show how certain economic models are diagrammed. As the blind student has never seen graphs and does not have any facility with them, they are not suitable for her as illustrative unless they are accompanied by significant other descriptive information. As the graphs were generated from databases, however, there is material that would be suitable for her in the form of database material.
This example shows the use of content in a quite different form and format from that originally made available but, again, it needs to be discoverable. It is not obvious that it is available and so the only way of finding it would be to search for material with the same content as the originally offered diagrams, taking no notice of the purpose of those diagrams in the original teaching resource, and then substitute the database content for the diagram. This means looking for content that is described differently from the content to be replaced but which serves the same purpose for the user.
In order to make it possible to discover alternatives, it may be necessary for descriptions of the content of resources to be multiply layered, as in the case of a 'FRBR-type' description. Such descriptions are not yet common on the Web, but it is apparent from work in some quarters that this may be the case in the future (Denton, 2007).
In this chapter, the possibility of user interface adaptation is considered as an extension of the AccessForAll model. First, a project being undertaken simultaneously with the AccessForAll work is discussed, and then some new work that has been started only since the emergence of the AfA model.
This chapter contains writing that was part of a paper about universal remote control devices that was co-written by the author (Sheppard et al, 2004).
At the time when the early AccessForAll work was being undertaken, Gregg Vanderheiden and a number of people from the National Institute of Standards and Technology [NIST] and elsewhere were working on a universal remote control (URC) in a technical committee working on standards for the InterNational Committee for Information Technology Standards [INCITS/V2/] in the area of Information Technology Access Interfaces. The aim of the URC was to be able to give a person with disabilities a single remote control device that would be able to talk to a range of devices. For example, they might use the URC to control their front door, garage door, car locks, office doors, office elevator, home air conditioner, microwave oven, etc.
The idea was that the remote control device would interact with the main device, say an oven, to obtain information about the controls available on that device, and then construct an interface setup that would allow the user to talk to the main device using the URC and the new 'skin'. This work led to some interesting problems such as those associated with a lift. If a person goes to a lift in a modern building, they usually have to press a button to hail the lift, then another to indicate where they want the lift to stop, and then another to shut the door, if it is not already shut, and then maybe one to hold the door open for a bit longer while they exit the lift. All this button pressing is very difficult for some people with disabilities, and very confusing for a person with a vision disability. The URC was designed to enable them, in this situation, to simply press a button to indicate where they wanted the lift to stop. The URC should be able to transmit information to attract the lift, take it out of the usual pattern synchronising it with other lifts in the same location, hold the doors open for longer than usual, or as long as is required, and then to close the doors, go to the destination, and open the doors again for longer than usual before merging back into the pattern.
The author was involved in this work at an early stage to advise on the possibilities for the descriptions necessary for the URC and the devices it would interact with. By 2006, it was also being considered as an international standard, this time by ISO/IEC JTC1 SC35.
As with other AfA specifications, the goal is to develop a common description language so that computers can interchange descriptive information and make use of it.
A URC is capable of being used with a range of devices, in a range of languages, and with a variety of accessibility features. It is, in fact, no more than a platform on which intelligence is loaded in real time for the benefit of users confronted by other devices. The type or brand of device is not important if the URC protocol is observed, as each device can have skins and information specific to its needs and comply with the generic URC specifications for that type of device.
So URC compliance is about metadata standards: the description of device and user needs and commands in URC-specified ways makes for a common language that can be used any time by a URC, in any context, for a user.
Wireless communication technologies make it feasible to control devices and services from virtually any mobile or stationary device. A Universal Remote Console (URC) is a combination of hardware and software that allows a user to control and view displays of any (compatible) electronic and information technology device or service (which we call a “target”) in a way that is accessible and convenient to the user. We expect users to have a variety of controller technologies, such as phones, Personal Digital Assistants (PDAs), and computers. Manufacturers will need to define abstracted user interfaces for their products so that product functionality can be instantiated and presented in different ways and modalities. There is, however, no standard available today that supports this in an interoperable way. Such a standard will also facilitate usability, natural language agents, internationalization, and accessibility (Sheppard et al, 2004).
Disabled people are obvious beneficiaries of this technology but others, too, will want a more convenient way to control things in their environment.
The definition of a stable URC standard will enable a target manufacturer to author a single user interface (UI) that is compatible with all existing and forthcoming URC platforms. Similarly, a URC provider needs to develop only one product that will interact with all existing and forthcoming targets that implement the URC standard. Users are thus free to choose any URC that fits their preferences, abilities, and use-contexts to control any URC-compliant targets in their environment.
Figure ???: A wheel-chair user struggling to reach an ATM (HREOC, with permission).
We are using the Dublin Core Metadata Element Set (DCMES) to describe and find the additional resources that may be needed by a URC using the AIAP. The metadata for the AIAP defines a set of attributes for specifying resources. Text labels, translation services, and help items are examples of such resources. The metadata also defines the content model needed to interface with suppliers of such resource services.
The [Alternative Interface Access Protocol] AIAP metadata is being defined in multiple phases, two of which have been identified. The first phase deals with the identification of resources so that they can be found and used. Phase 2 involves establishing metadata for identifying targets (devices or services), classes of interfaces and user preferences. Taxonomies will be identified or developed for classifying values for each of these major areas (Sheppard et al, 2004).
Fluid is a new project that also aims to provide choice of suitable interfaces to people with disabilities, this time for interaction with digital resources.
Fluid is a worldwide collaborative project to help improve the usability and accessibility of community open source projects with a focus on academic software for universities. We are developing and will freely distribute a library of sharable customizable user interfaces designed to improve the user experience of web applications [Fluid].
Fluid expects to develop an architecture that will make it possible for users to swap interface components according to users' needs and preferences, following the AccessForAll model. This project at the time of writing had started with a demonstration of a drag-and-drop interface alternative for people with disabilities (Fluid Drag-and-Drop).
As with other AfA projects, it is essential that there is a common language for describing user needs and preferences and similarly, a matching set of descriptors for interface components.
In this chapter, resource description metadata is considered. Primarily, the research has been about the use of metadata to manage digital resources with which users are presented but, as shown, this process could be used for a wider range of resources and in a wider range of contexts. Indeed, there are subsequent parts to the original metadata already being developed by ISO/IEC JTC1 SC36 and other projects are already underway elsewhere. In the next chapter, the process of matching a resource to a user's needs and preferences is considered.
In this chapter, after considering the process of matching of resources to users, interoperability and the role of the Functional Requirements for Bibliographic Records [FRBR] are considered. The matching of resources to users' needs and preferences can be simplified when all the required components are available within a single context but is more complicated when they are either distributed or not yet available. When automated matching is not possible, it can still be done manually.
The close relationship between the FRBR model and accessibility metadata is slowly being recognised in the AccessForAll context as it is being realised simultaneously in emerging general metadata standards such as the Metadata Encoding and Transmission Standard [METS]. For accessibility, this is important because while those working in accessibility have for a long time been considered to be technical experts in encoding languages, due to the prominence of WCAG in the context, it may become more an issue for those information managers with library skills.
This chapter contains content from various presentations at the AusWeb 2005 Conference in Queensland, Australia (Nevile, 2005c); DC 2006 in Manzanillo, Mexico in 2006 (Morozumi et al, 2006), and an ASK-IT International Conference in 2006 in Nice, France (Nevile, 2006).
AccessForAll is a strategy for increasing accessibility by exploiting available technologies to match digital resources to users' individual accessibility needs and preferences. This is achieved just in time for the delivery of resources to users by working with descriptions of an individual user's accessibility needs and preferences and relating them to descriptions of a resource's accessibility characteristics. This strategy supports cumulative and distributed authoring of accessible components for resources where these are missing, and the reconfiguration of resources with appropriate components for users.
Assembling Web resources in an integrated way for delivery to the user is defined as just-in-time accessibility and can increase the availability of accessible resources. Moreover, compared to universally accessible resources, those that are accessible to every potential user, these resources are less expensive, easier to develop (in terms of skills required), and developed using more satisfactory practices for authors and publishers. In addition, the provision of accessible content can be improved so significantly by the use of specifications-compliant accessibility tools, adopted by moderately competent computer users with no accessibility training, that it is cheaper and more effective to rely on the technology than on yet-to-be-developed high levels of human expertise.
The new approach involves a shift of responsibility from individual authors to technology and a supporting community. The shift means increasing responsibility in the final provision of resources, and thus, of server software but also content authoring tools. Where components are not universally accessible, e.g. well-formatted text that can be rendered in a variety of forms such as auditory, visual, and tactile, they may need to be re-written either in a universally accessible form, or with extra components to replace or supplement the existing components. The servers need to check the resources and possibly arrange for services to manipulate and reassemble them before delivering them. The accessible components need to be suitably described to enable their discovery. The components that constitute the final resources may be distributed. This means there is a need for metadata standards that promote interoperability. Finally, there is a need for descriptions not only of resources but also of user needs and preferences.
Accessibility is defined by AccessForAll as the matching of delivery of information and services with users' individual needs and preferences in terms of intellectual and sensory engagement with resources containing that information or service, and their control of it. Accessibility is satisfied when there is a match regardless of culture, language or disabilities (Ford & Nevile, 2005). For individual users, matching their needs is of primary importance and in some cases critical to their ability to function. It should be noted, however, that this does not mean some users only want resources that are dull or boring but simply that resources should be adjusted and adapted to suit the stated needs of individual users at the time so everyone can have what will be best for them.
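The matching step at the heart of this definition can be sketched as follows. The profile and resource attribute names are illustrative assumptions for the sketch, not the normative AfA vocabulary.

```python
# Sketch of the AccessForAll match: a user's profile states required
# adaptations, a resource and its alternatives state what they supply,
# and the match assembles the components to deliver, just in time.

def match(profile, resource, alternatives):
    """Return the components to deliver: the original resource, plus any
    alternatives that satisfy a requirement the original does not meet."""
    delivery = [resource["id"]]
    for need in profile["required"]:
        if need in resource.get("supplies", []):
            continue   # the original already satisfies this need
        for alt in alternatives:
            if alt["for"] == resource["id"] and need in alt["supplies"]:
                delivery.append(alt["id"])
                break
    return delivery

video = {"id": "lecture.mp4", "supplies": ["visual", "auditory"]}
alts = [{"id": "lecture-captions.srt", "for": "lecture.mp4",
         "supplies": ["captions"]}]

# A user who requires captions gets the video plus the caption file:
print(match({"required": ["captions"]}, video, alts))
```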
Howell (2008) says,
Businesses are now investing a good deal more time and money into optimising ‘user journeys’ to ensure that the people using their sites find the route to making a purchase (or finding the information they are looking for) as quick, easy and enjoyable as possible.
I think of this as a pyramid. Web accessibility is the foundation. Usability by disabled people is the next layer. And both of these underpin the ultimate goal: excellent user experiences by disabled people (and everyone).
A logical extension of this gives the pyramid an apex:
What is significant, beyond the usual benefits from working with metadata before the final delivery of the resource, is that metadata is not the resource; it is not necessarily created by the same author as the resource, and it can always be added to, authored by someone else. It can be created by the resource author and stored as part of it or with it or it can be created by a complete stranger to the resource author and stored elsewhere. It can also link two or more resources that were not initially linked in any way. For increased accessibility of a resource, a third party may author a new component and use metadata to link it to the original resource. Where the original resource is well described in metadata, this may make for a new composition of the resource, avoiding any components that cannot be used by the particular user, and delivering only those that are useful, whatever their source. Where the original resource is not well described, that can be done after the event as well, and again, by a third party.
An example of the difference between the former approach of depending on the production of universally accessible resources and the shift to combine the use of metadata is well illustrated in the Australian universities context. As in many other countries, Australia has anti-discriminatory legislation that means any student at a university has the right to accessible versions of all the resources provided for students. A typical university will interpret this to mean that they must author all resources in a universally accessible format (and typically will do this for only 3% of the resources) whereas a university using the AccessForAll approach could notify a student who has recorded their user requirements that a resource is not suitable for them and either re-author it or find a suitable alternative and link it to the original by metadata. It is true that a typical university can attempt to author an accessible version of the original resource but it is notoriously difficult to make an inaccessible resource accessible; finding alternative resources already in the chosen format, where successful, is much easier. (The reasoning here is based on the evidence provided in earlier chapters reporting quantitative assessments of accessibility.)
Providing materials that are accessible within 24 hours of a request would be considered much better than having only 3% of the resources available. It would probably make the resource suitable for that student while universally accessible does not always achieve this, and it would be possible to add to the metadata of the original resource so that next time a student searches for it, there will be more options available. It is perhaps relevant to repeat here that, without metadata, a universally accessible resource is unlikely to be found by someone who needs it. The AccessForAll approach being advocated means a shift from just-in-case to just-in-time and, as in many other circumstances, the latter can be much more economical (and, in this case, achievable).
One of the reasons Web developers use metadata is because it allows them to dynamically compose Web pages. They can develop components and then templates for various different sections of their Web site and have them dynamically composed just as they are to be delivered. This makes maintenance of content easier and can support accessibility, depending on the templates and tools being used. In some cases, re-use of single components can be extensive.
Figure ??? of the audit of content at La Trobe University several years ago demonstrated this dramatically: the La Trobe University logo, for example, is used in every one of the 48,084 Web pages covered by the audit (Nevile, 2004). This is typical of organisational sites where content is produced using templates. If such an object is inaccessible, it is transmitted with every page. Sometimes, a set of components is transmitted just-in-case. As shown by Fairfax Digital (Jackson, 2004), being able to transmit only what is required can save the publisher substantially, and also reduce the reception costs for the user.
The Inclusive Learning Exchange (TILE) process provides both a proof of concept and a model for the matching of resources to people's needs and preferences. TILE checks the user's profile and then finds objects from which to compose a resource that suits their needs. As TILE includes a tool for creating and editing the user's profile, this can be done while the user is using the service. TILE uses the AccessForAll metadata profiles to match resources to users' needs, with the capability to provide captions, transcripts, signage, different formats and more to suit users' needs.
The TILE prototype has the benefit that within the TILE system, all the necessary components are available. The resources are put together dynamically (Figure ???) so it demonstrates the desired outcomes but it does not offer a model for situations where either metadata, or sought components, are elsewhere and not identified, where the resource is being made accessible to the user just-in-time.
Given that few resources are universally accessible, one can assume that most resources will need attention if they are to be rendered accessible for a particular user. As a strong motivation for accessibility often arises in a community of users rather than authors, it is not uncommon to find a third party creating an accessible component for an existing resource or part of a resource. Closed captions for films, for example, are usually produced by a party specializing in captions, as are the foreign language versions of the spoken sound tracks. A number of organisations offering such resources are listed in Chapter 7, where their availability of resources and descriptions of them are considered.
Not all such services are performed in advance; some are able to provide the service instantaneously, using automated services, while others involve people and take time. Nevertheless, being able to associate such a service with a resource can increase its accessibility. ubAccess has a service that transforms content for people with dyslexia; a number of Braille translation services operate in different countries to cater for different Braille dialects, and online systems such as Babelfish help with translation services.
Creating the accessible alternative components and making them available for use is shared by accessible content authors and repositories. Once there is an alternative for a resource component, it is a pity if a new one has to be created just because the existing alternative cannot be found. This means, of course, that repositories of accessible content should be online and their collections available and discoverable (see below). In the case of communities, such as an educational system, there should be no barriers to the development of networks of distributed accessible components.
To perform the accessibility match, there is a need for a service that provides the right combination of content and services for the user, where and when they need it. This depends on the user and resource profiles, the context information, and the pieces that are to be assembled for delivery to the user as the resource they require.
For a user, or an assistant working with them, it must be possible to create the necessary profiles and to change them for the immediate circumstances. In addition, it must be possible to make formal descriptions of the resources and link all of these together for the matching process. There are several layers of discovery involved. There is more than just discovery information needed, however, and therefore a need for systems that facilitate the making of such descriptions.
In 2003-4, Fairfax Digital redeveloped their web site with accessibility in mind and the result is a saving of an estimated $AUD1,000,000 per year in transmission costs alone (Jackson, 2004). A bigger publisher would save even more. Flexible assembly satisfies the requirements for the users, allows for more participation in the content production process and has the benefit that it limits the production and transfer of content that will not be of use to the recipient. Descriptions of the accessibility of content of large collections can be done with tools designed for that purpose. Publishers can identify potential problems and gaps in their resource collections in advance, as was the case with the La Trobe University Web site when audited (Nevile, 2004).
Publishers who do not have complete sets of components for all potential users will need to provide or point to services that can either discover missing components or create them. Their servers will need to be able to integrate the new components without having the original resource 'fall apart', so original resources should be composed dynamically of components that can be substituted, added to, and so on. This does call for the design of more flexible resources, but it can be done. If it is part of the general practice for a publisher, bringing in a foreign component should be possible without 'destroying' the original resource.
Where the original publisher does not manage the accessibility, a third-party publisher will have to absorb the original resource, deconstruct it and test the individual components, and then find what is necessary and reconstruct it for delivery to the user. In this case, unless the resource was designed with flexibility from the beginning, the result may not be well formed even if it is accessible; that is, it may be 'accessible' component by component but not very usable. This is often better, however, than its being simply not accessible.
The Web 2.0 strategy proposed, using technology to augment, supplement and in some cases replace author expertise, is more likely to be achieved by a combination of tools than by the adoption of any particular tool. Few of these are yet available as one-stop Web services, but many are available as system components. The big changes will be possible when they are made into Web services, as this will increase the network capabilities of the systems and thus the quantity of sharing that is possible. The possibilities will only be realised if there is commitment to them. This is not so difficult to imagine: the achievements of ordinary people using word processors, electronic spreadsheets and presentation tools today suggest what could be expected for accessibility in the future with the tools and practices proposed.
An outstanding issue is then: what is necessary for an accessibility service to find a suitable resource or component in a distributed environment? In the usual discovery process, users define the topic of interest and one or more other properties. In the case of an AccessForAll search, the user's needs and preferences impose additional constraints on the suitability of the resource. Initially, the author and others assumed that this would be possible and started with a simple model in which the user's needs and preferences profile (PNP) was simply added to a search query (Figure ???). The problem with this approach is that if no suitable resource is returned, or if components of a resource are unsuitable, a new search with different requirements will be necessary to find what is needed; the results of the first search, although already evaluated, contribute nothing to it.
So this is where the use of FRBR becomes relevant (see Chapter 9). If resources are described with their content related to the intellectual work contained within them, it should be possible to find other resources or components with similar or even the same intellectual content.
In order to obtain the metadata that might be needed, it becomes necessary not to combine the user's needs and preferences with the other requirements in the primary search, but to use them to filter the results, so that as much metadata as possible about equivalent resources can be gleaned from the resources found in the search. For this reason, the original diagram needs modification, as shown in Figure ???.
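The revised two-stage process can be sketched as follows. This is a toy illustration under stated assumptions: the catalogue records, field names and the `isAlternativeTo` relation are all invented for the example, not part of any specification.

```python
# Sketch of the revised two-stage process: query on topic alone,
# then filter with the user's needs, keeping metadata from all
# results so that equivalents can be found without a fresh search.
# Records and field names are invented for illustration.

catalogue = [
    {"id": "map-1", "topic": "campus map", "format": "image"},
    {"id": "map-1-text", "topic": "campus map", "format": "text",
     "isAlternativeTo": "map-1"},
]

def search(topic):
    """Stage 1: discovery on topic only."""
    return [r for r in catalogue if r["topic"] == topic]

def filter_by_pnp(results, required_format):
    """Stage 2: apply the user's needs to the full result set."""
    return [r for r in results if r["format"] == required_format]

results = search("campus map")           # both records are retained
suitable = filter_by_pnp(results, "text")
print([r["id"] for r in suitable])       # ['map-1-text']
```

The point of the two stages is that the metadata of the unsuitable image is still in hand after filtering, so its declared alternatives can be followed without issuing a new query.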
There are a number of possibilities, in fact, for constructing a new query.
Let us assume that somewhere a suitable result exists (if there is none, a fail condition will have to be specified). So let us imagine we are seeking an alternative for an image that is usually inserted into a resource. Let that resource be a map, so we are looking for either a textual version of the content of the map or a recorded verbal description of it, and for our current purposes we assume at least one such target resource exists. In other words, the problem is not so much to find a suitable resource as to find a resource with the same intellectual content as the map we already had, in a situation where the first search did not find that alternative. This is not a new problem: it is the classic problem of how to find resources like a given one that are not described in a way the search has already matched. Many search engines offer a facility to 'find similar'.
There are a number of potentially useful processes for doing this. For example, Jeon et al (2005) have proposed a method for finding similar questions by reference to the answers to those questions. Another approach is to find similar words to those used for the original search and then use the new set of words to search for more resources (Otkidach et al, 2004). Google offers some simple approaches such as: press the ‘Similar Pages' button, use the Page-Specific Search selector on the Advanced Search page, or use the related search operator. They even offer a browser button for those who are doing this frequently [Google] and provide a detailed explanation of how they find similar resources (Google Similar Pages).
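One simple form of such a 'find similar' step can be sketched as term-overlap scoring. This is only an illustration of the shape of the idea: real services such as those cited use far richer signals, and the descriptive terms below are invented for the example.

```python
# Illustrative sketch of a 'find similar' step: score candidate
# resources by the overlap of their descriptive terms with those of
# the resource in hand (Jaccard similarity).

def jaccard(a, b):
    """Proportion of shared terms between two term sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

original = ["campus", "map", "buildings", "paths"]
candidates = {
    "desc-1": ["campus", "map", "text", "description"],
    "desc-2": ["weather", "forecast"],
}
ranked = sorted(candidates,
                key=lambda k: jaccard(original, candidates[k]),
                reverse=True)
print(ranked[0])   # 'desc-1' is most similar to the original
```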
The library community faces the problem that a single work, such as a Shakespearean play, can be published in many forms, by many publishers, and usually with multiple copies of any particular publication. This means that a library holding a single copy is one of a community of providers and, from another perspective, that a user faces a complex set of providers and locations for a single work.
To simplify matters, the International Federation of Library Associations developed a framework for the functional requirements of the catalogue records they have for works. In fact, they defined four levels of development of a book, starting with the intellectual endeavour, the work, which is expressed in some form, say a play, then manifested in some form, perhaps a publication by XYZ company, as a set of items, books. The four entities are therefore: work, expression, manifestation and item.
In the context of accessibility, while the FRBR authors did not explicitly take it into account because it was not relevant to them at the time, FRBR's entities can be very useful. The FRBR model assumes four user tasks: find, identify, select and obtain (Figure ???). These are not just for those seeking books but are also relevant to users of digital resources. Just as book searchers may need to use information about the expression of the work they seek, so may the user who wants an alternative manifestation or item. In the case of items, of course, in the digital context any item may be displayed in many ways, and similar book items can be distinguished too: for example, by who owns them, where they are located, or what condition they are in. These qualities of the item, however, are similar in kind to those of interest to the digital resource user, and will need to be described to users for whom they make a difference, e.g. users of heritage books.
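The four FRBR entities and their nesting can be sketched as simple records. The attribute names below are illustrative only, not FRBR's own attribute lists; the example shows why the model helps locate accessible alternatives: two expressions (text and spoken) hang off one work.

```python
# A minimal sketch of the four FRBR entities as nested records.
# Attribute names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Item:                 # a single copy, physical or digital
    owner: str
    condition: str = "good"

@dataclass
class Manifestation:        # a particular publication of an expression
    publisher: str
    form: str               # e.g. 'print', 'HTML', 'audio'
    items: list = field(default_factory=list)

@dataclass
class Expression:           # the work realised in some form
    language: str
    mode: str               # e.g. 'text', 'spoken', 'tactile'
    manifestations: list = field(default_factory=list)

@dataclass
class Work:                 # the intellectual endeavour itself
    title: str
    expressions: list = field(default_factory=list)

hamlet = Work("Hamlet", expressions=[
    Expression("en", "text",
               [Manifestation("XYZ", "print", [Item("library")])]),
    Expression("en", "spoken", [Manifestation("ABC", "audio")]),
])
# Two expressions of one work: a basis for finding an accessible
# alternative with the same intellectual content.
print(len(hamlet.expressions))
```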
Translation into a foreign language is equivalent to transformation into a different form, as is, for instance, conversion of a graphic of symbolic mathematics into a MathML version suitable for automatic transformation into Braille.
In the accessibility context, FRBR is relevant not only for helping to locate other resources with the same intellectual content, as described above, but also because it can provide guidance for the development of application profiles for accessibility. FRBR is not a metadata schema and is not intended to be one; it is not implemented as metadata anywhere. It is a model for use by those who are working on metadata for user requirements.
The author and colleagues (Sugimoto & Morozumi) analysed FRBR as a way of testing the AccessForAll metadata (Morozumi et al, 2006). They compared the FRBR relationships and attributes of entities with Dublin Core Metadata Terms [DCMT Terms] and the ISO/IEC JTC1 Digital Resource Description (DRD) terms. In other words, the aim was to find out if the FRBR model proposed metadata that would be useful in an AccessForAll context with respect to accessibility characteristics of a resource. Similar work had been done previously with respect to the Dublin Core model when the Dublin Core accessibility work first commenced (Chapter 7).
DCMT (properties) describe what FRBR calls attributes of entities with the exception of the relation element. dc:relation is useful for describing relationships that can be of interest in the accessibility context, as demonstrated in the emerging DC Application Profile for AccessForAll (<http://dublincore.org/accessibilitywiki/>). The relationship between the attributes of dc:format and dc:type would be of interest but this depends on implementations, and is not in the metadata per se. dc:description and dc:audience may also be useful, depending on their use.
Not surprisingly, there was little in common between the elements of the DRD and the FRBR model; the DRD was designed to complement existing metadata schemas, not to duplicate them. These results led to the observation that the DCMT terms are limited in respect of accessibility adaptability in the same way as is the FRBR model. It was asserted then, that as the DRD represents the information as metadata that is required in the description of a resource to indicate its adaptability for accessibility, neither the FRBR model, nor examples of metadata such as the DCMT and MODS that are closely related to it, provide the metadata necessary for accessibility adaptability (Morozumi et al, 2006).
Much of the content of this chapter became a co-authored journal article (Nevile & Treviranus, 2006) and a co-authored paper presented at the World Summit on the Information Society [WSIS 2005] conference in Tunisia (Nevile & Mason, 2005).
AccessForAll fits within a framework for educational accommodation that supports accessibility, mobility, cultural, language and location appropriateness and increases educational flexibility. Its effectiveness will depend upon widespread use that will exploit the ‘network effect' to distribute the responsibility for the availability of accessible resources across the globe. Widespread use will depend upon the interoperability of AccessForAll which, in turn, will depend on the success of the four major aspects of its interoperability: structure, syntax, semantics and systemic adoption. The last criterion, systemic adoption, is added here deliberately to the conventional trio of criteria (Weibel et al, 2002).
There is no doubt that an important aspect of achieving interoperability is the widespread adoption of common solutions to problems. The new framework can inherit this from extensively used standards. In the case of educational resources and services, there are many major communities concerned with relevant aspects of descriptive standards and of those, a number have been engaged in the development of the AccessForAll model. Cross-domain metadata also has well-established standards that have been considered. The model is based on a set of principles that, when implemented in a variety of standard languages or systems, should maintain their interoperability at syntactic, structural and semantic levels. It also depends upon widespread systemic adoption to generate the volume of accessible components required.
The AccessForAll strategy complements work to determine how to make resources as accessible as possible done primarily by the World Wide Web Consortium Web Accessibility Initiative [WAI]. The focus of that work is technical specifications for the representation and encoding of content and services, to ensure that they are simultaneously accessible to as many people as possible. W3C also develops protocols and languages that become industry standards to promote interoperability for the creation, publication, acquisition and rendering of resources.
The focus of AccessForAll is ensuring that the composition of resources, when delivered, is accessible from the particular user's immediate perspective. It complements the W3C work by enabling a situation where a particular suitable resource is discoverable and accessible to an individual user even when it may not be accessible to all users. In some cases, this may mean discovery and provision of alternative, supplementary or additional resource components to increase the accessibility of an original resource. The distinguishing feature of AccessForAll is that it assembles distributed, sometimes cumulatively-created, content into accessible resources and so is not wholly dependent upon the universal accessibility of the original resource.
The AccessForAll specifications, while initiated in the educational community, are suitable for any user in any computer-mediated context. These contexts may include e-government, e-commerce, e-health and more. Their use in education will be enhanced if they are adopted across a broad range of domains and used to describe the accessibility of resources available to be used in education even if that was not their initial purpose. The AccessForAll specifications can be used in a number of ways, including: to provide information about how to configure workstations or software applications, to configure the display and control of on-line resources, to search for and retrieve appropriate resources, to help evaluate the suitability of resources for a learner, and in the sharing and aggregation of resources.
The AccessForAll specifications are designed to gain extra value from what is known as the ‘network effect': the more people use the specifications, the more there will be opportunities for interchange of resources or resource components, and the more opportunities there are, the more accessibility there will be for users.
So implementation has many paths available and, although only time will tell, it is important to consider these and their potential at this stage.
In Chapter 6, the need for metadata to be defined in a structured, formal way was established. It was made clear that unless this is so, metadata cannot be used by machines, which cannot reason or make judgements about how to interpret resource descriptions. Implied was the need for these constraints from an interoperability perspective. That is, if metadata is to be used to find distributed resources, the same query will need to be applied to a number of search engines. Interoperability implies that the single query will be comprehensible and useful to all such query engines. It is not necessary that the query be used by all of them in its original form, as they may be able to transform it, prior to using it, to suit their purposes. In some cases this means a cross-walk, where two sets of metadata are linked by a mapping, one-way or two-way. If such a mapping is not perfect, in other words not lossless, the mappings will be, correspondingly, less than perfectly interoperable.
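A cross-walk, and the sense in which it can fail to be lossless, can be sketched as follows. The element names and the mapping table are invented for illustration; they are not any published cross-walk.

```python
# Sketch of a one-way cross-walk between two metadata vocabularies,
# with a simple check for loss. Element names are invented.

CROSSWALK = {                   # source element -> target element
    "title": "dc:title",
    "creator": "dc:creator",
    "educationalLevel": None,   # no target equivalent: lossy
}

def crosswalk(record):
    """Map a record element by element, reporting anything dropped."""
    mapped, lost = {}, []
    for key, value in record.items():
        target = CROSSWALK.get(key)
        if target:
            mapped[target] = value
        else:
            lost.append(key)
    return mapped, lost

record = {"title": "Campus map", "educationalLevel": "undergraduate"}
mapped, lost = crosswalk(record)
print(lost)   # ['educationalLevel'] — the mapping is not lossless
```

Any element in the `lost` list is exactly the information that makes the two descriptions less than perfectly interoperable.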
Chapter 7 considered the need to ascertain whether resources that are already available are suitable as alternatives for inaccessible resources; this was, in effect, an analysis of the potential interoperability of AccessForAll metadata and the original metadata. In Chapters 9 and 10, the task of finding an accessible alternative to an inaccessible resource was considered. In such a case, it is obvious that the metadata search for the alternative needs to be interoperable with any catalogue referring to such a resource.
One aspect of interoperability is the ability to share the same kind of information with others using the same systems and acting with the same goals. Another is to work across devices including using different hardware and software without losing the necessary ‘look and feel' that facilitates learner mobility between devices.
W3C has a working group focused on Device Independence, another focused on the Mobile Web, another working on Evaluation and Repair, and a fourth working on metadata, the POWDER working group. All four Working Groups produce specifications that are important to the interoperability of AccessForAll (Table ???).
The vision we share with others is to allow the Web to be accessible by anyone, anywhere, anytime, anyhow. The focus of the W3C Web Accessibility Initiative is on making the Web accessible to anyone, including those with disabilities. The focus of the W3C Internationalization Activity is on making the Web accessible anywhere, including support for many writing systems and languages. The focus of the W3C Device Independence Activity is on making the Web accessible anytime and anyhow, in particular by supporting many access mechanisms (including mobile and personal devices, that can provide access anytime) and many modes of use (including visual and auditory ones, that can provide access anyhow).
"Content authors can no longer afford to develop content that is targeted for use via a single access mechanism. The key challenge facing them is to enable their content or applications to be delivered through a variety of access mechanisms with a minimum of effort. Implementing a web site or an application with device independence in mind could potentially save costs, and assist the authors in providing users with an improved user experience anytime, anywhere and via any access mechanism." (W3C, Device Independence, 2003)
This group aims to tackle "interoperability and usability problems that make the Web difficult to use for most mobile phone subscribers." (W3C, Mobile Web, 2005)
Evaluation and Report Language (EARL)
"The Evaluation And Report Language is an RDF based framework for recording, transferring and processing data about automatic and manual evaluations of resources. The purpose of this is to provide a framework for generic evaluation description formats that can be used in generic evaluation and report tools." (W3C EARL, 2001)
"Working Group is to develop a mechanism through which structured metadata ("Description Resources") can be authenticated and applied to groups of Web resources. This mechanism will allow retrieval of the description resources without retrieval of the resources they describe." (W3C POWDER, 2007)
Table ???: Relevant W3C metadata and interoperability activities
For a network delivery system to match users' needs with the appropriate configuration of a resource, two kinds of descriptions are required: a description of the user's preferences or needs and a description of the resource's relevant characteristics. If users are to be able to quickly configure their devices, they need their needs and preferences to be quickly recognized and implemented by the device they are using. Similarly, if they are to search for appropriate resources (including where their search causes their system to search for accessible components from which to make the resource they want), their needs and preferences descriptions have to be available to the search engine for matching with the resources and their components. Where this happens across collections of resources, a common way of describing the resources will be necessary, and the descriptions of users' needs will need to mirror it. Interoperability between the two sets of descriptions is necessary so that, even though one is concerned with the user's needs and the other with a resource, both can be used by the search engine. In effect, this means that the description of the user's needs should be in the same format as the description of the resource.
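When the two descriptions share a vocabulary, the matching step itself becomes a straightforward term-by-term comparison. The following sketch assumes invented property names; it is not the normative AccessForAll matching algorithm, only an illustration of why a common format matters.

```python
# Sketch of the matching step: because the user's needs and the
# resource's accessibility characteristics are described with the
# same terms, matching reduces to comparing values term by term.
# Property names are illustrative, not the normative vocabulary.

def matches(pnp_content, resource_access):
    """Every adaptation the user *requires* must be offered."""
    return all(
        resource_access.get(need, False)
        for need, level in pnp_content.items()
        if level == "required"
    )

pnp_content = {"captions": "required", "audioDescription": "preferred"}
resource = {"captions": True, "transcript": True}
print(matches(pnp_content, resource))   # True: captions are available
```

If the two descriptions used different vocabularies, every comparison in the loop would first need a cross-walk, with all the loss that implies.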
Typically, users with special needs will be looking for resource components that are developed by specialists: closed captions, image descriptions and video files of people signing are usually produced by specialists who did not make the original resources. Such specialists are likely to know the standard assistive technologies and what those technologies require and can do with the special components. In automating the matching process for the user, it is very important that the standard triggers for the assistive technologies are available. This means that resources should be described in a way that particular assistive technologies can understand, but also that there should be a generic description specification to which all assistive technologies can be expected to refer. For this reason, care has been taken in AccessForAll to ensure that there is a seamless match and that established industry terms are used.
The implications for interoperability here are for exchange between systems known as ‘user agents' that typically include browsers. It is well known that browser developers pride themselves on the non-standard features they offer and that it is not easy to satisfy all browser specifications simultaneously. Fortunately, assistive technology developers who have a much smaller market are often more concerned to serve their customers and their industry associations. Nevertheless, it is important to recognize their differences and allow for their use so the AccessForAll model has to be capable of such flexibility. In fact, it aims for some generic functions to be described in a common way while allowing for extensions to accommodate custom functions or features.
AccessForAll metadata was first developed for use within the educational sector. As most resources for educational purposes are created within educational institutions, and therefore described by the educational community, descriptions of those resources are usually created according to standards designed for the educational community. Having worked with the goal of sharing resources for some time now, the educational communities have a number of ‘standards', the best-known being those developed by IEEE LOM, known as Learning Object Metadata [IEEE/LOM]. Clearly, the accessibility characteristics of resources that are ‘learning objects' need to be described in a way that interoperates with all other aspects of LOM descriptions.
Often, however, educational activities involve learners using resources that have been developed and described by other communities for their own purposes. For example, technical manuals are often used in Computer Science courses but they are not usually written for this purpose. Government information is often used in education, as are images of paintings and objects held in museums and galleries. The resources to be used by learners then, do not always originate from the educational or even the same communities and their description for discovery purposes can be very specific to the community from whence they come. In order to discover resources across communities or disciplines, then, the descriptions of the accessibility characteristics of resources need to be consistent with descriptions used in those communities.
Dublin Core metadata is not domain specific. As DC metadata is commonly used by governments, museums, galleries, and others for information sharing, AccessForAll needs to be able to take advantage of their interoperability. DC metadata also has the advantage that it is used in many countries for resources that are created in many different languages and so can be used for cross-language discovery.
Not everything that will be useful to have as AccessForAll metadata is unique to the AccessForAll model so in a DC implementation, a significant amount of information will be expressed using standard DC elements. Exactly how to do this will be described in a DC Application Profile for which specific terminology (semantic values) will be defined. The value of this work for DC users is that they will be able to express the AccessForAll metadata in DC compliant ways so it will interoperate with other DC metadata. They will also be able to use standard DC applications without significant modification.
In summary, AccessForAll needs to interoperate with a number of other relevant metadata specifications and standards.
In 2003, Kevin Keenoy reported on the main metadata standards in use in education:
The Dublin Core Metadata Element Set seems to be by far the most widely accepted and used set of metadata standards for ‘core’ categories applicable to any internet-based content. Almost all existing learning object metadata standards use the Dublin Core as a basis and then extend it with more specialised elements. (Keenoy, 2003, p.2)
The standard builds on the Dublin Core, and is based on recommendations from the ARIADNE project and IMS (see later). The LOM metadata specification forms the basis of almost all existing implementations of metadata specifications for learning objects, and should probably be the basis for metadata used in SeLeNe. (Keenoy, 2003, p.3)
He goes on to explain the complex relationship between the many contexts and formats of the LOM standard, but makes clear that they are closely related. This is also the case in Australia, where the Education Network Australia uses a DC-based metadata schema, as do many of the other educational systems in Australia.
So, between them, IEEE LOM and DC metadata describe a vast proportion of the resources that are of interest in education. Many educational systems use IEEE LOM metadata to describe their resources but others use DC metadata. It makes sense that these two communities should be able to exchange metadata records about their resources so they can, in fact, share their resources. To do this, they need to be able to transform metadata from one specification to the other. There is an activity, started in 2001, that aims to bring the two sets of specifications into harmony. It cannot be done easily because LOM and DC metadata are based on very different models.
The LOM abstract model is hierarchical and, instead of having property-value pairs as DC metadata does, it has a rule that every element is either a container (of other elements) or a leaf (holding a value). This is a more typical model but very different from the DC one. Attempts to cross-walk (transform) metadata from LOM to DC metadata, or vice versa, typically result in substantial loss of either detail or value. LOM metadata has many more elements than the simple DC core set, so when LOM metadata is transformed into DC metadata there is a many-to-few transformation with a lot of metadata being discarded. When DC metadata is transformed for use as LOM metadata, a lot of the metadata of interest to educators is found to be missing. DC metadata lacks the structure of LOM metadata: DC metadata is ‘flat' while LOM metadata is hierarchical.
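The many-to-few, hierarchy-to-flat loss can be sketched directly. The element names below are simplified stand-ins for the real LOM tree and the real DC element set, invented for the example.

```python
# Sketch of why the LOM-to-DC direction is lossy: flattening a
# hierarchical (container/leaf) record into flat property-value
# pairs discards both structure and many elements. Element names
# are simplified stand-ins for the real schemas.

lom = {                               # containers hold further elements
    "general": {"title": "Campus map", "language": "en"},
    "educational": {"interactivityType": "expositive",
                    "typicalAgeRange": "18-25"},
}

DC_MAP = {"title": "dc:title", "language": "dc:language"}  # few targets

def flatten_to_dc(record):
    flat = {}
    for container in record.values():      # walk the hierarchy
        for leaf, value in container.items():
            if leaf in DC_MAP:             # many-to-few: most leaves drop
                flat[DC_MAP[leaf]] = value
    return flat

print(flatten_to_dc(lom))
# the educational container does not survive, and neither does the
# container structure itself
```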
Figure ??? shows that in order to send a query across a number of metadata repositories in use in education, a special federated system is required. In this case, Stefaan Ternier et al (2008) have defined a new query language to facilitate this process, but each repository has to develop its own special way of exchanging information using that language; it is not possible to send the same query, as is, to all repositories, because their metadata schemas are not interoperable.
There have been several attempts to find good ways of moving metadata back and forth from one system to the other without loss. In late 2005, what appeared to be a useful model was developed for this. Early work focused on moving information expressed as metadata from one system to the other, but more recently it was decided that it is more effective to relate the elements that contain that information and then express the metadata in whatever syntax is chosen. Mikael Nilsson explains this in his model:
(From “The Future of Learning Object Metadata Interoperability: Towards a Framework for Metadata Standards”.)
We have demonstrated that true metadata interoperability is still, to a large extent, only a vision, and that metadata standards still live in relative isolation from each other. The modularity envisioned in application profiles is severely hampered by the differences in abstract models used by the different standards, and efforts to produce vocabularies often end up in the dead end of a single framework. In order to enable automated processing of metadata, including extensions and application profiles, the metadata will need to conform to a formal metadata semantics.
To achieve this, there is a need for a radical restructuring of metadata standards, modularization of metadata vocabularies, and formalization of abstract frameworks. RDF and the Semantic Web provide an inspiringly fresh approach to metadata modelling: it remains to be seen whether that framework will be reusable for learning object metadata standards.
This suggested that it may not be until there is a shared, single IEEE LOM/DC abstract model for education that there will be perfect interoperability between DC and IEEE LOM resource descriptions. It may, on the other hand, be possible in the particular case of AccessForAll metadata, because it is based on a more interoperable abstract model.
The document presents a "layered" approach, describing four distinct "interoperability levels", each building on the previous one, and attempting to specify clearly the assumptions and constraints which apply at each of those levels, and the expectations which a consumer can have for metadata provided "at" a specified level.
Level 1: "Informal interoperability", based
essentially on the natural-language definitions of metadata terms;
Level 2: "Semantic interoperability", based on the RDF model;
Level 3: "DCAM-based syntactic interoperability", introducing the notions of descriptions and description sets, as defined by the DCMI Abstract Model;
Level 4: "Singapore Framework interoperability", in which an application is supported by the complete set of components specified by the Singapore Framework for Dublin Core Application Profiles.
Keenoy (2003, p.7) points to a set of standards that are used to describe, in one way or another, learners for the purposes of learning management systems. DC is conspicuously missing from the list (as is to be expected according to Chapter 8). This is primarily because the DC profile has not yet been developed, but also because the AccessForAll proposal does not attempt to describe permanent characteristics of people, as does most learner profile metadata.
A key challenge in accessibility is the diversity of need; different people require different accommodations. Established approaches towards addressing this are to allow customization by the end user (e.g. text size and color) and to offer alternative presentations of the same content where automatic customization is not possible (e.g. text description of diagrams or audio descriptions of video content).
Integrated systems potentially offer an efficient way of managing and even extending this. They can personalize the way the interface and the content are presented to the user and further, which content is presented to them can be determined by the system on the basis of stored information about them and their preferences.
Such systems offer organisations the opportunity to efficiently manage their requirement to meet the needs of their users with disabilities. If they implement user profiles and adopt the AccessForAll approach, the system will “know” how best to present content and interfaces to each individual user. If they implement the approach for the metadata of the content stored in their repositories, then the system can automatically offer their content, and other information, in the most appropriate format to meet individual user needs.
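The matching behaviour such a system would perform can be sketched in outline. The following Python fragment is a hypothetical illustration only: the field names and records are invented, and real ACCLIP/ACCMD descriptions are far richer. It shows the essential AccessForAll idea of comparing a user's needs and preferences (PNP) with descriptions of candidate resources to select the most suitable form of the content:

```python
# Hypothetical sketch of AccessForAll-style matching: a user's stated
# needs and preferences (PNP) are compared with resource descriptions
# to pick the most suitable form of the content. All field names and
# records here are invented for illustration.

user_pnp = {
    "requires": {"captions"},        # e.g. the user cannot use uncaptioned audio
    "prefers": {"high-contrast"},
}

resources = [
    {"id": "video-plain", "features": set()},
    {"id": "video-captioned", "features": {"captions"}},
    {"id": "video-captioned-hc", "features": {"captions", "high-contrast"}},
]

def best_match(pnp, candidates):
    """Return the candidate meeting all requirements, preferring the one
    that also satisfies the most preferences; None if none qualify."""
    suitable = [r for r in candidates if pnp["requires"] <= r["features"]]
    if not suitable:
        return None
    return max(suitable, key=lambda r: len(pnp["prefers"] & r["features"]))

print(best_match(user_pnp, resources)["id"])  # → video-captioned-hc
```

Requirements act as a hard filter while preferences only rank the survivors; that distinction is what lets the system always serve something usable even when no resource is ideal.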
The Semantic Web offers one obvious technology that will be enabled by the AccessForAll approach. Already the AccessForAll specifications recommend using EARL so that the metadata will be as flexible and rich as possible. The range of other extensions includes opportunities for valuable cross-lingual exchanges to suit learner needs, as well as cross-disciplinary changes of emphasis. Applications and Web services that transform resources or resource components to suit the needs of users with cognitive disabilities are a huge area that has hitherto not received the attention it deserves.
This chapter addresses the question of generality: will this research lead to new behaviours and so make the Web of information more accessible to more people? It focuses on the interoperability issues that put pressure on all metadata developments and, in particular, on AccessForAll metadata.
By July 2006, it was clear that the AccessForAll approach was being adopted in the educational domain (Appendix 4). By October 2007, there were 86 resources listed as relevant to AccessForAll, and a glance through the list shows the dissemination of this idea throughout the academic world (Appendix 5). The Accessibility Guidelines that preceded the AfA work were read 176,505 times between September 2002 and June 2006, and in the same period the IMS AfA Specifications were downloaded 28,082 times. The United Kingdom Government had included the need for metadata in its standard for accessible documents (Appendix 6) and, on October 16, 2007, the Australian Government Locator Standard Committee voted to include an AccessForAll metadata element for all accessible documents in Australia (IT-021-08, 2007, p. 14). At the same AGLS meeting, the National Library of Australia representative reported that the NLA is starting to write metadata for individual components such as images and songs (IT-021-08, 2007, p. 14). This is an important, although independent, action that will contribute towards implementation of AccessForAll. Finally, ISO/IEC JTC1 SC35 is now developing a user profile for use with the universal remote console (Chapter 8).
An important paper from Italy, "Automatically producing IMS AccessForAll Metadata", was published in the Proceedings of the 2006 International Cross-Disciplinary Workshop on Web Accessibility (W4A): Building the Mobile Web: Rediscovering Accessibility? Its authors are Matteo Boni (CRIAD - Centro per la Didattica, Via Sacchi, Cesena), Sara Cenni (Università di Bologna, Via Sacchi, Cesena (FC), Italy), Silvia Mirri (Via Mura Anteo Zamboni, Bologna, Italy) and Ludovico Antonio Muratori (Corso di Laurea in Scienze dell'Informazione, Via Sacchi, Cesena (FC), Italy). The abstract reads:
"Accessible e-learning is becoming a key issue in ensuring a complete inclusion of people with disabilities within the knowledge society. Many efforts have been done to include accessibility information in e-learning metadata and the major result consists in the IMS AccessForAll Metadata definition. Unfortunately the complex behavior managed by this standard could be perceived by authors as a new boring and difficult activity enforcing the idea that the production of accessible Learning Objects (LOs) is too complex to be accomplished. This paper presents a novel component of an authoring and producing software architecture, designed and implemented to automatically create the IMS AccessForAll Metadata description of an accessible LO."
Note that the authors have integrated the process into the authoring workflow and illustrate it with the following diagram:
Having described ACCLIP and ACCMD, they say:
While these metadata represent a truly enabling option, implementing an ACCMD description of each LO could turn into a new tiresome and protracted task for authors. Reducing the distance between users’ needs and authors’ efforts is now a crucial aspect to ensure accessibility of e-learning materials. The solution relies on authoring tools for creating LO that have to accomplish two main goals:
1. Offering support to author in creating fully inclusive materials by suggesting correct behaviors and sometimes imposing the completion of all additional information needed to ensure accessibility (e.g. once the image is inserted, the authoring tool ask for a description that is required for blind users).
2. Automatically structuring the media alternatives, both inserting correct markup inside the (X)HTML pages and describing the whole structure with ACCMD.
Such a tool is now integrated in a complex process used inside the University of Bologna to create accessible LOs. Accessibility of e-learning materials produced has been widely tested by involving a group of people with disability in verifying on-line contents and services. Universality of materials has been tested by using different browser running on different platforms (specifically MS Internet Explorer 5.0 and later, Mozilla Firefox 1.0 and later, Netscape Communicator 7.0 and later, Lynx 2.8.4 rel. 1, IBM Home Page Reader 3.0, Apple Safari 1.0). Finally, LOs produced by our process are compliant to all the constraints considered by the Italian Law on Web Accessibility,
but they also say:
Unfortunately, the IMS description is ignored by the LCMS (Learning Content Management System) in use. Generally this new technology is not fully supported and there are just few solutions that use ACCMD and ACCLIP to provide adaptive accessible contents. We assume that a growing availability of IMS ACCMD tagged LOs will drive the development of adaptive modules for the more diffuse LCMS and will definitively diffuse the use of the whole IMS specification on accessibility.
The ATRC developed a system known as 'The Inclusive Learning Exchange' (TILE), initially as a prototype and then as a production server for students at the University of Toronto; TILE has been made available as open source software.
The author experimented with the idea of distributed metadata 'just for fun'. The result was surprising, and pleasing.
A page of the Australian Broadcasting Commission site offering video on demand (ABC Video On Demand, online at <http://www.abc.net.au/vod/news/>) was visited. The page had been casually recommended as a well-written resource. It was hoped that there might be sufficient information available from the resource for an alternative resource in a different mode to be found relatively easily using Google. On the day of testing (26/4/2006), the author took some words from the 'alt' attribute for a video and submitted them to Google (and Flickr). This led to a blog (<http://biukili.blogspot.com/>) that provided text information about the topic – a surprising and satisfying result given that the first resource, like its topic, was only a few hours old on the Web. Admittedly, news might be a special case, but the exercise was gratifying. Google was used without its special 'similar resource' features; those too might have produced a text description of what was in the video.
In associated work attempting to explain why this approach will work, the author and colleagues have mapped Dublin Core and AccessForAll metadata to the FRBR model. They noted the distinction between metadata used to identify the intellectual content of resources and that used to determine their presentation, control and content characteristics relevant to accessibility (as defined in the user's PNP). They were able to relate all the relevant attributes of potentially suitable resources using the hierarchy in the FRBR model, but found that when it comes to discovery of such a resource, more than subject descriptions may be needed: descriptions of authors, publishers and the like may also be necessary.
Implementation of AfA is not yet simple. While there is a set of machine-readable resources to help those implementing it in the educational context where they use IEEE LOM metadata, this is not yet the case for DC metadata, expected to be a much larger implementation context. Nevertheless, the signs are very positive as shown by the emerging evidence of acceptance of the AccessForAll approach.
The set of POWDER use cases includes the following:
2.1.6 Web Accessibility B (self labeling, content features, profile matching)
A report from Italy included the following:
This work presents components, which are embedded in an existing authoring/producing tool and automatically creates the IMS AccessForAll Metadata description of a LO, starting from the natural structure of multimedia contents.
Such a tool is now integrated in a complex process used inside the University of Bologna to create accessible LOs. Accessibility of e-learning materials produced has been widely tested by involving a group of people with disability in verifying on-line contents and services (Boni et al, 2006).
That tool and its use are described in more detail in "Automatically Producing Accessible Learning Objects" (Di Iorio et al, 2006). The author also reported the benefit of using good accessibility evaluation tools that can produce the necessary metadata (Nevile, 2004).
The IMS Tools Interoperability project is part of the Engage project at the University of Wisconsin-Madison. The Engage program partners with UW-Madison faculty and academic staff to apply innovative uses of technology for teaching and learning. In this project, UW-Madison, WebCT, Blackboard, Sun Microsystems, SAKAI, QuestionMark, and staff from Stanford, UC Berkeley, MIT, Indiana University and the University of Michigan are all involved. A special server edition of ConceptTutor and a Moodle LMS were proposed for the 2005 Alt-i-lab conference [Alt-i-lab 2005] in Sheffield, England, in June 2005.
The aim is:
To promote accessibility and to demonstrate the use of IMS ACCLIP and ACCMD standards for accessibility, we have modified Fedora to implement an RDF binding of ACCLIP and ACCMD. A student’s accessibility preferences are matched to the accessibility characteristics of the content at the time of the request. Thus, a visually impaired student will receive content tuned to her needs when she requests a ConceptTutor without having to know how to request the specially tuned content (Engage, 2007).
In "Beyond the LOM: A New Generation of Specifications," Michael J. Halm says:
The importance of the ACCLIP specification may not be immediately understood, but this specification provides enormous opportunities to customize and adapt the learning experience based on the users preference. This powerful capability now can be used for anyone, not just those with disabilities. These preferences will be stored in the Learner Information Package and could travel with the learner from one on-line environment to another. Since these preferences are created and maintained by the learner, this gives the individual the control to change the environment as needed. This also allows one to consider the learning style of the learner as part of the environment. Visual learner will be better able to set preferences that are unique to the type of way they learn. This preference can translate into the type of learning objects that are selected and deliver in the learning environment (Halm, 2003).
SAKAI is a university consortium effort to develop a set of open source tools for tertiary education.
FLUID is a large project in which the AccessForAll idea is taken to the next logical step: while it is useful to be able to substitute content components, it is also necessary to be able to substitute user interface components, and that is what FLUID is about.
In late 2007, the WCAG Working Group is finalising Version 2.0 of WCAG. The last remaining problem is what to do about metadata, and it has produced some interesting challenges. The AccessForAll position, put by the author to the WCAG WG, is that there should be metadata describing the content of every resource, including its accessibility characteristics, on every Web page that is considered accessible. The Chair of the WCAG WG, Gregg Vanderheiden, is interested because he sees that where a page is accessible in the sense that it is conformant, someone who wants a version of the page that happens to suit them but is not fully conformant might want to find that version. As Jutta Treviranus wrote (24/10/2007 - email):
I think we are missing the point. An important consideration is that Metadata does not require and is not about conformance. It is about labelling and finding accessible resources. You need to think beyond a single site or a single page. If there are a number of resources and some are accessible to you and some are not, Metadata helps you to find the ones that are accessible to you or alternatively to gather the same information as the Web resource you want from a number of pieces that are accessible to you. So is WCAG only about access to a single site or about access to the Web? If it is about access to the Web then you need to think about systems and varied resources, some that are more accessible to a given user and some that are not.
The response to this, which some found disappointing, was:
This is beyond the scope of WCAG 2.0. It sounds like a good candidate for the next version.
WCAG 2.0 is addressing the accessibility of Web pages, the unit of conformance. There are a number of other issues related to the larger view of the web that have also been deferred to future work. (Loretta, 24/10/2007 - email)
One major constraint on W3C's work is that it must result in technical specifications; nothing can be recommended that cannot be tested. Another is that any requirement must be achievable in every case. Vanderheiden posed the problem of the resource that is to be published but, by law, cannot be altered in any way in the process. An example is a historic digital image that has value precisely as that image. The problem with such an image would be that metadata could not be added to it, nor even a link to metadata. Fortunately, just as this problem was being considered, another W3C Working Group released the first version of a solution. The Internet Content Rating Association community wants to be able to add metadata about resources that is very similar in kind to AfA metadata: they want to describe the characteristics of resource content that lead to ratings for nudity, violence, and so on. The W3C Protocol for Web Description Resources (POWDER) Working Group [POWDER WG] developed POWDER to enable such information to be conveyed via the HTTP headers of a resource, and this is just what is needed for the Vanderheiden problem. The issue is what is to be conveyed, and the POWDER WG has now modified its examples to include two use cases that draw upon AfA metadata.
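The mechanism can be sketched briefly. In the fragment below, the URLs and function name are invented for illustration; the point is that a POWDER-style description is associated with the resource out of band, through the HTTP response headers (POWDER uses a "describedby" link relation), so the bytes of the legally unalterable resource remain untouched:

```python
# Sketch of conveying metadata about an unalterable resource via HTTP
# headers rather than in the resource itself. The URLs are invented;
# "describedby" is the link relation POWDER uses to point from a
# resource to a separate description document.

def build_response_headers(resource_url, description_url):
    """Headers for serving `resource_url` with an out-of-band pointer
    to its description resource; the body bytes are unchanged."""
    return {
        "Content-Type": "image/jpeg",
        "Link": f'<{description_url}>; rel="describedby"',
    }

headers = build_response_headers(
    "http://example.org/archive/historic-image.jpg",
    "http://example.org/archive/historic-image.powder.xml",
)
print(headers["Link"])
```

A client that understands the link relation can fetch the description document and apply the same matching it would have applied to embedded metadata; a client that does not simply ignores the header.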