Deep Accessibility

Last week I said that ‘If open data, and its access by citizens, is as important as governments seem to think, then the deep accessibility of that data is just as important.’ Now, we’ve seen the specific case in relation to Big Open Data; but what do I mean by deep accessibility in more general terms…?

Deep Water

Most of us know what accessibility is – or at least we think we do. We normally think that good accessibility goes hand in hand with following standards and providing ‘additional’ information for use by assistive technologies. This additional information is usually aimed at turning the implicit into the explicit (for example, alt attributes turn images, which are implicitly described visually, into something explicitly described for people without vision).
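
As a rough illustration of this ‘shallow’, mark-up level of accessibility, the sketch below simply checks whether images carry the explicit alt description that assistive technologies rely on. It is a minimal example assuming the BeautifulSoup library, not a real accessibility audit.

```python
# A minimal sketch of a 'shallow' accessibility check: does the mark-up
# make the implicit (the image's visual content) explicit via alt text?
# Assumes the BeautifulSoup library; a real audit needs far more than this.
from bs4 import BeautifulSoup

html = """
<img src="chart.png" alt="Monthly spend per department, rising through the year">
<img src="logo.png">
"""

soup = BeautifulSoup(html, "html.parser")
for img in soup.find_all("img"):
    alt = img.get("alt")
    if alt:
        print(f"{img['src']}: explicitly described as '{alt}'")
    else:
        print(f"{img['src']}: implicit only, nothing for assistive technology to announce")
```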

But I think we need to go beyond this conception of accessibility, this shallower view, and start to add aspects to the computational model of accessibility that are not just about mark-up making the implicit explicit.

The most important part of deep accessibility is that knowledge needs to be created to fill in the gaps left in the data and/or code of the original computational artefact. It is not enough simply to rely on attributes being present; for deep accessibility we may need to create new information for accessibility to effectively occur. For example, in large data sets the raw information may very well be accessible, but without suitable tools to summarise and enunciate it the data is not accessible. In this example, these tools need to be able to create summaries, gists, and glances, all forms of description extrapolated from the raw data. This extrapolation can only occur with some degree of understanding, and for this we need meaning.
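
To make this concrete, here is a minimal sketch of the kind of tool I have in mind, in Python and over an invented table of monthly spending figures (the column names, numbers and wording are assumptions for illustration, not a definitive implementation): it takes raw values that are technically ‘accessible’ and extrapolates a short, speakable gist from them.

```python
# A minimal sketch of turning 'open but impenetrable' raw data into a gist.
# The dataset, column names and wording are invented for illustration only.
import statistics

# Imagine these rows are drawn from a large open-data release of monthly spending.
raw_data = [
    {"month": "Jan", "spend": 120_000},
    {"month": "Feb", "spend": 125_000},
    {"month": "Mar", "spend": 260_000},
    {"month": "Apr", "spend": 122_000},
]

def gist(rows):
    """Extrapolate a short, speakable summary from the raw rows."""
    values = [row["spend"] for row in rows]
    mean = statistics.mean(values)
    peak = max(rows, key=lambda row: row["spend"])
    summary = (f"{len(rows)} months of spending, averaging about "
               f"{mean:,.0f} per month.")
    # A crude heuristic standing in for 'understanding': flag the outlier.
    if peak["spend"] > 1.5 * mean:
        summary += (f" {peak['month']} stands out at {peak['spend']:,}, "
                    "well above that average.")
    return summary

print(gist(raw_data))
```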

It seems to me that deep accessibility can be thought of as seamless analysis of the inaccessible or impenetrable raw computational artefact to afford the user direct access to its meaning.


5 thoughts on “Deep Accessibility”

  1. I’m an e-government consultant in the Netherlands, specializing in both web accessibility and public sector information (PSI)/open data. This is a great post, on something that needs to be addressed and, in our case, to have policies developed for it.
    However, IMHO there’s one hole in the argument: the assumption that the stored raw data has a meaning in just about any context, it being open to any and all uses in society. If the data is *produced* for that purpose from the very start, that may (or even may not) be the case; but as PSI is collected/gathered/produced in order to facilitate the proper execution of democratically sanctioned tasks by trained professionals, within very specific contexts, there’s a giant leap between internal data being made available for other uses and being able to access the *meaning* of that data. It’s why we have professionally trained public servants who work for a politically and democratically managed organization in the first place.
    Data put in context is information. Information made applicable is knowledge. In both upgrades, trained professionals are essential. A good example is medical records: data about someone’s medical history, status and related factors. But you’d have to be a trained medic to put that data in its proper context, and a specialized doctor to be able to apply the information to the treatment of a patient. Aside from the privacy issues, the meaning of such data is in the hands of professionals.

    • Thanks Herko,

      To an extent I agree; however, the raw data does have a meaning, and the interpretation is the thing that I think gives context. That interpretation could be quite ‘vanilla’ in some cases, and we’d just have to deal with it. But by combining both meaning (semantics) and heuristics we could do better than we are currently. Indeed, I would suggest that if the data can’t be interpreted without specialised knowledge, why are we making it open? What value does open but non-interpretable data have? However, semantics can be used to add meaning across different specialities, by making the data explicit, and I would also argue across different conceptual or cognitive levels.

      Let me address your comment with a response to your example. While the raw data collected in a patient’s records is very specialised with regard to treatment and what that means for the clinician, the implications for the patient are still predictable from the raw data. The computational machinery would need to know the context of the interpretation (be it that of a patient or a specialist, say), but the machinery could then provide a context-sensitive interpretation of the raw data; a rough sketch of what I mean appears after the comments below.

      My mother had Alzheimer’s, and was discharged on occasion from hospital with a discharge letter. The raw data was not accessible to most people. After searching the Web I was able to translate the clinical language on the discharge form (intended for any clinician upon re-admission, if required) from the medical context into the patient context. The meaning was then accessible to me, and in fact allowed me to pursue the hospital regarding inaccurate reports that re-admission (if required) was not appropriate.

      I think this is another example of deep access to data, and in fact of the addition of interpretation and context (including conceptual or cognitive levels).

  2. Pingback: Defining Deep Accessibility (or how to make Mustikka-rahkapizza) | Thinking Out Loud…

  3. Pingback: Accessibility for All | Thinking Out Loud…

  4. Pingback: Google Cards NOW! | Bugs Become Features…
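
As promised in the reply above, here is a rough sketch of the context-sensitive interpretation being discussed: the same raw clinical terms rendered for a clinician and for a patient. The glossary, terms and wording are invented for illustration; a real system would need curated semantics and professional input, as the thread notes.

```python
# A rough sketch of context-sensitive interpretation of raw clinical terms.
# The glossary and wording are invented for illustration; a real system would
# need curated semantics and professional oversight, as the comments point out.
GLOSSARY = {
    "NIDDM": {
        "clinician": "NIDDM (type 2 diabetes mellitus); continue current management.",
        "patient": "Type 2 diabetes, managed with diet and/or tablets rather than insulin.",
    },
    "TIA": {
        "clinician": "TIA, no residual deficit; antiplatelet therapy commenced.",
        "patient": "A 'mini-stroke' whose symptoms have fully resolved; medication was "
                   "started to reduce the risk of another.",
    },
}

def interpret(terms, context="patient"):
    """Render the same raw terms at the conceptual level of the reader."""
    lines = []
    for term in terms:
        entry = GLOSSARY.get(term)
        if entry is None:
            lines.append(f"{term}: no interpretation available for this context.")
        else:
            lines.append(f"{term}: {entry[context]}")
    return "\n".join(lines)

# The same raw discharge data, rendered for two different readers.
print(interpret(["NIDDM", "TIA"], context="clinician"))
print(interpret(["NIDDM", "TIA"], context="patient"))
```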
