To make Mustikka-rahkapizza, you need vettä (water). It’s a recipe I learnt while on holiday in Helsinki, but I could not have learnt it without my smart translator app, which uses a combination of camera, OCR, and translation engine to give me a view onto the recipe I would not normally have. In this case the paper recipe was made accessible by my translator application, which added value by converting and adding information via a computational process. This idea of different views onto information, or functionality, adding new meaning by performing some computational process is just the kind of deep accessibility I’ve been thinking about recently; and maybe isn’t our usual conception of accessibility.
Now it seems to me that deep accessibility can be thought of as a seamless analysis of an inaccessible or impenetrable raw computational artefact, affording the user direct access to its meaning. The camera translator app is a nice example of a deep accessibility application, of sorts. It uses computational machinery, coupled with semantics (meaning) and raw information (the recipe text, in the case of Mustikka-rahkapizza), to give me something that was not present previously – to facilitate my access to that artefact.
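To make the pieces concrete, here is a toy TypeScript sketch of that pipeline – the `ocr` and `translate` stages are hypothetical stand-ins of mine, not any real API, and the recipe text is just an illustration:

```typescript
// A toy sketch of Raw Data + Computation + Semantics, assuming hypothetical
// ocr() and translate() stages (a real app would call OCR and translation
// engines here). The raw artefact passes through computational stages that
// each add meaning I could not extract from the pixels myself.

type Photo = { pixels: Uint8ClampedArray };

// Stand-in OCR: pretend we recognised the Finnish recipe text.
const ocr = (_image: Photo): string =>
  "Mustikka-rahkapizza: tarvitset vettä";

// Stand-in translation: a two-word dictionary in place of a real engine.
const translate = (text: string): string =>
  text.replace("tarvitset", "you need").replace("vettä", "water");

// Computation over raw data, then semantics layered on top.
function makeAccessible(snapshot: Photo): string {
  return translate(ocr(snapshot));
}

console.log(makeAccessible({ pixels: new Uint8ClampedArray(0) }));
// -> "Mustikka-rahkapizza: you need water"
```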
We can already see this kind of deep accessibility (Raw-Data + Computation + Semantics = Deep Accessibility) in play in projects which sense dynamic updates to web pages via the DOM, by listening for the events fired to communicate the fact that an update has occurred (in this case a DOM Mutation Event). Here we are relying on listening to interprocess communications via the event handler to ‘understand’ what is happening and replicate it in our own accessibility-centric model.
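As a rough illustration of the sensing side (DOM Mutation Events are now deprecated, so this sketch uses the modern MutationObserver API instead, but the principle is the same):

```typescript
// A minimal sketch of sensing dynamic page updates and mirroring them into
// our own accessibility-centric model (here, just a log). We intercept the
// update traffic rather than asking the original code to cooperate.

const observer = new MutationObserver((mutations: MutationRecord[]) => {
  for (const mutation of mutations) {
    if (mutation.type === "childList") {
      // A dynamic update has occurred; 'understand' it well enough to
      // replicate it for assistive output.
      mutation.addedNodes.forEach((node) => {
        console.log("page updated:", node.textContent);
      });
    }
  }
});

// Watch the whole document for structural changes.
observer.observe(document.body, { childList: true, subtree: true });
```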
In fact this mutation example is exactly what I’m getting at – it relies on proactive accessibility code: listening / waiting for general traffic; an explicit understanding of the ‘meaning’ of both communication and data; dedicated functionality, invisible to the user – imbedded (invisible and embedded) – so that the accessibility is seamless and does not require user control; along with a healthy set of heuristics based on domain knowledge to fill in the gaps which can’t be explicitly arrived at. The only thing missing from this example is my final requirement: that both the data and code of the original software are independent of the ‘accessibility’ being created (encapsulation).
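The heuristics piece might, very roughly, look like the sketch below – these rules are illustrative assumptions of mine, not an established ruleset:

```typescript
// A sketch of the heuristic layer: when a sensed update carries explicit
// semantics, use them; otherwise fall back on domain-knowledge guesses.

function classifyUpdate(node: Element): "announce" | "ignore" | "defer" {
  // Explicit meaning: the author marked the region as live or as an alert.
  if (node.getAttribute("role") === "alert" || node.hasAttribute("aria-live")) {
    return "announce";
  }
  // Heuristic: dialogs usually demand the user's attention.
  if (node.tagName === "DIALOG") {
    return "announce";
  }
  // Heuristic: empty nodes are probably layout noise.
  if ((node.textContent ?? "").trim().length === 0) {
    return "ignore";
  }
  // Can't be explicitly arrived at; queue for later inspection.
  return "defer";
}
```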
Deep Accessibility, then, is about understanding and invisibly making the right accessibility decisions based on a semantic knowledge of both data and code functionality, including interprocess communications. We need to understand what the data is about, how it is represented, the purpose of the functionality created to work on that data, and the meaning/intention of the communications used by functions to talk to each other and to the underlying operating system.
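As a purely hypothetical sketch, those four layers of understanding could be modelled something like this (the names are mine, not an established schema):

```typescript
// A hypothetical model of the four layers of semantic knowledge listed
// above, with the recipe example filled in.

interface DeepAccessibilityKnowledge {
  dataMeaning: string;         // what the data is about
  representation: string;      // how it is represented
  functionPurpose: string;     // why the code working on the data exists
  communicationIntent: string; // what its messages and events are meant to convey
}

const recipeExample: DeepAccessibilityKnowledge = {
  dataMeaning: "a blueberry quark pizza recipe",
  representation: "printed Finnish text, captured as camera pixels",
  functionPurpose: "convert and translate so the reader can follow it",
  communicationIntent: "hand the OCR result to the translation engine",
};
```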
So, do we already have deep accessibility? To some extent yes, but mostly it is not imbedded, and that ‘deep accessibility’ which does exist is normally written by the same people who created the original functionality, thereby not fulfilling my encapsulation property.
In reality I think it is pretty difficult to create this kind of Deep Accessibility, but it is both possible and necessary; and it is not just about disability but about all of us being able to access information and functionality as we want or need.
Late update
See also comments on deep accessibility – extending the definition with the addition of interpretation and context (including conceptual or cognitive levels).