A few weeks ago I gave a presentation at the Instituto Superior Técnico (IST), Technical University of Lisbon (UTL). It was about Deep Accessibility and how we should think about adapting interfaces to suit our senses. Here are my slides and two entries in the style of Tiny Transactions on Computer Science (TinyToCS), “the premier venue for computer science research of 140 characters or less”.
Well, as far as I’m aware it’s the ONLY venue for computer science research of 140 characters or less; but here we go:
- Can frequency bounding to the brain's tonotopy enhance the distinguishability of multi-talker auditory interfaces?
- Does neuroplasticity enable people who became blind before the age of 18 to perceive more talkers than sighted users?
So what was I really trying to say here? That ‘Content Driven Transcoding’ focuses on transforming content based on its representation in the DOM; ‘Experience Driven Transcoding’ attempts to transform content based on both its representation and the predicted equivalent experience of the user; and ‘Modality Driven Transcoding’ adds another step, attempting to transform content based on its representation and the predicted equivalent UX, tailored to the sensory modality of the user.
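To make the layering concrete, here is a minimal, purely illustrative Python sketch of the three strategies; the function names and signatures are my own invention, not part of any real transcoding framework, and they show only how each strategy consumes progressively more context:

```python
# Hypothetical sketch: each strategy builds on the previous one,
# consuming one more piece of context about the user.

def content_driven(dom):
    # Transform based only on the content's DOM representation.
    return {"source": dom}

def experience_driven(dom, predicted_ux):
    # Also account for the predicted equivalent user experience.
    result = content_driven(dom)
    result["ux"] = predicted_ux
    return result

def modality_driven(dom, predicted_ux, modality):
    # Finally, tailor the transformation to the user's sensory
    # modality (e.g. an auditory rendering for blind users).
    result = experience_driven(dom, predicted_ux)
    result["modality"] = modality
    return result

transcoded = modality_driven("<p>hello</p>", "skim-reading", "auditory")
```

The point of the sketch is simply that each step is a strict superset of the one before it: modality-driven transcoding cannot be done without first knowing the representation and the predicted experience.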
It is this tailored aspect that is exciting, and that requires us to look deeper into the way we as humans interact with and conceive of our environment and everything in it. In this case I'm suggesting that an understanding of the auditory cortex may be useful for increasing the number of distinguishable talkers in a multi-talker system.
Who knows? But it'll be interesting finding out!