Auditory saliency and its role in accessible gaming

Today I met with my supervisors to discuss my upcoming transfer viva and interesting topics for studies moving into 2015. With the report nearing completion, my supervisors gave some valuable feedback on its tone and message, especially the necessity of a coherent narrative. I’ve tried to be as thorough as possible in my report writing, but I need to make sure that the reason for reading it is conveyed clearly to the reader.

After discussing the report we moved on to the interesting stuff: what to start working on next. The overarching theme of my thesis is inclusive gaming. By this, I mean design techniques and technologies that can enable games to reach a wider audience by becoming artefacts of accessible design.

How can an interface surpass the status quo and become “accessible” whilst maintaining the same level of efficiency as one for sighted players? Given that our aural sense has a much smaller capacity for interpreting information than our visual sense, this sounds unachievable. However, much of the information our visual sense perceives in the occipital lobe and propagates through the visual cortex is redundant, irrelevant. Most of the time, only a subset of the data perceived is of interest, or rather necessary, to understand the scene we are observing. In other words, what we want is salience.

Whilst current accessible tools focus on mapping visual elements 1:1 to an equivalent yet slower modality (temporality issues exist in screen readers and text-to-speech systems), might there be a way to extract the information in a visual scene, remove the non-salient information, and then compress the rest into a form that enables a ~1:1 mapping between the objects in the scene and the sounds used to represent them?
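To make the idea concrete, here is a minimal sketch of what that filter-then-map step could look like. Everything in it is invented for illustration: the `SceneObject` type, the `sonification_map` function, the saliency threshold, and the pitch-per-object scheme are all assumptions, and a real system would get its saliency scores from an actual saliency model rather than hand-set values.

```python
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str        # e.g. "enemy", "door"
    saliency: float  # 0.0-1.0, assumed to come from some saliency model

def sonification_map(objects, threshold=0.5, base_hz=220.0, step=1.25):
    """Drop non-salient objects, then give each survivor a distinct pitch.

    Returns a list of (name, frequency_hz) pairs: a ~1:1 mapping from
    salient scene objects to audio cues.
    """
    # Keep only objects above the saliency threshold, most salient first.
    salient = sorted(
        (o for o in objects if o.saliency >= threshold),
        key=lambda o: o.saliency,
        reverse=True,
    )
    # The most salient object gets the base pitch; each subsequent
    # object is raised by a fixed ratio so the cues stay distinguishable.
    return [(o.name, base_hz * step ** i) for i, o in enumerate(salient)]

scene = [
    SceneObject("enemy", 0.9),
    SceneObject("health pack", 0.6),
    SceneObject("wall texture", 0.1),  # redundant detail, filtered out
]
print(sonification_map(scene))  # → [('enemy', 220.0), ('health pack', 275.0)]
```

The point of the sketch is only the shape of the pipeline: filter by salience first, then assign sounds, so the number of concurrent audio cues stays small enough for the listener to track.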

Back to the books for me. Below are references shared with me by my supervisor. Passing them on here.