Audio-Visual Assistive Tools for use with the Personal Communication Link

Personal Communication System (PCS) to support assistive communication on a handheld device. The system includes several new assistive communication applications based on wireless technology, i.e. using the Personal Communication Link (PCL) to connect with hearing aids or headsets. The communication applications will bridge the PCL to other communication systems (e.g. phone, internet, public address). New assistive applications can be designed to benefit both normal-hearing and hearing-impaired persons.
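As a rough illustration of this bridging idea, the sketch below routes audio frames from different communication systems through a single link object to a hearing device. All class and method names are assumptions made for illustration; they are not part of any specified PCS or PCL interface.

from dataclasses import dataclass
from typing import Iterable, Protocol


@dataclass
class AudioFrame:
    source: str      # e.g. "phone", "internet", "public_address"
    samples: bytes   # raw PCM audio for one frame


class HearingDevice(Protocol):
    def play(self, frame: AudioFrame) -> None: ...


class LoggingHearingAid:
    """Stand-in for a wireless hearing aid or headset reached over the PCL."""
    def play(self, frame: AudioFrame) -> None:
        print(f"[hearing aid] {len(frame.samples)} bytes from {frame.source}")


class PersonalCommunicationLink:
    """Routes frames from any connected communication system to the device."""
    def __init__(self, device: HearingDevice) -> None:
        self.device = device

    def bridge(self, frames: Iterable[AudioFrame]) -> None:
        for frame in frames:
            self.device.play(frame)


if __name__ == "__main__":
    pcl = PersonalCommunicationLink(LoggingHearingAid())
    pcl.bridge([AudioFrame("phone", b"\x00" * 320),
                AudioFrame("public_address", b"\x00" * 320)])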

[Figure: D8_2Fig1.jpg]

Personal Hearing System (PHS) integrated into a mainstream handheld device, allowing interaction with other assistive services and providing advanced audio signal processing for use with hearing devices. This hearing system may provide a low-cost, easy-to-adopt tool for improved speech understanding.
For practical reasons the PHS may first be integrated into a separate portable system, but its design will target integration within a PCS.
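The kind of processing involved can be hinted at with a minimal sketch: a per-sample gain and a crude noise gate on 16-bit PCM audio. Real PHS processing (noise reduction, compression, directional filtering) would be far more sophisticated; the gate threshold and gain values here are arbitrary assumptions.

import array


def enhance(pcm: bytes, gain: float = 2.0, gate: int = 500) -> bytes:
    samples = array.array("h", pcm)          # signed 16-bit samples
    out = array.array("h")
    for s in samples:
        if abs(s) < gate:                    # suppress low-level noise
            s = 0
        s = int(s * gain)                    # apply amplification
        s = max(-32768, min(32767, s))       # clip to 16-bit range
        out.append(s)
    return out.tobytes()


if __name__ == "__main__":
    frame = array.array("h", [100, 1200, -3000, 400]).tobytes()
    print(array.array("h", enhance(frame)).tolist())   # [0, 2400, -6000, 0]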

Wireless Public Address (WPA) that relays audio and textual information from local public announcements to the PCS over a local area wireless network. At the PCS the information will be processed for personal use. Textual information will be generated by a client-server ASR application and/or supplied as direct textual information integrated into the wireless information stream.
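A hedged sketch of how such an announcement might be handled on the PCS: each message carries optional direct text plus a reference to the announcement audio, and the handler falls back to ASR when no text is supplied. The JSON field names are assumptions for illustration only, not a defined WPA message format.

import json


def handle_announcement(raw: bytes) -> str:
    msg = json.loads(raw.decode("utf-8"))
    text = msg.get("text")
    if text:                                  # directly supplied text
        return text
    # otherwise fall back to recognising the attached audio (placeholder)
    return f"<ASR transcript of {msg.get('audio_ref', 'announcement audio')}>"


if __name__ == "__main__":
    packet = json.dumps({
        "station": "platform 3",
        "text": "Train to Utrecht delayed by 10 minutes",
        "audio_ref": "announce_0042.wav",
    }).encode("utf-8")
    print(handle_announcement(packet))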

Automated Speech Recognition (ASR) for use on a PCS to assist hearing-impaired persons with communication tasks. The project will research how ASR can be targeted and improved to support hearing and communication tasks. This may lead to ASR systems optimized for specific applications for hearing-impaired persons (for instance, bimodal speech and text communication). The ASR application is planned to be demonstrated as a client-server application.
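The client-server split could look roughly as follows: the PCS side forwards captured audio and presents the returned text, while recognition runs on a server. The server below is an in-process stand-in; a real deployment would use a network transport, and no specific ASR engine or API is implied.

from dataclasses import dataclass


@dataclass
class AsrRequest:
    audio: bytes
    language: str = "en"


class AsrServer:
    """Stand-in for the remote recogniser; returns a canned transcript."""
    def recognise(self, request: AsrRequest) -> str:
        return f"<transcript of {len(request.audio)} bytes of {request.language} speech>"


class AsrClient:
    """Runs on the PCS: forwards audio and returns the text for presentation."""
    def __init__(self, server: AsrServer) -> None:
        self.server = server

    def transcribe(self, audio: bytes) -> str:
        return self.server.recognise(AsrRequest(audio=audio))


if __name__ == "__main__":
    client = AsrClient(AsrServer())
    print(client.transcribe(b"\x00" * 1600))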

Bimodal Communication: the presentation of textual information to improve communication in adverse conditions. For this, textual information will be presented on the PCS in addition to speech. The information will originate from an automated speech recognizer or consist of (possibly condensed) text from other information sources. The presentation of the bimodal text information is intended to be integrated on the PCS, e.g. to work with the client-server ASR application.
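A minimal sketch of such bimodal presentation: recognised or supplied text is optionally condensed to fit the handheld display and shown while the corresponding speech is played. The condensation rule, display width, and function names are illustrative assumptions only.

import textwrap


def condense(text: str, max_chars: int = 80) -> str:
    # crude condensation: truncate overly long messages for a small display
    return text if len(text) <= max_chars else text[: max_chars - 3] + "..."


def present_bimodal(audio: bytes, text: str, width: int = 32) -> None:
    # audio playback would happen here; printing stands in for the display
    for line in textwrap.wrap(condense(text), width=width):
        print(line)


if __name__ == "__main__":
    present_bimodal(b"", "Attention please: the meeting has been moved "
                         "to room B12 and starts fifteen minutes later.")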

Public Report D-9-1: Requirements specification of users' needs for assistive applications on a common platform (PDF)