The PhonicStick is a novel augmentative and alternative communication (AAC) joystick-like device that enables individuals with severe speech and physical disorders to access forty-two sounds (phonics) and blend them together to create spoken words. The device aims to allow users to select phonics and produce speech without the need for a visual interface. One problem with the current prototype of the PhonicStick is that phonic entry is relatively slow and may require many physical movements, which causes great difficulty for users with poor hand function.
Therefore, in this research we are investigating whether natural language processing (NLP) technology can be used to facilitate the phonic retrieval and word creation processes. Our goal is to develop a set of phonic-based NLP acceleration methods, such as phonic disambiguation and phonic prediction, that will reduce the effort required to select the target phonics and increase the speed of producing words. This paper discusses the challenges of applying such methods to the PhonicStick and reports on the current state of development of the proposed techniques. The presentation will also include a live demonstration of the latest PhonicStick prototype.
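To illustrate the general idea behind phonic prediction, the following minimal sketch ranks candidate next phonics by how many words in a lexicon they would continue. The lexicon, its phonic spellings, and the function name `predict_next` are illustrative assumptions for this sketch, not the PhonicStick's actual vocabulary or implementation.

```python
# Hypothetical sketch of phonic prediction: given the phonics selected so
# far, rank candidate next phonics by how many lexicon words they would
# continue. Lexicon and spellings are illustrative assumptions only.
from collections import Counter

LEXICON = {
    "cat": ["k", "a", "t"],
    "can": ["k", "a", "n"],
    "cap": ["k", "a", "p"],
    "sit": ["s", "i", "t"],
    "sun": ["s", "u", "n"],
}

def predict_next(selected):
    """Return next-phonic candidates, most frequent continuation first."""
    counts = Counter()
    n = len(selected)
    for phonics in LEXICON.values():
        # A word contributes its (n+1)-th phonic if it starts with the
        # phonics already selected and has at least one phonic remaining.
        if phonics[:n] == selected and len(phonics) > n:
            counts[phonics[n]] += 1
    return [p for p, _ in counts.most_common()]

print(predict_next(["k", "a"]))  # candidates after selecting /k/ /a/
```

A predictor of this kind could let the device offer the most likely next phonics first, reducing the number of joystick movements needed per word; a full system would presumably weight candidates by word frequency or context rather than raw counts.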