What is the Project?
Intelligent Sounds is a new app to help people improve the clarity of their speech sounds. Using the app, you can listen to over 30,000 model recordings of words, taken from Oxford University Press's dictionaries. You can then record your own voice and compare your speech to the model, both visually and aurally.
The touch-screen facility allows you to pinpoint difficult segments of words, or play sounds back slowly. You can create your own list of words and sounds you want to practise. You can use this app on your own, in your own time and at your own pace, or with a speech and language therapist or family member. It’s easy to use, builds your confidence, and gets quick results!
Who are we looking for?
We are looking for people to help us understand the changes we can make to tailor the app for people who have different conditions – for example, post-stroke dysarthria, or speech difficulties resulting from Parkinson’s disease, cerebral palsy, multiple sclerosis or traumatic brain injury.
How do I get involved?
We are looking for people to give feedback on the concept via our survey and to test the existing app. Using your feedback, we will be able to create a more powerful and personalised speech sounds app.
If you would like to get involved, please take our survey here.
More about Jenny
Jenny Dance, a linguist and marketing analyst, runs a small technology start-up called Phona. Phona develops audio-visual apps to help people analyse, practise and improve speech sounds. Phona’s first app, Say It: Pronunciation from Oxford, was developed in partnership with Oxford University Press to help non-native speakers of English improve their pronunciation. Having suffered episodes of slurred, unintelligible speech due to a neurological problem, Jenny could see the potential for Say It to be used as a speech therapy tool as well.
Professor Karen Sage, previously Head of the Bristol Speech and Language Therapy Research Unit, confirmed the potential for the technology to help in speech therapy, and is now a long-term collaborator on the Intelligent Sounds project. Together, Jenny and Karen are looking at how the app can be adapted to support people with a range of conditions that can affect the clarity of speech – for example, stroke, Parkinson’s disease and multiple sclerosis.