Apple’s iOS 17 update introduces a new accessibility feature called Personal Voice, which uses on-device machine learning to create a synthesized version of the user’s voice. It works alongside Live Speech, another accessibility feature that converts typed text into audio: users can type messages during a FaceTime or phone call and have them spoken aloud in a voice that resembles their own. Setting up Personal Voice requires recording 150 phrases, which takes about 15 to 20 minutes. Once the recordings are done, the machine learning process begins; it can take several hours to complete.
Personal Voice is a valuable tool for individuals with disabilities to express themselves fully. It is also useful when a user loses their voice, since Personal Voice can speak on their behalf. To set it up, go to the Settings menu, select Accessibility, and then Personal Voice, and follow the on-screen instructions to record the required audio samples. After the recordings are complete, the machine learning processing runs while the device is locked and plugged in. Once the voice has been created, it can be added to the collection of voices available to Live Speech, so users can use it on FaceTime or calls.
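Beyond Live Speech, Apple also lets third-party apps use a Personal Voice through the speech synthesis API, once the user grants permission. The sketch below is a minimal illustration, assuming an iOS 17 app: it requests Personal Voice authorization and then looks for voices carrying the `isPersonalVoice` trait among the installed speech voices.

```swift
import AVFoundation

// Ask the user for permission to use their Personal Voice (iOS 17+).
AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
    guard status == .authorized else {
        print("Personal Voice not authorized: \(status)")
        return
    }
    // Once authorized, personal voices appear alongside the system
    // voices, flagged with the .isPersonalVoice trait.
    let personalVoices = AVSpeechSynthesisVoice.speechVoices()
        .filter { $0.voiceTraits.contains(.isPersonalVoice) }
    print("Found \(personalVoices.count) personal voice(s)")
}
```

If the user has not created a Personal Voice, or declines the permission prompt, the filtered list is simply empty, so apps should always be prepared to fall back to a standard system voice.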
To use Personal Voice, enable Live Speech by triple-clicking the device’s side button. This opens a window where users can type a message, which the AI-generated version of their voice then speaks aloud. The feature lets users communicate efficiently and effectively even when they are unable to speak themselves. Overall, Personal Voice is a powerful accessibility feature in iOS 17, giving individuals with disabilities the ability to communicate and express themselves with ease.
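The type-then-speak flow that Live Speech provides can be approximated in code with `AVSpeechUtterance`. This is a hedged sketch, not Apple’s Live Speech implementation: it assumes Personal Voice authorization has already been granted (as in the previous example) and simply speaks a typed string with the first personal voice it finds.

```swift
import AVFoundation

// Keep a strong reference to the synthesizer; speech stops if it is deallocated.
let synthesizer = AVSpeechSynthesizer()

// Pick the user's Personal Voice if one exists (assumes authorization
// was already granted via requestPersonalVoiceAuthorization).
let personalVoice = AVSpeechSynthesisVoice.speechVoices()
    .first { $0.voiceTraits.contains(.isPersonalVoice) }

// Speak typed text aloud, much as Live Speech does during a call.
let utterance = AVSpeechUtterance(string: "I'll be there in five minutes.")
utterance.voice = personalVoice  // nil falls back to the default system voice
synthesizer.speak(utterance)
```

Because `utterance.voice` is optional, the same code works whether or not a Personal Voice is available, which mirrors how Live Speech offers the personal voice as just one entry in its voice list.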