While video calling has become a daily communication tool, many users face challenges in fully participating due to hearing impairments, background noise, or language barriers. Current solutions focus mostly on professional meeting platforms, leaving personal video calls like FaceTime without proper accessibility features. This creates significant participation hurdles for millions of users who could benefit from real-time text support during conversations.
One approach could involve adding a toggle-activated subtitle feature to FaceTime that processes speech entirely on the device. When enabled, the system would transcribe each participant's speech locally and render it as live captions during the call, keeping audio data on the device rather than sending it to external servers.
The feature might use the device's Neural Engine for efficient on-device speech recognition, adapting to different accents and, in group calls, attributing captions to individual speakers through voice separation technology.
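To make the pipeline concrete, the speaker-attributed live captions described above could be fed by a rolling per-speaker word buffer. The sketch below is illustrative only: the `speaker` identifiers and the shape of partial transcription events are assumptions, not an actual FaceTime or Speech-framework API.

```python
from collections import deque

class CaptionBuffer:
    """Keeps the most recent words per speaker for an on-screen caption line."""

    def __init__(self, max_words: int = 12):
        self.max_words = max_words
        self.lines = {}  # speaker id -> deque of that speaker's recent words

    def add_partial(self, speaker: str, text: str) -> str:
        """Fold a partial transcription result into the speaker's caption line."""
        words = self.lines.setdefault(speaker, deque(maxlen=self.max_words))
        for w in text.split():
            words.append(w)  # deque drops the oldest word once max_words is hit
        return self.render(speaker)

    def render(self, speaker: str) -> str:
        return f"{speaker}: " + " ".join(self.lines.get(speaker, []))

# Simulated stream of on-device recognition results (hypothetical event shape).
buf = CaptionBuffer(max_words=6)
buf.add_partial("Caller A", "hey can you")
print(buf.add_partial("Caller A", "hear me okay"))    # Caller A: hey can you hear me okay
print(buf.add_partial("Caller B", "yes loud and clear"))
```

A bounded per-speaker buffer like this keeps captions readable on a small call window: old words scroll out automatically instead of accumulating into a transcript.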
Unlike third-party apps or meeting-specific captioning tools, embedding this capability directly into FaceTime could offer system-level audio access for more accurate transcription, zero setup for either party on a call, and the privacy guarantees of fully on-device processing.
For Apple, this could serve as both an accessibility milestone and a competitive differentiator, potentially deepening device loyalty among users who depend on such features.
Starting with a basic English-only version for one-on-one calls could validate the concept before expanding to group calls and multilingual support, with UI refinements driven by testing with deaf and hard-of-hearing communities.
Hours to Execute (basic)
Hours to Execute (full)
Estimated No. of Collaborators
Financial Potential
Impact Breadth
Impact Depth
Impact Positivity
Impact Duration
Uniqueness
Implementability
Plausibility
Replicability
Market Timing
Project Type: Digital Product