Google quietly released 'AI Edge Eloquent' on iOS — a free, Gemma-powered offline dictation app with filler-word removal and text polish.
Google released an experimental app called 'Google AI Edge Eloquent' on iOS, built on Gemma-based ASR models that run entirely on-device. The app transcribes speech in real time, strips filler words, and offers reformatting options like 'Formal,' 'Short,' and 'Long.' It optionally connects to cloud Gemini models for enhanced cleanup and can import vocabulary from Gmail. A floating button feature similar to Wispr Flow's is planned for a future update.
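The filler-word removal Eloquent performs can be approximated with ordinary text post-processing. The sketch below is a minimal illustration of the idea, not Eloquent's actual pipeline; the filler list and regex are assumptions, and a production model would likely do this inside the ASR decoder rather than with regexes.

```python
import re

# Illustrative list of common English fillers; Eloquent's real lexicon
# is not public, so this is an assumption for demonstration only.
FILLERS = {"um", "uh", "er", "ah", "like", "you know", "i mean"}

def strip_fillers(transcript: str) -> str:
    """Remove standalone filler words/phrases from a raw transcript."""
    # Longest phrases first so "you know" is matched before "like" etc.
    alternatives = "|".join(
        re.escape(f) for f in sorted(FILLERS, key=len, reverse=True)
    )
    # Optionally consume a comma/space on either side of the filler.
    pattern = rf",?\s*\b(?:{alternatives})\b,?\s*"
    cleaned = re.sub(pattern, " ", transcript, flags=re.IGNORECASE)
    # Collapse the doubled spaces left behind by removals.
    return re.sub(r"\s{2,}", " ", cleaned).strip()

print(strip_fillers("Um, so like the demo, you know, ran offline."))
# → "so the demo ran offline."
```

The word-boundary anchors (`\b`) keep the filter from mangling words like "unlike" or "likely"; a real implementation would also need to handle capitalization repair and language-specific filler inventories.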
Eloquent is built on Google's AI Edge framework and Gemma ASR, which means the on-device model pipeline is accessible to developers building voice-first apps. This is less a consumer product story and more a signal that production-quality offline ASR on mobile is now within reach without a cloud dependency. If Eloquent's model weights or inference stack surface via the AI Edge SDK, the integration cost for native voice UX drops sharply.
Pull the Google AI Edge SDK docs this week and check whether the Gemma ASR model used in Eloquent is exposed as a standalone API — if it is, prototype an offline voice input layer for your highest-friction text-entry screen.
First step: go to ai.google.dev/edge and search the documentation for 'speech recognition' or 'ASR'.