A month after announcing changes to ML Kit, its developer toolset for infusing apps with AI, Google today launched the Digital Ink Recognition API on Android and iOS to allow developers to create apps where stylus and touch act as inputs. As the name implies, the API — which is powered by the same technology underpinning Google’s Gboard software keyboard, Quick Draw, and AutoDraw — looks at a user’s strokes on the screen and recognizes what they’re writing or drawing.

Google says that with the new Digital Ink Recognition API, developers can enable users to input text and figures with a finger or stylus, or transcribe handwritten notes to make them searchable. Some classifiers parse written text into a string of characters, while others classify drawings, sketches, and emojis by the class to which they belong (e.g., circle, square, happy face, and so on).
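
For illustration, here is a minimal Kotlin sketch of that flow on Android, assuming the ML Kit digital ink classes as documented at launch; the stroke coordinates below are hypothetical stand-ins for points captured from real touch events:

```kotlin
import com.google.mlkit.vision.digitalink.DigitalInkRecognition
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModel
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModelIdentifier
import com.google.mlkit.vision.digitalink.DigitalInkRecognizerOptions
import com.google.mlkit.vision.digitalink.Ink

// Collect the user's strokes as (x, y, timestamp) points -- in a real app
// these come from MotionEvent callbacks in a custom drawing view.
val stroke = Ink.Stroke.builder()
    .addPoint(Ink.Point.create(120f, 45f, System.currentTimeMillis()))
    .addPoint(Ink.Point.create(125f, 50f, System.currentTimeMillis()))
    .build()
val ink = Ink.builder().addStroke(stroke).build()

// A recognizer is bound to one language model, which must already be
// downloaded on the device (see the download sketch further below).
val modelIdentifier = DigitalInkRecognitionModelIdentifier.fromLanguageTag("en-US")!!
val model = DigitalInkRecognitionModel.builder(modelIdentifier).build()
val recognizer = DigitalInkRecognition.getClient(
    DigitalInkRecognizerOptions.builder(model).build()
)

// Recognition runs on-device and returns ranked candidate strings.
recognizer.recognize(ink)
    .addOnSuccessListener { result -> println(result.candidates.first().text) }
    .addOnFailureListener { e -> println("Recognition failed: $e") }
```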

The Digital Ink Recognition API performs processing on-device in near real time, according to Google, with support for over 300 languages and more than 25 writing systems, including all major Latin languages, Chinese, Japanese, Korean, Arabic, and Cyrillic. Developers must download one or more classifiers, each weighing in at around 20MB. Google says recognition takes about 100 milliseconds, depending on the device hardware and the size of the input stroke sequence.
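
The download step itself goes through ML Kit's model manager; a rough sketch, with the Japanese language tag used purely as an example:

```kotlin
import com.google.mlkit.common.model.DownloadConditions
import com.google.mlkit.common.model.RemoteModelManager
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModel
import com.google.mlkit.vision.digitalink.DigitalInkRecognitionModelIdentifier

// Each language/script pairs with its own downloadable classifier (~20MB).
val modelIdentifier = DigitalInkRecognitionModelIdentifier.fromLanguageTag("ja")!!
val model = DigitalInkRecognitionModel.builder(modelIdentifier).build()

// Fetch the model once; afterward recognition runs fully offline.
RemoteModelManager.getInstance()
    .download(model, DownloadConditions.Builder().build())
    .addOnSuccessListener { println("Model ready for on-device recognition") }
    .addOnFailureListener { e -> println("Download failed: $e") }
```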

The new API comes after Google added new natural language processing services to ML Kit, including Smart Reply, last year. (Smart Reply suggests text responses based on the last 10 exchanged messages and runs entirely on-device; it has been incorporated into Gmail, Google Chat, and Google Assistant on smart displays and smartphones.) At its I/O 2019 developer conference, Google added three capabilities to ML Kit in beta, including a translation API supporting 58 languages and a pair of APIs that let apps locate and track objects of interest in a live camera feed in real time. More recently, ML Kit gained support for custom TensorFlow Lite image labeling, object detection, and object tracking models as it transitioned from ML Kit for Firebase’s on-device APIs to a standalone SDK (the ML Kit SDK) that doesn’t require a Firebase project.

Earlier this year, Google noted that more than 25,000 applications on Android and iOS now use ML Kit’s features, up from just a handful at its introduction in May 2018. Much like Apple’s Core ML, ML Kit is built to tackle challenges in the vision and natural language domains, including text recognition and translation, barcode scanning, and object classification and tracking.