Shafik Quoraishee is a Senior Games ML Engineer at The New York Times, where he’s a key member of the team behind flagship titles like Wordle, Connections, Crosswords, and Strands. With a strong foundation in computer vision and AI, he has developed tools, demos, and real-world systems that bridge cutting‑edge research and practical applications.
An experienced Android developer with over 11 years in the field, Shafik led the MakerWeek 2023 initiative to build an experimental handwriting recognition feature within the NYT Crossword Android app. His work spanned the end‑to‑end design: from “SketchBox” input handling and preprocessing of diverse handwriting styles to training a convolutional neural network and integrating it with TensorFlow Lite.
As a thought leader in AI and games, he regularly speaks at industry events, including GDC’s AI Summit 2024, Eng X engineering summits, and Droidcon NYC 2025. His talks cover topics such as handwriting recognition, generative AI in games, GANs, and the interplay of AI and game theory. He is also an active researcher and writer, publishing on Medium and Nieman Lab about topics like neural nets, auto‑encoders, facial recognition, and experimental game‑AI tools.
This work was featured in Nieman Lab: https://www.niemanlab.org/2024/01/how-the-new-york-times-is-building-experimental-handwriting-recognition-for-its-crosswords-app/
And in NYT Open: https://open.nytimes.com/experimenting-with-handwriting-recognition-for-new-york-times-crossword-a78e08fec08f
The New York Times Crossword team built and tested a handwriting recognition system that lets users fill in crossword puzzles with a stylus or finger instead of typing. The goal was to make the experience feel natural while maintaining the speed, accuracy, and responsiveness solvers expect. This meant balancing tight UI constraints, noisy input data, and fast local inference—inside a React Native app used by millions.
This talk covers the technical and product decisions behind the feature: how the model was trained, how inputs were normalized and interpreted, why we built it to run entirely on-device, and what kinds of problems showed up only once it was in front of real users. It’s a case study in applied machine learning, lightweight prototyping, and building interaction models that feel simple—but aren't.
What you’ll learn:
- How to build a handwriting recognition pipeline using EMNIST and TensorFlow.js
- How to process stylus/finger input for real-time inference in a mobile UI
- How to handle latency, UX feedback loops, and user correction flows
- What to watch for when deploying ML features in production apps
- Lessons from testing an experimental input method with a real user base
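To give a flavor of the first two points, here is a minimal, hypothetical sketch of one preprocessing step in such a pipeline: rasterizing captured stylus/finger strokes into the 28×28 grayscale grid that EMNIST-style models expect. The names (`Point`, `rasterizeStrokes`) and the interpolation details are illustrative assumptions, not the talk's actual implementation.

```typescript
// Hypothetical sketch: turn raw stroke samples into a flat 28x28 grid
// suitable as input for an EMNIST-style character classifier.
type Point = { x: number; y: number };

function rasterizeStrokes(strokes: Point[][], size = 28): number[] {
  const grid: number[] = new Array(size * size).fill(0);
  const pts = strokes.flat();
  if (pts.length === 0) return grid;

  // Bounding box of the drawing; keep it square to preserve aspect ratio.
  const xs = pts.map(p => p.x);
  const ys = pts.map(p => p.y);
  const minX = Math.min(...xs);
  const minY = Math.min(...ys);
  const span = Math.max(Math.max(...xs) - minX, Math.max(...ys) - minY, 1);

  for (const stroke of strokes) {
    for (let i = 1; i < stroke.length; i++) {
      // Densely interpolate between consecutive touch samples so fast
      // strokes don't leave gaps in the rasterized glyph.
      const a = stroke[i - 1];
      const b = stroke[i];
      const steps = 10;
      for (let s = 0; s <= steps; s++) {
        const t = s / steps;
        const px = a.x + (b.x - a.x) * t;
        const py = a.y + (b.y - a.y) * t;
        const gx = Math.min(size - 1, Math.floor(((px - minX) / span) * (size - 1)));
        const gy = Math.min(size - 1, Math.floor(((py - minY) / span) * (size - 1)));
        grid[gy * size + gx] = 1; // mark an "ink" pixel
      }
    }
  }
  return grid; // flat array of 0/1 values, ready for model input
}
```

In a real app this grid would then be fed to an on-device model (e.g. via TensorFlow.js or TensorFlow Lite), with additional normalization such as centering and stroke-width smoothing before inference.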