This talk covers approaches to machine learning on mobile devices. I will compare cloud-based and on-device inference, focusing on the latter. Local (on-device) machine learning means that inference happens directly on the mobile device, which keeps the user’s data private and removes the dependency on a network connection. However, ML models must be prepared and optimized for on-device efficiency and performance. I will give an overview of the ML frameworks available on iOS and Android, namely Core ML, ML Kit, and TensorFlow Lite, with code examples for each. Attendees will come away understanding the capabilities and limitations of these frameworks, and with a general idea of how ML models are prepared for deployment on mobile.