About the Role
We’re seeking an experienced Android Engineer who thrives at the intersection of AI, mobile development, and user experience. In this role, you’ll architect and implement Android applications that leverage computer vision, speech recognition, and heuristic-driven intelligence to deliver adaptive, context-aware features.
You’ll work closely with ML engineers, designers, and product teams to bring AI models to life on-device, combining real-time inference, rule-based logic, and user-focused design into seamless mobile experiences.
Key Responsibilities
- Design and develop Android applications that integrate vision, speech, and heuristic intelligence systems.
- Implement and optimize on-device ML models using TensorFlow Lite, MediaPipe, and Android ML Kit (see the inference sketch after this list).
- Develop heuristic pipelines that refine or post-process model outputs, improving reliability, interpretability, and user experience.
- Build efficient real-time data flows between camera/audio input, ML inference, and UI rendering.
- Collaborate with backend and ML teams on model deployment, quantization, and edge optimization.
- Ensure robust performance across devices, with emphasis on low latency, memory efficiency, and battery conservation.
- Contribute to continuous integration and testing pipelines for AI-enhanced mobile applications.
- Stay current with emerging Android ML technologies, including Edge TPU and Neural Networks API (NNAPI) improvements.
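
To give a concrete flavor of the first three responsibilities, here is a minimal Kotlin sketch of TensorFlow Lite inference followed by a heuristic post-processing step; the class count, input shape, and confidence threshold are illustrative assumptions, not a prescribed design:

```kotlin
import org.tensorflow.lite.Interpreter
import java.nio.ByteBuffer

// Hypothetical single-output image classifier; NUM_CLASSES and the
// confidence threshold are illustrative, not values from a real model.
class ImageClassifier(modelBuffer: ByteBuffer) {

    private val interpreter = Interpreter(modelBuffer)

    // Run inference on a preprocessed input tensor (e.g., 224x224 RGB floats).
    fun classify(input: ByteBuffer): FloatArray {
        val output = Array(1) { FloatArray(NUM_CLASSES) }
        interpreter.run(input, output)
        return output[0]
    }

    // Heuristic post-processing: ignore low-confidence frames and hold the
    // last stable label, which suppresses label flicker in a live camera feed.
    fun stableLabel(scores: FloatArray, previousLabel: Int?): Int? {
        val best = scores.indices.maxByOrNull { scores[it] } ?: return previousLabel
        return if (scores[best] >= CONFIDENCE_THRESHOLD) best else previousLabel
    }

    companion object {
        private const val NUM_CLASSES = 1000
        private const val CONFIDENCE_THRESHOLD = 0.6f
    }
}
```

The `stableLabel` rule is the kind of lightweight, interpretable heuristic layer this role pairs with raw model output.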
 
Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, Electrical Engineering, or a related discipline.
- 7–9 years of professional experience developing native Android applications in Kotlin and Java.
- Strong experience integrating TensorFlow Lite, ML Kit, or custom on-device ML models.
- Proven experience with computer vision models (e.g., image classification, object detection) and speech models (ASR, TTS).
- Knowledge of heuristic and rule-based systems, including hybrid AI pipelines.
- Deep understanding of Android architecture components, coroutines, and asynchronous data processing.
- Experience with camera frameworks (CameraX, Camera2 API) and audio capture/processing (see the CameraX sketch after this list).
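
For context on the camera-to-inference path, here is a minimal CameraX sketch; `runInference` is a hypothetical placeholder for frame conversion and model execution:

```kotlin
import androidx.camera.core.ImageAnalysis
import androidx.camera.core.ImageProxy
import java.util.concurrent.Executors

// Hypothetical hook: convert the frame and run the on-device model.
fun runInference(frame: ImageProxy) { /* preprocessing + Interpreter.run(...) */ }

// Frames arrive on a background executor; the newest frame wins,
// so inference never falls behind the live camera feed.
val analysisExecutor = Executors.newSingleThreadExecutor()

val imageAnalysis: ImageAnalysis = ImageAnalysis.Builder()
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()
    .also { analysis ->
        analysis.setAnalyzer(analysisExecutor) { frame: ImageProxy ->
            try {
                runInference(frame)
            } finally {
                frame.close() // required before CameraX delivers the next frame
            }
        }
    }
```

Binding `imageAnalysis` to a lifecycle via `ProcessCameraProvider.bindToLifecycle` is omitted; the point is the backpressure strategy, which keeps latency bounded on slower devices.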
 
Preferred Qualifications
- Experience building multimodal systems combining vision, speech, and contextual data.
 - Familiarity with MediaPipe or ONNX Runtime for Android.
 - Knowledge of hardware acceleration via NNAPI, GPU, or Hexagon DSP.
 - Exposure to reinforcement learning, behavioral heuristics, or adaptive user models.
 - Experience designing privacy-preserving, on-device ML pipelines.
 - Background in real-time inference or AR/VR interfaces using ARCore.
 - Deep understanding of Android architecture components (MVVM/MVI), Android Jetpack libraries, Coroutines, and modern UI principles.
 - Familiar with performance tuning, memory management, app lifecycle, background threading, and battery/network optimization.
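
As one example of the hardware-acceleration work involved, here is a minimal sketch of attaching TFLite's NNAPI delegate with a CPU fallback; model loading and the error-handling policy are assumptions:

```kotlin
import org.tensorflow.lite.Interpreter
import org.tensorflow.lite.nnapi.NnApiDelegate
import java.nio.MappedByteBuffer

// Build an interpreter that prefers NNAPI hardware acceleration and falls
// back to CPU execution if the delegate cannot be created on this device.
// Loading modelBuffer (e.g., memory-mapping it from assets) is assumed.
fun buildInterpreter(modelBuffer: MappedByteBuffer): Interpreter {
    val options = Interpreter.Options()
    runCatching {
        options.addDelegate(NnApiDelegate()) // routes supported ops to vendor NN drivers
    }
    return Interpreter(modelBuffer, options)
}
```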
 
Soft Skills
- Strong cross-disciplinary communication skills — able to collaborate with AI researchers and UX designers.
 - Passion for building intelligent, human-centered mobile experiences.
 - Analytical mindset with attention to detail and performance metrics.
 - Self-driven with a bias for execution, experimentation, and continuous learning.
 
Tools & Technologies
- Languages: Kotlin, Java, Python (for model preparation)
- Frameworks: TensorFlow Lite, ML Kit, MediaPipe, NNAPI
- APIs: CameraX, SpeechRecognizer, AudioRecord, WorkManager (see the SpeechRecognizer sketch below)
- Tools: Android Studio, Gradle, ADB, Git, Firebase, Jira
- ML Ecosystem: TensorFlow and PyTorch (with conversion to TFLite), ONNX
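
To illustrate the speech side of the stack, a minimal `SpeechRecognizer` sketch follows; permission handling and lifecycle cleanup are assumed, and the callback wiring is illustrative:

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Minimal speech capture: start a recognition session and hand the top
// transcript to a callback. RECORD_AUDIO permission is assumed granted.
fun startListening(context: Context, onTranscript: (String) -> Unit): SpeechRecognizer {
    val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    recognizer.setRecognitionListener(object : RecognitionListener {
        override fun onResults(results: Bundle?) {
            results?.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                ?.firstOrNull()
                ?.let(onTranscript)
        }
        // Remaining callbacks are no-ops for brevity.
        override fun onReadyForSpeech(params: Bundle?) {}
        override fun onBeginningOfSpeech() {}
        override fun onRmsChanged(rmsdB: Float) {}
        override fun onBufferReceived(buffer: ByteArray?) {}
        override fun onEndOfSpeech() {}
        override fun onError(error: Int) {}
        override fun onPartialResults(partialResults: Bundle?) {}
        override fun onEvent(eventType: Int, params: Bundle?) {}
    })
    recognizer.startListening(Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL, RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
    })
    return recognizer // caller should destroy() it when done
}
```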