Summary
Apple's Sensing & Connectivity team is building the next generation of spatial-awareness and applied-perception technologies. This role focuses on productizing new machine learning capabilities that help Apple devices understand the physical world and interact with users more intuitively.
Description
You will architect and implement production software systems for new ML technologies on the Location Services team, combining machine learning with wireless and spatial sensors to improve location-aware user experiences. The work spans egocentric vision, action recognition, sensor-based perception, and efficient model deployment on Apple platforms.
Key Responsibilities
- Develop state-of-the-art perception algorithms for egocentric vision, action recognition, and sensor-based perception
- Design efficient ML training pipelines, fine-tune Vision Transformers, and apply active learning techniques to reduce annotation cost
- Architect production software systems that fuse machine learning with wireless and spatial sensors for new iOS experiences
- Collaborate with partner teams across iOS, sensing, and hardware to integrate new models into end-user features
- Optimize data-loading and training infrastructure to maximize GPU efficiency while maintaining real-world robustness
Minimum Qualifications
- PhD in Computer Engineering, Computer Science, Electrical Engineering, or a related field, or an MS with equivalent applied research experience
- Deep expertise in machine learning and computer vision, especially Vision Transformers, CNNs, and foundation models
- Hands-on experience building efficient ML training pipelines with active learning, domain adaptation, or semi-supervised learning
- Experience with egocentric video understanding, action recognition, scene classification, or related perception tasks
- Strong programming skills in Python and C/C++ with modern deep learning frameworks such as PyTorch or TensorFlow
- Experience working with novel or multi-modal sensor data, including depth, IR, or synthetic and semi-synthetic data generation
- Familiarity with 3D computer vision and spatial or robotic perception
Pay range: $147,400 – $272,100 (Cupertino, CA). Apple accepts applications on an ongoing basis.