We are a group of engineers and researchers responsible for building foundation models at Apple. We build infrastructure, datasets, and models with fundamental general capabilities such as understanding and generation of text, images, speech, videos, and other modalities, and we apply these models to Apple products!
Description
We believe that the most interesting problems in deep learning research arise when we try to apply learning to real-world use cases, and this is also where the most important breakthroughs come from. You will work with a close-knit and fast-growing team of world-class engineers and scientists to tackle some of the most challenging problems in foundation models and deep learning, including natural language processing, multi-modal understanding, and combining learning with knowledge. We are looking for engineers who are passionate about building systems that push the frontier of deep learning in terms of scaling, efficiency, and flexibility and that delight millions of users in Apple products! You will also have opportunities to identify and develop novel applications of deep learning in Apple products, and you will see your ideas not only published in papers but also improving the experience of millions of users.
Minimum Qualifications
Proven track record in training or deploying large models, or in building large-scale distributed systems.
Proficient programming skills in Python and at least one deep learning toolkit such as JAX, PyTorch, or TensorFlow.
Ability to work in a collaborative environment.
Preferred Qualifications
Experience in one or more of the following areas:
Web-scale information retrieval
Human-like conversational agents
Multi-modal perception for existing products and future hardware platforms
On-device intelligence and learning with strong privacy protections
PhD, or equivalent practical experience, in Computer Science or a related technical field.