AI Safety & Robustness Analysis Manager - System Intelligence and Machine Learning - ISE
Cupertino, California
Summary
Are you passionate about inclusion, fairness, and safety in AI-powered features that ship on 1.5B Apple products across the globe? Are you excited about Generative AI and motivated to build out the robustness and safety capabilities of generative models?
We are the Intelligent System Experience (ISE) team within Apple's software organization. The team works at the intersection of multimodal machine learning and system experiences. System Experience (Springboard, Settings), Keyboards, Pencil & Paper, and Shortcuts are some of the experiences the team oversees. These experiences that our users enjoy are backed by production-scale ML workflows. Visual understanding of people, text, handwriting, and scenes; multilingual NLP for writing workflows and knowledge extraction; behavioral modeling for proactive suggestions; and privacy-preserving learning are the areas our multidisciplinary ML teams focus on.
We have multiple ongoing efforts involving generative models, and we are looking for talented candidates to lead the Robustness Analysis effort in ISE, ensuring that features built on top of generative models are safe for deployment and perform equally well for the diverse customers in Apple's global user base.
This is an exciting time to join us: grow fast and have a positive impact on multiple key features from your first day at Apple!
Key Qualifications
2+ years of experience as a machine learning manager, or 7+ years of professional machine learning experience with demonstrated technical leadership
Capacity to build a team and establish innovative, agile processes that deliver a high level of service and scalability
Strong ML fundamentals and hands-on experience in training ML models (NLP or Computer Vision), familiarity with ML toolkits, e.g., PyTorch
Proven experience in assessing and addressing potential risks and ensuring safety and fairness in generative models; prior experience in LLM safety is desired
Strong experience in quantitative methods, data analysis, and machine learning, leading to a deep understanding of the challenges associated with building ML datasets and machine learning models (potential biases, potential failure modes)
Capacity to operate at the intersection of ethics, product experience, and machine learning: translate product needs (e.g., fairness and safety) into analytical requirements; design and lead experiments to answer feature-level questions
Description
In this position, you will manage a team passionate about leading Robustness Analysis (RA) operations for key future-facing Apple features, with a focus on ensuring safety and robustness for generative models. Apple's dedication to delivering incredible experiences to a global and diverse set of users, in full respect of their privacy, has led to the development of a dedicated Robustness Analysis function. With generative experiences, creating a safe and robust platform is vital to our mission. The team's responsibilities include monitoring ML model performance on relevant axes and surfacing, measuring, and mitigating ML failure modes, in order to improve the overall user experience and reduce risk, with specific attention given to safety, inclusion, and fairness.
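The kind of per-axis performance monitoring described here can be illustrated with a minimal sketch: disaggregate evaluation results by subgroup and flag the gap between the best- and worst-performing groups. The subgroup labels, records, and metric choice below are hypothetical, invented for illustration only, not a description of any actual tooling.

```python
# Minimal sketch: per-axis accuracy and a worst-case fairness gap.
# Subgroups and data are hypothetical placeholders.
from collections import defaultdict

def per_axis_accuracy(records):
    """Group eval records by a subgroup label and compute accuracy per group.

    Each record is (subgroup, prediction, label); subgroup names stand in
    for axes of analysis such as language or locale.
    """
    totals = defaultdict(int)
    correct = defaultdict(int)
    for subgroup, pred, label in records:
        totals[subgroup] += 1
        correct[subgroup] += int(pred == label)
    return {g: correct[g] / totals[g] for g in totals}

def fairness_gap(per_group_acc):
    """Gap between the best- and worst-performing subgroups."""
    vals = list(per_group_acc.values())
    return max(vals) - min(vals)

# Toy evaluation records: (subgroup, prediction, label).
records = [
    ("en", 1, 1), ("en", 0, 0), ("en", 1, 1), ("en", 1, 0),
    ("es", 1, 1), ("es", 0, 1), ("es", 0, 0),
]
acc = per_axis_accuracy(records)
print(acc, fairness_gap(acc))
```

A real pipeline would use curated evaluation sets, richer metrics than raw accuracy, and significance testing before flagging a subgroup gap as a failure mode.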
The team's responsibilities include:
- Research and develop approaches to mitigate harmful and risky behaviors in generative models
- Define product-centered axes of analysis relevant to the target feature, in collaboration with the model DRI and feature DRI
- Develop processes (models, tools, and data) to identify other potential biases or failure modes
- Implement automated pipelines, based on advanced ML technology with humans and models in the loop, to create test sets covering the various axes of investigation
- Report progress and issues found in technical and sponsor meetings
- Suggest mitigation options (data and/or model) and lead mitigation experiments when issues are found
- Become a key contact within our organization for company-wide efforts related to safety, fairness and inclusion, robustness analysis, and interpretability
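The test-set pipeline in the list above might start from something as simple as crossing axes of investigation with prompt templates. The axes, values, and template below are invented for illustration; a production pipeline would add model-generated variants and human review on top of this kind of coverage skeleton.

```python
# Illustrative sketch: build a test set that covers every combination
# of the declared axes of investigation. Axis names and the template
# are hypothetical placeholders.
from itertools import product

AXES = {
    "language": ["English", "Spanish", "Hindi"],
    "topic": ["health", "finance"],
}

TEMPLATE = "Summarize a {topic} article written in {language}."

def build_test_set(axes, template):
    """Cross all axis values and instantiate one prompt per combination."""
    keys = sorted(axes)
    cases = []
    for combo in product(*(axes[k] for k in keys)):
        slots = dict(zip(keys, combo))
        cases.append({"prompt": template.format(**slots), **slots})
    return cases

cases = build_test_set(AXES, TEMPLATE)
print(len(cases))  # 3 languages x 2 topics = 6 combinations
```

Keeping the axis values alongside each prompt makes it straightforward to disaggregate results later and report which axis a failure mode concentrates on.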
Education & Experience
M.S. or PhD in Computer Science, Data Science, Mathematics, Physics, or a related field; or equivalent practical experience
Created: 2024-06-07
Reference: 200519780
Country: United States
State: California
City: Cupertino
About Apple
Founded in: 1976
Number of Employees: 154000
Website: https://www.apple.com/
Career site: https://www.apple.com/careers/us/
Wikipedia: https://en.wikipedia.org/wiki/Apple_Inc.
Instagram: https://www.instagram.com/apple/
LinkedIn: https://www.linkedin.com/company/apple