Product Policy Manager, AI Applications - Trust and Safety - Bay Area

San Francisco, California


Employer: TikTok
Industry: Operations
Salary: Competitive
Job type: Full-Time

Responsibilities

TikTok is the leading destination for short-form mobile video. At TikTok, our mission is to inspire creativity and bring joy. TikTok's global headquarters are in Los Angeles and Singapore, and its offices include New York, London, Dublin, Paris, Berlin, Dubai, Jakarta, Seoul, and Tokyo.

Why Join Us:
Creation is the core of TikTok's purpose. Our platform is built to help imaginations thrive. This is doubly true of the teams that make TikTok possible. Together, we inspire creativity and bring joy - a mission we all believe in and work toward every day. To us, every challenge, no matter how difficult, is an opportunity to learn, to innovate, and to grow as one team. Status quo? Never. Courage? Always. At TikTok, we create together and grow together. That's how we drive impact - for ourselves, our company, and the communities we serve. Join us.

About the Team:
Our Trust & Safety team's commitment is to keep our online community safe. We have invested heavily in human and machine-based moderation to remove harmful content quickly and often before it reaches our general community.

As a Product Policy Manager, AI Applications, you will empower innovation in our AI products by researching and defining product risks, and accordingly developing appropriate safety policies, guidelines, and strategies. To do so, you will partner with cross-functional stakeholders across product, engineering, research, legal, and other teams to define safety approaches grounded in leading industry practices and the latest market trends. Your work will be critical in shaping ByteDance's AI initiative and ensuring content safety on our platforms.

Our Product Policy Team performs a critical function that supports our efforts to address objectionable or disturbing content. Content that the Product Policy Team interacts with includes images, video, and text related to everyday life, but it can also include (but is not limited to) bullying, hate speech, child safety concerns, and depictions of harm to self, others, and animals.

Responsibilities:
- Establish clear and operable safety and ethical guidelines for our AI products, developing expertise across a range of safety and editorial topics;
- Assess risks and design mitigations (including policy enforcement frameworks) associated with the deployment of our AI products, working closely with product teams;
- Monitor and analyze user interactions with our AI products, to inform policy development and enforcement;
- Partner with product, engineering, research, ops, legal, and PR teams to innovate our safety approaches based on the latest developments and best practices.

Qualifications

Minimum Qualifications:
- 3+ years of experience working in the technology industry, with exposure to policy development, program management, model training, and/or risk assessment;
- You have a deep understanding and interest in the key policy issues that impact AI, alongside knowledge around how large language models are trained and function;
- You have an understanding of how policy principles translate into concrete, high-impact, and robust guardrails and safety interventions for diverse products;
- You have a bachelor's or master's degree in artificial intelligence, public policy, politics, law, economics, behavioural sciences, or another related field;
- You are a confident self-starter with excellent judgment, and can balance multiple trade-offs to develop principled, enforceable, and defensible policies and strategies;
- You are a persuasive oral and written communicator, with the ability to translate complex challenges into simple and clear language and to persuade cross-functional partners in a fast-paced and often uncertain environment;
- You have experience working with international stakeholders and teammates across different time zones and cultures.

Preferred Qualifications:
- Ideally, you have prior experience developing and/or implementing policy across different harm domains.

Trust & Safety recognizes that keeping our platform safe for TikTok communities is no ordinary job; it can be both rewarding and psychologically demanding, and emotionally taxing for some. This is why we are sharing the potential hazards, risks, and implications of this unique line of work from the start, so our candidates are well informed before joining.

We are committed to the wellbeing of all our employees and promise to provide comprehensive and evidence-based programs to promote and support physical and mental wellbeing throughout each employee's journey with us. We believe that wellbeing is a relationship in which everyone has a part to play, so we work in collaboration and consultation with our employees and across our functions to ensure a truly person-centred, innovative, and integrated approach.

TikTok is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At TikTok, our mission is to inspire creativity and bring joy. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.

TikTok is committed to providing reasonable accommodations in our recruitment processes for candidates with disabilities, pregnancy, sincerely held religious beliefs or other reasons protected by applicable laws. If you need assistance or a reasonable accommodation, please reach out to us at https://shorturl.at/cdpT2

Created: 2024-08-22
Reference: A109736
Country: United States
State: California
City: San Francisco
ZIP: 94130
