
Scaled Abuse Analyst, Threat Investigations, YouTube

YouTube
Full-time
On-site

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 2 years of experience in data analysis, including identifying trends, generating summary statistics, and drawing insights from quantitative and qualitative data.
  • 2 years of experience managing projects and defining project scope, goals, and deliverables.
  • 3 years of experience with one or more of the following languages: SQL or Python.

Preferred qualifications:

  • Experience in data management, metrics analysis, experiment design, and automation.
  • Experience with classification systems or ranking systems using LLMs or statistical modeling.
  • Experience translating analytical insights into business strategies and actions.
  • Experience in content policy, anti-abuse, or fraud identification.
  • Ability to build relationships with cross-functional partners across geographies.

About the job

Fast-paced, dynamic, and proactive, YouTube’s Trust & Safety team is dedicated to making YouTube a safe place for users, viewers, and content creators around the world to create and express themselves. Whether understanding and solving their online content concerns, navigating within global legal frameworks, or writing and enforcing worldwide policy, the Trust & Safety team is on the frontlines of enhancing the YouTube experience, building internet safety, and protecting free speech in our ever-evolving digital world.

As a Scaled Abuse Analyst on the Threat Investigations Team, you will work with a global team spanning policy, enforcement, product, engineering, tooling, and legal to prevent violative content from appearing on the platform. You'll quantitatively evaluate abuse trends and apply creative technical solutions to close detection gaps and improve quality workflows and processes. You'll also review decisions about the appropriateness of different content. You will succeed in a dynamic, changing, and demanding environment where the threat actor landscape is constantly evolving.

On this team, you will be a key contributor to the efforts to reduce harmful content on YouTube. We are a fast-moving team of problem solvers. We are constantly navigating ambiguity to keep YouTube safe from threats ranging from platform manipulation to adversarial generative AI content.

At YouTube, we believe that everyone deserves to have a voice, and that the world is a better place when we listen, share, and build community through our stories. We work together to give everyone the power to share their story, explore what they love, and connect with one another in the process. Working at the intersection of cutting-edge technology and boundless creativity, we move at the speed of culture with a shared goal to show people the world. We explore new ideas, solve real problems, and have fun — and we do it all together.

The US base salary range for this full-time position is $126,000-$181,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Apply advanced statistical methods to large data sets to understand the impact of abuse on the YouTube ecosystem. Contribute to the strategy and development of new workflows targeting additional known vectors of abuse.
  • Perform fraud and spam investigations using various data sources, identify product vulnerabilities, and drive anti-abuse experiments. Work with engineers and cross-functional stakeholders to improve workflows through process improvements, automation, and anti-abuse system creation.
  • Research and stay up-to-date on key trends and suspicious patterns of abuse across the team’s policy areas.
  • Maintain, foster, and promote quality by providing regular feedback to stakeholders. Manage technological solutions for streamlining quality assurance and produce training materials for emergent workflows.
  • Design and refine prompts for Large Language Models (LLMs) to improve their accuracy in identifying and classifying abusive content and behavior.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.