India
Trust & Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud at Google speed - with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.
The AI Safety Protections team within Trust and Safety develops and implements cutting-edge AI/LLM-powered solutions to ensure the safety of generative AI across Google's products. This includes safeguards for consumer products, enterprise offerings (Vertex AI), and on-device applications, as well as foundational models (Gemini, Juno, Veo) in collaboration with Google DeepMind. We are a team of passionate data scientists and machine learning experts dedicated to mitigating the risks associated with generative AI and addressing real-world safety with LLM/AI technology (e.g., imminent threats, child safety).
As a member of our team, you will have the opportunity to apply the latest advancements in AI/LLM, work with teams developing cutting-edge AI technologies, and protect the world from real-world harms.
This role works with sensitive content or situations and may be exposed to graphic, controversial, or upsetting topics or content.