Google Industry · Engineering

Security Engineer, Information Security Engineering, Product AI Research and Security

CHF 130'000 – 150'000 / year
ZÜRICH

About the job

Our Security team works to create and maintain the safest operating environment for Google's users and developers. Security Engineers work with network equipment and actively monitor our systems for attacks and intrusions. In this role, you will also work with software engineers to proactively identify and fix security flaws and vulnerabilities.

The Information Security Engineering (ISE) team is dedicated to safeguarding Google's products and the data of billions of users. Security Engineers within ISE partner with engineering teams across Google to conduct in-depth security reviews throughout the product lifecycle.

The AI Product Security team (ISE PAIRS) is dedicated to ensuring the safe and secure development and deployment of Google's Artificial Intelligence products and features. Our mission is to protect our users and their data by identifying, mitigating, and preventing security vulnerabilities within Google's AI technologies. We embed security into the entire product lifecycle, from initial design and research through launch and beyond. We work on AI security, addressing novel threats and shaping best practices for the industry.

Responsibilities

  • Analyze AI products to uncover vulnerabilities and pinpoint opportunities for hardening, reporting findings to stakeholders for mitigation.
  • Promote quality security practices across the organization, influencing software engineers and immediate colleagues within Google and beyond.
  • Perform threat modeling of large and complex systems to quickly determine areas that warrant further investigation and direct security review.
  • Conduct research to identify and mitigate entire classes of vulnerabilities.
  • Leverage AI tools and agents in your day-to-day work to scale your own capabilities.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 5 years of experience in security assessments.
  • Experience with security and vulnerability assessments.

Preferred qualifications:

  • Experience in the AI security space, including research, exploits, and mitigations.
  • Experience with threat modeling, data analytics, and web applications.
  • Experience mentoring junior researchers or leading technical security projects.
  • Proficiency in programming for security research tasks such as scripting, tool development, and data analysis.
  • Understanding of threat modeling, risk assessments, or security assessments of products.
  • Ability to read, understand, and analyze code, and to write small tools and scripts.
  • Excellent communication skills.