Anthropic Fellows Program — AI Security Research
💰 $70,000 – $100,000/yr
Job Description
About Anthropic
Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We are building AI that is safe and beneficial for users and society. Our growing team of researchers, engineers, policy experts, and business leaders collaborates to develop secure, trustworthy AI systems that prioritize safety and alignment with human values.
Program Overview
The Anthropic Fellows Program is designed to foster exceptional AI research and engineering talent. We provide comprehensive funding and mentorship to promising technical talent, regardless of previous experience or background. This is an opportunity to work on cutting-edge AI security research in a supportive environment with world-class researchers.
Fellows will conduct empirical research projects aligned with Anthropic's research priorities, primarily using external infrastructure such as open-source models and public APIs. The program emphasizes producing high-quality public research outputs—historically, over 80% of fellows have produced publishable papers from their work.
We run multiple cohorts annually and review applications on a rolling basis. The next cohort begins July 20, 2026, with an application deadline of April 26, 2026.
What You'll Experience
- 4 months of full-time, focused research dedicated to your project
- Direct mentorship from Anthropic researchers with expertise in AI security and alignment
- Shared workspace access in Berkeley, California or London, UK
- Community connection to the broader AI safety and security research ecosystem
- Competitive stipend: $3,850 USD / £2,310 GBP / $4,300 CAD weekly, plus country-specific benefits
- Research funding: Approximately $15,000/month for compute resources and additional research expenses
Selection Process
The selection process includes an initial application review, reference checks, technical assessments, technical interviews, and a research discussion with the team. We encourage applications from candidates who may not meet every listed qualification. Research shows that individuals from groups underrepresented in tech often experience imposter syndrome and self-select out of applying prematurely. We actively welcome diverse perspectives and believe the strongest candidates may come from non-traditional backgrounds.
Why AI Security Matters
AI systems have profound social and ethical implications. This fellowship offers the opportunity to contribute meaningfully to AI security research during a critical period in the field's development. You'll work on problems that directly impact how AI systems are built to be safer and more aligned with human values.