Red Teaming Domain Expert - AI Training (Contract)
Handshake
Location
Remote (USA)
Employment Type
Contract
Location Type
Remote
Department
HAI AI Tutor
Compensation
$40.00 – $60.00 per hour
For cash compensation, we set standard ranges for all U.S.-based roles based on function, level, and geographic location, benchmarked against similar-stage growth companies. To comply with local legislation and provide greater transparency to candidates, we share salary ranges on all job postings regardless of desired hiring location. Final offer amounts are determined by multiple factors, including geographic location as well as candidate experience and expertise, and may vary from the amounts listed above.
About Handshake AI
Handshake is building the career network for the AI economy. Our three-sided marketplace connects 18 million students and alumni, 1,500+ academic institutions across the U.S. and Europe, and 1 million employers to power how the next generation explores careers, builds skills, and gets hired.
Handshake AI is a human data labeling business that leverages the scale of the largest early career network. We work directly with the world’s leading AI research labs to build a new generation of human data products. From PhDs in physics to undergrads fluent in LLMs, Handshake AI is the trusted partner for domain-specific data and evaluation at scale.
This is a unique opportunity to join a fast-growing team shaping the future of AI through better data, better tools, and better systems—for experts, by experts.
Now’s a great time to join Handshake. Here’s why:
Leading the AI Career Revolution: Be part of the team redefining work in the AI economy for millions worldwide.
Proven Market Demand: Deep employer partnerships across Fortune 500s and the world’s leading AI research labs.
World-Class Team: Leadership from Scale AI, Meta, xAI, Notion, Coinbase, and Palantir, just to name a few.
Capitalized & Scaling: $3.5B valuation from top investors including Kleiner Perkins, True Ventures, Notable Capital, and more.
About the Role
As a Red Teamer, you will stress-test AI models by intentionally trying to break them. Instead of checking whether an answer is correct, you’ll design creative, adversarial prompts that expose vulnerabilities—unsafe content, bias, broken guardrails, or unexpected behaviors. Your work directly supports AI safety and model robustness for leading research labs.
This role requires creativity, curiosity, and an ability to think like an adversary while operating with strong ethical judgment. No technical background is required. What matters most is how you think, how you write, and how you problem-solve.
This is a remote contract position with variable time commitments, typically 10–20 hours per week.
Day-to-day responsibilities include
Crafting creative prompts and scenarios to intentionally stress-test AI guardrails
Discovering ways around safety filters, restrictions, and defenses
Exploring edge cases to provoke disallowed, harmful, or incorrect outputs
Documenting experiments clearly, including what you tried and why
Reviewing and refining adversarial prompts generated by Fellows
Collaborating with engineers, tutors, and researchers to share findings and strengthen defenses
Working with potentially disturbing content, including violence, explicit topics, and hate speech
Staying current on jailbreaks, attack methods, and evolving model behaviors
Desired Capabilities
Strong hands-on experience using multiple LLMs
Intuition for crafting prompts; familiarity with jailbreak or evasion techniques is a plus
Creative, adversarial problem-solving skills
Clear and thoughtful written communication
Ability to tolerate emotionally heavy or graphic content
Curiosity, persistence, and comfort with frequent failure in experimentation
Strong ethical judgment and ability to separate adversarial thinking from personal values
Self-directed, collaborative, and comfortable in feedback-heavy environments
You go deep into unusual interests (fandoms, niche internet cultures, gaming exploits, Wikipedia rabbit holes, etc.)
You come from a creative background (writing, visual art, etc.)
You are obsessed with AI and can’t stop talking about it
Extra Credit
Prior red teaming, moderation, or adversarial testing experience
Background in writing, gaming, improv, or niche internet subcultures
Experience documenting complex processes or research
Familiarity with safety, trust & safety, or digital security concepts
Additional Information
Engagement: Contract, remote, variable time commitment
Schedule: Flexibility required, with some evening or weekend availability
Location: Fully remote (no visa sponsorship available)
Technical Requirements: Personal device running Windows 10 or macOS Big Sur 11.0+ and reliable smartphone access