
This is the page for jobs based in the US; for international hires, see here.

Palisade is hiring for the following roles:

  • Executive Assistant to the Executive Director (Jeffrey Ladish)

  • Content Lead

  • Operations Lead

  • Policy Lead

Applications are rolling, and you can fill out our short (~10-minute) application form here. Feel free to indicate interest in multiple positions, or make a general expression of interest. We’re flexible in structuring roles and open to a single candidate taking on more than one position if they’re a great fit.

We’re aiming to make hiring decisions by December 1st, 2024, so please apply as soon as possible.

About us

Palisade is a 501(c)(3) organization that studies the strategic capabilities of AI systems to better understand the risk of losing control to AI systems and other catastrophic outcomes. Palisade was founded by Jeffrey Ladish, a security professional with experience securing frontier AI systems and investigating ways to reduce catastrophic risks from emerging technologies. Our work has been cited in the US Senate and used to brief a range of government agencies, including the UK AI Safety Institute.

We believe that successfully navigating the intelligence explosion without losing control to AI systems will require significant national and international coordination. To this end, our immediate goal is to build awareness among policymakers regarding the trajectory of AI development, the AI risk landscape, and the space of potential solutions to loss-of-control risks. We’re achieving this by building rigorous technical demonstrations of strategic AI capabilities and alignment failures, to help us communicate key messages on AI risk to influential actors in AI policy.

Past work

  • We built a series of BadLlama models, which show how safety guardrails can be subverted for just a few dollars, allowing bad actors to utilize the full offensive capabilities of open-weight models. This work was cited in a Senate hearing.

  • We co-organized the AI Security Forum before DEFCON, which brought together top security researchers, lab security teams, and policy makers to address urgent problems in: securing AI systems, evaluating the offensive and defensive cyber capabilities of frontier models, and creating secure mechanisms for technical governance and oversight.

  • We presented Ursula, our automated voice deepfake platform, to several audiences of high-net-worth individuals and policy makers.

  • We developed FoxVox, an open-source Chrome extension that demonstrates how AI could be used to manipulate the content you consume.

Executive Assistant to the Executive Director (Jeffrey Ladish)

Jeffrey’s time is one of Palisade’s biggest bottlenecks, and the right person in this role could help leverage Jeffrey’s time and significantly increase Palisade’s effectiveness.

Responsibilities:

  • Managing Jeffrey’s inbound and outbound communications.

  • Helping prioritize his time, including managing his schedule and booking meetings.

  • The best candidates will be able to support Jeffrey with his day-to-day work, such as helping him prepare presentations and briefings and accompanying him on trips to DC.

An ideal candidate gets along well with Jeffrey, is a strong written and spoken communicator, has some facilitation / space-holding skills, and has excellent organization and prioritization skills.

Content Lead

We’re looking for someone to take responsibility for Palisade’s content. Concretely, that involves writing static website content about our research, and working closely with Jeffrey to produce presentations that lay out the AI risk argument.

More fundamentally, we’re trying to create content that actually conveys key messages on AI risk to the most influential people in AI policy. The Content Lead would work closely with Jeffrey Ladish and Ben Weinstein-Raun to refine our messages and produce excellent content based on these messages.

You might be a good fit if:

  • You are a very skilled written communicator: you write quickly, compellingly, and with high integrity.

  • You already know a lot about AI risk (you don’t have to know a lot about AI policy).

  • You’re good at finding crisp concepts to communicate difficult ideas about the state of AI and AI risk.

  • You know what looks good and can take responsibility for the look and feel of our work.

  • (Preferred) You have multimedia communications experience and know how to integrate tables, charts, graphs, images, and/or video to clearly communicate complex ideas.

Operations Lead

We want someone to help us operate smoothly and effectively. Our Operations Lead should act quickly and reliably, proactively spot problems and fix them, and take responsibility for whatever needs to get done: someone we can trust to handle whatever comes up.

Responsibilities:

  • Help coordinate events like team retreats, collaborator events, happy hours, and external events like the AI Security Forum ‘24.

  • Manage our financial processes, including managing our finance contractor, budgeting and monitoring, financial forecasting, and helping with grant applications and reporting.

  • Help with project management by tracking OKRs, organizing project meetings, and investigating project management tools.

  • Build a supportive and lively office culture in our shared MIRI-Palisade office. MIRI handles the physical space, but we want to get good people together there and make good things happen.

  • Help run hiring rounds by managing our Airtable, creating job descriptions, searching for candidates, and screening applicants.

  • Manage relationships and information by maintaining internal documents, helping with cross-team coordination, and keeping a CRM to track key relationships.

  • Liaise with lawyers to make sure our research is compliant and to secure visas for new team members.

You might be a good fit if:

  • You enjoy getting stuff done. You’re excited to contribute to impactful work in a supportive capacity, even when some of the tasks are indirect or behind-the-scenes.

  • You like the scope of a generalist operations role, and owning a broad range of areas.

  • You have operations experience, and are excited to build out Palisade’s operations processes from the ground up as the team grows. An ideal Operations Lead would grow into a COO role.

Policy Lead

We want to help policy makers get an accurate map of the AI risk landscape and potential solutions to loss-of-control risks. For that, we need someone to:

  • Maintain and grow our policy network.

  • Develop Palisade’s short-term recommendations for government, alongside our recommendations for what government action will be necessary in the long term to mitigate AI risk.

  • Work closely with Jeffrey to prepare and deliver briefings and presentations to key policymakers.

Our Policy Lead needs to have a strong understanding of the AI policy landscape and how to get things done in the US government, as well as excellent written and spoken communication skills.

Salary

Since each role could accommodate varying levels of seniority, salary will depend strongly on your skill and experience — likely falling between $70K and $150K. We also offer a generous health care and benefits package.

Location and visas

The Executive Assistant, Content Lead, and Operations Lead roles are in-person at our Berkeley office, where we’d expect you to work at least 50% of the time. We’re open to our Policy Lead being based in DC, as long as they can make regular trips to Berkeley.

We can sponsor US visas for technical roles, and we’re H-1B cap-exempt.

Our application process

Applications are rolling. Here’s a rough sense of what to expect, although the process may vary on a case-by-case basis:

  • Application form (~10 minutes)

  • Interview (culture)

  • Work test (paid, ~2-8h)

  • Work trial (paid, ideally 1-3 weeks). We realize that this is a big ask, and we’re open to adapting to your circumstances.

Please consider applying, and feel free to reach out to charlie@palisaderesearch.org if you have any questions. We look forward to hearing from you :)