BAIST’s 8-week technical AI safety program is a structured, in-person reading group on technical solutions to AI risk, with a focus on AI alignment.

Experts have voiced concerns about the potentially catastrophic impact that advanced AI systems could have. This program focuses on understanding these extreme risks, particularly those caused by misalignment of advanced AI, and possible approaches to mitigating them through technical research.

We will discuss questions such as:

  • What is Artificial General Intelligence, and when might it be developed?

  • What causes AI to be misaligned with human values, and what risks are posed by advanced misaligned AI?

  • What are current promising approaches to AI alignment, monitoring, and evaluation, and how do they scale with rapidly advancing AI?

  • How can you pursue a career in technical AI safety research?

You can see the topics and readings we will do each week in the program’s syllabus (subject to change).

For those interested in the policy/governance side of AI safety, we recommend applying to our policy program. People can participate in both programs.

If you’re already familiar with this material, consider applying for membership instead.

Apply here by Monday, September 16th, 2024, at 23:59 (AoE).

Frequently Asked Questions

Is prior experience expected?

We do not expect any prior knowledge of technical concepts specific to AI safety or of AI risks. However, we do expect prior experience with Deep Learning: at minimum, you should have a good understanding of Deep Learning fundamentals and a basic idea of key architectures and paradigms such as transformers and reinforcement learning. If you are unsure whether you have enough experience, or would like to review or learn any of these concepts, please refer to Session 0 of our syllabus. If you are already familiar with the material in the syllabus, contact us at team@BAIST.ai to discuss other ways of getting involved.

What will be the outcomes of participating?

The program aims to equip you with the knowledge to seriously explore a career in technical AI safety research. We will share relevant internship and job opportunities throughout and after the semester, and we will provide a certificate upon completion of the program.

Who can participate?

All Brown University students are welcome to apply. This includes undergraduate and graduate students from all backgrounds and concentrations.

When and where will the meetings be held?

We will meet on campus once a week for two hours, with dinner or lunch provided. We will try to find a meeting time that works with everyone's schedules; the location is TBD.

What is the time commitment?

Two hours per week for eight weeks (the program ends well before finals period). Each weekly meeting consists of roughly one hour of reading together and one hour of discussion, and no work or reading is required outside these meetings.

How competitive is the application?

We hope to accept everyone who is genuinely interested in exploring the field of AI safety and has sufficient technical knowledge (detailed above), but this may not be possible due to our limited capacity.