Hi there, I'm Jacques - aka JayThibs 👋
- 📄 Here's my resume and my LinkedIn profile. For my AI alignment takes, here's my LessWrong profile.
- 🥅 2024 Goals: Publish at least 3 AI Safety-related papers. Become a world-class research engineer capable of implementing research ideas at high iteration speed. Over-deliver on my independent researcher grant projects.
- 🤝 Collaborating on the Supervising AIs Improving AIs agenda (making automated AI science safe and controllable). The current project involves a new method enabling unsupervised model behaviour evaluations. Our agenda.
- 🌱 Accelerating Alignment: augmenting alignment researchers using AI systems. A relevant talk I gave. Relevant survey post. If you think AI Safety is important and you have a strong application development background, please reach out to me to collaborate!
- 🧪 I'm a research lead at AI Safety Camp for a project on stable reflectivity (testing models for metacognitive capabilities that impact future training/alignment).