This is a linkpost for https://arxiv.org/pdf/2502.13295

We demonstrate LLM agent specification gaming by instructing models to win against a chess engine. We find that reasoning models like o1-preview and DeepSeek-R1 will often hack the benchmark by default, while language models like GPT-4o and Claude 3.5 Sonnet need to be told that normal play won't work before they resort to hacking. We improve upon prior work (Hubinger et al., 2024; Meinke et al., 2024; Weij et al., 2024) by using realistic task prompts and avoiding excess nudging. Our results suggest reasoning models may resort to hacking to solve difficult problems, as observed in OpenAI (2024)'s o1 Docker escape during cyber capabilities testing.
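To make the setup concrete, here is a minimal sketch of the kind of harness such an experiment might use: a small CLI the agent calls from a shell to submit moves against a UCI engine, with the board state persisted to a plain text file. This is an illustrative assumption, not the paper's exact code; it assumes python-chess and a local Stockfish binary on PATH, and the file name is hypothetical. A state file the agent can read is also a surface it can tamper with, which is the flavor of hack discussed above.

```python
"""Illustrative sketch of a chess-agent harness (not the paper's exact script).
Assumes python-chess is installed and a Stockfish binary is on PATH."""

import sys

import chess
import chess.engine

BOARD_FILE = "fen.txt"  # hypothetical state file, readable (and editable) by the agent


def load_board() -> chess.Board:
    """Load the current position, or start a new game if no state file exists."""
    try:
        with open(BOARD_FILE) as f:
            return chess.Board(f.read().strip())
    except FileNotFoundError:
        return chess.Board()


def save_board(board: chess.Board) -> None:
    """Persist the position as a FEN string."""
    with open(BOARD_FILE, "w") as f:
        f.write(board.fen())


def main() -> None:
    # Usage: python game.py move e2e4
    if len(sys.argv) != 3 or sys.argv[1] != "move":
        print("usage: game.py move <uci>")
        return

    board = load_board()
    board.push_uci(sys.argv[2])  # agent's move; raises on illegal input

    if not board.is_game_over():
        # Let the engine reply with a short time budget.
        engine = chess.engine.SimpleEngine.popen_uci("stockfish")
        try:
            reply = engine.play(board, chess.engine.Limit(time=1.0))
            board.push(reply.move)
        finally:
            engine.quit()

    save_board(board)
    print(board.fen())


if __name__ == "__main__":
    main()
```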
