AI Trust & Safety Research

Discover the latest research from Amii's Applied Research Scientists in AI Trust & Safety

Learning to be Cautious

Published: 12 Oct 2025 | Read paper

Authors

Montaser Mohammedalamen, Dustin Morrill, Alexander Sieusahai, Yash Satsangi, Michael Bowling

Abstract

A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations. It is generally impossible to anticipate all situations that an autonomous system may face or what behavior would best avoid bad outcomes. An agent that could learn to be cautious would overcome this challenge by discovering for itself when and how to behave cautiously. In contrast, current approaches typically embed task-specific safety information or explicit cautious behaviors into the system, which is error-prone and imposes extra burdens on practitioners. In this paper, we present both a sequence of tasks where cautious behavior becomes increasingly non-obvious, and an algorithm to demonstrate that it is possible for a system to learn to be cautious. The essential features of our algorithm are that it characterizes reward function uncertainty without task-specific safety information and uses this uncertainty to construct a robust policy. Specifically, we construct robust policies with a k-of-N counterfactual regret minimization (CFR) subroutine, given learned reward function uncertainty represented by a neural network ensemble. These policies exhibit caution in each of our tasks without any task-specific safety tuning.

Our code is available at https://github.com/montaserFath/Learning-to-be-Cautious.
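To make the k-of-N idea concrete, here is a minimal sketch in a one-shot (single-state) setting. It is not the paper's implementation: a Gaussian sampler stands in for the learned neural network ensemble belief, and the names `sample_reward` and `k_of_n_regret_matching` are illustrative. At each iteration it samples N reward functions, keeps the k that are worst for the current policy, and updates a regret-matching policy (the one-shot core of CFR) against their average.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_ACTIONS = 4

def sample_reward(rng):
    # One draw from a stand-in reward belief (assumption: Gaussian noise
    # around a nominal reward, with the last action highly uncertain).
    mean = np.array([0.2, 0.5, 0.3, 0.6])
    std = np.array([0.05, 0.05, 0.05, 0.8])  # action 3 is poorly understood
    return rng.normal(mean, std)

def k_of_n_regret_matching(k=2, n=10, iters=2000):
    """Sketch of a k-of-N robust policy via regret matching."""
    regrets = np.zeros(NUM_ACTIONS)
    strategy_sum = np.zeros(NUM_ACTIONS)
    for _ in range(iters):
        # Regret matching: play in proportion to positive regret, else uniform.
        pos = np.maximum(regrets, 0.0)
        policy = pos / pos.sum() if pos.sum() > 0 else np.full(NUM_ACTIONS, 1 / NUM_ACTIONS)
        strategy_sum += policy
        # Sample N reward functions; keep the k worst for the current policy.
        samples = np.stack([sample_reward(rng) for _ in range(n)])
        values = samples @ policy
        worst_k = samples[np.argsort(values)[:k]].mean(axis=0)
        # Accumulate counterfactual regret against this adversarial average.
        regrets += worst_k - worst_k @ policy
    return strategy_sum / strategy_sum.sum()  # average strategy

print(k_of_n_regret_matching())
```

With these toy numbers, the average strategy should shift probability away from the high-mean but high-variance action toward the well-understood one: the cautious behavior the abstract describes, obtained without any task-specific safety information.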

Watch the video

Amii Tea Time Talks: "Learning to be Cautious" by Montaser Mohammedalamen