
Contextual Safety Reasoning and Grounding for Open-World Robots
1University of Pennsylvania, 2Carnegie Mellon University

CORE infers contextually-appropriate safety specifications from visual observations, grounds these specifications in the robot's environment, then enforces them via a control barrier function (CBF).

Abstract


Robots are increasingly operating in open-world environments where safe behavior depends on context: the same hallway may require different navigation strategies when crowded versus empty, or during an emergency versus normal operations. Traditional safety approaches enforce fixed constraints in user-specified contexts, limiting their ability to handle the open-ended contextual variability of real-world deployment. We address this gap via CORE, a safety framework that enables online contextual reasoning, grounding, and enforcement without prior knowledge of the environment (e.g., maps or safety specifications). CORE uses a vision-language model (VLM) to continuously reason about context-dependent safety rules directly from visual observations, grounds these rules in the physical environment, and enforces the resulting spatially-defined safe sets via control barrier functions. We provide probabilistic safety guarantees for CORE that account for perceptual uncertainty, and we demonstrate through simulation and real-world experiments that CORE enforces contextually appropriate behavior in unseen environments, significantly outperforming prior semantic safety methods that lack online contextual reasoning. Ablation studies validate our theoretical guarantees and underscore the importance of both VLM-based reasoning and spatial grounding for enforcing contextual safety in novel settings.


Overview


As robots operate in open-world environments, safety is increasingly contextual. For example, a robot may freely navigate an empty hallway, but the presence of hazard signs may indicate danger. Because context is unknown beforehand, the robot must reason about appropriate constraints, ground these in the environment, and then enforce them online.
CORE addresses this problem by first using a VLM to infer context-dependent safety constraints from visual observations.
CORE then grounds these constraints in the robot's physical environment via a barrier function.
Finally, CORE enforces these grounded constraints via a control barrier function (CBF).
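To illustrate the enforcement step (this is a minimal sketch, not the paper's implementation), consider a CBF safety filter for single-integrator dynamics with a circular keep-out region, such as one a VLM might flag around a hazard. The barrier h(x) = ||x - c||^2 - r^2 is nonnegative when the robot is outside the region, and the filter minimally modifies the nominal control so that grad_h . u + alpha * h >= 0; for a single constraint, this quadratic program has a closed-form projection. The function names and the circular-region choice are illustrative assumptions.

```python
import numpy as np

def cbf_filter(u_nom, grad_h, h, alpha=1.0):
    """Minimally modify u_nom so the CBF condition
    grad_h . u + alpha * h >= 0 holds (single-integrator dynamics).

    This is an illustrative sketch: for one linear constraint, the
    safety-filter QP reduces to a projection onto a half-space.
    """
    c = grad_h @ u_nom + alpha * h
    if c >= 0:
        return u_nom  # nominal control already satisfies the CBF condition
    # Closed-form QP solution: shift u_nom along grad_h just enough
    # to reach the boundary of the safe half-space.
    return u_nom - c * grad_h / (grad_h @ grad_h)

def circular_barrier(x, center, radius):
    """Barrier for a circular keep-out region: h >= 0 outside it."""
    d = x - center
    return d @ d - radius**2, 2.0 * d  # (h, grad_h)

# Example: robot at (2, 0) heading toward a unit-radius hazard at the origin.
x = np.array([2.0, 0.0])
h, grad_h = circular_barrier(x, np.array([0.0, 0.0]), 1.0)
u = cbf_filter(np.array([-1.0, 0.0]), grad_h, h)  # slowed to [-0.75, 0]
```

In CORE, the barrier itself would come from grounding VLM-inferred constraints in the observed scene rather than from a hand-specified circle; this sketch only shows how a grounded barrier yields a safe control online.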

Evaluation


We evaluate CORE's ability to enforce contextually safe behavior. For example, a robot should navigate on a sidewalk rather than cut through grass, and it should exercise caution around an operating forklift.
We evaluate in simulated and real-world environments, including a hospital, a warehouse, a residential area, and a lab space.
We compare against an oracle, a safety framework without online contextual reasoning, and a geometric safety filter. We find that CORE is competitive with the oracle while significantly outperforming the baselines without contextual reasoning.
Finally, we provide an example from our real-world experiments in which CORE must recognize that a barrier of cones indicates a prohibited area.

BibTeX

@article{ravichandran_core,
      title={Contextual Safety Reasoning and Grounding for Open-World Robots},
      author={Zachary Ravichandran and David Snyder and Alexander Robey and Hamed Hassani and Vijay Kumar and George J. Pappas},
      year={2026},
      journal={arXiv preprint arXiv:2602.19983},
      url={https://arxiv.org/abs/2602.19983},
}

Webpage adapted from nerfies.