Robots

Boston Dynamics Spot

The VCAI lab maintains a Boston Dynamics Spot robot as an embodied research platform to study social navigation and fire-aware navigation in complex, real-world environments. Spot’s mobility and sensing capabilities allow us to investigate how robots perceive and interpret dynamic scenes that include people, obstacles, and hazardous phenomena such as fire and smoke. Our work emphasizes robust visual perception as the primary modality for understanding the environment, enabling the robot to reason about social context, spatial layout, and safety constraints while operating in close proximity to humans.
A central focus of this research is the integration of vision–language models (VLMs) and vision–language–action (VLA) models to connect visual perception with high-level reasoning and decision-making. These models allow Spot to ground language instructions and semantic concepts—such as social norms, navigational intent, or fire-related risks—directly in visual observations, supporting adaptive and interpretable behavior. By leveraging visual perception as the foundation for navigation and interaction, our work aims to advance embodied AI systems that can operate safely, socially, and intelligently in unstructured and potentially hazardous environments.
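One way the idea of grounding semantic concepts in navigation can be illustrated is by mapping VLM-derived scene labels to traversal costs. The sketch below is a hypothetical, simplified illustration, not the lab's actual pipeline: the labels (`person`, `fire`, `smoke`, `floor`), the cost values, and the functions `cell_cost` and `choose_heading` are all assumptions made for this example. In a real system the labels would come from querying a vision-language model on camera frames.

```python
# Hypothetical sketch (not the lab's actual system): turning semantic
# labels, such as those a VLM might assign to regions of a camera frame,
# into navigation costs that encode social and safety constraints.

SEMANTIC_COST = {
    "person": 5.0,          # social cost: keep distance, slow down
    "fire": float("inf"),   # hard safety constraint: never traverse
    "smoke": 20.0,          # hazardous but potentially passable
    "floor": 1.0,           # free space
}

def cell_cost(labels):
    """Cost of traversing a region given the semantic labels assigned to it."""
    if not labels:
        return 1.0
    return max(SEMANTIC_COST.get(label, 1.0) for label in labels)

def choose_heading(candidate_regions):
    """Pick the candidate heading whose region has the lowest semantic cost."""
    return min(candidate_regions, key=lambda h: cell_cost(candidate_regions[h]))

# Placeholder labels standing in for per-heading VLM output on one frame.
candidates = {
    "left": ["floor"],
    "forward": ["person", "floor"],
    "right": ["fire"],
}
print(choose_heading(candidates))  # -> left
```

Here a region containing fire receives infinite cost, so it is never chosen, while a region containing a person remains traversable but is penalized, which is one simple way to encode both hard safety constraints and softer social norms in the same cost structure.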

Kiel University
Department of Computer Science
Visual Computing and Artificial Intelligence
Neufeldtstraße 6 (Ground Floor)
D-24118 Kiel
Germany

 © Visual Computing and Artificial Intelligence Group 2025