Refactor voice control core and robot behavior

cristhian aguilera
2026-02-02 12:29:59 -03:00
parent b9798a2f46
commit 695d309816
36 changed files with 3436 additions and 1065 deletions

CLAUDE.md Normal file

@@ -0,0 +1,44 @@
# CLAUDE.md
## Purpose
Project guidelines for Claude Code working in this repository.
## Project Overview
Robotics project that uses Dora to control a ULite6 robot arm through Spanish voice commands, aimed at children.
## Building and Running
- Build: `dora build <dataflow.yml> --uv`
- Start: `dora run <dataflow.yml> --uv`
## Key Packages
- `dora_voice_control`: Voice processing and robot behavior
- `dora_yolo_object_detector`: YOLO-based object detection
- `dora_ulite6`: Robot arm control
- `dora_zed_cpp`: ZED camera (C++)
- `dora_iobridge`: WebSocket bridge for external clients
## Robot Adapter
Voice control uses a robot adapter pattern to support different robots:
- `ROBOT_TYPE=vacuum`: Vacuum gripper (grab→vacuum_on, release→vacuum_off)
- `ROBOT_TYPE=gripper`: Parallel gripper (grab→gripper_close, release→gripper_open)
To add a new robot, create a new adapter in `robot/adapter.py`.
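A minimal sketch of what such an adapter might look like. Only the `ROBOT_TYPE` values and the grab/release mappings come from this file; the class names, factory, and return-string convention are assumptions, not the repository's actual API:

```python
# Hypothetical adapter sketch; names are illustrative, only the
# ROBOT_TYPE -> command mappings come from the documentation above.
import os
from abc import ABC, abstractmethod


class RobotAdapter(ABC):
    """Maps abstract voice intents (grab/release) to robot-specific commands."""

    @abstractmethod
    def grab(self) -> str: ...

    @abstractmethod
    def release(self) -> str: ...


class VacuumAdapter(RobotAdapter):
    def grab(self) -> str:
        return "vacuum_on"

    def release(self) -> str:
        return "vacuum_off"


class GripperAdapter(RobotAdapter):
    def grab(self) -> str:
        return "gripper_close"

    def release(self) -> str:
        return "gripper_open"


def make_adapter() -> RobotAdapter:
    # ROBOT_TYPE is read from the environment, as described above.
    robot_type = os.environ.get("ROBOT_TYPE", "vacuum")
    adapters = {"vacuum": VacuumAdapter, "gripper": GripperAdapter}
    try:
        return adapters[robot_type]()
    except KeyError:
        raise ValueError(f"Unknown ROBOT_TYPE: {robot_type!r}")
```

Adding a new robot then means adding one more `RobotAdapter` subclass and registering it in the factory, without touching the voice-processing code.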
## Guidelines
- Keep the event loop responsive: run slow operations (e.g. LLM calls) in threads, as sketched after this list.
- Robot commands are serialized: one command at a time; wait for its status before sending the next.
- State is shared via `SharedState` (thread-safe).
- Handlers process events; tick callbacks run after each event.
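A hedged sketch of the threading and shared-state conventions. `SharedState` is given an assumed lock-protected shape here, and `slow_llm_call` stands in for a real blocking model call; neither is the repository's actual API:

```python
# Illustrative sketch, assuming a lock-protected SharedState and a
# blocking LLM call offloaded to a worker thread.
import threading


class SharedState:
    """Thread-safe key/value state shared between handlers and worker threads."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._data: dict[str, object] = {}

    def set(self, key: str, value: object) -> None:
        with self._lock:
            self._data[key] = value

    def get(self, key: str, default: object = None) -> object:
        with self._lock:
            return self._data.get(key, default)


def slow_llm_call(text: str) -> str:
    # Placeholder for a blocking model call.
    return f"ack: {text}"


def handle_transcript(state: SharedState, text: str) -> None:
    # The slow operation runs in a worker thread so the event loop
    # keeps processing incoming events while the model responds.
    def worker() -> None:
        reply = slow_llm_call(text)
        state.set("last_reply", reply)

    threading.Thread(target=worker, daemon=True).start()
```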
## Safety
- This controls a real robot. Test changes carefully.
- Workspace bounds are enforced in behaviors.
- Use `DRY_RUN=true` to test without robot motion.
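For illustration, `DRY_RUN` gating might look like the following sketch; `send_command` and `dispatch_to_robot` are hypothetical names, not the repository's real interface:

```python
# Hypothetical DRY_RUN gate; only the DRY_RUN=true environment flag
# comes from the documentation above.
import os

DRY_RUN = os.environ.get("DRY_RUN", "false").lower() == "true"


def send_command(command: str) -> None:
    """Dispatch one serialized robot command, honoring DRY_RUN."""
    if DRY_RUN:
        print(f"[dry-run] would send: {command}")
        return
    dispatch_to_robot(command)


def dispatch_to_robot(command: str) -> None:
    # Placeholder: the real node would publish the command and wait for
    # a status event before sending the next one, per the guidelines.
    ...
```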