camerad — Openpilot Camera Pipeline¶
Goal: Understand how openpilot captures, processes, and delivers camera frames to the perception stack. camerad is the first process in the pipeline: raw sensor → ISP → VisionIpc → modeld.
Table of Contents¶
- Overview
- Architecture
- Camera Configuration
- Hardware Stack
- ISP Pipeline
- Auto Exposure (AE)
- VisionIpc
- Data Flow
- Source Map
- Code Walkthrough — Every Piece
- Key Concepts
1. Overview¶
camerad is a native C++ process that:
- Captures frames from up to three cameras (wide road, road, driver)
- Runs them through the Qualcomm Spectra ISP (Image Signal Processor)
- Publishes YUV frames via VisionIpc (shared memory IPC)
- Publishes FrameData (metadata) via cereal messaging
- Implements auto exposure (AE) to adapt to lighting
Location: openpilot/system/camerad/
Entry point: main.cc → camerad_thread() (pinned to CPU core 6)
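The core pinning can be reproduced in a few lines. A minimal sketch using Linux's `sched_setaffinity` (camerad uses core 6 on the comma 3X; core 0 is used here only so the sketch runs on any machine):

```python
import os

def pin_to_core(core: int) -> set:
    """Pin the calling process to a single CPU core (Linux only)."""
    os.sched_setaffinity(0, {core})   # pid 0 = the current process
    return os.sched_getaffinity(0)    # read back the effective mask

# camerad pins itself to core 6; core 0 here so this runs on any machine
saved = os.sched_getaffinity(0)
print(pin_to_core(0))
os.sched_setaffinity(0, saved)        # undo the demo pin
```

openpilot does this in C++ at startup; the effect is the same: the scheduler will only run the process on the chosen core.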
2. Architecture¶
┌─────────────────────────────────────────────────────────────────────────────┐
│ camerad │
├─────────────────────────────────────────────────────────────────────────────┤
│ Sensor (I2C) │ CSI/MIPI │ IFE (Image Front End) │ BPS (optional) │
│ OX03C10/OS04C10 │ → RAW │ Demosaic, CCM, Gamma │ Downscale │
│ │ │ Vignetting correction │ (driver cam) │
└─────────────────────────────────────────────────────────────────────────────┘
│ │
▼ ▼
Exposure control YUV output
(AE algorithm) VisionIpc buffers
│ │
└────────────────┬───────────────────┘
▼
FrameData (cereal)
frame_id, timestamps, gain, exposure
Three cameras (comma 3X):
| Camera | Stream | Role | Focal length |
|---|---|---|---|
| Wide road | VISION_STREAM_WIDE_ROAD | Wide FoV, peripheral | 1.71 mm |
| Road | VISION_STREAM_ROAD | Main driving view | 8.0 mm |
| Driver | VISION_STREAM_DRIVER | Driver monitoring (DMS) | 1.71 mm |
3. Camera Configuration¶
Defined in cameras/hw.h:
// Wide: fisheye, peripheral
WIDE_ROAD_CAMERA_CONFIG = {
.camera_num = 0,
.stream_type = VISION_STREAM_WIDE_ROAD,
.focal_len = 1.71,
.publish_name = "wideRoadCameraState",
.output_type = ISP_IFE_PROCESSED,
};
// Road: main forward-facing, narrow FoV
ROAD_CAMERA_CONFIG = {
.camera_num = 1,
.stream_type = VISION_STREAM_ROAD,
.focal_len = 8.0,
.publish_name = "roadCameraState",
.vignetting_correction = true,
.output_type = ISP_IFE_PROCESSED,
};
// Driver: cabin-facing
DRIVER_CAMERA_CONFIG = {
.camera_num = 2,
.stream_type = VISION_STREAM_DRIVER,
.focal_len = 1.71,
.publish_name = "driverCameraState",
.output_type = ISP_BPS_PROCESSED, // BPS for extra processing
};
Python intrinsics (for modeld warp, calibration): common/transformations/camera.py
- Road: 1928×1208, focal 2648 px (OX) or 1344×760, focal ~1142 px (OS)
- Wide: 1928×1208, focal 567 px
- Driver: same as wide
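From the focal lengths and resolutions above, the pinhole intrinsic matrix and an approximate field of view follow directly. A sketch (principal point assumed at the image center; the pinhole FoV formula understates the wide camera's true fisheye FoV):

```python
import math

def intrinsic_matrix(focal_px: float, width: int, height: int):
    """Pinhole intrinsic matrix K, principal point at the image center."""
    return [[focal_px, 0.0, width / 2],
            [0.0, focal_px, height / 2],
            [0.0, 0.0, 1.0]]

def horizontal_fov_deg(focal_px: float, width: int) -> float:
    """Pinhole horizontal FoV; underestimates a fisheye lens."""
    return math.degrees(2 * math.atan(width / (2 * focal_px)))

# Numbers quoted above for the OX03C10 cameras
K_road = intrinsic_matrix(2648, 1928, 1208)
print(horizontal_fov_deg(2648, 1928))  # narrow road cam: ~40 deg
print(horizontal_fov_deg(567, 1928))   # wide cam, pinhole approx: ~119 deg
```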
4. Hardware Stack¶
Sensors¶
| Sensor | Device | Resolution | Notes |
|---|---|---|---|
| OX03C10 | comma 3X (tici/tizi) | 1928×1208 | OmniVision, 3 MP |
| OS04C10 | comma 3X (mici) | 2688×1520 | OmniVision, 4 MP |
| AR0231 | Legacy (neo) | 1164×874 | Aptina |
Sensor interface: I2C for register control (exposure, gain, init). MIPI CSI for image data.
Files: sensors/ox03c10.cc, sensors/os04c10.cc, sensors/sensor.h
Qualcomm Spectra ISP¶
- IFE (Image Front End): demosaic, color correction (CCM), gamma, vignetting
- BPS (Bayer Processing Segment): used for driver camera (extra downscale/processing)
- CDM (Camera Data Mover): DMA, buffer management
- CSIPHY: MIPI CSI physical layer
Kernel: Linux V4L2 (Video4Linux2), CAM_REQ_MGR (Request Manager) for frame synchronization.
V4L2: Linux Kernel Camera API¶
V4L2 (Video4Linux2) is the standard Linux kernel API for video capture, output, and codecs. camerad uses V4L2 to drive the Qualcomm Spectra ISP and receive processed frames.
What V4L2 Provides¶
| Concept | Description |
|---|---|
| Device nodes | /dev/video0, /dev/video1, … — one per capture/output device. camerad opens /dev/video0 (sync device) for the request manager. |
| Media Controller | Models the pipeline: sensor → CSI → IFE → BPS → video node. Used to discover and link entities. |
| Request API | One request = one frame through the pipeline. Sensor capture, IFE, BPS all tied to the same request for synchronization. |
| Buffer flow | VIDIOC_QBUF enqueue empty buffer → hardware fills it → VIDIOC_DQBUF dequeue filled buffer. |
camerad's V4L2 Flow¶
1. Open /dev/video0 (sync/request manager device)
2. Media Controller: discover sensor, IFE, BPS entities; configure links
3. VIDIOC_REQBUFS: allocate capture buffers (YUV output)
4. VIDIOC_QUERYBUF: get buffer addresses for mmap()
5. VIDIOC_QBUF: enqueue buffers into the pipeline
6. VIDIOC_STREAMON: start streaming
7. poll(fd, POLLPRI): wait for frame-done event
8. VIDIOC_DQEVENT: dequeue event (frame completed)
9. processFrame() → YUV ready → sendFrameToVipc()
10. VIDIOC_QBUF: re-enqueue buffer for next frame
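The QBUF/DQBUF hand-off in steps 3–10 can be modeled without hardware. A toy simulation (plain Python, no real V4L2 calls) of buffer ownership moving between userspace and the driver:

```python
from collections import deque

class V4L2QueueModel:
    """Toy model of V4L2 buffer exchange: QBUF -> driver fills -> DQBUF."""
    def __init__(self, n_buffers: int):
        self.driver_queue = deque()          # buffers owned by the "driver"
        self.filled = deque()                # buffers ready for userspace
        self.buffers = [bytearray(16) for _ in range(n_buffers)]

    def qbuf(self, index: int):
        self.driver_queue.append(index)      # hand an empty buffer to the driver

    def hw_frame_done(self, payload: bytes):
        index = self.driver_queue.popleft()  # "hardware" fills the oldest queued buffer
        self.buffers[index][:len(payload)] = payload
        self.filled.append(index)

    def dqbuf(self) -> int:
        return self.filled.popleft()         # userspace takes back a filled buffer

q = V4L2QueueModel(n_buffers=3)
for i in range(3):
    q.qbuf(i)                # steps 3-5: enqueue all buffers
q.hw_frame_done(b"frame0")   # ISP completes a frame
idx = q.dqbuf()              # steps 8-9: dequeue and process
q.qbuf(idx)                  # step 10: re-enqueue for the next frame
```

The key invariant: a buffer is owned by exactly one side at a time, so the driver never writes into memory userspace is still reading.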
Key ioctls Used by camerad¶
| ioctl | Purpose |
|---|---|
| VIDIOC_REQBUFS | Allocate buffer queue (memory-mapped or DMA) |
| VIDIOC_QUERYBUF | Get buffer info (offset, length) for mmap |
| VIDIOC_QBUF | Enqueue buffer for capture |
| VIDIOC_DQBUF | Dequeue filled buffer (alternatively, events can signal completion) |
| VIDIOC_DQEVENT | Dequeue async event (e.g. frame done) — used with Request API |
| VIDIOC_STREAMON / VIDIOC_STREAMOFF | Start/stop streaming |
| VIDIOC_S_FMT | Set pixel format (e.g. NV12) |
| VIDIOC_S_PARM | Set frame rate |
Event-Driven Model¶
camerad uses event-driven capture, not blocking DQBUF:
- poll(fd, POLLPRI) — wait until a frame-done event is available
- VIDIOC_DQEVENT — dequeue the event; its payload indicates which request/frame completed
- Process the frame, re-enqueue buffers, then poll again
POLLPRI (priority/exceptional condition) is set when an event is pending. This avoids busy-waiting and integrates cleanly with the Request API.
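The poll-then-drain pattern has the same shape in Python's `select` module. As a runnable stand-in, this sketch polls a pipe for POLLIN (a real /dev/video0 would be registered for POLLPRI instead, and the drain would be a VIDIOC_DQEVENT ioctl):

```python
import os
import select

r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)   # camerad registers POLLPRI on video0_fd

assert poller.poll(0) == []         # no event pending yet -> empty list
os.write(w, b"frame-done")          # simulate the kernel queuing an event
events = poller.poll(1000)          # camerad blocks here between frames
fd, flags = events[0]
assert fd == r and flags & select.POLLIN
os.read(r, 64)                      # analogous to draining via VIDIOC_DQEVENT
os.close(r); os.close(w)
```

Because the process sleeps inside poll() until the kernel signals, there is no busy-waiting between frames.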
CAM_REQ_MGR (Qualcomm Request Manager)¶
On Qualcomm platforms, CAM_REQ_MGR is a kernel component that:
- Synchronizes sensor capture with ISP (IFE, BPS) — one request ties them together
- Queues requests — each request carries buffers and controls for the whole pipeline
- Signals completion — when the pipeline finishes a frame, an event is queued for userspace
Without the Request API, sensor and ISP could run out of sync (e.g. wrong exposure applied to a frame). The Request API guarantees: this sensor frame → this IFE config → this output buffer.
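That guarantee can be pictured as a request object that travels through the pipeline as one unit. A hypothetical sketch (names are illustrative, not the kernel API):

```python
from dataclasses import dataclass

@dataclass
class Request:
    """One frame through the pipeline: sensor settings + output buffer, bundled."""
    request_id: int
    exposure_time_us: int  # applied to the sensor for exactly this frame
    analog_gain: float
    buffer_index: int      # the YUV buffer this frame will land in

def complete(req: Request) -> dict:
    # Because everything rode on one request, there is no ambiguity about
    # which exposure settings produced which output buffer.
    return {"frame": req.request_id, "buf": req.buffer_index,
            "exposure_us": req.exposure_time_us}

req = Request(request_id=42, exposure_time_us=8000, analog_gain=2.0, buffer_index=3)
print(complete(req))
```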
Device Layout (Qualcomm Spectra)¶
| Device | Role |
|---|---|
| /dev/video0 | Sync/request manager — camerad polls here for frame-done events |
| /dev/media0 | Media controller — pipeline topology (sensor ↔ IFE ↔ BPS) |
| Subdevs (e.g. /dev/v4l-subdev*) | Sensor, IFE, BPS as separate entities; configured via the media controller |
Debugging V4L2¶
# List video devices and capabilities
v4l2-ctl --list-devices
# List supported formats
v4l2-ctl -d /dev/video0 --list-formats-ext
# Media controller: show pipeline
media-ctl -d /dev/media0 -p
References: V4L2 API (kernel.org), VIDIOC_QBUF/VIDIOC_DQBUF, V4L2 poll()
5. ISP Pipeline¶
Output format: NV12 (Y plane + interleaved UV plane). Standard format for vision/ML.
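The NV12 layout determines buffer sizing: a full-resolution Y plane followed by one half-height interleaved UV plane. A sketch of the arithmetic (note: the real buffers use hardware-mandated stride/offset alignment, so treat this as the unpadded baseline):

```python
def nv12_layout(width: int, height: int, alignment: int = 1):
    """NV12: full-res Y plane, then one half-res interleaved UV plane."""
    stride = -(-width // alignment) * alignment  # round stride up to alignment
    y_size = stride * height
    uv_size = stride * (height // 2)             # same stride, half the rows
    return {"stride": stride, "uv_offset": y_size, "size": y_size + uv_size}

# Road/wide camera resolution (OX03C10); real buffers may use a larger
# alignment than this unpadded calculation.
print(nv12_layout(1928, 1208))
```

Total size works out to 1.5 bytes per pixel, which is why NV12 (YUV 4:2:0) is half the size of an RGB frame at the same resolution.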
Buffer flow:
- SpectraMaster initializes /dev/video0, sync, ISP, ICP
- SpectraCamera per camera: sensor init, IFE config, BPS config (driver only), link devices
- CameraBuf allocates raw (optional) and YUV buffers
- VisionIpcServer creates shared-memory buffers for consumers (modeld, etc.)
- On V4L2 frame event: handle_camera_event → processFrame → sendFrameToVipc + publish FrameData
6. Auto Exposure (AE)¶
camerad implements software AE (no sensor AEC). Goal: keep a target region at ~12.5% grey (median luminance).
Algorithm (camera_qcom2.cc):
- Measure: calculate_exposure_value() bins luminance in an AE rectangle and returns the median grey fraction
- Target: target_grey_fraction ≈ 0.125, scaled by scene brightness (darker → lower target)
- Control loop: PI-like update with ~3-frame latency (sensor register buffering)
- Optimizer: brute-force over (exposure_time, analog_gain) to minimize getExposureScore (EV error)
- DC gain: high conversion gain for low light, with hysteresis to avoid flicker
AE rectangles (per camera, in pixel coords):
- Wide: {96, 400, 1734, 524} — lower part of frame
- Road: {96, 160, 1734, 986} — most of frame
- Driver: {96, 242, 1736, 906} — face region
Exposure params: exposure_time (µs), analog_gain, dc_gain_enabled. Sent to sensor via I2C.
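The measure-then-search loop above can be sketched end to end. This is a simplified stand-in for calculate_exposure_value() and the brute-force search in camera_qcom2.cc: the gain/exposure tables are made up, the score is a plain absolute EV error, and the EV rescaling rule is an assumption for illustration:

```python
import statistics

def measure_grey_fraction(y_plane, ae_rect):
    """Median luminance of the AE rectangle as a fraction of full scale."""
    x, y, w, h = ae_rect
    pixels = [p for row in y_plane[y:y + h] for p in row[x:x + w]]
    return statistics.median(pixels) / 256.0

def pick_exposure(desired_ev, exposure_times, gains):
    """Brute-force (exposure_time, gain) minimizing |EV error|."""
    best, best_err = None, float("inf")
    for t in exposure_times:
        for g in gains:
            err = abs(t * g - desired_ev)  # stand-in for getExposureScore
            if err < best_err:
                best, best_err = (t, g), err
    return best

frame = [[40] * 8 for _ in range(8)]                # toy 8x8 mid-dark "frame"
grey = measure_grey_fraction(frame, (0, 0, 8, 8))   # 40/256 = 0.15625
target = 0.125
desired_ev = 5000 * (target / grey)                 # rescale current EV toward target
print(pick_exposure(desired_ev, [1000, 2000, 4000], [1.0, 2.0, 4.0]))
```

The real loop additionally smooths updates (PI-like control), accounts for the ~3-frame register latency, and applies DC-gain hysteresis.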
7. VisionIpc¶
VisionIpc = shared-memory IPC for video frames. Avoids copies; modeld reads directly from shared buffers.
Server (camerad):
VisionIpcServer v("camerad");
v.create_buffers_with_sizes(stream_type, VIPC_BUFFER_COUNT, width, height, yuv_size, stride, uv_offset);
// ...
vipc_server->send(cur_yuv_buf, &extra); // on each frame
Client (modeld):
from msgq.visionipc import VisionIpcClient, VisionStreamType
vipc = VisionIpcClient("camerad", 0, VisionStreamType.VISION_STREAM_ROAD)
# vipc.recv() → frame buffer
Stream types: VISION_STREAM_WIDE_ROAD, VISION_STREAM_ROAD, VISION_STREAM_DRIVER
Buffer count: 18 (VIPC_BUFFER_COUNT). Allows producer/consumer to run at different rates.
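The zero-copy idea behind VisionIpc can be demonstrated with Python's stdlib shared memory. This is a conceptual analogy only (the buffer name is made up; msgq's actual implementation and API differ):

```python
from multiprocessing import shared_memory

# "Server" (camerad's role): allocate a named buffer once
shm = shared_memory.SharedMemory(create=True, size=16, name="toy_vipc_buf")
shm.buf[:5] = b"frame"          # the ISP would DMA the YUV data here

# "Client" (modeld's role): attach to the same memory by name -- no copy
client = shared_memory.SharedMemory(name="toy_vipc_buf")
assert bytes(client.buf[:5]) == b"frame"

client.close()
shm.close()
shm.unlink()                    # the server tears the buffer down
```

Both sides see the same physical pages, so a multi-megabyte frame is "sent" by publishing only its buffer index and metadata.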
8. Data Flow¶
V4L2 poll(POLLPRI) on video0
│
▼
VIDIOC_DQEVENT (frame done)
│
├─► handle_camera_event()
│ │
│ ├─► processFrame() → YUV ready
│ │
│ ├─► sendFrameToVipc() → VisionIpc send
│ │
│ ├─► calculate_exposure_value() → grey_frac
│ │
│ ├─► set_camera_exposure() → I2C exposure registers
│ │
│ └─► pm->send("roadCameraState", FrameData)
│
└─► (next frame)
FrameData (cereal): frameId, timestampSof, timestampEof, integLines, gain, measuredGreyFraction, targetGreyFraction, exposureValPercent, sensor, processingTime.
9. Source Map¶
Local path (in this roadmap): openpilot/system/camerad/ under the Autonomous Driving folder.
| Component | Path | Meaning / Responsibility (including file name meaning) |
|---|---|---|
| Main loop | system/camerad/main.cc | Main entry point: runs main(), pins the process to a CPU core, launches the camerad main thread. |
| Camera state, AE (qcom2: Qualcomm Gen2) | system/camerad/cameras/camera_qcom2.cc | Camera state logic and auto exposure (AE); "qcom2" refers to the Qualcomm 2nd-gen ISP platform used in Snapdragon SoCs. Manages the pipeline and event handling for these ISPs. |
| Camera buffer, VIPC send | system/camerad/cameras/camera_common.cc | Camera buffer ring management and VisionIpc frame sending; "common" means shared camera logic that is not platform- or sensor-specific. |
| ISP (IFE, BPS, CDM) | system/camerad/cameras/spectra.cc, cdm.cc | Qualcomm "Spectra" is the ISP subsystem: IFE = Image Front End (capture/initial processing), BPS = Bayer Processing Segment, CDM = Camera Data Mover (DMA/HW offload). These files initialize and operate the ISP. |
| Sensor drivers | system/camerad/sensors/ox03c10.cc, os04c10.cc | Sensor-specific drivers for the OmniVision OX03C10 and OS04C10 image sensors: initialization, I2C register access, sensor settings. |
| Camera config (hw: hardware) | system/camerad/cameras/hw.h | Camera hardware configuration header; defines types, capabilities, and constants for each camera. |
| Intrinsics (Python) | common/transformations/camera.py | Camera intrinsic/extrinsic parameters and transformations: projection and rectification logic for geometric camera-model math. |
| VisionIpc | msgq/visionipc/ | VisionIpc framework: shared-memory and message-passing (server/client) transport for frame data between processes. |
10. Code Walkthrough — Every Piece¶
Use the local openpilot clone at ../openpilot/system/camerad/ (relative to this Guide). Read in this order:
Entry Point¶
| File | What it does |
|---|---|
| main.cc | main() → pins process to CPU 6 → calls camerad_thread(). No RT priority (isolcpus used). |
Configuration¶
| File | What it does |
|---|---|
| cameras/hw.h | CameraConfig struct: camera_num, stream_type, focal_len, publish_name, output_type (IFE vs BPS). Defines WIDE_ROAD_CAMERA_CONFIG, ROAD_CAMERA_CONFIG, DRIVER_CAMERA_CONFIG, ALL_CAMERA_CONFIGS. |
Core Loop & AE (Qualcomm)¶
| File | What it does |
|---|---|
| cameras/camera_qcom2.cc | camerad_thread(): init SpectraMaster, create CameraState per config, start sensors, poll video0_fd for POLLPRI → VIDIOC_DQEVENT → handle_camera_event() → sendState(). CameraState: holds SpectraCamera, exposure params, AE rect. set_camera_exposure(): PI-like control, brute-force over (exp_t, gain_idx), DC gain hysteresis, I2C via sensors_i2c(). set_exposure_rect(): AE rectangles per camera (wide/road/driver). |
Buffer & VIPC¶
| File | What it does |
|---|---|
| cameras/camera_common.h | CameraBuf: vipc_server, stream_type, cur_buf_idx, cur_frame_data, cur_yuv_buf, camera_bufs_raw. FrameMetadata: frame_id, request_id, timestamps. Declares camerad_thread(), calculate_exposure_value(), get_raw_frame_image(), open_v4l_by_name_and_index(). |
| cameras/camera_common.cc | CameraBuf::init(): allocates raw buffers (if BPS), creates VIPC buffers. sendFrameToVipc(): gets YUV buf, sets VisionIpcBufExtra, calls vipc_server->send(). calculate_exposure_value(): bins Y luminance in ae_xywh, returns median/256. open_v4l_by_name_and_index(): scans /sys/class/video4linux/ for subdev name, opens /dev/v4l-subdevN. |
ISP (Spectra)¶
| File | What it does |
|---|---|
| cameras/spectra.h | SpectraMaster: video0_fd, cam_sync_fd, isp_fd, icp_fd, MemoryManager. SpectraCamera: sensor, IFE/BPS config, handle_camera_event(), camera_open(), sensors_init/start/i2c(), config_ife(), config_bps(), enqueue_frame(). SpectraBuf: mmap'd DMA buffer. |
| cameras/spectra.cc | SpectraMaster::init(): opens /dev/video0, sync, ISP, ICP. SpectraCamera: sensor probe, IFE/BPS setup, link devices, request queue. handle_camera_event(): validates event, calls processFrame(), re-enqueues. CDM (Camera Data Mover) setup. |
| cameras/cdm.cc, cdm.h | CDM programs for IFE/BPS: DMA descriptors, striping. |
| cameras/ife.h | IFE (Image Front End) register layouts, LUTs. |
| cameras/bps_blobs.h | BPS (Bayer Processing Segment) binary blobs / config. |
| cameras/nv12_info.h, nv12_info.py | NV12 layout (stride, uv_offset) for different resolutions. |
Sensors¶
| File | What it does |
|---|---|
| sensors/sensor.h | SensorInfo: frame dimensions, exposure limits, analog gains, DC gain, CCM, gamma LUT, linearization, vignetting. OX03C10, OS04C10 subclasses: getExposureRegisters(), getExposureScore(), getSlaveAddress(). |
| sensors/ox03c10.cc | OX03C10 init, exposure register writes, I2C slave addr. |
| sensors/ox03c10_registers.h | Register addresses for OX03C10. |
| sensors/os04c10.cc | OS04C10 init, ife_downscale_configure(), exposure. |
| sensors/os04c10_registers.h | Register addresses for OS04C10. |
Other¶
| File | What it does |
|---|---|
| snapshot.py | Utility to capture a frame (for debugging). |
| SConscript | Build rules for camerad. |
| test/ | test_camerad.py, debug.sh, stress_restart.sh, test_ae_gray.cc — tests and debug scripts. |
Reading Order (Suggested)¶
- main.cc — entry
- hw.h — config
- camera_common.h + camera_common.cc — buffers, VIPC, AE measurement
- sensor.h — sensor interface
- camera_qcom2.cc — main loop, AE control, camerad_thread()
- spectra.h + spectra.cc — ISP init, handle_camera_event, processFrame
- ox03c10.cc / os04c10.cc — sensor-specific init and exposure
11. Key Concepts¶
| Concept | Description |
|---|---|
| NV12 | YUV 4:2:0: Y plane full res, UV interleaved half res. Common for ISP output and ML. |
| VisionIpc | Zero-copy frame sharing via shared memory. Server creates buffers; clients subscribe by stream type. |
| FrameData | Cereal message with frame metadata. Used for sync (frame_id, timestamps) and AE debugging. |
| ISP | Image Signal Processor. Converts RAW Bayer → demosaic → color correct → gamma → YUV. |
| AE | Auto exposure. Software loop: measure luminance → compute desired EV → set sensor exposure/gain. |
| V4L2 | Video4Linux2 — Linux kernel API for video capture. Device nodes (/dev/video*), ioctls (QBUF/DQBUF), Request API. |
| Request Manager | V4L2 CAM_REQ_MGR: ties sensor capture to ISP pipeline. One request = one frame through the pipeline. |
| QBUF/DQBUF | V4L2 buffer exchange: QBUF enqueue empty buffer, DQBUF dequeue filled buffer. |
Further Reading¶
- Local openpilot camerad: ../openpilot/system/camerad/ (relative to this Guide)
- V4L2 API: kernel.org V4L2 documentation — device nodes, ioctls, buffer flow, Request API
- Trace the pipeline: start at camerad_thread() in camera_qcom2.cc, follow handle_camera_event → processFrame → sendFrameToVipc
- modeld consumption: selfdrive/modeld/modeld.py — VisionIpcClient for VISION_STREAM_ROAD (and wide if used)
- Calibration: common/transformations/camera.py, get_warp_matrix for model input warp