Test Platform
SDVA test platform: Runtime configuration, low-latency switching, remote usage.
The OSxCAR SDV Test Bench is a scalable, runtime-configurable test platform for Software-Defined Vehicles: more efficient validation, flexible architectures, and one test bench for diverse deployment scenarios. It combines modular hardware (t.RECS), physical-layer switching, remote access, and AI-assisted optimization.
Runtime Reconfiguration
Software-defined topologies – switch scenarios flexibly, no hardware modifications.
Modular Hardware
t.RECS with standard form factors (SMARC, COM-HPC) – x86, ARM, RISC-V, GPU/FPGA-ready.
Flexible Architectures
Legacy domains, zone controllers, central computers – all E/E topologies in one test bench.
HIL/SIL + Shadow Mode
Hardware- and Software-in-the-Loop, plus A/B testing in parallel with production.
- More Efficient Validation: Significantly reduce test cycles via software reconfiguration instead of hardware rebuilds
- Cost Potential: One test bench for various vehicle architectures instead of separate hardware duplicates
- Scalability: Platform grows with SDV requirements (L2+ to L3+ autonomy)
- AI Integration: Collect realistic latency data for GNN training
Software-Defined Vehicles require flexible test environments for different vehicle architectures (from legacy domains to central computers, L2+ to L3 autonomy). Traditional setups are hard-wired – hardware rebuilds are time-consuming. The OSxCAR test bench solves this through Physical-Layer Switching: Reconfigure topologies dynamically, no cable swapping.
- Hardware Duplicates: Each partner maintains its own infrastructure → costs, CO₂
- Slow: Rebuilds take a long time, delaying validation
- Data Silos: No central collection for AI training
- Software-defined: Fast reconfiguration, no hardware changes
- Agile: Switch scenarios quickly instead of long rebuilds
- AI-ready: Central data collection for GNN training
OSxCAR Integration: Remote access enables location-independent testing – validate Wasm modules and train AI models without target hardware on-site. A central test bench reduces hardware duplicates and carbon footprint.
The test bench combines Physical-Layer Switching (switching matrix) with RECS Microservers and heterogeneous compute nodes. The core principle: Switch signals physically rather than route virtually – minimal latency, realistic jitter characteristics.
- Switching Matrix: Physical-Layer Switching – switch signals directly, scalable from 8×8 to 64×64 ports, bus-agnostic (Ethernet, CAN, LIN)
- RECS Microservers: Thermally optimized t.RECS modules – expandable with GPU/FPGA for AI workloads
- Compute Nodes: x86 (high-performance), ARM (power-efficient), RISC-V (open-source ISA), FPGA/ASIC (custom accelerators)
- Metrics: Hardware timestamps, µs jitter analysis, Prometheus/Grafana telemetry, ELK logging
Architecture Support: The test bench supports decentralized control units (legacy), regional zone controllers, and highly integrated central computers (L3+ autonomy).
The switching matrix connects ECUs, sensors, and actuators at the physical layer – runtime-configurable topologies without hardware rebuilds. Latency-/jitter-sensitive paths are characterized (critical for ISO 26262).
The test bench supports all relevant automotive bus systems: from low-speed LIN via robust CAN networks to deterministic TSN. Each bus system can be flexibly switched via the switching matrix.

Ethernet / TSN
- Standard: IEEE 802.1 TSN
- Speed: 100M / 1G / 10G
- QoS: Time-Aware Shaper
- Use Case: Zone backbone, ADAS

CAN / CAN-FD
- Speed: 1 Mbit/s (CAN), 8 Mbit/s (FD)
- Robustness: Differential, fault-tolerant
- Payload: 8 Byte (CAN), 64 Byte (FD)
- Use Case: Powertrain, chassis

LIN
- Speed: Up to 20 kbit/s
- Topology: Single-master
- Cost: Very low
- Use Case: Comfort, sensors

Metrics
- Latency Measurement: Hardware timestamps
- Jitter Analysis: µs resolution
- Telemetry: Prometheus, Grafana
- Logging: ELK stack
TSN Integration: The test bench uses TSN for deterministic latency guarantees – critical for ADAS and vehicle dynamics control. Time-Aware Shaper (TAS) and Per-Stream Filtering and Policing (PSFP) are configured and validated on the bench.
CAN-FD Advantages: Higher data rate than standard CAN and larger payloads reduce bus load.
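The bus-load advantage can be made concrete with a back-of-envelope calculation: moving 64 bytes of payload takes eight classic CAN frames but a single CAN-FD frame whose data phase runs at 8 Mbit/s. A minimal sketch, using the bit rates from the table above; the overhead constants are approximations for illustration (bit stuffing is ignored) and not bench-measured values.

```python
# Rough bus-occupancy comparison: classic CAN vs. CAN-FD for 64 bytes of payload.
# Overhead constants are approximate and ignore bit stuffing.

CAN_OVERHEAD_BITS = 47      # SOF, 11-bit ID, control, CRC, ACK, EOF, IFS
FD_ARBITRATION_BITS = 30    # arbitration/control phase at 1 Mbit/s (approx.)
FD_DATA_OVERHEAD_BITS = 28  # CRC etc. in the 8 Mbit/s data phase (approx.)

def classic_can_time_us(payload_bytes: int, bitrate: float = 1e6) -> float:
    """Bus time to move the payload as 8-byte classic CAN frames."""
    frames = -(-payload_bytes // 8)                 # ceil division
    bits = frames * (CAN_OVERHEAD_BITS + 8 * 8)
    return bits / bitrate * 1e6

def can_fd_time_us(payload_bytes: int) -> float:
    """Bus time for one CAN-FD frame: slow arbitration phase + fast data phase."""
    data_bits = FD_DATA_OVERHEAD_BITS + payload_bytes * 8
    return (FD_ARBITRATION_BITS / 1e6 + data_bits / 8e6) * 1e6

classic_us = classic_can_time_us(64)   # 8 frames on the bus
fd_us = can_fd_time_us(64)             # 1 frame on the bus
```

Even with generous overhead estimates, the single FD frame occupies the bus for a small fraction of the time the eight classic frames need.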
Software-defined topologies are the core of the SDVA test bench: Scenarios are defined via configuration files and loaded onto the switching matrix at runtime. No hardware modifications, no cable swapping – just software updates.
Use Cases for Reconfiguration:
- L2+ → L3 Migration: Add central computer, reconfigure zone controllers
- Gateway Tests: Test CAN↔Ethernet gateway in different topologies
- Failover Scenarios: Simulate ECU failure, activate redundancy paths
- TSN Configurations: Test different QoS profiles (VLAN, priorities)
Example Workflow: (1) Define topology (nodes, links, bus types), (2) Generate switching matrix config (automatic), (3) Deploy config to bench (REST API), (4) Trigger switchover, (5) Validation (latency measurement, connectivity check).
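Step (2) of the workflow, generating the switching-matrix configuration from a topology description, can be sketched as a pure function. The field names, port mapping, and config schema below are illustrative assumptions, not the bench's actual format; the content hash mirrors the config-hash logging mentioned for the audit trail.

```python
# Sketch: derive a switching-matrix config (crosspoint list) from a
# topology description. Schema and field names are assumptions.
import hashlib
import json

def generate_matrix_config(topology: dict) -> dict:
    """Map each logical link to a physical crosspoint on the matrix."""
    port_of = {node["id"]: node["port"] for node in topology["nodes"]}
    crosspoints = [
        {
            "in": port_of[link["src"]],
            "out": port_of[link["dst"]],
            "bus": link["bus"],          # e.g. "can-fd", "ethernet-tsn"
        }
        for link in topology["links"]
    ]
    config = {"matrix_size": topology["matrix_size"], "crosspoints": crosspoints}
    # Hash the canonical form so switchovers are auditable and reproducible.
    config["hash"] = hashlib.sha256(
        json.dumps(config, sort_keys=True).encode()
    ).hexdigest()[:16]
    return config

topology = {
    "matrix_size": "8x8",
    "nodes": [
        {"id": "zone-fl", "port": 0},
        {"id": "central-ecu", "port": 1},
    ],
    "links": [{"src": "zone-fl", "dst": "central-ecu", "bus": "ethernet-tsn"}],
}
config = generate_matrix_config(topology)
```

In the workflow, the resulting JSON would then be POSTed to the bench's REST API (step 3) before triggering the switchover.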
The bench logs all switchovers (timestamp, config hash, user) – important for TISAX audits and reproducibility. Configs are versioned (Git) and signed.
The SDVA test bench is cloud-accessible: Partners reserve time slots, deploy software remotely, and access measurement data – without being on-site. Multi-tenant architecture guarantees data isolation according to TISAX.

🎯 Management
- Time Slots: Reservation system (calendar-based)
- Multi-Tenant: Parallel usage, isolated data spaces
- Deployment: Software upload via REST API
- Monitoring: Live telemetry (Grafana dashboards)

🔒 Security (planned, currently optional)
- TISAX-compliant
- Data-isolated
- Encrypted
- Audit trail
Reservation System: Partners book time slots (e.g., 2 hours) via web interface. During the slot, they have exclusive access to configurable resources (switching matrix, RECS nodes). Shared resources (e.g., central logging infrastructure) remain multi-tenant.
Software Deployment: Wasm modules are uploaded via REST API, signed, and deployed to RECS nodes. Fast rollback possible. Native binaries (x86/ARM) also supported, but Wasm preferred (portability, security).
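The sign-then-upload path can be sketched in a few lines. The endpoint path, form fields, and HMAC-based signing below are illustrative assumptions; the actual bench may use a different signature scheme and API surface.

```python
# Sketch: sign a Wasm module and upload it to a RECS node via the
# bench's REST API. Endpoint and field names are placeholders.
import hashlib
import hmac

def sign_module(wasm_bytes: bytes, key: bytes) -> str:
    """Hex HMAC-SHA256 over the module bytes (illustrative signing scheme)."""
    return hmac.new(key, wasm_bytes, hashlib.sha256).hexdigest()

def deploy(session, bench_url: str, node: str, wasm_bytes: bytes, key: bytes):
    """Upload the signed module; `session` is an HTTP client, e.g. requests.Session()."""
    return session.post(
        f"{bench_url}/api/v1/deployments",          # assumed endpoint
        files={"module": wasm_bytes},
        data={"node": node, "signature": sign_module(wasm_bytes, key)},
    )
```

Because the node verifies the signature before activation, a failed check leaves the previously deployed module running, which is what makes the fast rollback safe.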
Measurement paths are essential for bench validation and AI training: Latency, jitter, throughput are captured at µs level. Hardware timestamps (FPGA-based) eliminate software overhead.
Telemetry Stack:
- Prometheus: Metrics collection (CPU, memory, bus load, latency)
- Grafana: Live dashboards (time series, heatmaps)
- ELK Stack: Log aggregation (Elasticsearch, Logstash, Kibana)
- Jaeger: Distributed tracing (for Wasm modules)
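The µs-level jitter figure the stack reports can be illustrated with a minimal calculation over hardware timestamps: jitter as the standard deviation of inter-arrival deltas. A stdlib-only sketch; the nanosecond timestamp format and the per-path framing are assumptions.

```python
# Minimal jitter analysis over hardware timestamps (nanoseconds) — the
# kind of per-path statistic exported to Prometheus/Grafana.
import statistics

def jitter_us(timestamps_ns: list) -> float:
    """Stdev of deltas between consecutive frame timestamps, in µs."""
    deltas = [b - a for a, b in zip(timestamps_ns, timestamps_ns[1:])]
    return statistics.pstdev(deltas) / 1000

# Frames with a 1 ms nominal period and sub-µs arrival noise:
ts = [0, 1_000_000, 2_000_500, 3_000_200]
path_jitter = jitter_us(ts)
```

A perfectly periodic stream yields zero jitter; the FPGA-based timestamps matter because software timestamping alone would add noise larger than the quantity being measured.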
Integration with AI: The bench collects realistic data for GNN training. Topology graphs (nodes=ECUs, edges=bus links) + latency measurements are exported (CSV, Parquet). GNN models learn latency predictions for different E/E architectures and optimize software placement. Validation in shadow mode.
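The export step can be sketched as an edge-list CSV where nodes are index-encoded ECUs and each edge carries its bus type and measured latency as the training label. Column names and example values are illustrative, not the bench's actual schema.

```python
# Sketch: export a topology graph (nodes=ECUs, edges=bus links) plus
# latency measurements as an edge-list CSV for GNN training.
import csv
import io

nodes = ["zone-fl", "zone-fr", "central-ecu"]
edges = [  # (src, dst, bus type, measured latency in µs)
    ("zone-fl", "central-ecu", "ethernet-tsn", 42.1),
    ("zone-fr", "central-ecu", "ethernet-tsn", 43.7),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["src", "dst", "bus", "latency_us"])
index_of = {name: i for i, name in enumerate(nodes)}  # integer node IDs
for src, dst, bus, latency in edges:
    writer.writerow([index_of[src], index_of[dst], bus, latency])
```

The same structure maps directly onto common GNN input formats (an edge index plus edge features and labels), with Parquet as a drop-in replacement for larger runs.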
Extended Test Functions:
- HIL (Hardware-in-the-Loop): Real ECUs with simulated sensors/actuators – real signals, controlled environment
- SIL (Software-in-the-Loop): Purely software-based validation before hardware availability
- A/B Shadow Mode Testing: Run the new software version in parallel with the heuristic – log its suggestions, don't apply them. Validation without production risk
- Test Framework: Integrated result collection, visualization (Grafana), audit trail for ISO 26262
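The shadow-mode pattern above reduces to a small wrapper: both versions see the same input, only the production heuristic's decision is applied, and divergences are logged for later validation. A minimal sketch; function and logger names are placeholders.

```python
# Shadow-mode sketch: the candidate runs alongside the production
# heuristic, but only the heuristic's decision is ever applied.
import logging

log = logging.getLogger("shadow")

def decide(state, heuristic, candidate):
    """Apply the heuristic's decision; log the candidate's when it diverges."""
    active = heuristic(state)     # this result is applied
    shadow = candidate(state)     # this result is only logged
    if shadow != active:
        log.info("divergence: state=%r active=%r shadow=%r",
                 state, active, shadow)
    return active
```

Comparing the divergence log against ground truth afterwards is what turns shadow mode into a validation signal without touching production behavior.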
Wasm Integration: Wasm modules run identically on laptop, bench, and target hardware – reproducible tests. Deterministic environment (AoT) for latency characterization. Trace data shows interop overhead.


