PCIe® and Automotive Ethernet: Working Shoulder to Shoulder to Deliver Real-Time Connectivity in Automobiles
Baoxingwei Technology | 2023-07-18
What Does the Future Hold for ADAS and Vehicle Connectivity?
Technological advances in Advanced Driver-Assistance Systems (ADAS) and vehicle connectivity have required automotive Original Equipment Manufacturers (OEMs) to revisit the data architecture used in their vehicles, with “zoning” emerging as the preferred approach. Enabling distributed processing in a zonal architecture that connects to a central computer requires a high-speed data backbone with high bandwidth and low latency over long cables. These are essential requirements for safety-critical applications that rely on real-time data. In addition, the amount of computing performed onboard automobiles has grown considerably as Artificial Intelligence (AI) and Machine Learning (ML) have become increasingly important features of the automotive ecosystem. This post discusses how Peripheral Component Interconnect Express (PCIe®) and automotive Ethernet combine to create the high-speed backbone required to address these demands.
Automotive Ethernet
The development of multi-speed xBASE-T1 standards for full-duplex transmission over a single twisted pair has made automotive Ethernet an ideal choice for a wide range of automobile applications, from low-speed sensors up to the inter-ECU interface, simplifying In-Vehicle Networking (IVN) within the zonal architecture. The IEEE® 802.3ch standard (released in 2020) further increased the data bandwidth to 10 Gbps over cables up to 15 m and supports Time-Sensitive Networking (TSN) and Audio-Video Bridging (AVB), essential requirements for camera connectivity. This has enabled OEMs to reduce the size and weight of wiring harnesses, leading to more fuel-efficient vehicles. Beyond these advances at the physical layer (Layer 1), automotive Ethernet continues to support mature protocols at Layers 2 and 3, enabling point-to-point transmission checks to be performed up to the application layer. Other advantages of this networking technology include its native support in Operating Systems (OSs) and the fact that the length of an Ethernet frame can be varied to match the size of the data payload, as the sketch below illustrates.
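To make the variable frame length concrete, here is a minimal C sketch (not from the article) that assembles an Ethernet II frame around payloads of different sizes. The MAC addresses and the experimental EtherType 0x88B5 are placeholders, and short payloads are padded up to the 60-byte minimum that precedes the 4-byte frame check sequence.

```c
/* Minimal sketch: an Ethernet II frame whose length tracks the payload size.
 * MAC addresses and the experimental EtherType below are placeholders. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define ETH_HDR_LEN   14   /* dst MAC (6) + src MAC (6) + EtherType (2) */
#define ETH_MIN_FRAME 60   /* minimum frame length, excluding the 4-byte FCS */

/* Writes header + payload into 'frame', padding up to the Ethernet minimum.
 * Returns the frame length (the MAC hardware appends the FCS). */
static size_t build_eth_frame(uint8_t *frame,
                              const uint8_t dst[6], const uint8_t src[6],
                              uint16_t ethertype,
                              const uint8_t *payload, size_t payload_len)
{
    memcpy(frame, dst, 6);
    memcpy(frame + 6, src, 6);
    frame[12] = (uint8_t)(ethertype >> 8);
    frame[13] = (uint8_t)(ethertype & 0xFF);
    memcpy(frame + ETH_HDR_LEN, payload, payload_len);

    size_t len = ETH_HDR_LEN + payload_len;
    if (len < ETH_MIN_FRAME) {               /* pad very short payloads */
        memset(frame + len, 0, ETH_MIN_FRAME - len);
        len = ETH_MIN_FRAME;
    }
    return len;                              /* otherwise grows with the payload */
}

int main(void)
{
    uint8_t frame[1518];
    const uint8_t dst[6] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x01};
    const uint8_t src[6] = {0x02, 0x00, 0x00, 0x00, 0x00, 0x02};
    uint8_t small[8] = {0}, large[1000] = {0};

    printf("8-byte payload    -> %zu-byte frame\n",
           build_eth_frame(frame, dst, src, 0x88B5, small, sizeof small));
    printf("1000-byte payload -> %zu-byte frame\n",
           build_eth_frame(frame, dst, src, 0x88B5, large, sizeof large));
    return 0;
}
```

Running it shows an 8-byte payload producing a 60-byte frame while a 1000-byte payload yields a 1014-byte frame, i.e. the frame grows only as much as the payload demands.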
PCIe
In data-center environments, PCIe excels as a connection between high-performance processing elements and add-in cards—for example, Network Interface Cards (NICs). In the automobile environment, PCIe is ideally suited to support next-generation High-Performance Compute (HPC) architectures. Major performance benefits include:
Scalable bandwidth: PCIe bandwidth has doubled with each new generation, reaching 64 Giga-transfers per second (GT/s) for PCIe 6.0, allowing designers to implement an interface that continuously scales to meet the ever-growing demand for bandwidth. PCIe also offers flexible link widths, where parallel lanes allow bandwidth to be expanded (up to a factor of 16) to a maximum bidirectional data rate of 256 GB/s; a rough calculation is sketched after this list.
Ultra-low latency and reliability: PCIe transfers are handled in hardware and do not rely on software processing by higher-layer protocol stacks; hence, data can be transported reliably with ultra-low latency (on the order of tens of nanoseconds). This allows Central Processing Units (CPUs) to interface with dedicated AI and ML engines in real time.
Direct Memory Access (DMA): PCIe supports DMA natively, without requiring data to be packetized and de-packetized by a networking protocol stack. This helps to optimize latency for processors accessing vast shared data storage resources, such as Solid-State Drives (SSDs). Whereas other interface technologies incur an overhead of CPU cycles to access, copy and buffer memory data from another location, processors can use PCIe to access shared memory resources efficiently, as if they were available locally.
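As a back-of-the-envelope illustration of the bandwidth scaling described in the first item above, the following sketch multiplies the headline per-lane transfer rates by the selectable link widths and converts to GB/s, ignoring encoding and protocol overhead—which is how the commonly quoted peak figures, such as ~256 GB/s bidirectional for a PCIe 6.0 x16 link, are derived.

```c
/* Back-of-the-envelope PCIe link bandwidth: headline GT/s per lane times
 * lane count, converted to GB/s, with encoding/protocol overhead ignored. */
#include <stdio.h>

int main(void)
{
    struct { const char *gen; double gts_per_lane; } gens[] = {
        {"PCIe 3.0",  8.0},
        {"PCIe 4.0", 16.0},
        {"PCIe 5.0", 32.0},
        {"PCIe 6.0", 64.0},
    };
    const int widths[] = {1, 4, 8, 16};      /* selectable link widths */

    for (size_t g = 0; g < sizeof gens / sizeof gens[0]; g++) {
        for (size_t w = 0; w < sizeof widths / sizeof widths[0]; w++) {
            /* At these rates 1 GT/s carries ~1 Gbit/s per lane; /8 -> GB/s */
            double gbs_per_dir = gens[g].gts_per_lane * widths[w] / 8.0;
            printf("%s x%-2d: ~%5.0f GB/s per direction, ~%5.0f GB/s bidirectional\n",
                   gens[g].gen, widths[w], gbs_per_dir, 2.0 * gbs_per_dir);
        }
    }
    return 0;
}
```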
PCIe is standards-based and has been widely adopted by multiple vendors for CPUs, Graphics Processing Units (GPUs) and hardware accelerators. In addition, high-performance components of the data-processing ecosystem provide native support for PCIe as their primary interconnect. PCIe-based switch fabrics with Non-Transparent Bridging (NTB) topologies enable the highest-performance data sharing. As automotive OEMs move towards distributed processing with added redundancy in the high-speed data backbone, deploying native PCIe beyond the board level may become an increasingly attractive alternative. The traditional approach of translating PCIe to Ethernet (via a NIC for cable transport) and then back again at the destination control unit means the benefits inherent to PCIe are lost. Using native PCIe to connect high-performance or safety-critical SoCs could allow OEMs to harness the benefits of ultra-low latency, reliable data delivery and low-overhead DMA.
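As a rough illustration of what “accessing shared memory as if it were local” can look like in practice, the sketch below uses the standard Linux sysfs interface to map a PCIe device’s first Base Address Register (BAR) region into a process. The device address 0000:01:00.0 and the register offset are hypothetical placeholders, and the program assumes root privileges and a real device behind that path; it is a sketch of the mechanism, not of any particular automotive stack.

```c
/* Illustrative sketch only: map a PCIe BAR exposed through Linux sysfs and
 * access it with plain loads/stores. The device path and register offset
 * are placeholders, not values from the article. Requires root. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const char *bar = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    const size_t map_len = 4096;

    int fd = open(bar, O_RDWR | O_SYNC);
    if (fd < 0) { perror("open"); return 1; }

    /* Map the first page of BAR0 into this process's address space. */
    volatile uint32_t *regs = mmap(NULL, map_len, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, 0);
    if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* From here on, device memory is read like local memory:
     * no socket, no protocol stack, no intermediate copies. */
    uint32_t value = regs[0];        /* read a (placeholder) register */
    printf("BAR0[0] = 0x%08x\n", value);

    munmap((void *)regs, map_len);
    close(fd);
    return 0;
}
```

Once the region is mapped, device memory is touched with ordinary loads and stores rather than through a networking API, which is the property that makes native PCIe attractive for low-overhead data sharing across the backbone.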