CAN stands for Controller Area Network. It was developed by Robert Bosch GmbH in 1986 as a flexible, reliable, and robust solution for communication within automotive vehicles. It is a serial, half-duplex, asynchronous communication protocol that follows a decentralized communication infrastructure. The benefit of a decentralized protocol is that no central entity controls the bus, which makes the nodes hot-pluggable, i.e., we can add or remove a node from the bus without disrupting communication between the other nodes.
It features high data transmission speeds (up to 1 Mbps) with excellent error handling, automatic retransmission of faulty messages, and high tolerance to electrical noise. It standardizes the physical and data link layers, the two lowest layers of the OSI model. Let’s first discuss the physical layer of the CAN protocol.
The physical layer is concerned with the transmission of a raw bit stream over a physical medium. There are multiple specifications for the physical layer of the CAN bus, some of which are mentioned below:
This tutorial will focus on the High-Speed CAN (ISO 11898-2). It uses two wires for communication: CANH (CAN High) and CANL (CAN Low). The wires are a twisted pair terminated by 120 Ω resistors at both ends, which prevent signal reflections on the bus, and the pair has no ground shielding. Generally, in automotive vehicles, the nodes on the bus are powered by the same power source and share the same ground potential. Otherwise, a ground wire is used to establish a common ground for the nodes on the bus.
The figure gives a brief idea of the hardware interfacing for the CAN bus, which may look like that of any other protocol. However, when it comes to its electrical properties, CAN uses differential signals instead of single-ended digital signals for data transmission. To understand why, we first need to discuss the challenges single-ended (digital) signals face due to electromagnetic interference.
Consider a wire that is carrying digital signals and is disturbed by a noise source, which causes voltage spikes on the wire. The receiver receives data by monitoring the voltage levels on the wire and will consider voltage spikes as ‘1’, corrupting the bitstream.
The figure above demonstrates how the ideal bit stream should have been ‘00011000’, but due to the voltage spikes the receiver read ‘01011011’, receiving corrupted data. In practice, such voltage spikes can be filtered out, but the filtering comes at a price. Instead, the CAN protocol deals with these issues almost for free by using differential signals.
In differential mode signaling, data is transmitted over two complementary signals, and the information is extracted from the potential difference between them. In this case, the signals are CANH and CANL, and the potential difference between them is given by:

V_diff = V_CANH − V_CANL

Data on the 3.3V CAN bus is transmitted using two states derived from this potential difference between the CANH and CANL signals:

For the recessive state (logic ‘1’), both lines sit at roughly the same potential, so V_diff ≈ 0 V.

For the dominant state (logic ‘0’), CANH is driven high and CANL is driven low, giving a nominal V_diff of roughly 2 V.
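To make the decoding rule concrete, here is a minimal C sketch of how a receiver could classify the bus state from the differential voltage. The 0.9 V threshold and the example line voltages are nominal ISO 11898-2 figures used purely for illustration; real transceivers perform this comparison in analog hardware.

```c
#include <stdio.h>

/* Classify the bus state from the differential voltage
 * V_diff = V_CANH - V_CANL. The ~0.9 V threshold is the nominal
 * ISO 11898-2 receiver threshold for detecting a dominant bit. */
typedef enum { DOMINANT = 0, RECESSIVE = 1 } bus_state_t;

static bus_state_t decode_bus_state(double v_canh, double v_canl)
{
    double v_diff = v_canh - v_canl;
    return (v_diff > 0.9) ? DOMINANT : RECESSIVE;
}

int main(void)
{
    /* Recessive: both lines near the same potential, V_diff ~ 0 V. */
    printf("%s\n", decode_bus_state(2.3, 2.3) == DOMINANT ? "dominant" : "recessive");
    /* Dominant: CANH driven high, CANL driven low, V_diff ~ 2 V. */
    printf("%s\n", decode_bus_state(3.0, 1.0) == DOMINANT ? "dominant" : "recessive");
    return 0;
}
```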
Quite a fancy technique to transfer data, isn’t it? But the explanation above still doesn’t show where its resilience against noise comes from. To understand that, consider a noise source that causes voltage spikes on our CAN bus. Because the wires are a twisted pair, the spikes coupled onto both wires will be of almost equal magnitude. As we already know, differential signaling only considers the potential difference between the two signals, and that difference with equal noise on both lines is almost the same as with no noise at all. Hence, by considering the potential difference between the two signals, the CAN bus ensures reliable data transmission even in environments prone to electrical noise.
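As a tiny worked example with made-up voltage numbers, the following C snippet shows that a spike coupled equally onto both lines drops out of the difference:

```c
#include <stdio.h>

int main(void)
{
    /* Nominal dominant-state line voltages (illustrative values only). */
    double v_canh = 3.0, v_canl = 1.0;

    /* A common-mode spike couples (almost) equally into both wires
     * of the twisted pair. */
    double spike = 1.7;

    double diff_clean = v_canh - v_canl;
    double diff_noisy = (v_canh + spike) - (v_canl + spike);

    /* The difference is unchanged: the common-mode spike cancels out. */
    printf("clean: %.2f V, with spike: %.2f V\n", diff_clean, diff_noisy);
    return 0;
}
```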
What about the interference the twisted pair itself causes to devices in its surroundings, given that there is no ground shielding? The electromagnetic fields radiated by the CANH and CANL wires while transitioning between the dominant and recessive states are equal and opposite in magnitude, and therefore cancel each other out.
I hope this has helped you develop a good understanding of how data is transmitted on the CAN bus. Unfortunately, many microcontrollers can only generate digital signals, so CAN transceivers are connected to the microcontrollers to generate differential signals. Together, the CAN controller and transceiver form a CAN node that connects to the bus.
Since the CAN bus is an asynchronous, half-duplex communication protocol, the CAN controller has TXD and RXD pins used for the transmission and reception of data, respectively. These pins are connected to the CAN transceiver, which converts the digital signals to differential signals for transmitting messages on the bus and does the reverse for reception. Figure 2.5 below illustrates the functional block of a CAN transceiver.
The transceiver converts the differential signals back to single-ended signals so that the RXD pin can monitor the bus for the latest messages broadcast by active nodes. The CAN bus therefore provides a seemingly complex but extremely reliable and cost-effective solution for data transmission at the physical layer.
Before we understand how the CAN protocol functions at the data link layer, let’s briefly review what the OSI model expects from this layer. It is responsible for facilitating reliable transmission and reception of data between devices over the physical layer. It includes features such as framing, addressing, and error detection and handling, which in CAN are implemented above the physical layer by the CAN controller.
The CAN bus is a message-broadcast type of bus, and its pre-defined data frame is called a CAN message. Broadcast messages are heard by all the nodes on the bus. Keep in mind that it is an asynchronous protocol, so all the nodes have to operate at the same baud rate to communicate with each other.
Figure 3.1 illustrates the CAN data frame, which may seem a little overwhelming at first, but this tutorial walks through every field of the data frame individually.
To synchronize the data frame in an asynchronous protocol, a bit or flag is necessary to indicate the start of the frame. When idle, the state of the bus is recessive (‘1’ in digital logic). To start a transmission, the node broadcasting the message changes the state of the bus from recessive to dominant (‘0’ in digital logic); this transition acts as the SOF and alerts the other nodes on the bus to a new message.
Identifiers in the CAN data frame allow a broadcast message to reach the target node, as each CAN node on the bus is assigned a unique identifier. The node whose identifier matches the value in the arbitration field accepts the message, while the other nodes ignore it.
The CAN protocol supports two different frame formats, which are distinguished by the number of bits in the arbitration field. The base frame format uses an 11-bit identifier, which supports up to 2^11 unique identifiers, while the extended frame format uses a 29-bit identifier, which supports up to 2^29. The important thing to note here is that the extended frame comes with some modifications compared to the base frame format, such as the extended 18-bit identifier and the SRR and r1 bits.
The 18-bit identifier is the second part of the 29-bit identifier, the first 11 bits of which have already been transmitted. The substitute remote request (SRR) bit is a placeholder transmitted in the position where the RTR bit sits in the base format, the r1 bit follows the RTR bit, and the r0 bit is an additional reserved bit transmitted before the DLC bits.
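To illustrate how the 29-bit identifier is split across the frame, here is a small C sketch that separates a 29-bit extended identifier into the 11-bit base part and the 18-bit extension. The example identifier value and the variable names are arbitrary, not taken from any particular CAN driver:

```c
#include <stdio.h>
#include <stdint.h>

/* Split a 29-bit extended identifier into the 11-bit base part
 * (transmitted first) and the 18-bit extension (transmitted after
 * the SRR and IDE bits). The example value is arbitrary. */
int main(void)
{
    uint32_t ext_id = 0x18DAF110;                 /* 29-bit identifier      */

    uint16_t base_11bit = (ext_id >> 18) & 0x7FF; /* identifier bits 28..18 */
    uint32_t ext_18bit  = ext_id & 0x3FFFF;       /* identifier bits 17..0  */

    printf("29-bit ID   : 0x%08X\n", (unsigned)ext_id);
    printf("11-bit base : 0x%03X\n", (unsigned)base_11bit);
    printf("18-bit ext. : 0x%05X\n", (unsigned)ext_18bit);
    return 0;
}
```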
Besides addressing the nodes, identifiers also decide the priority of a message on the bus: the message with the lowest identifier value has the highest priority. An important thing to remember here is that CAN transceivers have open-collector-style outputs and can drive the bus in only one direction (to the dominant state), which allows active devices to drive the shared line without interference from inactive devices.
To understand how messages are prioritized on the bus, consider a scenario where two nodes access the bus to broadcast at the same time: node A has a message with the identifier ‘00000000001’, and node B has a message with the identifier ‘00000000011’. Both nodes initiate the communication by transitioning the bus state from recessive to dominant, i.e., with the SOF of the data frame.
From the figure above, you can see that while the identifier field is being transmitted, a point is reached where node B expects the state of the bus to be recessive, but node A keeps it dominant. As mentioned earlier, CAN transceivers can only actively drive the bus in one direction, i.e., to the dominant state. Each node monitors the bus to check whether it matches the state it is transmitting. In doing so, node B notices that a message of higher priority is in transmission and stops its own transmission, while node A sees the bus exactly as it expected and therefore wins the arbitration.
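The following toy C model mimics this bitwise arbitration: the bus is treated as a wired-AND of what the two nodes transmit, and a node backs off as soon as it reads back a dominant bit where it sent a recessive one. The node identifiers are the ones from the example above; everything else is a simplification for illustration:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model of bitwise arbitration on an 11-bit identifier.
 * The bus behaves as a wired-AND: a dominant bit (0) overrides a
 * recessive bit (1). A node that sends recessive but reads back
 * dominant has lost arbitration and backs off. */
static char arbitrate(uint16_t id_a, uint16_t id_b)
{
    for (int bit = 10; bit >= 0; bit--) {
        int a   = (id_a >> bit) & 1;   /* bit node A puts on the bus */
        int b   = (id_b >> bit) & 1;   /* bit node B puts on the bus */
        int bus = a & b;               /* resulting wired-AND state  */

        if (a != bus) return 'B';      /* A sent 1 but read 0: A backs off */
        if (b != bus) return 'A';      /* B sent 1 but read 0: B backs off */
    }
    return '?';                        /* identical IDs are not allowed    */
}

int main(void)
{
    uint16_t node_a = 0x001;           /* 00000000001 */
    uint16_t node_b = 0x003;           /* 00000000011 */

    printf("Node %c wins arbitration\n", arbitrate(node_a, node_b));
    return 0;
}
```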
Hence, identifiers make the CAN protocol scalable and assign priority to the nodes on the bus without the requirement of a central entity or bus master.
Normally, the nodes broadcast their messages automatically without the target node having to request them, but the target node can also request a message using a remote frame. A remote frame is generated by transmitting a recessive RTR bit after the identifier, instead of the dominant RTR bit used in a data frame.
If a data frame and a remote frame with the same identifier are transmitted simultaneously, the data frame wins arbitration due to the dominant RTR bit following the identifier. The node transmitting the remote frame sees a dominant state on the bus where it expects a recessive one, and it stops its transmission.
The control field in the data frame contains bits that hold important information about the data in the frame. In the base frame format, it consists of the IDE bit (which indicates whether the frame uses the base or extended identifier), a reserved bit (r0), and the 4-bit data length code (DLC) that specifies how many data bytes follow.
The data field contains the actual information to be transmitted to the target node. Its size varies from 0 to 64 bits (0 to 8 bytes), as defined by the DLC bits transmitted earlier. The CAN controller adjusts its receive buffer to accommodate the incoming data according to the DLC.
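As a rough software-level picture of how the DLC bounds the payload, here is a minimal C sketch of a CAN data frame structure. The struct layout and field names are illustrative only; real CAN controllers and driver APIs define their own representations:

```c
#include <stdio.h>
#include <stdint.h>

/* A simplified software representation of a classic CAN data frame.
 * The struct layout and names are illustrative, not a controller API. */
struct can_frame_sketch {
    uint32_t id;      /* 11-bit or 29-bit identifier                      */
    uint8_t  dlc;     /* data length code: number of payload bytes, 0..8  */
    uint8_t  data[8]; /* payload; only the first dlc bytes are valid      */
};

int main(void)
{
    struct can_frame_sketch f = { .id = 0x123, .dlc = 3,
                                  .data = { 0xDE, 0xAD, 0xBE } };

    /* A receiver only reads as many bytes as the DLC announces. */
    printf("ID 0x%03X, %u byte(s):", (unsigned)f.id, (unsigned)f.dlc);
    for (uint8_t i = 0; i < f.dlc; i++)
        printf(" %02X", f.data[i]);
    printf("\n");
    return 0;
}
```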
CRC stands for cyclic redundancy check; it is a mechanism used to detect errors that can occur during the transmission of data. The CRC field is 15 bits long and is transmitted by the source node after the data field.
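For the curious, classic CAN uses a 15-bit CRC with the generator polynomial x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1 (0x4599), as described in the Bosch specification. Below is a bit-serial C sketch of that calculation; the input bit array here is arbitrary, whereas a real controller feeds in the frame bits from the SOF up to the end of the data field:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Bit-serial CRC-15 calculation, generator polynomial 0x4599
 * (x^15 + x^14 + x^10 + x^8 + x^7 + x^4 + x^3 + 1). */
static uint16_t can_crc15(const uint8_t *bits, size_t nbits)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < nbits; i++) {
        uint8_t crc_next = bits[i] ^ ((crc >> 14) & 1);
        crc = (uint16_t)((crc << 1) & 0x7FFF);
        if (crc_next)
            crc ^= 0x4599;
    }
    return crc;
}

int main(void)
{
    /* Arbitrary example bit stream; a real controller feeds in the
     * frame bits from SOF to the end of the data field. */
    uint8_t bits[] = { 0, 0,0,1,0,0,1,0,0,0,1,1 };
    printf("CRC-15 = 0x%04X\n", (unsigned)can_crc15(bits, sizeof bits));
    return 0;
}
```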
ACK stands for acknowledgment, and it lets the source node know whether the data frame has been successfully received. After the transmission of the CRC, there is an ACK slot during which the source node transmits a recessive bit and waits for it to be overwritten. Any node that has received the message without errors (not only the target node) transmits a dominant bit in this slot. If no node does, the bus stays recessive, which notifies the source node of the transmission error so it can prepare to retransmit the message.
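Here is a toy C model of the ACK slot, under the simplifying assumption of three receivers on the bus: the transmitter sends a recessive bit, and any receiver whose CRC check passed pulls the slot dominant through the wired-AND behaviour of the bus:

```c
#include <stdio.h>

/* Toy model of the ACK slot. The transmitter drives it recessive (1);
 * every node that received the frame without a CRC error drives it
 * dominant (0). Because the bus is a wired-AND, one good receiver is
 * enough to acknowledge the frame. Three receivers are assumed here. */
int main(void)
{
    int crc_ok[3] = { 0, 1, 0 };   /* only receiver 1 got a valid frame */
    int ack_slot  = 1;             /* transmitter sends recessive       */

    for (int i = 0; i < 3; i++)
        if (crc_ok[i])
            ack_slot = 0;          /* a good receiver pulls the slot dominant */

    printf("ACK slot: %s\n", ack_slot == 0
           ? "dominant (frame acknowledged)"
           : "recessive (no ACK, transmitter will retransmit)");
    return 0;
}
```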
The EOF field consists of seven recessive bits. After the ACK slot, the bus returns to the recessive state, indicating the end of the frame. The EOF field helps receivers recognize the end of the message frame, interpret the boundaries of the data frame correctly, and prepare for the next frame.
The interframe space (IFS) consists of a minimum of three recessive bits and separates data and remote frames from the preceding and succeeding frames. Typically, the IFS gives the controller enough time to move a received frame to its proper position in a message buffer.
It is important to talk about bit stuffing because, when analyzing the data frame on an oscilloscope, the CAN message might not look as expected in terms of the number of bits transmitted. This technique is used by CAN nodes to ensure proper synchronization: stuffed (extra) bits are inserted to avoid long runs of bits with the same value, which the receiver could otherwise misinterpret as a loss of synchronization.
The rule for bit stuffing is simple: whenever the sender has transmitted five consecutive bits of the same value, it sends one bit of the opposite value to maintain synchronization. The receiver is aware of this rule and removes the stuffed bit it encounters after five consecutive bits of the same value, restoring the original data stream. Keep in mind that bit stuffing adds some overhead to the transmitted data, but it ensures reliable communication.
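Here is a short C sketch of the stuffing rule on the transmitter side, using an arbitrary bit array as input. Note that a stuffed bit itself counts toward the next run of five, which the code reflects by restarting the run counter at the stuffed bit:

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Transmitter-side bit stuffing: after five consecutive bits of the
 * same value, insert one bit of the opposite value. The stuffed bit
 * starts a new run. Returns the number of bits written to 'out',
 * which the caller must size generously (here 2x the input). */
static size_t stuff_bits(const uint8_t *in, size_t nbits, uint8_t *out)
{
    size_t n = 0, run = 0;
    uint8_t prev = 2;                      /* impossible value to start */

    for (size_t i = 0; i < nbits; i++) {
        out[n++] = in[i];
        run  = (in[i] == prev) ? run + 1 : 1;
        prev = in[i];
        if (run == 5) {                    /* five in a row: stuff the opposite */
            out[n++] = (uint8_t)!in[i];
            prev = (uint8_t)!in[i];
            run  = 1;
        }
    }
    return n;
}

int main(void)
{
    uint8_t in[] = { 1,1,1,1,1, 1,0,0,0,0, 0,0 };
    uint8_t out[2 * sizeof in];
    size_t  n = stuff_bits(in, sizeof in, out);

    for (size_t i = 0; i < n; i++)
        printf("%u", out[i]);
    printf("  (%zu -> %zu bits)\n", sizeof in, n);
    return 0;
}
```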
In summary, the CAN bus is widely adopted across industries, particularly in automotive and industrial applications, for its ability to facilitate reliable, real-time communication between the nodes in a network. Its cost-effectiveness, scalability, and resilience to faults at both the hardware and software levels provide a solid foundation for building complex systems on top of it.