Unlike cameras, which rely on visible light, radars use electromagnetic waves that can penetrate clouds, fog, dust, and smoke. This makes them more robust and reliable for applications that require dependable target detection, such as autonomous driving, aerospace, and defense. Furthermore, radars directly provide depth and velocity measurements of the scene, which are essential for 3D reconstruction and motion estimation. Cameras, on the other hand, capture only 2D images that require additional processing and calibration to infer depth and motion.

Moreover, most commercial off-the-shelf (COTS) radars are based on the frequency-modulated continuous-wave (FMCW) principle and operate at mm-wave frequencies (i.e., 30 GHz to 300 GHz). FMCW radars, while having a simple receiver structure, enjoy a fine range resolution thanks to the large bandwidth available at mm-wave frequencies. In addition, owing to the short wavelength, mm-wave radars have a smaller footprint and lower power consumption than cameras, which makes them well suited for edge devices with limited space and energy budgets.
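To make the bandwidth-to-resolution link concrete, here is a small back-of-the-envelope sketch. The formula delta_R = c / (2B) is the standard FMCW range-resolution relation; the 4 GHz bandwidth below is an illustrative assumption (typical of 77 GHz automotive chirps), not a value from the text.

```python
# Illustrative only: FMCW range resolution delta_R = c / (2 * B).
# The large bandwidth available at mm-wave yields centimeter-level resolution.
c = 3e8          # speed of light (m/s)
B = 4e9          # assumed sweep bandwidth (Hz), e.g. a 4 GHz chirp at 77 GHz
delta_R = c / (2 * B)
print(f"Range resolution: {delta_R * 100:.2f} cm")  # 3.75 cm
```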

While enjoying a high range resolution, mm-wave radars suffer from poor angular resolution. In other words, they struggle to discriminate between objects that are close together in the same direction. One solution is to increase the number of receive antennas, thereby enlarging the radar aperture. However, matching the range resolution in azimuth and elevation would require so many antennas that it becomes either expensive or impractical.
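A rough estimate shows why a purely physical array is impractical. All numbers below (77 GHz carrier, 10 m range, a cross-range resolution target matching a few-centimeter range resolution) are hypothetical assumptions for illustration, using the textbook beamwidth approximation theta ≈ lambda / D.

```python
# Back-of-the-envelope sketch (assumed values): how many half-wavelength-spaced
# antennas would a physical array need to match a given cross-range resolution?
c = 3e8
f = 77e9                    # assumed 77 GHz mm-wave radar
lam = c / f                 # wavelength, ~3.9 mm
R = 10.0                    # assumed target range (m)
delta_cross = 0.0375        # desired cross-range resolution (m)
theta = delta_cross / R     # required beamwidth (rad), theta ~ lam / D
D = lam / theta             # required physical aperture (m)
N = int(D / (lam / 2)) + 1  # element count at half-wavelength spacing
print(f"Aperture: {D:.2f} m, elements: {N}")
```

Hundreds of elements for a roughly one-meter aperture is exactly the cost/size wall that motivates synthesizing the aperture instead.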

Alternatively, the aperture size can be increased synthetically by processing the radar data over time. This gives rise to synthetic aperture radar (SAR) algorithms, which exploit the relative motion between the radar and the scene to create a virtual antenna array that is much larger than the physical one [1]. By coherently combining the signals received at different positions along the radar's trajectory, SAR achieves a much higher angular resolution and produces a high-resolution image of the scene (Figure 1).
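The core idea can be sketched in a few lines of NumPy: echoes recorded along the track are phase-aligned for a candidate pixel and summed, so they add constructively only at the true target position. The geometry, frequency, and 1 m track below are illustrative assumptions, and this toy coherent summation is only meant to convey the principle, not any particular SAR implementation.

```python
import numpy as np

# Toy sketch (assumed geometry): coherent summation over radar positions,
# the principle behind a synthetic aperture. Echoes add constructively only
# at the true target pixel, giving cross-range resolution far beyond the
# physical antenna's beamwidth.
c, f = 3e8, 77e9
lam = c / f
k = 2 * np.pi / lam                      # wavenumber (rad/m)
positions = np.linspace(-0.5, 0.5, 101)  # radar x-positions along a 1 m track
target = np.array([0.1, 5.0])            # true point target (x, y) in metres

# Simulated two-way phase history of one unit-amplitude point target
ranges = np.hypot(positions - target[0], target[1])
echoes = np.exp(-1j * 2 * k * ranges)

def backproject(pixel):
    """Focus one pixel: undo the expected two-way phase, then sum coherently."""
    r = np.hypot(positions - pixel[0], pixel[1])
    return np.abs(np.sum(echoes * np.exp(1j * 2 * k * r))) / len(positions)

on_target = backproject(target)               # exactly 1.0: phases cancel
off_target = backproject(target + [0.05, 0])  # much smaller: phases smear out
print(f"on-target: {on_target:.3f}, 5 cm off: {off_target:.3f}")
```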


Figure 1. SAR formation diagram. The radar moves along a trajectory, transmitting and receiving signals at different positions, creating a synthetic aperture that is much larger than the physical antenna [1].

In fact, SAR enables radar imaging with only a pair of antennas (i.e., one transmitter and one receiver), although additional receive antennas can be used to further improve image quality.

Different SAR algorithms, ranging from matched filtering (MF) to Doppler beam sharpening (DBS), which is simply a 2D fast Fourier transform (FFT), offer different trade-offs between image quality and processing complexity. To fit on edge devices, the trade-off between hardware complexity (radar, processing, and memory resources) and software complexity must be optimized.
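To see why DBS sits at the low-complexity end, note that after dechirping, a point target appears as a 2D complex sinusoid in the (fast-time sample x pulse) data matrix, so a plain 2D FFT focuses it into a single peak. The matrix sizes and target bins below are arbitrary choices for this synthetic sketch.

```python
import numpy as np

# Minimal DBS sketch on synthetic data (sizes and bins are assumptions):
# a dechirped point target is a 2D complex sinusoid in the data matrix,
# so a single 2D FFT is enough to focus it.
num_samples, num_pulses = 256, 128
n = np.arange(num_samples)[:, None]   # fast-time index (maps to range)
m = np.arange(num_pulses)[None, :]    # pulse index (maps to cross-range)
range_bin, doppler_bin = 40, 17       # where the target should focus
raw = np.exp(2j * np.pi * (range_bin * n / num_samples
                           + doppler_bin * m / num_pulses))

image = np.abs(np.fft.fft2(raw))      # DBS image = 2D FFT of the data matrix
peak = np.unravel_index(np.argmax(image), image.shape)
print(peak)  # (40, 17)
```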

Among the SAR algorithms, the polar format algorithm (PFA) [2] is a strong candidate for embedding at the edge. Specifically, the radar signals are collected in a matrix: after the radar emits each pulse (or chirp), the received reflection is stored in the corresponding column. The DBS algorithm then applies a 2D FFT to this matrix to reconstruct the image. However, for large synthetic apertures the image becomes distorted, because the spatial-frequency samples (referred to as wavenumbers) lie on a polar raster. To remedy this, PFA interpolates the matrix values onto a rectangular grid before the 2D FFT, as shown in Figure 2. A few example images produced by PFA are shown in Figure 3.
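The resampling step can be sketched as follows. This is a simplified illustration, not Doerry's real-time implementation: the wavenumber ranges, aspect angles, and the toy single-target phase history are all assumptions, and SciPy's general-purpose `griddata` stands in for the optimized 1D interpolations a production PFA would use.

```python
import numpy as np
from scipy.interpolate import griddata

# Sketch of the PFA resampling step (all sizes/geometry are assumptions):
# phase-history samples lie on a polar raster in the (kx, ky) wavenumber
# plane; PFA interpolates them onto a rectangular grid so that a 2D inverse
# FFT forms an undistorted image.
num_freq, num_angles = 128, 64
k_r = np.linspace(1500.0, 1600.0, num_freq)        # radial wavenumbers (rad/m)
theta = np.linspace(-0.1, 0.1, num_angles)         # aspect angles (rad)
KR, TH = np.meshgrid(k_r, theta, indexing="ij")
kx, ky = KR * np.sin(TH), KR * np.cos(TH)          # polar raster in (kx, ky)

data = np.exp(1j * (0.3 * kx + 0.5 * ky))          # toy point-target phase history

# Rectangular output grid covering the polar annulus (corners filled with 0)
gx = np.linspace(kx.min(), kx.max(), num_freq)
gy = np.linspace(ky.min(), ky.max(), num_freq)
GX, GY = np.meshgrid(gx, gy, indexing="ij")
pts = np.column_stack([kx.ravel(), ky.ravel()])
resampled = griddata(pts, data.ravel(), (GX, GY), method="linear", fill_value=0)

image = np.fft.fftshift(np.fft.ifft2(resampled))   # focused image
print(image.shape)  # (128, 128)
```

The interpolation dominates the extra cost over plain DBS, which is why PFA keeps the overall complexity close to a pair of FFTs.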


Figure 2. The PFA algorithm. kx and ky are the wavenumbers in the along-track and cross-track directions.


Figure 3. Examples of radar images reconstructed by PFA. (a) The Capitol building, captured with an airborne Ku-band radar [3]. (b) An image from another airborne Ku-band radar [4].

Overall, radars bring more robustness to the edge at the cost of more complexity. The PFA SAR algorithm, however, is a computationally efficient radar imaging method that delivers high-resolution images from a small, low-power radar at the edge. As a next step, the SAR image can be fused with the camera image using an efficient AI algorithm for object detection and classification.

References

[1]    M. A. Richards, Fundamentals of Radar Signal Processing. New York, NY: McGraw-Hill, 2005.

[2]    A. W. Doerry, "Real-Time Polar-Format Processing for Sandia's Testbed Radar Systems," Sandia National Laboratories Report SAND2001-1644P, Jun. 2001.

[3]    C. V. Jakowatz Jr. and N. Doren, "Comparison of polar formatting and back-projection algorithms for spotlight-mode SAR image formation," in Proc. SPIE, vol. 6237, Algorithms for Synthetic Aperture Radar Imagery XIII, May 2006, paper 62370H.

[4]    Y. Tang, M.-D. Xing, and Z. Bao, "The Polar Format Imaging Algorithm Based on Double Chirp-Z Transforms," IEEE Geoscience and Remote Sensing Letters, vol. 5, no. 4, pp. 610-614, Oct. 2008.

Blog signed by: IMEC team
