Mobility value chain vision: To develop edge AI-based perception, cognition, and monitoring technologies that advance the mobility sector in three different aspects: the mobile agent (e.g., drones, UAVs, UGVs, and other vehicles), stationary and mobile multi-agent collaboration (distributed edge intelligence), and infrastructure. These technologies are key to enabling connected and automated mobility with increased energy efficiency, reliability, privacy, and reusability for both indoor and outdoor applications. The value chain integrates the edge AI technological developments into three demonstrators.
VCD 4.1: Cognitive Mobile Multi-Agent Platform demonstrator (lead: EDI, IMEC) – aims to develop a deep-learning-based perception and action system, including hardware, that gives mobile agents perceiving, comprehending, and reasoning abilities, complemented by modular components for classical low-level reactive control and deep-learning-based deliberative planning and task allocation at the deep-edge level.
VCD 4.2: Roadside Perception Units (RSPU) Connected with a LoRa 2.4 GHz Mesh Network demonstrator (lead: EDI, IMST, GNT, HUA, INTRA, ITML) – aims to provide stationary agents, interconnected by a LoRa 2.4 GHz mesh network, that are equipped with energy-efficient edge platforms for AI-based multi-sensor processing resilient to seasonal drift. Decentralized wireless communication enables the transmission of non-safety-critical messages (e.g., re-calibration messages); LoRa 2.4 GHz solutions will be investigated in a meshed network structure.
VCD 4.3: Condition Monitoring for Mobile Agents demonstrator (lead: NEURO, INTRA) – aims to develop an AI-based hardware/software solution and the AI algorithms for condition monitoring of selected mobile agent components (e.g., UAV motor bearings and other vehicle components).
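To illustrate the kind of lightweight condition-monitoring pipeline such an edge solution could run, the sketch below extracts spectral band energies from a vibration signal and flags deviations from a healthy baseline. All signal parameters, frequency bands, and thresholds here are hypothetical placeholders, not the demonstrator's actual design.

```python
import numpy as np

def band_energies(signal, fs, bands):
    """Compute the energy of a vibration signal in given frequency bands via FFT."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands])

def is_anomalous(features, baseline, factor=3.0):
    """Flag the sample if any band energy exceeds the healthy baseline by `factor`."""
    return bool(np.any(features > factor * baseline))

# Hypothetical setup: 1 kHz accelerometer sampling, two bands of interest.
fs = 1000
t = np.arange(0, 1.0, 1.0 / fs)
bands = [(40, 60), (110, 130)]

healthy = np.sin(2 * np.pi * 50 * t)                  # nominal 50 Hz rotation
faulty = healthy + 0.8 * np.sin(2 * np.pi * 120 * t)  # added 120 Hz fault harmonic

baseline = band_energies(healthy, fs, bands)
print(is_anomalous(band_energies(healthy, fs, bands), baseline))  # False
print(is_anomalous(band_energies(faulty, fs, bands), baseline))   # True
```

In practice a deep-edge deployment would replace the fixed threshold with a learned model, but the feature-extraction step (band energies or similar spectral statistics) is cheap enough to run continuously on constrained hardware.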