F1Tenth Autonomous RC Car - Lockheed Martin

August 2024 - May 2025

As the Software Lead for the F1Tenth Autonomous RC Race Car project, a collaboration with Lockheed Martin, I am developing an intelligent robotic system capable of autonomous navigation alongside three of my peers. The project leverages sensor fusion, machine learning, and ROS2 to enable an RC car to navigate independently from real-time sensor inputs, following the format of the F1Tenth competition.

The primary objective of this project is to develop an autonomous 1/10th-scale robotic car that synthesizes inputs from a camera, a LIDAR sensor, and an IMU to navigate a track. By fusing these sensor inputs and deploying a machine learning model, we aim to achieve precise, real-time navigation on a scaled-down racecar platform.

The required hardware for this project consists of an embedded computer capable of running real-time inference models, sensors for perceiving the training environment, a motor speed controller, and supporting electronic components. For the compute platform, we chose the NVIDIA Jetson Orin Nano for its CUDA capability and its small form factor. For sensing, we use an Intel RealSense D435i depth camera, a Hokuyo UST-10LX LIDAR scanner, and the IMU built into the VESC MKVI motor speed controller.

To develop our software for operating the car, we started from the manual-driving ROS2 workspace in the official F1Tenth GitHub repository, which provides drivers for low-level communication with the joystick, motor speed controller, and LIDAR. On top of this, we developed several Python nodes to implement our autonomous mode: a camera interface built on OpenCV, a data collection node that writes training data to a CSV file, and an autonomous node that predicts steering and throttle commands for the car using the deployed machine learning model.
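The data-collection step can be sketched independently of ROS2 as a plain CSV-logging function. The column layout below (steering and throttle labels first, then sensor readings) is a hypothetical illustration, not necessarily the project's actual schema:

```python
import csv
import io

# Hypothetical column layout -- the real workspace may log different fields.
HEADER = ["steering", "throttle"] + [f"range_{i}" for i in range(4)]

def log_sample(writer, steering, throttle, lidar_ranges):
    """Append one synchronized training sample: labels first, then sensor readings."""
    writer.writerow([steering, throttle, *lidar_ranges])

# Write to an in-memory buffer here; the real node would write to a .csv file.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(HEADER)
log_sample(writer, 0.12, 0.8, [1.5, 2.0, 2.0, 1.5])
```

In the actual node, a call like this would sit inside a ROS2 subscriber callback so that each row pairs the driver's current commands with the sensor frame that produced them.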

To build and deploy our machine learning model, we chose PyTorch for its readable API and our prior experience with the library. In the workspace's training mode, the data collection node writes sensor data to a CSV file. The model is trained in a Jupyter Notebook, which lets us run individual code cells and interleave Markdown for readability. The CSV file is read into the notebook runtime as a pandas DataFrame and wrapped in a PyTorch Dataset, which is used to train our neural networks. Separate models are trained for different data types, such as the camera for track navigation and the LIDAR for object avoidance.
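The DataFrame-to-Dataset step can be sketched as follows. The column names are the same hypothetical ones as above, and a tiny in-memory DataFrame stands in for the real CSV file:

```python
import pandas as pd
import torch
from torch.utils.data import Dataset

class DriveLogDataset(Dataset):
    """Wraps a data-collection DataFrame so a DataLoader can batch it.

    Assumes hypothetical columns: 'steering' and 'throttle' as labels,
    with every remaining column treated as a sensor feature.
    """
    def __init__(self, df: pd.DataFrame):
        labels = df[["steering", "throttle"]].to_numpy(dtype="float32")
        feats = df.drop(columns=["steering", "throttle"]).to_numpy(dtype="float32")
        self.x = torch.from_numpy(feats)
        self.y = torch.from_numpy(labels)

    def __len__(self):
        return len(self.x)

    def __getitem__(self, idx):
        return self.x[idx], self.y[idx]

# Stand-in for pd.read_csv("training_data.csv")
df = pd.DataFrame({
    "steering": [0.1, -0.2],
    "throttle": [0.5, 0.7],
    "range_0": [1.5, 2.1],
    "range_1": [2.0, 1.8],
})
ds = DriveLogDataset(df)
```

From here, a `torch.utils.data.DataLoader` over `ds` supplies shuffled mini-batches to the training loop in the notebook.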

Because the training inputs are labeled automatically with the steering and throttle values recorded during data collection, no manual annotation of the dataset is needed. The trade-off is that the car can only learn to drive as well as the humans driving it in manual mode with a controller: while automatic labeling greatly speeds up dataset creation, it also leaves the training set vulnerable to learning to reproduce driver errors. An end-to-end model, mapping raw sensor input directly to steering and throttle commands, requires a large dataset, on the order of tens of thousands of samples at minimum, which would be impractical to label by hand within the time frame of this project.
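A minimal sketch of an end-to-end network of this kind is shown below. The layer sizes and input dimension are illustrative assumptions, not the project's actual architecture; the point is the shape of the mapping, flattened sensor frame in, two continuous commands out:

```python
import torch
import torch.nn as nn

class EndToEndPolicy(nn.Module):
    """Illustrative end-to-end network: flattened sensor frame in,
    [steering, throttle] out. Layer sizes are assumptions, not the
    project's real architecture."""
    def __init__(self, n_inputs: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, 64),
            nn.ReLU(),
            nn.Linear(64, 2),  # two outputs: steering, throttle
            nn.Tanh(),         # bound commands to [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

model = EndToEndPolicy(n_inputs=128)
out = model(torch.zeros(4, 128))  # batch of 4 dummy frames
```

Every parameter in this mapping is fit from the driver-labeled samples, which is why the dataset size and the quality of the human driving both matter so much.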

At the end of the spring term, the project will be handed off to next year's group. Several ideas were set aside to avoid exceeding the hardware's capabilities, including a time-series machine learning model such as an LSTM or RNN to better learn specific maneuvers on the track. Currently the car predicts steering and throttle one frame at a time, whereas a sequence model would let it reason over multiple frames at once and characterize entire turns. However, this significantly increases both the input size and the model size, which would in turn increase inference time.
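The time-series idea can be sketched as an LSTM over a sliding window of sensor frames. All sizes below are assumptions for illustration; note that the input grows from one frame to an entire window, which is the inference-cost trade-off described above:

```python
import torch
import torch.nn as nn

class SequencePolicy(nn.Module):
    """Sketch of the deferred time-series idea: an LSTM consumes a window
    of sensor frames and predicts commands from the final hidden state.
    Feature and hidden sizes are illustrative assumptions."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # steering, throttle

    def forward(self, x):  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict from the last time step

model = SequencePolicy(n_features=16)
pred = model(torch.zeros(2, 10, 16))  # window of 10 frames per sample
```

Compared with the frame-by-frame model, each forward pass now processes ten frames of input plus the recurrent state, which is where the projected increase in inference time comes from.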

This project represents a cutting-edge fusion of robotics, machine learning, and embedded systems, demonstrating the potential of AI-driven autonomous navigation. As the Software Lead, I am excited to continue refining the system, exploring advanced algorithms, and pushing the boundaries of what’s possible in small-scale autonomous racing.