September 26, 2024
Since opening our doors in 2017, May Mobility has transformed mobility as we work to design the world’s best autonomous vehicle (AV) technology. The transit agencies, cities and businesses we partner with rely on us to fill transit gaps and solve unique problems in their communities.
But for all the benefits we bring to our clients, we often get asked: how do May Mobility autonomous vehicles actually work? If this is a question that’s been on your mind, you’ve come to the right place.
AVs are equipped with sensors that let them build a map of their environment. This information is processed by autonomous driving software in the vehicle’s computer. Vehicles at autonomy Level 3 and above can determine a suitable path and decide whether the vehicle should accelerate, brake or steer at any given moment. May Mobility designs Level 4 autonomous vehicles, which can handle complex, nuanced driving situations without any driver intervention.
The exceptional safety and performance of May Mobility AVs come from two cooperative systems: our combined sensor stack and our revolutionary Multi-Policy Decision Making (MPDM) platform. Working together, these systems allow the vehicle to react quickly and help ensure the safety of every road user.
Our vehicles feature a platform-agnostic Autonomous Driving Kit (ADK) composed of the autonomous driving node (ADN) computer within the car and cameras, radars and LiDAR (Light Detection and Ranging) sensors on top and around the vehicle. These provide a complete view of the vehicle’s surroundings, including static elements, dynamic elements and environmental conditions. The detailed information they gather is then sent to the ADN.
A combination of cameras with narrow- and wide-angle lenses allows the AV to classify and differentiate between objects around the vehicle. They also serve to detect traffic lights and look out for vehicles and pedestrians. Each camera is paired with a LiDAR for more comprehensive detection capabilities.
LiDAR is a laser-based sensor that bounces pulses of light off the vehicle’s surroundings to build a 3D representation of its environment. This allows it to measure distances and identify and classify objects and obstacles in its environment. Our AVs carry one large, long-range LiDAR on top, two smaller ones on each side, and one each at the front and rear for comprehensive perception and localization of the area.
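The time-of-flight principle behind LiDAR ranging can be sketched in a few lines. The function name and pulse timing below are illustrative assumptions for this post, not May Mobility specifications:

```python
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_time_s: float) -> float:
    """Distance to a target from a laser pulse's round-trip time.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return C * round_trip_time_s / 2.0

# A pulse returning after ~800 nanoseconds hit something roughly 120 m
# away -- about the sensing range mentioned later in this post.
print(round(lidar_range(800e-9), 1))  # 119.9
```

Sweeping millions of such pulses per second over the scene is what produces the 3D point cloud the vehicle perceives.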
Radar is a sensor that uses radio waves to provide accurate information about the distances and velocities of surrounding elements. Radars are positioned on each corner and in the front, allowing the AV to detect how fast or slow vehicles and pedestrians are moving and react accordingly.
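The Doppler relationship that lets radar measure velocity directly can be illustrated as follows. The 77 GHz carrier is a common automotive radar band assumed here for the example; the post does not state which band May Mobility uses:

```python
C = 299_792_458.0  # speed of light in m/s

def radial_velocity(doppler_shift_hz: float, carrier_hz: float = 77e9) -> float:
    """Relative (radial) speed of a target from the Doppler shift of the
    reflected radio wave: v = (delta_f * c) / (2 * f0). The factor of 2
    appears because the wave is shifted on the way out and again on the
    way back.
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A shift of ~5.1 kHz on a 77 GHz carrier corresponds to a target
# closing at about 10 m/s.
print(round(radial_velocity(5136.89), 2))
```

Because velocity comes straight from the frequency shift rather than from differencing positions over time, radar gives the AV a fast, weather-robust speed estimate for surrounding traffic.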
Each type of sensor technology has its own strengths and weaknesses, but each provides a piece to the larger puzzle. By working together, they provide all the information the ADN needs to maintain a complete 360-degree view of the environment up to 120 meters away. While some sensors provide overlapping data, it is that redundancy that helps the vehicle to consistently make safe driving decisions.
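The redundancy idea can be sketched as a toy voting scheme over per-sensor detections: an object reported by more than one modality is trusted more. The sensor names and the simple voting rule are illustrative assumptions, not May Mobility's actual fusion algorithm:

```python
from collections import Counter

def fuse(detections_by_sensor: dict, min_votes: int = 2) -> set:
    """Keep only objects reported by at least `min_votes` sensor modalities."""
    votes = Counter(obj for objs in detections_by_sensor.values() for obj in objs)
    return {obj for obj, n in votes.items() if n >= min_votes}

# Each modality reports what it can see; overlap builds confidence.
detections = {
    "camera": {"pedestrian", "traffic_light", "car"},
    "lidar":  {"pedestrian", "car", "curb"},
    "radar":  {"car"},
}
print(sorted(fuse(detections)))  # ['car', 'pedestrian']
```

A real fusion stack matches detections in space and time rather than by label, but the principle is the same: agreement across independent sensors is what makes a detection trustworthy.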
One of the biggest challenges in autonomous driving is safely handling the unexpected. Some AV companies train their software on massive data sets using machine learning. But if their vehicle encounters a situation those data sets didn’t capture, it may not know how to handle it in a robust – or perhaps even safe – manner.
Instead, it’s important to have a vehicle that can generalize. This is the technical term for a vehicle that can handle situations it wasn’t programmed for. In other words, it’s an AV that can imagine novel situations and make decisions like a real human driver.
Through the use of a high-end NVIDIA graphics processing unit (GPU), imagery from the combined sensor stack is processed within milliseconds, ready for our proprietary Multi-Policy Decision Making (MPDM) technology to do its work. Our MPDM platform uses that constant stream of data to run thousands of simulations per second. These allow it to predict potential hazards and choose the safest course of action at every moment. MPDM doesn’t just calculate what cars, pedestrians and other dynamic agents might do, but also how those actions might interact and change. All of this makes MPDM especially capable of handling the full range of possible driving conditions in the real world.
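As a rough illustration of the multi-policy idea, the sketch below forward-simulates a few candidate high-level policies against a stopped obstacle and picks the one that preserves the most clearance. The policy names, dynamics and scoring are toy simplifications of what the post describes, not the real MPDM system:

```python
# Candidate high-level policies, each expressed as an acceleration (m/s^2).
POLICIES = {"maintain_speed": 0.0, "slow_down": -2.0, "stop": -5.0}

def simulate(accel: float, ego_v: float, gap_m: float,
             steps: int = 30, dt: float = 0.1) -> float:
    """Forward-simulate one policy and return the minimum gap (m) to a
    stopped obstacle over the horizon (30 x 0.1 s = 3 seconds)."""
    min_gap, v = gap_m, ego_v
    for _ in range(steps):
        v = max(0.0, v + accel * dt)  # speed can't go negative
        gap_m -= v * dt
        min_gap = min(min_gap, gap_m)
    return min_gap

def choose_policy(ego_v: float, gap_m: float) -> str:
    """Pick the policy whose simulated worst-case clearance is largest."""
    return max(POLICIES, key=lambda p: simulate(POLICIES[p], ego_v, gap_m))

# At 10 m/s with only 15 m of clearance, braking hard is the only policy
# whose simulation keeps the vehicle clear of the obstacle.
print(choose_policy(ego_v=10.0, gap_m=15.0))  # stop
```

MPDM extends this idea by also simulating how other agents are likely to respond to each candidate policy, thousands of times per second, rather than treating them as fixed obstacles.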
The union of our autonomous driving kit and MPDM platform empowers us to solve some of the biggest problems in autonomous driving. When you step into one of our vehicles, you can feel confident in our ability to get you to where you’re going safely, even when the unexpected occurs. To learn more, access our Resource Library for whitepapers and videos about our vehicles and technology.
We love meeting transit agencies, cities, campuses, organizations and businesses where they are to explore how our AV solutions can solve their transportation gaps for good. Ready to partner up? Let’s talk.