Autonomous cars and smart cities must detect traffic lights to function. Discover how to acquire and label traffic light training data efficiently, and reach human-level accuracy in recognizing any model of traffic light. The world's best performing AI teams are differentiated by the quality and quantity of the training data at their disposal. Learn how to reach these levels in traffic light understanding in computer vision.
Running red lights kills almost 1,000 people each year in the US alone. More than half of those killed were passengers or people riding in other vehicles which did not run the red light. Autonomous vehicles and next generation smart cities could help prevent thousands of deaths, and millions of accidents. The solution? Exposing AI to the right training scenarios in recognizing and respecting red lights. This is no small feat - traffic lights look different from city to city, and they must be identified with high accuracy several times per second to control a car's speed in time for a safe stop.
Millions of autonomous driving images are labelled on V7 to improve the performance of self-driving cars across different cities. Label them using V7's automated annotation tools, and then train a model in one click to pre-label all of your remaining data. You may also rely on pre-trained models to kickstart your labelling in a new domain, such as a new class of traffic light or weather condition, and re-train your models with this additional data to widen your AI's coverage and avoid out-of-distribution errors.
There are different ways of labelling a traffic light for identification. V7 is always happy to advise you on how data should be labelled - it's our primary research focus! Here are a few we recommend:
The traffic light is 1 object, and the lights are 2-3 more with classes based on colour and significance. For example, a traffic light with a red light on has 1 label for the light, and 1 label for the red colour. This way a model's features aren't shared between two conflicting classes, but instead are relegated to the light areas. You can then associate the centroid of 1 light to the position of its corresponding parent traffic light, or use a grouping label.
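The centroid-to-parent association above can be sketched in a few lines. This is an illustrative example, not a V7 API: the annotation records, class names, and box format (x, y, width, height) are all assumptions made for the sketch.

```python
# Hypothetical annotation records; boxes are (x, y, width, height).
def centroid(box):
    x, y, w, h = box
    return (x + w / 2, y + h / 2)

def contains(box, point):
    x, y, w, h = box
    px, py = point
    return x <= px <= x + w and y <= py <= y + h

def group_lights(parents, lights):
    """Attach each light annotation to the parent traffic-light
    box whose area contains the light's centroid."""
    groups = {i: [] for i in range(len(parents))}
    for light in lights:
        c = centroid(light["box"])
        for i, parent in enumerate(parents):
            if contains(parent["box"], c):
                groups[i].append(light["class"])
                break
    return groups

parents = [{"box": (100, 50, 40, 100)}]
lights = [{"box": (110, 60, 20, 20), "class": "red_light"}]
print(group_lights(parents, lights))  # {0: ['red_light']}
```

In a labelling tool this association is usually a grouping or parent-child link rather than code, but the geometric rule is the same: a light belongs to the traffic light whose box contains its centre.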
The traffic light object classes are defined by the colour of the active lights. This method is becoming outdated, as it risks teaching a model that Red, Green, and Yellow traffic lights share the same visual features from the side. Whilst it is easier to create larger annotations and friendlier on small model input sizes (easier to spot a whole traffic light than the light within), it's a less elegant approach.
Sometimes the supporting structure of a light matters as much as the light itself. In cases where smart cities offer proximity sensing to vehicles, or in vehicles that do not rely on LIDAR, the position of a free-floating object can only be determined by seeing its supporting structure. This type of annotation is also used to distinguish traffic lights that are presented in a series over lanes, with each light being correlated and grouped to a lane below.
Ultimately the best schema for your AI to learn traffic lights depends on what it will run on. Robo taxis, delivery robots, ADAS features, and traffic control systems will have different embodiments, angles, and edge cases to resolve. The right labelling schema can take years to master, and can make or break your AI. Here at V7, we'll help you define how to best prepare your training data for a successfully trained neural network. Whether you're an experienced deep learning team, or a new startup, you'll find all the tools you need to get started on traffic light understanding.
The best ML models start to learn objects across domains after they've seen 1,000 examples, but you can start training with as few as 30 on V7. What's important is that you fine-tune your model on new scenarios. This means uploading a sample of data from a new city, country, or domain where traffic lights are involved, so that the AI can adapt to the new appearance of these objects, as well as to any changed weather or optics conditions - bringing your accuracy from 80% using generic models to 95%+. If you are building autonomous vehicles, these amounts can run into the millions. You can use V7's active learning-based labelling to pre-process large datasets with models and only pick out low-confidence images to re-annotate.
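The low-confidence selection step can be sketched as a simple filter over model output. The prediction format and the 0.7 threshold here are illustrative assumptions, not part of any real API; in practice the threshold is tuned to balance review effort against missed errors.

```python
# Sketch of active-learning selection: run a trained model over
# unlabelled images, then queue for human review any image the
# model is unsure about (or found nothing in at all).
def select_for_review(predictions, threshold=0.7):
    """predictions: list of (image_id, [confidence, ...]) pairs.
    An image is queued if any detection falls below the threshold,
    or if the model produced no detections."""
    queue = []
    for image_id, confidences in predictions:
        if not confidences or min(confidences) < threshold:
            queue.append(image_id)
    return queue

preds = [
    ("img_001", [0.98, 0.95]),  # confident: skip
    ("img_002", [0.91, 0.42]),  # one uncertain detection: review
    ("img_003", []),            # no detections: worth a look
]
print(select_for_review(preds))  # ['img_002', 'img_003']
```

Everything above the threshold is trusted as a pre-label, so human effort concentrates on the images most likely to teach the model something new.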
AI performance shouldn't stop at the first training session.
Sync your product with a dataset and continually improve your AI performance by turning its output into new ground truth. Continual learning will allow your models to surpass 95% accuracy barriers and reach 99% and above as more data from your use case or product is supervised by humans, and learnt by your AI.
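One way to picture this human-in-the-loop cycle is a triage step that splits model output into auto-accepted ground truth and items routed to a reviewer. This is a minimal sketch under assumed names and thresholds, not V7's implementation; real pipelines also track reviewer corrections and retraining schedules.

```python
# Hedged sketch of a continual learning feedback loop: confident
# predictions become candidate ground truth, the rest go to humans
# for correction before joining the training set.
def triage(predictions, accept_threshold=0.95):
    """Split model output into auto-accepted annotations and
    items queued for human review."""
    ground_truth, review = [], []
    for item in predictions:
        if item["confidence"] >= accept_threshold:
            ground_truth.append(item)
        else:
            review.append(item)
    return ground_truth, review

preds = [{"id": "a", "confidence": 0.99},
         {"id": "b", "confidence": 0.80}]
gt, review = triage(preds)
print([i["id"] for i in gt], [i["id"] for i in review])
```

Each pass through the loop grows the supervised dataset, which is what pushes accuracy past the 95% plateau towards 99% and above.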