Autonomous vehicles have been in the limelight lately for a variety of reasons, but a recent study from King's College London surfaces a problem more troubling than mere tech glitches. The research highlights potential biases in the AI systems that guide these vehicles.
The researchers examined eight different AI systems designed to detect pedestrians and found some alarming trends. The systems struggled to spot pedestrians with darker skin tones, failing to detect them nearly 8% more often than those with lighter skin. The consequences of such biases are potentially dangerous.
To get these results, the team studied 8,111 images, labeling them by attributes such as gender, age, and skin tone. With 16,070 gender labels, 20,115 age labels, and 3,513 skin tone labels, they had a vast dataset to work with. The findings revealed a 7.52% gap in detection accuracy between lighter- and darker-skinned individuals. The disparity became even more pronounced in low-contrast or dimly lit conditions, such as at night.
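To make the headline figure concrete, here is a minimal sketch, in Python, of how a detection-rate gap between two demographic groups might be computed from labeled results. This is not the study's actual code; the group names and sample counts below are hypothetical, chosen only to illustrate the arithmetic behind a gap of several percentage points.

```python
# Illustrative sketch only: per-group pedestrian detection rates and their gap.
# Group labels ("lighter", "darker") and counts are hypothetical examples.
from collections import defaultdict

def detection_rate_gap(records, group_a, group_b):
    """records: iterable of (group_label, was_detected) pairs.
    Returns the detection rate for each group and the difference between them."""
    detected = defaultdict(int)
    total = defaultdict(int)
    for group, was_detected in records:
        total[group] += 1
        detected[group] += int(was_detected)
    rate_a = detected[group_a] / total[group_a]
    rate_b = detected[group_b] / total[group_b]
    return rate_a, rate_b, rate_a - rate_b

# Hypothetical example: 1,000 labeled pedestrians per group,
# with 92% detected in one group and 85% in the other.
records = [("lighter", i % 100 < 92) for i in range(1000)] + \
          [("darker", i % 100 < 85) for i in range(1000)]
rate_l, rate_d, gap = detection_rate_gap(records, "lighter", "darker")
print(f"lighter: {rate_l:.2%}, darker: {rate_d:.2%}, gap: {gap:.2%}")
# -> lighter: 92.00%, darker: 85.00%, gap: 7.00%
```

A real evaluation would match detector outputs to ground-truth bounding boxes before counting a pedestrian as "detected," but the group-level comparison reduces to the same rate arithmetic shown here.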
Another surprising discovery was related to age. The AI systems were 20% less likely to detect children compared to adults, pointing to another significant oversight.
Although the exact AI models used in autonomous vehicles remain proprietary, Jie Zhang, one of the study's co-authors, suspects real-world systems may not differ much from those tested. Zhang told New Scientist, "While these companies won't share specifics, we know many base their technology on existing open-source models. It's likely similar biases exist in those systems too."
As we increasingly rely on AI-driven tech, these underlying biases can’t be ignored, especially when lives are at stake. It’s imperative for both the tech community and regulators to address these concerns proactively.