(Bloomberg) -- Microsoft Corp. will stop selling artificial intelligence-based facial-analysis software tools that infer a subject’s emotional state, gender, age, mood and other personal attributes after the algorithms were shown to exhibit problematic bias and inaccuracies.
Existing customers of the tools can keep using them for one year before access expires. The company is also limiting use of its other facial-recognition programs to ensure the technologies meet Microsoft’s ethical AI guidelines: new customers will need to apply for access to facial-recognition features in Microsoft’s Azure Face API, Computer Vision and Video Indexer, while current customers have a year to apply for continued access. The changes were outlined alongside the release of the second update to Microsoft’s Responsible AI Standard, in blog posts by Chief Responsible AI Officer Natasha Crampton and Azure AI Product Manager Sarah Bird.
The changes come two years after Microsoft and Amazon.com Inc., whose cloud unit competes with Azure, paused sales of facial-recognition technology to U.S. police agencies in the wake of research showing it performed poorly on subjects with darker skin. Some states have passed laws governing the use of such products, including Washington, where both tech companies are headquartered. Even as some of the biggest technology companies back away from the controversial technology, smaller companies such as NEC Corp. and Clearview AI maintain robust businesses selling facial-recognition tools for use in ways that raise privacy and security questions, including by law enforcement.
Microsoft isn’t doing away entirely with using AI to read human reactions. The company continues to add features that make guesses about people’s feelings or emotional state. A new tool for sales representatives, announced last week, will use AI to run sentiment analysis on customer engagements held over Microsoft Teams, analyzing how potential clients may be reacting.