Microsoft retires facial emotion recognition: AI may no longer guess what your face expresses

Expressing emotions through facial expressions is a nearly innate human ability, and people routinely read others' emotions from their faces. With the rapid development of technology, however, AI (artificial intelligence) can now also recognize people's facial expressions and attempt to infer their emotions.

▲ Picture from: ResearchGate

Not long ago, Microsoft, which has long worked on facial recognition technology, released guidance on the theme of "a framework for responsibly building artificial intelligence systems," openly sharing its Responsible AI Standard, the framework that guides how Microsoft builds AI systems.

▲ Picture from: Microsoft

The guidance notes that Microsoft has decided to retire certain facial analysis capabilities of its Azure Face service, because these features can be used to infer emotional states and identity attributes which, if abused, could subject people to stereotyping, discrimination, or unfair denial of services.

Currently, the Face service is available only to Microsoft-managed customers and partners. Existing customers have one year to transition and must stop using these features by June 30, 2023; new users can request access via the facial recognition intake form.

▲ Picture from: Microsoft

In fact, Microsoft is not disabling the technology entirely; rather, it is integrating it into "controlled" accessibility tools such as Seeing AI, an app for people with visual impairments. Seeing AI can describe objects and text, read signs, describe facial expressions, and assist with navigation. It currently supports English, Dutch, French, German, Japanese, and Spanish.

▲ Picture from: Microsoft

The guidance describes Microsoft's decision-making process, including its focus on principles such as inclusiveness, privacy, and transparency, and is the first major update to the standard since its launch in late 2019.

Microsoft said it made such a sweeping change to facial recognition because it recognized that for AI systems to be trustworthy, they must be appropriate solutions to the problems they are designed to solve.

▲ Picture from: Microsoft

As part of the effort to align the Azure Face service with the requirements of its Responsible AI Standard, Microsoft will also retire the capabilities that infer emotional states (happiness, sadness, neutrality, anger, and so on) and identity attributes such as gender, age, smile, facial hair, hair, and makeup.

In the case of emotional states, Microsoft decided not to provide open API access to technology that scans people's faces and claims to infer their emotional state from their facial expressions or movements.

▲ Picture from: Microsoft

Microsoft's documentation shows that the service can detect 27 facial landmarks per face and judge a variety of facial attributes, such as whether a given face is blurred, whether it has accessories, estimated gender and age, whether the person wears glasses, hair type, and whether the person is smiling.
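To make the retirement concrete, here is a minimal, hypothetical sketch (not Microsoft's code or API) of what honoring the change might look like on the client side: given a face-detection result, an application keeps the remaining attributes and drops the retired emotion and identity fields. All field names below are illustrative assumptions, loosely modeled on Azure Face attribute names.

```python
# Hypothetical sketch: stripping retired attributes from a face-detection
# result. Field names are illustrative, loosely modeled on Azure Face
# attribute names; this is NOT Microsoft's actual API or code.

# Attribute categories Microsoft is retiring from the Azure Face service.
RETIRED_ATTRIBUTES = {
    "emotion", "gender", "age", "smile", "facialHair", "hair", "makeup",
}

def strip_retired(detection: dict) -> dict:
    """Return a copy of a detection result without retired attributes."""
    kept = {
        key: value
        for key, value in detection.get("faceAttributes", {}).items()
        if key not in RETIRED_ATTRIBUTES
    }
    return {**detection, "faceAttributes": kept}

# Example with made-up data:
result = {
    "faceRectangle": {"top": 10, "left": 20, "width": 100, "height": 100},
    "faceAttributes": {
        "blur": {"blurLevel": "low"},
        "glasses": "NoGlasses",
        "emotion": {"happiness": 0.9},  # retired category
        "age": 31.0,                    # retired category
    },
}
cleaned = strip_retired(result)
print(sorted(cleaned["faceAttributes"]))  # ['blur', 'glasses']
```

The sketch illustrates the shape of the policy change only: geometric and quality signals remain, while inferences about inner states or identity are removed before any downstream use.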

Experts inside and outside Microsoft have highlighted the lack of scientific consensus on the definition of "emotion" and the heightened privacy concerns surrounding this capability. Microsoft therefore also decided that it needed to carefully analyze all AI systems that purport to infer people's emotional states, whether they use facial analysis or any other AI technology.

▲ Picture from: Microsoft

It is worth mentioning that Microsoft is not the only company taking a hard look at facial recognition. IBM CEO Arvind Krishna also wrote to the US Congress announcing that his company was exiting the facial recognition business. Both companies' decisions are closely tied to the death of George Floyd, which drew widespread public attention to the technology's risks.

▲ Picture from: BBC

The worry is that this technology could hand law enforcement agencies surveillance tools that lead to human rights violations and leaks of citizens' private data, at a time when US legislation in this area remains incomplete.

These companies that hold the technology have therefore decided to start by restraining themselves, so that the technology is not abused and citizens' privacy and human rights are better protected. When the use of a technology is not yet constrained by sound regulation, constraining the development of the technology itself may be the better choice.
