Facebook Plans to Shut Down Its Facial Recognition System

These discoveries led to congressional hearings and regulatory inquiries. Last week, CEO Mark Zuckerberg renamed Facebook’s parent company Meta and said it would shift resources toward building products for the next frontier online, a digital world known as the metaverse.

The change affects more than a third of daily Facebook users who have facial recognition turned on, according to the company. Those users receive alerts when new photos or videos of them are uploaded to the social network. The feature has also been used to flag accounts that may be impersonating someone and has been incorporated into software that describes images to blind users.

“Making this change requires us to assess where facial recognition could be useful in the face of growing concerns about the use of this technology as a whole,” said Jason Gross, a spokesperson for Meta.

Although Facebook plans to delete more than a billion facial recognition templates, the digital scans of facial features that the system stores, by December, it will not eliminate DeepFace, the advanced algorithm that powers the system. Mr. Grosse said the company had also not ruled out incorporating facial recognition technology into future products.

Even so, privacy advocates applauded the decision.

“Facebook’s exit from facial recognition is a pivotal moment in growing national dismay with this technology,” said Adam Schwartz, a senior attorney at the Electronic Frontier Foundation, a civil liberties organization. “Companies’ use of facial surveillance is extremely dangerous to people’s privacy.”

Facebook isn’t the first big tech company to back away from facial recognition software. Amazon, Microsoft and IBM have paused or stopped selling facial recognition products to law enforcement in recent years, while expressing concerns about privacy and algorithmic bias and calling for clearer regulation.

Facebook’s facial recognition software has a long and expensive history. When the software was introduced in Europe in 2011, data protection authorities there said the move was illegal and that the company needed approval to analyze a person’s photos and extract the unique pattern of an individual’s face. In 2015, the technology also led to a class action lawsuit in Illinois.
