Here’s what facial recognition regulation should look like, according to Microsoft
In a speech at the Brookings Institution on Thursday afternoon, Microsoft President Brad Smith took the next step in his ongoing advocacy of federal legislation around the proper use of facial recognition technologies: He revealed what he thinks that legislation should look like.
In his remarks and a concurrent blog post, Smith identified three major risks associated with the technology that Microsoft believes should be mitigated through legislation. Broadly, he said, these include the risk of bias and discrimination, the risk of intrusion into personal privacy and the risk of mass surveillance by government.
Microsoft believes that each of these risk areas is ripe for new laws. For example, in the area of bias and discrimination, Smith argued that a law requiring that companies be transparent about the capabilities and limitations of their systems, as well as a law requiring third-party system testing, could go a long way.
“As a society, we need legislation that will put impartial testing groups like Consumer Reports and their counterparts in a position where they can test facial recognition services for accuracy and unfair bias in a transparent and even-handed manner,” Smith wrote in the blog post accompanying his remarks at Brookings.
This suggestion makes sense in light of a recent National Institute of Standards and Technology study, which found that while facial recognition algorithms across the industry have improved considerably in the past five years, not all are equally accurate. “There remains a very wide spread of capability across the industry,” NIST computer scientist Patrick Grother said in a statement about the report. “This implies you need to properly consider accuracy when you’re selecting new-generation software.”
It bears mentioning that Microsoft’s algorithms did very well in NIST’s study.
Microsoft’s interest in supporting legislation stands in contrast to some competitors in the market who think these risks can be mitigated by the free market alone. When Smith published his first blog post calling for regulation in July, some speculated that this stance belied a company that is behind in the market and wants help catching up. This isn’t the case, Smith said — but don’t interpret this to mean that Microsoft’s interest in regulation is entirely altruistic.
The company doesn’t want to find itself competing in a “race to the bottom” in which tech companies are “forced to choose between social responsibility and market success,” Smith wrote in the blog post.
“We believe that the only way to protect against this race to the bottom is to build a floor of responsibility that supports healthy market competition. And a solid floor requires that we ensure that this technology, and the organizations that develop and use it, are governed by the rule of law.”
In addition to his suggestions for federal legislation, Smith introduced Microsoft’s six new “principles” of facial recognition. “We need not wait for the government in order to act responsibly,” he said.
The six principles Microsoft will follow are fairness, transparency, accountability, non-discrimination, notice and consent, and lawful surveillance. The company will elaborate more on these principles in the coming days, Smith said.
On both counts — corporate responsibility and government responsibility — Smith said the time to start acting is now.
“The facial recognition genie is just starting to leave the bottle,” he said. “The world needs to have confidence that this technology will be used well.”