Forum

Google's blunder with AI ethics

Editorials featured in the Forum section are solely the opinions of their individual authors.

We’ve seen it all on the big screen. HAL from 2001: A Space Odyssey. Skynet from The Terminator. Ultron from Avengers: Age of Ultron. With such intense fear of artificial intelligence (AI) taking over the world and initiating the apocalypse, you would think we would take more precautions and give more consideration to the ethics of AI in industry today.

Yet researcher Margaret Mitchell was fired from Google Brain, Google's AI lab, where she co-led a group focused on ethical approaches to artificial intelligence. In a statement, a Google spokesperson said Mitchell had shared “confidential business-sensitive documents and private data of other employees” outside the company.

Just two months earlier, in December, the group’s other co-leader, Timnit Gebru, was also fired. Gebru says she was fired for refusing to remove her name from a research paper that cautioned against the use of artificial intelligence that processes text, including technology used in the Google search engine. According to a Wired article, a source familiar with Mitchell’s suspension said she had been using a script to search her email for material related to Gebru’s time at the company.

Gebru, Mitchell, and their ethical AI team were vital contributors to researching and mitigating the potential downsides of AI. They contributed to decisions that limit some of Google’s AI products, such as retiring a feature of an image recognition program that identified the gender of people in photos. Although Google’s AI research boss, Jeff Dean, attributed Gebru’s departure to the poor quality of her research paper, researchers inside and outside of Google have disputed this.

The firing of these head ethicists sets a poor precedent. It is a warning sign that the safeguards against unethical artificial intelligence are eroding.

In nearly every other avenue of research and practice, ethical regulations exist. Doctors must protect patient privacy under HIPAA and are overseen by medical boards. Attorneys are bound by a code of conduct and attorney-client privilege, at the risk of losing their license. Pharmaceutical trials involving human subjects must be approved by the Food and Drug Administration and are heavily scrutinized. Yet artificial intelligence research, which often bleeds into the fabric of society, relies solely on internal self-regulation, making the recent turn of events especially concerning.

The responsibility, in effect, falls on these tech giants. Yet it gets worse. Internal reviewers at Google had even demanded that at least three papers on AI be modified to refrain from casting Google technology in a negative light, Reuters reported. According to an email reviewed by Reuters, the edits included “negative-to-neutral” swaps such as changing the word “concerns” to “considerations” and “dangers” to “risks.”

Prioritizing brand image over the safety of consumers is neither morally nor economically viable. When creating a product that has the potential to greatly affect someone’s life, it is the creator’s responsibility to address its biases and flaws. Just as a manufacturer is morally obligated to recall damaged and harmful products, so should the creators of artificially intelligent services be obligated to fix theirs.

When AI research is commercialized, this problem of bias in machine learning and artificial intelligence is allowed to snowball and grow worse as the technology spreads to other critical areas like medicine and law, and as more people without a deep technical understanding utilize it.

Given its influence and broad reach, Google especially has no excuse not to take the ethics of AI seriously. Have we forgotten what happens when we allow bias in AI to remain? A 2020 study reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft and found that they had higher error rates when transcribing Black people's voices than white people's. In 2018, Amazon had to retire an artificial intelligence recruiting tool that was biased against women. When biases are left unchecked, they wreak havoc on the world around us, which is why stripping away the few efforts to keep bias in check is chaos in the making.

With great power comes great responsibility. Google, along with every other major tech company, must be vigilant in its use of artificial intelligence capable of shaping our everyday lives. Firing the very people hired to prevent the apocalypse for the sake of maintaining brand reputation is not worth the risk. Industries must become comfortable with finding problems and uncovering bias in today’s AI. Even our own academic institution has recognized the importance of teaching ethics. Here at Carnegie Mellon, ethics is discussed as early as the introductory Concepts of Artificial Intelligence course, and an ethics course is required for all AI majors. What is the point of teaching ethics to future scientists and researchers if our employers will fire us for doing our jobs?