
Leading Voice In AI Ethics Says Big Tech Does Not Prioritize Safety

Timnit Gebru, a leading AI ethics voice, says Big Tech needs to “slow down on AI” and “let researchers build the technology they know must be built.” AI is one of the next big things in technology. It is used daily by anyone with a phone or a computer, and its applications are expanding into climate change, medicine, and problems beyond human capacity to solve. But AI also has a dark side.

Gebru gained international recognition after her messy departure from Google, where she held a high-profile position co-leading the company's ethical AI team. She is known for her work on AI bias in facial recognition software and other applications. AI bias has been shown to affect decisions in areas like credit applications and even the judicial system. Facial recognition systems, for example, are sometimes used to predict the likelihood of an individual committing a crime.

Related: This Creepy New AI Is Way Too Good At Pretending To Be Human

During an interview with Wired, Gebru said that the incentives for developing AI technology cannot always be to “make more money for huge corporations that already have so much power,” or to get “money from the Department of Defense (DOD).” Gebru announced she will be opening an independent AI research institute on December 2, exactly one year after she “got fired from Google.” She questioned the retaliation, structure, and reward system that Big Tech uses to push its AI teams.

The goal of Gebru’s independent AI research institute is to give experts the time and resources to develop the technology the right way and for the right reasons, prioritizing safety over profit. Gebru described working at Google as “putting out fires”: AI ethics researchers should not be left to criticize AI technology after it has already been released, but should be given the time to present positive models and evaluate the technology before it is rolled out to the public.

“I don’t see that incentive in academia and I don’t see that incentive in industry,” Gebru told Wired. AI is mostly unregulated, with no set of laws governing its use. Gebru, having worked at Apple and Google, believes Big Tech will not voluntarily self-regulate and prioritize safety, because that would be asking companies to “lose billions of dollars.” The only way to “slow down” AI is to strengthen laws, including whistleblower protections, anti-discrimination laws, and worker protection laws, and to set legal standards for high-stakes AI scenarios, Gebru added. When asked whether the fight for a more ethical AI should be waged independently or from within a large company, Gebru said it should take place in both environments, from the inside and the outside.

Next: Scientists Team Up With AI To Develop Treatment For Childhood Brain Cancer

Source: Wired
