Google has also added a layer of scrutiny to research papers on sensitive subjects, including gender, race, and political ideology. A senior manager additionally advised researchers to "strike a positive tone" in a paper this summer. The news was first reported by Reuters.

"Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues," the policy reads. Three employees told Reuters that the policy took effect in June.

The company has also asked researchers to "refrain from casting its technology in a negative light" on several occasions, Reuters says.

Employees who worked on a paper about recommendation AI, which is used to personalize content on platforms like YouTube, were told to "take great care to strike a positive tone," according to Reuters. The authors later updated the paper to "remove all references to Google products."

Another paper, about using AI to understand foreign languages, "softened a reference to how the Google Translate product was making mistakes," Reuters wrote. The change came in response to a request from reviewers.

Google's standard review process is meant to ensure that researchers do not inadvertently reveal trade secrets. Employees who want to evaluate Google's own services for bias are asked to consult the legal, PR, and policy teams first. Other sensitive topics reportedly include China, the oil and gas industry, location data, religion, and Israel.

AI ethicist Timnit Gebru says she was fired over an email she sent to Google Brain Women and Allies, an internal group for Google AI research employees. In it, she described Google managers pressuring her to retract a paper on the risks of large language models. Jeff Dean, Google's head of AI, said the paper had been submitted too close to its deadline. But Gebru's colleagues pushed back on this claim, arguing that the policy was applied "unevenly and discriminatorily."

Gebru notified Google's PR and policy team about the paper in September, according to The Washington Post. She knew the company might take issue with certain aspects of the research, since Google uses large language models in its search engine. But the deadline for making changes to the paper was not until the end of January 2021, leaving researchers ample time to respond to any concerns.

A week before Thanksgiving, however, Megan Kacholia, a VP at Google Research, asked Gebru to retract the paper. Gebru was fired the following month.
