Ex-Google safety lead calls for AI algorithm transparency, warns of ‘serious consequences for humanity’

SmartNews safety lead Arjun Narayan stressed the importance of algorithm transparency and said human oversight is needed in AI-generated news and media.

SmartNews' Head of Global Trust and Safety is calling for new regulation of artificial intelligence (AI) that prioritizes user transparency and ensures human oversight remains a crucial component of news and social media recommender systems.

"We need to have guardrails," Arjun Narayan said. "Without humans thinking through everything that could go wrong, like bias creeping into the models or large language models falling into the wrong hands, there can be very serious consequences for humanity."

Narayan, who previously worked on Trust and Safety for Google and ByteDance, the company behind TikTok, said it is essential for companies to offer clear opt-ins and opt-outs when using large language models (LLMs). By default, anything fed to an LLM is assumed to be training data and is collected by the model. According to Narayan, many companies, especially newer enterprises, should opt out to avoid leaks and keep their information confidential.
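
In practice, that opt-out often comes down to what leaves the company's systems in the first place. The sketch below is a hypothetical client-side guard, not any vendor's actual API; the redaction patterns and the send_to_llm placeholder are illustrative assumptions. It simply strips confidential material from a prompt before it is sent, regardless of the provider's data-retention settings.

```python
import re

# Hypothetical patterns for confidential material; a real deployment would
# maintain these according to company policy.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"project\s+\w+", re.IGNORECASE),  # internal code names
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like identifiers
]

def redact(prompt: str) -> str:
    # Strip confidential spans before the text ever leaves the company.
    for pattern in CONFIDENTIAL_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

def safe_query(prompt: str, send_to_llm) -> str:
    # send_to_llm stands in for whatever LLM client the company actually uses.
    return send_to_llm(redact(prompt))

# Demo with a stub in place of a real model call.
print(safe_query("Summarize the launch plan for Project Falcon (ref 123-45-6789).",
                 send_to_llm=lambda p: p))
```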

Additionally, Narayan said regulators should push for transparency about how an algorithm is trained. If an algorithm is trained on SmartNews data or data from another company, that company needs to know whether that is happening and whether it will be compensated.

"You don't want the whole world to know what's your next product launch," he said.

As new applications for AI continue to develop, Narayan highlighted the need for increased media and consumer literacy. In the last few months, several countries, including China and Kuwait, have released AI news anchors that use natural language processing and deep learning to create realistic speech and movements to convey the news to their audience.

Narayan warned that scammers and other bad actors could exploit that technology to catfish people and gain access to personal information. Such instances highlight the blurring line between AI-generated and human-generated media.

"This is kind of a fascinating space where, as long as the user knows they're making, who they're talking to and what's real, what's not real, I think that's okay. But if the user doesn't know, then we have a real problem on our hands," Narayan said.

He also spoke to several other use cases for AI in media and news. For example, a journalist could use AI for investigations or for extracting insights from existing datasets. Additionally, some companies have tasked AI with partial content writing, in which the model produces a rough draft that a human then edits and publishes.

Other companies, like SmartNews, use AI to curate and pick the stories that matter to each user based on that user's unique signals and readership categories. Narayan said this recommendation process often relies on an amalgamation of different algorithms to achieve "hyper-personalization."

In extreme cases on social media, these AI-curated filter bubbles lock onto the user's existing interests. Narayan used soda as an example. Say a user likes soda: the AI keeps serving the user soda, regardless of whether that is good for them. If the user keeps drinking the soda, the recommender algorithms keep pushing the product. Narayan also noted that the long-term psychological implications of this type of model are unclear.
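
The feedback loop Narayan describes can be shown with a toy simulation. The sketch below is a hypothetical, deliberately naive recommender, not SmartNews' actual system: every click on a topic increases that topic's weight, so future recommendations narrow toward whatever the user already consumes.

```python
import random

# Toy simulation of a naive interest-reinforcing recommender.
# Topics and weights are invented for illustration.
topics = ["soda", "sports", "politics", "science", "travel"]
weights = {t: 1.0 for t in topics}  # start with no preference

def recommend():
    # Sample a topic in proportion to its learned weight.
    total = sum(weights.values())
    return random.choices(topics, [weights[t] / total for t in topics])[0]

def record_click(topic):
    # Each click reinforces the clicked topic, narrowing future picks.
    weights[topic] *= 1.5

# Simulate a user who only ever clicks on "soda" stories.
for step in range(50):
    shown = recommend()
    if shown == "soda":
        record_click(shown)

share = weights["soda"] / sum(weights.values())
print(f"Share of future recommendations about soda: {share:.0%}")
```

Because nothing in this toy objective rewards variety, the loop converges on the user's existing taste, which is the filter-bubble effect Narayan cautions about.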

Narayan has spent years ensuring reputation and user safety are top priorities within the various organizations for which he has worked. A small subset of that responsibility includes evaluating fairness, accountability and bias. He stressed that humans set the AI model rules. If a company wants high-emotion stories for engagement, or high-crime stories, the system will pick just that.

"In some cases, obviously, the AI will optimize for engagement, but that also means that it will skew to certain types of stories," Narayan said. "It'll set a certain tone for your platform. And I think a question worth asking is, ‘Is that is that the right tone for your media outlet? Is that the right tone for your platform?’ I don't have an answer, but this is something where companies, editorial boards need to take ownership."

Regarding "robot reporters," or editors using AI to create content entirely, Narayan said that train has already left the station. A quick Google search finds that Amazon Books and online news platforms already have hundreds of pieces and novels written exclusively by AI.

At SmartNews, Narayan said, AI is not used for fact-checking, not because the company does not trust AI, but because the technology is not accurate enough at this point. He added that a human element is needed to proofread a report for accuracy and to make sure all the boxes related to journalistic values, diversity and political bias are checked.

"We have seen other platforms that claim to be completely digital and AI-driven, and that, as it may be, I don't think society is ready for that," Narayan said. "AI is prone to errors. We still believe in human ingenuity, creativity and human oversight."

While AI can sometimes be prone to quantitative errors, Narayan said the technology is largely accurate. In his experience, AI has been extremely useful for conducting research, extracting facts and evaluating large datasets. However, because AI models are often trained on data that is several years old, they cannot answer questions about recent or current events. That latency makes the models great for evergreen topics but poorly suited to breaking stories, such as a recent mass shooting.

Narayan believes there should be certain ground rules for AI, such as a system that prioritizes user transparency so readers can judge the material fairly. If AI fully wrote an article, there should be a way for the publisher to indicate that, so the information can be passed on to the consumer.

"Either it's a digital watermark or a technology that cannot be misused," he said. "And so that way, at least I know that this content was AI-generated or a student wrote an essay using ChatGPT and it has this watermark. So, it needs a professor who knows how to grade it and I think those things we definitely should insist on or at least push for in terms of regulation."
