Microsoft working on 'trustworthy AI' to curb fake news
Redmond: Alarmed by the massive spread of fake news, tech giant Microsoft is working round-the-clock on building "trusted" algorithms to curb false news on its platforms such as the Bing search engine and the professional networking website LinkedIn.
According to David Heiner, Strategic Policy Advisor at Microsoft, the problems Facebook and YouTube are facing with fake news today have alerted tech companies the world over, and Microsoft is on the job, building Artificial Intelligence (AI)-driven systems to fight back.
"We are already working on a couple of such AI-powered initiatives for Bing and LinkedIn. We are also trying to forge tie-ups with trusted news sources, indicate to users the source of the news and let them make their own decisions," Heiner told IANS at the company's sprawling, 500-acre campus here.
The main challenges, according to him, are concerns regarding censorship and defining what constitutes fake news.
"A very high percentage of people get news from Facebook and (Google-owned) YouTube, and both these major platforms are having trouble handling fake news.
"At the same time, we have to draw a line at giving too much power to tech companies to decide what is presented to users, which often leads to utterly fake news that is injurious to democracy and civil society," the senior company executive noted.
The need of the hour is to build "trustworthy AI" that is fair and does not discriminate on the basis of religion, caste or colour.
"The whole idea is to build applications around AI in a trustworthy way. People will not share data with systems they do not trust, and they must not be expected to. With respect to users' privacy, we need trusted AI systems that are safe and transparent," Heiner explained.
There is a notion that a handful of big US tech companies, or Chinese ones like Tencent Holdings and Alibaba, will ultimately control AI technology.
"Our vision is very different. We want AI for humanity, for all. AI is the new fundamental technology in terms of creating a huge impact on society," the executive emphasised.
There are six core concepts to achieving "trustworthy AI".
AI-based systems need to treat everyone fairly, be safe, protect privacy and security, include everyone, and be transparent, as algorithms can be opaque at times.
"Lastly, people who are deploying AI systems -- be it at Microsoft or at other companies -- must be held accountable for the trustworthiness of their AI systems," Heiner explained.
Microsoft is infusing AI into a whole range of applications -- Office 365, Azure Cloud, cognitive services and translation solutions, etc.
"The challenge is to ensure how data is being collected and fed into an AI system. This demands utmost fairness and responsibility, and we are very serious about it," he added.
AI promises new capabilities but it has also raised questions about potential disruptions to the nature and distribution of jobs.
"It is really hard to predict the future on whether AI will kill jobs. Technology has always eliminated certain jobs and created new ones. We don't know yet whether this will hold true for AI.
"But one thing is sure. Technology overall, irrespective of AI, will have some impact on jobs, as automation has in the recent past," Heiner said.
What will not be transferable to AI, perhaps, are distinct human qualities like judgment, emotions and empathy.
Microsoft believes that AI systems need to be built in such a way that they augment human capabilities.
The company has announced "AI for Accessibility" -- a new $25 million, five-year programme for developers globally including in India.
"AI is well suited to us as we build software to improve lives. The technology can do wonders in voice, speech recognition and translation solutions. Aided by AI, humans can reduce errors to a great extent -- to almost negligible levels -- be it in healthcare or any other field," Heiner noted.
(Nishant Arora was at Redmond on an invitation from Microsoft. He can be contacted at firstname.lastname@example.org)