
Risks of destruction through AI


The latest summit of the Global Partnership on Artificial Intelligence (GPAI), which was launched in 2020 with the aim of fostering global cooperation in the field of Artificial Intelligence (AI), was held in New Delhi on December 12, 13 and 14. The summit, hosted by India, reaffirmed the commitment to ensure the responsible use of AI. The initiative was born out of the conviction that international understanding and cooperation are essential when AI threatens everything from national security to individual privacy. Recent events have confirmed those apprehensions. AI, which is capable of generating new content by processing vast amounts of data, can be put to use for the benefit of humanity; it is already being used to discover new antibiotic drugs, among other applications. However, AI is also capable of causing serious harm. It has already become a threat to privacy. In espionage that transcends international boundaries, AI is the most dangerous newcomer. The Pegasus spyware, which hacked the phones of opposition leaders and journalists, including in India, is capable of subverting democracy with the connivance of the ruling dispensation; other AI systems that spy without a government's knowledge could destroy national security itself. AI, with its deadly potential, arrives at a time when individual privacy has already been largely lost. Another aspect is the ability to manipulate data. Not only can AI spread falsehood in such a way that it becomes impossible to distinguish truth from lies, it also gives even children the ability to create and spread fake news and videos worldwide. Deepfake videos have already shown how terrifying the possibilities are.

AI creates a situation where there is unlimited information, yet a total absence of values governing its use. The structure and workings of the AI models created by various companies are, to some extent, opaque and mysterious. Copyright infringement, harm to human dignity and privacy, and counterfeiting are all causes for concern. As things stand, there are no AI regulations to control or properly guide these companies. Although no one knows where and how the COVID-19 virus originated, everyone is aware of its impact and scope. In the case of AI too, the lack of transparency is a major threat. No one knows how many artificial destroyers are lurking around like unknown enemies, ready to leap over the horizon and attack. That is why some scientists warn that AI could lead to extinction, just as a pandemic, nuclear war or climate change might. This means there is an urgent need to build a common understanding and consensus at the global level. Democratic systems must have policies that provide the necessary oversight. There should be laws to curb dangerous research initiatives, and it should be ensured that AI is used only for beneficial purposes.

At the summit, Prime Minister Narendra Modi suggested that it would be more practical to combat dangerous AI models if they were classified as extreme or moderate. While this is sound, his own government, unfortunately, showed no such caution with Pegasus, which proved too brutal to need any branding or labelling as extremely dangerous. Apple had warned iPhone users about the Pegasus spyware. It was followed by an investigative report published by the New York Times, which revealed that the Indian government had purchased Pegasus from Israel as part of a weapons and intelligence deal in 2017. While other democratic countries conducted transparent investigations into similar revelations, the Modi government concealed the information by maintaining silence. The country still does not know the extent of that surveillance operation. Another clear sign of danger is the report that Israel has been using AI-controlled weapons in the current Gaza offensive. It is believed that such widespread and deadly destruction was possible in a relatively short time because AI was used to determine the targets for bombing. Yet almost all of those killed are civilians. The question remains as to what kind of understanding and regulations can be formed at the global level when countries themselves use AI for espionage and warfare.
