Why Russia’s Approach to Artificial Intelligence May Save Civilization


The rapid development of artificial intelligence (AI) has raised security, ethical and civilizational issues. Professor of the Russian Academy of Sciences (RAS) Konstantin Vorontsov has shared his vision of the future of the technology.

Artificial intelligence (AI), in its broadest sense, is the ability of a computer to perform human-like tasks that require analyzing, generalizing, learning from past experience and decision-making.

"The progress of the last decade in computing technology, neural network training algorithms and the accumulation of big data has allowed us to solve increasingly difficult problems: processing and recognition of images, speech signals and natural language texts," said Konstantin Vorontsov, a doctor of physical and mathematical sciences and professor at the Department of Mathematical Methods of Forecasting, Faculty of Computational Mathematics and Cybernetics (CMC) at Moscow State University.

"So far, nothing holistic has been created that we could call artificial intelligence. This is just a set of separate technologies and tools for solving various problems of prediction and automation of decision-making," the scientist stressed.

Vorontsov acknowledged that the development of AI is advancing by leaps and bounds, pointing to the latest breakthrough in the field of "large language models" (LLMs): ChatGPT.

In computer science, an LLM is described as a deep learning algorithm that can perform a variety of natural language processing (NLP) tasks. NLP is the ability of a computer to "understand" human language, both spoken and written.

"The breakthrough is so revolutionary that in late March, researchers behind the latest version of GPT-4 said they had seen 'glimpses of general artificial intelligence' for the first time," Vorontsov said.
"The model, trained on terabytes of texts, acquired abilities it had never been taught, which were numerous, non-trivial and unexpected."

Dangerous AI: Chatbots Could Be Biased, Spread Fakes & Shatter Trust

GPT-4 is said to be able to "generate, edit, and iterate with users on creative and technical writing tasks, such as composing songs, writing screenplays, or learning a user's writing style," according to OpenAI, a US-based artificial intelligence research organization.

The scientist argued that the size of the model and its parameters are still no match for the volume of the human brain. "It doesn't seem to be intelligent yet, although it's already close to it," he remarked. Still, the danger is that the model's ability to generate fully meaningful text misleads people.

"It seems to us that the chatbot can think; that it is smart and knows a lot, that it has 'character' and 'personality', that it makes decisions. This is all untrue. (…) The chatbot does not make any decisions except for what word to generate next. It cannot 'think'," he said.

Having to some extent "humanized" the AI-powered software, some people are inclined to shift their responsibility onto it, taking what it generates at face value without double-checking. The crux of the matter is that the machine is not only error-prone, but has already "learnt" to generate false information and fake news from articles promoting distorted information in the media sphere.

"While generating text, AI can distort and mix facts in bizarre ways," the scientist said.
"When this was first discovered, Google developers became afraid that a [neural] network could be used to generate 'an ocean of lies,' in which a drop of truth would finally be lost."

In addition, the neural network is by no means "neutral": it can be "politically biased" and propagate an agenda, depending on which materials it has "learnt" from the most, according to the scientist.

Vorontsov cited a study alleging that ChatGPT typically exhibits the "political views" of an average software engineer from the San Francisco Bay Area, who is typically left-leaning or liberal, green party-aligned and an environmentalist.

"There is no guarantee that the model will provide a full range of opinions on any issue," the expert noted. "So far, large language models do not have the skills to quote verbatim, insert links to primary sources, or assess the reliability of sources."

A study titled "More human than human: measuring ChatGPT political bias," released in August 2023, argued that despite the chatbot assuring it is impartial, "political bias in LLMs can have adverse political and electoral consequences similar to bias from traditional and social media." The authors of the study claimed they had found robust evidence that ChatGPT presents a "significant and systematic political bias" toward the Democratic Party in the US, President Lula da Silva in Brazil, and the Labour Party in the UK.

"These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media," the researchers highlighted.
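The professor's point that the model's only "decision" is which word to generate next can be illustrated with a toy sketch. The bigram table and words below are invented for illustration; a real LLM learns its next-word statistics from terabytes of text, but the generation loop works the same way — one token at a time:

```python
# Toy "language model": for each word, a table of candidate next words
# with observed counts. This stands in for the billions of learned
# parameters of a real LLM; the kind of decision made is identical.
BIGRAMS = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 1},
}

def next_word(word, table=BIGRAMS):
    """Pick the most likely next word; None if the word is unknown."""
    options = table.get(word)
    if not options:
        return None
    return max(options, key=options.get)

def generate(start, max_len=5):
    """Repeatedly choose the next word -- the model's only 'decision'."""
    words = [start]
    while len(words) < max_len:
        nxt = next_word(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # the cat sat down
```

The text can look coherent even though nothing in the loop checks facts or "thinks" — each step only ranks candidate continuations.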

Russia Offers Responsible Use of Technologies to Preserve Civilization

Vorontsov argued that when creating something new — be it technology, business or laws — that can affect many people or shape the future of humanity, the question must be asked: “What am I doing this for?”

"A new civilizational thinking is needed to replace the purely technocratic and individualistic one," the scientist insisted. "Having created a bunch of existentially dangerous technologies, we have taken on a huge burden of responsibility for preserving the unique result of billions of years of evolution."

The Russian government attaches utmost importance to maintaining technological sovereignty in the field of AI, especially given emerging evidence that globally accessible LLM tools could be misused. Last November, Russian President Vladimir Putin took part in the plenary session of Artificial Intelligence Journey 2023, an international conference on AI and machine learning. The president warned that the West's AI monopoly is "dangerous," calling for the creation of "reliable, transparent and safe artificial intelligence systems for humans."

"Russia is now one of a few… countries which have their own technologies of generative and artificial intelligence and large language models," Putin said. "We need to boost our competitive advantage, create new markets based on such technologies, and a whole constellation of products and services."

Russia can offer a new approach to developing new and improving existing AI models, stemming from the nation's technological expertise and traditional values. Vorontsov stressed that AI developers need to adopt a responsible "civilizational approach" to make their products safe and reliable. To illustrate his point, the expert referred to ChatGPT's flaws and named ways to stamp them out.

"First, a chatbot should not fuel human conflicts, hence the principle of a neutral position," the professor explained. "If we are talking about some social conflict, the chatbot should show the reasonable positions of all parties. Otherwise, it becomes a tool for exacerbating the conflict… Language models need to be taught to identify conflicting or polarized opinions. We can go further and set the task of depolarizing public opinion and destroying 'information bubbles'."

"Second, a chatbot should not misguide people. If millions of people use it, then misconceptions can become widespread. And this is an anti-civilization phenomenon," Vorontsov warned.
"The [AI] model should be able to quote, check facts, determine the reliability of sources, and identify fakes and deception. Research in these areas has been ongoing for many years. Merging it with large language models seems inevitable."

The issue of responsible use of technology — engaging with it safely, respectfully and ethically — remains high on the global agenda. Given the growing impact of AI technologies on humanity, measures need to be taken to preserve existing civilizational values, according to Vorontsov.

"I will venture to formulate the law of the preservation of civilization," the scientist said. "The probability of self-destruction of a civilization is proportional to the product of three quantities: the amount of generated energy, the number of existentially dangerous technologies and the number of people willing to do anything for the sake of personal power and dominance."
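Vorontsov's proposed "law" can be written compactly as a proportionality. The symbols below are labels of my own choosing for the three quantities he names, not notation from the interview:

```latex
% P: probability of a civilization's self-destruction
% E: amount of generated energy
% T: number of existentially dangerous technologies
% N: number of people willing to do anything for power and dominance
P \propto E \cdot T \cdot N
```

On this formulation, reducing any one factor — energy, dangerous technologies, or power-seeking actors — lowers the overall risk multiplicatively.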

