AI Companies Agree to Kill Switch Policy

At the artificial intelligence summit held last week in Seoul, companies operating in the AI industry agreed on the application of a so-called kill switch policy, under which the development of advanced AI models would be halted if certain risk thresholds are exceeded.
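The summit did not publish a formal specification of the mechanism, but its basic logic can be illustrated with a minimal sketch. Everything below is a hypothetical assumption for illustration only: the risk categories, scores, and threshold values are invented, not part of any agreed framework.

# Hypothetical sketch of a threshold-based "kill switch" gate.
# All category names and threshold values are illustrative assumptions.
RISK_THRESHOLDS = {
    "cyber_offense": 0.7,   # assumed maximum tolerable risk score
    "bio_misuse": 0.5,
    "model_autonomy": 0.6,
}

def evaluate_model(risk_scores):
    """Return True if development may continue, False if it must halt."""
    for category, score in risk_scores.items():
        threshold = RISK_THRESHOLDS.get(category)
        if threshold is not None and score >= threshold:
            print(f"Kill switch triggered: {category} = {score} (limit {threshold})")
            return False
    return True

# Example: an assessed bio-misuse score crosses the assumed line.
if not evaluate_model({"cyber_offense": 0.2, "bio_misuse": 0.55}):
    print("Halting further development pending review.")

In practice, each signatory would define its own categories and thresholds, which is precisely where the experts quoted below see the weakness.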

The decision has provoked a lively debate about the future of artificial intelligence, with particular attention to how the agreement reached at the Seoul summit will affect the commercial side of the AI industry. Some experts argue that the security policy agreed in the South Korean capital is impractical and of questionable effectiveness. They also warn that the mechanism could dampen innovation, competition, and global economic growth.

The commitments were made in Seoul by companies including Microsoft, Amazon, and OpenAI, alongside firms from the United States, China, Canada, the United Kingdom, France, South Korea, and the United Arab Emirates.

The companies that signed the Seoul agreement, which calls for halting the development of an artificial intelligence model when significant risks are detected, describe it as a necessary safeguard against the dangers of uncontrolled AI progress. They also present it as a responsible step toward the safe and ethical development of a technology that could revolutionize many sectors, including financial services, healthcare, and transportation.

Camden Swita, head of AI and ML innovation at New Relic, calls the term "kill switch" misleading: it suggests the firms have agreed to stop research and development of certain AI models once risk boundaries for humanity are crossed. In his view, the agreement is not a decisive step but merely a soft commitment to observe a set of ethical standards while developing AI systems. Swita notes that technology companies have concluded similar agreements before, so the outcome of the Seoul summit is nothing fundamentally new.

Doubts also surround the policy's practicality. Vaclav Vincalek, virtual CTO and founder of 555vCTO.com, points out that the mechanism agreed in Seoul presumes that every company in the AI industry clearly understands what constitutes a risk and how its models relate to that concept. Firms would also have to report on their compliance and on every occasion the restriction algorithm was triggered. Even with government regulations and legally binding safeguards in place, Vincalek suspects that some companies would simply raise their thresholds as their AI models approached the risk line.
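Continuing the hypothetical sketch above, the loophole Vincalek describes is easy to express in code: when the developer itself owns the threshold, compliance can be preserved simply by moving the line.

# The same model that previously triggered the gate now "passes"
# once the self-administered threshold is quietly raised.
RISK_THRESHOLDS["bio_misuse"] = 0.8          # raised above the observed score
assert evaluate_model({"bio_misuse": 0.55})  # no halt, no alarm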

Skepticism about the regulatory approach agreed at the Seoul summit also stems from doubts about its effectiveness. Swita argues that a voluntary mechanism will not deliver the results of mandatory or strictly enforced measures; the kill switch will be only as effective as each stakeholder allows it to be. The core problem is that a company could decide, for whatever reason, not to apply the agreed mechanism and bear no responsibility for it, because no enforcement measures are provided at all. The absence of a compulsory component undermines the effectiveness of virtually any mechanism at the conceptual level. In some contexts such freedom of action is reasonable and fair, but in the context of security it can itself create threats.

Artificial intelligence has significant potential and, according to some experts, may eventually surpass human cognitive abilities. Precautions are therefore essential, since there is currently no way to know how a hypothetical superintelligence would act. At the same time, this prospect should not be framed in alarmist terms or treated as proof that AI is excessively dangerous.

There are also serious doubts about governments' ability to properly oversee research projects in the AI industry. Swita says that even if state agencies adopt strict regulations, they are unlikely to act quickly enough, or with sufficient expertise, to fully monitor every research project. The sheer pace of AI development supports this assumption: the control and enforcement mechanisms available to governments may prove useless against certain machine intelligence systems.

Adnan Masood, chief AI architect at UST, highlights significant limitations in relying on a kill switch alone. Defining the trigger criteria is a complex and subjective exercise, he says: the mechanism offers no clear algorithm for what counts as an unacceptable risk, or for who should make the decision when threats are detected.

Mehdi Esmail, co-founder and chief product officer at ValidMind, points to the self-regulation problems that companies in the artificial intelligence industry increasingly face. The policy agreed in Seoul is a step in the right direction, he says, but that same inability to self-regulate may mean the restrictive mechanism goes unused at the critical moment.

Asked by media representatives whether an artificial general intelligence could circumvent risk-prevention measures, Swita said he is far more interested in human responsibility: what people might do to their own civilization and to the world as a whole. He stressed the importance of the decisions shareholders and individual governments are prepared to make when asked to curb AI research while simultaneously pursuing dominance in the industry. The risk is real that some companies will prove unable to give up profit growth even in the face of potential security risks.

The issue also has a geopolitical dimension. Artificial intelligence, as a powerful force of multifunctional, multi-purpose transformation, will soon become a key factor determining the potential of a state. Given this, some global political centers may be tempted to sacrifice security for a technological edge over other countries.

In the foreseeable future, the AI industry's main task will likely be to strike the right balance between security and innovation, an issue made ever more pressing by the rapid development of the field. The policy agreed in Seoul is formally a correct and constructive decision, but too many questions remain about how the mechanism will work in practice. Specifics are clearly needed: norms with an unambiguous interpretation that leave no room for divergent readings. The goal of such regulation is to prevent destructive uses of artificial intelligence while allowing AI to become a tool for the qualitative transformation of industries and a driver of economic growth.

As we reported earlier, OpenAI Creates Oversight Team.

Serhii Mikhailov

Serhii's track record of study and work spans six years at the Faculty of Philology and eight years in the media, during which he has developed a deep understanding of the industry and honed his writing skills. His areas of expertise include fintech, payments, cryptocurrency, and financial services, and he keeps a close eye on the latest developments and innovations in these fields, believing they will significantly shape the future direction of the economy as a whole.