
The AI Safety Debate Is All Wrong – Daron Acemoglu

Today’s anxiety about the risks posed by artificial intelligence reflects a tendency to anthropomorphize AI and causes us to focus on the wrong issues. Since any technology can be used for good or bad, what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subjected to.

Moor Studio/Getty

By Daron Acemoglu

BOSTON – A huge industry has emerged in recent years as China, the United States, the United Kingdom, and the European Union have made the safety of artificial intelligence a top priority. Obviously, any technology – from cars and pharmaceuticals to machine tools and lawnmowers – should be designed as safely as possible (one wishes that more scrutiny had been brought to bear on social media during its early days).

But simply raising safety concerns isn’t enough. In the case of AI, the debate is focused far too much on “safety against catastrophic risks due to AGI (Artificial General Intelligence),” meaning a superintelligence that can outperform all humans in most cognitive tasks. At issue is the question of “alignment”: whether AI models produce results that match their users’ and designers’ objectives and values – a topic that leads to various sci-fi scenarios in which a superintelligent AI emerges and destroys humanity. The best-selling author Brian Christian’s The Alignment Problem is focused mostly on AGI, and the same concerns have led Anthropic, one of the main companies in the field, to build models with their own “constitutions” enshrining ethical values and principles.

But there are at least two reasons why these approaches may be misguided. First, the current safety debate not only (unhelpfully) anthropomorphizes AI; it also leads us to focus on the wrong targets. Since any technology can be used for good or bad, what ultimately matters is who controls it, what their objectives are, and what kind of regulations they are subjected to.

No amount of safety research would have prevented a car from being used as a weapon at the white supremacist rally in Charlottesville, Virginia, in 2017. If we accept the premise that AI systems have their own personalities, we might conclude that our only option is to ensure that they have the right values and constitutions in the abstract. But the premise is false, and the proposed solution would fall far short.

To be sure, the counterargument is that if AGI were ever achieved, it really would matter whether the system was “aligned” with human objectives, because no guardrails would be left to contain the cunning of a superintelligence. But this claim brings us to the second problem with much of the AI safety discussion. Even if we are on the path to AGI (which seems highly unlikely), the most immediate danger would still be misuses of non-superintelligent AI by humans.

Suppose that there is some time (T) in the future (say 2040) when AGI will be invented, and that until this time arrives, AI systems that don’t have AGI will still be non-autonomous. (If they were to become self-acting before AGI, let that day be T.) Now consider the situation one year before T. By that point, AI systems will have become highly capable (by dint of being on the cusp of superintelligence), and the question that we would want to ask is: Who is in control right now?

The answer would of course be human agents, either individually or collectively in the form of a government, a consortium, or a corporation. To simplify the discussion, let me refer to the human agents in charge of AI at this point as Corporation X. This company (it could also be more than one company, which might be even worse, as we will see) would be able to use its AI capabilities for any purpose it wants. If it wanted to destroy democracy and enslave people, it could do so. The threat that so many commentators impute to AGI would already have arrived before AGI.

In fact, the situation would probably be worse than this description, because Corporation X could bring about a similar outcome even if its intention was not to destroy democracy. If its own objectives were not fully aligned with democracy (which is inevitable), democracy could suffer as an unintended consequence (as has been the case with social media).

For example, inequality exceeding some threshold may jeopardize the proper functioning of democracy; but that fact would not stop Corporation X from doing everything it could to enrich itself or its shareholders. Any guardrails built into its AI models to prevent malicious use would not matter, because Corporation X could still use its technology however it wants.

Likewise, if there were two companies, Corporation X and Corporation Y, that controlled highly capable AI models, either one of them, or both, could still pursue aims that are damaging to social cohesion, democracy, and human freedom. (And no, the argument that they would constrain each other is not convincing. If anything, their competition could make them even more ruthless.)

Thus, even if we get what most AI safety researchers want – proper alignment and constraints on AGI – we will not be safe. The implications of this conclusion should be obvious: We need much stronger institutions for reining in the tech companies, and much stronger forms of democratic and civic action to keep governments that control AI accountable. This challenge is quite separate and distinct from addressing biases in AI models or their alignment with human objectives.

Why, then, are we so fixated on the potential behavior of anthropomorphized AI? Some of it is hype, which helps the tech industry attract more talent and investment. The more that everyone is talking about how a superintelligent AI might act, the more the public will start to think that AGI is imminent. Retail and institutional investors will pour money into the next big thing, and tech executives who grew up on sci-fi depictions of superintelligent AI will get another free pass. We should start paying more attention to the more immediate risks.

______________________________________________________

Daron Acemoglu, Institute Professor of Economics at MIT, is a co-author (with James A. Robinson) of Why Nations Fail: The Origins of Power, Prosperity and Poverty (Profile, 2019) and a co-author (with Simon Johnson) of Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity (PublicAffairs, 2023). EnergiesNet.com does not necessarily share these views.

Editor’s Note: This article was originally published by Project Syndicate (PS) on August 5, 2024. Republication does not imply that EnergiesNet.com or Petroleumworld endorses or opposes the opinions expressed in this article.

The AI Safety Debate Is All Wrong by Daron Acemoglu – Project Syndicate (project-syndicate.org)

Use Notice: This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of issues of environmental and humanitarian significance. We believe this constitutes a ‘fair use’ of any such copyrighted material, as provided for in Section 107 of the US Copyright Law (Title 17 U.S.C. Section 107). For more information, go to: http://www.law.cornell.edu/uscode/17/107.shtml.

EnergiesNet.com 08 06 2024

