
ChatGPT Sounds Exactly Like Us. How Is That a Good Thing? – Stephen Mihm/Bloomberg

Alexa, are you in there? (Derek Berwin/Hulton Archive)

By Stephen Mihm

In 1950, Alan Turing, the British computer scientist who cracked the Enigma code during World War II, wrote an article in which he posed a seemingly absurd question: “Can machines think?”

The debut late last year of the eerily lifelike ChatGPT appeared to move us closer to an answer. Overnight, a fully formed silicon-based chatbot stepped from the digital shadows. It can craft jokes, write ad copy, debug computer code, and converse about anything and everything. This unsettling new reality is already being described as one of those “tipping points” in the history of artificial intelligence.

But it’s been a long time coming. And this particular creation has been gestating in computer science labs for decades.

As a test of his proposition for a thinking machine, Turing described an “imitation game,” where a human being would interrogate two respondents located in another room. One would be a flesh-and-blood human being, the other a computer. The interrogator would be tasked with figuring out which was which by posing questions via a “teleprinter.”

Turing imagined an intelligent computer answering questions with sufficient ease that the interrogator would fail to distinguish between man and machine. While he conceded that his generation’s computers couldn’t come close to passing the test, he predicted that by century’s end, “one will be able to speak of machines thinking without expecting to be contradicted.”

His essay helped launch research into artificial intelligence. But it also sparked a long-running philosophical debate, as Turing’s argument effectively sidelined the importance of human consciousness. If a machine could only parrot the appearance of thinking — but not have any awareness of doing so — was it really a thinking machine?

For many years, the practical challenge of building a machine that could play the imitation game overshadowed these deeper questions. The key obstacle was human language, which, unlike the calculation of elaborate mathematical problems, proved remarkably resistant to the application of computing power.

This wasn’t for a lack of trying. Harry Huskey, who worked with Turing, returned home to the US to build what the New York Times breathlessly billed as an “electric brain” capable of translating languages. This project, which the federal government helped fund, was driven by Cold War imperatives that made Russian-to-English translation a priority.

The idea that words could be translated in a one-to-one fashion — much like code-breaking — quickly ran headlong into the complexities of syntax, never mind the ambiguities inherent in individual words. Did “fire” refer to flames? End of employment? The trigger of a gun?

Warren Weaver, one of the Americans behind these early efforts, recognized that context was key. If “fire” appeared near “gun,” one could draw certain conclusions. Weaver called these sorts of correlations the “statistical semantic character of language,” an insight that would have significant implications in the coming decades.
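Weaver’s insight is easy to see in miniature: count which words tend to appear near an ambiguous term, and the counts themselves suggest the sense. Here is a toy sketch in Python; the sentences, sense labels, and window size are invented for illustration, not drawn from Weaver’s work:

```python
from collections import Counter

# Toy corpus: sentences and sense labels are invented for illustration.
SENTENCES = [
    ("the soldiers were ordered to fire the gun", "shoot"),
    ("the fire spread quickly through the building", "flames"),
    ("the manager decided to fire the new employee", "dismiss"),
    ("smoke from the fire could be seen for miles", "flames"),
]
WINDOW = 3  # how many words on each side of "fire" to count

# Count which words co-occur with "fire" under each sense.
context_counts = {}
for sentence, sense in SENTENCES:
    words = sentence.split()
    i = words.index("fire")
    nearby = words[max(0, i - WINDOW):i] + words[i + 1:i + 1 + WINDOW]
    context_counts.setdefault(sense, Counter()).update(nearby)

for sense, counts in context_counts.items():
    print(sense, counts.most_common(3))
```

Even on four sentences, “gun” turns up beside the shooting sense and “smoke” beside the flames sense; that is the statistical fingerprint Weaver had in mind.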

The achievements of this first generation are underwhelming by today’s standards. The translation researchers found themselves stymied by the variability of language, and by 1966 a government-sponsored report concluded that machine translation was a dead end. Funding dried up for years.

But others carried on research in what became known as Natural Language Processing, or NLP. These early efforts sought to demonstrate that a computer, given enough rules to guide its responses, could at least take a stab at playing the imitation game.

Typical of these efforts was a program a group of researchers unveiled in 1961. Dubbed “Baseball,” the program billed itself as a “first step” in enabling users to “ask questions of the computer in ordinary English and to have the computer answer questions directly.” But there was a catch: users could only ask questions about the baseball data stored in the computer.

This chatbot was soon overshadowed by other creations born in the Jurassic era of digital technology: SIR (Semantic Information Retrieval), which debuted in 1964; ELIZA, which responded to statements with questions in the manner of a caring therapist; and SHRDLU, which permitted a user to instruct the computer to move shapes using ordinary language.

Though crude, many of these early experiments helped drive innovations in how humans and computers might interact — how, for example, a computer could be programmed to “listen” to a query, turn it around, and answer in a way that sounded credible and lifelike, all while reusing the words and ideas posed in the original query.

Others sought to train computers to generate original works of poetry and prose with a mixture of rules and words generated at random. In the 1980s, for example, two programmers published The Policeman’s Beard Is Half Constructed, which was presented as the first book written entirely by a computer.

But these demonstrations obscured a more profound revolution brewing in the world of NLP. As computational power increased at an exponential rate and a growing body of works became available in machine-readable format, it became possible to build increasingly sophisticated models that quantified the probability of correlations between words.
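In their simplest form, such models just count how often one word follows another and turn the counts into probabilities. A minimal bigram sketch follows; the sample text is invented, and the systems of the era trained on far larger corpora and longer contexts:

```python
from collections import Counter, defaultdict

# Tiny training text, invented for illustration.
text = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each preceding word.
bigrams = defaultdict(Counter)
for prev, word in zip(text, text[1:]):
    bigrams[prev][word] += 1

def prob(word, prev):
    """Estimate P(word | prev) from the bigram counts."""
    total = sum(bigrams[prev].values())
    return bigrams[prev][word] / total if total else 0.0

print(prob("sat", "cat"))  # 1.0: "cat" was always followed by "sat"
print(prob("cat", "the"))  # 0.25: "the" preceded cat, mat, dog, rug equally
```

Scaled up to billions of words of internet text, counts like these become the “soft,” probabilistic guidelines the next paragraph describes.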

This phase, which one account aptly described as “massive data bashing,” took flight with the advent of the internet, which offered an ever-growing corpus of texts from which to derive “soft,” probabilistic guidelines that enable a computer to grasp the nuances of language. Instead of hard-and-fast rules that sought to anticipate every linguistic permutation, the new statistical methods took a more flexible tack that was, more often than not, correct.

A proliferation of commercial chatbots grew out of this research, as did other applications: basic language recognition, translation software, ubiquitous auto-correct, and other now commonplace features of our increasingly wired lives. But as anyone who has yelled at an artificial airline agent knows, these tools definitely had their limits.

In the end, it turned out that the only way for a machine to play the imitation game was to mimic the human brain, with its billions of interconnected neurons and synapses. So-called artificial neural networks operate much the same way, sifting data and drawing increasingly strong connections over time via a feedback process.
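What that feedback process looks like can be sketched with the smallest possible network: a single artificial neuron whose connection weights are nudged after every wrong answer. The task and numbers below are invented for the demonstration; this shows the general idea, not how any production system is built:

```python
import random

random.seed(0)
w0, w1, bias = random.random(), random.random(), random.random()

def predict(x0, x1):
    # Weighted sum of inputs: the "connections" of this tiny network.
    return w0 * x0 + w1 * x1 + bias

# Target behaviour the neuron must learn: output x0 + 2*x1
# (an invented task, chosen only so convergence is easy to verify).
data = [(x0, x1, x0 + 2 * x1) for x0 in range(4) for x1 in range(4)]

lr = 0.01  # learning rate: how strongly each error adjusts the weights
for _ in range(2000):
    for x0, x1, target in data:
        error = predict(x0, x1) - target  # feedback: how wrong were we?
        w0 -= lr * error * x0             # strengthen or weaken each
        w1 -= lr * error * x1             # connection in proportion to
        bias -= lr * error                # its share of the blame

print(round(w0, 2), round(w1, 2), round(bias, 2))  # ~1.0, ~2.0, ~0.0
```

Modern networks stack millions of such units, but the principle is the same: adjust connection strengths a little at a time in response to error.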

The key to doing so is another distinctly human tactic: practice, practice, practice. If you train a neural network by having it read books, it can begin to craft sentences that mimic the language in those books. And if you have the neural network read, say, everything ever written, it can get really, really good at communicating.
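A real neural language model is far too large to sketch here, but the read-then-mimic loop it performs can be caricatured with the word-following counts from the earlier bigram sketch: read a text, record what follows what, then write by sampling. The snippet of “book” below is invented, and this is a simple Markov chain standing in for a neural network:

```python
import random
from collections import defaultdict

# Invented training text standing in for "everything ever written."
book = ("in the beginning the machine read the book and then "
        "the machine began to write in the style of the book").split()

# "Reading": record which words follow each word in the training text.
follows = defaultdict(list)
for prev, word in zip(book, book[1:]):
    follows[prev].append(word)

# "Writing": start somewhere and repeatedly sample a plausible next word.
random.seed(1)
word, output = "the", ["the"]
for _ in range(12):
    word = random.choice(follows.get(word, book))
    output.append(word)
print(" ".join(output))
```

Swap the counting for a neural network and the toy text for a large slice of everything ever written, and you have the rough shape of what this paragraph describes.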

Which is, more or less, what lies at the heart of ChatGPT. The platform has been trained on a vast corpus of written work. Indeed, the entirety of Wikipedia represents less than 1% of the texts it has hoovered up in its quest to mimic human speech.

Thanks to this training, ChatGPT can arguably triumph in the imitation game. But something rather curious has happened along the way. By Turing’s standards, machines can now think. But the only way they have been able to pull off this feat is to become less like machines with rigid rules and more like humans.

It’s something worth considering amidst all the angst occasioned by ChatGPT. Imitation is the sincerest form of flattery. But is it the machines we need to fear, or ourselves?

_____________________________________________________________________________________________

Stephen Mihm, a professor of history at the University of Georgia, is coauthor of “Crisis Economics: A Crash Course in the Future of Finance.” Energiesnet.com does not necessarily share these views.

Editor’s Note: This article was originally published by Bloomberg Opinion on January 18, 2023. Comments posted and published on EnergiesNet.com reflect only the opinions of their authors; their publication does not constitute an endorsement by EnergiesNet.com or Petroleumworld.

Original article

Use Notice: This site contains copyrighted material the use of which has not always been specifically authorized by the copyright owner. We are making such material available in our efforts to advance understanding of issues of environmental and humanitarian significance. We believe this constitutes a “fair use” of any such copyrighted material as provided for in Section 107 of the US Copyright Law (Title 17 U.S.C. § 107). For more information go to: http://www.law.cornell.edu/uscode/17/107.shtml.

energiesnet.com  01 20 2023
