
Llewellyn King: How will we know what’s real? Artificial intelligence pulls us into a scary future

Depiction of a homunculus (an artificial man created with alchemy) from the play Faust, by Johann Wolfgang von Goethe (1749-1832)

Feature detection (pictured: edge detection) helps AI compose informative abstract structures out of raw data.

— Graphic by JonMcLoone


WEST WARWICK, R.I.

A whole new thing to worry about has just arrived. It joins a list of existential concerns for the future, along with global warming, the wobbling of democracy, the relationship with China, the national debt, the supply-chain crisis and the wreckage in the schools.

For several weeks artificial intelligence, known as AI, has had pride of place on the worry list. Its arrival was trumpeted for a long time, including by the government and by techies across the board. But it took ChatGPT, an AI chatbot developed by OpenAI, for the hair on the back of the national neck to rise.

Now we know that the race into the unknown is speeding up. The tech biggies, such as Google and Facebook, are trying to catch the lead claimed by Microsoft.

They are rushing headlong into a science that the experts say they only partly understand. They really don’t know how these complex systems work; it is rather like a book whose author cannot read it after having written it.

Incalculable acres of newsprint and untold decibels of broadcasting have been raising the alarm ever since Microsoft’s OpenAI-powered Bing chatbot, in a test, told a New York Times reporter that it was in love with him and that he should leave his wife. Guffaws all round, but also fear and doubt about the future. Will this Frankenstein creature turn on us? Maybe it loves just one person, hates the rest of us, and plans to do something about it.

In an interview on the PBS television program White House Chronicle, John Savage, An Wang professor emeritus of computer science at Brown University, in Providence, told me that there was a danger of over-reliance on decisions made using AI, and hence of mistakes. For example, he said, some Stanford students partly covered a stop sign with black and white pieces of tape. AI misread the sign as signaling that it was okay to travel 45 miles an hour. Similarly, Savage said that the smallest calibration error in a medical operation using artificial intelligence could result in a fatality.

Savage believes that AI needs to be regulated and that any information generated by AI needs verification. As a journalist, I find it is the latter that alarms me.

Already, AI is writing fake music almost undetectably. There is a real possibility that it can write legal briefs. So why not usurp journalism for ulterior purposes, as well as putting stiffs like me out of work?

AI images can already be made to speak and look like the humans they are aping. How will you distinguish a “deepfake” from the real thing? Probably, you won’t.

Currently, we are struggling with what is fact and where the truth lies. There is so much disinformation, dispersed so speedily, that some journalists are in a state of shell shock, particularly in Eastern Europe, where legitimate writers and broadcasters are assaulted daily with disinformation from Russia. “How can we tell what is true?” a reporter in Vilnius, Lithuania, asked me during an Association of European Journalists’ meeting as the Russian disinformation campaign was revving up before the Russian invasion of Ukraine.

Well, that is going to get a lot harder. “You need to know the provenance of information and images before they are published,” Brown University’s Savage said.

But how? In a newsroom on deadline, we have to trust the information we have. One wonders to what extent malicious users of the new technology will infiltrate research materials or, later, the content of encyclopedias. Or are the tools of verification themselves trustworthy?

Obviously, there are going to be upsides to thinking machines scouring the internet for information on which to make decisions. I think of handling nuclear waste; disarming old weapons; simulating the battlefield, incorporating historical knowledge; and seeking out new products and materials. Medical research will accelerate, one assumes.

However, privacy may be a thing of the past — almost certainly will be.

Just consider that attractive person you just saw at the supermarket, but were unsure what would happen if you struck up a conversation. Snap a picture on your camera, and in no time AI will tell you who the stranger is, whether the person might want to know you and, if that should be your interest, whether the person is married, in a relationship or just waiting to meet someone like you. Or whether the person is a spy for a hostile government.

AI might save us from ourselves. But we should ask how badly we need saving — and be prepared to ignore the answer. Damn it, we are human.

Llewellyn King is executive producer and host of White House Chronicle, on PBS. His email is llewellynking1@gmail.com and he’s based in Rhode Island and Washington, D.C.

whchronicle.com
