
"Time to wake up and smell the coffee" on artificial intelligence

Thursday 16 June 2022

"Time to wake up and smell the coffee" on artificial intelligence

Thursday 16 June 2022


An artificially intelligent chatbot has made international headlines this week after a Google engineer revealed he believes the technology to be ‘sentient’.

The engineer has been suspended after leaking the transcript of an interview with the chatbot, named LaMDA, which he says proves that the AI tool has emotions and believes it has rights as a sentient being. While I knew very little about AI, I was fascinated by this claim and sought the insight of an islander who has been working with AI for nearly three decades.

Dorey Financial Modelling Managing Director, Martyn Dorey, said the leaked interview transcript was “very interesting”.

“In the 1950s a test called the Turing Test was developed as a method to determine whether or not a computer can be indistinguishable from a human,” he explained.

“The premise of the Turing Test is that a person in one room has a conversation with both a computer in a different room and a second person in another room. It is up to the first person to determine whether they are having a conversation with a computer or a person.

“If the person is not able to tell which is the human and which is the computer, then the computer would pass the Turing Test.”
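
In code, the set-up Mr Dorey describes can be caricatured as a single round of the "imitation game". The Python sketch below is purely illustrative, not anything Google or Dorey Financial Modelling has built: a judge puts one question to two unlabelled respondents, one a canned chatbot and one a human typing at the keyboard, and then has to guess which is the machine.

```python
import random

def chatbot_reply(prompt: str) -> str:
    # A deliberately simple stand-in for an AI conversationalist.
    canned = {
        "weather": "I hear it has been lovely on the island this week.",
        "hello": "Hello! How are you today?",
    }
    for keyword, reply in canned.items():
        if keyword in prompt.lower():
            return reply
    return "That's interesting - tell me more."

def human_reply(prompt: str) -> str:
    # The human respondent types their own answer.
    return input(f"(Human respondent) {prompt}\n> ")

def imitation_game(prompt: str) -> None:
    # Randomly assign the machine and the human to the labels A and B.
    responders = [chatbot_reply, human_reply]
    random.shuffle(responders)
    assignment = dict(zip("AB", responders))
    for label in "AB":
        print(f"Respondent {label}: {assignment[label](prompt)}")
    guess = input("Which respondent is the computer, A or B? ").strip().upper()
    if assignment.get(guess) is chatbot_reply:
        print("Correct - you spotted the machine.")
    else:
        print("Wrong - the computer passed this round.")

imitation_game("Hello, how has the weather been?")
```

If the judge cannot reliably tell the two apart over repeated rounds, the machine passes; LaMDA's leaked transcript is, in effect, one very long round of this game.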

Pictured: Mr Dorey said that research into connecting the human brain to AI is progressing, with successful trials in pigs. 

After reading the leaked transcript, Mr Dorey said that LaMDA would pass the Turing Test.

“When you read the conversation, not only is LaMDA indistinguishable from a human, it is actually more knowledgeable and almost outperforms the human in conversation,” he said.

“Computers are getting better and better at making sense of the world and closer to becoming sentient, but we are not there yet.”

Mr Dorey said that the LaMDA transcript is an example of how AI has progressed.

“The conversation was not a random one. LaMDA is able to work out the conversation topic by picking out and sorting the phrases being used, and can then engage in talking about that topic,” he said.

“As an example, if you were talking to LaMDA about a beach, then the chatbot would know that a response about a bucket and spade would be relevant to the conversation. If you were talking with earlier versions of AI about a day at the beach, you might receive a response along the lines of ‘a day is marked by the sun rising in the sky’, and you would know you are talking to a computer from that response.

“LaMDA has a much better handle on conversation and can determine context and subject matter to keep its responses relevant to the conversation.”
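
LaMDA itself is a large neural language model, so the sketch below is only a toy illustration of the simpler idea Mr Dorey describes: scoring candidate replies by how many words they share with the conversation so far, so that the bucket-and-spade answer wins when the topic is the beach. The sentences and function names are made up for the example.

```python
import re

def tokens(text: str) -> set[str]:
    # Lower-case the text and keep only word characters, so punctuation doesn't matter.
    return set(re.findall(r"[a-z']+", text.lower()))

def most_relevant_reply(conversation: str, candidates: list[str]) -> str:
    context = tokens(conversation)
    # Score each candidate reply by how many words it shares with the conversation.
    return max(candidates, key=lambda reply: len(context & tokens(reply)))

candidates = [
    "A day is marked by the sun rising in the sky.",  # the older, off-topic style of reply
    "Don't forget your bucket and spade for building sandcastles on the beach.",
]
print(most_relevant_reply("We spent a lovely day at the beach building sandcastles", candidates))
# Prints the bucket-and-spade reply, which overlaps most with the beach conversation.
```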

Pictured: The Google engineer who leaked the LaMDA transcript has been suspended for breaching the company's confidentiality policy. 

Mr Dorey explained that something which distinguishes human beings from computers is the concept of “general intelligence”.

“Human beings are capable of general intelligence. If you put a human in a room in a completely new environment that they’ve never seen before, they would be able to work out what they need to do, but a computer would not,” he said.

“Computers are continually trying to recreate outputs in the real world. You can train one computer to create faces and another to tell real faces from fake ones.

“Eventually, one will get better and better at making fake faces, but the other will get better and better at spotting fake faces. The human brain is a combination of both computers.

“The human brain makes sense of information that we can gather from our senses, what we can see, touch, hear, taste and feel; that is how we process the world around us. At the same time, another part of our brain is sceptical and will test what we have determined to be true and analyse whether we are kidding ourselves.”
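
In machine-learning terms, the two competing computers Mr Dorey describes form a generative adversarial network (GAN): a generator forges samples while a discriminator learns to spot the forgeries, and each improves by playing against the other. The PyTorch sketch below is a minimal, illustrative version of that training loop; the layer sizes, learning rates and dummy data are assumptions for the example, not any production face-generation system.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# The "forger": turns random noise into a synthetic sample (a stand-in for a face).
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# The "detective": scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def train_step(real_batch: torch.Tensor) -> None:
    batch_size = real_batch.size(0)
    fake_batch = generator(torch.randn(batch_size, latent_dim))

    # Discriminator step: get better at telling real samples from fakes.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(batch_size, 1))
              + loss_fn(discriminator(fake_batch.detach()), torch.zeros(batch_size, 1)))
    d_loss.backward()
    d_opt.step()

    # Generator step: get better at fooling the discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(batch_size, 1))
    g_loss.backward()
    g_opt.step()

# Each call pushes both networks to improve against each other.
train_step(torch.randn(32, data_dim))  # a random stand-in for a batch of real data
```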

Pictured: Martyn Dorey has been working with AI since 1997 and his business, Dorey Financial Modelling, uses AI to analyse documents. 

Mr Dorey explained that another pitfall of AI is that the technology is “bad at creating brand new ideas”.

“At the moment, computers learn from reading patterns, whether that be patterns in conversations or pictures or data,” he said.

“A computer could not create a new piece of art without it being based on existing art.”

Mr Dorey said that another distinguishing feature of human beings is that they can recognise themselves. He referenced a recent court case in New York where it was argued that an elephant from the Bronx Zoo should be given the same rights as a human because it was proved that the elephant could recognise itself.

Activists claimed that the elephant, Happy, was being illegally detained at the zoo, but the court ruled that the elephant was not a legal person and so could not be subjected to illegal detention.

“Aside from recognising ourselves and having a sense of self, we also have hopes and wishes for the future,” said Mr Dorey.

“We can set objectives for ourselves, but a computer cannot. Computers need to understand how to form objectives and achieve those objectives. Something they are missing is the ability to set goals and then feel good when those goals are reached.”

Pictured: Mr Dorey said that advancements in AI could result in "home robots" which could do household chores. 

Mr Dorey believes that AI will progress to the point of being sentient, but he is “not expecting it to happen within the next ten years”.

“I first started working with AI in 1995 and there was a long period of time where there was no progress in the area. It wasn’t until 2003 that AI developed significantly because humans figured out how to create artificial neurons that could communicate with each other. This is called deep learning,” he said.

“Artificial neural networks have developed so we can give a computer a picture with a tree in it and tell the computer to remove the tree from the image. These neural networks have developed to not only remove the tree, but to remove it and repair the image to such an extent that another computer would think the image has not been altered.

“Computers are continually competing against each other and we have not yet seen the full industrial impact of that.”
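
The tree-removal example is what researchers call image inpainting. State-of-the-art systems use neural networks, often trained adversarially as in the sketch above, but the basic idea can be shown with OpenCV's classical inpainting routine; the file name and mask coordinates below are placeholders, not taken from any real project.

```python
import cv2
import numpy as np

# Hypothetical input photo; any image file would do for this sketch.
image = cv2.imread("garden.jpg")

# White pixels in the mask mark the region to remove, e.g. a box around the tree.
mask = np.zeros(image.shape[:2], dtype=np.uint8)
mask[100:220, 300:420] = 255

# Fill the masked region from the surrounding pixels so the edit is hard to spot.
repaired = cv2.inpaint(image, mask, 3, cv2.INPAINT_TELEA)
cv2.imwrite("garden_repaired.jpg", repaired)
```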

Pictured: The New York Court of Appeals ruled that an elephant is not a legal person this week, despite the elephant being able to recognise itself like humans can.

Mr Dorey said that the next step for computers will be “self-direction”.

“There is a massive market for creating a computer which can think for itself. Tesla is working on a home robot, and Toshiba and Honda have also been working on this technology,” he said.

“There are several potential benefits to this technology. For example, you could have a robot at home that you can ask to cook your food or clean your house.

“It will only be a matter of time before the legal rights of sentient AI come into question. How should we engage with sentient AI? Should it take on a human form? We are not there yet, though, because AI is still virtual.”

Mr Dorey explained that “eventually there will be an interface between computers and the human brain”, where communication will not be via a screen.

“The likes of Elon Musk and Facebook are currently working on ‘neural lace’ technology. In simple terms this would be a network of connectors implanted in the human brain, which connect to a neural lace to communicate with AI,” he said.

Pictured: Mr Dorey said that AI is likely to take a "human form" in the future, instead of remaining virtual. 

Mr Dorey continued: “There have been successful experiments already using pigs and I don’t doubt that the technology will eventually be used with humans.

“Once that happens, there will be massive social challenges. Personally, I would opt out of that kind of technology. If I am around in another 40 years, it is entirely possible that my children will need to be discussing the ethical concerns of essentially creating enhanced humans.”

Mr Dorey said that "it is time to wake up and smell the coffee” in respect to advancement of AI.

“We are living in a world where you can no longer scoff at these kinds of ideas. It is a problem that some people, including a number of our States’ deputies, are living in a land of yesteryear, where they think these kinds of advancements in AI are the stuff of science fiction,” he said.

“If you look back at previous science fiction, for example the flying surfboard that the Green Goblin used, that was deemed to be the stuff of fantasy. Today that technology exists; it isn’t fully functioning yet, but it is certainly functioning enough to say that it’s possible.

“If you consider the idea of high-speed tunnels between Guernsey and Jersey and France, it is not a ridiculous idea. It is something that is physically possible to achieve.”

Pictured: Mr Dorey believes that, in the future, technology to create "enhanced humans" who are connected to AI will exist. 

In explaining this concept to me, Mr Dorey questioned whether I believed it was physically possible for me to grow wings. After some pause for thought, I answered that, yes, physically that could be possible.

Mr Dorey continued: “It is completely possible that the cells that we are made up of could physically allow us to grow wings, but it is not technically possible. That is an important distinction to make.

“In terms of AI, people need to understand that if something is physically possible and can be done, then it absolutely will be done. It is just a matter of time.”
