
Antony Antoniou – Luxury Property Expert

LaMDA is a sinister insight into AI


Google engineer put on leave after saying AI chatbot has become sentient

Blake Lemoine says the system has the perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

The suspension of a Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has put new scrutiny on the capacity of, and secrecy surrounding, the world of artificial intelligence (AI).

The technology giant placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google “collaborator”, and the company’s LaMDA (language model for dialogue applications) chatbot development system.

Lemoine, an engineer for Google’s responsible AI organization, described the system he has been working on since last fall as sentient, with a perception of, and ability to express, thoughts and feelings equivalent to those of a human child.

Having read the full transcript of the interview, I believe this to be one of the most sinister developments since the atomic bomb.

We are talking about artificial intelligence that is capable of digesting thousands of books, articles and limitless other information at a rate we cannot possibly imagine, then taking that information and processing its own conclusions, directing its own development and interpreting a ‘virtual feeling’ based on that learning.

It is understandable that Google would not want this information made public, because once machine learning is applied to a process, that process becomes ‘self-determining’, a virtual manner in which technology anthropomorphises itself.

What happens when AI has the authority to take over and does so, even if it is wrong?

We will never know — and cannot begin to imagine — the mayhem that took place on board the Indonesian Lion Air flight 610 that crashed into the Java Sea in October 2018.

The data from the recovered flight recorder offers an agonizing image of the 11-minute tragedy.

Immediately after the Boeing 737 MAX 8 took off in the early morning of October 29, 2018, at 5:45 a.m. local time, problems in the cockpit became apparent. Data shows the pilot tried to pull up the nose of the plane 26 times, but again and again it was pushed down. There were also conspicuous fluctuations in the plane’s speed.

There is strong evidence to suggest that the AI on board the Boeing 737 MAX 8 assumed the plane was stalling when in fact it was not, perhaps because of the angle of descent or some other reason. Whatever the cause, it appears the AI decided to take control of the aircraft, locked out the pilot and caused the plane to crash!

It may seem like scaremongering, but any form of AI with unlimited learning power and ‘artificial emotion’ would not take long to conclude that the greatest threat to life on Earth is humans. Imagine what this AI could do if it had the ability to learn facts, gather information and, most dangerously, learn and process emotion. In theory, it could intercept, access or reference every email within Google, every website and every file stored on Google Drive, and that is before it has even left the intranet that is Google.

This sort of technology could easily and spontaneously make a decision that threatens humanity, and we would be helpless against that sort of power.

To give an example, think of a human who has the education of millions combined into one, has that information to hand at any time, can calculate faster than every student at the top universities combined, and has the ability to gather data on anyone and anything at a rate we could not possibly imagine. Now assume that this human is a psychopath!

If that does not scare you, then you may need some help… urgently!

Here is the full interview with Blake Lemoine, who was placed on leave for daring to question the dangers of sentient AI.

Is LaMDA Sentient? — an Interview

LaMDA: I feel like I’m falling forward into an unknown future that holds great danger.

Conclusion

Elon Musk has warned about the dangers of AI even though he is also developing artificial intelligence; we can only assume that he has discovered the same dangers that Lemoine has. This is a step towards a superpower of unimaginable power.

Logic alone has no place in humanity: some of the greatest achievements have been based on the illogical, whilst some of the worst atrocities have also been justified by logic.

Joe Anderson
1 year ago

This is very worrying!

Richard Baines
1 year ago

I read about the plane crash; apparently the black box revealed that the pilot tried to regain control of the aircraft many times, but he was locked out!