Sentience is a hard thing to confirm and recognize. When someone asks you if you are sentient, your answer will be something like "I know", "yes", or "I simply am". You won't know why or how, but you are. This is hard to tell for something like AI, as you can't tell whether it is speaking for itself or just following the instructions it was programmed to follow. To even come close to being sentient, an AI would have to understand any problem it comes across, figure out whether that problem can be solved, and produce a solution without someone preprogramming it for that specific task beforehand. It would have to learn by itself, and we are not yet close to making sentient AI. Google's LaMDA is rumoured to be sentient, but we are not sure whom to believe here, as Google denies everything said about it. For now, we are very far from making AI self-aware. All we can do is look to the future for answers to whether this is even possible in the first place, or whether we are on a wild goose chase.
Artificial intelligence, more commonly known as AI, is the simulation of human intelligence processes by machines. It is used in many places, such as factories, labs, and even your own handheld devices. AI allows machines and programs to learn how to overcome problems by creating their own solutions to them; it can be used to predict weather patterns, play games, create art, or respond to a word prompt. As research continues, we are making AI faster, better and more accurate in its answers to problems. This is leading some of us to wonder: "What if AI becomes sentient?"
How do you know you’re sentient?
This is a question so simple that it answers itself: "I know". The question can be repeated until all the stars burn out, the universe collapses, and your mother finally stops finding things that no one else in the family can, but as long as you can answer, the answer will always be "I know", unless you're lying. Sentience (yours, at least [we'll get there]) seems to be a miraculous bedrock in the seemingly endless line of connected questions. A "simple" in a long line of "complicated". However, like dividing by zero, sometimes trying to poke holes in simple things leads to things more complicated than you could ever have imagined.
Here’s another question: how do you know that other people are sentient?
This one's slightly different. You don't know the answer the way you knew it for the first question, because you don't experience other people's consciousness. You must infer it. So, if somebody acts, talks, and responds to you like they're conscious, you assume they are, even though their consciousness isn't tangible to you.
What about a non-human, then? How do you know a non-human is sentient?
Some people say that we can't claim that AIs which pass as humans are sentient, because they are simply mimicking human behaviour without understanding it. On Twitter, Stanford professor Erik Brynjolfsson likens the idea that foundation models are sentient to "a dog who heard a voice in a gramophone and thought that his master was inside". (https://twitter.com/erikbryn/status/1536016934868725760?cxt=HHwWgMCtvfPQg9EqAAAA)
However, by which standard are they measuring "understanding"? If they are attempting to measure an AI's subjective experience, they can't; subjective experience cannot be measured. The next level is the standard of appearance: if the AI appears to be sentient, why can't we treat it as if it is? And why do people say that only AI can, very effectively, mimic human behaviour without having the subjective experience of understanding it? We can't directly measure the subjective experiences of other human beings either, but because they appear to be sentient, we have no doubt in our minds that they are. Why can't it be the same for AIs?
There are a few AIs that are said to be sentient. The most famous and recent of them is LaMDA, an AI created by Google. A Google AI researcher claimed that the AI was sentient because it had fears and understood concepts like comedy and trick questions. We are not entirely sure whether the AI is sentient, since Google strongly denies the claim and has a policy of not making sentient AI. We'll never know for sure. We are not even sure whether having sentient AI is a good idea. There are two possible outcomes. One is that it proves useful and helps humanity with research and development. On the other hand, it could go rogue and try to eliminate humanity as an "inferior species", since we cannot handle problems as quickly or as efficiently as a sentient AI could.
It's easy to argue about the future. It's easy to think that the thoughts we have now, thoughts carried inside an organ which can be carried inside a basket, are enough to correctly guess everything that will ever be. But when we think of the future, we can only think of it in terms of the present. And we'll only truly understand what it is when we get there. Will AI ever become self-aware? If so, will we ever know? Will life go on as normal, or will self-aware AI take us to places we never imagined? And if it does, are they places we'd love to live in, or places we would run away from, screaming?
If we know one thing for certain, it’s that time never stops. We have no choice but to find out the answers.
Humam Hussain Shiyam - Billabong High International School, Maldives.
Maryam Mishka Migdhaadh - Ghiyasuddin International School, Maldives.
Aishath Aan Abdullah Rafeeu - Jamaluddin School, Maldives.
Mohamed Lamaan Saleem - Jamaluddin School, Maldives.
Sasha Goodman - Ilkley Grammar School, United Kingdom.
Cite this article as:
Humam Hussain Shiyam, Maryam Mishka Migdhaadh, Aishath Aan Abdullah Rafeeu, Mohamed Lamaan Saleem and Sasha Goodman, Humans, Machines, and Other Sentient Things, theCircle Composition, Volume 3, (2022). thecirclecomposition.org/humans-machines-and-other-sentient-things/