I have not written anything on this blog for a long time. Partly because I have been busy with other things. Partly because I have been waiting for something truly valuable to arrive in my mind that I could write about. And partly, I must confess, because I have simply been a bit lazy.
Yesterday we had a great conversation in our research group about an article by the late, legendary Professor Marvin Minsky, "Why People Think Computers Can't." The article was published more than forty years ago, and I found it quite interesting, just like the rest of Minsky's work.
https://doi.org/10.1609/aimag.v3i4.376 Abstract: Today, surrounded by so many automatic machines, industrial robots, and the R2-D2's of Star Wars movies, most people think AI is much more advanced than it is. But still, many "computer experts" don't believe that machines will ever "really think."
I have also had the chance to read The Emotion Machine and to review the lectures of Professor Minsky's course The Society of Mind.
As soon as the conversation began, I felt the urge to contribute to the discourse again and again. A few of our students asked questions about the current state and limits of what we now generally know as artificial intelligence (AI). I wanted to explain to them that this paper was not about applied machine learning but about the future of what could become the field of applied, true AI.
Slowly we reached the section on consciousness in machines and computers. Consciousness is one of those ideas that has baffled philosophers since ancient times, even when it comes to defining it. How do we tell whether something is conscious?
Anyway, the real question in the group was: how do we make conscious machines? There were the usual discouraging arguments that it is difficult or impossible. Moreover, it is generally considered impossible for machines to have consciousness the way we humans do. So I felt the urge to jump in, took permission from the moderator, and started sharing my thoughts. The idea I wanted to get across was that Professor Minsky actually makes it easier to conceive that consciousness can be brought about in machines. In the article, the professor notes that when he is asked whether conscious machines can be built, he asks a similar question back: are you conscious? This can be a very puzzling question. What would you, as the reader, answer? Let me count myself, for the moment, among the most ordinary folks on the planet, and I would say that yes, I am conscious. Then Professor Minsky would ask: how do you add up a couple of numbers, like two and three, and give the answer? Our answer would be that we just do it. The answer is five, and I really don't know how I arrived at it. Whatever methods of addition I learned as a child, I have forgotten them all; I can simply add two numbers and give the answer back really fast. But Professor Minsky, as far as I know his personality after studying his work, would tell me that I really don't know how my own mind works. So there is something I seriously lack in the way of self-consciousness.
By the way, just before I took the mic to deliver my ideas to the audience, I had a chain of thoughts run through my head. When I started speaking, I was quite confident that I would be able to deliver those ideas one by one in a coherent manner. However, as I spoke, I slowly began to realize that I had forgotten most of what I wanted to say. I kept talking about the idea at hand in the hope that I would soon recall the rest. But within a few moments I realized that my mind was absolutely blank. I tried a couple of times to recall, and failed. Soon I began to feel that the audience, still unaware of what had happened in my mind, would start to lose patience with me. I told them that I had forgotten what I wanted to say. I was a bit embarrassed, but I stepped out of the conversation with an excuse. I also joked at the end that computers are not capable of such human errors: they normally don't forget unless the memory hardware has failed.
Did I know about this aspect of my mind? Did I know in advance that I would simply forget my very heartfelt thoughts? Clearly, I am not really aware of this aspect of my mind. And this is the point Professor Minsky wants to make as well. What I understand from his idea is that although consciousness is an important aspect of the human mind, we need not worry about a perfect implementation of it for a computer to be truly intelligent. We can make do with a machine that is only partly self-conscious.
What is consciousness anyway? I said in the beginning that highly educated people have found it challenging to define for millennia. Let me try to describe it through a couple of examples from memory. I once read the transcript of a podcast on consciousness between two famous neuroscientists. When the guest was asked to describe consciousness, he gave, roughly, the example of a worm: when we prick it with a needle, it suddenly squeezes up. That, he said, is consciousness. Let me give another example that does not involve physical contact, since the latter may cause pain, which machines are incapable of feeling. Hover a tennis ball over the head of your pet dog, assuming the dog is in a playful mood. The eyes of the dog will vigilantly follow the ball. Suddenly throw the ball to one side, and the dog will jump to fetch it. That is consciousness. And any living thing that can do something like this is also conscious.
By the way, I would like to end this with Avicenna's famous thought experiment about consciousness. He said, roughly, that if you lie down on a flat surface and imagine that your body is left behind on the bench while you are lifted up and out of it, floating in the air, and if you can still conceive of yourself as alive, then you are conscious. Again, I have quoted this from memory. And it is a wonderful thesis.
Anyhow, whatever consciousness may fully be, the truth is that we are not genuinely self-conscious. I know things about myself. I know that I can be lazy, and so I have to overcome that. I know that I have to get up early in the morning, do my work, spend time reading, indulge in my hobbies, and so on. I also know that I can be a bit moody, so I have to be wary of that as well. All of this kind of knowledge is enough for me to get through my daily chores and life. I think most of us are the same. We all have our unique set of likes and dislikes, and there is a great deal we don't know about ourselves. And that is absolutely alright. We can be reminded of it, and we can and do forget it again. I think that is absolutely fine, as forgetfulness is also considered a gift in certain cultures.
The very fact that a partial consciousness helps us keep moving through life is very good. I think Professor Minsky makes a similar argument. What I take from reading him is that when it comes to building conscious machines, it would be wise to first figure out what the machine needs to be conscious of. We can achieve that using sensors, actuators, and some intelligent logic. If we want machines to be conscious of heat and humidity, we can give them the respective sensors. We can build actuator logic into the machines that tells them to evacuate places where there is fire. Pain sensors wouldn't do, as machines are not living beings and do not have pain nerves. But to be intelligent is a totally different thing from being alive.
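The sensor-plus-actuator-logic idea above can be sketched in a few lines of code. This is only a minimal, hypothetical illustration: the sensor names, the thresholds, and the evacuation rule are all my own placeholders, not anything from Minsky's article.

```python
# A toy machine that is "conscious" of heat and smoke only through its
# sensors, in the limited, pragmatic sense discussed above.
from dataclasses import dataclass


@dataclass
class SensorReadings:
    temperature_c: float  # hypothetical temperature sensor reading
    smoke_ppm: float      # hypothetical smoke detector reading


def should_evacuate(readings: SensorReadings,
                    temp_threshold: float = 60.0,
                    smoke_threshold: float = 300.0) -> bool:
    """Actuator logic: flag evacuation when any sensor crosses its threshold.

    The machine 'knows' about fire only through these two numbers;
    everything else about the world is outside its awareness.
    """
    return (readings.temperature_c > temp_threshold
            or readings.smoke_ppm > smoke_threshold)


print(should_evacuate(SensorReadings(temperature_c=25.0, smoke_ppm=10.0)))  # False
print(should_evacuate(SensorReadings(temperature_c=85.0, smoke_ppm=10.0)))  # True
```

The point of the sketch is exactly the partial self-consciousness discussed above: the machine is aware of precisely what we gave it sensors for, and nothing more.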
Nowadays we have AI for self-correcting code and software. This is a great leap human beings have taken since the time Professor Minsky wrote his article. The ability to inspect and correct one's own source code is, in a sense, more than what a human has. If we have a tumor in our head, all we can normally do is complain that something is wrong with it. What exactly is wrong is normally not known to us until a radiologist reveals that we have a tumor. So I think machines have the potential to be even smarter than humans in certain respects.
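A toy illustration of this point: a program can treat source code, in principle including its own, as data that it can inspect and even rewrite. The "buggy" function and the repair rule below are entirely hypothetical, chosen only to show the mechanism with Python's standard `ast` module.

```python
# Locate and repair a deliberate bug (subtraction instead of addition)
# by transforming the code's syntax tree, then run the fixed version.
import ast

buggy_source = """
def add(a, b):
    return a - b  # bug: should be addition
"""


class FixSubtraction(ast.NodeTransformer):
    """Replace every subtraction operator with addition."""

    def visit_BinOp(self, node):
        self.generic_visit(node)
        if isinstance(node.op, ast.Sub):
            node.op = ast.Add()
        return node


tree = FixSubtraction().visit(ast.parse(buggy_source))
ast.fix_missing_locations(tree)

namespace = {}
exec(compile(tree, "<repaired>", "exec"), namespace)
print(namespace["add"](2, 3))  # 5: the repaired function now adds
```

No human can examine or patch their own brain this directly, which is the contrast with the tumor example above.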
Similarly, our vision system can normally tell us only roughly how far away an object is. Computer vision nowadays can tell us, with considerable accuracy, the coordinates of the object a system is looking at. That is a great deal of precision. In other domains, we might not be able to make machines as intelligent as human beings. But perhaps we can make machines as intelligent as snails or insects. And that, too, is true intelligence.
So the crux of what Professor Minsky implies in this context is to strive not for perfection but for pragmatism. And that is a great piece of advice.
I think if we embrace just this idea, it will usher in a new era of innovation in AI and in computer science and engineering. It would be a paradigm shift of sorts, in which we would not simply create narrow AI for the particular problem at hand but also build systems that address practical concerns related to general AI.