One of my ex-colleagues remarked on Facebook today about the limits of language models in terms of their ability to acquire knowledge. Out of a random thought, I commented that language models indeed do not have any knowledge at all. In my opinion, they are just probabilistic models that churn out the next words in a sequence by following a particular probability distribution. I further asked whether we as humans really know anything either. Are we also probabilistic models that are trying to produce words (as we speak or write) based on what we have learned over our lifetimes? I added that Marvin Minsky would have asked us such a question. Indeed, it was the habit of Professor Minsky to catch a curious interrogator off guard with such stunning questions in response. A typical scenario: an attendee asks Professor Minsky, could machines ever become conscious? In response, Professor Minsky asks, are you conscious? How do you know that you are conscious? These are the kinds of questions that can baffle even the most intelligent student for a moment.
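The "probabilistic model churning out next words" picture can be made concrete with a toy sketch. Everything here is invented for illustration: the tiny vocabulary and the probabilities stand in for what a real model would learn from data.

```python
import random

# Toy "language model": for each context word, an invented
# (hypothetical) probability distribution over next words.
NEXT_WORD_PROBS = {
    "the":  [("cat", 0.5), ("dog", 0.3), ("idea", 0.2)],
    "cat":  [("sat", 0.6), ("ran", 0.4)],
    "dog":  [("barked", 0.7), ("sat", 0.3)],
    "idea": [("spread", 1.0)],
}

def sample_next(word, rng):
    """Sample the next word from the distribution for `word`."""
    words, probs = zip(*NEXT_WORD_PROBS[word])
    return rng.choices(words, weights=probs, k=1)[0]

def generate(start, length, seed=0):
    """Generate up to `length` more words, one sample at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        if out[-1] not in NEXT_WORD_PROBS:
            break  # no distribution for this word: stop
        out.append(sample_next(out[-1], rng))
    return " ".join(out)

print(generate("the", 3))
```

The model "knows" nothing in any interesting sense; it only follows the distribution it was given, which is exactly the point of the argument above.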
Anyhow, to this, my colleague replied that Professor Minsky would indeed have been able to say so just by looking at me. I found this comment of my friend's rather sarcastic, and frankly embarrassing. So I decided to give a bitter response. And I did! The whole thread and conversation are given below.
My friend clarified this by saying that he literally meant to prove the point that a machine would not be able to do what I did as a human, i.e., to respond while feeling bad. I agree with this. But there is still a myriad of other questions that we need to address about the essence of epistemology.
Did I really know in advance that I would get a sarcastic response to my comment, which I made simply out of goodwill? Did I know at the time of commenting that a few minutes later I would be feeling bad about the experience? Did I know that the deluge of bad thoughts I had about my friend’s remark on my intellectual aptitude would be hard to suppress? Did I know that my composure would fail? This is a very important question that one needs to address while building intelligent machines. Will a machine be able to predict its own failure? This is a profound question about intelligence. But can humans predict their own failures in advance? This is a rather liberating question for intelligent machines. How did I know how to choose my response? I had the choice of staying quiet and ignoring the comment. I had the opportunity to respond to a joke with a joke. The response could have been a self-deprecating joke or even a pun. Or it could be the bitter string of words that I posted back. And why did I choose to reply in this manner?
Most of our personalities are made up of third-person influences. Indeed, if this had happened to me, say, twenty years ago, my response could have been extreme. I would either have internalized it or unleashed myself on my friend. But over the years I have learned to be a bit more calculated in my dealings with other people. Yet I could not see it coming, or else I would have avoided my friend’s comment altogether. It was a collegial relationship that I had tried to brew with calculated diplomacy over the years, and I had then left it unperturbed in a casket of old things up in the attic of my mind. Even if he did not care about being diplomatic with me, I would definitely have resorted to diplomacy and created a win-win situation for both of us. This has been part and parcel of my nature, one that I cherish to this day. But then, as of recently, I do come out of my skin and give the perpetrator a piece of my mind.
How did I learn to do this? As I said, it is mostly from third-person influences. Movies, friends, the behavior of elders, books, and stylish dialogues have taught me all of this stuff. But the question is, do I really know this stuff? Whenever I have to engage in a behavior, I keep in front of me a bunch of behavioral templates that I have learned over the years and choose the one that sounds most appropriate. You may argue that I really do know this stuff.
Whenever I have to write something these days, I tend to consult a few templates in my mind for various writing styles that I find fashionable. I like the styles of Sam Harris (for his charisma), Malcolm Gladwell (for the alacrity), Daniel Dennett (for the depth), and Marvin Minsky (for the simplicity and elegance). I just choose stuff, mix it, and out come the words that I keep scribbling. But do I really know all of this stuff that I am writing? You might want to argue that I don’t know any of it. How about language models? Do they really know what they are talking about? I really don’t think so. By the way, do language models really know how they are doing their calculations at every step while they generate a word? And this brings us to the million-dollar question: do we really know the mechanisms that operate in our heads while we write things? I would say that we really don’t know any of that. Hu-aah! I think I have made a bit of an argument by this point in this article. When I started writing this article, I was not sure where I was headed. But I think I have already summed up the crux of what I wanted to write.
But I have to explain something about why Professor Minsky would ask us such disarming questions. One of the reasons, I guess, is that Professor Minsky wanted us to be clear about why we want to create intelligent machines. He wanted us to understand what we want them to achieve. So when you ask him whether we can make conscious machines, he would ask in turn, are you conscious? All such questions can sound very puzzling at first, but when you think about them deeply, they have simple answers, and they greatly simplify our quest for intelligence. I wrote about this in one of my articles, a reflection on a research article by Professor Minsky: can machines be conscious.
So when Professor Minsky asks us the question “Are we conscious?” or “Do we know anything?”, he is doing two things. First, he challenges us to think about our self-knowledge. And it always turns out that this self-knowledge is very limited. He then consoles us by saying that it is alright to have this limited knowledge. This consolation is a recurring theme in his famous books, The Society of Mind and The Emotion Machine. Second, he assures us that it is perfectly alright to create machines with limited intelligence, since humans have limited intelligence as well. This is a remarkable thesis, and in my opinion, it is an enabler of growth in the area of artificial intelligence. Otherwise, if you read the first chapter of any contemporary book on artificial intelligence, you will find the quest for true intelligence to be a philosophical quagmire. What we can achieve tends to be limited by the thesis of Turing’s imitation game or Schrödinger’s cat, whatever that means.
As a matter of fact, Professor Minsky was one of the progenitors of artificial intelligence. A long time ago, he all but abandoned the fancy algorithms; he assumed their ever-growing, powerful presence in the world and started looking for a practicable theory of mind. What he came up with was the society of mind, which postulates that the mind is not made up of one monolithic process but of a bunch of small processes, each responsible for a certain specialty. He then talks about classifying these processes into certain types of modules, some of which are useful for calculations while others are responsible for self-reflection, and so on. It is a beautiful theory, and it has been somewhat adopted in the past under the umbrella discipline of cognitive architectures. Cognitive architectures were abandoned for a long time, but I think the time has come for their revival.
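That "bunch of small processes, each with a specialty" idea can be loosely sketched in code. This is a toy rendering, not Minsky's formalism: the agent names and task shapes are invented for illustration.

```python
# Loose sketch of the society-of-mind idea: no monolithic process,
# just small specialist agents, and a dispatcher that routes each
# task to whichever agent claims it.

class Agent:
    def can_handle(self, task): ...
    def run(self, task): ...

class Calculator(Agent):
    """A specialist useful for calculations."""
    def can_handle(self, task):
        return task["kind"] == "arithmetic"
    def run(self, task):
        return sum(task["numbers"])

class Reflector(Agent):
    """A specialist responsible for self-reflection."""
    def can_handle(self, task):
        return task["kind"] == "self-reflection"
    def run(self, task):
        return "reflecting on: " + task["about"]

class Society:
    """The 'mind': a collection of agents plus a record of what
    they have done. It does nothing itself except dispatch."""
    def __init__(self, agents):
        self.agents = agents
        self.history = []
    def handle(self, task):
        for agent in self.agents:
            if agent.can_handle(task):
                result = agent.run(task)
                self.history.append((task["kind"], result))
                return result
        return None  # no specialist claims this task

mind = Society([Calculator(), Reflector()])
print(mind.handle({"kind": "arithmetic", "numbers": [2, 3, 4]}))  # prints 9
```

The point of the sketch is the architecture, not the agents: intelligence emerges from routing among many narrow specialists, which is also the intuition behind the cognitive architectures mentioned above.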
So, all in all, it does not really matter whether machines are as good as humans or not. So long as we have the ability to improve them, we are doing a fine job. And it is possible to have machines that are better than humans in certain disciplines while their performance remains dismal in others. But this is how humans are too. Whether or not those machines know stuff will always prompt us to ask whether humans know stuff either. And that will keep challenging us to redefine the essence of epistemology.