Contemplations on free will


Those of you who know me well will know that I tend to get sucked into other people’s worldviews. Every now and again I discover a new intellectual podcaster that I really like, and I start compulsively reading up on this person’s ideas. Luckily for me, there are quite a few of these internet philosophers around who seem to feel the same way about their own ideas, and thus compulsively write them down and discuss them. Perhaps one day I will give in to this as well.

My latest podcast superhero has been Sam Harris, who has a background in philosophy and cognitive neuroscience. He is also known as one of the ‘four horsemen of atheism’ for his criticism of religion. You might have seen him come up in some of my earlier blog posts. What intrigues me about this guy is that he combines two big ideas in his work:

The first idea is that, to Sam Harris, ‘the unexamined life is not worth living’. (This dictum from Socrates is something I also adhere to.) And the only things we can directly examine in our lives, the only things we can be unambiguously sure exist, are our direct conscious experiences (thoughts, feelings, sensory experiences). In turn, the best way to study these experiences is to sit down on a pillow and, well, pay attention. This practice is better known as meditation, but for once I will not talk about that at length today.

[Image: Sam Harris]

His other big idea is that there is no free will. No such thing. Our brains and bodies are made up of atoms that follow physical laws; these give rise to ideas, and those ideas determine the choices we make. If you could rewind time and ask somebody the same question a trillion times, they would make the same choice every. single. time. The universe is one big deterministic machine, and there is nothing we can do about that. Given all the information about somebody’s brain, we could – in theory – predict exactly what they are going to do. And from this Sam concludes that there is no free will. We just kind of go with the flow and see where it takes us.

This last idea has been bugging me for quite some time now. You see, I certainly believe that there is such a thing as ‘free will’. In my own life I experience having free will all the time, and I know almost everybody around me experiences the same. However, with my own background in physics, Sam’s argument seems quite strong. I have given it a lot of thought and looked into many discussions on the topic, and every discussion I have seen so far has left me very unsatisfied. That’s why I present to you here my own thought experiment on how we should think about the concept of free will.

First of all, I would like to clarify that I do not believe in any supernatural or other dimension outside of the physical world all around us. The physical world is quite mystical enough for me already, thank you very much. This means that I also believe our body and brain are nothing more than a very complicated biological machine that happens to be able to perform some very complicated (quantum?) computations. So let’s, for argument’s sake, imagine the subject of our thought experiment to be an actual AI (artificial intelligence) machine that we humans have built, in some hypothetical world. It doesn’t matter whether this would actually be possible or not; that’s beside the point. The point is that I don’t know anybody who believes there is a spiritual dimension for robots, which gives us some more room to philosophize about free will.

So suppose we have built our AI machine, vastly more intelligent than us. And we realise that this machine, being more intelligent than us, must experience some level of consciousness, meaning that it would be ‘like something’ to be this machine. This poses an ethical question: would it be slavery to demand that this machine do whatever we desire of it? To ease our minds, humanity decides to give our AI machine the choice. Let’s ask our machine whether it wants to be 1) put into some kind of mainframe to control all the traffic and transportation in New York, or 2) placed in some remote monastery to contemplate the Meaning of Life, the Universe and Everything.

Since the machine will be more intelligent than we are, we won’t be able to understand the choice that it ends up making, nor will we understand how it arrived at that decision. We do know, however, that the machine is completely deterministic. After all, it’s just a computer made up of electric circuits that we designed.

Now let’s also suppose that our AI is not infinitely smart. It will still need to boil all the data it gets from its sensory components and other inputs down to some informative chunks. It is not very useful just to have a lot of data; we need meaningful chunks of information in order to make decisions.

I will elaborate on this last point a little more before we continue. In current neural networks we can clearly see this argument at work in the performance of networks that incorporate some form of pooling, that is, distilling a lot of data into a smaller pile of more meaningful data. In the example below, a 4×4 pixel image is reduced to a 2×2 image by taking the maximum value of every 2×2 block. We reduce the number of pixels, in other words the amount of data, by a factor of 4, but in return every remaining pixel carries more meaning. This technique is used often in neural networks and has been shown to improve their performance rather than degrade it. The point being: we can increase our understanding of a system by throwing away (a specific) part of the data.

[Image: 2×2 max pooling, reducing a 4×4 pixel image to a 2×2 image]
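To make the pooling step concrete, here is a minimal sketch in Python with NumPy; the 4×4 values are made up purely for illustration and are not taken from any real network:

```python
import numpy as np

# A made-up 4x4 "image" (values chosen arbitrarily for illustration)
image = np.array([
    [1, 3, 2, 1],
    [4, 6, 5, 2],
    [7, 2, 1, 0],
    [3, 8, 4, 6],
])

# 2x2 max pooling: split the image into non-overlapping 2x2 blocks
# and keep only the maximum value of each block.
pooled = image.reshape(2, 2, 2, 2).max(axis=(1, 3))

print(pooled)
# [[6 5]
#  [8 6]]
```

Sixteen numbers go in, four come out, yet each of the four says something more meaningful about its region of the image than any single original pixel did.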

Of course, we could design our AI not to throw away any data, but that would probably just mean it would take some 7.5 million years to answer our question. And then it would probably just give us some arbitrary number, like 42…

The above goes to show that we cannot just expect our machine to make a decision based on a big pile of data. It also needs to perform computations on this data in order to make sense of it.

And so the big moment arrives. We can now pose the question to the machine… And as we ask our AI to make its choice, it will start computing. It will start crunching numbers, reducing information to distill meaning; it might come up with new higher-order concepts and ideas to aid its decision-making. And in the end, it will just look at us with its cold iron eyes and tell us in a flat, monotone voice whether it chose option 1) or option 2). (You know how these AIs are, emotionless creeps…)

[Image: “I. BECOME. A. MONK. NOW. GOODBYE.”]

Now the question that remains is: did our AI have free will? Was the choice it just made completely free and voluntary, or should we say it in fact had no free choice in the matter?

So here is the way that I think about this:

Yes, the choice that the machine ended up making was a product of all its physical components and all its experiences in the past. However, that choice was also a result of the way the AI treated, reduced and manipulated all of this data. It could potentially forget some chunk of data all by itself, without any outside initiative to do so. Perhaps there was a corrupt part that altered its decision. Maybe the AI would even be able to completely make up some data and lie to itself about the origin of that data. The complete decision-making process happened by the AI, inside of the AI, according to all the preferences and faults the AI might have.

If we live in a purely physical world, as our AI surely would, then there is nothing more to being an AI than being a highly organised blob of atoms in space that happens to be able to think for itself. And having free will is nothing more than being able to make choices based on your own ideas, experiences and thoughts, without any other conscious entity forcing you to do so.

The fact that our thoughts, ideas and actions follow from the laws of physics in a completely deterministic way does not change anything about the meaningfulness of the concept of free will.

Yes, in theory we could predict exactly what the AI will answer. However, this would require us to model every single detail of the machine: every one of its components and every single bit of experience it has had in its existence. The only way to really do this is to build an exact replica of the AI, at which point we should ask ourselves the same question about the free will of the second AI machine. And if we continue down this track: nothing in the argument from determinism says the two identical AIs couldn’t flip a coin and agree to each take up a completely different activity at any moment. The tiniest deviations in their experiences could cause them to engage in completely different behaviour for the rest of their existence.

So the conclusion should not be that the deterministic nature of the universe somehow took away our ability to make a free choice. The concept doesn’t lose any part of its meaningfulness through this argument. The conclusion should be that we are the part of the deterministic universe that is making the free choice. We can make any choice that we want at any point in our lives, yet we choose to do that which fits with our past experiences, ideas and beliefs.

Free will means that a conscious entity is able to make its own choices based on the information available to it, without being forced by other conscious entities to do otherwise. There is simply no way in which the deterministic laws of physics that dictate the interactions between atoms and groups of particles can force me to do anything. Those particles cannot force me to do anything, because they are the building blocks from which my consciousness arises. The concept of free will simply doesn’t apply at that level of abstraction.

To say that the ‘freeness’ of this choice can be taken away by the deterministic nature of the universe is to say that our consciousness is somehow separate from the physical matter that we are made of, instead of an emergent property of that matter. It is to say that the physical universe can somehow use its knife of determinism and force our consciousness to make a choice, as if there could be an experience of ‘me’, of my own unique consciousness, separate from my brain and body being part of that same bully of a physical reality.


Perhaps Sam is confusing free will with some notion of ‘pre-determined will’: the notion that the way in which the universe will continue its development through time could, in theory, be completely determined. This, to me, seems like a very bulky and useless concept, however. True or not, the only way in which it could be useful is in determining what other people will do before they do it. But to do so you would need to rebuild their brains and bodies completely, which amounts to cloning a person. This is practically impossible and doesn’t help us understand the person in front of us one bit. All in all, I don’t see how this deterministic worldview changes anything about the usefulness of any psychological concept that we work with.

In his meditation app, Waking Up, Sam Harris talks about letting go of the idea that there is an entity inside of us, that we have an ego; that we are the thinker of our thoughts, the hearer of sounds and the seer of sights. Instead, he says, we should realise that our experiences are pure consciousness and that there is no way not to think and hear and see what appears to our senses. In other words, he asks you to cut out this layer of the experience of an ego. He asks you to experience the physical world around you directly, as being a part of it. He asks you to realise that there cannot be any separation. I think Sam shares a beautiful and powerful idea in this, and I would now like to ask Sam to do the same with respect to his ideas about free will.

 
