Monday, June 12, 2017

Weekly Wrap-up Video | DART.ICE.01X Courseware | edX




So this was a week where there were lots of very interesting questions.
And let me just say right from the start that you should not
feel bad about being confused or puzzled by all of this
because this is not like the normal physics kind of world
where there is a question, and there may be a very specific answer.
Here we really don't know.
There is no obvious answer.
So these are more provocations, things for you
to think about, than me giving you all the answers that we know of.
So what we did is we selected a few topics that a lot of you
raised in your lingering questions.
But before we even go there, I wanted to just set things
straight in terms of definitions.
So one very important one is what is determinism?
So maybe Michelle can give me her version of determinism.
MICHELLE STEPHENS: So I think of determinism in science or physics
like, say I have a particle.
And a particle is sitting here at this point in time, or it has some velocity.
And I can calculate via the laws of physics
exactly the trajectory of the particle in time.
So I know where it's going to end up at any future date.
And I also know where it would have been in the past, given this instant.
Now of course in the real world, we have lots of particles, lots of information
to keep track of.
We can't always figure things out exactly like we'd like to be able to,
but it's always possible in principle to do this for deterministic systems.
MARCELO GLEISER: So that's exactly right.
In a deterministic system, ideally if you
know the initial position and the velocity of that point particle,
you'll be able to predict where it's going to be in the future.
And a very good example of a deterministic system, though not
a perfectly deterministic one, is the solar system.
Astronomers can tell you, look, there is going
to be an eclipse, a total eclipse of the sun, on the 15th of August of 2048.
Look at that.
They're predicting something decades ahead of time.
How can they do that?
Well, they can do that because using the equations of Newtonian mechanics
and gravity, you can actually predict this.
So that's a predictive, deterministic system.
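To make the idea concrete, here is a minimal sketch of deterministic prediction in Python, for nothing fancier than a ball thrown straight up under constant gravity (an illustration, not part of the course material): the initial position and velocity, together with the equation of motion, fix the state at any later time.

```python
# Deterministic prediction in miniature: a ball thrown straight up under
# constant gravity. The initial conditions plus the equation of motion
# fix the state at any later time; rerun it and you get the same answer.

g = -9.81            # gravitational acceleration (m/s^2)
x, v = 0.0, 20.0     # initial conditions: position (m) and velocity (m/s)
dt, t = 0.001, 0.0   # time step and clock (s)

while t < 3.0:                      # march the motion forward to t = 3 s
    x += v * dt + 0.5 * g * dt**2   # position update (exact for constant g)
    v += g * dt                     # velocity update
    t += dt

print(f"predicted height at t = 3 s: {x:.2f} m")
# Closed form for comparison: x(t) = v0*t + g*t^2/2, about 15.9 m at t = 3 s.
```

Nothing in that loop is left to chance, which is the whole point: the same initial conditions always produce the same trajectory.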
Of course, even the solar system is not perfectly deterministic
because there are always little errors, little fluctuations.
And those fluctuations accumulate in time.
And there is a timescale over which the solar system
itself could become chaotic, but it is very far in the future.
Nobody needs to panic.
But the point here is that the notion of determinism
is that there are underlying laws that, in principle, will define and determine
the behavior of physical systems.
And if you adopt a very strict, materialistic approach,
we are just a bunch of atoms, meaning particles,
functioning according to the laws of physics.
And so if we live in a strictly deterministic universe, in principle
we are also predictable.
And that is the notion that goes against free will
because if we are perfectly predictable, then in principle everything we are going to do is already known.
I didn't know, but it was in the stars, so to speak,
that I was going to be teaching this MOOC.
And this would have been known even before I existed.
That is very far-fetched, because there are many, many issues
with that sort of reasoning.
For example, to be deterministic, you need
to be able to know the positions and the velocities of all the particles.
And we just can't do that because how is the measurement going to be made?
Who is going to make that measurement?
Who could be everywhere in the universe at the same time
to measure the position of all the particles,
including all the particles you have in your brain and the velocity of all
the particles that exist in the universe at the same time?
That would violate causality, because information can travel
no faster than the speed of light.
So unless you are God and you are omnipresent and omniscient,
that just does not make any sense.
So this sort of naive, strict, classical determinism is just silly.
It's just not going to go anywhere.
But you still need to understand how does that clash with free will?
And we did a little research.
And we talked, and we have two very important recommendations.
First of all, watch my interview with Peter Tse, the neuroscientist,
in its entirety because it does go into a lot of detail about that.
And very excitingly, in 2018, he is going
to be teaching a course on free will.
So there you go.
It's going to be a whole MOOC based on free will with Peter.
And I very much invite you to do that.
That's an initiative from ICE, our institute.
So this is going to be, in a sense, a continuation of the current course.
So when people talk about different kinds of free will,
there is this idea of first order and second order.
So do you remember, Michelle, more or less what the first order is?
MICHELLE STEPHENS: Yes.
So first order free will is just that you have a choice
to be made, some decision, and you're free to make
that decision, free of outside influence in whatever way you will,
in some sense.
MARCELO GLEISER: So a tiger.
A tiger has first order free will because it could decide,
I'm going to go after the gazelle this way.
Or I'm going to go that way.
I'm going to take this path.
I'm going to take that path.
So that, in principle, is the first order kind of free will.
MICHELLE STEPHENS: Yes, and I think another important element to that
is that this tiger is able to consider the consequence of different actions.
He could have chosen differently.
And in some sense, he's thinking forward and saying,
I'm going to take this way because it's the fastest path to the gazelle
or something.
MARCELO GLEISER: So there's even an optimization procedure going on
in there.
And what about second order free will?
MICHELLE STEPHENS: So second order free will, which is what Peter Tse was
talking about, is more of a human thing.
Animals don't seem to have it as much.
That you have the freedom to choose what kind of chooser you're going to be.
So Marcelo, I think you said, well a tiger can't say,
I'm going to be a vegetarian tiger.
But humans can make those sorts of decisions
and reflect on how they're making the decisions.
MARCELO GLEISER: And here's a great one.
So we just got a puppy in our house.
It's a very little, beautiful little thing.
And I also have a five-year-old son.
So immediately my five-year-old son became a puppy too.
He's the puppy.
He's going to be the puppy.
He's great, and he's behaving like a puppy all the time.
The puppy does not do that.
The puppy cannot become a boy or become anything else but the puppy that it is.
So that's a great example, I think, of the difference between free will
at the level of humans and at the level of animals,
so first order, second order free will.
But the fundamental question is, if there
are underlying laws of nature that are controlling everything,
are we free or not to make our choices?
Or are we just living a sort of dream or illusion of freedom
when actually we are just like puppets in the hands of these laws?
And I think some people mentioned that the laws of nature are
something we must, in a sense, obey.
But that is not quite the same thing as strict determinism.
Yes, if you run a lot, you're going to sweat because your temperature is going to go up,
and there are forces you have to exert in order to lift your arm.
So these are all nice laws of nature.
But I don't think they're going to decide whether you're
going to get married or buy a bicycle.
These are things that happen at a much more complex level of mental processing.
And it does not really speak directly to this idea of the universe determining
how we make up our minds.
MICHELLE STEPHENS: Yes.
And both the first order and second order free wills
that we've been talking about are incompatible with that sort
of determinism because under determinism, you wouldn't even
have the option to make any other sort of choice.
MARCELO GLEISER: Now here's something complicated,
because we've been talking about the nature of reality
as being, really, unknowable.
The fundamental point we've been making in this course
is that you can know as much as you want about the world,
but you'll never know all of it.
There will always be something that you do not know.
And of course that means that you will never really know
what is the essence of reality.
So it could very well be that deep down at the very substrate of existence,
there are a set of laws that control everything.
But because we will never know them, because we are blind to that,
that blindness could be expressed as our free will.
So we think we're making choices because we are autonomous when, in fact, there
is something quite deep down there that is controlling us.
But since we don't know about it, we just make the choices.
And we are happy doing that.
Which brings us to another point that people made, which I think
is a great question, which is this--
I want to know what your answer, your opinion, on this is--
which is let's assume that we live in a simulation.
So that means that there are some gamers playing us,
and we are just characters in this video game.
But we don't know, and we can't know because the simulation is so good.
There are no glitches, so we can't know.
Does it matter?
MICHELLE STEPHENS: That is such a hard question to answer.
In some sense, no.
If we can never know, if we can never distinguish
whether we truly have free will or whether we
have this illusion of free will, whether we know we're in the simulation or not,
then it doesn't matter.
And I think it sort of ties into the discussion about
whether a video game could be created with players that have free will.
Well yes, in some sense we're programming the video game.
But let's say that we could make these players think like people.
They would still have to operate under the laws of the game,
which is what the laws of nature are for us, but they would
be making decisions that, in some sense,
we had already built into the rules of the game.
And they wouldn't know it.
There would be no way for them to think outside of that simulation.
MARCELO GLEISER: So that means: relax, because if this is a simulation,
it's such an amazing simulation that we just will not be able to find out.
In fact, in my book, The Island of Knowledge,
I do talk about some research that has been
done on how you could use high energy physics, collisions of particles,
to find glitches in the matrix, in the program,
because basically every simulation needs to coarse grain,
needs to give up resolution at small distances.
And so if you could probe the limit of that resolution with some experiment
and find that there is something crazy going on, then you could be suspicious.
But our gamers may be so good that they can always adjust their matrix so
that this doesn't happen.
So it's kind of a losing game.
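For the flavor of that argument, here is a rough back-of-the-envelope version (round numbers only, not a quotation from the book): a simulation run on a spacetime lattice of spacing a cannot represent momenta above roughly pi*hbar/a, so the most energetic particles we observe only tell us how fine any such lattice would have to be.

```latex
% Order-of-magnitude version of the coarse-graining argument:
% a lattice of spacing a cuts off momenta at about
p_{\max} \sim \frac{\pi \hbar}{a},
% so cosmic rays observed at energies up to E \sim 10^{20}\,\mathrm{eV}
% only bound the lattice spacing to roughly
a \lesssim \frac{\pi \hbar c}{E}
  \sim \frac{3 \times \left(2\times 10^{-7}\,\mathrm{eV\,m}\right)}{10^{20}\,\mathrm{eV}}
  \sim 10^{-26}\,\mathrm{m}.
```

Experiments can push that possible resolution limit down, but they cannot rule it out, which is exactly why it is a losing game.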
The other thing I wanted to mention is that there
is an issue with this "do we live in a simulation?"
argument, which is the following-- the gamers themselves could be simulations.
And the simulations could be simulations.
And so you have a network, an onion-like thing.
We're here, and then our gamers are simulated.
And their simulators are also simulated, and also simulated, and also simulated.
And this never ends.
MICHELLE STEPHENS: Turtles all the way down.
MARCELO GLEISER: Exactly.
So it's exactly the problem of what in philosophy we call the first cause.
Which is the issue that if everybody's being simulated,
who is the simulator, the first simulator, the one
that does not get simulated but gets to start all the simulations?
Of course, that's just a metaphor for God.
And the problem is that our minds just cannot put an end to this.
We need a first cause, a first kick to get things in motion.
And that is the limitation of the way we think about nature,
and we think about ourselves.
Perhaps because we are persons with a history.
We are born.
We have a beginning, and it is very difficult for us
to transcend that limitation.
But that's something for another discussion.
I wanted to also touch briefly on the issue of artificial intelligence.
So one of the interesting questions is: first, will artificial intelligence exist?
Second, will it be like a human mind?
And third, if it does, are we doomed?
So let's see.
Could an artificial intelligence exist?
What do you think, Michelle?
MICHELLE STEPHENS: Yes, sure.
The level of sophistication of that intelligence
is an open question.
And really I'm not sure to what extent we'll
be able to develop our computing power and memory to be
able to hold a true human-level artificial intelligence.
But I'm sure something will exist.
MARCELO GLEISER: So it turns out that, given the way computing power
is increasing right now, people estimate that by the year 2020
we're going to have supercomputers with enough processing power
to perform roughly the same number of operations per second as the human brain.
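(For a rough sense of the numbers, here is a back-of-the-envelope comparison; these are order-of-magnitude estimates, not figures from the course.)

```latex
% Brain "operations" per second, very crudely:
\underbrace{10^{11}}_{\text{neurons}} \times
\underbrace{10^{4}}_{\text{synapses per neuron}} \times
\underbrace{10\,\mathrm{Hz}}_{\text{typical firing rate}}
\sim 10^{16}\ \text{synaptic events per second},
% versus the fastest supercomputers of around 2017, at roughly
\sim 10^{17}\ \text{floating-point operations per second}.
```

On those very crude counts the two are within an order of magnitude of each other, which is about all the "matching the brain" claim amounts to.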
So people say, aha, this is going to be it.
We're going to be able to dump all that we know about human brains
into that machine.
And that machine is going to then develop consciousness,
and that consciousness is going to be just like human.
And I think that's a huge stretch for many reasons.
First of all, as we have learned in this course, to know about the brain
you need information about the brain.
And we know that information is going to be incomplete, first of all.
Second of all, you cannot think of the brain as a brain in a vat.
The idea that you can take a brain separate from the body
and have this floating intelligence, just this brain,
is simply not who we are.
We are completely integrated with our bodies.
Our minds cannot function without our bodies.
Sure, you can stimulate something here, and have your finger twitch.
Or you can feel happy or sad, depending on what chemicals you take.
But you need your body to be yourself.
So some kind of artificial intelligence may well come about,
which I think it might, because programming just keeps getting
more and more sophisticated, and eventually these systems
will have some sort of autonomy.
I don't know exactly how that emergence is going to happen.
But I'm also not going to say it's impossible because we just do not know.
But that's definitely not going to be us.
It's going to be something else.
Which brings us to the point that people like Stephen Hawking and even
Elon Musk have been making:
that this could be the last invention of humanity.
Do you know why?
MICHELLE STEPHENS: Well, the idea is that it could replace humans.
MARCELO GLEISER: Yes.
Basically if you create a machine which is more intelligent than we are,
then we could become obsolete.
And the machine is just going to dispose of us if it chooses to do so,
just like we could just kill all the gorillas in the world.
Apart from all these horrible poachers, we do have the power, if we so choose,
to go to Africa and machine-gun down all the gorillas.
That would be the extinction of a species.
Could the machines think of us as the gorillas and just get rid of us?
So that's the fear, the fear that machine intelligence will replace us.
Some people say, who cares?
It's the next step in evolution.
Well, I care a lot because I don't think so.
I think it's a horrifying idea to think that something non-human is
going to take our place.
Another possibility is something transhuman,
where we are not dumping our complete brains into a machine,
but enhancing ourselves through machines.
And we already do that.
Can you give me one example of how we do that today?
MICHELLE STEPHENS: Sure.
Well, there was Google Glass a little while ago.
MARCELO GLEISER: Google Glass.
I'll go even more mundane, cell phones.
Nobody here-- I'm sure you guys cannot exist without a cell phone anymore.
A cell phone is essentially a continuation of yourself.
It's part of who you are.
The apps that you choose are like a fingerprint of who you are.
The way you communicate and use it is just like an extension of your being.
So in a sense, we're already enhancing ourselves through a machine.
The next step is just to put this stuff inside our heads,
and then we start becoming cyborg-like.
So is that possible?
Sure.
And I think it's much more feasible and much more
probable than having a machine that is going to become a conscious mind
just like a human mind.
I have no idea what the machine is going to be like.
So going back to the issue of simulation and free will
because obviously within a simulation, we
may have the illusion of free will and stuff.
Some people said, but wait, people have to obey the laws of nature.
Aren't the laws of nature essentially our program,
so to speak, so that we are living in this program,
which is the laws of nature?
And in a sense isn't that what our simulation is?
You could invent another universe with different laws of nature,
and maybe creatures there will have a different behavior.
Possible, like in the multiverse we discussed in module 2.
But there is a fundamental difference, which is as far as we know,
nature operates by itself without a gamer making choices.
And so if nature has a gamer out there, that gamer
is just another name for God.
And we have nothing to say about that.
But what we think is that that's just not the case: nature is evolving on its own,
and we are trying to make sense of what it is.
We try very hard to make sense of it,
without being completely successful.
But it is undeniable that science has made tremendous advances
in understanding the nature of reality.
And I hope this course gave you a sense of that,
of how much we have learned and how much we still have to learn.
So I think that's it from us.
We hope you enjoyed this.
There are lots of questions that are not answered, and that's wonderful.
That's exactly how it should be.
We live in ignorance.
And the whole idea of science is to expand the shores of our ignorance,
so that we can venture into this ocean of the unknown.

Sunday, June 11, 2017

Conversation with Peter Tse: Artificial Intelligence



I mean, so the brain is just manifestly not a computer.
So the dominant metaphor of my field-- neuroscience,
cognitive neuroscience, psychology-- is that the brain is a kind of computer.
And this just fails.
Right?
Because there is no software/hardware distinction in the brain.
No computer is rewiring itself on a millisecond timescale.
And computers are not conscious.
So we know that our governing metaphor is false.
And I would argue that computers as they are currently realized
are very algorithmic.
And algorithms-- one simple way to think of it is as a thread of decisions.
There's this input.
And then there's a yes or no decision, and then a single output,
whereas the neurons are just radically different from that.
They're not just taking a single thread of input.
They're taking 10,000 inputs, integrating them, and then sending
hundreds or thousands of outputs.
So I'm skeptical.
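As a toy illustration of the structural contrast Tse is drawing, and only that (both snippets are, of course, still ordinary code, and neither models a real neuron), compare a single-threaded yes/no decision with a unit that integrates thousands of inputs at once:

```python
import random

def threaded_decision(x: float) -> str:
    """One input, one yes/no branch, one output: the 'thread of decisions' picture."""
    return "yes" if x > 0 else "no"

def neuron_like_unit(signals, weights, threshold=5.0) -> int:
    """Integrates ~10,000 weighted inputs at once and fires (1) if their sum crosses a threshold."""
    total = sum(w * s for w, s in zip(weights, signals))
    return 1 if total > threshold else 0

signals = [random.gauss(0.0, 1.0) for _ in range(10_000)]     # ~10,000 incoming signals
weights = [random.uniform(-1.0, 1.0) for _ in range(10_000)]  # one weight per connection

print(threaded_decision(0.7))              # -> yes
print(neuron_like_unit(signals, weights))  # -> 0 or 1, depending on the integrated input
```

Real neurons also rewire their connections continuously, which neither snippet captures; the code only makes the input/output asymmetry he describes easy to see.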

Friday, June 09, 2017






Go far out to sea

Let loneliness
caress you

until your skin
is thin enough

So thin
that your heart
sees me through it

that it was I
who caressed you,
who caress you

Go, go
Tommy Tabermann

Artificial Intelligence



If the brain is essentially
a machine, a device that can capture information about the world
and process this information into action,
we may wonder if it is possible to construct an artificial brain,
an artificial intelligence, or AI.
After all, we can model the brain as having hardware--
that is the neurons and the synapses that connect them--
and software, even though we don't quite know what the software is.
We understand that this software must be expressed
in terms of the firing of neurons and the flow of biochemicals in the brain,
but don't know how it works.



Some claim that soon enough, perhaps by the year 2040 or so,
the processing power of computers will be so enormous
and the sophistication of our programs so amazing
that they will have an intelligence vastly superior to our own.
This is sometimes called the singularity, a point in history
when machines become more intelligent than humans.
Such a possibility brings out all kinds of nightmarish visions,
like the old Frankenstein story.

Monday, June 05, 2017

Conversation with Peter Tse: Free Will



I think it's largely irrelevant to the issue of free will
because free will doesn't reside in this domain
of meaningless, pseudo-random finger movements.
It resides in the domain of human imagination, considering our options
and playing things out.
And that's an entirely different brain process
than preparing to make a meaningless motor action.



I believe that the human brain realizes not only first order
libertarian free will, the capacity to consider options
and for our mental processes to turn out otherwise,
but also a meta free will, or second order libertarian
free will, that allows us to become new kinds of nervous systems
a year from now, or 10 years from now.

Saturday, June 03, 2017

What Is the Nature of Free Will?



The notion that we have freedom or autonomy as individuals is called free
will.
We believe that we are free to will our lives in any way we want.



Or are we being deluded by thinking we are free when we are not,
and we are just puppets in the hands of a more powerful being?



This is the question of free will and it speaks directly
to the nature of reality.



If we can't know all of nature, isn't it possible
that deep down there are laws that do dictate everything?
Could nature be a clockwork mechanism, satisfying rigid laws,
and we just don't know about it?
It could, at least in principle.



Does it matter if we are characters in a simulation
and don't know about it like the slaves in Plato's cave?
Can we be that blind to reality?