Tuesday, November 20, 2012

GEB Chapter 10


Chapter 10: Levels of Description, and Computer Systems

From the moment I received this assignment I was filled with trepidation.  I believe myself to be relatively tech-savvy in that I can operate all my devices and generally troubleshoot when something is not working correctly.  However, open up the back of a computer, or an Xbox, or a TV, and I am totally lost.  The hardware (electrical connections) and the layered software (programming) are totally lost on me.  My mind does not work in that way, with binary code and software programming.  The closest I’ve come to understanding computer programming is when I took a Web Design class, and it was so far from my realm of knowledge that I found it mildly refreshing, but overall incredibly difficult.

Summary:

This chapter starts out in a not-so-intimidating way, and speaks of ideas I can understand.  We all hold in our minds the ability to conceptualize one situation in multiple ways.  For example, when we watch TV, we know that we are seeing an image compiled of thousands of small dots, but we see it as one whole image.  This is the same manner in which we understand our own selves.  We understand the human body as one unit that works as a whole.  But it is made up of different systems, and all those systems are made up of tiny individual cells.  The human brain can hold all of these different perspectives at once, and still understand the whole.

I was then surprised to see an example I had talked about earlier in the semester in a completely different class.  When novice chess players and masters are asked to look at a board of legally placed chess pieces for 5 seconds, the masters can more easily recreate it from memory.  This is because they see the board in chunks, and can visualize the legal moves available.  Often their chunks are misplaced on the board, but each chunk itself is correct.  However, when the board was randomly filled with pieces, with no sense of legality or reason, the experts were no better at recreating it than the novices.  This is because a position with no underlying logic cannot be chunked at all.

Then the book gets into writing code for a computer, and I start to get lost.  What I understand is that it is written in many different levels (just like nestings, wait what?!).  There’s machine language (101111001), and above that is assembly language, which basically groups the machine language into chunks, like the chess masters’ chunks.  Then there are programs to translate the languages into one another.  Above that is bootstrapping, which allows a minimal, incomplete program to be used to build larger, more complete programs.  This is like a child’s language development.  Once the child has enough language, it can use its language to acquire new language.
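The idea of assembly as "chunked" machine language can be sketched in a few lines of Python.  This is a toy illustration, not any real CPU's instruction set: the mnemonics and bit patterns below are invented for the sketch, and the "assembler" is just a lookup table that translates each human-readable chunk into raw bits.

```python
# Hypothetical opcodes, invented for this sketch (not a real instruction set).
# Each assembly mnemonic is a readable "chunk" standing for one bit pattern.
OPCODES = {
    "LOAD":  "0001",
    "ADD":   "0010",
    "STORE": "0011",
}

def assemble(program):
    """Translate a list of assembly mnemonics into a machine-language bit string."""
    return " ".join(OPCODES[instr] for instr in program)

print(assemble(["LOAD", "ADD", "STORE"]))  # -> 0001 0010 0011
```

A programmer thinks in the three chunks on the left; the machine only ever sees the bits on the right, just as the chess master sees clusters of pieces where the novice sees thirty-two individual ones.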

Then comes the operating system (Windows XP, Vista, Snow Leopard, etc.), which is the level between the human’s instructions and the machine language.  The purpose of this level is to “cushion” the user.  The user doesn’t want to think (why would they?) of all the workings of the machine language (for good reason: it’s complicated!).  They just want the machine to work the way it’s supposed to work.  When something goes WRONG, then they begin to realize how complicated and intricate the system actually is.  Many times it’s user error, because the user has to be extremely specific in their commands; otherwise the computer will get confused, even if the user thinks he is being clear.
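How literal a machine is about "extremely specific" commands can be shown with a minimal sketch (the commands and messages here are hypothetical, made up for illustration): the input must match exactly, or the program has no idea what the user meant.

```python
# Hypothetical command table for a tiny program.
COMMANDS = {
    "open":  "opening file...",
    "close": "closing file...",
}

def run(command):
    """Execute a command only if it matches the table exactly."""
    if command in COMMANDS:
        return COMMANDS[command]
    return f"error: unknown command '{command}'"

print(run("open"))   # understood
print(run("Open"))   # one capital letter, and the machine is lost
```

To the user, "open" and "Open" obviously mean the same thing; to the machine they are different strings, and the cushion of the operating system is what normally hides this literal-mindedness.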

When “flexibilities” are programmed into the computer, such as allowing for certain types of misspellings, a user may simply work within these new rules, and still see the computer language as being entirely rigid.  The computer is equally unaware of its own lower levels.  In the same way that we do not know why we are not producing as many red blood cells today, the computer is not “aware” of the operating system that is making it work.  A computer that can generate responses to questions does not understand that it is a computer; it is simply carrying out a function.
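A programmed-in "flexibility" can be sketched the same way (again with invented commands and misspellings): the computer tolerates a few anticipated variants, but the tolerance is itself just another rigid rule, and anything outside the listed variants still fails.

```python
# Hypothetical table of tolerated misspellings, chosen in advance by a programmer.
ALIASES = {
    "open": "open",
    "opne": "open",   # anticipated typo
    "oepn": "open",   # anticipated typo
}

def normalize(command):
    """Map a tolerated variant to its canonical command, or None if unrecognized."""
    return ALIASES.get(command)

print(normalize("opne"))    # accepted: this typo was anticipated
print(normalize("oppen"))   # rejected: the rules only moved, they didn't vanish
```

The boundary of what the machine accepts has shifted outward, but there is still a boundary, which is the chapter's point about rigidity surviving beneath apparent flexibility.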

Hardware vs. Software.  Hardware is physical machinery, and software is programming.  A piano is hardware; sheet music is software.  Humans have this too.  Our brains are made of a certain number of cells and neurons; we cannot change that, so it is our hardware.  However, we can, and do, change the way we think and what we think, because that is our software.

Weather is a good example of how we look at intermediate phenomena.  We understand that weather encompasses many things; it is the higher level.  Then there are the molecules that make up the water and the air in the atmosphere.  That is the lower level.  The intermediate level encompasses our understanding of rain, wind, tornadoes, hurricanes, and snow.  The question is posed whether there are other types of intermediate levels that occur that we don’t even know about, and if we did, would they help us understand weather even more?  The weather movements described are simply parts of a whole.  In the same way, football players are individual players, but also members of the team.  They retain their individuality, but become slightly different when associated with the whole.

At this point the chapter starts talking about quarks (which my dictionary defines as any of a number of subatomic particles carrying a fractional electric charge, postulated as building blocks of the hadrons; quarks have not been directly observed, but theoretical predictions based on their existence have been confirmed experimentally).  I’m completely lost in the quarks section; I mean, not a CLUE what it’s talking about.  Luckily it’s only two sections of the chapter, so… moving on…

By using chunked models (humans are a collection of cells and molecules) we lose specificity in order to simplify an idea enough to understand it.  We chunk together our estimations of people’s behavior.  For example, if a joke is told, there are a few likely possibilities: to laugh, or not to laugh.  The possibility that someone will go climb a flagpole is small.  Therefore we chunk together probable behaviors in order to be prepared, but we could potentially be losing sight of behaviors that are not common, but are possible.

A computer can only compute what you ask it to.  But this idea is bigger than just “tell it to do something and it does.”  We can ask it for something we do not understand, and it will tell us.  We do not have to know in advance exactly what kind of answer it will give.  In this case, it is telling us what we want to know, even if we didn’t know exactly what we were asking for.

The last big conclusion from this chapter is the “epiphenomenon.”  The author’s computer works very well with up to 35 users, but at 35 users, the operating time becomes incredibly slow.  So the author suggests to the computer programmer: just go into the program and change the 35 to 60.  This isn’t how it works, though.  That’s like telling the runner who sprints 100 yards in 9.3 seconds to do it in 8.6 seconds instead.  It’s simply a constraint of the physical makeup of the system.  An epiphenomenon is “a visible consequence of the overall system organization.”  Gullibility in the brain is an example: it is not programmed in, and cannot be removed; it is simply a consequence of the individual makeup of the person.
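The idea that the 35-user threshold is nowhere written in the program can be sketched with a simple queueing-style model (the numbers and formula are invented for illustration, loosely in the style of an M/M/1 waiting-time curve): no line of code says "slow down at 35 users," yet a slowdown near 35 emerges from capacity and demand.

```python
# Hypothetical system capacity: jobs it can serve per second (invented number).
CAPACITY = 40.0

def response_time(users, demand_per_user=1.0):
    """Average response time as load approaches capacity (toy queueing model)."""
    load = users * demand_per_user
    if load >= CAPACITY:
        return float("inf")          # saturated: the queue grows without bound
    return 1.0 / (CAPACITY - load)   # time explodes as load nears capacity

for n in (10, 30, 35, 39):
    print(n, "users ->", round(response_time(n), 3), "s")
```

There is no "35" anywhere in the code to edit; the threshold is an epiphenomenon of the whole system's organization, just as the sprinter's 9.3 seconds is a consequence of his physiology rather than a number he can simply be told to change.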

The last question, to be developed in later chapters: what is the difference between the brain and the mind?

What I have taken from this chapter is a better understanding of computer levels, as well as the similarities between the human brain, and human interaction, and how those relate to the technical memory of computers.  We are not so different.  And yet, I still feel we are worlds apart.
