
Can Machines Think?


ETHREADZNY


Guest imported_El Mamerro

Hell yeah Browner, I remember when that site was part of your signature. The man is the best source for imagining scenarios, and I keep hearing amazing things about The Age of Spiritual Machines. I need to cop it.

 

As for the explanation of the Gödel Incompleteness Theorem, as stated above, it'll probably take me forever to go into detail... the book I'm reading took 500 pages to explain it. But here's an attempt at the gist of it.

 

 

All consistent axiomatic formulations of number theory include undecidable propositions.

 

 

Gödel crushed the mathematics world when he proved that no matter how perfect and consistent a formal system you define is (as long as it's powerful enough to express basic arithmetic), there will always be a statement within that system that can't be proven true or false, which in turn renders that system incomplete. It's the mathematical equivalent of the following sentence (the Epimenides Paradox):

 

"This sentence is false."

 

If the sentence is true, then it really is false, which means it's not really true, which means... you get the point.

 

Any formal system powerful enough to deserve being called solid and reliable can be made to talk about itself (self-reference), the same way the sentence above makes a statement about itself, without resorting to a meta-system. If it has to resort to a meta-system, it was a weak system to begin with. And once you get a formal system talking about itself, you can make it say "I will never say that G (the Gödel formula for that particular system; more on what this is later) is true"... and since the whole purpose of formal systems is to be able to decide if statements are true or false, you've handed it a question it can't answer. Your Ultimate Truth Seeking System fails.

 

All computers and machines are ruled by rigid internal codes that obey the rules of mathematical formal systems. Sure, it might be super complex code that can change itself and do all sorts of shit, but it still behaves according to rules and axioms. So what we're getting to is that no matter how insanely powerful your ultra-intelligent machine is, we can still ask it a question that will completely floor it. I snooped around online to find a good example of how this works, and I got this. It shows how you can derive G for this kind of machine, and use it to fuck it up:

 

From: Rucker, Infinity and the Mind

 

The proof of Gödel's Incompleteness Theorem is so simple, and so sneaky, that it is almost embarrassing to relate. His basic procedure is as follows:

 

1. Someone introduces Gödel to a UTM, a machine that is supposed to be a Universal Truth Machine, capable of correctly answering any question at all.

 

2. Gödel asks for the program and the circuit design of the UTM. The program may be complicated, but it can only be finitely long. Call the program P(UTM) for Program of the Universal Truth Machine.

 

3. Smiling a little, Gödel writes out the following sentence: "The machine constructed on the basis of the program P(UTM) will never say that this sentence is true." Call this sentence G for Gödel. Note that G is equivalent to: "UTM will never say G is true."

 

4. Now Gödel laughs his high laugh and asks UTM whether G is true or not.

 

5. If UTM says G is true, then "UTM will never say G is true" is false. If "UTM will never say G is true" is false, then G is false (since G = "UTM will never say G is true"). So if UTM says G is true, then G is in fact false, and UTM has made a false statement. So UTM will never say that G is true, since UTM makes only true statements.

 

6. We have established that UTM will never say G is true. So "UTM will never say G is true" is in fact a true statement. So G is true (since G = "UTM will never say G is true").

 

7. "I know a truth that UTM can never utter," Gödel says. "I know that G is true. UTM is not truly universal."

 

 

Think about it - it grows on you ...
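If it helps to see the trick in action, here's a toy Python sketch of steps 3-6. This is my own invention, not Rucker's, and every name in it is made up: a "truth machine" here is just a function that takes a claim and tries to say whether it's true.

```python
# A toy version of Rucker's steps 3-6. A "truth machine" is any function
# that takes a claim (a zero-argument function returning True/False) and
# answers whether it's true. All names are invented for illustration.

def make_G(utm):
    """Build the Godel sentence for this particular machine:
    G is true exactly when utm does NOT say G is true."""
    def G():
        return not utm(G)
    return G

# A machine that rashly affirms every claim it's given:
yes_machine = lambda claim: True
G1 = make_G(yes_machine)
print(G1())    # False -- it said "true" about a false sentence (step 5).

# A machine that answers by honestly evaluating the claim:
honest_machine = lambda claim: claim()
G2 = make_G(honest_machine)
# print(G2())  # RecursionError -- to evaluate G it must first evaluate G,
#              # so the honest machine can never finish answering (step 6).
```

Either way the machine loses: it blurts out a falsehood, or it stays silent forever... and G is true precisely in the second case.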

 

 

 

And there you have it. The human mind can understand truths that machines never will. But let's take it one step further. Say we take the standard axioms of the formal system in question (n) and add G as an axiom, so that it becomes n+G. Axioms are taken as true by definition, so asking this new system the G question should return a valid answer, and indeed it does. The kicker is, once we have boxed in this new formal system (n+G), we can devise a new G statement, call it G', that does to n+G exactly what the first G did to the original system. And you can keep going, building a system (((n+G)+G')+G'')+G'''... and so on, with every extension falling to a bigger G statement. There's no way out of the Gödel black hole.
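To make the patching game concrete, here's a hypothetical continuation of the sketch above (again, all the names are mine, made up for illustration): bolting G onto the machine as an axiom just hands you a fresh machine with a fresh G'.

```python
# "Adding G as an axiom" means wrapping the machine so it affirms G by
# fiat. The patched machine then has its own fresh Godel sentence.
# Everything here is invented for illustration.

def make_G(utm):
    def G():
        return not utm(G)   # G: "this machine will never say G is true"
    return G

def extend(utm, axiom):
    """n + G: a new machine that affirms the added axiom outright."""
    return lambda claim: True if claim is axiom else utm(claim)

machine = lambda claim: claim()      # the original "honest" machine (n)
for _ in range(3):                   # n+G, (n+G)+G', ((n+G)+G')+G''...
    G = make_G(machine)              # this machine's Godel sentence
    machine = extend(machine, G)     # patch it: now it answers G fine...
    # ...but make_G(machine) immediately yields a new unanswerable G'.
```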

 

I still have to read and learn more about this, but it seems like there is still hope for machines. As you keep adding G's to your formal system, shit starts getting extremely complex, and there is a limit to the ability humans have to derive new G's from these increasingly complex systems... so there may come a point where a human being won't be able to out-Gödelize a complex-enough machine. This is just one of the arguments I'm beginning to brush up on, but there are stronger ones up ahead. Shit is crazy, son. Beer,

 

El Mamerro


Originally posted by El Mamerro

Yes, we will one day, but we aren't even close to that. The AI you see in games is pretty fucking dope, but you have to understand that the algorithms involved are very specific and only take care of specialized functions. The algorithms involved in how a real human brain makes those same decisions are much more complex, and handle a lot more than learning player patterns and predicting moves. The most advanced thinking machines we have right now are still on the insect level. They're pretty badass insect robots, though.

 

AI still has a lot of hurdles to clear before (HOLY SHIT A GUY JUST PASSED IN FRONT OF MY WINDOW, WHICH IS NOT NORMAL BECAUSE IM ON A 2ND FLOOR. I just realized there's a huge electrician truck outside my place with a dude fixing up the lines around my apt.) we can really talk about human-level thinking machines. But as far as we know, the only reason people believe that it'll never happen is human pride (and the Gödel Incompleteness Theorem). Beer,

 

El Mamerro

 

We're actually not as far off as you think. Researchers such as Crick, Koch, and Chalmers have been moving us forward by leaps and bounds. They currently believe they have identified about 40% of the neural correlates that give rise to consciousness, which is what you'd need to map before replicating it in a conscious machine, a very scary thought. Last I heard they were doing work on the inferior temporal cortex as perhaps a center for "conscious perception". We probably won't achieve it in our lifetimes, but perhaps by the end of this century. Scary.


Guest imported_El Mamerro

40%?? Jesus, that sounds like a damn lot. I hadn't heard we'd gotten that far. Still, mapping these neural correlates is only one part of the whole problem; there is still the problem of developing an accurate parallel of them, at both the hardware and software levels... I agree, I don't think it'll come until the middle or end of the century.

 

But then again, humans are awful at predicting this kind of stuff.

 

What I meant about the insect machines is that insects are the most complex animals we've managed to mimic with machines... we don't yet have an artificial dog or a frog or a monkey, which would logically be less complex than a human being. However, we do have Michael Jackson, and that's probably the closest thing we'll see to an artificial human in our lifetimes. Beer,

 

El Mamerro


  • 2 weeks later...

A Little Something I wrote on this topic for a class: awfully long...

 

1a) In the article “Computing Machinery and Intelligence”, Alan Turing proposes a test, now called the “Turing Test”. The test is an “imitation game” because the computer is trying to imitate human behavior. It consists of a machine, a human, and an interrogator, each in a different room. The interrogator’s job is to distinguish between the human being’s and the computer’s typed responses to questions. Turing believes that if the interrogator is unable to distinguish the human’s behavior from the computer’s, this demonstrates the computer’s intelligence. If the computer were able to deceive the interrogator into believing it was human, it would be exhibiting human thought processes.
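(As a rough illustration, and not anything from Turing’s article: the structure of the game can be sketched in a few lines of Python. The interface below, including ask(), guess_machine(), and the respond functions, is entirely invented.)

```python
# A sketch of the imitation game's structure, with an invented interface:
# the interrogator exchanges typed questions with two hidden respondents
# and must guess which one is the machine.

import random

def imitation_game(interrogator, human_respond, machine_respond, rounds=5):
    """Return True if the interrogator correctly unmasks the machine."""
    players = [human_respond, machine_respond]
    random.shuffle(players)                  # hide who is behind "A" and "B"
    labels = dict(zip("AB", players))
    transcript = {label: [] for label in labels}
    for _ in range(rounds):
        for label, respond in labels.items():
            question = interrogator.ask(label, transcript)
            transcript[label].append((question, respond(question)))
    guess = interrogator.guess_machine(transcript)       # "A" or "B"
    machine_label = next(l for l, r in labels.items() if r is machine_respond)
    return guess == machine_label
```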

1b) There are many objections to the idea of artificial intelligence to which Turing responds. The first is the “theological objection”, which suggests that a computer lacks the immortal soul a human uses during thinking, and without which thinking is impossible. Turing states that the Almighty can confer a soul wherever He so desires; in building a computer, human beings would merely be making a container for that soul. It is God’s will, and within His power, to put a soul in the computer.

The second objection Turing responds to is the “Heads in the Sand” objection, which suggests that some people are simply unwilling to accept the notion that computers can think, because the notion is upsetting and makes them feel that human superiority is threatened. Turing argues that there is no point in arguing with people who believe they are superior to all other forms of living and non-living things.

The third objection is the “mathematical objection”, which suggests that a computer is unable to answer all types of questions correctly. Turing argues that if one machine cannot answer a given problem, another machine should be able to. He also points out that human beings have plenty of limitations as well, and we frequently give incorrect answers to questions.

The fourth objection is the “argument from consciousness”, which suggests that thinking requires consciousness, and that computers lack the ability to feel pleasure, anger, and pain. Turing points out that the only way to know for certain what is going on in another’s consciousness is to be that other person or machine. Believing we can know only our own mind and perceptions is a solipsist point of view that would make communication very difficult; rather, we generally assume that everyone thinks. Why not, then, accept that machines can think as well? Turing also uses an example from the “Turing Test”: he shows a witness’s responses to an interrogator examining a line from a sonnet, responses that could equally have come from a sonnet-writing machine.

The fifth objection, the “argument from various disabilities”, suggests that a machine will never be able to do any number of things that humans can do: have initiative, be kind, be beautiful, show love, or tell right from wrong. Turing states that those who make these arguments offer no support or evidence for them. He also claims that people draw such conclusions from the machines they have spent time with, and these conclusions are too general, because the machines these people have encountered were designed for specific purposes. Other machines could well be designed to have the supposedly missing capacities.

The sixth objection, “Lady Lovelace’s objection”, suggests that a computer can only do what we order it to do and cannot originate any work of its own. Turing replies that this complaint is like saying “there is nothing new under the sun”: we can never be sure that our own original work is not based on input we received from someone else, so giving a computer input and seeing what it does with it is like seeing it originate something. Turing then suggests that the real objection may be that machines don’t surprise us; however, he finds that machines can and frequently do surprise him. Finally, Turing claims that originality may be a programming problem, and suggests that, programmed correctly, machines could show a learning process similar to that of a preschooler.

The seventh objection, the “argument from continuity in the nervous system”, states that the human nervous system is not a discrete-state machine, and therefore a computer cannot imitate a continuous machine like the nervous system. Turing responds that the difference does not matter in practice: in the “Turing Test”, the interrogator is unable to take advantage of the difference between the human’s and the computer’s responses, even though their internal workings are different.

In the eighth objection, the “argument from informality of behavior”, it is suggested that computers operate according to rules, whereas humans do not always do so; rather, humans make judgments depending on the particular situation. Turing suggests that humans do have sets of rules for how to act in different contexts, and that the only way to discover the laws that regulate human behavior is scientific observation. We simply have not discovered all these laws or rules yet.

The ninth and last objection is the “argument from extrasensory perception”. If E.S.P. exists, then a computer would be unable to recreate it. Turing reacts by stating how difficult it is, as a scientist, to accept the occurrence of extrasensory perception, but that the evidence for it is overwhelming. Tests using the computer would have to be carefully adjusted to rule out the influence of E.S.P.

1c) Turing suggests that most computer simulation programs function very well but lack the “common sense” knowledge that an average preschooler handles easily, and he makes suggestions about how to overcome this problem. He believes a program can be created that will mimic the mind of a child and learn at that level. Since a young child has very little stored in its mind, he believes the initial phase should not be too difficult to accomplish: there would not be too much to put into the computer to get it to resemble a very young child’s intellect. Educating this child machine, however, may be a long and difficult process. He recognizes that such an education will have to proceed differently from the way we teach the average child, because of the differences between a machine and a human being, but there can be some similarities. He mentions the importance of “reward and punishment” in the teaching process for both the computer and the human child; for the computer, however, rewards and punishments have to be transmitted through unemotional channels. He also describes how certain definitions and propositions would be stored in the computer, some of which would act like imperatives, so that the appropriate action takes place automatically when a well-established fact is input. Turing also proposes that a random element be put into the child learning machine, because this would be helpful in certain types of problem solving. He recognizes that much needs to be done, but is hopeful that with advances in technology, machines will eventually be able to mimic the mind of a human and even exhibit common sense.
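(Turing specifies no mechanism, but the reward-and-punishment idea is easy to caricature in code. The toy sketch below is entirely invented: a learner with no built-in knowledge whose answers are shaped purely by feedback.)

```python
# A toy "child machine": it starts knowing nothing and learns only from
# reward (+1) and punishment (-1). All names and numbers are invented.

import random
from collections import defaultdict

class ChildMachine:
    def __init__(self):
        self.scores = defaultdict(float)   # (question, answer) -> preference

    def answer(self, question, options, explore=0.1):
        # Usually give the best-scoring answer; occasionally try a new one.
        if random.random() < explore:
            return random.choice(options)
        return max(options, key=lambda a: self.scores[(question, a)])

    def teach(self, question, answer, reward):
        # Reward reinforces the answer; punishment discourages it.
        self.scores[(question, answer)] += reward

# Drill the machine on one fact; feedback alone shapes its behavior.
child = ChildMachine()
for _ in range(50):
    a = child.answer("2+2?", ["3", "4", "5"])
    child.teach("2+2?", a, +1 if a == "4" else -1)
print(child.answer("2+2?", ["3", "4", "5"], explore=0))   # prints "4"
```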


