
The Chinese Room Argument & Computer Analogies


Linkzelda


The Chinese Room Argument

 

Searle imagines himself alone in a room following a computer program for responding to Chinese characters slipped under the door. Searle understands nothing of Chinese, and yet, by following the program for manipulating symbols and numerals just as a computer does, he produces appropriate strings of Chinese characters that fool those outside into thinking there is a Chinese speaker in the room. The narrow conclusion of the argument is that programming a digital computer may make it appear to understand language but does not produce real understanding. Hence the “Turing Test” is inadequate. Searle argues that the thought experiment underscores the fact that computers merely use syntactic rules to manipulate symbol strings, but have no understanding of meaning or semantics.

 

Basically:

- A man is in a room following a computer program for replying in a language he has no knowledge of or fluency in (e.g., a native English speaker who knows no Chinese)

- The Chinese characters are sent under the door by Chinese speakers

- The man uses the program as a tool to fool those outside the room into thinking there is a Chinese speaker inside (a rough sketch of this rule-following is below)
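To make the rule-following concrete, here's a rough, purely illustrative sketch in Python of what the operator in the room is doing. The rulebook entries, the characters, and the function name are all made up for this example; Searle's thought experiment assumes a full program that can handle any input, not a tiny table like this. The point is only the shape of the process: match incoming symbols against rules, copy out whatever the rules say, and understand nothing along the way.

```python
# Toy "Chinese Room": the operator mechanically follows a rulebook.
# These two entries are invented for illustration; Searle's program
# would cover any possible input, but the principle is the same.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What's your name?" -> "My name is Xiao Ming."
}
DEFAULT_REPLY = "请再说一遍。"          # "Please say that again."

def operator(slip_under_door: str) -> str:
    """Look the incoming symbols up and copy out the matching reply.
    No step here involves knowing what any of the symbols mean."""
    return RULEBOOK.get(slip_under_door, DEFAULT_REPLY)

# From outside the room, the reply looks like that of a Chinese speaker:
print(operator("你好吗？"))             # prints 我很好，谢谢。
```

Everything the program "does" lives in the table and the lookup; whatever meaning is in play was put there by whoever wrote the rulebook, which is exactly the point pressed further down.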

 

The broader conclusion of the argument is that the theory that human minds are computer-like computational or information processing systems is refuted. Instead minds must result from biological processes; computers can at best simulate these biological processes. Thus the argument has large implications for semantics, philosophy of language and mind, theories of consciousness, computer science and cognitive science generally. As a result, there have been many critical replies to the argument.

 

In short:

- Goodbye, computer analogies, your Christmas has been cancelled.

 

Computer Analogies in Relation to Tulpas

 

Whether it’s explaining how the mind goes about instantiating sentience, or merely giving the impression of sentience, computer analogies seem to be the go-to source for conceptualizing what may be going on. But there’s a huge oversight in treating processes of the brain as akin to computer simulations, and what have you: it switches the roles of what came first. In other words, it’s the programmer who instilled whatever code, syntactical data, or algorithm the computer runs; it can’t be vice versa, with the computer’s algorithm producing the programmer. Because this reversal is believed to be possible, people clinging to that view can believe there is something that’s more artificial, or less artificial, than the other; and this presumption knows no bounds in relation to tulpas.

 

For example, a person could frame parroting, or even an extreme presumption like unconscious parroting (even though I think that’s a contradiction), this way:

- They would assume that a tulpa would be limited based on an algorithm given to them. Okay, seems like a harmless usage of analogy there, right?

 

- But, they would have to assume the brain would be running the algorithm, just like the computer would be running the algorithm set by the programmer….see where this is going?

 

 

- It leads to question-begging: if it’s the brain running the algorithm, then who created it? One could say the brain, but that’s circular reasoning; the brain would need to run something implied to have been created by someone else, while still having the capacity to create the very thing it’s running.

It hints that biological processes in the brain allow one to gather the meaning of something, which is something a computer cannot understand; the latter just runs based on certain constraints from the programmer.

 

 

- Apply this in the context of tulpas: tulpas would be contingent on algorithms set upon the brain, and the brain runs these programs; but a computer cannot hold both capacities of creating them and running them, it can only run them. And that’s the point: how is there an understanding through that transfer, especially in the case of the Chinese characters in the Chinese Room Argument?

 

 

Questions

 

- Do you think a tulpa is akin to artificial intelligence in some way?

 

- Do you think this thought experiment can truly refute computer analogies?

 

 

- Can computer analogies be sufficient even though they can’t explain how an “understanding” of something comes to be?

 

- Do you think creating a servitor, for instance, would be an example of artificial intelligence?

 

- Is the mind capable, in your opinion, of creating artificial intelligence while still having a conscious experiencer (the host by default, for example)? Why?

 

 

- If a tulpa is based solely on a computer analogy of the brain running a simulation of sentience, wouldn't this imply that there can't be any hope of actual sentience or understanding for a tulpa? In other words, they would look and behave like a sentient being, except they can't consciously experience things (i.e., p-zombies/philosophical zombies).

 

- Is parroting akin to the host believing their tulpa is following an algorithm driven by the host’s willpower, or just the brain running the “algorithm,” e.g., for sentience? Do you believe this is how others feel unconscious parroting happens as well? (By unconscious parroting I mean, obviously, parroting done at unconscious levels; however, IMO, for it even to be considered that, a metaphor has to be laid over a different conscious experiencer, e.g., our imagination. In other words, the imagination gets personified to the point of being treated as a conscious experiencer controlling the tulpa’s actions, even though parroting would involve the host consciously controlling the actions of their tulpa.)

 

 

Note: The discussion doesn’t have to be limited based on these questions alone.


The distinction between "real understanding" and "the illusion of understanding" is arbitrary.

 

If I provide input and the entity responds inappropriately, only then can you assert that it doesn't "understand" language.

 

Code (like everything) boils down to "if X occurs, Y must occur".

Biological creatures are not special in this regard.

The idea of free will is a construct designed to reinforce the ego.

In actuality there are only degrees of slavery.

One man may be "free" from the whims of another man, but both men are slaves to causality.

"For small creatures such as we the vastness is bearable only through love." - Carl Sagan

Host: SubCon | Tulpas: Sol, Luna, Alice, Little One, Beast and Solune (me) | Servitors: Odonata, Guardian

 


Guest Anonymous

Do you think a tulpa is akin to artificial intelligence in some way?

 

Well, sort of. If you mean that tulpas are sort of a hidden process in the mind, then yes. To assume that a tulpa is any different from any other function of the human brain, like they are somehow a unique occurrence in the brain, is probably false. I see no reason to think the tulpa effect is unique. The human brain can learn to act on complex processes subliminally. Examples of this are things like a musician, whose mind seems to create music spontaneously after many years of composing; the music just sometimes seems to come to him. Or it could be a computer programmer who has the experience of seeming to have entire algorithms that come to him in a sudden flash of insight. It could be any complex thing a person improves on with practice, such as ice skating, playing a guitar, or driving a car. When first learning the skill, a human being needs to consciously think about what they are doing. Over time, as they practice, much of the process goes subliminal, and so you end up with things like a person driving to work and getting lost in his thoughts, only to suddenly realize he is most of the way there before he is fully conscious of the act of driving. The human brain does thousands and thousands of complex thoughts subliminally every day.

 

 

Do you think this thought experiment can truly refute computer analogies?

 

No. Searle may be wrong. The thought experiment assumes that the person translating the Chinese characters is using a program routine that does not allow full intuitive understanding of the symbols of the Chinese language, no real deeper understanding. It's like the p-zombie argument for sentience. There is no way to know for sure whether his assumption is correct. Artificial intelligence simulates human intelligence by giving a computer a memory and a way to process information, using advanced algorithms that draw on a storehouse of information. The computer, as it learns and gains more memories, gets closer and closer to seeming human in its ability to process that information. The Turing Test assumes that an AI computer will eventually reach a point where it is impossible to distinguish the simulation from the real deal. When it is impossible to distinguish the difference, the difference becomes irrelevant and the computer has become virtually as sentient as a human being. Just as with the p-zombie argument, one will never be able to prove whether the computer is actually sentient or not.
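As a rough sketch of the "memory plus algorithm" picture described above, here is a toy Python example, entirely my own construction rather than anything from this thread or from real AI practice: a responder that stores past exchanges and answers new prompts by reusing the reply attached to the most similar remembered prompt. It looks more capable as its storehouse grows, while saying nothing about whether it understands anything.

```python
import difflib

class StorehouseResponder:
    """Toy 'memory + algorithm' responder: remembers (prompt, reply) pairs
    and answers a new prompt with the reply attached to the most similar
    remembered prompt. Nothing here models meaning."""

    def __init__(self):
        self.memory: dict[str, str] = {}    # the storehouse of information

    def learn(self, prompt: str, reply: str) -> None:
        self.memory[prompt] = reply         # gaining more memories over time

    def respond(self, prompt: str) -> str:
        if not self.memory:
            return "..."
        # Pick the remembered prompt that looks most like the new one.
        closest = max(
            self.memory,
            key=lambda p: difflib.SequenceMatcher(None, p, prompt).ratio(),
        )
        return self.memory[closest]

bot = StorehouseResponder()
bot.learn("how are you", "doing fine, thanks")
bot.learn("what is a tulpa", "a companion developed through focused, repeated practice")
print(bot.respond("how are you doing"))     # reuses "doing fine, thanks"
```

Whether piling up more memory and better matching ever crosses over into "real understanding" is, of course, the very question the Chinese Room is meant to press.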

 

 

Can computer analogies be sufficient even though they can’t explain how an “understanding” of something comes to be? Yes, they can still be sufficient. See my answers above.

 

Do you think creating a servitor, for instance, would be an example of artificial intelligence? Not exactly; it would be a subliminal process of the mind. See my answers above.

 

Is the mind capable, in your opinion, of creating artificial intelligence while still having a conscious experiencer (the host by default, for example)? Why? Yes, although I think the term "artificial intelligence" is misleading. It is just subliminal thought. It is intelligence. Not all human thought is conscious. Most of it already isn't.


I don't think that analogies are claiming that the brain functions like a computer but instead are putting ideas in different terms to create an understanding. You can compare a tennis game to paint drying using an analogy, but that doesn't mean that tennis and drying paint actually function the same way. Analogies aren't meant to be taken literally.

We're all gonna make it brah.

 


The distinction between "real understanding" and "the illusion of understanding" is arbitrary.

 

It doesn’t seem arbitrary to me. The former presumes that an entity can put things into context; the latter is that, in spite of that appearance, it still isn’t genuine understanding. The difference is as clear as night and day, IMO.

 

If I provide input and the entity responds inappropriately, only then can you assert that it doesn't "understand" language.

 

Well, let’s put this in the context of a lie. We all know a lie involves feeding deceptive implications to someone. If we’re lying to someone and they sense that something doesn’t sound quite right, would we then assume they don’t “understand” in general? If they notice something that seems off, they will usually respond in a way that isn’t appropriate to what’s expected. If you’re providing input to a computer program or AI, chances are it’s not going to understand it. It’s not as arbitrary as it looks; the assumption that AI can bring about new, novel things that make it seem as sentient as a human being probably rests on the assumption that programmers have figured out how to instantiate those qualities of putting things into context.

 

It’s actually a matter of syntax vs. semantics. A program may have the tools to create syntactical strings, code, and so on, but semantics requires some kind of understanding of language; not just language, but a capacity to apply that syntax toward creating meaning. AI, and by extension tulpas, doesn’t become an exception to this, IMO. As some people have stated, the analogies aren’t to be taken literally. They’re just a tool for building a model of how the brain may work. But ultimately, it’s constrained to being a model, not an exact demonstration of the back office of the brain.

 

So, with the if-then statement you provided, about input that doesn’t get an appropriate response: it depends on what it means to be “appropriate,” which seems to hinge on whatever coding the program is restricted to. It’s just another example of how it doesn’t understand the semantics; it just runs the syntax. In other words, you could say it’s just a manipulation of symbols, from an English instruction, for example, to a Chinese character that the operator can slip under the door to fool the native Chinese speakers outside into thinking there is a Chinese speaker in the room.

 

The operator in the room is a testament to a middleman being needed to do this; even though the operator is just a messenger for the syntactical functioning of the computer, the syntax is formed to create meaning. The computer doesn’t know that meaning, and even if a component simulates it, it ultimately cannot be equivalent to the biological and psychological processes that make us distinct from it. So, whether an analogy is taken strictly or loosely, if one treats a tulpa as an AI, there’s an implication that they accept the mind can create virtual agents that understand certain things while disregarding why the host has exclusive rights to not be considered an AI; the host ends up undermining whatever capability the mind could have to create another conscious experiencer, it seems.

 

The idea of free will is a construct designed to reinforce the ego.

In actuality there are only degrees of slavery.

One man may be "free" from the whims of another man, but both men are slaves to causality.

 

Interesting note on determinism. But even if that’s the case, since you’re open to degrees of fate, slavery, or whatever framing is needed, I’m sure there can be some degree of free will; just not to the point where reality itself is mind-dependent, because that’s the complete opposite. In other words, a mind-independent reality, and determinism that’s compatible with certain levels of free will, isn’t too far-fetched.

 

Or it could be a computer programmer who has the experience of seeming to have entire algorithms that come to him in a sudden flash of insight.

 

It’s understandable that the person has their own POV and inner experience, but it doesn’t seem to fall in line with what you were saying about tulpas being sort of a hidden process in the mind. Whatever seems to come at the computer programmer’s beck and call is more of an expression made apparent to them; it’s just that maybe they aren’t able to articulate how it came about. Maybe that gap is what gets labeled as “hidden.”

 

The human brain does thousands and thousands of complex thoughts subliminally every day.

 

Right. Note, I’m agreeing with you, I’m not shrugging this off.

 

No. Searle may be wrong. The thought experiment assumes that the person translating the Chinese characters is using a program routine that does not allow full intuitive understanding of the symbols of the Chinese language, no real deeper understanding.

 

Actually, you may be overlooking Searle’s argument. That “full, intuitive understanding of symbols” may be a misconception about the mere manipulation of symbols. It’s one thing for a program to just manipulate a symbol, shifting a 0 to a 1, or whatever change in voltage, and what have you; it’s another for it to have the same biological and psychological processes as the brain. The operator doesn’t know jack about Chinese characters, so ironically, the question of understanding gets shifted onto the program itself.

 

But it’s that same inclination we may have to personify something like a computer program, when we can’t help but feel that doing it to the operator would be too easy. In fact, I guess the way you would tackle the argument lines up with “The Intuition Reply”:

 

Many responses to the Chinese Room argument have noted that, as with Leibniz’ Mill, the argument appears to be based on intuition: the intuition that a computer (or the man in the room) cannot think or have understanding….“Searle's argument depends for its force on intuitions that certain entities do not think.” But, Block argues, (1) intuitions sometimes can and should be trumped and (2) perhaps we need to bring our concept of understanding in line with a reality in which certain computer robots belong to the same natural kind as humans….

 

Critics argue that our intuitions regarding both intelligence and understanding may be unreliable, and perhaps incompatible even with current science. With regard to understanding, Steven Pinker, in How the Mind Works (1997), holds that “… Searle is merely exploring facts about the English word understand…. People are reluctant to use the word unless certain stereotypical conditions apply…” But, Pinker claims, nothing scientifically speaking is at stake. Pinker objects to Searle's appeal to the “causal powers of the brain” by noting that the apparent locus of the causal powers is the “patterns of interconnectivity that carry out the right information processing”. Pinker ends his discussion by citing a science fiction story in which Aliens, anatomically quite unlike humans, cannot believe that humans think when they discover that our heads are filled with meat. The Aliens' intuitions are unreliable—presumably ours may be so as well.

 

Basically:

- The Intuition Reply is a counterclaim that Searle’s argument rests on intuition: the intuition that the computer, or the man, cannot think or have understanding, something that requires conscious experience in some way.

 

The Intuition Reply, in short, quoted:

 

Thus several in this group of critics argue that speed affects our willingness to attribute intelligence and understanding to a slow system, such as that in the Chinese Room. The result may simply be that our intuitions regarding the Chinese Room are unreliable, and thus the man in the room, in implementing the program, may understand Chinese despite intuitions to the contrary (Maudlin and Pinker). Or it may be that the slowness marks a crucial difference between the simulation in the room and what a fast computer does, such that the man is not intelligent while the computer system is (Dennett).

 

In other words, gotta go fast! Gotta go faster, faster, f-f-f-f-ff—f-f-faster!

 

Yes, although I think the term "artificial intelligence" is misleading. It is just subliminal thought. It is intelligence. Not all human thought is conscious. Most of it already isn't.

 

I agree. Maybe it’s because when one tries to fit AI into the human brain, or its structure, they have to shift it over to unconscious or subliminal thoughts instead. If they shifted it to the conscious thoughts of the conscious experiencer, they would have to question whether their arrival at understanding is an illusion, and whether they’re just uttering the belief that they can understand even though, in this framing, they cannot, i.e., a p-zombie.

 

But with unconscious thoughts, anyone can do what they want with the analogies, because that’s easier than dodging the question that comes with shifting to conscious thoughts. In other words, as you and others have stated, the analogies get taken too seriously, as an actual demonstration of the mind rather than a model for conceptualizing it, which are two different things.

 

I don't think that analogies are claiming that the brain functions like a computer but instead are putting ideas in different terms to create an understanding. You can compare a tennis game to paint drying using an analogy, but that doesn't mean that tennis and drying paint actually function the same way. Analogies aren't meant to be taken literally.

 

Exactly. And that’s the thing: the computer analogies and the levels of AI (weak and strong) are just ways of trying to understand something, the semantics of it. But ultimately, the program itself cannot arrive at intuitions or processes akin to ours to create an understanding. The program just manipulates symbols in a way that gives the impression of understanding, but there’s really no intermediate process of putting things into context and consciously experiencing something; it’s just manipulating something.

 

That’s analogous to creating a program just to replace having to lift your finger to flip to the other side of the paper, for example. No intelligence needs to be applied there; it’s just setting something into motion. But when analogies are taken too seriously, there’s this belief that the program needs to have some emulation, impression, or implication of understanding in order to manipulate symbols or run a certain algorithm.


My tulpa's probably a Chinese room, but so am I, and I've never given a shit about "actual" understanding. Something that's indistinguishable from the real thing is the same as the real thing imo


I'm having a hard time understanding that myself. Can you clarify how those two sentences fit together? If you mean that you accept your thoughts are just computations from the brain, then you might be taking the analogies too seriously, as the analogies did stem from humans themselves. But if that isn't the case, then it's probably something we can't understand, since it's a bit vague what you mean by those two sentences, which seem non-sequential.

 

It seems "indistinguishable from the real thing" should be replaced with "distinguishable" if we're talking about how we're completely different from any level of AI. But if not, and given that you don't care to even bother, then okay.


There's no fundamental difference between humans and AI. Even if you argued that AIs are just a Chinese room while humans somehow magically are not, it still wouldn't be a fundamental difference, because the inputs and outputs are the same


I didn't even refer to the Chinese room, like, the actual room, as an analogy for an AI. If you're talking about inputs and outputs, I guess we're all just unthinking automatons. I'll be on the side that thinks there is a fundamental difference: the biological and psychological differences, actually.


if we're talking about mind and thoughts then stuff like "humans have flesh but robots have metal" seems very irrelevant at least fmpov

psychologically an AI can be made to imitate almost any typical human psyche

