
[split] Consciousness, et cetera.


Lunanite


Well, aviar, did you ever hear of white torture?

 

Experiments in the last century showed (and were used for the sake of torture) that if you place humans in bodysuits that nearly nullify their sensory stimuli, sometimes coupled with loud music and humiliation through nakedness, their personality and consciousness, whatever makes them what they are as a whole, begins to degrade, even to disappear.

It is of course harder to do to you than to a tulpa, but it is very much possible to dissolve you in the very same way.

 

And you were narrated into consciousness; I doubt you were born with language, logic, moral culture, etc. The people around you in childhood, and what you perceived, managed to properly wire your executive functions and your language, and enculturated you.

"Sorry for that, my communication implants are idiologically biased."


 

Not sure anyone ever narrated me into consciousness. Furthermore, there's the small fact (well, commonly observed phenomenon) that tulpas 'degrade' over time if ignored. If people ignore me, I might develop an inferiority complex or some social awkwardness, but I don't think I will disappear.

It could be argued that you were narrated into sapience. The fact that feral children lack this quality, while those of us raised in society do not, suggests that something had a hand in our creation as sapient entities. From wiki:

The idea that language is a necessary component of subjective consciousness and more abstract forms of thinking has been gaining acceptance in recent years, with proponents such as Andy Clark, Daniel Dennett, William H. Calvin, Merlin Donald, John Limber, Howard Margolis, Peter Carruthers, and José Luis Bermúdez.

I can't prove it, but it's fair to say that language (i.e., narration) may be the culprit.

 

See Asgardian's post as to why you can, in fact, disappear from lack of interaction.

 

Furthermore, I laugh at your interpretation of the validity of citing Freud, not to mention interpreting its ethics. Really, should we only be allowed to cite people who agree with you or help your position?

I didn't say that. I think off-handedly citing outdated theories to dehumanize a possibly sapient being without backing it up is unethical, yes. But, here we are discussing it, so that's been addressed.

 

Shall I mention the fact that more than a few people openly state that belief is a factor in tulpaforcing and in time to sentience?

This is a good point... I didn't consider this. In my own experience (and fig, chupi, and wireless's), though, the difficulty is not their development but the communication between you and the tulpa. When the communication is sketchy, sentience is hard to observe. I think they're sentient from the very beginning (Nesterbones, Sock's Ellenore (I think)), but believing in your tulpa's sentience predisposes you to perceive them as sentient. You can't really point to the belief as the cause, because the belief itself affects your observation.

 

Since apparently linking only to supporting material is all the rage, please read "Do we have free will?" at the following link: http://plato.stanford.edu/entries/freewill/

It states quite nicely what I have been trying to state: that, independent of subconscious/unconscious details, the Conscious may override the Unconscious, though not necessarily manage its actions. That means a tulpa can begin as a conscious process, then, through reinforcement of a stimulus or belief, become unconscious, whereupon it may act in a manner accordant with free will and still be managed through Conscious effort.

 

I'd like you to elaborate on the bolded part a little more. For clarity.

 

I've already explained that (given that you're not parroting and the tulpa is temporarily out of your conscious control) it is impossible for a sapient-acting being to be non-sapient. Philosophically speaking (assuming physicalism), it's impossible for something to demonstrate metacognition without truly having it. If the tulpa says "I think I have trouble learning X", that means some part of your brain that isn't you is observing its own processes (provided you aren't exerting conscious effort), which is the definition of sapience -- and 'unconscious process' doesn't really encapsulate this.

 

Also, your theory relies on the claim that you may override unconscious (and therefore the tulpa's) processes, but go here: http://incorporealnuance.tumblr.com/post/24666185169/well#notes

It is a report that an old enough tulpa, with a lot of possession experience, can override his host's control of the body.

 

Correct, Freud did not state where consciousness comes from, or whether consciousness is self-aware (whatever you may mean by that). He tried to attribute actions and properties, thus creating a functional model.

My point was that Freud didn't say much that is relevant to the topic at hand -- which is whether a tulpa is an independent sapient mind. The question requires an understanding of the origin of our own sapience, which, again, he didn't say anything about. Shoehorning tulpae into his one-mind model is not convincing, and it results in absurdity, as I've shown above.

 

As to your link on feral children, which attempts to base its idea of the blank slate (partially) on Joseph Singh: http://en.wikipedia.org/wiki/Feral_child (just go to the reality portion).

There were still more kids. It doesn't completely invalidate it, but I'll take it into account.


I currently do not have much time on the computer I am using, so I just wanted to state that I require more information on the process behind possession before I can comment on it (there is a distinct lack of material).

 

On the note of language in development and consciousness: I do not know the current research and theories, so what I state will simply be based on personal opinion. As far as I understand, verbal language is necessary for communication and interaction, but not for consciousness. Mute people and wild children can still communicate, be it by gestures or howling (a form of language). Not to mention that people are born with a form of body language (expressions, reactions, etc.). So learned language, I believe, is not a necessity, though language in its most ample sense (a set of perceivable sensory/behavioural patterns) is.

 

I might once again note that tulpas, for all intents and purposes, are sentient (no matter the model; in the most 'dehumanizing' case this is due to the person's mind puppeting) and demonstrate intelligence (they can resolve problems). What I wish to pose is whether this is due to a separate mind developing outside the control of the first (commonly called the host), or to functions within the first mind. The implications of either are debatable. Some might say the first implies a greater degree of freedom, while the second is much more restrictive. Truth be told, the only final difference may be whether a tulpa can coexist with the first or eventually supersedes it.

 

Furthermore, on the philosophy of sentience/intelligence, I might refer you to ELIZA and the Turing Test. I remember reading the introduction to a book where the author of ELIZA stated that people had taken the program to be capable of actual intelligence, and he pleaded with them to understand it was not so (this might be the introduction to The Art of Prolog by Shapiro, as it is the latest book I have read). I will try to find an exact quote and source later.

 

@Asgard: Do you have any accounts or material on white torture (I have already read the Wikipedia entry)?


 

I currently do not have much time on the computer I am using, so I just wanted to state that I require more information on the process behind possession before I can comment on it (there is a distinct lack of material).

 

On the note of language in development and consciousness: I do not know the current research and theories, so what I state will simply be based on personal opinion. As far as I understand, verbal language is necessary for communication and interaction, but not for consciousness. Mute people and wild children can still communicate, be it by gestures or howling (a form of language). Not to mention that people are born with a form of body language (expressions, reactions, etc.). So learned language, I believe, is not a necessity, though language in its most ample sense (a set of perceivable sensory/behavioural patterns) is.

I see what you're saying, but I believe the language must support the concept of self ("I") before sapience can develop. The feral children (assuming validity of the accounts) had body language and some crude vocalizations, but they never became sapient.

 

I might once again note that tulpas, for all intents and purposes, are sentient (no matter the model; in the most 'dehumanizing' case this is due to the person's mind puppeting) and demonstrate intelligence (they can resolve problems). What I wish to pose is whether this is due to a separate mind developing outside the control of the first (commonly called the host), or to functions within the first mind. The implications of either are debatable. Some might say the first implies a greater degree of freedom, while the second is much more restrictive. Truth be told, the only final difference may be whether a tulpa can coexist with the first or eventually supersedes it.

If it's functioning within the first mind (meaning the conscious mind) it would require constant attention to operate. So that's out. If it's not in the conscious mind, it must be in the unconscious mind -- this leads to that part of the unconscious mind becoming sapient, since it would eventually have to observe itself to demonstrate metacognitive behavior.

 

The difference really is whether or not the tulpa should be regarded as a being with rights. I want people to be responsible with this.

 

Furthermore, on the philosophy of sentience/intelligence, I might refer you to ELIZA and the Turing Test. I remember reading the introduction to a book where the author of ELIZA stated that people had taken the program to be capable of actual intelligence, and he pleaded with them to understand it was not so (this might be the introduction to The Art of Prolog by Shapiro, as it is the latest book I have read). I will try to find an exact quote and source later.

Difficulties in determining the type of behavior exhibited by a machine are not of much interest to me. I know we don't have a foolproof way to determine whether something acts sapient (I discussed this with Asgardian earlier), but outside of a machine designed to fool humans, the answer should be easy to intuit.

 

(speculating, not really relevant) Ironically, if a machine truly were sapient, it probably wouldn't act anything like a human -- despite being aware of its own processes, the machine wouldn't have emotions or demonstrate preference, sympathy, etc., since it lacks the neurochemical mechanisms to do so.

 

I can prove a tulpa is sapient if it acts sapient, but you're right when you say I can't prove it truly behaves that way. So suppose I've been fooled: I installed something ELIZA-esque into my mind, ignoring entirely how such a complicated entity would develop, and then mistook its behavior for sapient behavior. This is the only scenario that would allow for an 'illusion' to develop. However...

 

The entity would be unable to modify itself. Self-modification would require observation of self, which contradicts our concept of a non-sapient being. I believe, after long enough, that you would notice.

 

Second, that ELIZA shit took two years to program. Here's the 2011 winner of the Loebner Prize, the best chatbot the world had to offer: http://ai.bluemars.com/chat/

Not very convincing. The behavior of sapient beings is much more complicated than the sapient beings themselves. Just as a matter of feasibility, I would say this is impossible.


How do you define sapience? If sapience is the ability to act with judgement, then I would state that wild children are most likely sapient. For example, if a wild child gets burned by fire once, it will most likely avoid fire or be cautious around it the second time.

 

A mind is Preconscious, Unconscious, and Conscious. Also, the problem with the second model is that, if it is a completely separate mind, then how do these two minds have access to each other (shared memories, recalling, lucid dreaming, etc.)? As it appears, the phenomena observed with tulpas seem to indicate a symbiosis between the two minds at every level. Therefore, either we have the first model, or we have a second model with a missing common component which permits these abilities.

 

I mentioned ELIZA because the problem is pertinent: we have two 'programs'; how do we know if one is truly sentient?

 

A computer can demonstrate emotion; it is simply a matter of creating the proper models.

 

The ELIZA program simply goes to show that things are generally classified as meeting a criterion by their visual impression, independent of the operational mechanics.
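To make "operational mechanics" concrete, here is a minimal Python sketch of ELIZA-style pattern matching (an illustration of the technique only, not Weizenbaum's actual DOCTOR script; the rules and names are invented for the example). The apparent insight is nothing more than keyword rules plus pronoun reflection:

```python
# Minimal ELIZA-style responder (illustrative sketch, not the real program).
import random
import re

# Swap first and second person so an echoed fragment reads as a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "myself": "yourself"}

# Keyword rules tried in order; the catch-all guarantees an answer.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i think (.*)", ["Do you really think {0}?", "What makes you think {0}?"]),
    (r"because (.*)", ["Is that the real reason?"]),
    (r"(.*)", ["Please tell me more.", "I see. Go on."]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    s = sentence.lower().strip(".!?")
    for pattern, answers in RULES:
        match = re.match(pattern, s)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(answers).format(*reflected)

print(respond("I feel nobody understands me"))
# -> e.g. "Why do you feel nobody understands you?"
```

A few dozen rules of this kind are enough to fool a casual observer, which is exactly the point: the impression of understanding and the mechanics that produce it come apart completely.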

 

Also, computer viruses (and I'm pretty sure this holds for biological viruses as well) can modify themselves; look up polymorphic code or introspection. Does that mean a virus is sapient?


How do you define sapience? If sapience is the ability to act with judgement, then I would state that wild children are most likely sapient. For example, if a wild child gets burned by fire once, it will most likely avoid fire or be cautious around it the second time.

I'm using this definition: a being is sapient if it has metacognition... the ability to think about thinking. It's an apparently accepted definition, and it shows a clearly developed awareness of mind. Learning is not sapience.
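As an aside, the burned-child example from the quote above fits in a few lines of code, which is what makes it a poor test for sapience. A toy Python sketch (hypothetical names, purely illustrative): behavior adapts through experience, yet nothing in it represents or inspects its own thinking.

```python
# Learning without metacognition (toy sketch): behavior adapts through
# experience, but nothing here models or observes its own processes.
avoidances = set()  # stimuli learned to be painful

def act(stimulus):
    # Pure stimulus-response lookup; no reflection on "why" it avoids.
    return "avoid" if stimulus in avoidances else "approach"

def experience(stimulus, painful):
    if painful:
        avoidances.add(stimulus)  # a bare association, not a thought about thinking

print(act("fire"))                 # approach (never burned yet)
experience("fire", painful=True)   # gets burned once
print(act("fire"))                 # avoid (learned, yet not metacognizant)
```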

 

A mind is Preconscious, Unconscious, and Conscious. Also, the problem with the second model is that, if it is a completely separate mind, then how do these two minds have access to each other (shared memories, recalling, lucid dreaming, etc.)? As it appears, the phenomena observed with tulpas seem to indicate a symbiosis between the two minds at every level. Therefore, either we have the first model, or we have a second model with a missing common component which permits these abilities.

You are confusing how deep your consciousness runs. Your identity, the part that is self-aware, is only the conscious. The reason I say this is that the unconscious mind can exist without you (in feral children, again), and therefore doesn't belong to you. You and the tulpa can share memories and recall because those are in the unconscious mind and do not belong to you. They may be associated with you, but the unconscious is the one that possesses them. And, as I've said, the unconscious doesn't belong to you.

 

Single unconscious mind, two conscious minds.

 

So there are no problems with the second model.

 

Problems with the first model: the tulpa couldn't act truly sapient (or else it would contradict the model), and would probably set off red flags. How would a construct of that complexity even get into the mind?

 

I mentioned ELIZA because the problem is pertinent: we have two 'programs'; how do we know if one is truly sentient?

A computer can demonstrate emotion; it is simply a matter of creating the proper models.

We know it is sapient if it demonstrates sapience. How to determine whether it demonstrates sapience is much harder, though. I argued in the last post that a mechanism for simulating near-sapience (nearly enough to be mistaken for actual sapience) would be much harder to construct than the actual thing. Also, it couldn't be perfect. As I said, so-called sophisticated models like ELIZA are embarrassingly, obviously not sapient, as well as absurdly complicated. The conscious mind only holds around 5-9 chunks of information at a time (Miller's "seven, plus or minus two"). Which, honestly, is more likely?

 

The ELIZA program simply goes to show that things are generally classified as meeting a criterion by their visual impression, independent of the operational mechanics.

 

Ok. Except that, in this case, behavior is both the criterion and the impression.

 

Also, computer viruses (and I'm pretty sure this holds for biological viruses as well) can modify themselves; look up polymorphic code or introspection. Does that mean a virus is sapient?

Not modification in the sense I meant. For all of your examples, there are simply "rules to change the rules". The meta-rules, however, cannot be changed. This would hold true for a non-sapient, nearly-sapient-acting being.
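A toy Python sketch of what "rules to change the rules" means (hypothetical code, not real polymorphic malware): the first-order rule table rewrites itself every generation, but only through mutate(), a fixed meta-rule the system can never inspect or alter.

```python
# "Rules to change the rules": the rule table mutates, the meta-rule doesn't.
import random

SYNONYMS = {"greet": ["hello", "hi", "hey"], "part": ["goodbye", "bye", "farewell"]}

def mutate(rules):
    # The fixed meta-rule, analogous to a polymorphic virus whose
    # mutation engine stays the same across every generation.
    return {key: random.choice(SYNONYMS[key]) for key in rules}

rules = {"greet": "hello", "part": "goodbye"}  # mutable first-order rules
for generation in range(3):
    print(generation, rules)
    rules = mutate(rules)  # the rules change; mutate() itself never can
```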


Sapience: http://en.wikipedia.org/wiki/Sapience#Sapience . I cannot seem to find introspection as a criterion. Also, one is not born able to discern or judge; that is gained through experience.

 

Memories lie in the Preconscious, at least assuming we use Freud's model. Furthermore, your model would not be correct, due to the fact that the Unconscious does not manifest itself in the Conscious in a clear fashion (hence the problems of testing its properties). This is evident when one tries to recall a dream, and in the nature of dreams as deeply symbolic imagery.

Stating that the Unconscious mind can exist without the Conscious mind would be erroneous for multiple reasons (not the least being a lack of experimental and empirical data), primarily because a mind is all those things together; separately, it would not be a mind.

Might I also state that feral children are not some miracle of the human species; they are simply not indoctrinated into civilization. They are very conscious, as they can perceive their environment (sense danger), feel (ever shot a feral child?), and are probably aware of the results of their actions on a very elemental level.

 

If it is easier to create sapience than simulate it, then why can we only create a simulation of it?

 

Furthermore, how do you explain the genesis of the tulpa's mind/consciousness?


 

Sapience: http://en.wikipedia.org/wiki/Sapience#Sapience . I cannot seem to find introspection as a criterion. Also, one is not born able to discern or judge; that is gained through experience.

 

It's not so much a criterion as it is a necessary condition.

Wiki: Metacognologists believe that the ability to consciously think about thinking is unique to sapient species and indeed is one of the definitions of sapience.

 

http://books.google.com/books?id=htbRFb4TN0IC&pg=PA304&lpg=PA304&dq=Smith,+J.+D.,+Shields,+W.+E.,+%26+Washburn,+D.+A.+(2003).+The+comparative+psychology+of+uncertainty+monitoring+and%0Dmetacognition.+Behavioral+and+Brain+Sciences,+26,+317–373.&source=bl&ots=P-ACpSBOHA&sig=jJGAr5P3lbWCXDDjKvMMMdRL7ug&sa=X&ei=Et8uUPb6IKSh6wGIhYGgCg&ved=0CBkQ6AEwAA#v=onepage&q=metacognition&f=false -- page 302:

given the link between metacognition and declarative consciousness (Koriat, 2007; Nelson, 1996) the study of animal metacognition can contribute to the study of animal consciousness. Note -- declarative consciousness is related to sapience in that it indicates having subjective experience.

 

http://fredonia.academia.edu/JustinCouchman/Papers/743013/Beyond_stimulus_cues_and_reinforcement_signals_A_new_approach_to_animal_metacognition -- Page 1:

Researchers take humans’ metacognitive behaviors to indicate important mental capacities, including hierarchical layers of cognitive control (Nelson & Narens, 1990), self-awareness (Gallup, 1982), and declarative consciousness (Nelson, 1996). -- Note: declarative consciousness is related to sapience in that it indicates having subjective experience.

 

http://iipdm.haifa.ac.il/images/Articles/metacognition_and_consciousness.pdf -- page 313 -- The study of the bases of metacognitive judgments and their accuracy brings to the fore an important process that seems to underlie the shaping of subjective experience

 

What you should get from the above is that a sapient being can think about thinking (metacognition), that a being which is metacognizant is certainly sapient, and that both of these indicate declarative consciousness and subjective experience -- which means such a being should be treated as any other person. Without subjective experience, no one is truly 'there' to feel pain. It becomes only a chemical process, with ingrained reactions.

 

The reason I use 'sapient' is that 'conscious' is too vague -- Asgardian and I ran into the same issue earlier. 'Sapient' isn't really a scientific term. It just denotes a being of the highest conscious order (that we know of) that is self-aware and experiences things subjectively.

 

Memories lie in the Preconscious, at least assuming we use Freud's model. Furthermore, your model would not be correct, due to the fact that the Unconscious does not manifest itself in the Conscious in a clear fashion (hence the problems of testing its properties). This is evident when one tries to recall a dream, and in the nature of dreams as deeply symbolic imagery.

Assuming we use Freud's model: the line between Preconscious and Unconscious is not really defined -- some memories come easily, some harder, some not at all. And this all changes based on your state of mind, the presence of a therapist, etc. So it naturally follows that some parts of the Unconscious/Preconscious whole -- the shallower, easier-to-reach places -- manifest themselves to the Conscious in a clear fashion. Preconscious 'zones' are defined only by whether or not you can access them, and since this can change, it stands to reason that the Preconscious is really just the whole of the zones of the unconscious accessible to you. So, yes, the unconscious can manifest clearly; it just depends on the subject matter. You get deeper as you fall asleep/enter hypnosis/tulpaforce, and things make less sense. More is also available. This isn't a critique of Freud's model; it's a critique of your interpretation of it.

 

So, I fail to see how this invalidates the model. Memories are stored in the 'Preconscious', which is just an accessible zone of the Unconscious. Terminology.

 

 

Stating that the Unconscious mind can exist without the Conscious mind would be erroneous for multiple reasons (not the least being a lack of experimental and empirical data), primarily because a mind is all those things together; separately, it would not be a mind.

You're asking for experimental data in psychoanalysis, brah. Seriously. It is a logical certainty that, in a case where an identity did not develop, all that is left is unconscious (since none of it is metacognizant, aware of mind, or possesses subjective experience). I might add that Freud used 'Conscious mind' to mean a self-aware entity in the highest sense -- since he didn't know of any humans that didn't possess such a thing -- so don't go using any broad definitions of consciousness on me. This all points to the unconscious not belonging to you; it's more like your boss.

 

Might I also state that feral children are not some miracle of the human species; they are simply not indoctrinated into civilization. They are very conscious, as they can perceive their environment (sense danger), feel (ever shot a feral child?), and are probably aware of the results of their actions on a very elemental level.

Conscious, yes, in a broad sense. But they are not aware of their own existence, as I've demonstrated, and do not possess subjective experience -- they lack metacognition. You may be on to something with the 'indoctrinated into civilization' bit, though. There are theories floating around which say that subjective experience evolved as a result of changes in civilization about 3000 years ago. Look up bicameralism on the wiki.

 

If it is easier to create sapience than simulate it, then why can we only create a simulation of it?

 

Furthermore, how do you explain the genesis of the tulpa's mind/consciousness?

First question: we can't create a simulation of it. It is literally, philosophically impossible to simulate it perfectly without creating it. It is less difficult to simulate it imperfectly, but then the simulation will have problems that give it away. Humanity, with the help of computers and all of our wisdom, can't even simulate it convincingly. The brain can create it with 5-9 chunks of information. That was the point.

 

Second question: in which model? Mine is pretty well fleshed out.

 

Yours has not only the problem of creating a nearly-sapient-acting-but-not-sapient entity that is several orders of magnitude greater in complexity than our own consciousness, but also:

 

the problem of the entity's inevitable shortcomings, and

the problem of the feral children and the ownership of the unconscious.


Sorry for the nonsense I've been spewing.

 

It's fine. I understood it well. At least I hope I did. When using such ambiguous terms, whose meanings even professionals sometimes disagree on, it's good to establish a common language before proceeding with a discussion.

 


 

I felt like contributing something, but thinking about tulpas in the domain of cognitive psychology has repeatedly caused my mind to suffer BSoDs, and I can't quite articulate my thoughts and ideas, so have this lousy and slightly incoherent writing. Also, sorry about the excessive use of the word 'host'. Unfortunately, I don't know a better term for the original entity that developed in the usual way.

 

Since tulpas are created and mature in so short a period, though higher cognitive functions require years to develop, I believe they use the host's mental skills, like language, and with them probably many, if not all, of the cognitive concepts that make up the host's knowledge and understanding of the world. It would be a good idea to somehow find out to what extent tulpas can use preexisting semantic memories and how intuitive accessing them is. In their creation process, tulpas don't have to learn any of these things, but rather to decode and access what's already there. However, they may pick and examine semantic memories and interpret and conceptualise them in their own way. Well, this applies to those that become sentient quickly; the others may initially, for some reason, struggle to properly access these faculties, but that's beside the point. I agree that metacognition and the other abilities constituting sapience are learned. So, without implying anything, I'd just like to point out that a tulpa may simply come by these abilities, and comprehend and use them, without having to develop them on their own. This is something worthy of consideration, in my opinion.

 

Keeping what I said in mind, imagine a scenario where a tulpa would actively learn a complex mental skill, say, a foreign language, but the host would participate as little as possible, therefore greatly limiting their resulting knowledge of it. Would the tulpa succeed? I think so. By success, I mean being able to understand the language, talk with a native speaker, or maybe act as an interpreter. To further lessen ambiguity, the tulpa should also be able to understand a complex abstract task assigned in the learned language and carry it out.

Now the questions: Would the host have access to this ability and could he/she speak the newly learned language? Would that vary on a case-by-case basis?

Nevertheless, by successfully completing such a task, a tulpa would prove (to me, but that's irrelevant, because I already believe) their individuality and independence from the host, not to mention their sentience, sapience, and all that jazz.

This is a good suggestion for research. It doesn't have to be a language; any complex mental skill would do. There are many that don't take so long to learn.

 

More questions.

Concerning possession, could a tulpa learn a sensorimotor skill? If so, could the host use it? If the host couldn't use a skill their tulpa learned, or if it took the host too much time to "find" the skill, then the implications would be far-reaching. Any challenging activity requiring eye-hand coordination would do, for example playing games like flight simulators.

This test is very pertinent, because the sensory and motor systems developed earlier and are more primitive than the higher cognitive faculties, so it could reveal the level of detachment between the host's and the tulpa's minds.

 

Currently, I can't imagine two separate minds connected to and using one limbic system. And by the way, there are some aspects in which tulpas are very different from hosts. There is so much we don't know, so much we'll never know. It's kind of frustrating, isn't it? Oh boy, I feel another BSoD coming on.
