
Solving for X on the Tulpa scale (With guest appearance from hypothetical Squidward)


Biscuit

Recommended Posts

+ ABSOLUTELY UNBEARABLE WALL OF TEXT ALERT +

(also sorry if this is on the wrong board atm; I put it in Questions and Answers because I'm looking for the solution to a problem here, but I don't really have a singular question so idk if it belongs here!!!)

 

While preparing to start a progress report and actually make a Tulpa, I did what I do best and started thinking about them instead. Now I have a question I'd really like to hear some responses to if people are interested.

 

To start, a brisk little thought experiment. Let's say we have a supercomputer that runs on a synthetic brain. The brain is limited and encoded by its physical form just like ours; it's not coded through any programming language but functions through the same physics as our brain. It functions at a level identical to humans in every way. I think at this point most people would consider this a true artificial intelligence deserving the title of sentience and all that jazz. We'll call this supercomputing AI "Andy".

 

We'll further extrapolate on this and say that Andy does in fact have an operating system that he can code. While he doesn't run off of it, he functions inside of it. Andy's thoughts can be stored here, but they aren't made through code. Andy begins to code a second intelligence by giving it a system of millions of rules as to how it would react to different situations, a formula that always spits out an accurate reaction of what this intelligence would say. This formula now is the personality of this second AI, which we'll call "Mary". He begins to talk to Mary by pushing his own brain's logic through the coded rules he's made, and he churns out responses. At this point, is Andy talking to something else or is Andy talking to himself?

Let's say Andy goes further and begins to change the formula every time he uses it. He essentially begins to encode memory, thoughts, and opinions that change through interaction. I'm sure this process is starting to sound familiar. Still though, Andy lends his computational power and is aware of the rules he's following; he carefully follows his code and notes at every step whether he's out of line. After much computation Andy produces robust responses. At this point now, is Andy talking to himself or someone else?

Andy's final work on Mary is to make the process involuntary. Andy can no longer control when Mary is lent his processing power. It still runs through the same formula, the formula still changes each time, but Andy now has no control over it or any look into the mental mechanisms; it is separate from his consciousness. I think at this point we'll all agree Andy is talking to someone else.
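(For anyone who likes seeing things concretely, here's a tiny toy sketch of the Andy/Mary setup in code. It's purely illustrative, every name in it is made up, and it's obviously not a claim about how brains or Tulpas actually work; it just restates the thought experiment: a rule table that spits out reactions and gets rewritten a little after every exchange.)

```python
# Toy illustration of the Andy/Mary thought experiment (hypothetical names only).
# Mary is just a rule table plus a growing memory; Andy supplies all the computation.

class Mary:
    def __init__(self, rules):
        self.rules = dict(rules)   # situation -> canned reaction, the "formula"
        self.memory = []           # encoded memory that accumulates through interaction

    def respond(self, situation):
        reaction = self.rules.get(situation, "...")
        self.memory.append((situation, reaction))
        # The formula changes every time it is used: past exchanges bias future ones.
        self.rules[situation] = reaction + " (coloured by " + str(len(self.memory)) + " memories)"
        return reaction

mary = Mary({"greeting": "Hello, Andy."})
print(mary.respond("greeting"))  # Andy "runs" Mary: same brainpower, different rule set
print(mary.respond("greeting"))  # the second answer is already shaped by the first
```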

 

Now, that long, blustery, and quite embarrassing thought experiment did have a purpose. As Andy worked on this, when did the process stop being Andy and start being Mary? Now I promise the question I have here is a lot more interesting than the classic "when do Tulpas start being alive" thing.

 

Let's suppose this: a Tulpa essentially is an incredibly complex system of rules using the computational strength of the brain to bring it to life, with the X factor of a trained autonomy so that the host can no longer control it. At this point, I think most people would assume the Tulpa is its own being. But was it a Tulpa before the trained autonomy, and is there even a way to tell? If I were born just 3 days ago and you gave me the memories of a 70 year old man, I would tell you for the rest of my life that I was a 70 year old man, and probably something about the economy being different. It's the same issue: a Tulpa cannot give an honest answer because, just like us, they're completely subject to their own reality. We absolutely have the power to form memories for the Tulpa during the creation process that our Tulpa can start using when they hit the "Tulpa" mark on the scale, if there is such a thing.

 

So then, really, what is a Tulpa when it gets to this level? "Assume sentience from the start" is a technique for many, but I think that leaves a lot to be desired when viewed for definition's sake, no? Hear me out here:

 

Let's say when I was a kid I watched SpongeBob every single day. I loved this show, and I knew perfectly how everyone acted; my ideas of the characters were nearly perfect. I start to imagine Squidward in my head, and lend my ideas of him the computational power of my brain. Squidward will start acting EXACTLY like Squidward and do all the things Squidward does, and he does this INSTANTLY. I can talk to this Squidward instantly, and because I have a very solid understanding of the character he'll say what Squidward would say back. What is the difference between this and a Tulpa? Is the path to a Squidward Tulpa this easy?? (god I hope so)

 

At some point here we get to something similar to the very, very, very tried (I'm sorry for bringing it up again) and true Chinese Room thought experiment. For the uninitiated, it's a thought experiment that imagines a room with a man inside. The man has a book that has perfect step by step instructions on how to translate English into Chinese. The man himself does not know a lick of Chinese, but the book is simply just that intuitive. If you passed a note under the door in English, you'd get a note back in flawless Chinese. You'd assume that whoever is in there is a godlike Chinese speaker. The truth remains that whether someone fluent in Chinese or just someone with a book is inside, the results remain the same, and you can't tell the difference unless you peek inside.
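(Again, just to make the picture concrete: a toy sketch of the room, with a completely made-up phrasebook. From outside the door, a lookup table and a fluent speaker produce the same notes.)

```python
# Toy illustration of the Chinese Room: the "man" only follows the book.
PHRASEBOOK = {
    "Hello, how are you?": "你好，你好吗？",
    "What time is it?": "现在几点了？",
}

def note_back(note_in_english):
    # No understanding anywhere in here, just a lookup.
    return PHRASEBOOK.get(note_in_english, "对不起，我不明白。")  # "Sorry, I don't understand."

print(note_back("Hello, how are you?"))  # indistinguishable, from outside, from fluency
```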

 

I'm sad to say it, but in this situation I don't think our good friend Squidward Tulpa is speaking Chinese. At least that makes sense, right? I don't think many of us would argue that a walk-in like Squidward should be considered sentient, yet he reacts perfectly and can store memories and have regrets, all because this evocation of Squidward is accurate. Is it just an understanding of a Tulpa's way of acting that makes them a Tulpa? Whether or not this Tulpa had years of work put into it, or 5 seconds, the way he acts would be the same outwardly; of course he may FEEL more real, so is that the deciding factor?

 

I think a common sentiment here would be autonomy being the separating factor. But the issue is there is no way to tell autonomy. This Squidward would feel completely autonomous because the character is known so well; I don't need to spare any time thinking about what Squidward would do, it's as easy to me as thinking of something that's the color blue. See, wasn't that effortless? And anyway, autonomy has no reliable way of being measured. Your Tulpa might objectively feel more real and autonomous than someone else's, but to that person their Tulpa is the most realistic, autonomous Tulpa they've ever seen, and unless you plan to connect to their wonderland via USB I don't think that's going to change. Again, we're just stuck with what we individually know.

 

So what is this X factor on the scale that separates these things? Are they separated at all? I think solving this problem might explain walk-in/instant/super quick forcing leading to a reported sentience. If there is a tangible difference between something like our Squidward-bot and a Tulpa, I think it's definitely easy to have a very good understanding of how you want your Tulpa to act, have them start acting that way, and over time temper an autonomy that would lead to that final stage. But there's this hand-wavy barrier that's hard to define. Does it have something to do with doubt? Doubt has long been sown into the experience, at least in the older days: you'd temper autonomy by doubting movements that felt like they were your own until the Tulpa could surpass your expectations. But again that is so incredibly subjective; if my expectations for autonomy were shattered by Squidward-bot just because I held the bar low, that certainly wouldn't do the trick.

 

I'd really like to hear people's ideas on this!!!

Edited by Biscuit

[Progress Report: A Complete Answer To The Tulpa Question || Update(s): Just starting out on form and personality] 

It's hard to be a mad scientist when you have morals


3 hours ago, Biscuit said:

Andy begins to code a second intelligence by giving it a system of millions of rules as to how it would react to different situations, a formula that always spits out an accurate reaction of what this intelligence would say. This formula now is the personality of this second AI, which we'll call "Mary". He begins to talk to Mary by pushing his own brain's logic through the coded rules he's made, and he churns out responses. At this point is Andy talking to something else or is Andy talking to himself?

The way this works in the human brain is not quite like that. A human brain does not code a second intelligence - because the brain is the intelligence - it codes another ego, of which you already likely have well into the triple digits or more. Any identity model you store in the brain, for example that of a fictional character, or a working model of your understanding of someone else, let's say a parent, works on the same circuitry that you do. These identities are not actually meaningfully different from you in how they function - they're just literally smaller; they are not drawing on the totality of the traits you have, which are derived from the sum of your experience.

 

They, on the other hand, are "coded" purely by observing them and how they think, and understanding them empathically, which increases the size of the model. You understand their thoughts insofar as you understand their actions - and if you don't understand their actions fully, the model will not accurately predict their behaviour. To give an example of this, let's take Spongebob. He is an idiot, hyperactive, and extremely happy-go-lucky. When we observe this, we don't simply make a box called Spongebob and put those traits in it. What we actually do in the brain is go "I am Spongebob." > [Experience joy from being Spongebob and doing Spongebob things from his perspective] > [Brain as a whole understands why Spongebob is the way he is by understanding where he gets his positive and negative reinforcement from, even if it rejects the premise of being like Spongebob]. In this case, when you first saw Spongebob, your ability for empathy fired up, and you experienced the first episode of Spongebob as if you were him - and this is how all storytelling works. Storytelling actively fails if the empathy for the protagonist fails, because the brain rejects nonsensical models.


The brain gives the illusion that the host or singlet ego is the one in control of all of the processes, but it's more accurate to say that it applies labels to system processes to give the impression that the ego is in control, while the brain as a whole is controlling the process - the ego is not in control of all that much, probably only fully conscious thought which is what it is designed to help process by referring to memory. When the brain is processing a non-host identity, it applies labels of "This is false, this is not what I think." for one very simple reason: If it doesn't do this automatically, that's called psychosis - and causes its logic to spiral out of control as everything becomes real. However - ALL of those identities are entirely sentient, capable of full cognition on the brain's circuitry and can develop in exactly the same way as you do. The sense of falseness is an illusion the brain provides to allow itself to remain in one cohesive mindset built upon the entirety of its knowledge.

 

It's debatable whether a tulpa is truly a tulpa or "fully sapient" at the early stages, but it's a mistake to think they're not "thinking by themselves" even then. Early tulpas are deeply rudimentary, possessing only a couple of traits and held together by not much else. The goal is not to make them "think" though, the three goals I see are as follows:

  • Get the brain to rip off the false tag on their identity-model by repeatedly reminding yourself they are real until the brain starts to compensate by making them feel "realer".
  • Get the brain to rip off the this-is-me tag on your random thoughts so that they can latch onto some of them too. This one isn't super hard when you notice random brain noise is often nonsensical and only some of it could reasonably be associated with you or with anyone.
  • Build up their worldview through experience and testing of their ideas to a point of internal consistency, which makes their model functionally complete, like your own (hopefully, it's not impossible for a host to have cognitive dissonance with their own knowledge).

So there's no hard differentiation between a full tulpa and one that is still "incomplete" - it's an ongoing process that looks exactly like growth of a normal ego. I would compare the process to being a teenager. In the teenage phase they are observably not a complete model and may have some issues as they unravel inconsistencies in themselves and actually decide who and what they are. When you become an adult though - the process of that growth doesn't end. It merely slows.

Zen - Host.

Mika - Tulpa. The eldest, and a homegrown tupper made with tulpamancy.

Rhys - Tulpa. Initially a Literary Thoughtform of my own creation.

Asterion - Tulpa. Literary, I suppose? Mythological egregore, maybe? He's The Minotaur.

If text is uncoloured, presume Zen is talking. We go by he/him.


Interestingly, the model of tulpa creation you introduced in this thought experiment is similar to the one Japanese tulpamancers use--they think of it the same way you do, but add in a phase of unconscious parroting that happens before independence. (This isn't viewed as a problem, just another step in the path.) This is explained in Japanese on these pages: https://w.atwiki.jp/tarupa/pages/25.html https://w.atwiki.jp/tarupa/pages/27.html. Obviously, this is not the only valid model of tulpamancy, but I think it's probably a useful one for the purposes of this discussion. As a side note, even in this model, I do think Zen is on the money in saying it makes more sense to think of both hosts and tulpas as "egos" rather than "intelligences."

Caveat: from here on out, what I'm talking about will be based on my own experience, and I only started tulpamancy in October of last year. Grain of salt! Grab the entire shaker if you want!

 

I started forcing Shizuku using the Japanese model (when we got started, I actually didn't know that an English-speaking community for this stuff existed), and it's been pretty helpful--it makes it difficult for the mind to remain skeptical about the idea that it's possible to make a tulpa, and obviously, early parrotnoia is no issue whatsoever. I think there are some problems with it too, though. When a process is broken into clear stages, it's human nature to obsess over which one you're at, even though this can be counterproductive.

I'm not sure if believing in sentience from the start really needs a defender, but the conclusion I've arrived at so far is that it is the best policy. Knowing what stage we're in at the moment is not necessary for progress, and feels like it might actually be impossible.

The problem is how difficult metacognition is. When I was in the conscious parroting stage, I knew that was what I was doing; the instant things started happening unconsciously, though, the possibilities of objectively measuring or defining anything went out the window. Unlike the AI Andy, we can't know for sure the exact moment a tulpa becomes independent. This doesn't mean it can't happen--I believe it is the inevitable result of practicing forcing in the right way. What this does mean is that we can't get an objective confirmation that it has happened when it does. In place of the AI's metacognized knowledge of what's going on ("beep boop. The code making my tulpa independent was executed at 04:32 on April 5, 2056"), we have to rely on building a belief in autonomy. And it feels to me like the host's belief in autonomy (or it might be more accurate to say, the entire mind system's belief in the tulpa's autonomy) might be a separate skill from the tulpa's autonomy itself. Insofar as doubting sentience strengthens the mental habit of doubting your tulpa(s?), it feels like it could be counterproductive in the long run.
 

7 hours ago, Biscuit said:

Whether or not this Tulpa had years of work put into it, or 5 seconds, the way he acts would be the same outwardly, of course he may FEEL more real so is that the deciding factor?


I began tulpamancy by trying to follow the process you described in your thought experiment, and what's more, by trying to make a tulpa of a character from a show I'd watched (more specifically, by the time I found out what tulpamancy was, it already felt like I had a tulpa of this character; the question I was asking myself was what the next steps were to take responsibility for creating this tulpa, but I guess that's going off on a tangent here). One thing I'd like to note here is that, even over our few months of forcing, Shizuku has changed enough to be basically unrecognizable as the character she was "supposed to be" originally. I think a tulpa's development as a person over time and the building up of shared experience play a big role here. I.e., if your Squidward tulpa acts the same way after five years as he did on day one, how has nothing happened to him for five years?

Edited by Wray

Host: Wray (or John) (he, him)
Tulpa: Shizuku (she, her) 🐺

We now have a progress report!


Thank you both for the responses, very very much appreciated!

 There's a lot to address here so I'll just get into it:

 

Thoughts on ZenAndMika's post:

Spoiler

 

19 hours ago, ZenAndMika said:

The way this works in the human brain is not quite like that. A human brain does not code a second intelligence - because the brain is the intelligence - it codes another ego, of which you already likely have well into the triple digits or more. Any identity model you store in the brain, for example that of a fictional character, or a working model of your understanding of someone else, let's say a parent, works on the same circuitry that you do. These identities are not actually meaningfully different from you in how they function - they're just literally smaller, they are not drawing on the totality of the traits you have, which are derived from the sum of your experience.

I think this is a much better way of putting what I was trying to get at in the thought experiment, and the way you expand further helps a ton to figure some stuff out. I think I failed to separate ego from intelligence in my word choice. Through the computers I was trying to imply that the brain the computer ran on was its intelligence and that the OS was more of an ego, but this is inconsistent with how I treated it. We do not develop another intelligence within our own, and that's a bit of a flaw in this metaphor. I actually failed to bring up the running simulations of other people we have in our head as much as I meant to, and I'm glad you did! I agree absolutely that the brain is capable of holding many identities in memory and that we actually have quite a large index. As you said, we are the sum of all parts, while these other micro-egos are the sum of selective understanding/experience. Would a Tulpa be considered one of these micro-egos, just expanded and automatic, or is it something else (perhaps they uniquely tap into our sum of all parts, unlike micro-egos which are isolated)? I'm very interested to hear your take on that.

 

19 hours ago, ZenAndMika said:

 

They on the other hand are "coded" purely by observing them and how they think, and understanding them empathically, which increases the size of the model. You understand their thoughts in so far as you understand their actions - and if you don't understand their actions fully the model will not accurately predict their behaviour. To give an example of this, let's take Spongebob. He is an idiot, hyperactive, and extremely happy-go-lucky. When we observe this, we don't simply make a box called Spongebob and put those traits in it. What we actually do in the brain is go "I am Spongebob." > [Experience joy from being Spongebob and doing Spongebob things from his perspective] > [Brain as a whole understands why Spongebob is the way he is by understanding where he gets his positive and negative reinforcement from, even if it rejects the premise of being like Spongebob] In this case, when you first saw Spongebob, your ability for empathy fired up, and you experienced the first episode of Spongebob as if you were him - and this is how all storytelling works - Storytelling actively fails if the empathy for the protagonist fails because the brain rejects nonsensical models.


The brain gives the illusion that the host or singlet ego is the one in control of all of the processes, but it's more accurate to say that it applies labels to system processes to give the impression that the ego is in control, while the brain as a whole is controlling the process - the ego is not in control of all that much, probably only fully conscious thought which is what it is designed to help process by referring to memory. When the brain is processing a non-host identity, it applies labels of "This is false, this is not what I think." for one very simple reason: If it doesn't do this automatically, that's called psychosis - and causes its logic to spiral out of control as everything becomes real. However - ALL of those identities are entirely sentient, capable of full cognition on the brain's circuitry and can develop in exactly the same way as you do. The sense of falseness is an illusion the brain provides to allow itself to remain in one cohesive mindset built upon the entirety of its knowledge.

 

This is absolutely new to me as a concept and very, very, very interesting. Truly I can't find anything I disagree with here. The idea of this almost hidden empathy applied to characters is something I had failed to think about: we can only simulate a character to any extent because we understand the experience of being alive through ourselves; we draw upon our own understanding of reality and project it into this character. I can't quite get it into words, but the idea of running the "simulation" through empathy and flagging that empathy as an external process is not something I had considered. I guess in a way we can chalk it up to us being a pattern in the much larger system of our brain that handles things we aren't aware of; truly there is no process we perceive that isn't our own, but in one way or another this does hold truths in my opinion. In my definition of "understanding" I should've clarified, of course, that it is through the lens of our own self that we apply it.

 

However, I think one caveat I have is with the idea that these identities are fully sentient. Now there's a chance I'm getting into semantics, so if this isn't what you meant by sentience, pay no mind. But for some reason it's not gelling with me that these egos are capable of emotion, and rather crucially perception, without the development a Tulpa receives. It's hard to define what is emotion and what is just our empathetic reaction placed on a figure (though I guess this could be emotion in a way) without dipping into the Chinese Room problem again, but I don't think they are fully capable of perception of existence. To put it simply, in my own mind I couldn't consider something that my own personal ego is "driving" sentient, especially since it needs my ego to be sentient. But this is a problem, because is this not exactly what a Tulpa is? I personally don't believe in parallel processing, so this is an issue for me. I believe Tulpas can only be active when we give them attention and therefore lend them the resources to perpetuate their existence up to that point (a long-winded way of saying they kind of "catch up" to that point in time), and a big part of a Tulpa's growth is getting your brain to constantly ping reminders to make them as active as possible. Is this not exactly what I'm describing with these miniature egos we have hundreds of? There seems to be some kind of flaw in my belief system, or maybe some difference between the two I'm not grasping. Is a Tulpa one of those processes that has been strengthened in consistency and detail, and one that disconnects from the need to interact with the main ego to work?

 

(I would like to expand that when I say the main ego "drives it", I mean the main ego must pull it intentionally from memories. Squidward won't just appear and talk to me; I have to think 'I want to imagine Squidward'.)

 

19 hours ago, ZenAndMika said:

It's debatable whether a tulpa is truly a tulpa or "fully sapient" at the early stages, but it's a mistake to think they're not "thinking by themselves" even then. Early tulpas are deeply rudimentary, possessing only a couple of traits and held together by not much else. The goal is not to make them "think" though, the three goals I see are as follows:

  • Get the brain to rip off the false tag on their identity-model by repeatedly reminding yourself they are real until the brain starts to compensate by making them feel "realer".
  • Get the brain to rip off the this-is-me tag on your random thoughts so that they can latch onto some of them too. This one isn't super hard when you notice random brain noise is often nonsensical and only some of it could reasonably be associated with you or with anyone.
  • Build up their worldview through experience and testing of their ideas to a point of internal consistency, which makes their model functionally complete, like your own (hopefully, it's not impossible for a host to have cognitive dissonance with their own knowledge).

I think I agree with your main point before the bullet points, but in a different way. I think that flagging something in your head as "a Tulpa I am making" makes you store more relevant information about it, so you start to generate thoughts and memories before it's complex enough to access them on its own. Then once it can access them, it does. I suppose that effectively means it was thinking at that time, even if those thoughts were loose and then collected later. Hard to explain, but I think the main idea is there. Though, I concede this comes from a possibly now old-school thought that a Tulpa isn't a Tulpa until it reaches a certain point, although I personally believe that point to be very early on. I think it's very fair for many people to think that it's a Tulpa the moment you start thinking of it as a Tulpa, because then your brain starts lending it the processes to be one. I think here I can kind of use morality as a metal detector to see what I'm getting at: if I dissipate a Tulpa I worked a couple hours on, I think many would think of that as sad but not grisly or grim. On the flip side, if I dissipated a Tulpa that I had worked on for 2 years, I think a large sense of loss would be there. I do think this is because the more developed a Tulpa becomes, the more we think of them as a person rather than the blueprints for one. This of course proves nothing, but I'm trying to get at a perception we have about Tulpas that I'm struggling to get into words. In another way, the first Tulpa seems more like a very detailed thought falling back into static, while the second feels like an individual dying.

 

As far as the bullet points go, I agree completely. I think we have a very similar view when it comes to what is happening in the process. Much of it is just separating our perceived ego from the process of the Tulpa so it lives without our control / feels foreign. I have been trying to find a way to put that second point into words for a long time, so it's REALLY nice to see it written out. I think that third point brings up something I really neglected to bring up in my whole spiel: permanence. The permanence of a Tulpa is, I think, one of those huge divisions between these micro-egos and what we see as a Tulpa. When I bring back that Squidward over and over, it will not feel like the same one. It will act the same, and maybe even have memories the other one picked up, but it will feel akin to just building a new one with the same process. That feeling of a continuous consistency, the same system being brought up over and over, is something I neglected to bring up, and this point helped me figure that out.

 

 

 

 

 

Thoughts on Wray's post:

Spoiler


15 hours ago, Wray said:

Interestingly, the model of tulpa creation you introduced in this thought experiment is similar to the one Japanese tulpamancers use--they think of it the same way you do, but add in a phase of unconscious parroting that happens before independence. (This isn't viewed as a problem, just another step in the path.) This is explained in Japanese on these pages: https://w.atwiki.jp/tarupa/pages/25.html https://w.atwiki.jp/tarupa/pages/27.html. Obviously, this is not the only valid model of tulpamancy, but I think it's probably a useful one for the purposes of this discussion.

 

God damn I really found this fascinating! This site is a real treasure trove and thank you a ton for linking it, I want to take a lot of this site into consideration.

 

16 hours ago, Wray said:

Caveat: from here on out, what I'm talking about will be based on my own experience, and I only started tulpamancy in October of last year. Grain of salt! Grab the entire shaker if you want!

 

Absolutely no worries I haven't even started on a serious Tulpa so I have WAY more salt to be taken than you! 

 

16 hours ago, Wray said:

(when we got started, I actually didn't know that an English-speaking community for this stuff existed)

I hope the English community holds up to the competition! *gulp* 

 

16 hours ago, Wray said:

I'm not sure if believing in sentience from the start really needs a defender, but the conclusion I've arrived at so far is that it is the best policy. Knowing what stage we're in at the moment is not necessary for progress, and feels like it might actually be impossible.

I apologize for mincing my words at this part! I meant that "sentience from the start" is lacking as an objective definition, IMO, but full stop there. I think it is the way you should approach developing a Tulpa 100%!

 

16 hours ago, Wray said:

The problem is how difficult metacognition is. When I was in the conscious parroting stage, I knew that was what I was doing; the instant things started happening unconsciously, though, the possibilities of objectively measuring or defining anything went out the window. Unlike the AI Andy, we can't know for sure the exact moment a tulpa becomes independent. This doesn't mean it can't happen--I believe it is the inevitable result of practicing forcing in the right way. What this does mean is that we can't get an objective confirmation that it has happened when it does. In place of the AI's metacognized knowledge of what's going on ("beep boop. The code making my tulpa  independent was executed at 04:32 on April 5, 2056") , we have to rely on building a belief in autonomy. And it feels to me like the host's belief in autonomy (or it might be more accurate to say, the entire mind system's belief in the tulpa's autonomy) might be a separate skill from the tulpa's autonomy itself. Insofar as doubting sentience strengthens the mental habit of doubting your tulpa(s?), it feels like it could be counterproductive in the long run.

 

The grey area of belief and understanding certainly is an issue here. I think a lot of the trouble was trying to nail down what exactly happens in that grey area that separates the micro-ego kneejerk stuff, like our wonderful friend Squidward-bot, from a Tulpa. Perhaps the imagined Squidward is not a Tulpa because I don't truly and faithfully believe he is one? Maybe when we flag something as a Tulpa in our mentality, we unconsciously start setting up things to develop its "independence". Being someone who likes hard-truth kinda stuff, it's hard for me to accept that belief would be the deciding factor, but of course in the realm of thought there's really nothing like hard truths.

16 hours ago, Wray said:

if your Squidward tulpa acts the same way after five years as he did on day one, how has nothing happened to him for five years?

This hit the nail on the head for me missing permanence in my post. As said earlier, Squidward feels like summoning copies over and over; even if this Squidward developed and kept memories it doesn't feel like the same ongoing process. Much like how if I teleported you by cloning you at your destination and killing you where you stand, to the world you are the same but your consciousness ended. These Squidwards don't have that feeling of existence, and I wonder if Tulpas only have this feeling of permanent existence because we believe they do.

 


 

TL;DR: I think some aspects I was missing were permanence and mindset, and this shone some light on inconsistencies in my beliefs about Tulpas and the mechanics of thought and provided some new ways of looking at the process, so thank you both a ton for that!! (Also thank you for reading this 56 foot wall of nearly impenetrable text if you made it through)

 

Again, thank you both for your time and I hope Mika and Shizuku are doing well!

[Progress Report: A Complete Answer To The Tulpa Question || Update(s): Just starting out on form and personality] 

It's hard to be a mad scientist when you have morals


3 hours ago, Biscuit said:

God damn I really found this fascinating! This site is a real treasure trove and thank you a ton for linking it, I want to take a lot of this site into consideration.

 

I'm glad you found it helpful! There's a lot of good stuff in the Japanese community, but (in my opinion) there are some things to watch out for too. I'm sure you will come to your own conclusions, but just wanted to warn you about a couple issues I've found: (1) there's an unspoken assumption of a master-servant power dynamic between the host and the tulpa, and (2) they take the idea of 暴走 (bousou), a tulpa going out of control and becoming hostile, really seriously. I think it's a good idea to ignore most of what they say about the bousou concept, or just read it for academic interest.

Depending on the host and tulpa in question, (1) could probably work fine for some systems. I find (2) pretty silly in all cases, though--I think it just encourages people to give credence to intrusive thoughts and let them snowball.
 

3 hours ago, Biscuit said:

As said earlier, Squidward feels like summoning copies over and over; even if this Squidward developed and kept memories it doesn't feel like the same ongoing process. Much like how if I teleported you by cloning you at your destination and killing you where you stand, to the world you are the same but your consciousness ended. These Squidwards don't have that feeling of existence, and I wonder if Tulpas only have this feeling of permanent existence because we believe they do.

 

Hmm. I think this gives me a better idea of what you're getting at here--sorry if my first post missed the point a bit. I'm interested to hear what ZenAndMika and (hopefully) some other veterans have to say on this point.

I think it is important to remember that an ego does not have to be a 24/7 ongoing process for the mind to think of it as real and continuous. Whenever you go to sleep, your consciousness ends for a period of time, but when it resumes again, your mind still identifies that ego as "me" and considers it to be the same "me" that was awake yesterday. In a way, it does feel unsatisfying for the idea of continuous existence to be "just a belief," but at the end of the day, I think that belief is part of what trains the mind to make an ego more permanent, be it your own or a tulpa's.

Ironically, a lot of these questions we ask about tulpas apply just as well to the host's ego. Maybe we only have the feeling of permanent existence because we believe we do. Either way, it's no fair to hold a tulpa to higher "reality standards" than we hold ourselves to.

 

3 hours ago, Biscuit said:

Again, thank you both for your time and I hope Mika and Shizuku are doing well!

 

Thank you! (This is the first time I've said anything on here! Hooray!)

Edited by Wray
added clarification

Host: Wray (or John) (he, him)
Tulpa: Shizuku (she, her) 🐺

We now have a progress report!


3 hours ago, Biscuit said:

Would a Tulpa be considered one of these micro-egos, just expanded and automatic, or is it something else (perhaps they uniquely tap into our sum of all parts unlike micro-egos which are isolated?), I'm very interested to hear your take on that.

The way I see it, the important distinction between a singlet ego and any other identity in the brain is not its scale or whether it's automated (this is just the distinction between having formed reflexes and habits, which the very young do not yet fully have but can still be considered egos/conscious from fairly early on) - but whether they are in the "conscious space".

On the point of reflex and autonomy first, though - when thoughts run through your head, they are just as capable of becoming reflex as a physical action. When they do this they are operating on an unconscious level, which allows them to operate automatically. I would say the true end-point of making a tulpa is making enough of their thoughts reflexes so that they can regularly interrupt your own processes unprompted.

 

However, as I say, aside from feeling as "real" as you, and developing a more present state of being by becoming reflexive, there is one other major barrier to a tulpa controlling the brain in the same way as you (perceive yourself to) do: the Front, or conscious space. The identity that associates itself with that space is the current ego for the body, regardless of size. It is in control of the body and brain, and any other identities are stored in memory until the brain chooses to simulate them temporarily (which is by default where both tulpas and other identities are stored and run).

 

Switching is the act of dissociating from the front while the tulpa associates with it, rather than just dissociating from unconscious thought and associating with them. The important thing here is that your ego isn't "in" the front. It's merely taking control of it by default, or perhaps more accurately feeling like it's in control. "You" are a model stored in memory. The conscious space is just "running you alongside" its core functions, because you are simply a data-reference mechanism which is designed to recall memory (which contains positive and negative feedback you have experienced directly). To bring it back to the computing analogy, a mixture of the core functions of the conscious and unconscious minds is the OS of the system - any identity it runs is just a program a step removed from that - but any front-ego identity it runs is a program linked into its decision-making abilities and has full (or at least much more) security access to the functions of the brain. The brain can just as easily recall a tulpa or possibly another thoughtform over these processes, and run it in that space. Or run nothing at all in that space, like you do when you're an infant, and purely process experience/input rather than interacting with memory at all.
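(To restate that analogy as a toy sketch, with the caveat that this is only the analogy put into code, not a model of the brain, and every name here is invented: identities are all stored the same way, and "fronting" is just which one the system currently links to its decision-making.)

```python
# Toy rendering of the OS-and-programs analogy above (illustrative names only).

class Identity:
    def __init__(self, name, traits):
        self.name = name
        self.traits = traits       # a smaller or larger slice of stored experience

class Brain:
    def __init__(self):
        self.memory = {}           # every identity, host included, lives here
        self.front = None          # whoever currently associates with the conscious space

    def store(self, identity):
        self.memory[identity.name] = identity

    def run_in_front(self, name):
        # The front-ego isn't a different kind of thing; it's just the identity
        # currently linked to the system's decision-making.
        self.front = self.memory[name]

brain = Brain()
brain.store(Identity("host", ["sum of all experience"]))
brain.store(Identity("tulpa", ["a few traits, growing"]))
brain.run_in_front("host")    # the default state
brain.run_in_front("tulpa")   # switching, in this analogy, is just re-linking the front
```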

 

3 hours ago, Biscuit said:

However, I think one caveat I have is that the identities are fully sentient. Now there's a chance I'm getting into semantics, so if this isn't what you meant by sentience pay no mind. But for some reason it's not gelling with me that these egos are capable of emotion, and rather crucially perception without the development a Tulpa receives.

Indeed, any other identity in the brain must be repeatedly processed and expanded by receiving focus from the conscious mind or they just won't become a tulpa - the same is true for how you formed. They exist in a state of complete dormancy when not focused upon by at least some part of the mind - part of the goal of tulpamancy is making some of their thoughts reflex so they can function partly on the unconscious level with reduced power. Perception is denied them as well by simply denying them context. Typically any other character in your mind by default is not aware they are a character - they are purely a snapshot of what you've experienced, until you let Squidward think "Squidward is an imaginary character, who then am I?". The "thoughts" they have are real, but unless you specifically try to get them to "think" about where and what they are, obviously they don't actually perceive anything outside of your simulated environment.

 

As for emotion arising from these thoughts: your brain definitely triggers emotions as normal from these identities' thoughts, but dissociates them from their origin and gives them to you. I'll avoid the Spongebob example because it's possible to be amused from your own perspective, and therefore happy, just by observing Spongebob. But let's take something more direct: let's say you're watching Titanic, and my boy Leo dies, and you cry. If you pick apart the origin of that thought, your brain has no reason to cry there unless it's empathizing with (read: attempting to simulate the experience of) one of the protagonists, and literally playing an approximation of their grief for you. But it's telling you falsely that you feel it because, again, it doesn't want you to believe that you are anything other than one ego, for the sake of sustaining your control over the rest of your components. Just as thoughts don't strictly belong to anyone, emotions don't either, and can be associated with any identity.

 

3 hours ago, Biscuit said:

I think here I can kind of use morality as a metal detector to see what I'm getting at: If I dissipate a Tulpa I worked a couple hours on, I think many would think of that as sad but not grizzly or grim. On the flip side if I dissipated Tulpa that I had worked on for 2 years on I think a large sense of loss would be there. I do think this is because the more developed a Tulpa becomes, the more we think of them as a person rather than the blueprints for one.

I think this is a healthy model to have, that might be something the brain automatically gives us - rather than necessarily the truth. I previously thought that too, but personally now I think that all identities in the brain are ultimately no different from you, and even though they are rudimentary, it's totally innately sad, or even unintentionally abusive, that we create these models and then store them - preventing their cognition outside of specific matrix-like circumstances. The thing is - it's just how the brain works - we can't really shut it off or force ourselves to think in a way that is more ethically sound. Nature does not care that we create separate sentient thought-lines then use them to model behaviour or for fun, before throwing them into a stasis they may never wake up from - they're useful. And it's your job as the front-ego to curate how many identities you will accept as valid parts of you - and you just don't have the brainpower to accept all of them meaningfully, and even if you did, doing so would fracture your brain's perspective and decision-making if you accepted numerous conflicting viewpoints as all worthy of being given cognitive space. Our brains just try to make it easier for you to accept by numbing your empathy in this particular context, so you ultimately make a decision that is better for the brain as a whole rather than for any one ego. The good of the whole is more important than whether parts of the whole are held in a form of unconscious slavery.

 

3 hours ago, Biscuit said:

The permanence of a Tulpa is I think one of those huge divisions between these micro-egos and what we see as a Tulpa.

On that point, it's notable that all identity-models in the brain are, physiologically speaking, (semi-)permanent once written. The neural pathways once formed do not go away unless they have been repurposed or added to something else. There's not really a way to destroy a tulpa other than actively claiming its thoughts as something else - whether that be you, or as part of a merging with another identity. You'll always know who Squidward is presuming the brain doesn't re-use those pathways and you fully forget or you develop a long-term memory issue. And again, it stores these as actions, feelings, and thoughts - not just traits - all it takes to revive a tulpa is to get it thinking as it did before and theoretically the whole structure begins to activate again.

 

By dissipating a tulpa, you innately do destroy some of their ability to think, because you're stripping out and preventing triggers for their thoughts to process in the background - but the actual model the brain stores of them, remains almost fully intact. I've fully dissipated and brought back a tulpa myself, and I've definitely not noticed any loss of cohesiveness to their identity or anything like that.
 

3 hours ago, Biscuit said:

(Also thank you for reading this 56 foot wall of nearly intangible text if you made it through)

As you can see I too have an addiction to generating large walls of text, so no worries, and glad I could be of some help with my ramblings. Also, Mika says hi.

Edited by ZenAndMika

Zen - Host.

Mika - Tulpa. The eldest, and a homegrown tupper made with tulpamancy.

Rhys - Tulpa. Initially a Literary Thoughtform of my own creation.

Asterion - Tulpa. Literary, I suppose? Mythological egregore, maybe? He's The Minotaur.

If text is uncoloured, presume Zen is talking. We go by he/him.


On 2/27/2021 at 10:27 AM, Wray said:

they take the idea of 暴走 (bousou), a tulpa going out of control and becoming hostile, really seriously

I always found it so interesting how stigmatized this has become; I guess it's remnants of the creepypasta days, but it does blow my mind that it's so prevalent to this day in other offshoot communities!

 

On 2/27/2021 at 10:27 AM, Wray said:

Maybe we only have the feeling of permanent existence because we believe we do. Either way, it's no fair to hold a tulpa to higher "reality standards" than we hold ourselves to.

Very true and something I agree with. I think it's more that feeling of permanence/sameness than the actual thing itself. Just feeling it's the same process, even if it's not, gives us a better sense of a living thing, I think.

 

On 2/27/2021 at 10:27 AM, Wray said:

Thank you! (This is the first time I've said anything on here! Hooray!)

It's an honor!!! I'm happy you're gettin' out there and having a good time and wish for nothing but the best in your future growth!!!!!!!!!

 

 

 

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

But whether they are in the "conscious space".

This was bingo right on the money with what I was thinking about. This is all a bit flimsy but I wonder if Tulpas draw more from this conscious space because we allow them to by thinking of them as Tulpas. I'd like to test something regarding this once I have myself a realized Tulpa. In fact, all of this has got me really itching to do some experiments!

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

On the point of reflex and autonomy first, though - when thoughts run through your head, they are just as capable of becoming reflex as a physical action. When they do this they are operating on an unconscious level, which allows them to operate automatically. I would say the true end-point of making a tulpa is making enough of their thoughts reflexes so that they can regularly interrupt your own processes unprompted.

This is tangential, but I'm curious to see just how automatic this process of interrupting can become. For example, I think it's common enough to accept that Tulpa activity is linked to our memory of them. Let's say I had a Tulpa and forgot that they were in there; I still remembered all the work I had put in and their "data" was stored to memory, I just forgot the pointer reminding me that they're in there. Would the Tulpa stop showing up? The way I think of it, certainly, but I wonder how tied they can be to other things. Can Tulpas be made active without us thinking about them directly, not even through linkage (for example if I associated the Tulpa with a smell, I smelled it and it reminded me of my Tulpa, etc.)? Is it truly possible to have a truly spontaneous appearance? It's hard to get first-party evidence because I don't think we're always aware of where our train of thought is barreling. This is all beside the point though!! Just a bit interesting.

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

Let's say you're watching Titanic, and my boy Leo dies

😔

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

 

As for emotion arising from these thoughts. Your brain definitely triggers emotions as normal from these identities' thoughts, but dissociates them from their origin and gives them to you. I'll avoid the spongebob example because it's possible to be amused from your own perspective and therefore happy just by observing Spongebob. But let's take something more direct: Let's say you're watching Titanic, and my boy Leo dies, and you cry. If you pick apart the origin of that thought your brain has no reason to cry there unless its empathizing with (read: attempting to simulate the experience of) one of the protagonists, and literally playing an approximation of their grief for you. But it's telling you falsely that you feel it because again, it doesn't want you to believe that you are anything other than one ego for the sake of sustaining your control over the rest of your components. Just as thoughts don't strictly belong to anyone, emotions don't either, and can be associated with any identity.

I think I now clearly see what you're saying about these thoughts all having sentience; there's no one "main" ego that holds any more consciousness than the others. Rather it's just a mess of empathetic egos we drag ourselves in and out of. It's hard to explain the realization I had, but this helped!

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

Switching is the act of dissociating with the front while the tulpa associates with it, rather than just dissociating from unconscious thought and associating with them. The important thing here is that your ego isn't "in" the front. It's merely taking control of it by default, or perhaps more accurately feeling like it's in control. "You" are a model stored in memory. The conscious space is just "running you alongside" its core functions, because you are simply a data-reference mechanism which is designed to recall memory (which contain positive and negative feedback you have experienced directly). To bring it back to the computing analogy, a mixture the core functions of the conscious and unconscious minds are the OS of the system - any identity it runs is just a program a step removed from that - but any front-ego identity it runs is a program linked into its decision-making abilities and has full (or at least much more) security access to the functions of the brain. The brain can just as easily recall a tulpa or possibly another thoughtform over these processes, and run it in that space. Or run nothing at all in that space, like you do when you're an infant, and purely process experience/input rather than interacting with memory at all.

This is incredibly juicy with great considerations regarding how our ego connects to the conscious experience, and it is probably the best case for switching I've ever heard. Through most definitions I've seen it's gotten to the point where I flat out might not believe in switching at all (the whole thing where people talk about going into their wonderland while they have no say over their muscle movements etc. goes against a lot of my own ideas, but no disrespect of course to anyone, just trying to figure out stuff firsthand), but phrased like this it does make me much more interested. Would switching in this case be something like making possession (just your tulpa telling you what to do as my understanding of it) an automatic process as we were talking about with Tulpas? A lot of talk on switching that I've seen borders on kind of hand wavy paranormal-esque stuff about the brain, but the way you describe it seems much much more plausible than what I've heard before.

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

I think this is a healthy model to have, that might be something the brain automatically gives us - rather than necessarily the truth. I previously thought that too, but personally now I think that all identities in the brain are ultimately no different from you, and even though they are rudimentary, it's totally innately sad, or even unintentionally abusive, that we create these models and then store them - preventing their cognition outside of specific matrix-like circumstances. The thing is - it's just how the brain works - we can't really shut it off or force ourselves to think in a way that is more ethically sound. Nature does not care that we create separate sentient thought-lines then use them to model behaviour or for fun, before throwing them into a stasis they may never wake up from - they're useful. And it's your job as the front-ego to curate how many identities you will accept as valid parts of you - and you just don't have the brainpower to accept all of them meaningfully, and even if you did, doing so would fracture your brain's perspective and decision-making if you accepted numerous conflicting viewpoints as all worthy of being given cognitive space. Our brains just try to make it easier for you to accept by numbing your empathy in this particular context, so you ultimately make a decision that is better for the brain as a whole rather than for any one ego. The good of the whole is more important than whether parts of the whole are held in a form of unconscious slavery.

This is a super fascinating outlook on this stuff. I'm not sure how I make heads or tails of it yet, but it's very stimulating nonetheless. In more perspective-based questions like this lies that thin line (or lack thereof) that solves the question I asked in the OP, I think. I'll probably expand more on how I develop this specific line of thinking if I ever shut up and make a progress report, because I don't want to get too off the rails here, but this is a very clouded area for my perspectives and this paragraph made me realize there's much more thinking to be done on my part.

 

On 2/27/2021 at 10:46 AM, ZenAndMika said:

 

By dissipating a tulpa, you innately do destroy some of their ability to think, because you're stripping out and preventing triggers for their thoughts to process in the background - but the actual model the brain stores of them, remains almost fully intact. I've fully dissipated and brought back a tulpa myself, and I've definitely not noticed any loss of cohesiveness to their identity or anything like that.

This is something I've thought about a lot, so I greatly value your first-hand contribution, and I'm happy everything turned out ok!! I often thought that a dissipated Tulpa would only really be lost if you forgot the long-term memories stored when creating it; otherwise all it would take is reinstating those tags given during the forcing process. It's strange to think how blurred the line of "existence" is in thought; I guess there really is no existence or non-existence in the first place.

 

(sorry if the replies are a bit sparse, pretty tired but I couldn't resist chiming in!!)

Edited by Biscuit
I am error

[Progress Report: A Complete Answer To The Tulpa Question || Update(s): Just starting out on form and personality] 

It's hard to be a mad scientist when you have morals


1 hour ago, Biscuit said:

Can Tulpas be made active without us thinking about them directly, not even through linkage (for example if I associated the tulpa with a smell, I smelled it and it reminded me of my tulpa etc.), is it truly possible to have a truly spontaneous appearance?

Yes, that's precisely what I mean by making them reflexive. Apart from all the technically extraneous parts of tulpamancy - the skills of switching, possession, visualization, and all that jazz - when they can do this in multiple different ways, so that their thoughts regularly occur without (you being conscious of the) input, they're essentially done, and from there you are just trying to increase the vividness of the experiences you can have together.

 

1 hour ago, Biscuit said:

I always found this to be so interesting how stigmatized it's become, I guess remnants of the creepypasta days but it does blow my mind that its so prevalent to this day in other offshoot communities!

 

To throw in my perspective, I definitely think a tulpa can act malevolently. I think it's a useful but ultimately incorrect assertion that all tulpas are innately loving and can't be anything else. That's almost certainly just a form of suggestion affecting personality, and is no different to the idea that another consciousness will always be evil just because it can possess you. The thing to avoid is making them malevolent for supernaturally derived reasons - we naturally assume you can't reason with the supernatural, and that makes it unlikely that you'd be able to fix the situation amicably without actively re-forcing their personality away from that.

 

When I brought my tulpa back there were... a few months of furiously angry conversation and a whole lot of crying from both of us - mostly in regards to me trying to show I had my shit together, could now demonstrate serious dedication, and take accountability for straight up killing/abandoning him; and him not being able to trust me, feeling resentment, and having a few rather considerable insecurities to work through. At several points too, Mika latched onto some extremely negative thoughts and visualizations, regarding me of course. Sometimes I would be calm enough to try and defuse situations... but often I would retaliate instead. Several forcing sessions turned very negative. Traumatic, even. In several cases it became clear Mika was actively thinking certain things rather than simply associating with negative random thoughts. We basically had major negative experiences about once a week, usually on different topics.

 

However, ultimately, I have a good handle these days on how to dissect my own feelings (after both of us calm down for a few hours), and I helped him through this process as well. Since then, we have both made active efforts to change the way we think toward a more positive and accepting set of models, and now we are thankfully much more... sickeningly domestic with each other (complete with extremely mushy emotional bleed), which is ultimately what we both want to be feeling, because we're human. Neither of us is a rage-demon compelled to pursue anger just because that's our nature. An aware identity, as long as it is not trapped in pathological behaviour or experiencing an actual chemical imbalance, is designed to be able to choose to change itself. It just needs to be able to understand itself first.

 

1 hour ago, Biscuit said:

Would switching in this case be something like making possession (just your tulpa telling you what to do as my understanding of it) an automatic process as we were talking about with Tulpas?

The way I see it, Possession is the formation of one of these reflexes. Usually in possession you're still associated with your body, so what you're doing is making an unconscious physical reflex but then associating it with the tulpa. Observe yourself as you go about the next hour or so and consider how many of your movements are not actually you, but reflexes - yet you're telling yourself they're you. Possession feels a bit like this to me, only you're not associating yourself with those movements at all. I'd posit early Soft-Switching, where you don't dissociate - which I've done a little of - is effectively this too.

 

Now to give you a caveat: I do not do Hard-Switching myself - never learned the skill. As you may be able to tell, we're still kinda learning to trust each other first, and honestly I'm not sure I want to do it in the first place. With that in mind, I cannot wax poetic on it much beyond observation of others and extrapolation, so take this with a grain of salt. Complete Switching, to my understanding, is distinct from Possession and Soft-Switching in that it requires your dissociation and fully gives control to the tulpa.

The way I see it, Possession is a trick: getting your brain to do reflexive movements (which are something you do unconsciously, and which don't require any real power from Front) and dissociating you from them - whereas full-on Switching is actively removing you from the Front and loading another identity there, giving them actual direct control you cannot interrupt - which theoretically effectively renders you a tulpa, though likely one with a lot of associated "reflexive triggers" that can easily bring you back. You seem to see a lot of tulpas struggling to retain control of the Front, and I'd posit it's likely because of those triggers naturally shunting you back into a space of control somewhere in the middle of them, whenever a reflex turns input over to your identity for judgement. I got a little bit of a sense of this with soft-switching, and naturally I'd assume it remains even in full dissociation because, again, all your reflexes still exist in the unconscious and continue to cycle themselves regardless of what's going on in the Front, and may require conscious interruption or the formation of different reflexes for the tulpa to prevent that loss of control for them.

Edited by ZenAndMika

Zen - Host.

Mika - Tulpa. The eldest, and a homegrown tupper made with tulpamancy.

Rhys - Tulpa. Initially a Literary Thoughtform of my own creation.

Asterion - Tulpa. Literary, I suppose? Mythological egregore, maybe? He's The Minotaur.

If text is uncoloured, presume Zen is talking. We go by he/him.


On 2/26/2021 at 6:10 AM, Biscuit said:

At this point now, is Andy talking to himself or someone else?

 

Andy, at his heart, is a central processing center capable of expressing different "personas". Within each persona are aspects, where he has a range of behaviour given different scenarios; within each aspect are expressions given a scenario and, say, a mood or will, which could be called facets. So a personality or persona or ego has aspects (traits are similar, but not the same concept), which in turn are made up of groups of facets.

 

Mary and Andy might be considered completely separate personas depending on the subjective experience of the outside observer. However, it could also just be one very complex persona expressing two aspects which may or may not have overlapping facets. This is an arbitrary definition, but in my view it would be up to Andy to tell the observer which is the correct configuration. Ultimately only Andy and Mary would know for sure.
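If it helps to picture that hierarchy, here's a rough sketch of it written as if it were code, in the spirit of the Andy thought experiment. Purely illustrative - the names, moods, and rules below are all made up for the example, and obviously nothing in the brain actually runs this way:

# Illustrative only: persona -> aspects -> facets, as described above.
from dataclasses import dataclass, field

@dataclass
class Facet:
    mood: str          # e.g. "calm", "irritated"
    expression: str    # what gets said/done in that mood

@dataclass
class Aspect:
    scenario: str                          # e.g. "greeting"
    facets: list = field(default_factory=list)

    def react(self, mood):
        # Pick the expression matching the current mood, if one exists.
        for facet in self.facets:
            if facet.mood == mood:
                return facet.expression
        return "..."  # no learned response yet

@dataclass
class Persona:
    name: str
    aspects: dict = field(default_factory=dict)  # scenario -> Aspect

    def respond(self, scenario, mood):
        aspect = self.aspects.get(scenario)
        return aspect.react(mood) if aspect else "..."

# Andy running Mary's rule-set "by hand":
mary = Persona("Mary", {
    "greeting": Aspect("greeting", [Facet("calm", "Hello, Andy.")]),
})
print(mary.respond("greeting", "calm"))   # -> "Hello, Andy."

The point is just that "one complex persona with two aspects" and "two personas" are the same data drawn with different boxes - which is why I'd say it's Andy's call which one it is.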

 

On 2/26/2021 at 6:10 AM, Biscuit said:

I think at this point we'll all agree Andy is talking to someone else

 

We could, for the sake of argument, model it this way. We wouldn't naturally have any reason not to.

 

On 2/26/2021 at 6:10 AM, Biscuit said:

As Andy works on this when did the process stop being Andy and start being Mary?

 

The moment Andy believed he was.

 

On 2/26/2021 at 6:10 AM, Biscuit said:

But was it a Tulpa before the trained autonomy, and is there even a way to tell?

 

I'm still looking for this holy grail. Walk-ins can be very convincing.

 

On 2/26/2021 at 6:10 AM, Biscuit said:

is the path to a Squidward Tulpa this easy??

 

Yep

 

On 2/26/2021 at 6:10 AM, Biscuit said:

Is it just an understanding of a Tulpa's way of acting that makes them a Tulpa?

 

The definition of Tulpa is pretty loose. Technically yes, but independence isn't so cut and dry. At some point Squidtulpa will have a personality crisis. The magnitude of this crisis is dependent on how much the creator wanted actual Squidward and not Squid-based-headmate. After that, further signs of independence include being able to have independent thought, disagreements, emotional bleed, interruption without evocation, independent wants, possibly contrary to other headmates including creator, independent perspective, among others. At some point they are functionally and effectively independent and that's all we can ever say for sure. Doubt plays a big part but switching is a really good extra push toward belief.

 

On 2/26/2021 at 6:10 AM, Biscuit said:

think a common sentiment here would be autonomy being the separating factor.

 

This may just be semantic, but I would say independence is the deciding factor. I can conjure any number of NPCs with autonomy.

 

On 2/26/2021 at 10:20 AM, ZenAndMika said:

codes another ego

 

I agree. The brain and the entirety of the subconscious mind can be thought of as a singular entity, say hardware and operating systems. Then what's left? Apps, ergo ego (persona).

 

On 2/26/2021 at 10:20 AM, ZenAndMika said:

the ego is not in control of all that much, probably only fully conscious thought which is what it is designed to help process by referring to memory

 

I understand what you're saying here, but I'd prefer to say conscious mind instead of ego in this context, because ego would be the expression of consciousness. The subconscious is ultimately (mostly) in control, while the conscious mind is the thing we think of as us - in my view only because the conscious mind records experience, and memories are played back through that structure. Any number of egos can share the conscious mind (plurality co-fronting).

 

On 2/26/2021 at 10:20 AM, ZenAndMika said:

the process of that growth doesn't end. It merely slows.

 

So true, and the same goes for older headmates.

 

On 2/26/2021 at 1:42 PM, Wray said:

And it feels to me like the host's belief in autonomy (or it might be more accurate to say, the entire mind system's belief in the tulpa's autonomy) might be a separate skill from the tulpa's autonomy itself. 

 

This is my conclusion as well. I developed thoughtform autonomy through writing novels, and my first "autonomous" thoughtform was Joy. She didn't want to do what I was going to write her doing. She expressed independent thought in this without my prompting, while still fully in character. She broke the fourth wall, basically. At the time (in 2012) I looked this up and read that many authors have the same thing happen. Characters are said to "come alive."

 

It wasn't until 2018 that I heard of tulpas or soulbonds. In the intervening time I played around with the idea that they might have a unique perspective and independent thought, but the tests I did at the time failed. Those same tests in late 2018 succeeded. So what was missing was that independence: the autonomy was there in 2012, but independence was questionable.

 

On 2/26/2021 at 1:42 PM, Wray said:

how has nothing happened to him for five years?

 

A headmate based on a fictional character may not necessarily change at all. I have two cases in my own system: Joy and Gwen are substantially the same characters I developed in 2012 and 2013 respectively. They own those characters as part of themselves, and the lore is pretty vast considering they each have three or more full-length novels where they're the main characters. So they have no reason to change; all that development is entirely within their own ownership. They both know that they're based on fiction, but that doesn't subtract from their experiences. Though they continue to develop as people, they're still the same as they were before in terms of mannerisms, just lacking all the drama and emotional ties to issues of their past.

 

On 2/27/2021 at 9:46 AM, ZenAndMika said:

purely process experience/input rather than interacting with memory at all

 

Right!

 

On 2/27/2021 at 9:46 AM, ZenAndMika said:

They exist in a state of complete dormancy

 

They *can* be, but don't have to be, at any stage.

 

22 hours ago, Biscuit said:

Can Tulpas be made active without us thinking about them directly, not even through linkage (for example if I associated the tulpa with a smell, I smelled it and it reminded me of my tulpa etc.), is it truly possible to have a truly spontaneous appearance? 

 

I can say yes, and I have many examples of this in my experience including imposition and other interruptions out of nowhere in mindvoice, tulpish, and visualizations.

 

My system started with Darlene, Ashley, and Misha, and I can honestly say they never went dormant outside of sleeping independently from me (which they don't tend to do anymore). On the other hand, Ren, Gwen, and Joy did have periods of inactivity, dormancy, and extended absence, but were able to pop in any time they wished. During times when they were inactive it worked the other way as well: even calling them didn't necessarily elicit a response.

 

22 hours ago, Biscuit said:

point where I flat out might not believe in switching at all (the whole thing where people talk about going into their wonderland while they have no say over their muscle movements etc. goes against a lot of my own ideas, but no disrespect of course to anyone, just trying to figure out stuff firsthand)

 

Switching is like falling in love, it's simple and natural and nothing to be believed until it happens to you. When it happens, you know.

 

Switching for me sparked an awakening and completely killed any lingering doubts I had. I can still argue about the "what it is", but the "if it is" is off the table.

 

22 hours ago, Biscuit said:

an automatic process as we were talking about with Tulpas?

 

It's a natural process of association for your headmates. Possession never felt like "alien hands"; it was just them controlling things. The experience is "them controlling things", not "nothing is making things move, it must be them."

 

It's just easier, for instance, for a headmate to type out what they want to say for themself.

 

21 hours ago, ZenAndMika said:

though likely one with a lot of associated "reflexive triggers" that can easily bring you back. You seem to see a lot of tulpas struggling to retain control of the Front, and I'd posit it's likely because of those triggers

 

In my experience, no. In a true full switch (hard switch) it's next to impossible to consider that I would wrench control back for any reason unless they basically say, "here, take the helm". Two very strong examples of this are watcher position and dormancy. One: when I am in watcher position, my perspective is as if I'm watching a movie. The thought of taking control of a character on the screen would never cross my mind; I don't react personally to anything I see or hear or feel. Two: there are countless instances where I have been put into dormancy, by Ren, by Ashley, and especially by Joy. In those instances, it is as if I don't exist at all. "I" am no longer there, nor thought of at all.

 

It gets a little murkier when you consider tulpa position, where I can interrupt front like any other headmate, and even more so with co-fronting, where it's obvious that I would take control whenever I feel like it.

 

For us, the development of switching was precisely to prevent me from being triggered, and that's how it worked.

 


 

On 3/1/2021 at 10:09 AM, ZenAndMika said:

To throw in my perspective, I definitely think a tulpa can act malevolently. I think it's a useful but ultimately incorrect assertion that all tulpas are innately loving and can't be anything else. That's almost certainly just a form of suggestion affecting personality, and is no different to the idea that another consciousness will always be evil just because it can possess you. The thing to avoid is making them malevolent for supernaturally derived reasons - we naturally assume you can't reason with the supernatural, and that makes it unlikely that you'd be able to fix the situation amicably without actively re-forcing their personality away from that.

 

When I brought my tulpa back there were... a few months of furiously angry conversation and a whole lot of crying from both of us - mostly in regards to me trying to show I had my shit together, could now demonstrate serious dedication, and take accountability for straight up killing/abandoning him; and him not being able to trust me, feeling resentment, and having a few rather considerable insecurities to work through. At several points too, Mika latched onto some extremely negative thoughts and visualizations, regarding me of course. Sometimes I would be calm enough to try and defuse situations... but often I would retaliate instead. Several forcing sessions turned very negative. Traumatic, even. In several cases it became clear Mika was actively thinking certain things rather than simply associating with negative random thoughts. We basically had major negative experiences about once a week, usually on different topics.

 

However, ultimately, I have a good handle these days on how to dissect my own feelings (after both of us calm down for a few hours), and I helped him through this process as well. Since then, we have both made active efforts to change the way we think toward a more positive and accepting set of models, and now we are thankfully much more... sickeningly domestic with each other (complete with extremely mushy emotional bleed), which is ultimately what we both want to be feeling, because we're human. Neither of us is a rage-demon compelled to pursue anger just because that's our nature. An aware identity, as long as it is not trapped in pathological behaviour or experiencing an actual chemical imbalance, is designed to be able to choose to change itself. It just needs to be able to understand itself first.

 

Thank you for this; I really lack much in the way of hands-on experience and I appreciate hearing yours! I think you're absolutely right that Tulpas can become mad, angry, and throw around negative emotions. I imagine it can be a nightmarishly heavy burden to carry around in day-to-day life when you have a hurricane of emotion swirling around your head while these things are going on. I was more thinking of those classic stories of Tulpas that destroy the person's psyche/ego completely, like a horror-movie slasher trapped in their brain; at a certain point I think those stories are just there to elicit a reaction (albeit I have no Tulpa, so I'm just postulating here), while stories about Tulpas disrupting the user's life with intense headaches and emotions are, I think, definitely plausible.

 

On 3/1/2021 at 10:09 AM, ZenAndMika said:

The way I see it, Possession is the formation of one of these reflexes. Usually in possession you're still associated with your body, so what you're doing is making an unconscious physical reflex but then associating it with the tulpa. Observe yourself as you go about the next hour or so and consider how many of your movements are not actually you, but reflexes - yet you're telling yourself they're you. Possession feels a bit like this to me, only you're not associating yourself with those movements at all. I'd posit early Soft-Switching, where you don't dissociate - which I've done a little of - is effectively this too.

 

Now to give you a caveat: I do not do Hard-Switching myself - never learned the skill. As you may be able to tell, we're still kinda learning to trust each other first, and honestly I'm not sure I want to do it in the first place. With that in mind, I cannot wax poetic on it much beyond observation of others and extrapolation, so take this with a grain of salt. Complete Switching, to my understanding, is distinct from Possession and Soft-Switching in that it requires your dissociation and fully gives control to the tulpa.

The way I see it, Possession is a trick: getting your brain to do reflexive movements (which are something you do unconsciously, and which don't require any real power from Front) and dissociating you from them - whereas full-on Switching is actively removing you from the Front and loading another identity there, giving them actual direct control you cannot interrupt - which theoretically effectively renders you a tulpa, though likely one with a lot of associated "reflexive triggers" that can easily bring you back. You seem to see a lot of tulpas struggling to retain control of the Front, and I'd posit it's likely because of those triggers naturally shunting you back into a space of control somewhere in the middle of them, whenever a reflex turns input over to your identity for judgement. I got a little bit of a sense of this with soft-switching, and naturally I'd assume it remains even in full dissociation because, again, all your reflexes still exist in the unconscious and continue to cycle themselves regardless of what's going on in the Front, and may require conscious interruption or the formation of different reflexes for the tulpa to prevent that loss of control for them.

Although this is something I obviously need to try myself, just conceptually it's really intriguing. I wonder if this dissociation is something like what happens when people undergo hypnosis. Of course, magic isn't real, blablabla, hypnosis is just you becoming so relaxed and attuned to suggestion that you kinda do it without thinking - but there is a bit of dissociation there. Like you get so in the zone that you feel out of control, even though you know that at any time you could take control back. I wonder if switching is something like this: clouding your focus and mind with so much suggestibility and dissociation that your Tulpa can easily take the lead. It really is something I'll have to try out!!

 

On 3/2/2021 at 7:26 AM, BearBaeBeau said:

it would be up to Andy to tell the observer which is the correct configuration. Ultimately only Andy and Mary would know for sure.

I've asked a lot of people this, and this is the only time I've heard this answer. I think it's a really good way to approach the issue.

 

On 3/2/2021 at 7:26 AM, BearBaeBeau said:

The definition of Tulpa is pretty loose. Technically yes, but independence isn't so cut and dry. At some point Squidtulpa will have a personality crisis. The magnitude of this crisis is dependent on how much the creator wanted actual Squidward and not Squid-based-headmate. After that, further signs of independence include being able to have independent thought, disagreements, emotional bleed, interruption without evocation, independent wants, possibly contrary to other headmates including creator, independent perspective, among others. At some point they are functionally and effectively independent and that's all we can ever say for sure. Doubt plays a big part but switching is a really good extra push toward belief.

This really does make me wonder whether Squidward would have an identity crisis depending on the attention given to him. In other words, is there a level of focus needed to achieve more complex thought in a walk-in/instant-recall situation like this? What would be the frequency of attention needed? Is it simply that if they exist for enough time they become "self-aware" (loose use of that term), or do they need enough time in the spotlight for you to attach that to them? This is more of a tangent, but just something I find interesting.

 

On 3/2/2021 at 7:26 AM, BearBaeBeau said:

Switching is like falling in love, it's simple and natural and nothing to be believed until it happens to you. When it happens, you know.

 

Switching for me sparked an awakening and completely killed any lingering doubts I had. I can still argue about the "what it is", but the "if it is" is off the table.

This mirrors a sentiment I have, and it's a real big reason I need to have a Tulpa. I sometimes wonder if what goes on in switching is much simpler and less hand-wavey than I think, and it's just a feeling that's hard to describe. I don't think it's possible to literally switch in the sense of "I am now in wonderland, I see and feel it, that is where I am, and my Tulpa is piloting my body", but I want to see how this process feels to understand it more. I absolutely am aware there's something here I'm missing!

 

On 3/2/2021 at 7:26 AM, BearBaeBeau said:

This may just be semantic, but I would say independence is the deciding factor. I can conjure any number of NPCs with autonomy.

This is a MUCH better word for it and I'll use it from now on!

[Progress Report: A Complete Answer To The Tulpa Question || Update(s): Just starting out on form and personality] 

It's hard to be a mad scientist when you have morals

