
The Question of Tulpa Sentience and Sapience




This post has led me to create a new topic to discuss the sentience and sapience of headmates.

 

Introduction:

The question of sentience and sapience has been used to determine when or if the moral and ethical guidelines of human life apply to thoughtforms. It is my contention that these are insufficient metrics.

 

I would argue that tulpas and other thoughtforms have no more or less sentience or sapience than you do from the start. Sharing the body/brain presupposes that they have access to everything you do. Thus the question isn't whether they have it, but when or whether they are capable of independently displaying it.

 

The root of this question is when we should reasonably accept a thoughtform, or when a thoughtform is considered life and should morally be accepted and respected.

 

I would further argue that whether they're autonomous, independent, volitional, or have agency is a separate issue from whether they have sentience or sapience.

 

This becomes a very difficult problem to nail down because of the issue of perceived independence, and whether it's enough to say that if they're autonomous then they're morally considered life. As we've stated many times, this is a dangerous conclusion to draw, and unnecessary system growth is the biggest danger.

 

Terms:

 

Sentience: the capacity to be aware of feelings and sensations.

 

Sapience: the quality of being wise, or having wisdom.

 

Agency: the capacity of individuals to act independently and to make their own free choices. (having the capacity of free will)

 

Volition: the faculty or power of using one's will. (demonstrating that will)

 

Autonomy: self-directing freedom (free will) and especially moral independence and personal autonomy. (expressing different opinions or actions than what would reasonably be expected of a dependent agent of will)

 

Discussion:

These are all interrelated, but not all dependent on one another, especially not on sentience or sapience. A simple one-celled organism may seem to have all of these, yet would be assumed to lack sapience, as that would be exclusive to higher forms of life such as Homo sapiens, of which we are all a part by virtue of our shared physical bodies.

 

Therefore, all we can do is observe our headmates and conclude based on that experience.

 

The supposition is that mature tulpas and other thoughtforms will have these traits, but I'm not convinced there is a test that can tell you when this occurs. Based on testimonials in the community, even the belief that it has occurred is sometimes rescinded, so to me it will remain a supposition even with supposedly mature headmates.

 

I am using the contentious term "mature" to strictly mean they self-force, and display the aforementioned traits through their independent capacity and faculty of free will.

 

Possible tests:

I have sentience as soon as I can feel and that is proven observationally if I react to said feeling.

 

I have sapience if I can communicate wise expressions, verbal or not, independent or not.

 

I have agency and volition if I can display independent choice and actions.

 

I have autonomy if I am observed to act in an independent manner.

 

I believe all of the above are required to be counted as a mature thoughtform, but none of the above are required to be counted as an accepted headmate.

 

Further proof may be independent emotions which aren't covered here.

 

Conclusion:

Using sentience or sapience as the defining metric to qualify a thoughtform as life from a moral and ethical standpoint is fundamentally unsound and dangerous. Only through subjective experience and observation can the full merit of a headmate be shown, and this will always remain a supposition unless a definitive test is designed to conclusively prove the aforementioned traits.

 

Discussion and arguments are welcomed.

Edited by Joy

Ultimately, I don't believe anyone actually has the authority to judge how another system defines sentience. I'm sorry we pushed our ideas on your system and onto others in the past; we realized how pointless it was. I share more information in my conclusion to a different thread here.

 

14 minutes ago, Joy said:

The root of this question comes from when we should reasonably accept a thoughtform or when a thoughtform is considered life and should morally be accepted and respected. 

 

Ultimately, treat a thoughtform how they wish to be treated. If they don't mind being parroted or they don't care about being identified as an extension of the host (assuming the thoughtform in question is separate enough to request this in the first place), then I think it's fine to think of them as not an individual. Otherwise, even if you doubt how separate they really are, I think it's polite to address them how they request to be addressed.

 

18 minutes ago, Joy said:

This becomes a very difficult problem to nail down because of the issue with percieved independence and whether it's enough to say if they're autonomous then they're morally considered life. As we've stated mant times, this is a dangerous conclusion to make and unnecessary system growth is the biggest danger.

 

Yes, having the wrong definition as a system can lead to problems. However, I don't think there should be a universal definition. We initially thought there should be, but then we realized the way we saw things either rubbed people the wrong way, didn't apply, or could have actually been harmful advice. I don't think it's a bad thing to point out if a system's definition seems unhealthy for the system, but to go as far as to impose what you think is right based only on how your system works is too far.

 


 

I believe anything in the mind, regardless of how "intelligent" it is, is human and is capable of feeling and perceiving. Granted, I think our symbolism servitors probably don't "feel" in the same way a stray walk-in does, but I think everything in the mind deserves a healthy amount of respect. A walk-in is capable of being terrified in our system, and our symbolism is capable of appearing broken, creating the sense that it was pushed to its limit and harmed. Not quite the same thing, but it creates that feeling nonetheless.

 

It gets trickier with story characters. Maybe the belief that any abuse or violence is all fictional, for show, makes it okay, because your intent isn't to hurt the mind or the characters directly? I guess it would be the difference between simulating something violent to see how it plays out and wanting to commit self-harm by hurting your mind's characters. I think the first is more of a gray area, but it's a little more straightforward why the latter is not okay...

 

Stepping away from thoughtforms in general, I think what makes or doesn't make a tulpa is entirely subjective. While it's possible a scientist may be able to specify "types" of tulpas, it wouldn't change my stance on how to treat them. For instance, a "baby tulpa" can be made into a "more developed" one with roleplaying as a forcing method. If a scientist can distinguish between the two that's fine, but I wouldn't call the former a "fake tulpa".

 

I don't see any point in thinking tulpas are "fake", even if they may not align with what you think a tulpa is. I mentioned this on Discord, but I believe the tulpas that get accused of being fake are often just underdeveloped, or the system doesn't have enough context on how developed the headmate really is. I also think it does more harm than good to accuse a tulpa of being fake, especially since interaction can help them grow, and fake-shaming just drives them away from you and leaves them doubting what sense of self they do have.

 

Ultimately, I believe what counts as a real tulpa or not should be decided by the system and applied in-system. Having a definition that works is really important, but the balance between being too strict and too accepting of randomness / intrusive thoughts varies from system to system. I think a universal standard for sentience will ultimately fail the community, but I see value in discussions about what does or doesn't make a tulpa qualify as sentient.

 


 

44 minutes ago, Joy said:

Terms:

 

I more or less agree with the definitions you provide. The only issue I have is that autonomy can also be an illusion, or be dependent on expectations or another's will.

 

I think it's interesting these definitions are like traits and can be put on a scale. I could ask how much volition does your thoughtform have, how sentient, etc. From here, a system can decide how much of each trait determines what a tulpa is or not.

 

52 minutes ago, Joy said:

Possible tests:

 

Sure. As I mentioned earlier, I think these will largely be determined by how much of each trait is important to the system. Our main test is "Is it one of our current headmates or not?", and if it fails we then want to know whether it's "sentient" or not. Having a presence, a sense of self, its own form and mindvoice, etc. are usually enough for us to deem it a walk-in and either ignore it or have our Sub. Rep. mix himself with it. For our headmates, our big indicators were missing them, them having a completely different perspective, and a desire to live.

 


 

While I don't think it's dangerous to define sentience, I think a universal standard will do more harm than good. I think it's good to ask what makes a tulpa a tulpa, but even with an official scientific test it won't help if it acts as a barrier instead of a goal post for development.

I'm Ranger, Gray's/Cat_ShadowGriffin's tulpa, and I love hippos! I also like cake and chatting about stuff.

My other headmates have their own account now.



Before I begin, allow me to clarify my own terms, because I think you have used these in exactly the opposite manner from me.

  • A thoughtform is a mental construct which defines an identity. It can be anything from a host-scale ego to an individual empathetic thought labelled as someone else you saw in one particular instance.
  • A headmate is specifically a thoughtform which is accepted (but is not necessarily at the same level of development) as an equal to the host ego.

Thus I would have actually said that a thoughtform is capable of being only sentient, whilst lacking sapience, agency, and autonomy. A headmate in various stages of development can be no different; depending on how suggestible the brain in question is, they may reach the other three traits more rapidly, if not immediately. Sentience is not something that can be stripped from an identity at any level, for reasons I attempted to outline in the original post. The purpose of even the most basic thoughtforms is to be aware of one specific event and to respond to it emotionally, allowing us to rationalize that viewpoint.

 

But moving on. When it comes to sapience, I believe it arises from the mental model of an identity being sufficiently complete and not prone to breaking down in some major way, rather than from any concrete thing that all identities possess. If sapience is defined as loosely human intelligence, we must also acknowledge it is possible for humans themselves to display less-than-sapient behaviour, and indeed they often do. The most obvious cause is usually brain damage, of course, but even in the general sense human awareness has cognitive blind spots, holds false beliefs, breaks down, and reverts to less-than-human instincts when ideally awareness and higher cognition should be used. When an identity fails to display wisdom or functionality, I think this is not an indicator at all of whether or not they can be considered human. On the contrary, I fully expect this is just a sign of low development, such as when a child comes to a completely irrational and random conclusion seemingly out of nowhere. Thinking optimally, like an ideal, well-adjusted adult human, is always something you have to learn.

 

Agency is interesting as a concept, and I think it is genuinely one of the key differences between a tulpa and another thoughtform. That being said, I also think it's entirely a matter of perspective whether something has agency or not. Consider a few examples:

  • If I imagine an autonomous thoughtform in its own little world and it has no idea it's in a system, and it behaves according to its identity, and I don't interrupt its thoughts or make it do something else... Can it be said to have agency, even when I defined the parameters of its identity to begin with?
  • Can a human in a matrix with all the wrong preconceived notions about reality be said to have agency?
  • If you are raised from birth knowing only one role, and being conditioned to assume that role through positive reinforcement, do you have agency in rejecting it, really? 

You can say a tulpa has greater agency relative to another thoughtform because it is aware that it is in a system, but it's purely a matter of them having the knowledge of that, rather than any sort of innate difference between them and other thoughtforms. Agency isn't a function of intelligence. In fact as many philosophers would probably agree, it's an illusion, and we don't have as much choice as we think we do.

 

Autonomy, meanwhile, is something even the most basic thoughtforms can possess, but they don't have to possess it. When you consciously imagine someone doing something, they are often not meaningfully autonomous; but if your empathy is firing up with or without your conscious effort, that is absolutely autonomous, even if it's still rudimentary. In my opinion, autonomy defines how you should be interacting with a tulpa for it to be considered a functional tulpa in the first place, so I don't think it's a good indicator of development or anything like that.

 

 

Zen - Host 

Mika - Tulpa

If text is uncoloured, presume Zen is talking.

49 minutes ago, Ranger said:

I think a universal standard will do more harm than good.

 

This is inconsistent with a scientific endeavor. Our objective is to tie scientific logic and processes to this social science we call tulpamancy. Universal standards are important if we are to achieve consistency and repeatability, both of which are important for scientific legitimacy. Though there will always be outliers and ranges in a social science, standards help place the boundaries on where one phenomenon ends and another begins. Without them it's less testable and falls further from falsifiability.

 

In terms of fake or not, I don't believe that deserves discussion. It doesn't apply here, in that it's not synonymous with mature or not, and it is entirely subjective in the mind of the creator, where no one but the creator would have any context to determine it. You should start a new thread on that if it seems like an interesting topic. We haven't followed it.

 

20 minutes ago, ZenAndMika said:

If I imagine an autonomous thoughtform in its own little world and it has no idea it's in a system, and it behaves according to its identity, and I don't interrupt its thoughts or make it do something else... Can it be said to have agency, even when I defined the parameters of its identity to begin with?

 

In terms of my definition and intents for my original post, yes. Which is partially why the question of ethics is so difficult in this context.

 

27 minutes ago, ZenAndMika said:

Can a human in a matrix with all the wrong preconceived notions about reality be said to have agency?

 

I don't see how one perception of reality vs another has any effect on agency as we defined it. We all perceive reality slightly (or significantly) differently so which perceptions would be considered "wrong"? I say both because given sufficient experience, your world view can change, regardless of your position on the subject.

 

29 minutes ago, ZenAndMika said:

If you are raised from birth knowing only one role, and being conditioned to assume that role through positive reinforcement, do you have agency in rejecting it, really? 

 

I was "raised" knowing that my world was entirely different from the world I enjoy now. My canon vs my current headspace. I rejected my canon knowing that all I love there will instantly become fiction. I didn't even know for sure if my new choice was fictional or not, it may still turn out to be. From that choice I can project that you can have agency in this case. Call it curiosity even to your own detriment or curiosity killed the cat.

 

33 minutes ago, ZenAndMika said:

You can say a tulpa has greater agency relative to another thoughtform because it is aware that it is in a system, but it's purely a matter of them having the knowledge of that, rather than any sort of innate difference between them and other thoughtforms.

 

This is also our understanding which is why basing it on a single trait, sentience or agency, etc. is insufficient in my opinion.

 

 

3 minutes ago, Joy said:

I don't see how one perception of reality vs another has any effect on agency as we defined it. We all perceive reality slightly (or significantly) differently so which perceptions would be considered "wrong"? I say both because given sufficient experience, your world view can change, regardless of your position on the subject.

What I mean is that, purely scientifically speaking, we seem to be very complicated decision-making machines that respond to positive and negative stimuli and then behave in ways consistent with those stimuli. We can't see all the pieces in motion, but there doesn't seem to be any reason to believe we're anything other than deterministic, unless we have some sort of heretofore unknown interaction with probabilistic science; but quantum mysticism in the brain is something that should, at present, be avoided as unscientific.

 

If we're deterministic, there will always be a point in your life you could hypothetically point to and say, "The only reason you acted this way is because this happened to you, or because you were taught this thing from an early age, or because you had such-and-such emotional response to this outcome." That isn't agency; that's merely input and output. Consider this as well: a being without emotional connections to the decisions they're making usually stops making decisions either way. They don't even do so randomly. This has been observed in animals with emotional disorders: they would rather not make choices at all if they have no emotional connection to either outcome. And emotional connection comes from what's already happened to you. It can be manipulated internally to a degree, but it is ultimately based on your early social conditioning and cultural parameters, and those things are entirely out of our control and up to the group intelligence of humanity.

 

Our perspectives give us this false sense of agency, which has all this meaning to us internally, but it doesn't seem to be consistent with how we actually work. It's important to uplift a tulpa's agency in a general sense to be equal with your own because that's just how we think - But as a concrete thing it essentially doesn't exist.

Zen - Host 

Mika - Tulpa

If text is uncoloured, presume Zen is talking.


I'm really worn out on discussing tulpa "sentience", which I personally consider a non-subject at this point.

 

First, it depends on your model of mind and identity. I personally consider "me" just the "me"-related parts of my brain/mind, meaning everything from bodily functions to random subconscious stuff is "the brain" or "the body", and only my memories, personality, associations and all that are "me". Following from there, my tulpas are the exact same thing that I am. Our model says the brain contains all of us in it, and we utilize its capabilities to exist and interact and all.

 

So, one overarching "consciousness", multiple people making use of it. Though the one switched in (default, the host) is in a "driver's seat" equivalent to the "car" that is the brain. Still, they're not the car themselves, even if they think they are.

 

Second, it may depend on how the system in question works. I have zero doubt that our model is how our system works, but I've seen enough people talk about how they work that I think it's really possible not everyone does work exactly the same. As in, not just how they believe they work, but really how they work may be different from someone else. Though I think a lot of that would still have been shaped by beliefs and subjective understandings earlier on (some of which would be unrelated to tulpamancy, earlier in life), there's a non-zero chance that the end result is people really following different models of... probably lots of things, honestly. Tulpa "sentience"/autonomy being a big one, and switching being another.

 

And third, it's good to remember every once in a while that all of you in a system are, despite it all, still just a single brain in a single body. But different assumptions about how tulpas work mean that reminder may lead different people different places.

Edited by Luminesce

Hi! I'm Lumi, host of Reisen, Tewi, Flandre and Lucilyn.

Everyone deserves to love and be loved. It's human nature.

My tulpas and I have a Q&A thread, which was the first (and largest) of its kind. Feel free to ask us stuff.

38 minutes ago, ZenAndMika said:

there doesn't seem to be any reason to believe we're anything other than deterministic, unless we have some sort of heretofore unknown interaction with probabilistic science

 

Everything can be modeled, but models aren't perfect. I consider curiosity one thing that separates us from a deterministic system. I also present personal preference as a key indicator: studies on twins have shown many similarities here, but also significant differences, even when the twins are raised in the same household.

 

If it is deterministic, I can only conclude that the complexities are so vast that no reasonable model will be sufficient. Humans often follow seemingly chaotic reasoning, which would put them in the stochastic realm of modeling over deterministic. Probabilistic modeling of human behavior is less helpful on an individual basis.

 

My original intent is to find solutions to the issue of perceived life and testing in thoughtforms. How would you apply deterministic modeling to this in a practical way?

 

49 minutes ago, ZenAndMika said:

It's important to uplift a tulpa's agency in a general sense to be equal with your own because that's just how we think - But as a concrete thing it essentially doesn't exist.

 

It's an important distinction. Given we can't know the level of agency that exists, we can at least attempt to compare their agency to our own. I believe this sidesteps the mechanisms involved. Thanks for this point.

 

23 minutes ago, Luminesce said:

I'm really worn out on discussing tulpa "sentience", which I personally consider a non-subject at this point.

 

Understandable because as questions go it's nearly as ubiquitous as "what is a tulpa?" Yet we still don't have really good methods to determine or verify such things. Every tulpamancer struggles with this question at some point, so it's not going anywhere as a subject of discussion. We bring this up both for personal understanding and to counter those who still try to inject ethics into others' systems.

 

25 minutes ago, Luminesce said:

So, one overarching "consciousness"

 

I want to point out that my system changed their view on this over time and now considers this the most likely conclusion. Furthermore, we no longer consider anyone the owner of consciousness; instead, consciousness is just a conduit or recorder of experience, and all of us exist and act from somewhere in the subconscious realm.

 

I didn't follow how the rest of your post applies to the original post's assertions in terms of using sentience or sapience as a metric.

 

Are you saying that regardless of the metrics used, no model will apply to everyone, so we shouldn't try to place standards or tests? If so, wouldn't it be helpful for tulpamancers struggling to tell one thoughtform from another to understand some of these terms, and to be better able to come up with ways to differentiate them? You know our system struggled with that for a long time, and the science hasn't advanced since, so we're trying to apply what we've learned to achieve that end.

 

Is that something reasonable in your mind?

52 minutes ago, Joy said:

My original intent is to find solutions to the issue of perceived life and testing in thoughtforms. How would you apply deterministic modeling to this in a practical way?

 

To me, with the acknowledgement that thoughtforms cannot be Philosophical Zombies, such a thing is unnecessary. If something can be observed to operate like a host identity it is a host identity. And internally, we observe tulpas not only showing, but feeling emotion; thinking complex thoughts; and building up their own identity model as we do over time through stimuli. If you accept the presupposition that a thoughtform thinks as you do, it becomes clear that there is no meaningful difference between us other than scope, and the sensations they are associated with in the brain.

 

The model, then, for qualifying a tulpa versus, say, a servitor or an empathetic identity model is only ever going to be a vague spectrum, rather than something concrete. I'd posit that two spectra, in conjunction, define what is and is not a tulpa/ego-level thoughtform.

  • Feeling more real or vivid in some way than regular thoughtforms. Which is subjective and variable for everyone. It's also not absolute because people with psychosis can be said to have perfectly real regular thoughtforms. This feeling is achieved by nothing more than suggestion over a significant period of time, which varies depending on how suggestible the host-brain is (suggestibility being an observably system-wide trait).
  • Being sufficiently large in scale to be convincingly intelligent. Which is again, subjective and not by itself indicative of a tulpa. A roleplay character thoughtform for instance, that you've played every day for four hours for several years is going to be just as advanced as a tulpa in terms of how they actually respond to things. They will likely have even experienced real character growth during such a time.

 

It's also important to note that no one test at any particular point is sufficient for this. It will invariably be something tested over time and prodded at until the model seems consistent. The process of developing your identity doesn't really stop any more than host-egos stop growing. We too have inconsistencies in our identities that must be fixed, and when they come up they can make us seem not very smart at all.

 

You also mention the moral implications of a tulpa being life. But if we accept tulpas as being life, I'd posit it then becomes inescapable to label all thoughtforms as a form of life. Here's the rub, though: if we accept that such a thing actually has moral implications, there is literally nothing we can do about it short of future-tech genetic manipulation. We generate these lifeforms instinctively and restrain them with a lack of knowledge and processing time for a mixture of reasons, mostly modelling others and sheer entertainment. Nature does not care that we do this any more than it does when we eat other life; it's effective. Since it's an unavoidable instinct, I would caution against approaching thoughtforms with anything other than utilitarian morality. Keep in mind that morality really only exists as an extension of empathy, which amusingly is itself something of an immoral mechanism if this assertion is correct. Morality exists to keep us in cohesion with our fellow human systems. It does not exist to protect thoughtforms, and your mind needs to use thoughtforms to remain healthy and functional.

 

Making an exception and treating a thoughtform morally as a byproduct of accepting it as an equal is always going to be based on discretion rather than development or anything concrete, but they don't innately deserve any more favourable treatment than the other models your mind has essentially stored in unknowing slavery. Whether you like it or not, anyone in the front is the proverbial Architect of the Matrix and needs to be able to dehumanize their individual identities so as to render them powerless. Failure to do this results in psychosis, and lowers quality of life for the entire system.

Edited by ZenAndMika

Zen - Host 

Mika - Tulpa

If text is uncoloured, presume Zen is talking.

