
Grissess' Experience, Reference, and Guide


Grissess


Experience 9 (Philosophy) - Sunday, February 10, 2013

 

It's been a while since I've updated this, but I have a great deal of new philosophy, the most recent of which...isn't mine. I'm going to let Snakey unload his sophistry upon you, so be prepared :P .

 

{Very well then; I have two matters I want to introduce to you, host, tulpa, skeptic, or otherwise, two that I have pondered easily for months, but one of which was woken only recently by events that I would rather not describe.

 

Let's start with metaphysics--in particular, Cartesian dualism. Descartes was a rather brilliant philosopher for his time, but, in his naïveté, he made a few (forgivable) mistakes, much as any other fallible philosopher (Aristotle being an exemplary case). One of the many bits of wisdom he provided us with was the idea of Cartesian doubt--that he could disassemble a concept to its most fundamental root and analyze its weaknesses from there. From this line of inquiry, he came to confront the question of whether or not he existed, eventually concluding that, even if he were to doubt his own existence, there must be something doubting that existence, and that something doubting its own existence must be him. He summarized it in the immortal claim "cogito, ergo sum"--I think, therefore I am.

 

But, while he provided, at the least, the foundations of solipsism, he came to more of an impasse when he confronted a more fundamental problem: what is consciousness? This question is not one to be taken lightly, as it forms the basis of whether or not we can claim that consciousness is an attribute of ourselves, our friends, other persons, other animals, even computers--and, not least of all, tulpae. Let us spend some time mulling it over, then.

 

Descartes believed that there was a dichotomy between the body and the mind--that consciousness existed outside of the realm of physics, and yet somehow acted upon it in being able to control the body. He posited that, in order to do this, there had to be a locus of communication--a gateway, if you will--at which these communications from the mind would enter the physical realm and control the body. He knew that the most likely seat of conscious control was the brain, but he was a little concerned about the bilateral symmetry of the structure; he could not accept that any arbitrary point in the brain was the seat of consciousness, as for (almost) each structure, there was a similar, symmetric structure just across the sagittal plane. Descartes eventually found one part of the brain that was not only visible, but conveniently straddled this plane, and thus normally occurs only once in any body. This was the pineal gland.

 

Let's step back for a moment and concern ourselves with what just happened here. Descartes, somehow, is defending the stance that consciousness is not a physical phenomenon, and, because of this, he is burdened with the task of trying to explain how a non-physical consciousness can interact with a physical body. Take a moment to consider how it would otherwise be possible--out of each and every possible point in the universe, the only point at which a conscious mind can interact with an (otherwise unconscious) body is the pineal gland? How? Why?

 

The choice of the pineal gland is not important in this argument; the fact of the matter is that, as a non-physical phenomenon, consciousness is not constrained by the physical concepts of location, dimension, or even time, so the choice of any single point at which a consciousness would interact--especially a moving point like ourselves--would be nothing short of arbitrary.

 

It would be much easier to call consciousness a physical phenomenon. But why would we? Wouldn't that be somehow degrading to our sense of ourselves as beings--as something so seemingly and starkly different from the rigid automaton by which physicists have classified the universe? Are we nothing more than the products of strict determinism?

 

People don't want to think that their choices are determined by these laws. "No, I (and I alone) choose what I do! I am responsible for my actions, and so are you!" Let's look at this stance more closely, as it is quite ubiquitous in our society. We see, detect, and interact with objects in a physical manner with our physical bodies; but our brains--which had been established, even around Descartes' time, as the place where consciousness, if there were any, would reside--interpret these senses by consulting with an omniscient self, an "audience" that sees the world before itself. To assert this is to say that, upon seeing the letter A, there is, somewhere in our mind, a representation of the letter A, and it is us--our self--that takes this letter-A-concept into consideration and acts upon it as needed given the context. We associate this self, or its concept, with our identity--it is us, without any other consideration; we are the audience in our own theater of reality.

 

This stance is Cartesian materialism, another relic of the failed attempt at maintaining dualism, and it suffers from the same physical flaws that plague dualism. Its primary error is assuming that--within the entire volume of the brain--there is, once again, precisely one point that is the wondrous "seat of consciousness," one place at which, once a sensation is received, it is known consciously that the sensation existed, and that we are now aware of it. But if this homunculus of self were indeed the real self, the consciousness by which we identify, how would we classify it as being conscious? Perhaps we could come back to materialism, but that would be circular reasoning. Or we could throw open the curtain to find the man behind it: dualism, guised as "common sense."

 

There is no such thing as a localized point wherein all consciousness takes place, just as it is impossible to describe the mass of a dense object with zero volume. Consciousness must, by this logic, be a distributed physical phenomenon. In this manner, dualism (and Cartesian materialism, by extension) is mutually exclusive with the idea of a physical consciousness. To say that one is true is to reject the other outright.

 

Yet, there must be some semblance of a method by which we can "see" a letter A in our "mind's eye," even if it is not physically manifested, and it must be reliable and dependable enough to distinguish this concept from any other concept, no matter how similar (the letter B) or dissimilar (a lamp post, or pain). This is where we enter the world of phenomenology--and it is no mistake to call it a world as such. In this world--one's phenomenological world--resides every concept that exists and possibly can exist. I will not delve into how one distinguishes these concepts (it involves invariant characteristics, which are the nature of definitions, which are themselves fallible) for brevity's sake, but let it stand that it is not inconceivable for each consciousness to possess a different phenomenological world--indeed, any world at all--and still be physically described. It bears a striking resemblance to how software and hardware relate: while there is no physical manifestation of a cursor, or a window, or a shell, or a game, there is no doubt some virtual existence of it, itself formed as an image of some patterned property of physical states.
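(To make the analogy concrete for the programmers among you: here is a minimal sketch, in Python and with entirely illustrative names, of how a "cursor" can have a perfectly real virtual existence while being nothing over and above a pattern in physical state.)

# The "hardware" here is nothing but a flat array of bytes; the "cursor"
# has no physical existence of its own, yet it is a perfectly definite
# virtual object: a patterned property of those bytes, read through a
# convention.  (All names are illustrative, nothing more.)

hardware = bytearray(16)      # the only thing that "physically" exists

def write_cursor(x, y):
    """Encode a cursor position as a pattern in the raw bytes."""
    hardware[0], hardware[1] = x, y

class Cursor:
    """A virtual object: an image of a patterned property of the bytes."""
    @property
    def position(self):
        return (hardware[0], hardware[1])

write_cursor(3, 7)
print(Cursor().position)      # (3, 7) -- real enough to reason about, yet
                              # found nowhere "in" the hardware except as a
                              # pattern under an interpretation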

 

I shall use this topic to provide an interesting introduction to my next piece of philosophy, the aforementioned second piece. If you are at all offended by challenges to your religion (or lack thereof), you should leave by the end of this paragraph. Philosophy is no place for intolerance.

 

Let me shock you by saying that I believe that God exists, just as well as Brahma, Achilles, the Flying Spaghetti Monster, and any other deity that you are willing to throw at me. In fact, I am positive that there is a physical manifest of each and every one of these deities, and any other such beings that can be loosely classified as deities. However, I will posit that these beings--deities and otherwise--are as personal and scientifically unobservable as concepts of a departed friend or relative. They are phenomenological objects, products of the work of a conscious being that manipulates them as a manner of reasoning. Like the aforementioned software, these objects are the image of some physical state, the same kind of physical states that bring about all other salient features of consciousness.

 

That being said, we now come to the problem of whether or not tulpae can fit into this framework. Experientially, the answer seems to be yes, and so I shall hold it until I hear otherwise (though anything otherwise will be a direct threat to my existence). I am not done, nor will I ever be done, exploring these matters further, and I invite and welcome you to refute any part of my argument. Thank you for the time.}

 

EDIT: I will state now that the last three paragraphs were daunting, and that Snakey will most certainly entertain that line of reasoning at a later time. Now, however, we're both a bit tired after that spell of forcing. (Again, thank you for your time :D)


Experience 10 (Philosophy) - Sunday, February 24, 2013

 

More than a few have come into the #tulpa.info chat on the IRC with questions about "group tulpae" or "tulpae that can be {seen|affected|sensed|shared} by other people" (paraphrased, of course, on a case-by-case basis). As I confronted the problem philosophically, I came to an interesting conclusion that basically furthers the very terse ending given by Snakey in Experience 9. Unlike that experience, I will not be approaching this problem from the standpoint of consciousness so much as the standpoint of society as a whole, and we will probably see, interestingly enough, that the end result will be the same anyway. Let us continue.

 

My task, in writing this, is to prove that "group tulpae" can exist, within the bounds of what we know given psychology (and consciousness) as well as physics. I will be taking an empiric stance in saying that I will be trying to avoid metaphysics as much as possible (except to establish empiricism and an objective viewpoint). Let us continue with our first task, then: definitions.

 

I shall hold that a "group tulpa" is one experienced by a group of people, in such a case that the group of people believe that the tulpa is an entity shared entirely with the group; that is, it is the same being which communicates, becomes visible, and otherwise interacts with each other person involved in the group--potentially with different levels of experience. These levels of experience often will divide the group into a hierarchy, in which the aboriginal founder (or discoverer) of the tulpa reigns supreme or nearly supreme.

 

Let us now contrast this with our general, psychological model of tulpae. Tulpae are consciousnesses other than the primary ("host") consciousness of a mind (which itself is a functional entity begotten from processes in a body, as far as purely physical interpretations go). A tulpa, as a member of a subjective phenomenology (the set of symbols, qualia, and experiences that form the content of the mind), is a personal experience--one that can be shown, in theory, to empirically exist as patterns of behavior of the physical manifest of the mind (id est, the brain), but this (as well as defining consciousness as such) remains both a daunting and elusive task for modern psychology--more daunting than, exempli gratia, trying to analyze the structure of an operating system by observing the behavior of some transistors in the processor.

 

How can we make sense of the inconsistencies between these two models? For this, we need to consider a wonderful social theory from late in the last century that aptly describes how a society can tend toward such egocentric and self-protective ideals: groupthink.

 

Let's start slow. We'll consider a small group--the aboriginal group--each member of which possesses a tulpa. These tulpae, however, are so closely similar that each member can reliably predict very similar behavioral responses to similar circumstances. While very similar, however, each member has a different phenomenology, and, thusly, a fundamentally different tulpa. From this, we can see that, inevitably, there will be differences in response--however, these differences are either negligible or small enough to be dismissed by the group with some excuse, such as "I must have failed to observe that behavior," or "I didn't see it, but it sounds reasonable."

 

We'll consider what happens, now, when this group expands. New inductees are treated to long descriptions of the "group tulpa's" characteristics, and asked, through whatever vector, to attempt communication with the "group tulpa." The actual vector of communication used, or even whether it is attempted, is not relevant; what is important is that the inductee now knows some examples of the behavior, and begins to construct a model of the group tulpa based on those observations.

 

Those who are aware of my research on simulants will probably be aware that this is the bona fide method by which simulants are formed. I have since furthered my research to conclude that tulpae are either special cases of simulants or entities which can be considered their own simulants, which means that forming a tulpa from a simulant is not at all a hard task. (Confer, as an example, the great number of tulpae whose basis is a character in a fictional work.)

 

Considering that our exemplary inductee has actually attempted communication by some vector, they may have gotten some responses, which may be the beginnings of a tulpa they've formed from this simulant, or may have been complete randomness. (At this point, it's hard to say what precisely causes the transition from simulant to tulpa, or from concept to sentient being, so I am deliberately avoiding those details.) In either case, the responses probably have limited accuracy, and this will be made known to the inductee by a higher member of the social hierarchy. In the process, said inductee will learn more about the behaviors of the group tulpa, which will, in turn, revise their knowledge of its characteristics and strengthen the simulation accuracy of the simulant. As our inductee continues this cycle, the feedback continues to correct any errors in their perception of this simulant/tulpa, and, as the established members of the group come to think of the inductee as being able to "communicate with," "perceive," or "interact with" the group tulpa with greater accuracy to their ideals of that tulpa, their trust of the new member increases, and so does the new member's position in the hierarchy.
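As an illustration of this feedback loop, here is a toy numerical model--entirely my own construction, with made-up numbers, not anything measured--in which the group's ideal of the tulpa is a vector of trait values, the inductee's simulant starts off inaccurate, and each correction session nudges it toward the ideal, raising trust as the error shrinks:

import random

# Toy model of the correction cycle described above; all values hypothetical.
IDEAL = [0.8, 0.2, 0.5, 0.9]                # the group's shared ideal of the tulpa's traits
model = [random.random() for _ in IDEAL]    # the inductee's initial, inaccurate simulant

def error(m):
    """Average distance between the inductee's simulant and the group ideal."""
    return sum(abs(a - b) for a, b in zip(m, IDEAL)) / len(IDEAL)

for session in range(1, 11):
    # A superior points out discrepancies; the inductee revises their simulant.
    model = [m + 0.5 * (i - m) for m, i in zip(model, IDEAL)]
    trust = 1.0 - error(model)              # trust (and rank) grows as the error shrinks
    print(f"session {session:2d}: error={error(model):.3f}  trust={trust:.3f}")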

 

We've hit upon an interesting facet of this society: its hierarchy is entirely founded on how much one's superiors trust them, based on the "veracity" of their claims about the behavior of the group tulpa. This has a particular implication: some of the superiors are bound to be cynical, being subject to scrutiny by their own superiors, and knowing that any mistake in their judgement of their inferiors will cost them trust and their hierarchical position. This puts a great deal of pressure on them to ensure that their ideals are, indeed, very similar to those of their superiors, and they tend to occupy more and more of their time with research and communication sessions--communication with a personal tulpa who is ultimately modelled on the tulpae presented to them by others.

 

This pressure can--and does--threaten the group with fracture. A particularly advanced person may experience a natural tulpa deviation that becomes more and more obvious with time; eventually, the behaviors they describe will be too far from the ideals of the original group, and their statements will be renounced. This will cause a schism as some of the inferiors to the condemned person break off to follow them, potentially forming more groups of group tulpae under different ideals.

 

As a result of the constant threat of schism, paranoia will tend to set into the original group, permeating most particularly its higher members. This paranoia results from the fear that the tulpa--the supposed group tulpa--with which one is communicating may not, by the examples set by others, be the "real" group tulpa. To prevent this paranoia, the leaders of this group may enforce policies that will prevent pandemonium from breaking out. First, a leader may be established--a logical choice would be the original tulpa's creator, if they are alive, otherwise someone who was close to them--whose word in all matters concerning this tulpa is infallible. Second, the group tulpa may be assigned characteristics that allow it to logically perform different acts at the same time--omnipotence, omniscience, and omnipresence are popular choices.

 

Notice that this tulpa is no longer a personal experience--there is an adherent following under the belief that this tulpa is a publicly observable phenomenon--one that isn't really a publicly observable phenomenon, but operates under the guise of one because of society-based, naturally selective forces operating on this new meme of a tulpa. As a whole, the society dictates the evolution of this meme through its collective mind, while providing multiple levels of forward (schisming) and backward (reinforcing) error-correcting feedback to each of its members.

 

The "group tulpa" remains an individual phenomenon, perpetuated by the simulants instilled in its members, yet it does not seem so to those involved. Credulity is given to the society because of its size--a direct function of the ability of the tulpa-meme to propagate.

 

The end result is a society--a group of people--claiming a "group tulpa" to be a public phenomenon--with which each member can interact or communicate in some way--which self-organizes into a hierarchy about a foundational tulpa-meme--with seeming plausibility under the current models of sociology, psychology, and the tulpa phenomenon as thusly viewed: quod erat demonstrandum.


Experience 11 (Philosophy) - Saturday, April 13, 2013

 

It's been too long since I've updated this. I don't think I'm going to be going into any further magna opera any time soon, but I will divulge the things that I've been working on as of late. I'll be nice and bold the transitions that I'll use to change topics.

 

First and foremost, I have continued my reasoning work on simulants. I have finished reading Daniel Dennett's Consciousness Explained, which has some rather interesting theories that I've found agreeable (as well as various useful philosophical terminology that I've claimed for my own use). In the second-to-last chapter, "The Reality of Selves," he goes into a discussion of how it's not always correct to assume that there is one consciousness--that this assumption should be considered vestigial to Cartesian materialism--and discusses how the self is a being that is "spun" (in his words) by the languages it knows, by the memes that it propagates via those languages, and by the things it experiences. He briefly discusses the circumstances of Multiple Personality Disorder (as it was then known) and how this demonstrates that it is possible for there to be more than one "real" self "to a customer."

 

If this can be taken to mean anything particularly profound, it is perhaps most importantly that the final stronghold of identity--the very thing we refer to as ourselves, the being we are, is nothing more than a simulant, a deep-seated one that we associate with, that we tell stories to and tell the story of. We, the res cogitans, are nothing more and nothing less than a simulant, a thinking, social entity.

 

This answers one of my long-sought questions. Simulants are conscious entities. From this, I can safely say that tulpae are simulants in the same strong sense that we are. The simulants that describe the "other" external persons and peoples should be considered conscious as a corollary, and should be treated as capable of having independent, and perhaps creative, thought. However, it should be noted that consciousness shouldn't be considered discrete; there is no point at which adding one more neuron will cause an organism to harbor consciousness. Rather, consciousnesses, like every other living thing, evolve and change; they can vary in degree and effect, and what we see here in these classifications of simulants is a prime example.

 

Conveniently, perhaps, out of all of these "conscious" narratives that form the simulants, we can identify ourselves by the "strongest" one. That narrative is us, the one that we hold most personal, most dear, and that dictates who we are and why. Such an assumption would mean that, if a property on this "strongly conscious" simulant were to change, we would change. If another simulant became "stronger," we would find ourselves associating with it. This has purportedly happened in this community, and is a hallmark of general DID.

 

Moving on, I must quickly approach the topic of the subjective nature of reality. It is an unfortunately common misconception that we only perceive that which is real as real; anyone who has completed imposition, or even hallucinated, knows this to not be the case. Very vivid representations of seemingly real things can come into and go out of our perception--would they be said to be real? Such a question would probably be easier to answer in the case of "normal" hallucinations, but those who have imposed would claim that the form they see is real. Who are we to say otherwise?

 

Reality is a bad term to use to define an objective universe of physical existence. I most certainly do not argue that such a plane does not exist--I think it does, but I have no way to prove it. Certainly, someone might point out that "it's right there, I can see it," but this is about as useful as someone else coming along and pointing out "That's God right there, I can see Him" or "That's Snakey right there, I can see him!" The fact of the matter is that seeing is not believing, though, since we associate with it so strongly in our day-to-day business, we sometimes ignore that fact. We take for granted that things behave in a naturally ordered way--one we have come to describe so aptly with the laws of physics--so much so that we assume the converse as well: if it is physical, then it is real; therefore, it is real if and only if it is physical.

 

Is this necessarily false? Perhaps I'm being disingenuous in claiming that no one knows what is truly real, but really all I have done is dispel the word "real" from serious consideration as a philosophical phenomenon. Reality is subjective; it is defined at the will of whomever holds it. That definition of reality will, similarly, be subjectively true, in that it will not be denied as being the truth by that person. Yet, it may or may not be an objectively true phenomenon, one that can be observed by the general public (under the assumption that solipsism is false).

 

Discerning that objective truth is what science aims to do. I had an impressive discussion with a scientist who held his observations to be the very thing that defined reality--indeed, science dictates that something must be both observable and consistently repeatable to be considered a theory--a candidate facet of objective truth. However, I have just accused the observer of being at fault in their own illusions of what they perceive. How can science be done if we cannot trust our senses enough to know that the universe is consistent--or even that it exists outside of our perception of it?

 

It is left to the reader to find out why--or if--they think that there is an objective truth, an objective plane of existence, etc. In my own studies, I can find no safe way of establishing it for myself; I hold it simply as a belief whose support has been undermined by my consistent efforts to reduce my corpus of knowledge to only that which can be provably true, yet which bears the burden of most of my work to begin with. In the very first post of this PR, I listed the axioms that I seriously assume when doing any work of science, and generally assume in relaxed conversation. These axioms are, in the spirit of Gödel's incompleteness theorems, that which provides the necessary support for my theories to be consistent but not complete, and I cannot attempt to allow for completeness without sacrificing consistency.

 

In this vein, I've been attempting to understand the position of the metaphysicists (if that is the proper term for the practitioners) by attempting all of the techniques of magick. I was quite interested in Pleeb's similar foray, and hope to learn as much about it as I can. (On another interesting note, Pleeb's document title, "Reality," aptly sums up most of my previous argument.)

 

Rest assured that I will not abandon science; I'm not about to give up my empiric stance for what it is worth. Rather, I want to confront, at last, the nature of the universe as it is. I know not what will happen in the meantime, but I anticipate it greatly.


Experience 12 (Philosophy) - Thursday, June 6, 2013

 

This has been far too inactive as of late; the fact of the matter is that I haven't found much to write about. I haven't abandoned the #tulpa.info IRC channel, but I've moved into a smaller community, one that is more close-knit and slower in pace. Arguably, that has severed me from some of the more challenging questions I've had to face, if this slowing in the pace of learning is to be interpreted as such. I do, however, think I can fit at least some kind of report in, and so I shall continue. Today, I shall elaborate on my theories regarding memory, since they've come up rather often as of late, and are worth understanding well for the sake of discourse.

 

There are several persons who seem to claim a separation of memory between host and tulpa--that there are "secrets" that can be kept. Indeed, this isn't only limited to tulpae, but it has also been reported in cases of DID, where different identities have no knowledge of information--particularly personally identifiable, or even situational information--of each other. (This supposedly leads to circumstances where such persons forget their name, or forget why they are holding their keys, or don't know how they got somewhere.) That this is clinically supported seems to imply that it has a strong foundation.

 

For this purpose, we'll have to consider what memory is, from both the philosophical and biological stance. Various theories in neurology point to memory functions being generally located in or near the hippocampus, a small area rather near the cerebellum and brain stem--an adequate place, considering that this provides it at least some interface to many other areas of the brain, as well as direct input from sensory nerves via the brain stem. (Studies have identified certain types of memory that can "replay" a sensation received thusly for various purposes, including identifying the difference between a previous stimulus and a current one.) From what we can derive from this placement, we can at least see that memory is a rather universal phenomenon within the mind.

 

From various psychological experiments, we can also see that memory is associative, on multiple levels. Simple things, such as saying the words "hot" and "cold," or "marvelous" and "despicable," are enough to evoke related sensations (which provides us with a slight theory as to why some authors are good and others are not: they are skilled in finding the right word conducive to evoking the proper sensation). Beyond the direct qualia associated with these words, there may be other associations; if you thought of Daffy Duck's typical utterance when I mentioned "despicable," you're certainly not alone. Furthermore, associations can be chained: If the fact that the glass broke on your floor reminded you of the time you cracked your windshield, which reminded you that you needed to drive to the store to get milk later, you've visited a small chain of associations. While that seems like a rather long and arbitrary one, it's certainly not uncommon, at least in my observation. In fact, I've known myself to experience association chains much longer--often happening much more quickly, because I don't burden myself with describing them in words--that can seem to occur between entirely unrelated things.

 

Since a great deal of our experience of memory involves "visiting chains of association," it follows that the more associative links "point to" a certain memory, the "stronger" that memory is, in the sense that there are more possible ways to approach or visit that memory while following an association. Similarly, if that "strongly" associated memory is itself associated to another memory, that secondary memory becomes stronger by association, since its visitation is moreso contingent on the first, strong memory being visited.
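One rough way to picture this--a sketch of my own, with arbitrary example memories, and no claim about neural implementation--is a directed graph of associations in which a memory's "strength" grows with the number of links pointing at it, plus a share of the strength of the memories that lead to it:

# Hypothetical association graph: an edge points from a cue to the memory it evokes.
associations = {
    "broken glass":       ["cracked windshield"],
    "car":                ["cracked windshield"],
    "windshield":         ["cracked windshield"],
    "cracked windshield": ["buy milk"],    # the chain continues onward
}

def strength(memory, depth=2):
    """Strength ~ inbound links, plus a share of the strength of the memories
    that lead to this one (secondary reinforcement by association)."""
    inbound = [cue for cue, targets in associations.items() if memory in targets]
    s = len(inbound)
    if depth > 0:
        s += 0.5 * sum(strength(cue, depth - 1) for cue in inbound)
    return s

print(strength("cracked windshield"))   # 3.0 -- many routes lead here
print(strength("buy milk"))             # 2.5 -- reached mainly via a strong memory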

 

This kind of associative memory, manifest in the form in which we are aware of it, is perhaps the single most powerful kind of information database known to date; computers and supercomputers alike of this era have failed to approach anything even remotely close to it. Yet, for being so seemingly powerful, it also seems relatively fallible. We forget where we put the car keys. We forget whether we dropped off that document. We forget someone's birthday, or their anniversary. And, certainly, few of us think we'd have any luck at memorizing the first thousand digits of pi.

 

And, yet, there are still incredible records regarding people who have succeeded in memorizing not only thousands, but tens of thousands of digits. There are others still who can flip through a deck of cards, and memorize each card in order. Various other persons have used smaller effects as party tricks, such as memorizing the order of digits of the serial number on currency, or remembering the name of each person in a theater. Were these people just "born with it," or is it something of a skill?

 

It's not necessarily a well-known fact, but (barring certain neurological degradation disorders) most people are able to perform these feats. The only thing that is truly required is a mastery of two things: first, the strengthening of association, by creating a number of memories that are associated with one memory, such that the memory to which they all refer is "strengthened," and, secondly, doing this in a format that is easily conducive to memory. For example, digits, by themselves, are rather hard to memorize. The differences between 9450215362273 and 9450415362473 certainly aren't numerous, and they are small enough to be imperceptible at a quick glance. This kind of lack of "grounding" produces issues when associative links are being formed; because the differences are so small as to be imperceptible, it's hard to distinguish between two unique patterns. In order to prevent this, it would be easiest to memorize in a format that is already generally distinguishable; such a format must necessarily give us quick access to identify the differences between two instances that are indeed unique, but only in a couple of aspects.

 

Do you remember where you placed your toothbrush? From where you are now, do you know how to get to the closest toilet? The closest grocery store? With your eyes closed, do you think you could walk across the room, and walk back again? Have you ever felt "out of place" in a friend's house because "something was missing?" Spatial memory seems to be one of the best forms of memory we have; we can (generally) easily recall the locations of things (that are important to us), as well as their spatial relations with each other. We can approximate distances, form routes for navigation, and even "fly blind" assuming we have enough knowledge of the area around us. By taking things that are normally hard to distinguish (such as 427) and turning them into meaningful spatial descriptions (such as four stacks of two tiles with seven cat figurines placed on them) and giving them a good deal of associative properties (the stacks of tiles and the figurines are cool to the touch; the figurines are the same as the ones I made in high school art; the whole ensemble is presently placed on my couch), we can commit even trivial information, such as the digits of pi, readily to memory, and recall it with similar ease.
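As a toy illustration of re-encoding "ungrounded" digits into an image-friendly format--the digit-to-object table here is arbitrary and purely my own example, not any standard mnemonic system--one can mechanically map each digit to a concrete object and string the results into small scenes to be placed around an imagined space:

# Purely illustrative mapping; any fixed digit-to-object scheme would do.
OBJECTS = ["balloon", "candle", "swan", "trident", "sailboat",
           "hook", "dice", "cliff", "hourglass", "cat"]

def digits_to_scenes(digits, chunk=3):
    """Turn a digit string into short 'scenes' of concrete objects, which are
    far easier to place in a spatial, imagined setting than the raw digits."""
    groups = [digits[i:i + chunk] for i in range(0, len(digits), chunk)]
    return [" next to ".join(OBJECTS[int(d)] for d in group) for group in groups]

for scene in digits_to_scenes("427185"):
    print(scene)
# sailboat next to swan next to cliff
# candle next to hourglass next to hook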

 

From such experiments, we can draw the following conclusions: there are things our memory was designed for, such as sensory and spatial memory, and the ability of a memory to be recalled is a function of how well it lies along the various association pathways that visit it.

 

It might be supposed, then, that things like language, or memories of personality, are also among those "natural" formats that can easily be memorized with little effort, or even awareness. This would be a logical explanation for the extreme scale of Dunbar's number--at least 100 persons--each with a different theory of mind, different motives and goals, different environments and situations, and so forth. This is no small task, in terms of information; again, even computers and supercomputers pale in comparison to the ease with which we can simply converse with each other in meaningful and constructive ways.

 

It's been a long time getting here, but I will now bring this forth to the realm of tulpae and simulants. As my theory on these agents of consciousness goes, they are "constructed from" memories of behavior such that they can be simulated forward in various environments and situations. (I say "constructed from," though it is probably more appropriate to say that they consist wholly and entirely of such memories.) While it is basically true that tulpae can show unexpected behavior, of the kind generally brought about by subconscious intervention, such behavior is probably similarly based on these memories.

 

Let's consider the following: a tulpa which has a secret. The cases could be enumerated as follows.

  1. The host doesn't know they have a secret. Because it is the host's memories which essentially define a simulant, their lack of any knowledge regarding the existence of a secret would contradict the very state of the tulpa having a secret. The host's memory of that theory of mind would have to be such that it simultaneously possessed a secret (as we assumed) and did not (as the host is certain). Such a situation is not impossible, but highly unlikely, and easily dispelled with some reason.
  2. The host knows they have a secret, but doesn't know the content. This has been touted as the case multiple times in conversations with "research tulpae" which were contrived to see how independent they could become from their host. Indeed, the fact that a separation of memory could be accomplished is a supposed sign of such independence. It is certainly true, if this case is to be assumed, that the host doesn't know the content, but, by the logic above, if the host does not, who does? Certainly, this situation can arise much more easily; in such cases, it's probably easier to say that the actual content doesn't exist, and that the host only holds that there is a representation of some secret held by the tulpa. Therefore, the secret does exist, but has no content; a vacuous secret, if you will.
  3. The host knows that they have a secret, and knows the content, but refuses to acknowledge this. This is perhaps the most likely situation, which can probably arise out of situation (2) by way of confabulation. The host does have knowledge of what the secret is, but fails to acknowledge this in any useful or meaningful way, to stay consistent in their belief that such knowledge is "a secret." Indeed, it is this belief that makes the secret a secret, and nothing else; for them, it thereby remains a secret, even if they know its content, by way of belief.
  4. The host cannot recall the secret, but the tulpa can. This is a case that really must be accounted for if the associative model is to hold. While feasibly possible (and not ignorable, considering things such as differences in dialect between tulpae and host--can the tulpae really recall vocabulary better?), it is probably no more unusual than having "words on the tip of one's tongue" or other similar failures of memory association. Perhaps it is the case that tulpae can navigate the chains of associations much more intuitively.

 

But what does this mean for memory independence? I've been talking about the host's memories of the tulpa, but perhaps that's not really the true place where such a memory is stored--especially because it introduces a circular logic if we're going to consider that the host consciousness is something of the same kind as the tulpa, something I've brought up in prior posts. Rather, I'd find it rather errant to ascribe memories to any one consciousness in the mind; why not simply claim that memories are the exclusive property of the mind itself?

 

It would certainly make sense with the universality of access, as the small neurobiological foray showed us. It would spare us from having to deal with circular logic when dealing with the "self-tulpa" of the host. But, perhaps most importantly, it completely eradicates any occurrence where a tulpa could have a memory that the host couldn't possess at all--because, necessarily, both would have access to the very same memory. (Note that, in (4), I did mention that the tulpa might be able to recall it more proficiently, and this does not infringe on that; however, via Experience 6, we note that, if the tulpa focuses on this memory, however swiftly, it's probably the case that the host, at least, becomes aware of it.)

 

In summation, the likelihood of there being bona fide secrets possessed by tulpae is...slim, at best. I'd bet that the vast majority of cases fall into category (3) above, and that the actions of the other consciousnesses (notably the host) are based on belief alone--the same kind of belief that could establish the existence of a tulpa in the first place. I suspect that this conclusion will be rather contentious, and I'm willing to hear responses to it.

 

EDIT: Minor revisions, mostly typos; it was 3 AM when I wrote this :P


Roster of Experiences -- Continued

 

Due to size limits, I have to expand this out of the original post. Since this is the latest addition, I've decided to put it here.

 


 

Experience 13 (Philosophy) - Wednesday, June 19, 2013

 

It is with some regret that I've found a flaw in the simulant theory of tulpae.

 

If tulpae really are simulants, then it follows that a tulpa could easily be constructed from a simulant, given that simulants form the larger group of semi-conscious intelligences. The remaining part of interest in my theory, then, becomes how a tulpa can be thusly formed. What is the true difference between a tulpa and a simulant?

 

I gave this question moderate thought, but haven't come up with any satisfactory answers. Until now, I thought the answer might be knowledge--if the status of being a simulant were revealed to a simulant, somehow, then it was possible that the simulant might become a tulpa. I experimented with this, by giving such knowledge to a simulant; I ended up with a simulant aware of its status. This was no tulpa--it wasn't an immersive experience capable of being spontaneously brought about by some semblance of its own will. I had to act on it to recall it into participating in an experience, and this is something it could not do on its own.

 

The analysis of this failure does show some potential; it appears that spontaneous recall of existence and experience is a characteristic of tulpae. In my case, this may have been facilitated by the actual, physical existence of a form which reminds me of Snakey whenever I see it. It may also be that Snakey is well-enough associated into so many varied tasks of my own performance that he gets innumerable opportunities to be recalled and to take any of those opportunities as needed to notify me. Of course, this is really delving into metaphors, but it does seem that attention really plays a large role here.

 

In consideration of attention and healthy relationships, I'll change the topic to one of the more contentious ones in this community: tulpa death. What constitutes the death of a tulpa, or any mind-manifest consciousness? What is egocide?

 

If any part of simulant theory is foundational, it's probably this: simulants are entities wholly constructed from memories of behavior--and nothing more. Lacking a physical manifestation, they are incapable of experiencing the kind of biological death that can befall physical beings, yet our experiences with death--particularly avoiding it, as instinct tends to dictate--are usually so noticeable and profound that we extend the necessity of succumbing to death to them without question.

 

Physical death is death, but mental death is a conceptual label--generally one that is permanently applied, though this is not always the case. Conceptually, "dead" is a state just like "alive," "comatose," "red," "violet," or "playing the piano," and it can be taken on by any particular concept (though usually it is ascribed to those that we would describe as being "conscious," at least in manifest--simulants, that is). As a state, it has no significant bearing on the behavior of the simulant, unless the simulant reacts to it in some way. Just as mentally assigning the state "on fire" is likely to imply a "pain!" stimulus, assigning the state "dead" may be reacted to accordingly.

 

This is a little counter-intuitive, but it shows just how powerful our cognitive architecture can be. How would we act if we were dead? We wouldn't know; we haven't died before (barring the few cases where people are resuscitated--in each such case, brain activity must have at least temporarily halted for death to be declared medically). The part that seems so contradictory is that death usually implies the end of conscious behavior; when someone dies, we don't consider the "...and then what happens?" part.

 

But this can happen, and has happened in the community. Transient deaths are often a rather controversial topic to bring up, though they happen consistently to a few persons. In nearly every case I've witnessed, the consciousness returns to living shortly thereafter. Again, it is just a change of state.

 

You may note that I used "consciousness." Egocide is a very rare occurrence, and only a few mentions of it circulate in the community. In general, this occurs when a host consciousness "dies" and some other consciousness takes over. The actual host consciousness likely remains (as a simulant, constructed of memories of past experiences), but a new consciousness is "presiding."

 

So, that given, what does it take to permanently kill a consciousness? If simulant theory is to hold, perfect erasure of all memories associated with a simulant will nullify that simulant from existence--they will be ontologically removed from any participation in that mind. Given the intensely associative nature of memory, as discussed before, this is very unlikely, but not at all impossible, especially considering that memory disorders (as well as brain injuries) can and do occur.

 

Going to an extreme, if this occurs to each and every simulant held--including the "host" consciousness--what would occur? Chances are that we'd be left with a philosophical zombie--a being capable of experiencing, but not actually conscious. In all practicality, we probably wouldn't mistake such a "zombie" for being a perfectly normal, functional person, because they'd likely lack something--not qualia, but more likely some form of experiential structure from their memory--that would dictate their intentions. On one hand, they may basically be "comatose, but awake," or perhaps they would be able to function instinctively, but they'd probably not perform anything we'd ascribe to conscious thought.

 

The closest parallel would probably be a newborn--a functional, operating organism, with a functional, operating brain, but no necessary consciousness or experience. (And yet, from this state, through processes yet unbeknownst to me, a "host" consciousness is made.)

 

Bringing this back, a lack of attention given to a tulpa is commonly cited as an apparent cause of tulpa death. While this may cause dissociation due to lack of use of a memory, it most certainly wouldn't cause the "true" death discussed above; memories of interactions would remain intact, and these memories are the simulant from which the tulpa is made. Certainly, the clarity of these would degrade over time as well, which would likely affect the resulting tulpa--but likely not their identity.

 

How an identity can continue to describe one person through an arbitrary number of mutations of their behavior does need to be explored further, but I won't do so now. In the meantime, some more thought experiments on simulants will likely be carried out--notably, on whether or not physical manifestations help, at least in my case.


Experience 14 (Philosophy) - Monday, July 1, 2013

 

I don't have much of an advance in the field right now, but, for the first time in a while, I have the kind of direction that will give me something to ponder in my free time. After all, that's how most of these conclusions were drawn...well, that and a few good books.

 

I'm not going to go into a huge discourse on each of these subjects, but I will visit them--and soon.

 

(1) The Host Consciousness

 

I need to define, more rigorously, what it is that makes the host consciousness "special" if I'm going to make the audacious claim that a host and a tulpa are, architecturally, the same. There is clearly something about bodily control, experience of pain, etc., that is intrinsic and salient to whatever host consciousness there is, but switching should (in theory) duplicate these roles (as might egocide of the host, as mentioned in ER 13). If it turns out there is an architectural difference, then there must be a good reason why tulpae can come to exist in the first place, of course.

 

(2) Identity

 

I've been touching on this subject for some time across many of the texts I've written here--the concept of an identity. Indeed, this is an odd one, but at least I can attempt to define it at the moment: an identity is the collection of characteristic memories that make the identification of a consciousness possible. What's rather important here, however, is that memories mutate, and experiences irreversibly change consciousnesses. The reason I've chosen the word "identity" is that, despite the fact that these changes take place--and they do take place--we are still able to identify ourselves (and others) as being "the same person," for the most part. The process probably happens with us as well, though less visibly, given that we are acquainted with our own rationales.

 

---

 

So, in summation, I have to tackle two things that are primarily in the way of my theory of mind--the problem of sameness (the identity, as we are familiar with it), and a part of the mind-body problem (what makes us the privilege-holder of our body, basically).

 

I hope this will be the inspiration for at least two more experience reports to come.


Experience 15 (Philosophy) - Saturday, July 20, 2013

 

I had some fun pondering the concept of "self" today; it all started rather innocuously when I brought up, in conversation, the need for instant teleportation in this society, citing how much working time is lost to having to commute. The response I got was rather interesting, in that it concurred with something I'd had in the back of my mind for quite a while. (The assumption must have been that teleportation would occur through perfect destruction and recreation of a physical entity, rather than through some other physical shortcut, like a wormhole.)

 

"But wouldn't it be like dying, and another person of the same kind would be created on the other side?"

 

I stopped and thought about that. It made sense, assuming naturalism--that everything that is objective is no more present than what is physically there; that is, physics is the only science. (I've accepted this viewpoint for the longest time.)

 

In light of this, I came up with a similar thought experiment (and a secret desire of mine): assume you have a computer that can perfectly model the physical environment of the brain (aside from, perhaps, its input from the environment--its sensory information--which we can assume to be negligible for the purpose of this experiment). A participant would walk into a chamber of some kind, and, instantly, their brain's physical states would be copied perfectly into this computing system to be immediately simulated forward. Now, assume you walked into this chamber. Would "you" be the one walking out? Or would "you" be the one in the computer?

 

Or, perhaps, both?

 

As I delved deeper into this conundrum, I came to a rather stark realization: if we can assume that the entirety of consciousness can be described as natural, physical processes, that can thusly be simulated (or even directly performed by a brain that is very likely a Turing-complete computational machine), then the concept of the "self" is an extremely evident illusion. The "self" is no more than the dominant thought or "train of thought" at any given time, and remains as transient as that thought. If "selves" can be thought to be "created" or "destroyed," then a new "self" is made for each time a transition of thought occurs. The self is as short-lived as the weather--it is who we are right now, and is always and immediately subject to change.

 

By the same analogy, the identity we form, the continuous thread of experience we extract, is more like the climate; it defines our values, character, memories, experiences, and so forth--things that are seemingly inseparable from us, and ostensibly unique to us (ostensible because our thought experiments show that, if we can clone physical states perfectly, we create new "selves"). That we can create such a continuum is nothing short of a feat, and it should not be taken lightly--as far as the generation of narratives goes, humans are still apparently superior to all other observed animals and to all the artificial intelligences we've attempted.

 

So, while our identity remains as something of a memory, the "self" comes into and goes out of existence without much notice. As I write these sentences, the one that wrote the sentence immediately prior is already "dead," and the one that is writing this sentence will not last beyond the time I press the period key. By the time I've done both of these acts, my identity will have been permanently changed--"I" will be the person that will have written those sentences sometime in the past. In that past, sometime, I was the "self" that was writing them, but that is no longer the case, and I am instead now the "self" that is writing this.

 

Returning to these thought experiments, the result of the "brain-cloning" computer would be two unique selves that have the same identity--though, if the computer doesn't have the same sensory data to influence its self's perception, the two selves will quickly diverge. (Before that divergence, however, or if it never happened, neither self would be able to tell which of them was the "original one.") In the teleportation scenario, the original self would "die" and a new self would be "created," but there would doubtless be a seeming of continuity from old to new, just as there would be from and to any other moment (regardless of whether or not teleportation occurred).
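A crude way to see how quickly two perfect copies would part ways once their inputs differ--a toy dynamical sketch of my own, not a model of any actual brain--is to evolve two identical "states" under the same update rule but marginally different input streams:

import math

# Two perfectly identical initial states ("original" and "copy"), updated by
# the same rule but fed marginally different sensory inputs.
a = b = 0.42

for step in range(1, 9):
    input_a = math.sin(step)                    # the embodied self's input stream
    input_b = math.sin(step) + 1e-6             # the simulated self's slightly different stream
    a = (3.9 * a * (1.0 - a) + input_a) % 1.0   # a chaotic, logistic-style update
    b = (3.9 * b * (1.0 - b) + input_b) % 1.0   # amplifies even tiny differences
    print(f"step {step}: |difference| = {abs(a - b):.6f}")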

 

I realize these terms are open to interpretation outside of the world of philosophy. I do need a term better than "self" in quotes to describe this transient, present-limited entity that acts as a transformation function on our identity and perception, as it remains important to this view; in some arguments, however, the word "self" could be taken to mean "identity" as I've described it. The difference between "self" and "identity" isn't as immediately clear-cut as it was in the analogy of "weather" and "climate."

 

With this out of the way, I've satisfied myself--this "self," I guess--with the answer that I've been looking for regarding "identity" and "sameness" as it contrasts to the changes that we all undergo as part of our experiences. This satisfaction, of course, is likely to be short-lived; I'll inevitably find something missing. This will be a problem for a future "self" :P


Treatise 1 - Sunday, October 13, 2013

Why I won't make a guide

 

I've never been asked this question, but it's been implied before; when I was still quite active in the #tulpa.info IRC channel, my general advice was never to adhere to any guide to the letter, because most of the time they only documented how one author had success. This could be mistaken for unfounded criticism or cynicism, but I've always had a reason why I would never rebut a guide for being wrong. Based on the relatively numerous collection of guides, and the number of confused beginners, I think it's time to address this issue.

 

First and foremost, it should be understood that we're dealing with highly subjective matters here. Don't be fooled--this is not science (at least not yet; I welcome the day when an objective model of the mind can be approached)--therefore, we can't expect any kind of repeatability. However, the wording of the guides is often misleading, with their imperative, instructional orientation (first do this, then do this), despite the fact that the activities mentioned are generally nothing more than conceptual manipulations. While these may work fine in some cases--cases which, I suppose, might be based on similarity in thinking to the author--this commits the fallacy of assuming that these concepts are globally understood and defined. They are not.

 

You can see why I hesitate to write a guide, then. In a scientific endeavor, it would be assumed that there is some authority that defines the terms in use, especially based on things that we believe to be observed constants (things like the mass of a proton, the speed of light, and so on). Without these stringent, observable definitions (that science has worked to acquire in exactitude for centuries), it becomes essentially impossible to communicate a repeatable experiment effectively. The feat itself could be compared to teaching a class on how to paint when one is unsure whether or not the audience has an understanding and/or agreement on what colors are.

 

Therein lies the confusion that forms the root of why beginners often have to ask questions of the community; its essence is the fact that we still can't agree on our definitions, and, worse, we still resort to using ineffable or otherwise indescribable concepts and metaphors thereof with the assumption that everyone will understand. This is not something that can easily be self-taught by reading a few posts on the Internet; if anything, the art of tulpamancy (if I may borrow the word) is something that requires incredible effort regarding self-discovery and discipline. The Tibetan monks knew this, and had the mindset to confront this problem.

 

Ergo, I will not write a guide--not because there are already too many, not because I feel they are all sufficient, but because I could never hope to teach the blind to paint. I can, however, tell you what things are and how I define them; I can show through informal logic the things I have learned, and I can tell you that all of this is my idiosyncrasy, and that none of it is to be taken as objective fact--under the hope that, just perhaps, you might learn something from it (I know not what). I will openly admit that my writing on this matter is not definitive, nor even agreed upon by consensus, but I leave it here in the hope that it will be, in some way, useful.


Interesting, 1-3 I don't see how those should be assumed to be true though. About the universe.

 

I hope you aren't implying that people that want help with making a tulpa are like blind people trying to paint T-T.

My lip hurts.


Interesting, 1-3 I don't see how those should be assumed to be true though. About the universe.

 

Ah, those...I only assume those when doing science, as they're basically the core of empiricism. On the other hand, when I have the time to let my mind wander in philosophy, I don't assume them--though it's very difficult (perhaps impossible) to prove one way or another :P

 

I hope you aren't implying that people that want help with making a tulpa are like blind people trying to paint T-T.

 

I wasn't :P

 

My implication was moreso that trying to teach someone--who doesn't have the same mental schema and/or perception--such a subjective phenomenon as creating a tulpa is quite similar to trying to teach the blind to paint (in color, I should add). I chose that metaphor because it certainly isn't impossible, and it doesn't imply that the teachers are themselves at fault; the problem lies in the fact that we don't have effective tools (such as metaphors) to communicate our studies of these subjective phenomena, and, without that, we cannot form testable hypotheses and perform science. I am willing to say that such tools will come to be at some point, but not now, and perhaps not soon--we have quite a way to go :P

