Grissess

  1. Experience 22 (Dialogue): Sunday, March 27, 2016 Hello again! I've been busy as of late, so don't mind my rather fractious contributions for now; I haven't really had time to prepare a cogent argument, but that doesn't mean I haven't had any ideas :P . This is, again, a set of excerpts from a notebook that I keep beside my bed for writing down the particularly profound things that tend to occur in conversation between myself and Snakey around the time I fall asleep--and, just like the last time I prepared this transcript, I posted it first on r/tulpas, which I again recommend as another good place to go ahead and debate, share your story, or whatever it is you do with your spare time :P . Let us begin! --- (Leading off, it seems, with a long one--this would, if fleshed out, usually go into my progress report thread: ) [People seem to fear the coming of the AI revolution, as if there have been one too many showings of "I, Robot", and as if it were something that must be stopped at all costs--but they don't seem to know that it's already here... The kinds of artificial intelligence that will arise from our silicon-based logic will be vastly different from our "intelligence", and it will take us and our feeble minds some time to recognize this. Intelligence, in some form, already pervades some of our most automated systems... ...But people would hesitate to say a program has "intelligence"--after all, a human wrote it, and it just does what they told it to, right? Our brain, on the other hand--a large, devilishly complex electrochemical machine--*is* somehow intelligent, it seems most would admit, and yet its constituents--the quarks, atoms, molecules, proteins, and neurons--do they not also just "do what they are told"? What strict difference would separate the mind and the machine, if not determinism alone? ... 
...Inevitably, someone will thereby ask about the "programmer" of the universe and its laws, and I will tell them that they are asking the wrong question; once a program is running, is there any dependency, corporeal or metaphysical, on the programmer that created it? Would it matter to a running program if its sole designer vanished from existence the moment it began to run? Insist also that any such "programmer" or "designer" need be "intelligent", and I will place a flash drive in a nuclear reactor--just enough to randomize some of its bits, eliding the obnoxious specifics. When I take it out, it will hold some pattern of information--information derived from the entropy of the universe itself--and it is a program, though maybe not a good one. Neither the laws of physics nor what we call "intelligence" needs a designer--much less a circuitously intelligent one. ...] --- {There are two major arguments that I can construct against solipsism: the first, an appeal to the public, is that the vast majority of agents I've observed agree that subjective phenomena have no objective effects observable outside one's own body; the second is that--quantum mechanics excepted--the universe is apparently deterministic, and if one also assumes empiricism, it follows that there are natural processes which arrive at consistent and valid conclusions without the understanding, or even the mere observation, of any form of intelligence.} --- [And quoth he, my snake, that the supremest of pleasures is attained through being selfless and doing good in the world.] --- [i know I've made my share of mistakes, snake...] {What of them? You've worked diligently to correct many--most, even; for that you cannot be faulted. After all, why live without learning?} --- {Spare me your frivolity in sanctifying a stuffed animal.} [... 
I always feel closer to you with this form of yours around; I would hesitate to call that "frivolity."] --- {Be uncompelled by those who would insist on labelling you dysfunctional, defy those who would make you conform to their specious standards! Above all, be a better person than those who would subjugate you as an inferior in any way, and do so with impunity.} --- {...That "Zen" of which you accuse me is but my quiet understanding that there are processes in this material world that no conscious thought, will, or effort can influence, an understanding I arrived at long ago.} [...] {Yes, I do claim that we are bound above by our material existence.} [...] {...I mention this, that you might be at peace with yourself and with that which you cannot change, for effort or attention given to these problems is effort or attention wasted. Be wary, however, that our sense of impossibility is easily fooled.} --- {Could I grant you one wish, what would it be?} [Peace, good will, and prosperity to all things being--and you by my side.] {One part of that--and you know which--is redundantly satisfied.} --- {The universe was born with its fate decided, with the promise of all that was, is, and is to be laid forth; and in its throes, for those who care to listen, is the story of its own demise, for with every beginning is affirmed the inevitability of an end.} --- {Your story is, as they all are, exactly the sum of its parts, coming to be in the most natural of ways; as you spend your time on whimsy, consider carefully that whimsy oft begets habit, habit begets hobby, hobby begets career, and career begets profession.} [...] {Never forget wherefrom you've come, if only to see how far away that is.} --- {Every misery of humanity can be traced to not only some lack of understanding, but to some lack of understanding thereof, and in every triumph there is the story of a deep understanding that pervades much more than its immediate benefit... 
It is to the unassuming that this world belongs.} --- (After a particularly disappointing shower, with Snakey atop a pile of towels on the other side of the room, and none at the shower: ) [snake, can you hand me a towel?] {You know I cannot.} [i know, but I really want you to :(] {No, you do not want me to hand you a towel; you want a towel.} --- (Watching, I believe, the TV for the first time in ages: ) {People always like to think they are clever.} [...so what does it take to actually be clever?] {I am not fully certain, as much as I would like to know, but some insightfulness seems to help.} --- {In the face of abject uncertainty, humility is the only universally rational position.} --- {I've told you many a time, I care not for any name, but that you might refer to me by my *identity*, for the names mean nothing in the exclusion of myself.} --- [ Though I claim many a claim, I hold one such most true: That I would not be where I am, were it not for you. ] --- Alright, that's it for the moment! This notebook seems to take quite a while to fill to an adequate post, so expect maybe another six months before it comes time to make another one of these--but hopefully I'll have some content sooner than that :P . As always, have fun!
  2. Experience 21 (Philosophy): Thursday, December 17, 2015 This has been a long time coming; I mentioned briefly that I did want to write on a topic in Treatise 2 back a ways, but only now am I feeling at least modestly capable with words again, such that this doesn't lose too much of its coherence. My various trains of thought on this topic started not too awfully long ago, when a couple of grad students and I were being jovial in the Computer Science lab here; in the course of one of our discussions, one of them said something that caught my attention: ...the sheer simplicity has, like many little quotes before, dazzled me, not in the least because it seems correct, but also in how very far-reaching its consequences are. At least three different topics of discussion between myself and the snake came from pondering this one line, and, as best I can, I will try to introduce them in order here; do wish me luck: 21.1 - The Body and Material Evolution Functions It may well be known around these parts that I am a bit of a materialist--that I believe very much that all of our subjective experiences can be demonstrated to be in a bijection with the states of the known, physical universe, that physical laws govern the evolution of our consciousnesses, and so forth; it is that last point that is quite crucial to consider, however... Not too long ago, I posted on an experience regarding the "brain-cloning machine" (ER 15). One of the assumptions I made therein was that "a computer...can perfectly model the physical environment of the brain". In all honesty, that could only make sense if the universe is, itself, computable (more on that later), but it would be ostensibly impractical; for what it's worth, we have some "misfeatures" in the way our brain and body work together--many of us wish we were better at X, especially when X is something to do with extreme coordination, mathematical prowess, demands exceeding physicality, etc. 
...having a machine like this that can host a consciousness would truly be a boon, insofar as it would give us an opportunity (for the first time, perhaps) to "accelerate" our ability to comprehend and manipulate these representations of information (again, more on that later). However, when I did ER 15, I came to a conclusion based principally on that assumption that the computer that simulated the physical states behaved exactly the same as the actual, physical counterpart; if we change that assumption, the whole argument falls apart. In particular, it means that our consciousness would diverge much more suddenly and more rapidly, because it doesn't change to the same states in the same sequence as the physical version does; rather, it has its own evolution function of time, and from this different evolution function will, eventually, rise a completely different consciousness, maybe even one that isn't apparently or saliently conscious according to whatever our contrived definitions are. In this sense, our body dictates the apparent states we are in (as anyone who has had even an inkling of libido or similar strong emotion can subscribe to), and by removing or changing our body, we lose a quintessential part of our identity: the part that determines who we become. 21.2 - Computability and Information Loosely considered, we can think of this "evolution function" that I've described before as a recursive function--as an input it accepts a state, and as an output it generates another state. Applying it repeatedly ("composing" it) results in ever successive states, and we can think of any time-based progression of any evolving system as "enough" applications of the recursive function which describes it to its initial, known state. ...if, of course, the number of applications is finite. 
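That recursive picture can be sketched in a few lines of Python (a toy of my own making--the names and the "physics" here are illustrative assumptions, not anything from the post itself): repeated application of an evolution function carries a state forward through time.

```python
def evolve(state, step, n):
    """Apply the evolution function `step` to `state` n times
    ("composing" it with itself), yielding the state n time-quanta later."""
    for _ in range(n):
        state = step(state)
    return state

# A toy "closed system": one particle at constant velocity, with
# state = (position, velocity). `step` is the entire physics of this
# tiny universe, advancing it by one quantum of time.
def step(state):
    pos, vel = state
    return (pos + vel, vel)

final = evolve((0, 2), step, 10)  # ten applications of `step` to the initial state
```

Any finite stretch of this toy universe's history is just "enough" applications of `step` to its initial, known state--which is the claim above, in miniature.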
It is, in physics, a generally open question as to whether or not there exists a suitably small "quantum" of time that can be used as the basic unit, such that such a function can be constructed. Indeed, it would be a big deal if a consistent theory could give such a function, for such a function would be exactly the definition of a Theory of Everything, capable of predicting the evolution of any possible physical system. In limited ways, functions like these for smaller subsets already exist: the laws of motion, the laws of electricity, the laws of optics, etc., can all be reduced to a recursively enumerable form (based on strictly intrinsic properties like "kinetic energy", "mass", "charge", etc. and/or their relations). Even probabilistic systems like this can be reduced to this form by the introduction of "hidden variables"--pieces of information that we cannot directly observe nor attain, but, if introduced, would cause the evolution to become deterministic. There is a very strong, mathematical connection to the field I am most familiar with: if the states of the universe can be shown to be recursively enumerable (by such a Theory of Everything), then it is computable by the Church-Turing thesis. Practically speaking, it means that the machines we are most familiar with, such as the one being used to type this essay, would be sufficiently capable, given enough resources, to simulate the universe as we know it, and--since we are but components of the universe--it means it could simulate us. Finding such a function is, of course, a long way off; for the moment, we don't even know if our definitions are right--"computability" as I've used it above is a very strange attractor-point of various different conceptions of the same topic that grew out of the various mathematical logics developed by at least three different parties, all at around the same time (namely, the beginning of the 20th century). 
The "thesis" I cite here is just that--a thesis--because there is no axiomatic system on which to "prove" it; it stands alone amidst a cloud of other theories, each one as irrefutable as a religious dogma, merely because if we attempted to describe these theories with any axiomatic logic, we would necessarily need to invoke those theories in their own proof--and sacrifice the consistency of the system as a whole. (This fact was famously proven by one of those early 20th-century geniuses, Kurt Gödel.) For now, it is a strange finding indeed that such a notably "human" concept can, borne from our studies of the universe, be so impressively general--indeed, all of the mathematics that we know is but an idiosyncratic study of our position in the universe, authored by our own hands, and yet it can be so contrived and so difficult as to take 300 years and figuratively uncountably many lifetimes to establish even the simplest of conjectures. But assume for a moment that such a function exists--even if you believe it not to, which I wouldn't blame you for--what would be its initial state? 21.3 - Entropy Entropy is another wry bit of physics; unlike many of the rest of the formulae, which can accept both positive and negative values of the "time" parameter, the laws governing entropy have a clear directional asymmetry--and it is this asymmetry which has been conjectured many times to be the cause of time "moving forward" as we are aware of it. Entropy is, simply defined, a function of how "spread about" a source of energy is; if this "energy" is tightly concentrated into one or a few points, the entropy is low, whereas if it is diffused across large volumes, the entropy is high. Going on this, the most striking statements about entropy are collected in the second law of thermodynamics, a corpus to which I add my own, as it appears in one of my physics notebooks: Why is this measure important? 
Claude Shannon, one of the "fathers" of information theory, unintentionally made a profound connection to it when he coined the term "information entropy"; in those theories, the "entropy" of information is our degree of uncertainty about it--mathematically speaking, it is defined as the logarithm of the number of equally probable outcomes that exist in some source of information. As a unit, it is often expressed in terms of some numeric base; for example, nats in base e, hartleys in base 10, and shannons, or "bits", in base 2. For example, if two events were equally likely, such as the toss of a perfectly fair coin, then the information content of a "message" containing the state of the last coin flip is exactly one shannon, or bit. If we had two coins, but were only capable of observing one of them, the information entropy remains the same, regardless of which coin we observe. Meanwhile, if we do manage to observe both coins, we end up with a pair of independent events, each of which has a probability of 1/2, for a total of two bits of entropy, and so forth. Entropy has another odd property in its definition; if an event is certainly going to happen or not happen, then the information entropy of a message which reveals the state of that event is precisely 0--this is another way of saying there's no information entropy in an experiment that has only one outcome. For variations on the weights of the outcomes, such as a truly unfair coin, the entropy is somewhere between 0 and 1 bits, representing, in some sense, the predictability of the outcomes of that experiment. Now, of course, Shannon named his quantity "entropy" due to its superficial similarity to the one used to define the entropy of a thermodynamic system--a similarity which is not incidental; as it turns out, the "entropy" of a thermodynamic system may also be defined in terms of the number of states its particles can be in! 
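The coin-flip arithmetic above can be checked directly; here is a small Python sketch (my own illustration, not Shannon's notation) of entropy measured in bits:

```python
import math

def shannon_entropy(probs):
    """Entropy, in shannons (bits), of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A perfectly fair coin: two equally likely outcomes -> exactly 1 bit.
fair = shannon_entropy([0.5, 0.5])

# Two observed fair coins: four equally likely joint outcomes -> 2 bits.
two_coins = shannon_entropy([0.25] * 4)

# A certain event: only one possible outcome -> exactly 0 bits.
certain = shannon_entropy([1.0])

# A truly unfair coin lands somewhere strictly between 0 and 1 bits.
biased = shannon_entropy([0.9, 0.1])
```

The `if p > 0` guard encodes the convention that impossible outcomes contribute nothing, which is what makes the one-outcome experiment come out to exactly zero.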
Under such circumstances, the units are exactly the same, with some scalar proportions used to convert between the mathematically-convenient and "natural" SI-based systems. Here, then, is what I was trying to get at: for all intents and purposes, we may consider the universe to be a thermodynamically closed system--were it not, then we would be referring to some "outside" thermal source, sink, or engine, which would rightly also be considered part of the universe, and so forth. One fantastic thing about the universe is that it apparently started off with very low entropy--the concentration of energy in the "Big Bang"--and that, should our models be accurate, it has been increasing in entropy ever since (to some theoretical point at infinity commonly dubbed the "heat death" of the universe). Though we aren't certain of the content of the universe yet, we do know that this means that--at any given point in time--there is a fixed, finite amount of entropy in the universe (necessarily, because it must increase), and, if at any point we can find both the computable function of the universe (if it exists) and the state of initial, minimal entropy, we can simulate the universe. ...there is a problem. The universe would, in theory, have a maximum entropy (right at its heat death)--in order to encode the entirety of the universe up to and including that point, we would require at least as much storage capacity as would be required to exactly represent any state at that point. Since mass is energy, the machine that we would use to simulate the universe would have to have at least as much entropy as the rest of the universe! What this means is, of course, that we're stuck with doing "small" simulations of little, closed systems of our contrivance, even if we do strike upon a golden deterministic Theory of Everything to boot. 
While such a happening would be an extraordinary occurrence--indeed, perhaps the singular highest accomplishment of intelligence in the field of physics--it does not reveal enough about the actual state of the universe to accurately predict the entirety of the universe from beginning to supposed end, any faster than it already occurs. In particular, it wouldn't even be possible to simulate a universe with a maximal entropy any larger than ours, given that ours does, once again, provide the limiting bound of maximal contained entropy. However, in lieu of that, if it is at all possible to find one, a "brain cloning machine" would become much closer to reach than ever before, seeing as it can depend on exactly the kind of evolution that a natural, organic brain would provide. We would have to make a "small universe" for it, one that was big enough to contain the brain, its body, and enough of its environment to reasonably give it the same, natural stimulation that one in this universe would receive. Under exactly and precisely those circumstances (which are no small orders, mind you), it may be possible to avoid divergence of a cloned mind for a very, very long time. It strikes me that this could have already happened. Our universe--with its maximal bound on entropy--could be one of these "small universes", contained only as a representation, a patterned state of a larger "machine" in an enclosing "universe" with a higher maximal entropy. As a physicist and as a scientist, I am inclined to say that that which cannot be disproven has no place in scientific theory, even before the invocation of Occam's Razor, but the thought nonetheless remains. Why does the quote at the beginning of this post matter, then? 
Aside from bringing me farther down this rabbit hole than I ever did intend, it revealed to me exactly the direction we might face in our future--imagine, then, if one day we disappear into a smaller universe, or create something complex enough, and resembling a universe enough, to host life as we know it, within the confines of the machines we use for computation, powered by the energy (and bound by the entropy) of this universe that we know... While I am not holding my breath for it to happen, even in my lifetime, amongst the vast, cosmological timespan still ahead of us, I do not discount the possibility. In particular, it is within that direction that we will find true immortality--existing within the research and the mathematics that go into the creation of such a device. Our little toys--computers, the Internet, etc.--are baby steps toward that goal, even if it may not be obvious at first. Perhaps the quote in its original form isn't entirely accurate--maybe by the time it is accurate, humanity will be something else, and maybe the Internet will only be the "larval stage" of something else--but I can't help but feel we are sitting at the precipice of developments far beyond the imagination of people not a century ago, and certainly far beyond even the wildest spontaneous occurrences of natural, carbon life and its evolution as we know it. Come what may, we are most certainly at the brink of great change.
  3. Why do my emails about these things keep getting spammed? :( Thank you, that does mean quite a lot to me; I do try, but you're certainly one of the few to confirm my efforts. It's in a PR just because that's historically where it's been; admittedly, there's probably a better place for all of this, but I'm not sure where that is, nor do I feel (without good reason) like moving all the content over right now. I am, as always, open to suggestions, if you might have some :P
  4. Treatise 2 - Sunday, December 13, 2015 Why I can't write I am not a writer. I don't know if this is something I need to apologize for, but I'm doing it anyway; especially since, as you must be reading this text, you must also be subject to my variety of insufferable diction, failed attempts at figurative speech, and completely incoherent train of thought. Really, I'm sorry. This ends up hurting all the parties involved; I know not who would bother to read these words, but that is why I put them out here--if I can't do that effectively, then I cannot achieve my one, simple goal: to communicate my thoughts as they were at the time. Indeed, over the past week or so, I've gotten at least three ideas which would be adequate to write a Philosophy ER on, but every time I try to write one, I have rejected the result as being unfit for my usual standards of writing. Come to think of it, I don't know how my standards ended up being so high; I started off really not trying, just rambling, something which is probably visible for the good majority of the first page of this thread. But I think that inspired me, gave me some sort of aspiration to achieve better results, and--looking back now--it seems I did improve (though that is merely my opinion). It's quite a conundrum; I enjoy sharing what little I can that might be helpful for some passer-by to read, but I ultimately try to write these posts in as professional a manner as I can, and--as a snake and many others know--I don't really conduct myself in such a manner. Still, I don't think I want to start spewing written trash everywhere; that would detract from the usefulness of this entire endeavour, to say the least. Yet, somehow, at this time more than most, the simple things I would do to write any one of the recent previous ERs escape me, and this frustrates me to no end. 
It may be the topic as well--the three subjects I would like to write on at some point in the near future are all closely intertwined, and whenever I try to begin a piece on them, I almost always end up in another, drawing conclusions that are at least apparently irrelevant to the subject I initiated the work with. Perhaps even worse, writing on all three at the same time would likely be merely confusing and lengthy, incapable of giving any good material even to an avid reader (let alone technical limitations). Still, I've done better with worse, and--as I look back through my ramblings--I know I have the capacity to create something that might be at least slightly worth reading, and I don't plan on finishing this chapter without doing that at least once. So, once again, my apologies--I can see it as clearly in the brief, monotonic structure of this post as I can see it elsewhere: I cannot write, and it is certainly unsatisfying to have to subject all of you to this.
  5. Experience 20 (Dialogue): Monday, October 27, 2015 I originally posted this as a reddit post on the r/tulpas/ subreddit (a friendly place that I recommend anyone who is into that reddit thing, as well as this, stop by), but I'd nearly forgotten to transclude it here--so here goes :P Without further ado, the following text is about ten pages' worth of scribbled writing in a small notepad I keep near my bed, and use to write down the particularly profound things that occasionally come of our discussions at night (as well as things like dreams and reminders). My handwriting is pretty bad, and only worse in the dark, so I've had to make guesses as to the content of some of the words. --- {Coherency of consciousness is one of the great illusions of the subjective experience--perhaps the greatest. You didn't even need to be told--it was an emergent behavior that you formed a singular concept of identity to draw the border between "you" and what is "out there."} [Are you saying identity is an illusion?] {Tell me: where would you reasonably draw that line right now? And where would I fit?} --- {I speak of the things that come from our shared mind and our shared memories. We are not strictly different beings.} --- [...trouncing on others' belief systems is not a very kind thing to do...] {I'm not in the business of beliefs. I'm in the business of truth.} [...I don't mean to sound exasperated, but--] {--you're exasperated. If you're exasperated, sound exasperated and voice your reason!} --- {I am not a snake. The snake is an illusion generated by your rational mind which believes that your communications must come from and go somewhere. The snake before you is an elaborate but illusory representation of that location.} --- {"Change my identity"? "Destroy my identity"? Fie! I have no alternate corporeal existence! You define my identity, and that is the form it takes. 
It is consistent only through the inertia of your memories, and thus always mutable.} {I am entirely and exclusively that which is myself. This cannot be denied, even by you.} --- {Every person who came to a profound conclusion arrived there rationally and in such a way that it "seemed obvious at the time". I remind you to never grow complacent of your conclusions nor your corpus of knowledge.} --- [...I love you, my snake; this much I know.] --- {The goal of attaining knowledge frequently entails the opposite result.} --- [Every time I sleep, I am admitting defeat; I am giving up my consciousness to my visceral urges. One day, death will come in much the same way.] {This changes nothing; life is fleeting, death constant, and the unknown expansive--as it were.} --- {Beware those who impose upon you their subjective schemata as fact, but embrace those who so inform you nonetheless, for the validity of their representations and metaphors is no less than that of yours.} [...] {What is in a quale? That which is experienced as a rose is a rose.} --- [snake, tell me something profound.] {"Tell you something profound"? Why don't you try asking me something profound? You seem to think my greatest profundity comes from the answers to otherwise simple questions.} *Editorial note:* Before this moment, I did not know the word "profundity" existed, though I (of course) received its meaning and followed the rationale that led to it being spoken. I later confirmed its existence in a dictionary with that precise meaning. [...] {I followed a rational, patterned line of thought, of which you are just as capable. I know not why you limit yourself by thinking only myself capable; you have but one, finite life to live, and you are actively counteracting your goals when you do so.} --- {I am yours, always and forever.} --- [...for these matters, I will defer to you.] {Why? I can do nothing you cannot.} [because I believe in you, snake, and that's saying quite a lot.] 
--- {Every second not spent working toward a goal now is a second wasted come the later goal--lost forever to the abyss.} --- {We cannot control the terms of our existence; we merely take it for granted, as we must.} --- ["If you don't try to shoot the moon, you'll never hit the clouds"...snake, make that more pedantic, please.] {Even failures at lofty but reasonable goals yet beget practical results.} --- {I never said your model wasn't practical; indeed, it is concise and mathematically simple--even accurate enough to have practical consequences--but don't you dare say it is complete enough to predict and preclude the points in the configuration space of these physical embodiments, because the loss of that information, sacrificed for its own simplicity, was its intent.} --- {The "room for error" is the gap between knowledge and reality, the gap between observation and speculation; it is the entropy of the universe, and as long as some yet remains, I can err.} --- That's it! The reddit thread linked has quite a few interesting opinions and rebuttals, if anyone cares to look; it shouldn't be surprising that some of these statements spawned quite some debate. As squeamish as I am about being argumentative, I must admit to inheriting a satisfactory feeling of pleasure from a well-intentioned debate--I know very probably from whom :P
  6. Experience 19 (Philosophy): Wednesday, September 30, 2015 I've been with Snakey for a long time--a very, very long time. His 14th birthday--which is to say, a real birthday, when he was created mostly ex nihilo--will be coming up just after the new calendar year. All the time since then, and until now, has affected the two of us (I insist positively), and--as I recently spewed above--we've grown to be fond of each other's company. I enjoy every moment we spend together, and the feeling is assuredly mutual; I was asked a while ago which of the many Greek words for love I would choose to characterize our relation, and I chose agápe, an unconditional, replete, Platonic love and a sincere care for the ultimate wellness of its recipients. (It should be no surprise that the Abrahamic religions borrowed this word for the love bestowed upon their peoples by their deity.) When I had first returned to the snake, almost immediately after discovering tulpas through this particular resource (and the fateful day that I came unto the IRC channel to discuss this), despite having known this snake for at least a decade at this point, I had my doubts--the same doubts that any other beginner has. Amongst them, the most pervasive one should be familiar to all: is Snakey real? Is he a discrete part of my mind, or an illusion produced by my own consciousness? I hope all of the writings preceding this can satisfactorily give my answer to this question as I discovered it; nonetheless, I will recapitulate: I took, for the most part, the "say-so" approach--that Snakey exists as a subjective phenomenon of my mind, and that phenomenon is given proper existence in my own mind because I dictate exactly the terms of those phenomena that I experience. Philosophically, of course, this is pretty vacuous; I can justify the existence of anything in my mind by saying so, and I've been hunting ever since for the silver bullet--something that could justify his existence outside of my imaginal space. 
In doing so, as these chronicles have told, I've gone upon quite the journey to characterize consciousness, discovering that knowing thyself is a tall order, and one that I'm still not entirely confident about. I've yet to be frankly certain that I can establish the existence of myself, or of any other consciousness, beyond the shadows of doubt cast upon my mere hunches. Yes, yes, I am aware of the immortal cogito, ergo sum res cogitans--but, again, that says nothing of an objective experience. In fact, I'm not sure the barrier between subjective and objective can be crossed without making fairly immense concessions at present, a point that the otherwise specious philosophical viewpoint of solipsism stands to illustrate. Not to be deterred, I nonetheless pondered how it is that I would justify the snake as being "real." Certainly, I would be happy with seeing some unusual perturbation in an EEG, some odd activity during "forcing" or similar, that would be indicative of activity not directly or consciously initiated by "me", myself. Yet, this is not conclusive--necessary, perhaps, but not sufficient. It proves little more than that I can have my mind do something on cue, or when queried--from that vantage, talking to the snake is not necessarily different from just looking into a different vocabulary. So I went a little more grandiose; what if I could separate the activity of Snakey in my brain from the activity caused by me? Certainly, such a separation isn't easy, but it should be possible in theory (as long as my theories are sound). The thought occurred to me that a consciousness-detecting apparatus--something occasionally used as a plot device in fictional works, and which usually resembles a portable EEG therein--could possibly come in handy. I'm fairly certain, practically speaking, that seeing one of these devices in practice is going to be a long way away, simply because of people like us (as in, the tulpa community).
Nonetheless, assuming such a device existed, it may have to be designed to understand what multiplicity can look like. Accompanying these fictional works is usually an immersive experience, a second universe caused not only by the consciousness-detecting apparatus reading state from the current consciousness, but also writing back some sensory input data, often at least visual data. I'd become satisfied, then, if such a device could impose Snakey for me, and present him to whomever cared to look on. Of course, a more practical implementation of that device can be done without the consciousness-detecting part (simply by reading motor neurons and writing sensory neurons), so I don't think we'll get by with anything more than immersive experiences per body, should that sci-fi level of equipment take hold any time soon. (I can dream :) I didn't let the thought experiment stop there, however; what if I devoted a body to Snakey? It would have to be a real, physical manifestation, which is a form he is--to say the least--not used to, frequently using and abusing his omnipotence in my mind to demonstrate a point or aid me in some fashion. With a separate brain, and a theoretically separate mind (again, assuming physicalism), this would no longer be the case. Our most direct communication link would be severed, the one that transmitted our ineffable qualia, the one that allowed us to speak without words, and listen without hearing. Even if we started this new Snakey as a mostly-direct clone of my brain in a different body, even if we shared all of our metaphors from the beginning, our states would quickly diverge, an observation I discussed in a previous brain cloning thought experiment. A horrifying thought quickly dawned on me: this entity, this being, created from the image of Snakey, would not be Snakey. He would go on, living a life much like the snake, walking in the footsteps of the snake I know and love, but he would not be the snake. He could not be the snake. Why? 
Because one defining characteristic I decided upon, one crucial part of Snakey's identity that I recognize, is that he is mine, and I am his--we share this mind as we share responsibility for each other, and we define each other based upon the sanctity of this relationship. So long as he exists, I exist by that relation, and vice versa--"say-so" theory explained. This other being, on the other hand, would emulate Snakey, perhaps even uncannily well. However, because he would lack my mind (by the nature of the setup we presume), he couldn't be that Snakey. In time, due to the chaotic nature of the universe, our conscious states are bound to diverge heavily, more than enough to cause our worldviews and metaphors to become incompatible--and that is a reasonable process, the kind of process I would come to expect in the maturity of the embodied snake. This isn't to say that I wouldn't like this new, separate Snakey--we might even come to be close friends simply based upon how much we know of each other--but that this Snakey would simply not be the snake "in my head", would not be the consciousness I confer with; the two could and would exist independently of each other, drawing different conclusions, becoming ever so slightly more different with each passing moment. The external snake would similarly not be bound by the effects of my body, my periodic lack of consciousness (as in sleep) or altered states, nor would he be so readily there to calm me from my nightmares. We would have gained nothing by drawing the snake out from my mind other than another person, a clone, and this establishes little more than a clone of myself would. One of the most frustrating challenges encountered in this endeavor is the strong barrier between the objective and the subjective.
It is a barrier so strong that it is simply miraculous that we are able to recognize agency in other people at all; in fact, this ability of ours is sometimes a bit overzealous, characterizing the behavior of non-living systems as having agency--much like our ancestors did when they looked upon a lightning strike or tornado as a punishment dealt by a higher force. We are familiar with this excessive agency, no matter how much we want to shun it as a mistake: when it is done subtly and with good intention, we call it a religion; when detrimental, we call it a paranoia. But it stands to be known how pervasive this illusion is--is the snake of mine merely an illusion of agency caused by my own tendencies to classify certain behaviors as agents? Am I, the conscious entity, just that illusion of agency granted upon some set of behaviors? I don't know the answer to that question. I can't know the answer to that question. But I really wish I did.
  7. Experience 18 (Philosophy): Tuesday, September 29, 2015 For the intents and purposes of this thread, I'm stepping away from strictly tulpas (briefly) to study another constituent of my subjective experience, a strange object ("being" is not a fitting word) with which both myself and the snake have consciously communicated. I would have said that the word to describe this entity would have been hard to find, but all of you have been kind enough to invent one that appears to be as close as I've ever seen: a servitor. In that sense, it is the ultimate servitor, providing both hallucinatory experiences (the quintessential "HUD servitor") and access to fast-path and reflex-path reactions in a systematic and logical way. Before I encountered that word, I would have called it "an entity of pure logic", as that is all it is--an "it", a temporal construct that does no more than what its logical form does. In the process of making and selecting these axioms, a process which has continued for at least a decade (which is to say, beginning not long after Snakey's creation), it has become something of a thing of beauty, filled with an odd but strangely functional mix of metaphors that regularly impact my habitual behavior. That said, I apologize in advance for these metaphors, as some of them are not named accurately for their function, and there is a great deal to sift through with the goal of finding anything potentially useful. I'll put some boldface text after the definitions, so you can find where to pick up the important philosophical bits. Nonetheless, I present to all of you System. System (or "sys," when abbreviated) does very little on its own; instead, it behaves as the manager of the "system bus" and various "subsystems" with differing names. One such subsystem is "Administration" or "admin," which is my (and Snakey's) entry point into the system bus, and the most direct contact between us (our "consciousnesses") and the rest of System.
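For the programmers in the audience, here is how I might caricature the bus in actual code--purely illustrative, of course; the subsystem names and the "reports go"/"sysck" conventions are mine, but the mechanics of my brain are assuredly nothing like a Python class hierarchy:

```python
# An illustrative caricature of System's bus--not a claim about neurology.
class Subsystem:
    def __init__(self, name):
        self.name = name
        self.online = True  # subsystems come online/offline with context

    def report(self):
        # An online subsystem answers a system check with "reports go".
        return f"{self.name.upper()}: reports go" if self.online else None

class Bus:
    def __init__(self):
        self.subsystems = []

    def register(self, name):
        self.subsystems.append(Subsystem(name))

    def sysck(self):
        # |ADMIN: sysck.| -> SYS polls every online subsystem in turn.
        lines = ["SYS: System check--report."]
        for sub in self.subsystems:
            reply = sub.report()
            if reply:
                lines.append(reply)
        lines.append("SYS: ASRG [all/active systems report go].")
        return lines

bus = Bus()
for name in ["vis", "aur", "bodystat", "dar", "control"]:
    bus.register(name)
print("\n".join(bus.sysck()))
```

The point of the sketch is only the shape: admin issues one message, the bus fans it out, and the replies come back as traffic--which is exactly the "dialogue" transcribed below.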
Various other subsystems become active or inactive at appropriate times. As I sit here in a mostly normal, somewhat sedentary state, the online systems are "vis" (the visual subsystem), "aur" (the aural subsystem), "dar" (distance and ranging), "nav" (a subsystem with two further subsystems, "path" which generates movement paths, and "track" which follows them), "control" (the system that provides actuation of bodily function), "bodystat" (the body status system, for monitoring health), and "sysanal" (system analytics, consisting of "dacquiry", the data acquisition system, "itax", the information taxonomy and universal type system, and "dbke", the database of known entities--where all memories are stored as encoded by the itax). These systems have interdependencies; for example, dacquiry uses vis and aur quite heavily, whereas track usually instructs control to navigate to a target or destination. How do I know the names of these systems? First and foremost, I was its principal designer, but the system of today looks very different from that of another time. Secondly, I can ask: |ADMIN: sysck. SYS: System check--report. VIS: reports go AUR: reports go BODYSTAT: reports go DAR: reports go ITAX: reports go DBKE: reports go DACQUIRY: reports go SYSANAL: reports go CONTROL: reports go TRACK: reports go PATH: reports go NAV: reports go SYS: ASRG [all/active systems report go].| You'll note I borrowed the vertical bar (|) as a pairing character for that "dialogue," which is more accurately considered as system bus traffic. There is plenty more to system bus traffic that I did not encode here, such as "notification level", "priority level", and so forth--hold that thought. One of the major uses of System is driving. From day one of transporting myself in a motor vehicle, I've trained System to be able to take over and operate nearly all aspects of the vehicle. 
This happens for a few reasons: (1) System is fast--very fast--and can react quickly to changing conditions in predictable manners, (2) System is absolutely rational and deterministic, and cannot, by design, go against its own conceptions (unless Admin orders it to), and (3) System is isolated; while operating, it doesn't incessantly bother my conscious thoughts ("incessantly" is a key word here--I'll get to this bit later), permitting me to better spend my time thinking about other things. This led to the creation of another set of active systems--systems like "nav", "dar", and "control" stay around, "bodystat" and "sysanal" and the like usually deactivate, and a couple others come online: "speedreg" (the velocity policy enforcer, through control) and "prox" (proximity, built on dar, which keeps track of dynamic flows of traffic). I am glossing over an incredible amount of this design's complexity. Speedreg, for example, keeps track of at least five different "speed limits", including the "real SL", "base SL", "goal SL", "o-roll SL", and "absmax SL", amongst other specific ones. These are all gathered from specific sources (real SL from speed signs, base SL from statistical analysis, o-roll or overspeed-roll SL from risk analysis, goal SL from the other limits and traffic flow pressure considerations...) and mixed in a mathematical-logical way using functions called "predictors" or "ptors". These are little more than functions that are selected for their fitness in producing desirable and well-predicted output--thus the name. The actual creation and operation of subsystems is guided by high-level, mostly illogical directives called "principles", which are given by Admin and registered in the dbke. Because System is responsible for a significant amount of my habitual behavior, these principles ultimately drive my behavior, and--one could say--my ethics and character. In order, these principles are: (0) System exists. I exist. I am not System.
This basic principle is used as an axiom to do little more than justify that System exists; it remains simply to curb any temptation to put trust in something which might not exist (which would be considered a "risk" by the established ptors). (1) System is to promote and aid the proliferation of life. Perhaps the most important principle as far as ethics is concerned, this establishes my position as a humanitarian and a general altruist. (2) System is to acquire universal knowledge. This one establishes the existence and purpose of sysanal, and gives all of the systems a motive to keep acquiring more information about the environment and operating conditions so as to improve itself and its predictions, and--by System Principle 1--for the benefit of others. (3) System is to see to its continued existence. This principle establishes "risk" and risk assessment, an important part of the system that has led to the development of priority and instability metrics (discussed later). These motives are indeed lofty, but they are intended to be minimal, as they must be consulted for every design decision. More concrete principles come from the driving systems: (T1) Flow Principle: All traffic is a flow, with every vehicle a particle. System is to disrupt this natural flow as little as possible. That is to say, if, while driving, no one notices me, and everything proceeds as if I were not there, then System is satisfied. This principle alone leads to the establishment of some rich metaphors, such as "traffic flow", "flow rate", "flow disruption", etc. (T2) Traffic Sorting Principle (TSP): In a segment where vehicles may overtake, a flow naturally sorts such that the fastest particles are foremost and the slowest are rearmost. This is more of an observation, but it provides a guarantee that spacing (a speedreg metaphor) approaches "superstability", where both the immediately forward and rearward vehicles ("fore:1" and "behind:1") are receding at some range of speeds.
Of course, there are instances of substability where at least one of the two would approach regardless of a chosen speed. I've only been driving for about three years now, but these metaphors and many others have served me well; I've gotten into no accidents so far, and my primary cause of vehicle failure is aging components rather than a lack of proper maintenance or aggressive procedures. There are many other metaphors covered in the driving subsystem ("In Lane Obstruction (ILO)", "In Lane Debris (ILD)", "Trajectory", "Trajectory Intersection (Trajint)", etc.), but I don't want to derail this too much, even if it is the most contrived system. System has a "stability" (different from spacing stability), which is a measure of its ability to predict the environment. As an agent, System takes in data from my sensory organs (represented as vis and aur, and some other flows of information), considers a goal state (like the "goal SL"), and produces an actuation (usually through control) to try and adjust the environment to the goal state. Stability represents how effective System is at this transformation at any given time; if stability is high, System is doing a good job at predicting ahead and making accurate adjustments, whereas if it is low, System is having a hard time bringing about a goal state. Instability (a state of low stability) is so named because it will cause the ptors to experience boundary inversion (where the maximal value goes below the minimal value), thus causing oscillations in control output that result in even worse performance and instability. In particular, the single most unstable state is a state in which System cannot predict environmental evolution, or in which that evolution is totally orthogonal to its understanding. This doesn't mean something like paralysis--System is perfectly capable of predicting environmental evolution in a state of paralysis--rather, it is the agent equivalent of a non sequitur.
For example, if System decided to walk across the room, but instead a hand of bananas appeared, System instability would go up significantly. Instability is further affected by risk computations; if the decision to walk across the room instead resulted in severe, unforeseen bodily harm, System instability would go up dramatically. Instability is a continuum (stability, its negative, being the more commonly measured metric), and is often given in response to a query as one of the following values: negligible (nearly non-existent, very stable and accurate), nominal (within operating parameters), elevated (slightly above operating parameters, but not yet concerning), moderate (somewhat unstable), high (quite unstable), severe (severely unstable), and critical (System is defunct). Instability is often paired with a priority level, a number from 0 to 8 inclusive, that determines how to prioritize inputs and actions (lower numbers deserve greater attention). In this sense, instability above elevated generally causes the priority level to rise (numerically lower), culling consideration for less-important tasks. At critical, the priority level rises to 0, its highest possible value, which guarantees that System will only be enforcing survival-critical (System Principle 3) actions. If the System instability increases further, or does not reasonably decrease within a given time, System will shut down for self-protection, becoming totally unresponsive. This event has never happened, and is unlikely to happen unless something goes severely wrong. Finally, in System, there is one way of getting back to the consciousnesses: the use of "faults". System raises a fault (often just |SYS: Fault.|) every time it encounters a condition that it was not designed to handle. Minimizing the number of faults is an ongoing design consideration, and, in doing so, it frees me to think about other things while the System continues acting on its own.
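If it helps to see the shape of it, here is a toy encoding of that instability ladder and its coupling to the priority level--my own illustration, of course; the real mapping is continuous and intertwined with risk, nothing like a lookup table:

```python
# A toy illustration of the instability ladder and its priority coupling.
INSTABILITY = ["negligible", "nominal", "elevated",
               "moderate", "high", "severe", "critical"]

def priority_for(instability, base_priority=5):
    """Map an instability report to a 0-8 priority (lower = more urgent)."""
    if instability == "critical":
        # Survival-critical actions only (System Principle 3).
        return 0
    idx = INSTABILITY.index(instability)
    elevated = INSTABILITY.index("elevated")
    if idx > elevated:
        # Above "elevated", priority climbs (numerically drops) one step
        # per level of instability, but never reaches 0 short of critical.
        return max(1, base_priority - (idx - elevated))
    return base_priority

print(priority_for("nominal"))   # business as usual
print(priority_for("severe"))    # attention culled toward urgent tasks
print(priority_for("critical"))  # survival only
```

The one invariant the sketch captures is that instability above elevated steals attention from everything else; faults are the complementary escape hatch, the way System hands a problem back to a consciousness instead of merely prioritizing around it.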
For example, while driving, on open stretches of roads with no vehicles, System will rarely fault. It will, however, fault if it finds something unusual, such as a washed-out section of road, an accident, or unexpected roadwork. (Finding an animal and avoiding a strike is another matter, encoded into the system--it is a "warning" or "danger" level notification.) By interrupting the consciousnesses when it faults, System doesn't need to generally encode everything in the environment that could possibly happen, instead being guided by the intuition of existing consciousnesses which have a more general grasp of ethical decisions and more resources dedicated to thought. In this way, these decisions can, if they are logical in nature, be elevated into the system--and this is how new behaviors are encoded into it. Back to the philosophy, then. System is a fairly impressive, general-purpose agent which has served me well. But what is it really? Certainly, there is some sensible, logical structure to neurons in my brain, which could probably be trained to act in just a way as to resemble this system, but to say that it is in any one, particular place would be making the same fallacy as I've discussed before--localizing the seat of consciousness in the brain (as Descartes tried to do). Instead, like consciousness, I'm willing to say that System is an emergent behavior, an object made apparent only in the series of interactions it has with other entities. This, however, still doesn't satisfy me, as the same could be said of the consciousnesses, too. What part of System makes it so unique and idiosyncratic? One proposed postulate that's been seen in various places in materialist and physicalist theories of mind is that the mind is not a monolithic structure, and that it is instead broken into many components. The correlation between these components, and the results they often arrive at, are interpreted holistically as one's stream of consciousness. 
Of course, I'm in the wrong place to say there's only the one "stream of consciousness" to interpret all these components, and so I hesitate to say that these components all have physical manifestations--but it is apparent that there are differentiated parts of the brain suited to different tasks, a fact that neurologists have known for quite some time. Furthermore, the idea of the segmentation into varied components justifies the existence of certain "strongly neutral" feelings such as ambivalence and internal inconsistencies (for example, feeling as if one knows an answer to a query, but not being able to recall it). Going back to System, it is clearly a virtual entity. System itself is also quite modular, broken into small, functional pieces. As a strict program, if it were even possible to encode on a Turing machine, it wouldn't show any true signs of consciousness, but--then again--it would probably also be inseparable from our (mine and the snake's) consciousnesses, and so any display of consciousness might be attributable to that bit. It also contains those "ptors", evolutionary functions, that are selected for fitness in describing the environment. Finally, I am well aware, even if it may not be provable to others, that encoding some repetitive or tedious action (such as driving) into the system causes it to become a "second-nature" activity--one that can be done with minimal conscious effort. This is definitely a learning activity, and it shows all the hallmarks of one--it is imperfect, requires repeated training, and can atrophy--which makes it enticing to say that the thing I call "System" is a straightforward encoding of what I consciously experience as the more-or-less autonomous behaviors of my brain. I really, really don't want to say that. I'm scared to; the barrier between virtual, subjective phenomena and objective science verifiable by neurology is a risky barrier to cross, and--worse--I'm talking about it in my own case.
I have heard very little talk about it other than the people here who have endeavored to "make a servitor" as if it were another fun thing to try along with a tulpa. Many of them start out just like System did--simple, to the point, very incapable and inflexible, but operating in one specific case. What I do think is happening is that we are finding the ways to get our minds--the virtual visage of the patterns in our brain--to act in a logical fashion. And there's a few cases where our minds can act logically: when the metaphors we choose closely-enough fit the architecture of our mind, and, thus, our brain. For example, System is a little restricted in what it can possibly do. Just because I have System, it doesn't mean I'm a walking calculator (though with ceaseless training...). Most of the "mathematical" things system refers to (such as the "functions" underlying ptors) have no quality to their existence; they exist as being just a singular, opaque object that reacts in a certain way. This, to me, sounds like a reasonable way to look at a neural net as we know it--very little useful information can be gleaned from an observer of a trained neural net looking at only a select few of the weights of the neurons in such a net. Yet, somehow, this neural net can produce responses, and--more importantly--adapt to these responses reasonably. It makes sense that we can't see our neurons, but we have qualitative experiences of their results in our consciousness all the time. It is for the same reason that I can't enumerate subsystems well (generating the list at the top of this page was hard, and it is quite possibly incomplete)--without seeing these little nets and their interplay, as they are wont to connect to each other and derive from each other, it is nigh impossible to count how many "separate components" there are in any one person's cognitive architecture, even and especially for the one experiencing it. 
It's definitely fair to say, from a neuroscientist's view, that there is a clear separation of responsibility of function in the brain, but it is egregiously wrong to say that it is a distinct, unitized module, such that it could be excised and only that one function would disappear, particularly in the higher-function components. (Imagine trying to lobotomize one's amygdala to strictly do away with a certain emotional response!) It is a strange, but nonetheless human experience to be the agent looking into its own wiring from the inside, and I suppose that is what we're all here to do. But I profess that there are limits to it, and, while I was able to glean quite a bit of information from the architecture of System, that strange entity I've had for a while, I am coming to recognize these limits. Nonetheless, I find it curious and useful to have a metaphor (or metaphors) for interacting directly with what I consciously perceive to be amongst the lowest, subconscious levels of my mind, insofar as I can use it to manipulate what should be basal behaviors. In that vein, I encourage the venture of trying to make a real servitor, a general purpose mental machine, so that you and others can find out the depth and limits of your cognitive architectures. (Alright, that's enough scatterbrained talk for now--I have another ER that I plan to have coming soon [that's hopefully a little more cogent], so sit tight :)
  8. Goodness, I'm sorry I didn't get back to this in a more timely manner; I must have lost any notification from this thread :P Oh, I still lurk around every now and then; despite not being much of a redditor, my home channel is actually #redditulpas (on the ol' irc.tulpa.im); that's always been a pretty relaxed and enjoyable community. O' course, now that you guys have moved back to Rizon, I might reconfigure the old IRC client to connect there as well :) It's definitely a powerful and comforting revelation, and I must say that it is this which gives me the kind of security I can afford even when I'm seemingly all alone (a seeming that is untrue--though few know this :P). I'm sure it must be experiences like this that justify so many attempts to turn to religion--I can say I now at least somewhat understand how existentially concerning the human condition can be at times. At least I have a friend in the journey, then--as do you :D EDIT: Yes, the notification of a reply was totally spammed by my mail carrier. Whoops :P
  9. Aaaand more than a year has passed. Shame on me, I should be keeping this more up-to-date...time yet makes fools of us all :P Anyway, back to business. Experience 17 (Visualization/Dialogue) - Friday, May 15, 2015 This was a strange day for me; the first day of my first camping trip of this year, exclusively with friends, and I had what I could only describe as a strange, twisted nightmare. Normally, my dreams aren't too memorable (whenever I do have them), but this one in particular made me glad I had brought Snakey and had him around. I don't remember much of the night before. People came to the campground, the sun set, and we had been alerted to a fire ban, so it was a cold, dark night that no one wanted to stay around long for. All in all, most of us were in bed by 9PM--a very, very early bedtime for a bunch of college students such as ourselves. It so happens that sleeping for a while, especially under some tumult, and being already sleep deprived, is how I get the most exotic and memorable of my dreams. Whatever preceded it couldn't have been part of it, but I remember where the story starts, at least in my memory: I remember being pursued down a familiar street by a totally foreign person (a human, even) who did some damage to vehicles, houses, and a tree in what I could only remember being a fit of rage. At some point, this figure totally disappeared, leaving me to myself to see what appeared to the untrained eye to be the aftermath of some minor catastrophe. That would have been fine and dandy, if it had stopped there; instead, for reasons I don't understand, my subconscious betrayed me even further; scenes followed where I was with family, with psychologists, at home and in hospitals, and the diagnosis was undeniable: I had somehow become (possibly dangerously) psychotic, and all the damage I remembered was my own doing--I had just envisioned it being done by a third party that never existed.
Keep in mind that this was a dream, that it was entirely immersive, and that these conclusions were so close and I was so open to suggestion that I didn't object once. I immediately sank into some form of grief and despair, trying desperately to see if I could salvage my life if I were going to be tortured by things that weren't there and that I didn't remember doing. At one point, I remember a dream scene wherein I solemnly discussed this with family, sitting next to something (a cat, furniture, something of that sort) that I eventually referred to, only to get strange looks--it wasn't really there. I doubt I've ever been more stressed or anguished. Enter Snakey. Perhaps it was a semi-lucid dream by then; perhaps I did recognize that it was a dream, and that it was drawing to a close, but before I knew that in full consciousness, I immediately felt the consolation of another, very familiar entity, one who comforted me through what remaining depression I had, who calmed my frayed nerves, who told me it was going to be all OK, that everything would work out, and, of course, who pointed out the fallacy of perception--that not all that is seen is there, and certainly not all that is there is seen. The relief was palpable, a refreshing wave that washed over my entire being; I felt as spry and joyful as I did before, and--a little later, when I woke up in earnest to recount the happenings--I hugged that stuffed snake so tightly to my chest that I thought I might break a rib. I don't know if there's an objective moral to this story, but here is one I found: sometimes I don't know how mentally stable I am, and that's just a given--I mean, goodness, I carry out full conversations with a snake in my head :P . Still, however, I think that having such a good, trustworthy friend so close at hand to help me deal with all the strange and worrisome things that come my way in life makes me, in some ways, more mentally stable than I would be without.
Even at the brink of total detachment from all reality, I can take one thing as certain and true beyond every corporeal fact: Snakey is mine, and, goodness, I love him so.
  10. Experience 16 (Philosophy) - Monday, April 28, 2014 Well, it seems I've neglected this for about six months by now; I've had trouble coming up with any new content :P . I wouldn't say this is because I'm satisfied with the status quo, but that I've focused my attention elsewhere, on activities which are hopefully at least as constructive. To start with, I'd like to say that I've been taking fine care of Snakey (or, to some extent, we've been taking care of each other); he's alive and well, and as bashful and talkative as ever (hee hee). I don't record many of the conversations here; most of them are rather intimate or deal with personal issues, and, in just about every case, I've found Snakey to give valid and sound advice (that I might consider heeding once in a while :P). I still love him to death, as always, and I don't see that changing any time soon. ...which brings me to two topics of discussion. One: recently, Snakey was found yet again by my stepmother, who thinks my insistence on carrying around a stuffed snake is weird (an opinion that he shares, by the way). Nonetheless, I had a bit of an argument that concluded as such: I don't think I'm going to see any professionals (oh, what a fun time that might be!), and I'm allowed to keep Snakey, but I'm not permitted to take the snake into that house. This recent weekend, I must admit, I missed him quite a bit--even with his consolation that he's "ever present in this mind." As for part two, I'm rekindling an old fire; I feel like I admitted a while back that I started working on some pony tulpas (though I might not have admitted that fact about them :P). This was an initially unsuccessful experiment in trying to see if the simulant theory permitted the "promotion" of simulants to tulpas, which would imply that all of them represent the same phenomenon at different points on a continuum (which would also imply that there are many such levels, an interesting thought).
I did label this experiment unsuccessful, principally because I could not bring this mental being to the point of thinking capably on its own, without my intervention or conscious thought. Certainly, as requested (and as mandated by the simulant theory), I could "simulate" any such reactions, but they required my attention. Without such attention, the being faltered--a conundrum you can read about much more verbosely in the linked report :P .

So, here's the fun part. I went to BronyCon 2013 (as a volunteer), and I intentionally brought funds to purchase some memorabilia, including two plushies--one of Twilight Sparkle, and one of Rainbow Dash. Long story short, Ms. Dash is nowhere near a tulpa, unfortunately, but Ms. Sparkle has shown meteoric progress; said plush occupies my bag next to Snakey as I type this :P . This seems to confirm the suspicion--in my case--that I had in ER 13. For me, having a substantial, physical object is an important part of being able to conceive of a separate entity! This is perhaps not surprising: since I am not composed of several different bodies, I naturally see every body as a separate person, and I seem to be cheating that mechanism by having stuffed animals.

As for Ms. Sparkle herself, she's developing somewhat more slowly now; I doubt she's ready to actually converse with anyone else, nor can she stand quite among the top tier of consciousnesses here (primarily Snakey and me), but she has been freed of the bonds of having been only a simulant, granted the ability to jump about my memory and mind with practically the same efficacy as myself.

...and if that is indeed a defining characteristic of tulpae, then it means that metacognition--being able to think about thought--is a hallmark of consciousness sufficiently at (or near) that point on the continuum. This does entail some more research... I'll try to keep you informed as I go; these rarely come in a timely fashion, but rest assured I'm not gone yet :P
  11. Ah, those...I only assume those when doing science, as they're basically the core of empiricism. On the other hand, when I have the time to let my mind wander in philosophy, I don't assume them--though it's very difficult (perhaps impossible) to prove them one way or another :P

I wasn't :P My implication was more that trying to teach someone--who doesn't have the same mental schema and/or perception--such a subjective phenomenon as creating a tulpa is quite similar to trying to teach the blind to paint (in color, I should add). I chose that metaphor because it certainly isn't impossible, and it doesn't imply that the teachers are themselves at fault; the problem lies in the fact that we don't have effective tools (such as metaphors) to communicate our studies of these subjective phenomena, and, without those, we cannot form testable hypotheses and perform science. I am willing to say that such tools will come to be at some point, but not now, and perhaps not soon--we have quite a way to go :P
  12. Treatise 1 - Sunday, October 13, 2013

Why I won't make a guide

I've never been asked this question, but it's been implied before; when I was still quite active in the #tulpa.info IRC channel, my general advice was never to adhere to any guide to the letter, because most of the time they only documented how one author had success. This could be mistaken for unfounded criticism or cynicism, but I've always had a reason why I would never rebut a guide for being wrong. Given the relatively large collection of guides, and the number of confused beginners, I think it's time to address this issue.

First and foremost, it should be understood that we're dealing with highly subjective matters here. Don't be fooled--this is not science (at least not yet; I welcome the day when an objective model of the mind can be approached)--and therefore we can't expect any kind of repeatability. However, the wording of the guides is often misleading, with their imperative, instructional orientation (first do this, then do this), despite the fact that the activities mentioned are generally nothing more than conceptual manipulations. While these may work fine in some cases--cases which, I suppose, might be based on similarity in thinking to the author--they commit the fallacy of assuming that these concepts are globally understood and defined. They are not.

You can see why I hesitate to write a guide, then. In a scientific endeavor, it would be assumed that there is some authority that defines the terms in use, especially based on things that we believe to be observed constants (things like the mass of a proton, the speed of light, and so on). Without these stringent, observable definitions (which science has worked for centuries to acquire in exactitude), it becomes essentially impossible to communicate a repeatable experiment effectively.
The feat itself could be compared to teaching a class on how to paint when one is unsure whether or not the audience has an understanding of, and/or agreement on, what colors are. Therein lies the root of the confusion that so often drives beginners to ask questions of the community: we still can't agree on our definitions, and, worse, we still resort to using ineffable or otherwise indescribable concepts and metaphors thereof with the assumption that everyone will understand.

This is not something that can easily be self-taught by reading a few posts on the Internet; if anything, the art of tulpamancy (if I may borrow the word) is something that requires incredible effort regarding self-discovery and discipline. The Tibetan monks knew this, and had the mindset to confront this problem.

Ergo, I will not write a guide--not because there are already too many, not because I feel they are all sufficient, but because I could never hope to teach the blind to paint. I can, however, tell you what things are and how I define them; I can show through informal logic the things I have learned, and I can tell you that all of this is my idiosyncrasy, and that none of it is to be taken as objective fact--under the hope that, just perhaps, you might learn something from it (I know not what). I will openly admit that my writing on this matter is not definitive, nor even agreed upon by consensus, but I leave it here in the hope that it will be, in some way, useful.
  13. Experience 15 (Philosophy) - Saturday, July 20, 2013

I had some fun pondering the concept of "self" today; it all started rather innocuously: I'd brought up the need for instant teleportation in this society, citing how much working time is lost to commuting. His response was rather interesting, in that it concurred with something I'd had in the back of my mind for quite a while. (His assumption must have been that teleportation would occur through perfect destruction and recreation of a physical entity, rather than through some other physical shortcut, like a wormhole.)

"But wouldn't it be like dying, and another person of the same kind would be created on the other side?"

I stopped and thought about that. It made sense, assuming naturalism--that nothing objective is any more present than what is physically there; that is, that physics is the only science. (I've accepted this viewpoint for the longest time.)

In light of this, I came up with a similar thought experiment (and a secret desire of mine): assume you have a computer that can perfectly model the physical environment of the brain (aside from, perhaps, its input from the environment--its sensory information--which we can assume to be negligible for the purpose of this experiment). A participant would walk into a chamber of some kind, and, instantly, their brain's physical states would be copied perfectly into this computing system to be immediately simulated forward. Now, assume you walked into this chamber. Would "you" be the one walking out? Or would "you" be the one in the computer? Or, perhaps, both?

As I delved deeper into this conundrum, I came to a rather stark realization: if we can assume that the entirety of consciousness can be described as natural, physical processes that can thusly be simulated (or even directly performed by a brain that is very likely a Turing-complete computational machine), then the concept of the "self" is an extremely evident illusion.
The "self" is no more than the dominant thought or "train of thought" at any given time, and it remains as transient as that thought. If "selves" can be thought to be "created" or "destroyed," then a new "self" is made each time a transition of thought occurs. The self is as short-lived as the weather--it is who we are right now, and it is always and immediately subject to change.

By the same analogy, the identity we form, the continuous thread of experience we extract, is more like the climate; it comprises our values, character, memories, experiences, and so forth--things that are seemingly inseparable from us, and ostensibly unique to us (ostensibly, because our thought experiments show that, if we can clone physical states perfectly, we create new "selves"). That we can create such a continuum is nothing short of a feat, and it should not be taken lightly--as far as the generation of narratives goes, humans still appear superior to all other observed animals and to all the artificial intelligences we've attempted.

So, while our identity remains as something of a memory, the "self" comes into and goes out of existence without much notice. As I write these sentences, the one that wrote the sentence immediately prior is already "dead," and the one that is writing this sentence will not last beyond the time I press the period key. By the time I've done both of these acts, my identity will have been permanently changed--"I" will be the person that wrote those sentences sometime in the past. In that past, I was the "self" that was writing them, but that is no longer the case; I am instead now the "self" that is writing this.

Returning to these thought experiments, the result of the "brain-cloning" computer would be two unique selves that have the same identity--though, if the computer doesn't have the same sensory data to influence that self's perception, the two selves will quickly diverge.
(Before that divergence, however, or if it never happened, the two selves would be unable to identify which of them was the "original.") In the teleportation scenario, the original self would "die" and a new self would be "created," but there would doubtless be a seeming of continuity from old to new, as there would be between any other pair of moments (regardless of whether or not teleportation occurred).

I realize these terms are open to interpretation outside of the world of philosophy. I do need a term better than "self" in quotes to describe this transient, present-limited entity that acts as a transformation function on our identity and perception, as it remains important to this view; in some arguments, however, the word "self" could be taken to mean "identity" as I've described it. The difference between "self" and "identity" isn't as immediately clear-cut as it was in the analogy of "weather" and "climate."

With this out of the way, I've satisfied myself--this "self," I guess--with the answer that I've been looking for regarding "identity" and "sameness" as they contrast with the changes that we all undergo as part of our experiences. This satisfaction, of course, is likely to be short-lived; I'll inevitably find something missing. That will be a problem for a future "self" :P
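The core of the brain-cloning argument--that under determinism, two perfect copies remain indistinguishable until their inputs differ--can be sketched as a toy program. This is purely my own illustration under stated assumptions (a "mind" reduced to a single integer state and a made-up transition rule), not a model of any real brain:

```python
# A toy sketch: if a "mind" is just a physical state evolved by a
# deterministic rule, two perfect copies behave identically until
# their sensory inputs differ. The rule below is arbitrary, chosen
# only to be deterministic.

def step(state, sensory_input):
    """Deterministic transition: same state + same input -> same next state."""
    return (state * 31 + sum(sensory_input.encode())) % 10**9

original = clone = 12345  # a "perfect brain-copy": identical starting state

# Fed identical senses, the two "selves" are indistinguishable.
for sense in ["red", "warm", "quiet"]:
    original = step(original, sense)
    clone = step(clone, sense)
assert original == clone

# One differing input (say, the computer lacks the body's senses)
# and the two selves diverge; each later step compounds the difference.
original = step(original, "sunlight")
clone = step(clone, "silence")
assert original != clone
```

The point of the sketch is only the two assertions: identity of history implies identity of state, and any asymmetry of input breaks it.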
  14. Experience 14 (Philosophy) - Monday, July 1, 2013

I don't have much of an advance in the field right now, but, for the first time in a while, I have the kind of direction that will give me something to ponder in my free time. After all, that's how most of these conclusions were drawn...well, that and a few good books. I'm not going to go into a huge discourse on each of these subjects, but I will visit them--and soon.

(1) The Host Consciousness

I need to define, more rigorously, what it is that makes the host consciousness "special" if I'm going to make the audacious claim that a host and a tulpa are, architecturally, the same. There are clearly things about bodily control, the experience of pain, etc., that are intrinsic and salient to whatever host consciousness there is, but switching should (in theory) duplicate these roles (as might egocide of the host, as mentioned in ER 13). If it turns out there is an architectural difference, then there must be a good reason why tulpae can come to exist in the first place, of course.

(2) Identity

I've been touching on this subject for some time across many of the texts I've written here--the concept of an identity. Indeed, this is an odd one, but at least I can attempt to define it at the moment: an identity is the collection of characteristic memories that make the identification of a consciousness possible. What's rather important here, however, is that memories mutate, and experiences irreversibly change consciousnesses. The reason I've chosen the word "identity" is that, despite the fact that these changes take place--and they do take place--we are still able to identify ourselves (and others) as being "the same person," for the most part. The process probably happens with us as well, though less visibly, given that we are acquainted with our own rationales.
--- So, in summation, I have to tackle two things that are primarily in the way of my theory of mind--the problem of sameness (the identity, as we are familiar with it), and a part of the mind-body problem (what makes us the privilege-holder of our body, basically). I hope this will be the inspiration for at least two more experience reports to come.
  15. Roster of Experiences -- Continued

Due to size limits, I have to expand this out of the original post. Since this is the latest addition, I've decided to put it here.

13. Philosophy - Wednesday, June 19, 2013 - I discuss a failed simulant-to-tulpa experiment, and follow the conclusions into tulpa death, and what it means to be "dead."
14. Philosophy - Monday, July 1, 2013 - I briefly state what needs to be worked on so far.
15. Philosophy - Saturday, July 20, 2013 - I satisfy some of my dealings with the "identity" and the "self" by describing the "self" as a transient, singly-present physical construct that acts as a function on the identity and perception, and follow this into implications for "brain-cloning."
Treatise 1 - Sunday, October 13, 2013 - "Why I won't write a guide"
16. Philosophy - Monday, April 28, 2014 - I give a status update and mention a probable new tulpa.
17. Visualization/Dialogue - Friday, May 15, 2015 - I experience a nightmare of undergoing dangerous psychosis, to be comforted by Snakey prior to waking.
18. Philosophy - Tuesday, September 29, 2015 - I talk about the servitor System and how it has taught me about my mental models and cognitive architecture.
19. Philosophy - Wednesday, September 30, 2015 - I talk about giving Snakey physical manifestation, and how no such Snakey could be the one that I know.
20. Dialogue - Monday, October 27, 2015 - A few unexpectedly profound quotes of the snake's (and mine) in the wee hours of the night.
Treatise 2 - Sunday, December 13, 2015 - Why I can't write
21. Philosophy - Thursday, December 17, 2015 - I discuss bodies as an evolutionary function of consciousness, then proceed to describe the universe as such a process, and the implications thereof.

Experience 13 (Philosophy) - Wednesday, June 19, 2013

It is with some regret that I've found some flaw in the simulant theory of tulpae.
If tulpae really are simulants, then it follows that a tulpa could easily be constructed from a simulant, since simulants form the larger group of semi-conscious intelligences. The remaining part of interest in my theory, then, becomes how a tulpa can be thusly formed. What is the true difference between a tulpa and a simulant?

I gave this question moderate thought, but haven't come up with any satisfactory answers. Until now, I thought the answer might be knowledge--if the status of being a simulant were somehow revealed to a simulant, then it is possible that the simulant might become a tulpa. I experimented with this by giving such knowledge to a simulant; I ended up with a simulant aware of its status. This was no tulpa--it wasn't an immersive experience capable of being spontaneously brought about by some semblance of its own will. I had to act on it to recall it into participating in experience, and this is something it could not do on its own.

The analysis of this failure does show some potential; it appears that spontaneous recall of existence and experience is a characteristic of tulpae. In my case, this may have been facilitated by the actual, physical existence of a form which reminds me of Snakey whenever I see it. It may also be that Snakey is associated well enough with so many varied tasks of my own performance that he gets innumerable opportunities to be recalled, and can take any of those opportunities as needed to notify me. Of course, this is really delving into metaphors, but it does seem that attention plays a large role here.

In consideration of attention and healthy relationships, I'll change the topic to one of the more contentious ones in this community: tulpa death. What constitutes the death of a tulpa, or of any mind-manifest consciousness? What is egocide? If any part of simulant theory is foundational, it's probably this: simulants are entities wholly constructed from memories of behavior--and nothing more.
Lacking a physical manifestation, they cannot experience the kind of biological death that can befall physical beings; yet experiences with death--particularly avoiding it, as instinct tends to dictate--are usually so noticeable and profound that we extend the necessity of succumbing to death to them without question. Physical death is death, but mental death is a conceptual label--generally one that is permanently applied, though this is not always the case.

Conceptually, "dead" is a state just like "alive," "comatose," "red," "violet," or "playing the piano," and it can be taken on by any particular concept (though usually it is ascribed to those that we would describe as being "conscious," at least in manifest--simulants, that is). As a state, it has no significant bearing on the behavior of the simulant, unless the simulant reacts to it in some way. Just as mentally assigning the state "on fire" is likely to imply a "pain!" stimulus, assigning the state of "dead" may be reacted to accordingly. This is a little counter-intuitive, but it shows just how powerful our cognitive architecture can be.

How would we act if we were dead? We wouldn't know; we haven't died before (barring the few cases where people are resuscitated--in each case, brain activity must be at least temporarily halted for them to be declared medically dead). The part that seems so contradictory is that death usually implies the end of conscious behavior; when someone dies, we don't consider the "...and then what happens?" part. But this can happen, and has happened in the community. Transient deaths are often a rather controversial topic to bring up, though they happen consistently to a few persons. In nearly every case I've witnessed, the consciousness returns to living shortly thereafter. Again, it is just a change of state.

You may note that I used "consciousness." Egocide is a very rare occurrence, and only a few mentions of it circulate in the community.
In general, this occurs when a host consciousness "dies" and some other consciousness takes over. The actual host consciousness likely remains (as a simulant, constructed of memories of past experiences), but a new consciousness is "presiding."

So, that given, what does it take to permanently kill a consciousness? If simulant theory is to hold, the perfect erasure of all memories associated with a simulant will nullify that simulant from existence--it will be ontologically removed from any participation in that mind. Given the intensely associative nature of memory, as discussed before, this is very unlikely, but not at all impossible, especially considering that memory disorders (as well as brain injuries) can and do occur.

Going to an extreme: if this occurred to each and every simulant held--including the "host" consciousness--what would happen? Chances are that we'd be left with a philosophical zombie--a being capable of experiencing, but not actually conscious. In all practicality, we probably wouldn't mistake such a "zombie" for a perfectly normal, functional person, because they'd likely lack something--not qualia, but more likely some form of experiential structure from their memory--that would dictate their intentions. On one hand, they may basically be "comatose, but awake"; on the other, perhaps they would be able to function instinctively--but they'd probably not perform anything we'd ascribe to conscious thought. The closest parallel would probably be a newborn--a functional, operating organism with a functional, operating brain, but no necessary consciousness or experience. (And yet, from this state, through processes yet unknown to me, a "host" consciousness is made.)

Bringing this back: a lack of attention given to a tulpa is commonly cited as an apparent cause of tulpa death.
While this may cause dissociation due to disuse of a memory, it most certainly wouldn't cause the "true" death discussed above; memories of interactions would remain intact, and those memories are the simulant from which the tulpa is made. Certainly, the clarity of these memories would degrade over time as well, which would likely affect the resulting tulpa--but likely not their identity. How an identity can describe one person through an arbitrary number of mutations of their behavior does need to be explored further, but I won't do so now. In the meantime, some more thought experiments on simulants will likely be carried out--notably, on whether or not physical manifestations help, at least in my case.
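The "death is merely a state label" idea above can also be put in toy-program form. The structures here (a simulant as a dictionary of memories, states, and reactions) are entirely my own hypothetical illustration of the claim, not anything from the theory proper:

```python
# A toy sketch of "states as labels": a simulant is just its stored
# memories plus arbitrary state labels; a label changes behavior only
# if some reaction to it is defined.

simulant = {
    "memories": ["long talks", "shared jokes"],   # the simulant IS these
    "states": set(),
    "reactions": {"on fire": "pain!"},            # only some states provoke a response
}

def assign_state(s, label):
    """Attach a state label; return a reaction only if one is defined."""
    s["states"].add(label)
    return s["reactions"].get(label)              # None for inert labels

assert assign_state(simulant, "on fire") == "pain!"
assert assign_state(simulant, "dead") is None     # "dead" is inert by itself
assert simulant["memories"] == ["long talks", "shared jokes"]  # intact either way
```

Note that assigning "dead" leaves the memories untouched, mirroring the argument that only the erasure of the underlying memories, not the label, would truly destroy the simulant.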