About Virgil

  • Location
    The Czech Republic
  • Bio
  1. I'm going to use the email quoting style again. I hope it's not too inconvenient for the reader; it is certainly convenient for me.

> let's define sentience

Let's not. Sentience is a highly elusive thing. We could discuss it for weeks and not get anywhere. Let's do something productive instead.

> A computer, no matter how complex, is not sentient, because it can only use a linear code of algorithms to process information.

If you can simulate a brain on a computer, you can have a thinking computer. It's theoretically possible, because classical physical systems can be emulated (or simulated, if you'd like) on a Turing machine, and the computer is a Turing machine. It may be very difficult in practice, though. I imagine running a sufficiently complex model of the human brain could be computationally very expensive and thus infeasible. And what if the wacky theory that the brain's operation depends on quantum phenomena is true? Quantum systems can also be simulated on a classical Turing machine, but the classical cost grows exponentially with the size of a quantum system's space of states, which is ridiculously large even for a system consisting of a couple of atoms. Imagine the whole brain… Well, let's see what quantum computation will bring us in the future. This was just an interesting but quite unimportant remark.

> A brain, however, is quite different. The brain processes information not by a linear sequence, but by a mass of individually-working neurons.

True.

> There is no pattern it has to follow, nor is it handicapped by any logical rules.

Aaaand, this is too vague. Patterns, logical rules, sentience? It's sort of true that there are no apparent patterns in the state of a neural network, but there is some method to its structure, if you look at the brain specifically. Hm, I can't say I understand that sentence.

> This is what makes the brain sentient.

Umm——

> But that's not really true. Let's go back to our visual cortex example.
The wonderful Wikipedia has beaten me to it:

> This is clearly a linear sequence, and a pattern.

Not sure again what was meant by linear, so I'll just say that cerebral pathways are full of feedback loops, and there are seemingly useless connections in them, mostly just single axons, connecting supposedly unrelated regions.

> Everything is governed by logic, also.

American conservatism sure isn't. Joking aside, what logic? Back to the brain. Couguhl is right; neural networks definitely don't employ logic. Individual neurons do not perform boolean operations, don't act as logic gates, and the brain definitely doesn't encode or transmit information from one part to another as signals consisting of sequences of binary values. I assume here I interpreted logic correctly. Any other interpretation in the context of signal processing would be foolish. Logic, logic, what does it mean?

> a fourth law for superposition.

I have no idea whatsoever what this means. I'd normally look it up on the web, but I'm currently in the middle of the woods, connected to the internet via a crappy EDGE connection, and it takes a dozen seconds to load pages. I'd go mental if I had to do that, so, please, explain to me what it is.

> Not really true. I mean, algorithms can modify themselves, comparative to neural rewiring.

You missed the point (I think — I'm not even sure what it was supposed to be), which was the fundamental difference between the typical computer and one whose function is based on neural network processing, and how that is supposedly the reason why computers can't be sentient, an argument that makes some sense to me but is based on very shaky foundations. Anyhow, let's discuss the differences, because I think it's an interesting topic. Also, I hope to dissuade people from letting the preposterous brain-computer analogy cloud their reasoning when dealing with the mind and other brain functions.
The computer executes one instruction at a time per core (sequentially), is fully deterministic, uses synchronous logic, and encodes information explicitly in definite physical locations. There are more differences. Biological NNs are immensely asynchronous, and many, if not all, of their elemental units, or neurons, are simultaneously engaged in a (here unspecified) process at any given time, while only a subset of logic gates is used at a time in synchronous logic circuits. Biological NNs are also plagued by stochastic processes that arise from their own activity as well as internal noise. To function in the semi-deterministic manner required for coherence, a network must therefore be resistant to some level of noise and various other disruptions. This resistance, it turns out, is an intrinsic quality of many types of neural networks. What is more, individual neurons are somewhat noisy; they fail to fire from time to time despite being sufficiently depolarised, or fire when they're not supposed to. This is true only for smaller neurons, though; large cells tend to be reliable. It should be clear that information processing must be distributed to some extent. The brain has other interesting characteristics regarding dealing with disruptions, but that's beyond the intended scope of my reply.

Back to computers: What happens if one gate in the CPU fails to output the correct value, if just a single transistor fails to switch? It depends on what code is being executed, but it will always have definite and apparent consequences, be it a system crash or an incorrect result of an arithmetic operation. Every unit must work flawlessly in order for the whole system to function as intended. In the brain, or in any neural network, even many mathematical models, you can knock out several neurons or more, depending on its size, and it can still perform its task, albeit not as well as when intact.
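The contrast drawn here (one failed transistor corrupts an explicit binary encoding, while a distributed representation merely degrades) can be illustrated with a toy sketch. To be clear, this is not a neuron model: the "network" below is just a redundant population of noisy units read out by averaging, the simplest possible stand-in for distributed coding, and all the numbers are arbitrary.

```python
import random

random.seed(0)

# A distributed, redundant code: 100 noisy units each carry a copy of
# the value 1.0; the readout averages whichever units survive.
SIGNAL = 1.0
units = [SIGNAL + random.gauss(0, 0.1) for _ in range(100)]

def readout(population):
    alive = [u for u in population if u is not None]
    return sum(alive) / len(alive)

intact = readout(units)

# "Lesion" 30 of the 100 units: the averaged estimate barely moves.
lesioned = list(units)
for i in random.sample(range(100), 30):
    lesioned[i] = None
degraded = readout(lesioned)

# An explicit binary encoding of a value: one flipped bit (one failed
# transistor) produces a grossly wrong result, not a degraded one.
exact = 1000
corrupted = exact ^ (1 << 9)  # flip bit 9: 1000 becomes 488
```

The point of the sketch is only the qualitative asymmetry: `degraded` stays close to `intact`, while `corrupted` bears no useful relation to `exact`.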
The forebrain is even more resistant — you can destroy sizable populations of its neurons, and if no critical pathway is affected, it won't result in any apparent deterioration in function or performance. Evolutionarily newer brain areas, usually frontal, seem more impervious than older parts, located mostly in dorsal and caudal regions. One possible reason is that the frontal lobes contain fewer critical pathways and more diffuse networks, or that impairment of their function is not readily obvious. Still, the frontal lobes are more likely to be injured and more susceptible to injury, possibly due to their location. Yeah, all of this is probably off-topic but, hopefully, interesting. You mentioned neural rewiring, which is another process of possible interest. But what does it mean and how is it carried out? I have an inkling; let's see if I'm on the right track.

> Also, computer systems can and have been built using simulated neurons.

I think he meant the digital computer, the kind one usually means when one says computer.

> An active cortico-thalamic complex is necessary for consciousness in humans. So it should be regarded as its centre?

You shouldn't be so hasty in your conclusions and laconic in explaining them. It's not so plain and clear, and there are other structures and functions necessary for consciousness, but you're essentially correct. Keep in mind, though, that the thalamus and cortex are tightly entwined, and that the thalamus, together with the signals from the looped circuits it projects to, only stimulates or induces consciousness; the process itself occurs in the cerebrum. Still, it can be considered its centre. It sort of neatly fits in my conceptual framework, so why not. But what is consciousness? One of my friends responded to this question when I once asked him, "It's when neural activity in the brain is coherent." I didn't quite get it then, but now I believe he was right.
Consider, however, how broad this definition is in contrast to the one relying on responsiveness tests, and that it depends on another rather vague idea, coherency. At least we know how some types of incoherent activity manifest. Since those thalamic nuclei responsible for monitoring the cerebrum's activity and adjusting the T-C-T circuits' firing patterns synchronise the neural activity and basically make it coherent, they can indeed be thought of as the heart of consciousness.

> The brain isn't some magic sandbox. It is generally thought that it has specific, distinct areas responsible for distinct functions.

Distinct is a poorly chosen word here. Areas in the cortex, as mapped to their respective functions, are very vaguely delineated, and they overlap. And specific? Nah. There are many parts that process various signals and integrate several functions, all in one solid physical structure. It might not be a magic sandbox, but it's a system of quite poorly understood black boxes. We should regard the brain as what it really is: a vast neural network comprising a hierarchical and intricately interconnected system of fuzzy modules of diverse population sizes. We might know what function many of these components are involved in, but we have no idea how they do whatever they do, which is presumably processing of information in general. Cerebral modularity loosely follows a pattern: evolutionarily older areas tend to be more distinctly separated into pathways, while recently evolved, or more advanced, areas seem more monolithic or consist of diffuse networks. The frontal regions and parts of the temporal lobes, fairly recent additions to the mammalian brain, still elude our understanding as to function.
The knowledge of the rest comes mostly from lesion and activity studies, and all those can tell us is that this part is required for or involved in that function; we don't really know what it does, how it works, or whether other structures are involved in said function. We need accurate neuronal models and a means to translate their state into meaningful information, and only when we have those can we move on to trying to understand the actual structures found in the brain. Some neuroscientists, even those of renown, claim we know about 20-50% of all there is to know about the brain. As if it were possible to quantify knowledge. I know enough to know that I— we know next to nothing. And if one thinks that in the next fifty years we'll have the technology to transfer the mind/consciousness/whatever, or a brain interface more advanced than electrodes detecting and effecting voltage changes at the nodes of convergent pathways, then one is deluded. This area of research receives little funding, and so far there is no considerable interest in it. Understanding the human brain, or the brain of any higher animal, for that matter, will keep eluding us for longer than people may think. Sorry for the sloppiness of my replies. I don't really want to put more effort into them.
  2. Aeris? ——Wait—no. Aeris's diction was horrible. Yours is fine. Damn, waffles, I really admire your patience. Or are you arguing with her for the hell of it? If so, I'd still find it admirable.
  3. Virgil

    Proof of Tulpae

    > I don't think that would work

    Neither do I; I don't expect the success rate to exceed 0.1 if it is at all possible. The point of this test is to bypass most of the motor processing pathways and learn how to directly trigger individual cells (they're of the Betz cell type; their dendrites are numerous and project quite far layer-wise) whose axons connect straight to the motor neurons. Otherwise, it would be quite useless, since the motor system is already trained to control the skeleto-muscular system by contracting whole muscles (well, that might not be true for posture and fine movements, but I'm not going to expound on that). It is not possession; it's an entirely different skill. I used to think it was impossible to trigger motor neurons individually, until I learned about the twitches tulpaforcing often causes. I have no idea about the actual manner of operation. There might even be a mechanism specifically developed to produce such twitches for whatever reason. It could be a remnant — a vestigial function or a by-product brought forth by imperfect evolution. If that were true, the Betz cells we're after could be out of conscious reach no matter what. Well, that's what I want to find out. The premises are mere assumptions, wild ones at that, but I can't think of anything better at the moment. The advantage of this test, of this potential ability, is that it's visible to others, not just the host, which is one of the objectives that visual or auditory hallucinations don't meet. Should a significant number of volunteers succeed, it'll considerably change my understanding of the cerebrum. I'd even venture to say the implications would be fascinating and shocking. The outcome, if positive, could turn neuroscientific research upside-down. Am I exaggerating? Oh, well. Imagine this: a person had suffered some damage to the visual cortex, which made him/her (cortically) blind but left with the ability of blindsight.
Now, if it were possible to manipulate the motor system on such a primitive level, it could also be possible for a tulpa to gain access to the information reaching the remaining visual cortex and interpret it, even though the signal couldn't reach the host's conscious mind. It could to some extent restore vision in some cortically blind patients. Sure, the amount of processing the visual cortex carries out can never be recovered, but at least some crude perception of the surroundings would be a huge improvement. This is just one possibility. Imagine how much a tulpa could affect the adaptability of the cerebrum. We already know a tulpa can affect the visual cortex in ways previously unthinkable. They can produce visual hallucinations, so there is a chance, albeit slim, that this is possible. Unfortunately, there's not going to be much interest, I guess, because who would want to learn the ability to voluntarily twitch? Oh, and I will make a thread once I have more energy if you think it worthwhile. :)
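As an aside on "should a significant number of volunteers succeed": one simple way to judge such an outcome would be a binomial tail probability against an assumed baseline rate of chance or accidental twitches. The 10% baseline and the volunteer counts below are purely hypothetical numbers for illustration; the post specifies none of them.

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    successes among n volunteers if each succeeds with baseline rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical: 30 volunteers, 10 of whom manage a verifiable localised
# twitch, against an assumed 10% rate of chance successes. A very small
# tail probability would suggest the result is not mere luck.
p_value = binom_tail(30, 10, 0.1)
```

With these made-up numbers the tail probability comes out well under 1%, i.e. the kind of result that would warrant the "fascinating and shocking" reaction described above.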
  4. I thought the short/long-term classification of memory was outdated. Even if it isn't, there's too much dispute about its existence and no convincing evidence to confirm it. I don't like to use it at all. The section concerning memory in infancy would've been much clearer had you used the other classification. Children learn language, among other skills, during that period, and they remember words and phrases after having heard them just once. Implicit memory, or some of its subtypes, is at its peak. So, it seems, is semantic memory, for knowledge is also rapidly acquired during these years (although I think much of it is replaced or buried in later years). These various types are undeniably considered long-term, and I think it was the episodic memory capability that was in question, a memory which is long-term too. You said without explanation that classifying memory by duration is useful in this instance, but try as I may, I fail to see how. I don't think this argument will be very productive, should you wish to pursue it, but meh. Personally, I'd like to drop this matter. It's off-topic, anyway.

> maybe you did misread

Ah, I thought as much. I was probably too quick to judge; I got the impression from your discussion style and from your use of long-term memory. What a blundering fool I am when I try to comprehend text in a hurry. Still, it's not good enough just to know the terms (again, just an impression; chances are you didn't just read a bunch of Wikipedia articles, but actually studied the subject); one must also be familiar with the research and aware of the limits and problems of the theories the terms come from. I admit I know next to nothing about how declarative memory operates — I consider the most interesting and suggestive fact to be that there's a considerable latency between a stimulus and committing the information it carried to declarative memory — but who does? However, I like to think I understand implicit memory comparatively well.
Anyway, if you have an intriguing theory of your own concerning memory, I'd be really eager to get to know it, but this might not be the best place to post it, even though the thread is pretty much dead.

> and I stated such in an earlier post.

I can't seem to find it, even though I reread your posts in this thread (including the parts containing “[Freud]” this time). Perhaps it is in another thread. Still, there were no contextual clues to indicate any reference to other text, so I assumed the reply itself was complete. Nevertheless, you used a qualifier conveying speculation, albeit not where it should've been, so there's nothing wrong there. I think we're through with this one as well; let's not bring it up again.

> Cognition; Perhaps you are using another definition of the term.

I likely am; it's explained in the quoted text. Animals certainly do possess a subset of the mental faculties that make up cognition, but it's very limited, both in scope and degree. Animals don't even come close to the information-processing powers of the human mind, so I deem cognition to be (almost) exclusively a human ability. I realise this rigid way of looking at it might not be very good, and I'm willing to change it. As it stands, there are a number of people in the world, and not an insignificant number, that don't meet my criteria for cognition. I can't really explain why I treat the attribute of having this ability, or rather this set of abilities, as dichotomous. I'll be more careful next time. As for its usefulness, I only meant the discussion. Motorheadlk in particular used some expressions in whose place cognition would be not only apter, but actually correct. Speaking of which, cognitive psychology can offer many useful tools for determining differences between tulpas and hosts (the original conscious minds). Although it “can't really explain much”, it can provide some insight into differences in learning and memory and many other things. I've found it very helpful.
> logical grounding

Yeah, I had some idea, but it turns out it was wrong. Good thing I asked. Thanks for the clarification. One more off-topic thing: I've seen parallel processing mentioned many times on this forum, and I even thought I vaguely knew what it meant, but I must admit that I haven't got a clue now. Frankly, it sounds to me like an unnecessary buzz-phrase. You seem to be good with words, so I'd like to know your interpretation of it. Okay, there still remains one point, the many classifications of consciousness, which I'll try to address tomorrow, because I'm too tired now.
  5. Virgil

    Proof of Tulpae

    Here's an idea that's occurred to me a great many times, but I've never thought of using it as confirmation of tulpas' existence. I think that's because I am already dead sure their existence is possible. It is based mostly on a peculiar side effect of the creation process, muscle twitching.

The theory: Individual muscle fascicles are each innervated by a separate motor neuron. It doesn't have to be fascicles; any portion smaller than a muscle head would do. So, individual motor neurons in the ventral horn control parts of muscles, and they can fire individually. That much is certain. Now, I speculate that the primary motor cortex's connection to the ventral horn cells is such that it allows control of individual muscle parts. There is ample evidence corroborating that, but still, I'm not quite sure. I can't find any explicit information confirming it. The most closely related piece of information Neuroanatomy for the Neuroscientist has to offer is this:

> Note that 70–90% of the fibers passing through the medullary pyramid will decussate and continue as the lateral corticospinal tract. These fibers originate primarily from those portions of area 4 representing the distal extremities and synapse directly on alpha motor neurons in the lateral sector of the ventral horn.

Well, let's assume it's true that it's possible for the cerebrum to send a signal to individual muscle parts. We also know that the cortex is vastly interconnected throughout the whole cerebrum; there are a great many axons spanning the anteroposterior axis of the head, so I surmise that the unavailability of some lower brain functions, excluding the autonomic system, to the (conscious) mind stems from functional attributes rather than structural ones. People can normally voluntarily control whole muscle groups and, with some effort, even individual muscles, but it's nearly impossible to actuate any smaller units.
Well, if the cerebrum is structurally capable of it though the mind is not, a tulpa might be able to learn such a skill. The advantage is that it can be easily measured, and it's harder to learn than to draw or write two different things with each hand at once, thus harder to fake. However, it may prove too difficult even for a tulpa. Well, let's find out. Unfortunately, some people might have no idea how a localised twitch manifests and how to tell it apart from a normal muscle contraction. I suggest focusing on a sensitive area such as the tongue. It has a lot of small fascicles, and I imagine the feeling of contracting only a small part of it to be very odd, so there is less risk of error. To summarise: if a person can actuate localised muscle twitches at will (the tulpa's will), then it is probable that the person indeed shares the brain with a tulpa.
  6. Hello, I'm sorry to intrude on your one-on-one discussion session. I'm not even sure what I've just done — it seems I'm just responding to some arbitrarily picked parts of text — but here is the product. I didn't bother with putting quoted text into those neat boxes, so you can't outright tell who wrote what, but what the hell, at least you'll have more fun remembering or seeking out what you wrote. As for other readers, well, too bad.

> The memory is fallible, and doesn't keep logs.

Actually, it sort of does, but it's a lack of a "log entry" that could potentially make a memory suspicious. However, you are basically right; people can always rationalise the source of a stray memory, and there is no such thing as a complex memory system in the brain that monitors and logs its own activity.

> … entropy…
> Actually, if you think regularly about entropy and evolution…

I fail to assign your usage of the word entropy to any generally accepted concept; to me, it has no meaning. It appears you tried to include some paradigm of yours in your message. Although I have nothing against word misuse if it doesn't concern the central point or if I manage to figure out the meaning, this particular instance really baffles me, and I can't just overlook it. So, how about you elaborate on it? If you think it deviates from the topic too much, there's always the PM system.

> When it does not have connections (or have still "immature" ones, like you want to call them), it does not have abstract concepts of the world and therefore cannot have consciousness.

I think you meant concepts in general. First, concepts are already abstracted or generalised, and second, applying abstract as a qualifier to concept usually serves to distinguish it from a concrete concept, which is an idea reflecting concrete objects like chairs or books, as opposed to abstract ones such as velocity, an integral or love.
Moreover, the presence of a certain configuration of synaptic connections alone, assuming that's what you meant, doesn't encode much information. It's mainly their (chemical?) state that encodes information, since synaptogenesis is a rather slow process.

> You could argue that the consciousness and subconsciousness still can't be considered subconscious and consciousness without each other, but that's not how scientists see it.

Well, I don't know about psychologists, but most neuroscientists and neuropsychologists outright disregard the term conscious mind due to its vagueness. It's an interesting philosophical concept, but that's about it; to my knowledge, much of the scientific community in my country avoids it like the plague. Consciousness is used solely for the cognitive state (explained in the quoted text at the bottom). Consciously is often synonymous with voluntarily.

> memory before the age of three is probably down to a lack of physical neural development enabling long-term memories to be stored

There are two or three kinds of memory:

* Procedural memory, broadly speaking, is an inherent attribute of most neural networks. As long as something has an adaptable neural network system, it also has the ability to form implicit memories.
* Declarative memory — this is probably what you meant — is something that has eluded all attempts at determining its operation since its existence was established. It stores information without the need of repetitive stimulus feeding (training).
* Working memory, which is unrelated to the topic at hand.

Despite their sharing a common word, memory, these terms describe very different capacities. It would be a great mistake to think of them as a single function. Infants are able to memorise and retain memories for many days, and as they grow older it becomes months, but for some reason, their explicit memories aren't preserved into subsequent stages of maturing.
Carolyn Rovee-Collier, a memory researcher, proved that infants do possess long-term memory. She published the article Evidence of long-term memory in infancy, but I don't think it's available on the internet. There's some info here:

> My point is that sensory input alone does not make memories; it is conscious experience that feeds into the memory.

Sensory input and its unconscious processing can affect the state of a neural network to so great a degree that the effect can be regarded as a formation of implicit memories. Visual pattern recognition is an example; the learned patterns constitute memories. Again, memory is an ambiguous word — also, the vagueness of consciousness strikes again. Obviously, consciousness (the state) is required for procedural learning, but is it the conscious mind, which motorheadlk probably meant, that carries out the learning? The answer depends on your meaning of conscious mind.

> we were talking about general human ability to think without long-term memory.

It wasn't going anywhere anyway. Sorry to be blunt, but it appears to me that memory is something you two barely understand. And I don't mean just to the extent to which it is generally admitted that we don't understand many functions of the brain. Ah, motorheadlk's posts give me brain damage, and waffles, you could really benefit from reading some sciencey articles, even if just on good old Wikipedia. Then again, I only skimmed the conversation, so perhaps I misjudged you.

> You may be confusing 'intelligence' and 'consciousness'.

And you, intelligence with cognition. Nah, they both mean basically the same thing, but psychologists prefer cognition, possibly because intelligence is vaguer and too broad. Animals possess intelligence, but are incapable of cognition.

> logical grounding

Uh—— I don't understand this phrase. I don't seem to know what grounding is. Does it have the same meaning as basis? By the way, the word logic and its forms are misused a lot nowadays, aren't they?
I think you can even get away with saying something to this effect: "This radiator is illogical." Grounding, it seems, can be good or bad, premises can be right/true or wrong/false, but it's the process, reasoning or argumentation, that can be either (logically) valid or fallacious. Okay, adding logic to the blacklist section of my mental lexicon. It's just too confusing. Here's something I wrote a couple of months ago, so it might be outdated and off-topic, but I'll post it anyway. It seems you two, motorheadlk in particular, are struggling with some of the terms, and this is what I came up with when I was plagued by the cognitive dissonance they gave me. I think the word cognition is possibly what you both are looking for. It's a really useful idea for these discussions. If you are not, well, never mind. Incidentally, there is already an old thread that deals with consciousness to some degree: You might want to take a look at it. Oh god, what am I doing with my life? This isn't science; this is philosophy. Dear sirs, I shall now lean back in my armchair, light a cigar, and enjoy the mood. And when I'm finished, I'll get out the trusty revolver my grandfather gave me and blow my brains out. Hmm—— if only I had some cartridges. I bet the damn thing doesn't even work any more.
  7. In the likely case there's no one with the experience needed to answer your question, I'll give you my opinion. The condition shouldn't be damaging to a tulpa, as long as its effects are of a purely hallucinatory character. Hopefully, it doesn't affect any important functions. There's even a slight chance a tulpa could help you fight the condition. Inducing (benign) hallucinations is just one of tulpas' many possible abilities. It doesn't define them.

> I'm also worried that if i did create one and allowed it those memories that it may take on the likeness of some of the things i saw during the experience.

I think that's an unfounded worry. You can create a tulpa; I don't see why you shouldn't, but the question of whether it's wise is another matter.
  8. Your uncertainty is completely understandable; in fact, tulpas don't usually need any permission whatsoever unless they really, really believe they do, and even then it's still a problematic question. You just have to rely on their being considerate and their healthy disinclination to do anything stupid.
  9. First off, let me tell you what these symptoms are. They are, for the most part, hallucinations. There are two kinds of these hallucinations:

* lasting mild effects characterised by noisiness: tinnitus, visual noise
* transient effects, often abrupt, clear and noticeable, and sometimes intense

The latter are listed below:

* nociceptive: bursts of intense but benign localised pain, pinches, dull cranial aches
* thermoceptive: regions of warmth or heat (which may move), chilly pins and needles, patches of cold
* tactile: vibrations, tingling, poking, sensations of pressure that may move, pins and needles
* proprioceptive: hard to describe — the body or individual limbs start to feel disproportionate, usually bigger (this is erroneously listed as a mental symptom in the wiki article below), absent, or as if they were in a different position than they really are
* equilibrioceptive: vertigo, often occurring along with proprioceptive hallucinations, resulting in pretty trippy sensations
* auditory: low noises with more complex patterns, short sound bursts (20-500 ms), which may vary in volume
* visual: specks and spots, plasma-like effects, shadows, lights, sudden size changes in a part of the visual field

The advanced portion of the visual system is divided into four subprocessing systems that process position, motion, colour, and shape respectively. I know of only one symptom affecting this system, and that's shape distortion.
There is another classification, though these effects (or symptoms) can't all be called hallucinations, because a symptom like muscle twitches manifests outside the mind:

* efferent: muscle twitches, sudden minor involuntary movements
* another type of efferent: these are really hard for me to believe, because they affect the autonomic NS, but evidence suggests they may be possible: abnormal body temperature (I think there are some groups of monks who can change/increase their body temperature at will — that would confirm the possibility of affecting thermoregulation), abnormal heart rate, and arrhythmia
* seemingly afferent: all of the symptoms (hallucination types) listed above
* purely internal: alien thoughts and emotional responses, for instance — this is an interesting class, which I'll discuss further

[align=justify] This article on the Kundalini syndrome is very interesting. In it, there are three tables listing symptoms occurring due to a certain kind of meditation. The similarity is striking, but the symptoms are less intense and frequent in tulpaforcers. That's a good sign, since I consider them useless or even detrimental. If you are interested in this topic, I suggest that you read the whole article… because I haven't yet. What causes these symptoms? I'd say stray or leaked signals and general noisiness, but that would be somewhat inaccurate. I don't have enough data yet to determine that. Substantially strong neural activity unrecognised by your processing pathways is interpreted as noise. Let's talk about this kind of 'neural' noise more. The process, when awry, doesn't cause only tinnitus or visual noise; it can also be the source of tremor — yes, tulpaforcing can impair your ability to perform very fine movements. I mentioned purely internal effects resulting from bad tulpaforcing, and now I'd like to expound on that. So, what can noise do to your mind?
It's not as noticeable as ringing in the ears, and the effects are subtler: they include slowing down the thought process and making it prone to faults; if they are severe enough, general mild confusion; and finally, an inability to concentrate or even think coherently.

Then there is superfluous activity that is not pure noise. The signals it produces are interpreted as semi-meaningful. Examples are emotional responses and other unrecognised feelings, thought-process disruptions, acute confusion, déjà vu-like feelings, and doubting reality or the perception thereof. It's fairly hard to describe and also to interpret. The person whose findings I'm drawing on said (translated by me; the person's actual wording is much less crappy): "… I kept narrating to my tulpa about [the AV receiver (or something, doesn't matter)], and I suddenly felt a very strong feeling of nonsensicality about it. I looked at the vase next to the TV stand and felt the same about it as well. At first, I thought it was an emotional response or something, but it was something I'd never felt before, totally alien. For a moment, I was dead sure the vase couldn't exist in this universe. It just seemed utterly impossible to me. The wave lasted for several seconds and then altogether disappeared." Generally speaking, it was just a wave of confusion. Emotional responses are definitely suspicious when there's no actual recognisable emotional content to them, when they are mere weird feelings.

All of this may sound scary, but tulpaforcing seldom causes such problems. The effects are usually mild at most. Moreover, there are worse yet mundane ways to screw up your mind, for instance doing nothing but watching crap on television every day. It won't have the same effects, but it can reduce you to a gibbering fool, which is way worse than, say, tinnitus. This may seem like a joke, but I'm serious: be careful about what you put in your mind.
It learns from and adapts to what you're exposed to, whether you want it to or not, although you can ignore certain things to some extent. Fortunately, people can control what they expose themselves to. Well, those with sufficient willpower can, anyway.

Yet another way of classifying, or rather identifying, these effects is by their intentionality. If your tulpa can consistently use, for example, head pressure to communicate, it shouldn't be looked on as a negative symptom; on the contrary, conscious control is an ability which both the host and his or her tulpa should strive for. However, if it is not consistent, and (not or) your tulpa can't willingly control it, then I don't believe it's beneficial. Keep in mind that the line between these two states is not always clear, and hosts tend to deceive themselves into thinking that certain sensations are produced intentionally.

Well, well, I'm not happy at all with what I've just spewed out, regarding both its form and its content; it's too incoherent and too incomplete for my liking, and some of the ideas don't even reflect my current views. Too bad I'm not going to do anything about that, so there. [/align]
  10. That claim is inconsistent with the OP of

      And the implication that the lack of progress was caused by counting hours is disproved by the following statement:

      Keeping a log is fine unless it starts causing you to worry, doubt your progress, or obsess over it, which can lead to favouring quantity over quality. Counting hours in itself is not bad.
  11. What the hell? Uh, never mind. Anyone want a Scriptish script that removes this nonsense?
  12. Virgil

    Proof of Tulpae

    Click the green arrow in the quote header.
  13. Virgil

    Proof of Tulpae

    Yeah, well, I'd definitely regard having two minds in one brain as an aberrant condition. Perhaps there are mechanisms in the brain that prevent such a condition from occurring spontaneously, and by tulpaforcing we somehow bypass or subvert them. Oh, and don't expect an MRI scan to reveal much: even DID affects the physical structure of the brain only slightly. A semi-related report:
  14. Virgil

    Proof of Tulpae

    Q2 neglected to consider the remarkable degree of autonomy that tulpas (well, some or most tulpas) exhibit. A tulpa's mind can do most of, if not more than, what the host's mind can, without the action needing to be prompted by the host. Tulpas are conscious; fictional characters in your head aren't. Characters might have complex personalities, but they act only when you're thinking about them, unless they've become tulpas. You'll probably have more luck convincing your friend with this: