Raindrop

Members
  • Content Count
    4

About Raindrop

  • Rank
    Member

  • Sex
    Male
  • Location
    Helsinki, Finland
  1. Human brains do indeed do a lot of parallel processing, but it seems we can only become conscious of one process at a time. That said, processes can reach a "pre-conscious" stage, remaining just on the edge of becoming conscious. There's an illustrative image in Dehaene et al.'s paper Conscious, preconscious, and subliminal processing: a testable taxonomy. In it, a visual stimulus (T1) has taken control of the whole workspace and is being broadcast to a large number of regions in the brain, but there's also a preconscious stimulus (T2) that's currently self-maintaining in one part of the brain and could become the object of conscious experience (i.e. take over the global workspace from T1, throwing T1 back to a preconscious state in its own part of the brain) if the person directed their attention toward it. A third stimulus (T3) was briefly perceived, but its neural activation pattern isn't strong enough to sustain itself and will soon die out.
     One way to maintain that tulpas are conscious despite not necessarily gaining control of the global workspace would be to assume that the right kind of "pre-conscious" processing is enough to create a certain degree of consciousness, just not one strong enough to take control of the whole system. (Thus, a tulpa would only control the global workspace while switched with the host - I'm not sure about possession.)
     It's worth noting that Dehaene et al.'s definition of "conscious" is "something that a test subject can verbally report being conscious of", so a process that was conscious but couldn't access the speech-production system wouldn't be recognized as conscious - an unfortunate but rather unavoidable consequence of the methodology, since the only way to find out whether someone is conscious of something is to ask them.
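     The workspace dynamics described above can be sketched as a toy simulation. (This is purely my own illustration, not anything from Dehaene et al.'s paper: the activation values, the decay rate, and the self-maintenance threshold are all made-up numbers.)

     ```python
     # Toy model of global-workspace competition: each stimulus has an
     # activation level; the strongest one "wins" the workspace, runner-ups
     # above a threshold self-maintain preconsciously, weaker traces decay.

     def step(stimuli, decay=0.2, sustain_threshold=0.5):
         """Advance activations one time step and report workspace contents.

         stimuli: dict mapping stimulus name -> activation in [0, 1].
         """
         for name, a in stimuli.items():
             if a < sustain_threshold:
                 # Too weak to self-maintain: fades out, like T3.
                 stimuli[name] = max(0.0, a - decay)
         # Strongest stimulus takes over the workspace, like T1.
         winner = max(stimuli, key=stimuli.get)
         # Self-maintaining non-winners stay preconscious, like T2.
         preconscious = [n for n, a in stimuli.items()
                         if n != winner and a >= sustain_threshold]
         return winner, preconscious

     stimuli = {"T1": 0.9, "T2": 0.6, "T3": 0.3}
     winner, pre = step(stimuli)
     print(winner, pre)  # T1 wins; T2 waits preconsciously; T3 is decaying
     ```

     If T2's activation were boosted above T1's (say, by attention), the next call to step would hand it the workspace, with T1 dropping back to the preconscious list.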
(Again I feel the need to caveat that I'm not really an expert on these topics and could be misunderstanding something about them.)
  2. I should preface all of my comments by saying that I'm not really a great expert in neuroscience or consciousness, just someone with a Bachelor's degree in cognitive science who's done a bit of extra reading and who managed to come up with a hypothesis that apparently sounded plausible enough to be accepted into the conference. For how much that counts, I'm not sure. That said, here's some half-informed speculation!
     That's an interesting question. My explanation for how tulpas are formed is that the brain is repeatedly fed sense data that suggests the existence of an independent mind, until it gradually starts modeling that mind the same way it models any other person. The outputs of that modeling process then become the tulpa's behavior, which gets fed back to the model: the tulpa essentially is the model of itself.
     What would it mean for the host's consciousness to exist through the same mechanism? I think it would mean that infants aren't yet conscious - rather, their behavior would be driven by much simpler and to some extent "hard-wired" rules and reflexes. But as they did things, some part of their brain would observe the things they did and build up a model of "what kind of a person would do these kinds of things"... and then, as the infant got older, she would gradually become conscious, as that model's predictions started feeding back to itself and it gained some degree of control over the body. Thus, we would be our brain's models of ourselves.
     That sounds kinda weird, but there is work within philosophy of mind that suggests something along these lines; one of my favorite descriptions of it, from the AI project OpenCog, says that what I described above is pretty much what happens. That would be one way by which the host's consciousness could come into existence through the same mechanism as the tulpa's. Is that what actually happens? I couldn't say.
     "Goes through the same thought process as the host" is somewhat vague, but we know that there are a lot of different systems and neural circuits in the brain. I would expect that, as the tulpa developed, the model containing it would gradually develop connections to different circuits, becoming able to - to use a computer analogy - "run its own programs" in those circuits. (Please interpret this analogy in the loosest possible sense, since the patterns of neural activation happening in the brain are very different in kind from any of the programs we're familiar with in ordinary computers.) Possibly the tulpa would think differently from the host, due to having access to different neural circuits, or maybe they'd mostly share access to the same circuits and thus be very similar. But this is again very wild speculation on my part!
     To get my feet on somewhat firmer ground: one thing that complicates this is that human consciousness seems to work through what's called a global workspace. Different thoughts and sensations constantly compete to get into that workspace; whatever does get there is broadcast all around the brain and is perceived consciously. The thing is, only one thing at a time can control the global workspace, suggesting that if a tulpa is conscious through a similar mechanism, it and the host cannot be conscious at the same time. On the other hand, we often have a lot of thoughts going through our heads at once, and we perceive them as happening basically simultaneously because they keep kicking each other out of the workspace at such a rapid pace. Could a host and tulpa similarly be conscious at practically the same time, with their respective thought processes flickering in and out of the workspace at such a high speed that neither notices that they're not actually "constantly online", but rather taking turns being conscious?
     I'm a little skeptical, because there's also research showing that genuine multitasking is hard for people and exerts a much higher mental load than doing one thing at a time, which makes it sound implausible that two different minds could keep switching places that fluidly. So, I dunno. We need to get someone to do some brain imaging studies on tulpamancers to get answers to these questions. :)
  3. I don't think the academic community necessarily views "a spiritual thing" and "psychological phenomenon deserving scientific study" as two different categories - there's lots of research on the psychological and evolutionary foundations of religion, for instance.
  4. The conference 'Toward a Science of Consciousness 2015' accepted my abstract, Sentient companions predicted and modeled into existence: explaining the tulpa phenomenon, into their program as a contributed poster. I'm reproducing its contents below.
     ----
     Sentient companions predicted and modeled into existence: explaining the tulpa phenomenon
     Kaj Sotala
     Within the last few years, several Internet communities have sprung up around so-called "tulpas". One website (tulpa.info) describes a tulpa as "an independent consciousness in the mind [...] an internal personality separate from your own, but just as human. They are sentient, meaning they have their own thoughts, consciousness, perceptions and feelings, and even their own memories." Members of these communities discuss ways of actively creating tulpas, and exchange experiences and techniques. Tulpas may interact with their hosts via e.g. auditory, visual, or tactile hallucinations, appearing as real people. Although the existence of such beings may sound implausible, several related phenomena are known to exist. These include children's imaginary friends, dissociative identity disorder, and the "illusion of independent agency" (Taylor et al. 2003), where fiction writers report experiencing their characters as real.
     I hypothesize that tulpas may arise from the combination of three factors. First, conscious thought acts as a "reality simulator", and imagining something is essentially the same process as perceiving it, with the sense data being generated from an internal model rather than from external input (Hesslow 2002, Metzinger 2004). Second, our brains have evolved to be capable of modeling other people and predicting their behavior, so as to facilitate social interaction.
     Third, according to the predictive coding model of the brain (Clark 2013), action and perception/prediction are closely linked: doing something involves us predicting that we will do it, after which the brain carries out backwards inference to find the actions needed to fulfill the prediction. This allows for a tulpa-creation process in which the practitioner starts by imagining the kind of person they wish to create, and how that person would behave in different situations. The mental images produced by this process are picked up by the people-modeling modules of the brain, which might not be able to distinguish between imagined and perceived sense data, and they begin creating a model of the tulpa being imagined. Practitioners report their tulpas sometimes doing new and surprising things, which could be explained by the brain doing backwards inference to find possible "deep causes" of the tulpa's imagined behavior, whose other consequences are then simulated, causing the tulpa to act in ways unanticipated by the practitioner. Eventually, once the model and the practitioner's ability to imagine the tulpa become strong enough, there will be a self-sustaining feedback loop: the model of the tulpa creates new predictions of its behavior, which are experienced as happening, and these experiences are fed back into the model, giving rise to new predictions and behavior. By this point, the tulpa will be experienced as acting independently and separately from the "main" personality.
     References
     Clark, A. (2013) Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3).
     Hesslow, G. (2002) Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences, 6(6).
     Metzinger, T. (2004) Being No One: The self-model theory of subjectivity.
     Taylor, M., Hodges, S.D. & Kohányi, A. (2003) The illusion of independent agency: do adult fiction writers experience their characters as having minds of their own? Imagination, Cognition, and Personality, 22(4).
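     The self-sustaining feedback loop from the last paragraph of the abstract can be sketched as a toy program. (Again, this is purely my own illustration: the PersonModel class, the trait numbers, and the crude "strengthen the trait that produced the behavior" update rule are all invented for the example; real backwards inference to "deep causes" would be far subtler.)

     ```python
     # Toy model of the loop: a person-model emits a prediction of the
     # tulpa's behavior, the prediction is "experienced", and the experience
     # is fed back into the model, which then generates the next prediction.

     class PersonModel:
         """Minimal stand-in for the brain's model of another person."""

         def __init__(self, traits):
             self.traits = dict(traits)  # e.g. {"friendly": 0.7}
             self.history = []           # behaviors experienced so far

         def predict_behavior(self):
             # Predict behavior driven by the currently strongest trait.
             trait = max(self.traits, key=self.traits.get)
             return f"acts {trait}"

         def observe(self, behavior):
             # Feed the experienced behavior back in, slightly strengthening
             # the trait inferred as its cause.
             self.history.append(behavior)
             trait = behavior.split(" ", 1)[1]
             self.traits[trait] = min(1.0, self.traits[trait] + 0.1)

     model = PersonModel({"friendly": 0.7, "shy": 0.3})
     for _ in range(3):                       # the self-sustaining loop
         behavior = model.predict_behavior()  # model -> predicted behavior
         model.observe(behavior)              # experience -> back into model

     print(model.history, model.traits)
     ```

     Each pass around the loop makes the dominant pattern a bit more entrenched, which is the sketch-level analogue of the model's predictions and the experienced behavior reinforcing each other until the tulpa feels independent.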