I_wesley125, December 17, 2012

Fede already said this. Stop taking his content.

Name: Twi (she wants a new name; we are thinking of one)
Form: Twilight Sparkle
On: Talking/communication

Need web or game hosting? Feel free to message me. Prices are flexible, with custom assistance from me.
Guest, December 17, 2012

Not exactly. These guys have read a bunch of fairytales called "psychology". I don't really like fiction that much. All I can really say about psychology is that there are aware (conscious) thoughts and unaware (subconscious) thoughts. That's about it. These people just go on about consciousnesses, brain sections, memory/motor processing, personalities, id, ego, super-ego, and... unconscious parrots? Who the hell came up with that term, anyway? I don't refer to these things as that, and I've no idea why it keeps popping up again and again. It's like you're trying to separate them into "parrots" and "truly unique, 100% real and certified", which in the end makes no difference when you say something to a dude in your head and you get a response either way.
waffles, December 17, 2012

I stole nothing from Fede. If the hypotheses overlap, then that's great, but don't accuse me.

Fede: if you divide the mind into aware and unaware, then that'd be conscious and unconscious. As for unconscious parroting, the point is that we assume the tulpa's existence beyond the response; that is, that they are sentient. That being the case, your unconscious mind could produce a response that is not them, or that substitutes for them; this would be early development.
Guest, December 17, 2012

> If you divide the mind into aware and unaware, then that'd be conscious and unconscious.

Word-picking now, are we? I'm a psychology expert from the 1800s, arisen from the demonic Hellfire in the depths of the Earth thanks to a skilled conjurer, and I say it's now superconscious and metacon-scious. Sounds cooler. Also, take note of the hyphen in metacon-scious; make sure you get this one right, or people will have no idea what you're talking about, especially the people who have devoted their lives to reading fictional novels like The Unconscious Mind and its spinoffs.
waffles, December 17, 2012

I wonder why people didn't want you off moderation. I didn't realise you'd go back to your old ways so quickly. Still, the word-picking is appropriate.
Guest, December 17, 2012

I'm just proving my point through creative examples.
Couguhl, February 3, 2013

If the dissociated personality is able to create lines of thought you (meaning the original personality) don't have direct access to, how is that functionally (or at all) different from it being sentient? It isn't.

Consciousness can be neurologically defined as the collection of neurons that individually express themselves. The brain can actually rewire itself in accordance with a continuously perceived belief (i.e. why some actors who play the role of a mentally handicapped person start to actually behave as if they were handicapped). There is no 'center' of control or consciousness in the brain; it works together as a whole to express the individual's personality and thought. That is why people who undergo hemispherectomies (removal of one half of the brain, say the right side) cannot perceive objects held by the left side of their body, nor can they perceive the left side of any objects they observe. They have undergone a drastic change in consciousness. This is also why brain damage can permanently change someone's personality: the total collection of neurons expressing themselves has drastically changed.

Keeping this in mind, let's define sentience. A computer, no matter how complex, is not sentient, because it can only use a linear code of algorithms to process information. A brain, however, is quite different. The brain processes information not by a linear sequence, but by a mass of individually working neurons. There is no pattern it has to follow, nor is it handicapped by any logical rules. This is what makes the brain sentient.

Just as the pixels on a screen can express themselves as a recognizable image when in unity, the convergence of neural interaction expresses itself as consciousness. Scientifically you can say that we are "a momentary expression of an ever-changing unity with no center." And that is exactly what a tulpa is.
Tulpa: Sierra | Forcing since July 2012 | Couguhl's Progress Report
waffles, February 3, 2013

> Consciousness can be neurologically defined as the collection of neurons that individually express themselves.

Whaaat. Neurons can't 'express themselves'. That doesn't sound like a neurological definition to me. Use clearer words.

> There is no 'center' of control or consciousness in the brain; it works together as a whole to express the individual's personality and thought.

That's wrong. Typically, the brain has a certain area responsible for a certain function. Visual processing goes on in the visual cortex, and so on. While consciousness is more complex, it says right on the damn Wikipedia page:

> An active cortico-thalamic complex is necessary for consciousness in humans.

The brain isn't some magic sandbox. It is generally thought to have specific, distinct areas responsible for distinct functions.

> That is why people who undergo hemispherectomies (removal of one half of the brain, let's say the right side) cannot perceive objects held by the left side of their body, nor can they perceive the left side of any objects they observe. They have undergone a drastic change in consciousness. This is also why brain damage can permanently change someone's personality. The total collection of neurons expressing themselves has drastically changed.

There's a far saner explanation for that. Again, I quote Wikipedia:

> There is a visual cortex in each hemisphere of the brain. The left hemisphere visual cortex receives signals from the right visual field and the right visual cortex from the left visual field.

Since the part of the visual cortex in the right hemisphere is responsible for left-side processing, a right-side hemispherectomy that removes it also removes its function of processing left-side vision. Why would you assume that it's a "drastic change in consciousness"? What does that even mean? We have the Glasgow Coma Scale to grade consciousness medically, but that doesn't cover things like partial vision loss or a change of character.

> Keeping this in mind, let's define sentience. A computer, no matter how complex, is not sentient, because it can only use a linear code of algorithms to process information.

Not really true. I mean, algorithms can modify themselves, comparable to neural rewiring. Also, computer systems can and have been built using simulated neurons.

> A brain, however, is quite different. The brain processes information not by a linear sequence, but by a mass of individually-working neurons. There is no pattern it has to follow, nor is it handicapped by any logical rules. This is what makes the brain sentient.

But that's not really true. Let's go back to our visual cortex example. The wonderful Wikipedia has beaten me to it:

> The dorsal stream begins with V1, goes through visual area V2, then to the dorsomedial area and visual area MT (also known as V5) and to the posterior parietal cortex. The dorsal stream, sometimes called the "Where Pathway" or "How Pathway", is associated with motion, representation of object locations, and control of the eyes and arms, especially when visual information is used to guide saccades or reaching. The ventral stream begins with V1, goes through visual area V2, then through visual area V4, and to the inferior temporal cortex. The ventral stream, sometimes called the "What Pathway", is associated with form recognition and object representation. It is also associated with storage of long-term memory.

This is clearly a linear sequence, and a pattern. Everything is governed by logic, also. No exceptions, if you add a fourth law for superposition.

> Just as the pixels on a screen can express themselves as a recognizable image when in unity, the convergence of neural interaction expresses itself as consciousness. Scientifically you can say that we are "a momentary expression of an ever-changing unity with no center."

I don't think you can say that scientifically, actually.

> And that is exactly what a tulpa is.

And that's exactly what a non-sequitur is.
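The "simulated neurons" and self-modifying algorithms mentioned here bottom out in something like the following sketch: a single perceptron-style threshold unit with the classic perceptron learning rule, which "rewires" its weights from examples. Everything below is illustrative, a minimal toy rather than how any real neural simulator works.

```python
# A minimal simulated neuron (perceptron): weighted sum of inputs
# compared against a threshold, plus a learning rule that adjusts
# ("rewires") the weights in response to errors.

def fire(weights, bias, inputs):
    """Output 1 if the weighted input sum plus bias is non-negative."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def train(examples, n_inputs, epochs=20, lr=0.5):
    """Perceptron rule: nudge weights toward correcting each error."""
    weights, bias = [0.0] * n_inputs, 0.0
    for _ in range(epochs):
        for inputs, target in examples:
            error = target - fire(weights, bias, inputs)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND from its truth table.
and_table = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_table, 2)
```

After training on the truth table, the unit behaves as an AND gate despite never being handed an explicit "linear code" for AND; the behaviour lives in the learned weights.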
Couguhl, February 18, 2013

Forgive me. I can be intellectually choppy.

> Whaaat. Neurons can't 'express themselves'. That doesn't sound like a neurological definition to me. Use clearer words.

I was attempting to describe Stanislas Dehaene's model of consciousness, where consciousness is characterised, neurally, by large-scale, reverberant processing across the whole brain.

> That's wrong. Typically, the brain has a certain area responsible for a certain function. Visual processing goes on in the visual cortex, and so on. [etc.] The brain isn't some magic sandbox. It is generally thought that it has specific, distinct areas responsible for distinct functions.

That is true, but consciousness isn't something that has a clear or unequivocal definition, unlike sight or the activation of norepinephrine. And sure, something like an active cortico-thalamic complex may be required for consciousness, but it is fallacious to assume it is the seat of consciousness solely because of this. Hell, a brain stem is required for consciousness.

> Why would you assume that it's a "drastic change in consciousness"? What does that even mean? We have the Glasgow Coma Scale to give a scale of consciousness medically, but that doesn't cover things like partial vision loss or change of character.

Awareness would be a more accurate term, but awareness is a fundamental aspect of consciousness. People who suffer brain damage on only one side of the brain, for example, may start to see only half of what they are focusing on. A lot of the time, if someone asks them to draw a picture of something, they will only draw half of it, because their perception is skewed. When confronted about it, they will suddenly make an excuse of some sort. (I forgot the name of this, but I'm sure you can find it on YouTube or something similar.) This shows that how you perceive things, your awareness, can ultimately impact consciousness.

> Not really true. I mean, algorithms can modify themselves, comparable to neural rewiring. Also, computer systems can and have been built using simulated neurons.

Not true that computers can be sentient, or that they only use linear code? Nonetheless, you are correct about flexible algorithms.

> But that's not really true. Let's go back to our visual cortex example. The wonderful Wikipedia has beaten me to it: [etc.] This is clearly a linear sequence, and a pattern. Everything is governed by logic, also. No exceptions, if you add a fourth law for superposition.

I see. I was incorrect. As a side note, everything is governed by logic, but not everything is logical.

> I don't think you can say that scientifically, actually.

Not in those terms, not really.

> And that's exactly what a non-sequitur is.

My post was an attempt to explain consciousness in order to show that a tulpa's conscious-like behavior is synonymous with the 'real' conscious-like behavior of humans. As a conclusion, I stated the definition of 'what we are' and immediately stated that it applies to tulpæ as well. But I appreciate the rebuttal. I don't get them that often.
Virgil, February 23, 2013

I'm going to use the email quoting style again. I hope it's not too inconvenient for the reader; it is certainly convenient for me.

> let's define sentience

Let's not. Sentience is a highly elusive thing. We could discuss it for weeks and not get anywhere. Let's do something productive instead.

> A computer, no matter how complex, is not sentient, because it can only use a linear code of algorithms to process information.

If you can simulate a brain on a computer, you can have a thinking computer. It's theoretically possible, because classical physical systems can be emulated (or simulated, if you'd like) on a Turing machine, and the computer is a Turing machine. It may be very difficult in practice, though. I imagine running a sufficiently complex model of the human brain could be computationally very expensive and thus infeasible. And what if the wacky theory that the brain's operation depends on quantum phenomena is true? Quantum systems can also be simulated with a classical Turing machine, but the classical cost grows with the size of the quantum system's space of states, which is exponentially large in the number of particles and thus ridiculously large even for a system consisting of a couple of atoms. Imagine the whole brain… Well, let's see what quantum computation will bring us in the future. This was just an interesting but quite unimportant remark.

> A brain, however, is quite different. The brain processes information not by a linear sequence, but by a mass of individually-working neurons.

True.

> There is no pattern it has to follow, nor is it handicapped by any logical rules.

Aaaand, this is too vague. Patterns, logical rules, sentience? It's sort of true that there are no apparent patterns in the state of a neural network, but there is a little method to its structure, if you look at the brain specifically. Hm, I can't say I understand that sentence.

> This is what makes the brain sentient.

Umm…

> But that's not really true.
> Let's go back to our visual cortex example. The wonderful Wikipedia has beaten me to it: [etc.] This is clearly a linear sequence, and a pattern.

Not sure again what was meant by linear, so I'll just say that cerebral pathways are full of feedback loops, and there are seemingly useless connections in them, mostly just single axons, connecting supposedly unrelated regions.

> Everything is governed by logic, also.

American conservatism sure isn't. hue

Joking aside, what logic? Back to the brain. Couguhl is right; neural networks definitely don't employ logic. Individual neurons do not perform boolean operations, don't act as logic gates, and the brain definitely doesn't encode or transmit information from one part to another as signals consisting of sequences of binary values. I assume here that I interpreted "logic" correctly; any other interpretation in the context of signal processing would be foolish. Logic, logic, what does it mean?

> a fourth law for superposition.

I have no idea whatsoever what this means. I'd normally look it up on the web, but I'm currently in the middle of the woods, connected to the internet via a crappy EDGE connection, and it takes a dozen seconds to load pages. I'd go mental if I had to do that, so, please, explain to me what it is.

> Not really true. I mean, algorithms can modify themselves, comparable to neural rewiring.

You missed the point (I think; I'm not even sure what it was supposed to be), which was the fundamental difference between the typical computer and one whose function is based on neural-network processing, and how that is the reason why computers can't be sentient: an argument that makes some sense to me but is based on very shaky foundations. Anyhow, let's discuss the differences, because I think it's an interesting topic. Also, I hope to dissuade people from letting the preposterous brain-computer analogy cloud their reasoning when dealing with the mind and other brain functions.
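The gate-versus-neuron contrast can be made concrete with a toy leaky integrate-and-fire model, a standard textbook abstraction of a spiking neuron; the parameters below are arbitrary, not physiological.

```python
# Sketch of a leaky integrate-and-fire neuron: the membrane potential
# integrates input over time, leaks toward rest, and the unit emits a
# spike only when a threshold is crossed. Unlike a logic gate, the
# output depends on the input *history*, not on a boolean function of
# the current inputs.

def lif_spikes(input_current, leak=0.9, threshold=1.0):
    """Return the spike times for a sequence of input current samples."""
    v = 0.0
    spikes = []
    for t, i in enumerate(input_current):
        v = leak * v + i          # leaky integration of input
        if v >= threshold:
            spikes.append(t)      # fire...
            v = 0.0               # ...and reset the potential
    return spikes

# A constant sub-threshold input (0.3 per step) produces a spike only
# after several steps of accumulation: lif_spikes([0.3] * 10) -> [3, 7]
```

No single step is a boolean operation on the inputs; the same instantaneous input can produce a spike or silence depending on what came before it.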
The computer executes one instruction at a time per core (sequentially), is fully deterministic, uses synchronous logic, and encodes information explicitly in definite physical locations. There are more differences. Biological NNs are immensely asynchronous, and many, if not all, of their elemental units, the neurons, are simultaneously engaged in some process at any given time, while only a subset of logic gates is in use at a time in synchronous logic circuits. Biological NNs are also plagued by stochastic processes that arise from their own activity as well as from internal noise. To function in the semi-deterministic manner required for coherence, a network must therefore be resistant to some level of noise and to various other disruptions. This resistance, it turns out, is an intrinsic quality of many types of neural networks. What is more, individual neurons are somewhat noisy; they fail to fire from time to time despite being sufficiently depolarised, or fire when they're not supposed to. This is true only for smaller neurons, though; large cells tend to be reliable. It should be clear that information processing must be distributed to some extent. The brain has other interesting characteristics regarding dealing with disruptions, but that's beyond the intended scope of my reply.

Back to computers: what happens if one gate in the CPU fails to output the correct value, if just a single transistor fails to switch? It depends on what code is being executed, but it will always have definite and apparent consequences, be it a system crash or an incorrect result of an arithmetic operation. Every unit must work flawlessly in order for the whole system to function as intended. In the brain, or in any neural network, even many mathematical models, you can knock out several neurons or more, depending on its size, and it can still perform its task, albeit not as well as when intact.
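This graceful degradation is easy to demonstrate with a toy Hopfield-style network; the network size, the stored pattern, and the knocked-out units below are all arbitrary choices for illustration.

```python
# Sketch of fault tolerance in a Hopfield-style network: a pattern is
# stored in distributed Hebbian weights, a few "neurons" are damaged
# (their states flipped), and iterated updates still recover the
# stored pattern, because no single unit holds the information.

def store(patterns, n):
    """Hebbian weights from +/-1 patterns; no self-connections."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def recall(w, state, steps=5):
    """Synchronous sign-updates until (hopefully) a fixed point."""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

n = 16
pattern = [1 if i % 3 else -1 for i in range(n)]
w = store([pattern], n)
damaged = list(pattern)
for i in (0, 5, 11):          # knock out three neurons
    damaged[i] = -damaged[i]
```

A few update passes map the damaged state back onto the stored pattern; compare that with flipping three bits in a CPU register, which simply corrupts the value.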
The forebrain is even more resistant: you can destroy sizable populations of its neurons, and if no critical pathway is affected, it won't result in any apparent deterioration in function or performance. Evolutionarily newer brain areas, usually frontal, seem more impervious than older parts, located mostly in dorsal and caudal regions. One possible reason is that the frontal lobes contain fewer critical pathways and more diffuse networks, or that impairment of their function is not readily obvious. Still, the frontal lobes are more susceptible to injury, possibly due to their location. Yeah, all of this is probably off-topic but, hopefully, interesting. You mentioned neural rewiring, which is another process of possible interest. But what does it mean, and how is it carried out? I have an inkling; let's see if I'm on the right track.

> Also, computer systems can and have been built using simulated neurons.

I think he meant the digital computer, the kind one usually means when one says "computer".

> An active cortico-thalamic complex is necessary for consciousness in humans.

So it should be regarded as its centre? You shouldn't be so hasty in your conclusions and so laconic in explaining them. It's not so plain and clear, and there are other structures and functions necessary for consciousness, but you're essentially correct. Keep in mind, though, that the thalamus and cortex are tightly entwined, and that the thalamus, and the signals from the looped circuits it projects to, only stimulate or induce consciousness; the process itself occurs in the cerebrum. Still, it can be considered its centre. It sort of neatly fits my conceptual framework, so why not.

But what is consciousness? One of my friends responded to this question when I once asked him: "It's when neural activity in the brain is coherent." I didn't quite get it then, but now I believe he was right. Consider, however, how broad this definition is in contrast to the one relying on responsiveness tests, and that it depends on another rather vague idea, coherency. At least we know how some types of incoherent activity manifest. Since those thalamic nuclei responsible for monitoring the cerebrum's activity and adjusting the thalamo-cortico-thalamic circuits' firing patterns synchronise the neural activity and basically make it coherent, they can indeed be thought of as the heart of consciousness.

> The brain isn't some magic sandbox. It is generally thought that it has specific, distinct areas responsible for distinct functions.

Distinct is a poorly chosen word here. Areas of the cortex, as mapped to their respective functions, are very vaguely delineated, and they overlap. And specific? Nah. There are many parts that process various signals and integrate several functions, all in one solid physical structure. It might not be a magic sandbox, but it's a system of quite poorly understood black boxes. We should regard the brain as what it really is: a vast neural network comprising a hierarchical and intricately interconnected system of fuzzy modules of diverse population sizes. We might know what function many of these components are involved in, but we have no idea how they do whatever they do, which is presumably processing of information in general. Cerebral modularity loosely follows a pattern: evolutionarily older areas tend to be more distinctly separated into pathways, while recently evolved, or more advanced, areas seem more monolithic, or consist of diffuse networks. The frontal regions and parts of the temporal lobes, fairly recent additions to the mammalian brain, still elude our understanding as to function.

The knowledge of the rest comes mostly from lesion and activity studies, and all those can tell us is that this part is required for, or involved in, that function; we don't really know what it does, how it works, or whether other structures are involved in said function. We need accurate neuronal models and a means to translate their state into meaningful information, and only when we have those can we move on to trying to understand the actual structures found in the brain. Some neuroscientists, even those of renown, claim we know about 20-50% of all there is to know about the brain. As if it were possible to quantify knowledge. I know enough to know that I, that we, know next to nothing. And if one thinks we'll have the technology to transfer the mind/consciousness/whatever in the next fifty years, or a more advanced brain interface than just electrodes detecting and effecting voltage changes at the nodes of convergent pathways, then one is deluded. This area of research is little funded, and so far there is no considerable interest in it. Understanding the human brain, or the brain of any higher animal, for that matter, will keep eluding us for longer than people may think.

Sorry for the sloppiness of my replies. I don't really want to put more effort into them.

Bayesian inference