Everything posted by PhDwannabe

  1. Excellent responses, gentlemen. Just what I was asking for. Thanks! Anyone else is of course free to add something, were there to be anything more to add.
  2. I think the word we're looking for here is "suggest," not "infer." With that out of the way: we really don't need the presence of afferent nerves associated with feeding processes in a section of our brain which we share with certain ancestors to suggest that those ancestors, um... needed to eat. Creatures without the ability to detect sound, without a metencephalon, and indeed without a brain are able to move. This is not archeology. Taking the whole passage at once, however: Well, OK, this looks to be a gigantic non sequitur that one of our dear evolutionary bio folks should pick apart themselves. In short, the ability of animals to engage in some kind of metabolic depression or suspension evolutionarily well-predates brains--occurring in many single-celled creatures. And finally: [headdesk] You said you'd welcome thoughts, DrmDoc. I apologize that none of mine are particularly kind.
  3. You make one claim here, and then a variation on that claim. First, that the relative lack of sensory acuity (compared to other animals) among early primate ancestors compelled cortical growth on an evolutionary scale. Second, that relative lack of sensory acuity, as well as relative lack of speed, and relative lack of stealth compelled this growth. Is it sensory acuity, or is it all three? It's not particularly novel to suggest that h. sapiens occupy a niche in which a relatively high degree of cognitive skill, and relatively low degrees of physical strength or senses are used to make a living. To make more specific variations upon that claim requires some more evidence. To make it in general terms is largely to recite what's already well-known. It also strikes me that this particular hypothesis: is somewhat incomplete, given that brain-to-body mass ratio has steadily increased along the mammalian line since very early mammals during the mesozoic. Also, the rapid advances in encephalization during the history of the early hominids we're talking about occurred long before the technology associated with hunting had arrived. Thus, the "mental demands" of obtaining meat are really the mental demands of scavenging--a strategy not unlike what mammals had already been employing for millions of years. One would have to make a case that scavenging meat and scavenging vegetable matter are qualitatively dissimilar. In both cases, other scavengers must be fought off or outsmarted (though they generally gather in higher density in the case of meat). Extremely simple tools are used in both--early hominids no doubt used stones to break open large bones and extract marrow, but this is probably not terribly new. Primates of all kinds employ sticks and rocks to get at ants and termites, nuts, roots, and honey. Animals far "down the tree" from primates have been seen to exhibit tool use behavior as well. 
Although, the biggest thing going on here is: could you share some of the anthropological literature which drew you to these conclusions?
  4. My question still remains: when is it enough? How far back do you have to walk to use the term? Do you need to walk back in time across a species barrier? I was using some shorthand in my original question which I shouldn't have, which you needled in on a bit, so I'll clarify: could we say that, in the last 10-15 thousand years, dogs have evolved from recent ancestors which are very much like extant gray wolves? Or, since extant canis lupus, extant canis lupus familiaris, and 10-15kya canis lupus could probably cross in any combination to create fertile offspring, is it inappropriate to use the word "evolved" in the bolded phrase above?
  5. In any discussion of the goodness of the investment, one must consider the expense of the initial investment. To that end, I'll suggest that the marginal value of peasant labor in Old Kingdom Egypt was pretty damned low. When the floodplain is underwater, an Egyptian peasant has very, very, very little else to do. May as well make him drag rocks around, eh? And remember, Egypt of the Old Kingdom wasn't really yet a mighty empire anyway--they had by then managed the nonetheless commendable task of uniting the chiefdoms of the delta and the upper Nile, but not a whole lot else. And from somebody who has seen them (and had an AK-47 pointed at him by a member of the Egyptian paramilitary for climbing, admittedly, a bit too high), I'll say yeah: very much worth it. Edit: In anticipation of what would be a pretty reasonable response about opportunity cost, I should probably also contend that there wasn't really tons else to do with all that labor back then anyway. Technology hadn't really advanced to the point where things of lasting economic value like highways or aqueducts were really buildable, and ancient Egyptians were already pretty much maxing out the number of canals they needed to farm the Nile valley (which, compared to the Mesopotamian region, wasn't actually too many). Indeed, other impressive architectural works would really have to wait until technology and knowledge advanced more. (As one professor once explained to me, the pyramids are not so much marvels of architecture as they are of engineering--assuming you've got the hundred million hours of free labor, it's not something that's technically really difficult to design. The Pantheon, while minuscule in comparative size, makes the pyramids look like a kid playing in the sand in terms of complexity.)
  6. Hey all: I'll start by saying that I'm not utterly ignorant of evolutionary biology--I suppose I have a college-intro-level knowledge of it, and do not subscribe to any particularly goofy theories regarding it. I am aware that the creationists/IDers point to examples of natural selection at work and scream that this isn't evolution, just adjustments in the gene pool, changing frequencies of traits, etc. The example of peppered moths in industrial Britain, I know, has attracted no shortage of hoarse screams. My question is, according to current definitions (or perhaps just current norms) that you might be more in step with than I am, when is it conventional to say that one creature has "evolved from" another? The example of dogs is the big one in my mind--they're just domesticated gray wolves, subjected to the selection pressures of 10,000+ years of yummy trash sitting around the outskirts of human settlements. As far as I'm aware, they're fully sexually compatible with wild gray wolves. Would it be conventional to refer to the recent "evolution" of dogs, or say that they have recently "evolved from" wild gray wolf stock? Or do you really need speciation to use the e-word? It'd be helpful if those with up-to-date contact with the evolutionary bio world could share their thoughts. Thanks, DJ
  7. Who says that we lack the "slightest understanding?" I can go on Google, Amazon, or a searchable academic database and find dozens of books and hundreds of articles about the evolution of the human brain, of intelligence, and of specific cognitive functions. Are you just saying that individuals speculate on the topic without understanding, or that science lacks it in general? Who speculates this? Classical Freudians? You've mentioned two parts of the classical Freudian tripartite model of the psyche (leaving out the superego--I'm not certain why) and the concept of a "complex"--by which you mean, perhaps, the Jungian concept of the complex. These theories have been largely discarded by the mainstream community of psychology. To be so gauche as to quote myself from a previous post: At any rate, I'm not sure how any of this question has clear bearing on a discussion of the evolution of the brain, the mind, or cognition, specifically. It might help if this: was expressed without the use of an unclear metaphor. Follow what footprints how?
  8. Indeed, Fuzzwood, let me Google that for you.
  9. Many behaviors or body states which correspond to emotion have plausible theories as to their evolutionary function. Human beings evolved an extremely complex social system, and the development of increasingly complex social networks served as a powerful evolutionary force for our species and its recent ancestors (for instance, most scientists in the area believe that it drove the development of a massive, calorie-sucking neocortex). During this rapid development, old evolutionary architecture was "co-opted" for new needs. Suppose you're really disappointed in or really disapprove of somebody's behavior--say, they're telling you a happy, unapologetic story about incest they've engaged in. Now, think about the face you'd be making--does it look anything like this? Probably something like it. You might do something similar if I threw up in front of you. This is one of the "Ekman faces" for disgust--part of the landmark study of the universality of facial expressions of emotion. But why does this weird arrangement of the facial musculature signify disgust? Because that face has its origins in an evolved response to protect the biologically important/expensive/difficult-to-replace sense organs of the face from noxious environmental stimuli which could do them harm. Put some hand sanitizer on your hands and wave them in front of your cat--you'll get the same expression. Why do hand-sanitizer-irritated cat and human grossed out by vomit make the same face as human outraged by incest story? Because this evolutionary architecture has been co-opted for a social purpose. We needed a way to signify that others' behavior was repulsive before we had the language to say, "Dude, you're totally a noxious environmental stimulus." So, how about crying, then? Crying's a tougher case. 
The distinct behavior of human emotional crying appears to be just about without precedent in the animal kingdom, although the two parts that comprise it are not novel in our species: the loud vocal wailing, and the lacrimation. Animals at far younger parts of the evolutionary tree than us engage in alarm behavior, including vocalizations. That part is not difficult to understand, evolutionarily--"hey! I need help!" And just about anything with eyes has some way to clear them of debris or irritants. But why on earth do we combine them? If you're upset and need help, aren't you quite possibly in a situation where it might make sense to be able to see what the hell is going on? Well, maybe that's exactly the point. Our species and its ancestors got pretty smart--smart enough to deceive, cheat, and play tricks. We see some deception in other animals (two hilarious examples here), but man, we're really good at it. We're capable of planning all sorts of nefarious things like: "I'll pretend to be upset and need help, then, when he comes over to help me out... bam! sharp stick!" The addition of this other behavior--previously in evolutionary history just used to clear out the eyes and such--could serve to show we mean business. "Look, I'm not trying to mess with you--I can't breathe properly or see straight, so I'm in no shape to pull any funny business." This isn't my theory, it's Oren Hasson's. David Buss--pretty much the most highly regarded evolutionary psychologist of our time--thinks pretty highly of it himself. Since evolutionary psychology necessarily rests on some amount of post-hoc reasoning, and is difficult to test in many of the conventional ways we're familiar with in the social sciences, I might call it a pretty damn good guess.* Without going further into the methods of evo-psych, and spending more ink justifying why this might be a particularly coherent hypothesis, I'll just leave it there. 
Thanks, DJ *CharonY's point is well-taken: There's plenty of debate between and among evolutionary psychologists and evolutionary biologists about how deep we can go in terms of describing modern behavior with these sorts of evolutionary explanations. His note that there are plenty of pressures which do not take the form of an obvious selective advantage is very true. Suffice it to say that there's disagreement about most of it. Many of these hypotheses end up quite rightly criticized as just-so stories. Anything that we can't pound into controllable experimental frameworks the way we'd like needs to be taken with one more grain of salt than usual. Offered here was simply one reasonably well-thought-out account of a potential explanation for human emotional crying.
  10. ...and I suppose one could trace it back further, if one wanted. Iron came from the silicon burning stage of very elderly stars, and other nucleosynthetic processes occurring in supernovae.
  11. This isn't closely related to any current branch of psychology. Psychology is the study of human beings' minds, thoughts, personalities, emotions, and behaviors as they relate to very... er... terrestrial affairs. This stuff is really cosmology/metaphysics/philosophy of mind. And speculative cosmology/metaphysics/philosophy of mind, at that. You're much better served by a solid grounding in the largely agreed-upon science related to those topics. Once your understanding of that is laid down, go have fun with the, uhh... less conventional viewpoints, like those of Russell Stannard. Thanks, DJ
  12. OK, maybe a simple laudation is kind of inappropriate for a thread post, but, umm, holy crap! FINALLY somebody explains this! I hear mystics of all shapes and sizes bringing up the observer-collapse-of-the-wavefunction to buttress the theory behind whatever crystal healing or past-life channeling they're selling, and I NEVER know how to explain it! Why did no instructor ever put it like this/why did I never manage to intuit this myself? Thanks guys!
  13. Sorry to drag up nitpicks on an old post, but I just saw this one, and I wanted to note a couple of things here: 1) This isn't psychology. The id/ego/superego scheme is classical psychodynamic theory--it's Freud, as many people are aware. Ask ten academic psychologists who happen to be breathing at this moment, and you might find one willing to say he really believes it.* Another two or three might say that it has some value as an analogy or metaphor. This is not by any means in the mainstream--it hasn't been for decades and decades. Also, Freud's scheme was tripartite--the subconscious was not regarded as a "4th aspect" of a personality. The id and superego were considered largely unconscious, while the ego was made up of largely conscious processes. 2) These aren't "anchoring behaviors." They're not behaviors whatsoever, they're divisions of the mind or person. And religion, parenting, and social exposure do not really contribute to all of them--Freud regarded the id, particularly, as a largely inherited, animalistic set of drives directed at basic need-fulfillment. One's socialization process developed the norms and values of a superego, and the conscious ego was tasked with the difficult job of mediating between the two. 3) Getting more to the point of the thread, the idea of psychoanalysis or any other theory of behavior doesn't really militate against the idea of free will, nor does it support it. Free will is a metaphysical concept--it really refers to the real causal efficacy of the agent, the power to actually influence the course of events in the universe. This would be opposed to a sort of determinist viewpoint that we're just a bunch of atoms getting knocked around, and our will is an illusion as we float in this gigantic stream. And as if two options weren't enough, the compatibilists say, "hey, dudes--you're both right. Or at least, you both can be at once." 
What we might call debates about psychological free will versus psychological determinism really take place at a physical, not a metaphysical level. To say that individuals' personalities are largely shaped by parenting or attachment style, genetic factors, various quirks of brain structure and function, or behavioral contingencies doesn't actually make you, metaphysically, a determinist. I can still hold that people tend to be psychologically the sum of any number of those things, but still exert agentic causal influence over the universe. Conversely, I can be a metaphysical determinist and think that the aforementioned psychological theories about personality or personality development are bogus. So, this debate about free will--metaphysically, which is really what most people mean by the term, cognizant of that or not--takes place on a totally different battlefield. Not to say that arguments about the psychology stuff aren't interesting or anything. They're sorta what I do all day. Thanks, DJ * Though you will find certain quarters of psychologists who are still more into it. Tends to be a bit more alive on the East Coast of the U.S. and in Europe. Even the psychodynamic folks, those who are willing to name themselves heirs to a Freudian system, don't even really believe or focus on the early Freudian view of the tripartite personality anymore--they're mostly far more informed by object relations and attachment. Full disclosure, though: being a cognitive-behaviorist, I tend to regard those two as about 90% and 70% bogus, respectively. Depending on the day.
  14. I wish I did! They're fun, after all. No, I really don't though--if, as I've assumed, by "good" you mean "empirically validated to even a minimum extent." At present time, you really need a human to perform the test. I'll ring this bell as much as I can: the real result of an IQ test is not an FSIQ score, it's a narrative assessment report. The clinician-tester is also doing a fair amount of observation in addition to the plain old mechanics of the test, and that sort of trained/careful observation and interpretation of the test is really important to a full understanding of a person's intellectual capacities. Oh yeah, and even if this wasn't the case, well, the tests cost so damn much to properly develop that you'd see a "real" one being given out for free on the internet right after you see your local pharmacy giving away medication. Sorry to disappoint!
  15. Pretty good responses so far to a good (and important) question. Thought I'd throw in a small two cents. You used just the right words there, cap'n: "a good start." Many in psychology absolutely freak out about twin studies. Yes, they're the gold standard in determining what's inherited and what isn't (we call this "heritability"). But that doesn't mean they're perfect. Let me break down the three big types of twin studies in more exhaustive terms than cap'n did, though he already said much of this: Identical Twin Studies: compare identical twins on your trait of interest. Say, schizophrenia, for instance. Find that the rate of concordance is something like 80%--when one has it, 80% of the time, the other one has it too. So we're done, right? 80%? Not yet. Not only do they share genes, they shared an environment. So, then we do... Fraternal Twin Studies: theoretically, they shared the same environment, though they are only as genetically similar as brother and sister. This should allow us to distill out the relative influence of the two. Suppose that you study them and find out that when one has schizophrenia, the other one has it 30% of the time. 80 - 30 = 50% genetic. Hold the phone, though. There's a big thing we're not ruling out, and that's fetal environment. Genetic influence is not the sum total of biological influence. Though the fraternal twins shared only half their genes, they shared the same digs for 9 months, during which they were exposed to the same array of hormones, nutrition, and possibly diseases or teratogenic factors. So that 50% is really just non-postnatal influence--some of which is likely to be genetic, some of which isn't. There are other little issues with twin studies like these as well. Sometimes, if we're lucky, we get a large enough sample to study the granddaddy of heritability samples: Identical Twins Raised Apart Studies: The postnatal environments are really different, but genes and prenatal environments are the same. 
This should tell us very, very precisely about the influence of a postnatal environment. This is the one people freak out about, and that gets a lot of fun media coverage. The body of studies often known collectively as the Minnesota Twin Family Study included a section like this: The Minnesota Study of Twins Reared Apart. Often, you hear stories about ridiculous similarities between long-lost twins--one well-known one you probably heard in gradeschool allegedly comes from the MSTRA: "One pair of twins had both divorced women named Linda and then married women named Betty. They later discovered that before they met each other as adults, they had taken several Florida vacations on the very same stretch of beach and had driven there in the same model of Chevrolet. They had both named their sons James Alan (one was "Allen") and both chain smoked Salems. Both chewed their nails and had woodworking shops in their basements." I'll save the speech about how human beings tend to remember and repeat amazing information like this, and forget about the million other times where nothing weird happens. So, let me rain on the parade a little. One of the problems with them is that, to really use them as a perfect measure of heritability, one of the conditions you'd have to fulfill is random assignment. If we got to be evil research overlords and design this study ourselves, we'd separate the kids at birth and then randomly select couples somewhere in the world, and parachute in that kid with a note that says "we'll be checking up on you from time to time, thanks." But that's not what happens in these situations. Often, they're handled by families or localities, such that the two environments are, in essence, correlated. The kids get exposed to somewhat similar environments by virtue of the fact that they've stayed within a similar family, regional locality, and/or nation or culture. 
If none of the kids get thrown around between very different cultures (and they almost never do), we can't separate out the influence of culture. If indeed the famous set of twins above really existed as often reported (I've never really cared to check up on it), maybe part of the reason they both smoked Salems was hereditary in some convoluted way. But it was also because one of them wasn't adopted out to Yemen, where he may have chewed khat all day instead. It's because of ridiculous complications like these (and the fact that we can't experimentally separate children at birth... damn government) that it's very difficult to say what proportion of a trait is "driven by" genetic or other biological causes, and what part is driven by environmental ones. But let me take yet another step back, and rain on the rain that's already raining on that parade. This entire discussion has been predicated (and is almost always predicated, no matter where you hear it talked about) on a largely unexamined assumption: that genetic/biological and environmental causes are proportions of a whole which add up to 100%. This is not true. To illustrate, let me give you a mathematical function: f(x) = x^2 - 4x. Plug in x = 6, and you get f(6) = 36 - 24 = 12. So, I gave you the function--the set of instructions, as it were--I gave you the number to "plug into" those instructions, and then we got an answer. So tell me now: what proportion of the answer came from, or was "driven by," the function, and how much of it came from the value we plugged in? The question is insane. The two things don't produce anything without one another. They work together. Well, genes/biology and environment similarly work together. Each actualizes and works through the other. Genes create predispositions which alter the patterns of environments one is exposed to. Environments create conditions which influence the regulation of gene expression. These things are entwined in a way that can't be broken into two pieces that add up to 100%. 
The heritability coefficients that we're so fond of ("intelligence is heritable at .5," we often say, meaning 50% of it comes from one's genetics/biology) simply do not tell us how much of a trait comes from that. They tell us how much came from genetics/biology in the sample studied, at the time it was studied, under the conditions of the study. It's a normative measure, not an absolute measure of the "power" of the gene. If we were to change that environment around, the heritability level would change! Certain environments allow, promote, or tamp down the expression of genes in different ways. Suppose we tested normal sets of twins and found that verbal skills were heritable at something like a .4--a decent enough result. Now, test another set of twins where one half of the set was raised chained up in a dark basement with no human interaction. Guess what? Your heritability coefficient--what people take as a measure of exclusively the gene's power--is now at zero. That's a dramatic example, but this works in real life too. People use heritability coefficients politically to agitate against programs like Head Start. They say: "what's the use throwing all that money at these kids? Most of these skills are highly heritable, blah blah blah." Guess what? Change the kids' environment, and the heritability itself changes. Neato, eh? This concludes my wandering, probably largely non-question-answering rant. Thanks, DJ
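Since that post leans on two bits of arithmetic--the naive "subtract the fraternal concordance from the identical concordance" move, and the claim that a heritability coefficient shifts when environmental variance shifts--here's a tiny Python sketch of both. The function names and all the numbers are mine, invented purely for illustration; real behavioral-genetic modeling is far more involved than this:

```python
def naive_genetic_share(mz_concordance, dz_concordance):
    """The 'subtract the twin concordances' arithmetic from the post.

    As noted above, this difference is NOT purely genetic: it also
    bundles in shared prenatal environment and other confounds.
    """
    return mz_concordance - dz_concordance


def heritability(var_genetic, var_environment):
    """Standard variance-partition definition: h^2 = V_G / (V_G + V_E).

    The ratio is a property of a population in a particular range of
    environments, not an absolute measure of 'the gene's power.'
    """
    return var_genetic / (var_genetic + var_environment)


# The 80% / 30% schizophrenia example from the post, in percentage points:
print(naive_genetic_share(80, 30))   # 50

# Identical genetic variance, different environmental variance ->
# a different "heritability," with no change to the genes at all:
print(heritability(40.0, 60.0))      # 0.4 across an ordinary range of homes
print(heritability(40.0, 360.0))     # 0.1 once environments vary wildly
```

The second function is just the post's point in one line: h² is a ratio, so cranking V_E up (the dark-basement thought experiment) or down moves the coefficient even though "the genes" haven't changed.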
  16. Sorry to keep beating this, and it may get far afield from the topic, but I can't let this one go. I'm in school to be a psychologist, and believe me, nobody finds the DSM more ridiculous than psychologists. We're not generally in charge of the thing. It has a thousand problems, and the very nature of categorical diagnosis has its problems as well. (Often, the people who hate it have a knee-jerk reaction to support a system of dimensional, or spectrum diagnosis, which, believe me, has a set of problems which gives categorical diagnosis a run for its money.) It's extremely difficult to adequately describe mental illness in a way that is consistent with research, diagnostically useful, and clinically useful. But that's no reason to: 1) throw the entire baby out with what is admittedly a lot of bathwater 2) sneer at it with slippery slope arguments about what insane thing they'll think up next. Yeah, the psychiatrists annoy me for medicalizing so much of human experience. Yeah, drug companies sell more drugs that way, and they fund a lot of the research. This tired old conspiracy theory writes itself. BUT, guess what? What we have does have some utility. It is based, to a large extent, on good, solid peer-reviewed research. The details of diagnostic criteria mostly do have clinical meaning and ecological validity. We fiddle with these things constantly. Many people in the lab I work in, for instance, are concerned with Criterion A2 of Posttraumatic Stress Disorder (it requires a person to have experienced fear, helplessness, or horror during or immediately after the traumatic event). Very refined recent research has found less utility with this one than was first believed. So, a lot of people want to drop it. What's wrong with this, exactly? You dump a diagnostic system because they've changed their minds? I'm not going to dump astronomy because those idiots used to think the earth's orbit was circular. 
Good for them for figuring out it went around the sun at all--better still figuring out it does so in an ellipse. Science makes progress. The unfortunate but hopeful implication of that is that it's dumber today than it will be tomorrow. We change and refine and improve upon (and sometimes screw up entirely). And why does any of this matter for psychology? Because we help people--tons of them. Empirically validated diagnosis helps me choose empirically validated treatment, which helps someone get better. Often a lot better. Better than they would've been if I would've been studying this 25, or 50 years ago. Better than they would've been without a halfway decent diagnostic system at all. I apologize if I've continued something which may be off-topic. I'll shut up on this one now, and let the conversation return to the decidedly weird game of third-hand diabetes evaluation. Thanks, DJ P.S.: Poor interrater reliability for schizophrenia? I'd love to see the literature on this one. Mail me a reference, eh?
  17. We are. IQ testing reports include more than a score. Clinicians describe the breakdowns of test subscales and the relationship between subscale scores, and what implications that has or may have for the individual. IQ tests are also often part of a testing battery which evaluates skills in an even more focused manner, or helps inform the choice of assessments for a future battery. It's not--not if you read an IQ report rather than relying on a single FSIQ ("Full Scale Intelligence Quotient") score. We do. Again, that's really what an IQ test is for: to give both a broad measure of a person's general intellectual ability ("g") and more comprehensive accounts of a person's specific capacities in several different areas. Of course, these areas are typically highly intercorrelated, which lends meaning to the concept of general intelligence. I wrote a relatively gigantic post about two weeks ago, right up the page there, that hit this and several other issues.
  18. Uhhhh... I'm curious about your question or your topic of discussion. Do you mean to ask: 1) Is homosexuality a fetish, or similar to fetishism? 2) Is homosexuality a learned behavior/choice or is it inherited, or something between the two? 3) Is homosexuality a mental illness? If it's any of these, with some level of fatigue at this sort of line of inquiry, I'll go ahead and register the answers of 1) No, not with any halfway rigorous conceptualization of these concepts 2) Some combination of the two, (although people don't fully define their terms when they ask this, and it's thus generally nonsense; furthermore, it doesn't really matter for most purposes, anyway) 3) No Somebody with the energy to explain any of this is welcome to. It's one hell of a dead horse.
  19. I'm going to let the inner Hobbesian come rolling out here, but, uhhh, maybe it's because people are sort of dumb, generally incapable of handling nuance and context, and need to be told what to do? I don't trust the common man to read TV Guide correctly, let alone make informed decisions which bear strongly on the public health. I mean, have you seen interviews of people at political rallies lately? And on a more serious (or at least, rigorous) note, can you really operationalize "puritanism" in a way that you can detect it in all varieties of communique from the federal government? Really? Do they feed us nothing but self-restriction, self-denial, and self-punishment? What about Bush's constant entreaties for us to spend money and go on vacation? What about structures of subsidy which help pay for every manner of wasteful, selfish lifestyle: highway driving, suburban living, beef consumption? Why does the federal government promote condom use? Fund methadone clinics? Why do state governments do everything they can to encourage spending and partying in their own states? Why are community or alternative courts replacing criminal courts for so many offenses? Doesn't sound like they're being very good puritans at all! Look, man, I'm not even a conservative--I'm not even against these things. But it does very much seem like you've got one big hammer, and you're seeing a whole lot of nails.
  20. Hope this doesn't get us too far afield here. I would be very careful about throwing around the names of mental illness. I understand where this is coming from, and I do appreciate that you stayed tentative (e.g., "seem," "tendencies"), while many others simply want to smack a sticker with the name of a disorder onto someone. Nonetheless, I do want to point out a few things: 1) OCD is a serious anxiety disorder, which is often best typified by the connection of anxiogenic thoughts (obsessions) to anxiolytic behaviors (compulsions). Often, these compulsions end up highly ritualized (we all have our visions of people with OCD checking the oven six--not five! not seven!--times. It actually isn't a terribly inaccurate picture.) The term "OCD" has entered the popular imagination as something which describes people who are fastidious, perfectionistic, rule-bound, or observably neurotic about having things just-so. OCD, though, is really a bit more than that. The DSM-IV-TR actually includes a diagnosis which more closely matches this, one which fewer people seem to know about: OCPD, or Obsessive-Compulsive Personality Disorder. This tends to be a somewhat milder syndrome, closer to what I just described, and is thought of as more "driven by personality style" than by a disordered level of anxiety. It's difficult for me to exhaustively describe what we (I'm a psychology graduate student, so I use the group pronoun for the profession) mean by that, so I'll leave it there. 2) Perhaps even more importantly than that, what many of us consider to be the most important factor in diagnosing disorders is a clinical level of distress. Nearly every diagnosis for a mental disorder includes the provision that diagnoses are not made unless there is significant interference with a person's daily life functioning. Who generally gets to decide that? Well, they do. In the OCD diagnosis, this is Criterion C. 
3) Finally, when an obsession is about a real-life problem, well, in short, it isn't an obsession (at least, not the kind we're thinking of when we make an OCD diagnosis.) "The oven will explode if I don't check it six times" is not a real-life problem. "My blood sugar is unstable and I'm trying to manage it with diet" is. It doesn't really count as sort-of an obsession, clinically--it just doesn't count (This is Criterion A2, if you fancy a look through the DSM!). Also, even more relevant in this case, we don't make diagnoses of most disorders when they're well-accounted for by a general medical condition. This is Criterion E. I can tell you from a clinical perspective (full disclosure: student clinician) that, based on the information here, that exclusion more than applies. Marat, I hope it doesn't sound like I'm hounding you--I know you were being tentative. Given what I was just mentioning about general medical condition exclusions, I also agree with the point you made there--that about sums up my last point, too. It's difficult to say anything, since all of our information is secondhand and over a text medium! Can't make too many solid diagnoses over that--medical, psychological, or otherwise. Anyway, I just wanted to shed a little light on this part of the discussion. Thanks, DJ
  21. Needimprovement, I'm not a medical doctor either. Luckily, I don't think we totally need to be here. I am a student of psychology at the graduate level, with a fair amount of experience understanding, arguing about, and combating pseudoscience. You are absolutely right to be skeptical of a claim like "85% of all disease is rooted in our emotions." Your wariness is very well-placed here, and you've got a good head on your shoulders with regard to this particular issue. As a psychological researcher, I can tell you some questions a person could and should immediately ask when they see something like that:
1) First and foremost: What studies? Show them to me!
2) Through what statistical procedure was it possible to arrive at a number of both such drama and such apparent precision as 85%? Certainly, elaborate processes of population estimation were used. I'd love somebody to walk me through them.
3) How have these studies operationalized and measured "emotion," and defined "rooted in emotions"? Are we talking about emotion's immediate effects? Are we including emotions' effects which are mediated through other processes? (For instance, say a depressed person drinks themselves into cirrhosis. Should we include that? Say an individual with poor body image has a bunch of casual sex in a crude effort to improve it, and they get chlamydia. Should we include that one too?)
4) What are we counting as "disease"? Just physical stuff? Are we including mental disorders? It seems a bit tautological to say that depression--serious public health problem that it is, of course--is "rooted in our emotions." If those are counted as part of that 85%, we might be, uhh... cheating a little.
5) Are we talking about mere individual incidences of diseases, or some sort of total amount of morbidity? For instance, does stress cause 85% of cold cases, or is it responsible for that proportion of their severity or duration?
6) How does this claimed effect play out in different areas? Is it across the board, or does emotion account for, say, 95% of colds but only 40% of cancer, and 65% of heart disease?
7) Is this claim actually based on causal data? Maybe it's only based on correlational data, and a third variable accounts for an apparent connection between negative emotion and illness. Or indeed, maybe it runs in the other direction--I was pretty down the last time I had the flu! Without being able to examine the methodology of the studies, we just can't know. People make improper causal inferences based on correlational findings constantly. (This is being pretty charitable: assuming that there even is such a finding of this magnitude. More likely the point is moot, because the finding doesn't exist.)
Can Bo Sanchez answer these? I'm going to put my money on "no" (rather than into one of his get-rich-quick schemes, eh?). Gee, I guess that means he shouldn't be making the claims. Funny notion. If you actually read the stuff linked, it's the usual postmodern spirituality dreck: scientists/doctors/academics are a bunch of evil materialists who want to hook you on medication/take your money/suck your soul out into a tube and make it into soylent green. Purchase my book/DVD/conference tickets and learn how to reconnect with your inner spiritual being/past lives/god-given right to do whatever. Variations on these claims are the bread and butter of an industry which separates millions of fools from billions of their dollars annually. See this excellent book, not to mention this one. Enough of that.
So, on the other side of it, you're absolutely right: the area of psychoneuroimmunology has been doing good science in this area for more than a generation. As you might gather from the name of the discipline, much study focuses on the effects of stress or negative emotion as mediated in vivo by immune functionality.
We find all sorts of interesting things about how stress hormones, at excessively high levels and durations, depress levels of various kinds of immune cells necessary to fight infection, cancer, you name it. And this is not to speak of other, perhaps simpler and more well-known connections between stress, hypertension, and cardiovascular disease. For a decent review of psychoneuroimmunology, check out a (fairly) recent meta-analysis like the one at bottom--hopefully, you've got database access of some kind. You likely won't find the authors making insane claims about what proportion of human disease is attributable to anything. In sum, your hunches are correct: it's important. In fact, it's really important. But it ain't 85% important.* Almost nothing is. Peace, DJ
Zorrilla, E. P., Luborsky, L., McKay, J. R., Rosenthal, R., Houldin, A., Tax, A., McCorkle, R., Seligman, D. A., & Schmidt, K. (2001). The relationship of depression and stressors to immunological assays: a meta-analytic review. Brain, Behavior, and Immunity, 15(3), 199-226.
* As a postscript, this is also an example of one of the myriad situations in which people insanely divide up a chain of causal factors, all perhaps of varying levels of necessity and sufficiency for some goal, into fractions of 100%. Look--stress may have caused the cold to get unnecessarily worse. But the virus causes the cold. But the poor handwashing habit caused the contraction of the virus. And the guy who should've called in sick today caused it to sit there on the door handle. We can't meaningfully divvy up "blame" here like that. Now, we could create some kind of experimental study where we altered each step individually and observed what the different effects were within a big sample, but that's a bit different--it doesn't allow us to make bald-faced claims about what is whatever percent whatever. What's responsible for your car moving: gas pedal, fuel injector, spark plugs, drive train, or the air in the tires? Sheesh!
All of them!
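On the correlational-data worry in question 7 above, here's a minimal Python sketch--all numbers invented purely for illustration, nothing to do with any real study--of how an unmeasured third variable can manufacture a sizable correlation between "emotion" and "illness" scores even when neither causes the other at all.

```python
import random

random.seed(0)

# Hypothetical setup: an unmeasured third variable ("life stress") drives
# both negative emotion and illness. Neither causes the other directly.
n = 10_000
emotion, illness = [], []
for _ in range(n):
    stress = random.gauss(0, 1)                    # the hidden confounder
    emotion.append(stress + random.gauss(0, 1))    # emotion = stress + noise
    illness.append(stress + random.gauss(0, 1))    # illness = stress + noise

def pearson(x, y):
    """Plain Pearson correlation, no external libraries needed."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

r = pearson(emotion, illness)
print(f"correlation between emotion and illness: r = {r:.2f}")
```

With these invented parameters, r lands around 0.5--a "finding" any headline writer would love--despite a direct causal effect of exactly zero in both directions. A correlation alone can't distinguish this setup from real causation, which is exactly why the methodology matters.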
  22. Do you have any kind of peer-reviewed empirical data to back this up? I can't wait to see it.
  23. Why Naltrexone? It's a drug used for alcoholism or opioid addiction. It blocks opioid receptors so that the opiates may not attach. Why did you choose naltrexone? Sure, it is a drug for that--it's an opioid antagonist. Remember, though, that drugs often have widely varying effects, and new uses appear which don't (to a layperson) necessarily seem to be connected. Valproic acid (Depakote, most commonly) is often cited as an example: developed as an anticonvulsant, it's still widely used to treat epilepsy. However, it was also later found to be effective in treating migraine headaches, as well as bipolar disorder. Naltrexone also has a bunch of uses being currently investigated--it may have some activity against some inflammatory bowel diseases, and maybe maybe maybe some cancers or multiple sclerosis. These investigations typically involve "low dose naltrexone." As far as I know, no big clinical trials have really been done, and so far samples have been very small. In the age of the internet, people find out more quickly than ever about small studies whose low-powered results haven't undergone extensive scientific vetting. So there are a million things out there like low dose naltrexone which have a lot of people's hopes up. So the answer to the original question is: as far as I'm aware--given the state of the literature on the topic--nobody can really say they know if naltrexone at any dose is an effective treatment for cancer, nor can they really speak to why it is or isn't. The science just hasn't been done yet. If you're asking about it with respect to your personal health or that of a loved one, there are rules that always apply: ask your doctor. If your doctor hasn't heard about it, ask him to look it up. Most of all, beware your own excitement. We should doubt ourselves most strongly when we really desire that something should be true.
We're typically a lot better at remembering all of those fun stories which support our beliefs than the sad cases which don't. Cheers, DJ
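On the "samples have been very small" point above, here's a minimal, purely hypothetical Python simulation--invented numbers, not naltrexone data--of why tiny trials produce such unreliable excitement: even with a genuine, modest treatment effect, single small studies swing all over the map.

```python
import random

random.seed(1)

# Hypothetical true effect: a standardized mean difference of 0.3
# between drug and placebo groups. Each simulated trial is tiny.
TRUE_EFFECT = 0.3

def tiny_trial(n=10):
    """Run one simulated trial with n patients per arm; return the
    estimated effect (difference in group means)."""
    placebo = [random.gauss(0, 1) for _ in range(n)]
    drug = [random.gauss(TRUE_EFFECT, 1) for _ in range(n)]
    return sum(drug) / n - sum(placebo) / n

# Simulate 1000 such small studies and look at the spread of estimates.
estimates = [tiny_trial() for _ in range(1000)]
print(f"smallest estimate: {min(estimates):+.2f}")
print(f"largest estimate:  {max(estimates):+.2f}")
```

With only 10 patients per arm, individual studies routinely report effects that are strongly negative, near zero, or several times the true value--which is why one exciting small study is grounds for a bigger trial, not for a conclusion.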
  24. Is there a, uhh... question here? Or even a topic?
  25. This is sort of a classic example of using logic when empiricism is needed. Science is the union, the marriage of these two great philosophical traditions--we nail ourselves to the rules of science, we use logical and empirical methods, and in doing so, we get to be not so utterly stupid all the time. Nonetheless, there are certainly questions for which either logic (perhaps more simply, "argument"), or empiricism, is more suited. Consider another question on these message boards: "is homosexuality a mental illness?" Well, I can say with pretty solid certainty, based on a cargo ship full of data: no, it isn't. Nonetheless, those data don't really define our terms for us. The question of what a mental illness is is not something we can directly answer with any kind of measurement device, and that definition is a huge part of the argument. So it's a question that is amenable to argument perhaps as much as it is informed by empirical finding. (Note: I still think the good arguments end up on the side of "no, it isn't," but that's another issue completely.) So, let's take this question: is there a causal relationship between estrogen level and pleasure-seeking behavior? Look, this is basically an empirical question. What we think about it doesn't really matter a whole lot--we can easily go measure it. It's like asking "do you think that there are more black socks or white socks in that sock drawer over there?" Who the hell cares what I think? Just go look in the drawer! Pleasure-seeking behavior can be defined/operationalized with fairly little controversy, and study designs are available which go far beyond the complexity needed to answer a question like this. Get on a database. Hell, Google it if you don't have that--that has dangers of its own, but it's better than nothing, provided you can sniff out junk fairly well. OK then... the answer? As far as I know, there isn't a connection--and certainly not a causal one. 
A cursory search of social science literature didn't turn one up. Find something on the matter? Sweet. Share it and cite it. Cheers, DJ