lucaspa

Everything posted by lucaspa

  1. Nice try but the logic doesn't follow. The bark of a tree and separation of continents (which is what the picture shows) happen by completely different processes. The Gulf of Aden and the Persian Gulf did not form because the earth is drying out. Let's face it, both places aren't dry -- they have water in them! Instead, they are a product of plate tectonics.

     

    The bark of a tree, OTOH, is formed from cells of the tree. The pattern is not formed by "drying" and it is just coincidence that this part of earth bears a (very) superficial resemblance to tree bark.

     

    Sorry, but the planet is not a living organism.

  2. Scar tissue is a specific type of tissue. Skin is composed of 2 basic layers: dermis underlying the epidermis. Dermis is composed of fibroblasts but has an organized matrix outside these cells, while epidermis is composed of epidermal cells, hair follicles, and sweat glands. Scar tissue is a very disorganized tissue but does not have epidermis over it. There are no hair follicles or sweat glands. The extracellular matrix is mostly type I collagen. All the "treatments" for scar tissue seem to be trying to get the tissue to "soften" a bit and be more pliable. None of them will cause the epidermis to regrow, so none of them are going to restore the skin to its normal appearance. Sorry. On normal skin, glycolic acid would remove the dead epidermal cells that lie on the surface of the skin -- the stratum corneum. This will expose the underlying living epidermal cells (probably also taking off the top 2 or 3 cells in that layer), leaving a "softer" skin until the epidermis naturally reconstructs the overlying layer of dead cells. I can't see it doing anything really for scar tissue except perhaps using acid to break down some of the collagen on the surface. I've seen papers exploring the use of hyaluronic acid to prevent scarring during wound healing, but none on using HA on existing scars. I can't envision any possible mechanism for it. What do you mean by "prove effective"? What do you hope to gain by treating the scar? Get it to go away?
  3. I agree that finding the cause of the autoimmune destruction of the islet cells is going to be important. However, even when that is known, it may not be possible to prevent it. And yes, if the system is not shut down any transplanted cells will simply succumb to the same process. But there are possible ways to avoid the process. One way is to hide the transplanted cells from the immune system. This can be done by placing the islet cells in a cylinder composed of material with a pore size that only allows molecules of < 50,000 MW to pass thru the membrane. This means that nutrients can enter the cylinder and insulin can leave the cylinder, but neither immune cells nor antibodies can get into the cylinder. Thus, no possible destruction of the islet cells. The problem then becomes getting enough islet cells for every type I diabetic. Some companies are looking into using xenogenic islet cells (such as bovine) while others are looking at means of differentiating either ES cells or adult stem cells into islet cells. There are several papers claiming that various adult stem cells are capable of differentiating into islet cells. However, no one has found a way to do this efficiently. Once that is found, then the treatment is to isolate adult stem cells from an individual (not necessarily the patient), grow them in culture, differentiate them, place them in the cylinder, and then place the cylinder in the abdominal cavity (in the omentum probably). Voila! Instant insulin-producing organ. There are, of course, several engineering hurdles to pass, but several biotech companies have appropriate membranes. It's cool. The problem here is how you have to get the transcription factors into the cells. You do that by transduction with a retrovirus and the danger is that the retrovirus will kick the cells into being cancer cells. FDA is not going to approve a treatment unless a safer method is devised to get the genes into the cell. 
What is needed is the exogenous signal that tells exocrine cells during development to differentiate into islet cells. These are the genes that are turned on as a result of that signal. We need the original signal, but right now the molecular biology community is tunnel-visioned on transducing genes.
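The size-exclusion idea behind the encapsulation scheme above can be made concrete with a short sketch. This is illustrative only: the 50,000 MW cutoff comes from the post, while the molecular weights are approximate textbook values, not measurements for any particular membrane.

```python
# Sketch of the size-exclusion principle behind islet-cell encapsulation:
# molecules below the membrane cutoff diffuse across, larger ones do not.
# Molecular weights (daltons) are approximate textbook values.

CUTOFF_DA = 50_000  # pore-size cutoff described in the post

molecules = {
    "glucose": 180,           # nutrient, must enter
    "oxygen": 32,             # nutrient, must enter
    "insulin": 5_800,         # product, must leave
    "IgG antibody": 150_000,  # immune effector, must be excluded
}

def crosses_membrane(mw_da: float, cutoff: float = CUTOFF_DA) -> bool:
    """Return True if a molecule of the given MW can diffuse across."""
    return mw_da < cutoff

for name, mw in molecules.items():
    verdict = "passes" if crosses_membrane(mw) else "blocked"
    print(f"{name}: {verdict}")
```

Whole immune cells are of course vastly larger than any single protein, so the same test excludes them a fortiori.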
  4. I know. It's obvious that we would do anything for those salads and breadsticks. Especially the breadsticks. Most of this research is actually done in academia. This type of research is what NIH gives out most of its money for. But again, much of the research figuring out the disease is done in animals. Let's take osteoarthritis for example. We can't go and harvest massive amounts of cartilage from humans in various stages of OA to look at the causes and biochemical events of the disease. People need that cartilage in their knees to walk around on. So there are animal models where the animal can be euthanized at particular points in the progression of the disease to look at the morphological, mechanical, and biochemical changes taking place. The most recent model is goats walking on concrete floors. This is very good! I'm glad you made this point. Yes, some initial toxicology testing (particularly for carcinogenesis) might be done in mice, but the tests for subtle toxicity and efficacy are going to be done in one (or 2) appropriate animal models -- not just any animal. Again, good point! Lots of tissues are taken from the animals and analyzed. Remember, the company has to show to the FDA both efficacy and toxicity! FDA is going to ask them about effects on organ systems other than the one the disease is working on. Particularly lung, heart, brain, kidney, and liver. In the case of Vioxx and Celebrex, there was data on these organs. I would remind everyone that Celebrex (like Vioxx, a COX-2 inhibitor) was never withdrawn from the market. The animal testing did predict both efficacy and toxicity. That first one is what we mean by "ineffective". If the drug is not efficacious in animals, it won't be tested on humans. Can you provide an example of a drug that was not efficacious ("ineffective" for other reasons) that went to clinical trials? Some caveats here. 
Phase I clinical trials are done on a small number of patients: usually those for which all other treatments have failed. Thus, for a new cancer drug, usually terminally ill cancer patients are chosen. The purpose here is to look for gross toxicity, since the number of patients is too small to pick up rare toxicity. Phase I is not for efficacy. If the Phase I cancer patients for a new cancer drug show improvement, that is a bonus. But even if they show no improvement but also don't show gross toxicity, the drug will move to Phase II. This trial involves a lot more people and is looking for 1) efficacy primarily and 2) not so common side effects. But you are correct, the trials are carefully scrutinized. Or are supposed to be. Sometimes that scrutiny fails, but that is a political problem we must address and not a failure of animal testing. Another good point. Regulatory agencies err on the side of safety. For instance, chloramphenicol was pulled because roughly 1 in 10,000 patients developed fatal aplastic anemia. That's an extremely low risk, but there were other antibiotics available and therefore the risk was considered too high. However, even today there are 1 or 2 situations where chloramphenicol is used because there is no other antibiotic that will work and the risk of dying from the disease is so much greater than the risk from the drug. That's the therapeutic index I was talking about. Here there are legal considerations. The drug company doesn't have absolute control over the physicians who prescribe the drug or the compliance of the patients who take it. There are many patients who think "if one pill is good, 2 pills would be twice as good and I can get over the disease twice as fast." If the TI is close to 2, that patient will take enough to get toxicity. Is the patient blamed? NO! The company would be liable. So the company doesn't want to take the liability risk and will pull the drug. Good point. That is never done. 
It would be impossible to get such a protocol past any IACUC I have ever dealt with. There is a section in the forms that is "Justification for the use of animals" and putting "I just wanted to see what would happen" is going to get the protocol rejected.
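The therapeutic-index point above (a patient doubling the dose of a drug whose TI is close to 2) can be sketched numerically. TI is conventionally the median toxic dose over the median effective dose; the doses below are invented purely for illustration.

```python
# Illustrative therapeutic-index arithmetic. TI = TD50 / ED50, the median
# toxic dose over the median effective dose. All numbers are hypothetical.

def therapeutic_index(td50: float, ed50: float) -> float:
    """Ratio of median toxic dose to median effective dose."""
    return td50 / ed50

ed50 = 100.0  # mg, hypothetical median effective dose
td50 = 190.0  # mg, hypothetical median toxic dose

ti = therapeutic_index(td50, ed50)  # 1.9: a narrow safety margin

# The "two pills are twice as good" patient takes 2 * ED50 = 200 mg.
# Whenever TI < 2, that doubled dose exceeds TD50 -- exactly the
# scenario the post describes.
doubled_dose = 2 * ed50
print(f"TI = {ti:.2f}; doubled dose exceeds TD50: {doubled_dose > td50}")
```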
  5. The problem is that we are trying to kill human cells. It's easy when we make antibiotics for prokaryotes: there are a LOT of differences between prokaryotes and eukaryotes and therefore we can make drugs that have no effect on human cells. But what are the differences between cancer cells and normal ones? Not that many and they are all subtle. The most easily seen difference is that cancer cells divide rapidly. Therefore the first attempt was to get drugs that interfered with this division and killed cells that were dividing. Most human cells do not divide very much in the adult. Muscle cells and osteocytes don't divide at all. Other cells, such as adipocytes, endothelial cells, nerve cells, tenocytes, etc. don't divide very much. One of these cells can go years without dividing. Unfortunately, there are a few cells in a few tissues that do divide fairly rapidly: intestinal lining cells, hair follicle cells, hematopoietic cells, etc. So these get affected, too, and we get the side effects. The real problem with cancer is that natural selection is true. All our medical advances can't get around that. We can kill off 99.9% of the tumor cells, but all we've done is ensure that the tumor grows again from cells that were resistant to the treatment. You are confusing vivisection with drug testing. Two different things. And animal studies using "vivisection" or surgery have been very good at extrapolation to humans because the anatomy is so close. There is motivation for drug companies to fudge the data. Not so much for academics in medical research. There we get gigged if we don't call them as we see them. The motivation for drug companies is why there are regulatory agencies like the FDA to monitor things. That's not true. The reliability has not been that bad. As you note, the reliability is so good that drug companies sometimes try to hide their animal data because it is unfavorable! If the animal testing were not reliable, they wouldn't have to do that. 
Sorry, you can't have it both ways. So, are you volunteering to test one of these rejected drugs? No, we don't. We use the word "euthanasia". That's why clinical trials are designed in such a way as to eliminate the placebo effect. Haven't you ever heard of "randomized double blind" studies? Phase II clinical trials are all randomized double blind. You can't be sure. It's very difficult in science to be sure unless you have falsified a theory. But animals can provide good models that greatly increase our chances that a drug will be both safe and efficacious when tried in humans. 1. Because of evolution, knowing about other animals does help us know about humans. We are related to other mammals by descent with modification. 2. And that part of research "not applicable to humans" is part of the falsification I talked about above. That is the part we are certain about. However, we wouldn't know it was not applicable to humans unless we had done it first. And yet we have all these cures because of research on animals! How about that? You can do that in the USA too. It's a Category C study, in which pain relieving medication can't be used. It's very difficult to get a Category C study approved. The justification must be airtight. I speak as someone who sat on an IACUC for 8 years and have submitted my own proposals -- none of which have been Category C. But of course the AR people are all saints, with no agenda and they would never distort the truth for their cause, would they? No. The major reason is the legal/ethical status of the clone: is it fully human or a piece of property? There are also technical/practical reasons: 1. Cloning so far is very inefficient, requiring 500 to 1,000 ova for every clone produced. It's difficult to get that many human ova. Are you willing to donate yours? 2. It turns out that some of the enzymes necessary for the initial divisions of the fertilized ova reside in the sperm in humans. 
Therefore primate and human clones are more difficult to generate than rat or mouse clones. WOW! You really like to slander researchers! How many do you personally know? Again, are you willing to volunteer to take a drug that quickly killed a rat? Or are you going to think "rats and humans are both mammals and any drug that kills a rat is likely to kill me."? Notice that word "rare". And sometimes that happens. More often than not, the drugs do not have an unpredicted rare side effect. Don't you think we would use serum-free media if we could? The reason people started using fetal bovine serum was because the cells died in serum-free media. Because up until recently none of the in silico systems were accurate compared to the known data we already had from animals. If your in silico program predicts drug A will be harmless but you know from previous testing that drug A caused liver toxicity, then you don't trust the program. Duh! Remember GIGO. A computer program is only as good as the input. If that is flawed, then the output is flawed. And up until the last 5 years or so, we didn't have enough data to model complex whole animal interactions. We still don't for most things. That's why you have to use the whole animal. So why should you object? After all, since you think animal testing can't predict what happens in humans, why not use humans? According to you, using humans as the first line of testing is the only way to go. However, you have cited a lot of "programs" but not documented any of them. Please do so. "opinions" or positions get discussed to determine their validity. Saying something is an "opinion" doesn't exempt it from critical evaluation. Yes to both. In the wild the dam finds a secluded spot away from the male: a defendable spot. However, in a cage there is no way to do this. Nests are usually in corners, but the male is right there. Once the pups grow fur, I've never seen a male rat eat them. But newborn rats are vulnerable. This wasn't that. 
Rats fight off predators. Basically, once a rat gets used to a procedure, you can do anything you want with no risk. Even for the first couple of times (and I witnessed that), when the rats were carefully immobilized so that they would not bite the researcher, they don't flinch or show any outward signs of pain at the injection. Their entire objection is to being held. You need to be careful about projecting your human perceptions on other animals. You make a big deal about rats and humans not being predictors about human reactions. Consistency demands that you do the reverse: what happens with humans cannot be used to say other species are the same as humans. Yes. I have seen rats in pain from infection, inadequate anesthesia, or failed bone fixation. PC, my job in no way "depends" on causing pain or terror to animals. Most of my research is cell culture. Nor do any of my animal experiments require the animal to have any more pain and/or discomfort than a human undergoing the same procedure. Therefore all my protocols include the same analgesic treatment a human patient would get. But it appears that you have no first-hand experience of animal behavior. Your position depends on animals feeling pain and you don't want to hear any contrary evidence. Therefore, by your own logic, anything you say about how much pain animals feel should be taken with a large pinch of salt. That's a convenient way to get rid of evidence against your position, isn't it? Since that same argument applies to yourself just as well, why should we pay any attention to you? I didn't say "sacrifice", did I? I said "euthanasia". And the reason is twofold: 1. We don't like killing animals. We really don't like seeing them in pain. My first experience with animal research occurred as a junior in college. I was at Kansas U Medical Center working with a guy studying whether the drug antabuse was an effective means of preventing alcoholics from drinking again. 
For the experiment he needed to know precisely how much water the rats had. The way to do this is to put measured water in a syringe and the "needle" has a ball on the end. Hold the rat, put the ball at the back of the throat, tilt the syringe and needle up, and have the needle slide down to the stomach and then inject the water. The guy showed me this very quickly (he had done this thousands of times). With the second rat I did, I missed the esophagus, got the trachea, and injected 15 ml of water into the rat's lungs. He thrashed for several seconds as he drowned. I threw up. It obviously haunts me today. 2. Euthanasia is different from "killing" (so is "sacrifice"). Killing does imply pain and fear on the part of the victim. Euthanasia implies the lack of both. I like carbon dioxide inhalation. Put the rat into a box and run carbon dioxide thru it. The rat quietly goes to sleep. A colleague prefers cervical dislocation in the mice he works with. Why? Because, according to him, he can dislocate the cervical vertebrae in a hundredth of a second, sparing the mouse any pain. Another euphemism, 'Final Solution', sounds nicer than the reality of what it meant. The articular cartilage defects were completely regenerated to the point that you can't tell where they were. Now, how can I do "human-based research" without vivisection on humans? Is it OK to do vivisection on humans? If so, why isn't it OK to do so on rabbits? Who is Sciecewiz and what is your source? Please give a citation. And why should we trust these people? Oh yes, you depend on them and they agree with you. So of course they are trustworthy. You didn't answer the question. Do you know what an Institutional Animal Care and Use Committee is? Do you know the mandatory composition of these committees? Have you ever sat in on any meeting of one? If not, how can you say how much of their deliberations are due to "corruption" or whether the committee people can be trusted? 
BTW, one of the requirements is that committee members can receive no compensation whatsoever. The IACUC I served on for 8 years held our meetings at the local Olive Garden so that we got free lunches. Most committees don't get that. Look at the results in advances in medicine over the past 50 years. No, they don't. They can't. Anyone having connections to drug companies can't serve on an IRB. In fact, we had an example in ethics this year of a physician working for a pharmaceutical company who wanted to teach residents in clinic. This was allowed only under the condition that her prescription use was monitored to ensure that she was not overly prescribing the drugs her employer made. After a year -- during which she was clean -- she asked if she could join the IRB. She was told "absolutely not" because of the potential conflict of interest. I'm afraid the people you trust have given you some really bad information. Please cite a source! In the USA the rules expressly forbid payment for participation in Phase I trials. That is an ethical problem, not a scientific one. And I agree with you; this is wrong. Patients must be counseled as to the risks of the participation. Which one? As you should have noted, I was arguing against using unwilling human participants in human trials. However, it is unclear why you think this is so. After all, you say the only way to test new drugs is on people. So why not use people? My position is that the ethical thing to do is test animals first. Same thing. You won't accept any data contrary to your view. All you've given is an invalid reason why you won't accept any such data. So what about physicians? They "experiment" on you every time they treat you. By your logic, you shouldn't go to a physician. But I bet you do. Apples and oranges. After all, we can consult the victim. And you can inspect the animal protocols and visit animal research facilities to check for yourself to see that the animals are treated humanely. 
PC, what you are forgetting is that science is public. Everyone must be able to get the same results in approximately the same circumstances. I've offered to post the IACUC forms I have to fill out. The requirements for IACUC and IRB committees are publicly available, as are the inspection criteria for animal facilities to meet AAALAC approval. Look for yourself. See? Not a rational discussion. If you are close to New York, you can come watch one of the operations. Or I can give you the name of some undergrad students who have participated. Same way human patients are tested: to a mild pain stimulus. In this case a toe pinch. If the animal draws the foot away, anesthesia is not complete. Now, just what percentage of human patients have reported pain while under general anesthesia? This is not local anesthesia but general. Complete unconsciousness. So do humans! Don't you feel fear/anxiety before an operation? I know I have. So why do you want a condition to apply to animals that we can't satisfy for humans? How about plants? The plants are in a cage, aren't they? They are grown and then savagely killed -- oftentimes torn out by the roots. How about a lion hunting? Or even a domestic cat hunting a bird or a mouse? How much fear does the prey feel? Or pain as the carnivore bites down on the neck? You are trying to duck the issue: every species exploits other species. This is completely unavoidable in animals; every animal must at least exploit plants. mooeypoo (post 336): 'Perhaps but the problems start with defining what a "Good Cause" is (to you it can be one thing, to me another, and we each can consider each-other's subjective 'good causes' as absolutely not worth it), and the second problem is what TYPE of actions justify what type of means.' PC - Yes. There were causes good enough for the Aztecs to sacrifice humans. There are causes good enough to lead one nation to attempt genocide. That is your wishful thinking. Yes, there is. 
Did you miss my statement how we closed down a researcher for not following the rules? I've kept it for you. Animal facilities must be regularly inspected. Since IACUC members cannot receive any compensation, what is the source of corruption? Your premises are wrong. Researchers have an even better chance at getting caught if they break the rules. Remember, they have people outside their lab caring for and inspecting the animals. People whose jobs depend on the researcher following the rules. If the researcher doesn't and then they get caught in an FDA or USDA inspection, the animal facility is shut down and the caretakers are out of a job. IACUC meetings are open to the public. You won't trust me but then you say false things that anyone can look up and see are false! Who has the credibility problem here? Not for the scientist. You might perceive it that way because there is no jail time involved, but having your research shut down is a MAJOR punishment. That does not mean it is all subjective. Either you or I might have made a mistake in our premises or our reasoning. Oh, but they did. For instance, one of the early pieces of evidence against using animals in research was a video showing a researcher waving a blowtorch over the skin of a pig to cause burns. No researcher could possibly perform that research today. Several times! However, you can read the regulations for yourself. But now the burden of proof is on you to back that claim. You must show specific instances where the rules are not adhered to. Please do. 1. Because we never would have had the necessary tests for efficacy and safety without going to animals. You seem to forget (although you argue for it) that research in humans is extremely limited. There are lots of experiments we simply are not allowed to do in humans until there is animal data. So we couldn't have made it to human clinical trials without the animal data. 2. It's possible that "better" treatments were scrapped. 
The system is biased towards safety at the expense of efficacy. Do you want to change that bias? Your other comments say "no". 3. Using non-humans is NOT a "lottery". That is part of the mythology AR people must have. After all, if animal testing is predictive of human success, the AR position crumbles. So in this instance, by your own criteria, we can't trust anything you say. However, it is demonstrably otherwise. I can provide several instances where we can trace a successful treatment/drug by the scientific publications and you can see how animal testing was critical. OTOH, you must show how, in the last 50 years, the majority of new treatments/drugs were not based on animal research. Please do so. Irrelevant answer. The original claim was that there was NO sponsored research on animal alternatives. This shows otherwise. Also, NIH funds are completely separate from money spent marketing drugs. You simply can't compare the 2. Of course the pharma companies market their drugs; companies market cars, houses, computers, and every other product. So what? It costs over $500 million to bring a drug to market. The companies have to recoup that cost or there are no new drugs. As you admitted, the policies do stop abuse! "From time to time". Yes, occasionally individuals will get around any system. By your logic, we should shut down children's and retirement homes because there will be abuse! So stop taking care of those children and old people because a very few of them will be abused? Do you see the flaw and irrationality in your argument? You've repeated this several times. Since this argument is essential to your AR stand, why should we believe you? You have motive for making this statement. As I noted in other posts, the literature is full of papers making comparisons between animal models and human conditions so yes, one species can be a reliable model for another. As just one example, guinea pigs are a reliable model for human vitamin C deficiency. Really? 
And I suppose we are going to see you at the head of the volunteer line? You've mentioned it, and it is still wrong. Non-humans have been used to predict both. I notice you did not mention new drugs/treatments. Since you think that there are a lot of undesirable side effects to any new drug in humans, why would any human volunteer to test a drug without animal testing first? Would you? So the end result is still a complete halt. If that is what you want, then be honest enough to say so. And they can have NO difference. Look, no one says animal testing is perfect and eliminates all risk when we go to humans. That's why we have phased human clinical trials. But before we ask a human to take the risk, the animal testing has provided data that the risk is minimal and worth it. You have a very low regard for your fellow humans if you want to just pump any new compound into them first. The claim was about scientific data that was said to exist! Therefore we can rightfully expect to have scientific papers documenting that scientific data. Sorry, you can't claim "science" and then say that there is none! You really don't understand how science works, do you? We as scientists get fame by showing things to be wrong. Think about it. Einstein is famous for showing Newton to be wrong; Hawking is famous for showing Einstein to be wrong. Darwin is famous for showing all the Special Creationists to be wrong. It works at all levels of science. Sometimes we can. The use of adult stem cells for recovery after heart attacks did go from mice to human clinical trials. And they are working out quite well! lucaspa (post 372): 'Again, untrue. Because of evolution many of the biological systems are very similar.' PC - Not similar enough when considering how a drug will work. lucaspa (post 372): ' The actual record is that animal efficacy is a strong predictor of human efficacy.' PC - Can you guess what I am going to say? Correct - give that person a cigar! 
It is not strong enough when considering what a drug will do. It is very late. I can hardly keep my eyes open. To be continued.
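The natural-selection argument earlier in this post (kill 99.9% of a tumor and it regrows from the resistant remainder) can be illustrated with a toy calculation. Every number here is an assumption chosen for illustration, not data from any study.

```python
# Toy simulation of selection for treatment resistance in a tumor:
# the drug kills 99.9% of sensitive cells but spares a small
# pre-existing resistant fraction, which then regrows the tumor.
# All numbers are hypothetical.

initial_cells = 10**9        # assumed tumor burden
resistant_fraction = 1e-6    # assumed pre-existing resistant fraction

sensitive = initial_cells * (1 - resistant_fraction)
resistant = initial_cells * resistant_fraction

# Treatment: 99.9% of sensitive cells die; resistant cells are unaffected.
survivors = sensitive * 0.001 + resistant

# Regrowth: survivors double repeatedly until the tumor is back to its
# original size -- now enriched for the resistant lineage.
doublings = 0
population = survivors
while population < initial_cells:
    population *= 2
    doublings += 1

print(f"survivors: {survivors:.0f}, doublings to regrow: {doublings}")
```

Under these assumed numbers only about ten population doublings are needed to restore the original tumor burden, which is the point of the argument: the kill rate is impressive, but the survivors are exactly the cells the drug cannot touch.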
  6. Not really. There are 2 claims here: 1. Other mammalian species have metabolic routes that humans don't and humans have metabolic routes that other mammalian species don't. 2. Both humans and other mammalian species have the same metabolic routes but that some mammalian species use the routes in different proportions than humans do. You are stating claim 1 and I am stating claim 2. Both rats and humans sulfate drugs and use the cytochrome P450 system to oxygenate them. What I am saying is that, for drug A, rats might have 75% sulfation and 25% oxidation while humans would have 75% oxidation and 25% sulfation. IF the oxidation route produced a toxic metabolite, then humans might produce enough of the metabolite to show symptoms while the rat would not. Is that clear now? This is the law I told you about. Congress enacted laws in 1958 with the "Delaney Clause" that stated that the FDA could approve no drug or additive that caused cancer in any species at any dose. Note the "any". This began to be challenged over the addition of the sweetener in "Tab" (which was pulled from the market). Massive amounts of the sweetener were given to rats and there was a slight increase in the rate of cancer in the rats. Very, very small but with enough numbers it was statistically significant. However, the amount of sweetener the rats were getting would mean that humans would have to drink about 1,000 cans of Tab per day. This was decided to be unreasonable and the animal trials were decided to be unrealistic due to the massive doses of chemical -- far higher than humans would ever ingest. Since then the Delaney Clause has been modified or dropped, especially in terms of drugs. It is still in effect in terms of food additives and some pesticides. 1. It's not a systematic review. There is no indication from the Abstract how the reviews were picked. 2. 
You missed this sentence: "The poor human clinical and toxicological utility of animal models, combined with their generally substantial animal welfare and economic costs, necessitate considerably greater rigor within animal studies, and justify a ban on the use of animal models lacking scientific data clearly establishing their human predictivity or utility." So the article does not recommend discontinuing animal trials, but instead wants more rigor in the animal studies. Animal studies are still necessary, but what it is saying is that people apparently are not doing them properly. You seem to have misread the article: "An article in the prestigious science journal Nature has decried the use of mice as "models" for testing drugs intended for use in humans as "nearly useless"." This does not say that animal models are useless. It is only saying that, for this particular set of diseases, mice are useless. Nor that animal testing in all forms does not predict. This is science at work. Orthopedic surgery went thru the same process in terms of animal models for cartilage repair. It turned out that a company -- Geron -- leaped from rabbits to humans (because of the profit motive) and skipped tests on larger animals. The treatment -- Carticel -- still works in a limited set of cases but not in the wider set that Geron claimed. The extrapolation from rabbits to humans was invalid. Now all treatments for cartilage repair have to go thru a sheep model, which is a much better predictive model for humans. This is science looking for the best animal model, not saying that animal models are no good at all. If extrapolation was "impossible", then NONE of the treatments that work in animals would work in humans. However, even your data (and history) shows that this isn't true. No one said the system was going to be perfect. That's why there are Phase I and II clinical trials. 
The major purpose of the animal experiments is to eliminate those that have no chance of working in humans. Please document what you mean by "massive amounts of damage". LOL! No, it's a word used in science. And you are wrong. As I noted, fracture repair in rats is "very similar" to that in humans. So is wound repair in general. The difference is not in the steps, cells, or molecules used, but instead in the timing. Fracture repair and wound healing happen about 4 times faster in rats than humans. That's a difference that can easily be compensated for in moving from rats to humans. I am documenting that it is misinformation. Plain denial won't help. I asked for the sources of your information. Significantly, you ducked and didn't provide your sources. So I'll ask again: where are you getting your (mis)information about science? 1. Most human organs involve many different systems and it's impossible to provide them all in vitro. For instance, cartilage requires mechanical stimulation and that is difficult to provide. 2. Many of the human cells required are available in very small numbers. Remember, we have to get human cells from fresh cadavers and require the permission of the family. That severely limits the number of cells. Not many people are going to donate a full muscle or allow harvesting of their articular cartilage. 3. In case you didn't know it, most cells (differentiated cells) have a finite lifespan. It's known as Hayflick's number. For humans, that is about 50: 50 cell divisions during an entire human lifetime, after which the cells senesce and die. So if you do an organ culture of, say, blood vessel cells from a 70 year old who died of a heart attack, you get only about 5-10 cell divisions and then they die. Some cells don't divide at all. Cardiac muscle cells, for instance, don't. So it's very difficult to get an organ model for human heart: you would have to harvest large numbers of cardiac muscle cells from a lot of people right after they die. 
Stem cells, both adult and embryonic, may someday help this problem. Here we have differential toxicity. Chemotherapeutic drugs for cancer are meant to kill rapidly dividing cells. Since cancer cells are rapidly dividing, they are the targets. Cardiac muscle cells, bone cells, muscle cells, nerves, etc. are not affected. At all. Because they aren't dividing. However, there are some cells in the body that are dividing: hair follicle cells, intestinal villus cells, hematopoietic cells, cells participating in wound healing, etc. These cells are affected by chemo drugs, thus the side effects. The dosage is carefully titrated to minimize the effects on these cells (as much as possible) while providing a killing dose to the cancer. As we know, sometimes this isn't possible and the cancer kills the patient anyway. Ironically, it was animal studies that provided the initial data for the titration and knowing that the chemo drugs would kill cancer cells. In evolutionary terms, it is "recent". In terms of human lifetimes, of course, it seems a long time ago. Chimps, of course, share our most recent common ancestor. Rats and mice are used because 1) they are evolutionarily close and 2) they are small and cheap. Of course. Those are the 2 things that you are testing: safety and efficacy. What other reason for failure would there be? The article itself notes that this has not been the figure. Instead, there has been a 2-3 fold drop in percentage of approvals. I guess your "more or less" is very broad. And no, as I noted, the number of larger animal experiments has gone down. Very few primate experiments are done any more, both due to expense and the ethical concerns raised. Also, the expense of getting new drugs to market has quadrupled over the past 2 decades. So drug companies are taking shortcuts on cost whenever possible. That's why we see the leap from proof of concept in neurodegenerative diseases in mice straight to human trials (your reference). 
There should, at least, have been some cat studies in between. Repeating the same fallacies won't make them true. Animal response is not "unpredictable" and the conditions are not an "unrelated replica". Again, you have to remember that all the drugs that DO work in humans went thru the same pathway. And you are forgetting the drugs/treatments that were eliminated. When you include them, the predictive success rises considerably. Tamoxifen reduces cancers in animals! The rats had cancer anyway! Tamoxifen either 1) reduced the risk or 2) reduced the size of the tumor. http://www.google.com/search?q=tamoxifen+cancer+animals&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1 Again, where are you getting your information? Wait a minute. I said "That the drug is harmless in animals is not a guarantee that it is harmless in humans. " That is just one thing: safety. You are combining 2 different things: safety and efficacy. The failure due to safety issues is much smaller than 92%. I think you are trying to say that every species of mammal will respond the same way as humans. No one made that claim. That's why we have different species for different diseases and tests. You are wrong about digitalis. It raises blood pressure: "In moderate doses digitalis slows the heart-action, increases the force of the pulse, and from these effects chiefly, raises blood-pressure." http://www.swsbm.com/FelterMM/Felters-D.pdf The idea that dogs do not mimic human action is a myth. Here is the correct information: http://www.rds-online.org.uk/pages/page.asp?i_ToolbarID=2&i_PageID=1075 Where did you get that figure? Think a bit about it. On the surface that claim is absurd! Since most of our infectious diseases arose by microbes leaping from a previous host to us, you know it must be wrong. A little thinking will convince you the rest of it is wrong. For instance, humans get scurvy. So do guinea pigs. Pigs have coronary artery disease. All mammalian species get cancer. 
Oh good grief. Most of my papers deal with tissue engineering with adult stem cells. I and my colleagues picked animal models precisely because of their resemblance to the human condition. Let me give you just one example. I just finished a grant application to use adult stem cells to treat intervertebral disc (IVD) degeneration. The model we will use is in rabbits and involves puncturing the IVD with a needle attached to a syringe and aspirating the nucleus pulposus. In humans, the annulus fibrosus (the tissue surrounding the nucleus pulposus) will develop a crack and the nucleus pulposus will be extruded. IOW, it leaks out. The animal model has already been documented for its resemblance and similarity to the human condition: Masuda K, Aota Y, Muehleman C, Imai Y, Okuma M, Thonar EJ, Andersson GB, An HS. A novel rabbit model of mild, reproducible disc degeneration by an anulus needle puncture: correlation between the degree of disc injury and radiological and histological appearances of disc degeneration. Spine. 2005 Jan 1;30(1):5-14. Read the article for yourself. The whole point of the study was to mimic the human condition! Don't make strawmen. No one said they were "little people". Instead, we recognize that they are models for humans. You need to show how the other words are wrong. Please go ahead. Lost? What about that 8% (by the one figure) of drugs that get approved for human use? You call that "lost"? Look, if you really believe the vast majority are "wrong", then don't let your doctor prescribe you any drug or propose any treatment. Because all of them were worked out on animals. Do you see how ridiculous this is? So, should we go back to all the studies that showed no efficacy and toxicity in humans and now run them thru clinical trials? After all, being "useless and uninformative" would also apply to the "failures", wouldn't it? That should be your logical position. And yes, we do know whether an experiment is "wrong" or "right". 
That's why we have peer-review: to check the methodology. Again, you are inconsistent. You don't mind our acceptance of animal experiments that showed safety problems or no efficacy, do you? But by your statements, those have just as much "chance" of being "wrong" as the ones that prompted clinical trials. LOL! My, you can build strawmen with the best of them, can't you? ANY chemical/drug has what is called the "therapeutic range". Below that it is not effective and above that it can be toxic. When pharmaceuticals are tested what is looked for is the "therapeutic index" http://en.wikipedia.org/wiki/Therapeutic_index. The higher the therapeutic index, the wider the window you have between efficacy and toxicity. So yes, you can take enough aspirin to kill you. You can drink enough water to kill you. The point is that animal studies are the primary place that the therapeutic index is worked out. If the therapeutic index is 1 or less, then the potential drug is eliminated. So no, aspirin would not be eliminated. Neither was digoxin, even tho its TI is 2 to 3. But what the animal tests did do was tell physicians how closely they had to monitor the dosage they gave to people. Well, the 2 examples you gave were a) a strawman (aspirin) and b) a myth (digitalis). Would you like to try to give a valid answer? 1. First, both Vioxx and thalidomide were efficacious. Thalidomide is a potent sedative. 2. The "devastating" results are not that. The percentage of "thalidomide babies" or people suffering heart problems with Vioxx was very small. What humans define as unacceptable risk is sometimes irrational. The odds of getting a heart attack from Vioxx are 1,000 times less than my odds of being injured or killed in a car accident, for instance. Yet we continue to commute every day. 3. The problem with thalidomide was insufficient animal testing. It was not routine at the time to test for teratogenic effects. Also, it turns out that rats and mice are resistant to the teratogenic effects. 
You need primates as an adequate model. Now, however, teratogenic testing is required. No one said animal testing would get every toxic effect. In the 1950s the procedure of phased clinical trials was not in place. Thus thalidomide went directly to widespread usage. Even so, phase I or II clinical trials might not have picked this up since such a small percentage of recipients would have been pregnant. Thalidomide is an example where a drug can fool any system. Life isn't totally safe. The only way to avoid having thalidomide babies would be to require primate teratogen testing or give up any new drugs altogether. Which choice do you advocate? Micro-dosing doesn't help. If the TI is less than 1, you are going to have toxic effects on humans -- perhaps even kill them -- while testing. Are you going to volunteer? It's in the article. Read the whole article and not just the erroneous conclusion you got on animal welfare sites (yes, I see that the article is constantly quoted on all those sites; that's where the web search initially landed me as I was searching for it) Can you quote, from the scientific literature, data to support this? Of course, there are more scientists out there year by year and increasing research budgets for biomedical research, but my experience is that animal experimentation is on the decrease. People and pharmaceutical companies are looking for alternatives. And yet you cited a paper from Nature where there is scientific evaluation of part of the process! LOL! Undercut your own argument again. Try to get this thru your head: animal experiments are costly and difficult. It is in our best interest to find a way around them. We don't because we can't -- so far. When we can, we will. And yet there is no source even in that molehill. Supposedly 2 studies are cited, but there isn't a full citation so I can look up the original papers. Notice that they are talking about all side effects, including all the minor ones. Not "toxicity". 
Let's look at the list: "Furthermore the report confirmed that many common side-effects cannot be predicted by animal tests at all: examples include nausea, headache, sweating, cramps, dry mouth, dizziness, and in some cases skin lesions and reduced blood pressure. " Since animals can't talk, they can't tell us about nausea, headache, dry mouth, cramps, or dizziness. All of these are minor inconveniences, not life threatening. Notice that only "in some cases" were skin lesions and reduced blood pressure not predicted. Apparently sometimes the animals did develop skin lesions and sometimes the experimenter actually took blood pressures on the rats. Which, again, is why we still have Phase I and II clinical trials. It wouldn't be "some random species". Instead, it would be a species used to test efficacy of the drug or treatment. If there was a problem in one species, a second might be tried. If the drug is toxic in both (or maybe just the one) it would be discarded. For instance, suppose the drug passed the human fibroblasts, moved to rats and caused liver toxicity and failure. Goodbye drug. The statement does make sense. Once again, your premises are in error. The "number" is pretty small. Both Vioxx and thalidomide passed the animal tests of the time. Vioxx because the number of cardiac problems was so small as not to be noticeable until large numbers of people were involved. When the difference is very small, you need huge numbers to detect it. And, of course, you are not talking pure science here anymore. You get into the integrity of individuals and their desire to 1) make profit and 2) avoid loss. Not true. Unless you are making the strawman of giving a huge fatal dose instead of a therapeutic dose. And yes, you appear to be making that strawman. The digitalis is a myth; it raises blood pressure in humans, too. The cat example is not a good one. Aspirin is toxic to cats if they are given a human dose. 
But if aspirin is given on a mg/kg basis adjusted to the body weight of a cat, then they are OK. Again, you seem to be making a strawman. You are picking a single species, but I was using animals. Cats are not the normal animal model for analgesics: rodents and dogs are. In those animals, aspirin is not toxic. Maybe strawman arguments are the only ones you can make? You aren't doing that. You are forgetting the large amount of animal data that has been predictive in humans: either for toxicity or for efficacy. Instead, you bring up only 2 or 3 examples (which turn out to be wrong). That is selective data. Look at INow's later posts. No one destroyed "a dogs pancreas". Rather, the insulin producing cells were destroyed. And that is exactly what happens in type I diabetes! The pancreatic islet or beta cells that produce insulin are destroyed by the body's immune system. No insulin production. So the issue is whether administered insulin can regulate blood sugar. All the type I diabetics who are alive today owe their treatment to these animal studies. You simply can't honestly deny it.
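The therapeutic-index and mg/kg arithmetic discussed above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not real pharmacology data, and the simple weight-proportional scaling shown is naive: real interspecies dose conversion uses allometric (body-surface-area) scaling.

```python
# Illustrative sketch only: the dose figures below are hypothetical.

def therapeutic_index(td50_mg_per_kg, ed50_mg_per_kg):
    """Ratio of the median toxic dose to the median effective dose."""
    return td50_mg_per_kg / ed50_mg_per_kg

def scale_dose(human_dose_mg, human_weight_kg, animal_weight_kg):
    """Naive mg/kg scaling of a human dose to a smaller animal's body weight."""
    return human_dose_mg / human_weight_kg * animal_weight_kg

# A drug with TI <= 1 is toxic at or below its effective dose and
# would be eliminated during animal testing.
print(therapeutic_index(td50_mg_per_kg=300, ed50_mg_per_kg=10))  # 30.0 -- wide window
print(therapeutic_index(td50_mg_per_kg=8, ed50_mg_per_kg=10))    # 0.8 -- eliminated

# Scaling a 325 mg human aspirin dose (70 kg adult) down to a 4 kg cat:
print(round(scale_dose(325, 70, 4), 1))  # 18.6 mg
```

The point of the sketch is only the arithmetic: a drug like digoxin with TI around 2-3 leaves a narrow window, which is exactly why the animal data told physicians to monitor dosage closely.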
  7. The area of beauty products and chemicals (cleaners, solvents, etc.) is the prime area where cultured human fibroblasts are rapidly replacing animal testing. It's cheaper, more sensitive, and more reliable (in addition to whatever ethical concerns there are). The effect on cell metabolism can be measured by automated systems such that large numbers of cultures can be processed in a short amount of time. For my research -- which is tissue engineering -- it is absolutely essential. There is a recent paper that illustrates this quite well. Limb ischemia (cutting off blood flow to a limb) results in 100,000 amputations in the USA alone per year! This paper used endometrial regenerative cells (ERCs or stem cells isolated from menstrual tissue discarded during menstruation) to prevent limb ischemia: http://www.translational-medicine.com/content/pdf/1479-5876-6-45.pdf Scroll down and look at the picture of the mice in Figure 2. The one on the left is the control; the one on the right is the treated. Human limbs of people suffering from limb ischemia today look like the one on the right. That's why they get amputated. The only way to show that these stem cells would prevent the picture on the right was to do the study in animals. (BTW, both animals got analgesics so that they were not in pain.) Notice this: " All animals were cared for in accordance with the guidelines established by the Canadian Council on Animal Care." These guidelines require appropriate pain-killers be used. Now, do you want to save people from amputation or not? If you value animals so highly that you don't think we can use them for our own purposes, then you have to say "no" and have people with ischemia have their arms or legs amputated. BTW, this same treatment could be used to save people from heart attacks.
  8. Back in the Jurassic. That's when a species of mammal-like reptiles became mammals. Look up "mammal-like reptiles" on a web search. Remember, "dogs" are part of the family Canidae, which is part of the order Carnivora, which is part of the class Mammalia. So way back in the Jurassic you have a group of species called "mammal-like reptiles" that have some features of reptiles and some of mammals. One species of that group gave rise to the first mammalian species. That species, in turn, gave rise to all subsequent mammalian species by the process known as "cladogenesis" -- which is when an existing species splits in two (or more). One population of the existing species is isolated either by geography (allopatric) or lifestyle (sympatric) and transforms to a new species. So now you have 2 (or more) species where there was once one. At the beginning of the Tertiary, after the extinction of the dinos, the few surviving mammalian and bird species underwent a huge cladogenesis called "adaptive radiation" because there were all those empty ecological niches once filled by dinos. That is when you see the beginnings of the families of carnivores emerge.
  9. The computer modeling is used as an initial test for toxicity. Drugs that are obviously toxic are eliminated: "obviously harmful drugs were eliminated" Given the undeniable differences in metabolism inter-speciem 'it's possible that a drug will metabolize to a compound that is harmful' in humans, that the animal 'models' missed, too. If there is conflict in animal data, which there often is, how does one settle this dispute before proceeding to human testing? That is to say, which is the 'authentic' predictor? The different routes of drug metabolism are known. There are differences in the major routes of metabolism, but the routes are all there. For instance, rats tend to sulfate drugs more than humans do. This means that the human P450 system might make a toxic metabolite that rat testing will miss. This will not be picked up until phase I clinical trials. As to "predictors", there are some legal constraints. For instance, if any drug shows any increase in cancer in any species, it can't be used in humans. No matter how little the increase is or how effective and necessary the drug is. This has caused a lot of discussion in both scientific and political circles as people come to grips with cost/benefit ratios. Otherwise, if the drug is effective in animal trials and shows promise in treating human diseases not treatable by other means, it is usually tried in Phase I clinical trials. If you have a new cancer treatment, that moves forward. If you have a modification of aspirin that is slightly more effective than aspirin, that would not move to clinical trials. That is the fallacy. Pharmacokinetics are remarkably similar between mammalian species. The distribution of metabolic routes of drugs is different, but all the routes are there in different mammalian species. That's a bare assertion. Please post the peer-reviewed scientific papers to back that up. Because of evolution, the differences between species are not as great as you make out. 
The differences in fracture healing, for instance, between rats and humans are minimal. The differences between individuals in the rats are about the same as differences in humans. But the biological events -- even the cell types and molecules -- are the same. Again, untrue. Because of evolution many of the biological systems are very similar. For instance, the data for the Carticel treatment of articular cartilage defects was obtained in rabbits. Rabbit articular cartilage, its structure, metabolism, damage, and repair, is the same as in humans. Where did you get the misinformation you have? I was speaking of organ culture systems. These are in vitro -- in culture -- systems. Organ culture systems for every human organ are not available. So you have to go into an animal to get ALL the various organs. My apology for the confusion. What I meant by "will not work" in this context are those that will obviously be toxic. If the drug is toxic, it "will not work". As several people have pointed out, this is not true. Due to evolution, there is greater similarity between species with recent common ancestors than you are giving credit for. The actual record is that animal efficacy is a strong predictor of human efficacy. No one said there was a "guarantee". You are moving the goalposts. We said that the testing was necessary to give us better predictors. If the drug turns out to be toxic in animals, it is not used in humans. That the drug is harmless in animals is not a guarantee that it is harmless in humans. That's why there are Phase I clinical trials. If the drug is useless in animals, then it is not used in humans. However, while there is a strong correlation of efficacy in animals to efficacy in humans, there is no guarantee. That's why there are Phase II clinical trials. Not to confirm toxicity. I can't think of any case where there was conflicting toxicity testing in animals and the drug went to clinical trials. Can you name an instance where this happened? 
As to the predictor, that is not entirely true. Because of the different emphasis in drug metabolism routes, it's possible that one species that uses a route that is minimal in humans may give a false positive. As I stated, rats tend to sulfate drugs predominantly while humans tend to use the cytochrome P450 oxidation system. The sulfated metabolite may be toxic but the oxygenated metabolite may not be. So if the drug is toxic in rats but harmless in primates, then you go ahead. Because primates share a more recent common ancestor, they are a better predictor. (They are also so expensive that they are rarely used as animal models.) What specific conditions and/or diseases are you thinking of? A scientist is not going to use a model that bears no resemblance to the human disease. That makes no sense. In many cases the condition must be induced, but it is done in such a way as to either mimic the human condition or be tougher than the human condition. For instance, rabbits don't spontaneously develop osteoarthritis, so when we wanted to test a treatment for osteoarthritis we had to surgically create a full thickness defect in the articular cartilage in the rabbit knee. The defect size was such as to be comparable to what is seen when humans present to a doctor complaining of pain in their joints. Yes, I typed the wrong word at the end. Here is the corrected version: "Lots of "cures" out there that worked in mice, rats, or rabbits that never worked in people. But before you get to humans you do everything to ensure that the drug is both safe (the #1 priority) and effective in animals." The fallacy is in your first statement. Animal tests are not "completely uninformative". They eliminate toxic and useless treatments and drugs before you get to human clinical trials. They give you an idea that a drug or treatment at least has a good chance of being efficacious in humans. 
If you give up animal testing, then you must either do all the testing in humans -- with all the risks that involves -- or you freeze medicine at the current levels. If you will not risk harm to animals, how can you justify risking harm to humans? Where did you get this figure? Never mind, found it. It is a news article by Anne Harding in The Scientist describing how the tightening of FDA regulations is resulting in turning down more drugs. But it has been picked up by all the animal rights pages. You're the victim of out-of-context false witness and possibly fraudulent information. Another scientific news organization wrote "The FDA was unable to identify the source of these figures for The Scientist by press time." Even if the figures are accurate, the article isn't talking about the failure of the usefulness of animal testing, but instead about the record of the FDA in granting approval. One of the problems the article points out is that companies are skimping on the animal testing! IOW, the figures are dropping because the FDA is letting companies do less animal testing than they should be! It's not that the animal testing is failing, but rather that the companies are failing to do the appropriate animal testing and rushing to clinical trials! That's not a "given". Again, we need to know the source of this "given". The animal rights group you got it from has to document that. I made it clear that the fibroblasts are human. In fact, they are human foreskin fibroblasts. You are missing that this is a step-wise procedure. If the chemical shows toxicity in the human fibroblast cultures, it never goes to testing in animals. That chemical is discarded right there. Animal testing is VERY expensive. That's why the culture is used first. It's a lot cheaper. Only if the chemical/drug passes the culture test is it then used in animals. So your "problem" never arises. If the drug is "on the market", then it is effective and safe in humans. 
Otherwise it wouldn't be "on the market". However, your statement is not correct. Some drugs have been shown to be unsafe in every mammalian system tested. Thalidomide is one example that comes immediately to mind. Some are safe and effective in every mammalian species tested. Morphine comes immediately to mind; it is an effective pain killer in every mammalian species tested. But you are forgetting all the drugs that were eliminated along the way. If we had tested all of those in humans, then you would have found that drugs that we found harmful in animals were also harmful in humans. Can you name any drugs that were ineffective and/or unsafe in animals that turned out to be effective and safe in humans? You are using "selective data". As iNow demonstrated, the dogs did have diabetes. This is just one example where your "facts" are wrong. You need to get the facts straight before your argument is valid.
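The proportional-metabolism point made above (same routes in both species, different emphasis) can be put in toy numbers. The 75/25 split is the illustrative figure from the discussion; which route yields a toxic metabolite, and the route names themselves, are assumptions for the sake of the sketch, not measured data for any real drug.

```python
# Toy model: each species sends fractions of a dose down the same set of
# metabolic routes, but in different proportions.

def toxic_metabolite_fraction(route_fractions, toxic_routes):
    """Sum the fraction of a dose metabolized via routes yielding a toxic metabolite."""
    return sum(frac for route, frac in route_fractions.items() if route in toxic_routes)

rat   = {"sulfation": 0.75, "oxidation": 0.25}  # illustrative proportions
human = {"sulfation": 0.25, "oxidation": 0.75}

# Suppose only the oxidation (P450) route yields the toxic metabolite:
print(toxic_metabolite_fraction(rat, {"oxidation"}))    # 0.25
print(toxic_metabolite_fraction(human, {"oxidation"}))  # 0.75
```

With these made-up proportions, humans would produce three times as much of the toxic metabolite per dose as rats, which is how a drug can look safe in rat testing yet cause symptoms in Phase I trials.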
  10. Pioneer, if you look at the pedigrees of race horses, Triple Crown winners today can trace their lineage back to previous winners. That is because the artificial selection has ensured that all race horses are descended from previous winners. So the foal may not be a winner, but it will be good enough to compete. Remember, evolution happens to populations, not individuals. What you need to do is compare the times in major races now to those of 100 years ago. And look at the mean times +/- the standard deviation. Look to see if the curve has shifted. As John noted, you want to see the average time for all race horses. 1. Recent experiments in the wild have demonstrated that natural selection can work much faster than we see in the fossil record: Evaluation of the rate of evolution in natural populations of guppies (Poecilia reticulata). Reznick, DN, Shaw, FH, Rodd, FH, and Shaw, RG. Science 275:1934-1937, 1997. The lay article is Predator-free guppies take an evolutionary leap forward, pg 1880. This is an excellent study of natural selection at work. Guppies are preyed upon by species that specialize in eating either the small, young guppies, or older, mature guppies. Eleven years ago the research team moved guppies from pools below some waterfalls that contained both types of predators to pools above the falls where only the predators that ate the small, young guppies live. Thus the selection pressure was changed. Eleven years later the guppies above the falls were larger, matured earlier, and had fewer young than the ones below the falls. The group then used standard quantitative morphology to quantify the rate of evolution. So we have a study in the wild, not the lab, of natural selection and its results. The rate of evolution was *very* fast. Evolution is measured in the unit "darwin", which is the proportional amount of change per unit time. The fish evolved at 3700 to 45,000 darwins, depending on the trait measured. 
In contrast, rates in the fossil record are typically 0.1 to 1.0 darwin. However, the paper cites a study of artificial selection in mice of 200,000 darwins. 2. So, why is the rate of evolution so "slow" in the fossil record? Two reasons: a. Large populations b. Purifying selection. Remember, natural selection comes in 3 forms: directional, purifying (or stabilizing), and disruptive. We tend to think only in terms of directional selection. When a population is well-adapted to the environment, purifying selection will keep the population the same. No change. Also, as populations get large, it takes more and more generations for a new trait to spread thru the population. That slows down evolution. As Edtharan noted, most traits are polygenic (involve more than one gene) and most genes are pleiotropic (involved in more than one trait). This makes reasoning based on simple Mendelian genetics misleading. Also, as John noted, the artificial selection has been narrowly focused only on speed. But since humans don't know all the traits it takes for a horse to run fast, some of the breeding will actually get traits that hurt speed. Look at the horse this year at the Kentucky Derby. Apparently the breeders didn't take into account changes in the strength of the bones. The horse could run fast, but the bones weren't really strong enough to bear the forces on them. In other words, in the ideal Darwin, maybe the speed of evolution should be faster if we work under the assumption of a long lineage of Triple Crown families. But because this does not occur with any reliability, it shifts around, causing the genes to evolve much slower than expected, more in line with the slow evolutionary pace. Instead of perfection, maybe nature chooses diversity so all will evolve.
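The "darwin" rate unit used above is easy to compute: one darwin is a change in a trait by a factor of e per million years, so the rate is the difference of natural logs divided by elapsed time in millions of years. The trait values in the example below are made up for illustration; they are not the guppy paper's actual measurements.

```python
import math

def rate_in_darwins(x1, x2, years):
    """Proportional rate of change of a trait, in darwins.

    One darwin = a factor-of-e change per million years.
    """
    return (math.log(x2) - math.log(x1)) / (years / 1e6)

# A hypothetical 5% increase in a body-size measure over 11 years --
# the timescale of the guppy transplant experiment -- already works
# out to thousands of darwins:
print(round(rate_in_darwins(20.0, 21.0, 11)))  # 4435
```

This shows why field experiments report rates of 3,700 to 45,000 darwins while the fossil record, averaged over millions of years of stasis and reversal, shows 0.1 to 1.0.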
  11. Moved the goalposts. When I am talking "independent", I mean able to live without being part of a larger organism. You now try to change "independent" to mean "without anything else". Not valid. When the animal with the cancer dies, the cancer dies. Cancer is not a parasite or independent organism: it is aberrant growth of cells in a multicellular organism. It can only be kept alive outside the individual having the cancer by careful tending by a scientist. Shoot, cancer isn't even infectious! You can't "catch" cancer like you can a cold. A cancer cell in my body could not live in yours. So even obligate parasites like viruses or some microbes are "independent" in terms that cancer is not. Nice try, but it doesn't work. What the article is saying is what people in the cancer field have been talking about for 5 years or more: cancer cells are natural selection in action. In order to be "cancerous", a cell (and its descendants) must have mutations that 1) remove growth control, 2) allow it to evade the immune system, 3) allow it to recruit blood vessels. Having all these capabilities is why cancer is relatively rare: few cells make it thru the entire process without being eliminated by the environment. This doesn't make cancers a separate life form; it just means that natural selection does operate on them. None of what you said changes the fact that seedless oranges were not produced by natural selection. I appreciate you trying to find another way that seedless oranges do not falsify natural selection: being seedless benefits the trees because humans cultivate more of them than other (seed) oranges. However, it is not right to try to change the meaning of the term "artificial selection" or make artificial selection = natural selection. The processes are similar but not identical. Artificial selection is when humans do the selecting instead of the environment, not "natural selection in an artificial environment". 
Read Origin of Species and how Darwin described artificial selection. In fact, Darwin used "natural selection" to distinguish what happens in nature from what animal and plant breeders were doing. Yes, "selection" happens in both, but the important part is who or what is doing the selecting.
  12. I have read that they do. It's just that, at those sizes, the waves are so small as to be unnoticeable. For instance, I have read that the wave function of you and me is smaller than the diameter of an atom. Much too small to be noticed or measured.
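The scale involved can be illustrated with the de Broglie wavelength, λ = h/(mv), for a person-sized object (the mass and speed below are hypothetical, just a walking-person example):

```python
# de Broglie wavelength: lambda = h / (m * v)
PLANCK_H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength(mass_kg, speed_m_s):
    """Wavelength associated with a moving massive object."""
    return PLANCK_H / (mass_kg * speed_m_s)

# A 70 kg person walking at 1 m/s:
wavelength = de_broglie_wavelength(70.0, 1.0)   # ~9.5e-36 m

# A hydrogen atom is ~1e-10 m across, so the person's wavelength
# is roughly 25 orders of magnitude smaller than an atom.
print(wavelength)
```

This is why quantum wave behavior is utterly unobservable for everyday objects even though the physics still applies.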
  13. Foodchain, what you missed was that obviously harmful drugs were eliminated. That doesn't tell you that the drugs that pass the screening will actually benefit the patient. So, the intent is to eliminate as soon as possible -- via computer modeling -- the drugs that will not work. This is good economics.

The next step for testing whether a drug is harmful is human fibroblasts in culture. Advanced Tissue Sciences used to sell them. ATS is no longer in business, but other companies have picked up the market. This is often used as the main screening in the cosmetics and chemical industry instead of using rabbits.

In the pharmaceutical industry, once the obviously harmful drugs have been eliminated, now comes animal testing for efficacy -- will the drug actually do what the scientists hope it will? A secondary purpose is toxicity -- harmful effects. It's possible that the drug will metabolize to a harmful compound that the computer models missed.

You talk about organ systems. First, forget clones. Those are too genetically restricted; you want a wide range of genetic variability. You don't want to take a drug to extensive animal testing and then find out that it only works on that one genetic variation in the clone. That mistake has happened too many times as a drug has worked on inbred mice or rats (close genetic similarity) but not on the wider genetic variability of humans.

However, animals are expensive. Very, very expensive, both to purchase and to house. Right now rats cost about $30 per rat and it costs up to $3 a day to house them. That adds up real fast. Organ culture is much, much cheaper. However, there are severe limitations with organ culture, particularly with a drug. A lot of the effectiveness of a drug depends on pharmacokinetics: amount of drug absorbed, distribution to the various organs of the body, and metabolism. All those determine the actual concentration of the drug at the particular site you want it.
That can't be mimicked in organ culture. However, for toxicity testing, that would be the way to go -- the fibroblasts in culture are basically an "organ culture" system. But eventually you must go into a live animal so that you can see the integration of all the systems. Even if the drug passes toxicity testing in a particular organ culture, it may be toxic to some other organ.

And then, of course, there is efficacy testing. As jdurg pointed out, computer modeling is focused on toxicity testing. Yes, before the drug is run thru those particular computer models, it is thought the drug may be effective (otherwise, why bother?), but you need the animal to tell you that it actually will be effective. And, of course, even if it is effective and safe in animals, you still go thru Phase I and II human clinical trials. Phase I to test for unforeseen toxicity, Phase II to test whether the drug really works in humans, not just rats. Lots of "cures" out there that worked in mice, rats, or rabbits never worked in people. But before you get to humans you do everything to ensure that the drug is both safe (the #1 priority) and effective in people.

People who want to stop all animal testing must face this reality: to give up animal testing means giving up new drugs/treatments for human health and new cleaning solutions and other chemicals that make our lives easier. If you give up animal testing, you freeze our medical technology and chemical technology where it is today. Is that what they really want?
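The expense point can be made concrete with a quick back-of-the-envelope sketch, using the $30-per-rat purchase and $3-per-day housing figures quoted above (the group size and study length below are hypothetical):

```python
# Figures from the post: ~$30 to purchase a rat, up to $3/day to house it.
PURCHASE_PER_RAT = 30.0
HOUSING_PER_RAT_PER_DAY = 3.0

def study_cost(n_rats, days):
    """Total purchase + housing cost for one animal study."""
    return n_rats * (PURCHASE_PER_RAT + HOUSING_PER_RAT_PER_DAY * days)

# A modest study: 4 groups of 10 rats, housed for 90 days.
cost = study_cost(40, 90)
print(cost)   # 12000.0
```

Even this small study runs to $12,000 in animals alone, before staff, reagents, or facility overhead -- which is why cell-culture screening first is "good economics".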
  14. Nan, are you aware that any animal testing must be done under appropriate pain medication? Euthanasia must be done in a painless fashion. It's part of the requirements every scientist must go thru to get permission to do animal testing.

We do say this. Because it is true. All the wonderful medical treatments you see today, all the "miracles" of modern medicine, are due to animal research. Do you want us to stop those? Do you want us to stop looking for cures for Alzheimer's because you don't want animals to "suffer"? In particular, think of whether you want us to stop working on a cure for a disease that your parents or your children have. And there are millions of people who die from the disease before we have success. Do you want people to keep dying?

There already is such an agreement. Every time I put in for an animal study, I must justify the number of animals I am going to use. I must justify that there is no other way to get the results. It appears that you are unaware of the existing rules and restrictions scientists operate under. Perhaps I should append a copy of the IACUC forms I, and every other researcher who uses animals, must fill out and adhere to.

This already happens. When I was a member of an IACUC committee, we shut down the research of the Chairman of Pharmacology because he was 1) not adhering to the rules for care of the animals and 2) using far more animals than he had requested and said that he needed.

Again, already being done! NIH comes out with Requests for Applications for grants on new cell culture and computer modeling techniques to cut down the number of animals used. Go to the NIH website and look at the grants requested and awarded. Most testing of new chemicals is now done on human fibroblasts in cell culture. It is less expensive than animals and you can screen a lot more chemicals that way. ALL lab facilities must be accredited.
One of the requirements for accreditation is policies in place that have the animal care attendants report any suffering of the animals. However, you are assuming that animals feel pain and suffer like we do. As you note, "often animals suffer in silence". How do you know they are suffering? If there is no outward sign of suffering, consider that they are, in fact, NOT suffering. I submit that you are projecting your own emotional state onto animals. How do you know that is valid? I don't know of any medical studies that use chimps. They are simply too expensive to use and there are other, just as good but cheaper, animal models. Vivisection is different from animal research. Name a few in the biomedical field, please.
  15. Possibly, but I haven't seen it. Bacterial resistance to antibiotics involves more than a single point mutation. Keep in mind that a single point mutation in the hybrid fertility genes can render a population of sexually reproducing animals a new species, too. However, as I have seen "strains" presented in seminars in micro, there is a cluster of genetic differences, not just one.

Regarding cultivars: it is a term used mostly in agricultural contexts. There are too many to name, but here is just the first one from a simple PubMed search: J Integr Plant Biol. 2008 Jan;50(1):102-10. Simple sequence repeat analysis of genetic diversity in primary core collection of peach (Prunus persica). Li TH, Li YX, Li ZC, Zhang HL, Qi YW, Wang T.

The early anatomists, such as Owen, worked mostly with animals and studied the various phenotypic differences quite extensively. As just one example, look at the study of cirripedia by Darwin: The Lepadidae; or, pedunculated cirripedes. [Vol. 1], The Balanidae, (or sessile cirripedes); the Verrucidae. [Vol. 2], A monograph on the fossil Lepadidae, or, pedunculated cirripedes of Great Britain. [Vol. 1], and A monograph on the fossil Balanidae and Verrucidae of Great Britain. [Vol. 2].

Historically, assignment of species names in microbiology was done on phenotypic characterizations. As you know, all bacteria within the rods and spheres seen in light microscopes look pretty much the same. Thus, historically, we are stuck with the species name referring to a wide range of genetic variability. It's not that the basic unit of biology and evolution isn't the species; it's that the guys who first assigned categories of "species" on the microbiological level did not have the tools to distinguish species and, instead, gave the name "species" to what should properly be genera or even higher taxa -- based on the genetics. Again, in the seminars I have attended, strains are established as similar genetics, not identical.
What you have is a descended family of clones: first one clone and then that clone has variations among its clonal descendants. Even cell lines are not clonal. For instance, if you go to American Type Culture Collection and get cell lines, they are not clonal. Instead, most of them are established from particular tumors from particular individuals. But genetic analysis of cancer cell lines shows quite a bit of genetic variability within the line. Not as much as within the original tumor, but still quite a bit.

To get "clonal" in cell lines, you must do additional manipulation. The most common is "limiting dilution", where you dilute the cell suspension so that you have odds of plating slightly less than 1 cell in the volume you are using for that particular cell culture well. Usually you use a 96-well plate and dilute the cell suspension so that you have about 0.8 cells per 100 ul, and then plate 100 ul per well. You then look on day 1 and eliminate any well that has more than one cell in it (this is very tedious work, BTW). Each well is a clone. An alternative method is to insert a known DNA sequence via a retrovirus, still do the limiting dilution, and then use restriction enzymes to identify the insertion of the DNA sequence. This really is just a fancy way to confirm your limiting dilution.

And I would partially agree with this point, but I would add that the original species designation was arbitrary, based on the properties that could be observed under light microscopy. Therefore what we call a "species" in microbiology is actually a genus or higher taxon when we get down to looking at the genetics. The real species are the "strains" of bacteria. So when we see a new strain of E. coli that can live in apple juice, what we have really seen is a new species formed.

The "too many" are 2996, of which 52 are reviews. However, I accept the point: cultivar is used in agriculture. I am curious: what type of PubMed search did you run?
When I ran ones using "cultivar, plant" or "cultivar, agriculture" this was buried over 100 items into the search. What search terms did you use?
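Going back to the limiting-dilution numbers above: cell counts per well follow Poisson statistics, so the 0.8-cells-per-well target can be checked directly with the standard Poisson formula (no assumptions beyond the figures in the post):

```python
import math

def poisson_pmf(k, lam):
    """Probability of exactly k cells landing in a well when the
    mean number of cells per well is lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

# Plating at the 0.8 cells-per-well target described above:
lam = 0.8
p_empty  = poisson_pmf(0, lam)            # ~0.449 -- no colony grows
p_single = poisson_pmf(1, lam)            # ~0.359 -- a true clone
p_multi  = 1 - p_empty - p_single         # ~0.191 -- must be discarded

print(round(p_single, 3))   # 0.359
```

So only about a third of the 96 wells yield usable single-cell clones, and roughly a fifth must be eliminated on day 1, which is why that microscope check is so tedious but unavoidable.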
  16. That is only because the lake is just a part of the surface of the sphere that is the earth. The earth's crust is a thin shell around the core. About 2 billion years ago, during "snowball earth", the ice "skin" did cover all the earth. Not a living process, but a natural one.

Well, then, the earth's crust is not "well managed" according to you. It's a hodgepodge of different materials, unlike the highly organized tissue that is your skin. In some places the crust is solid granite -- like the Canadian Shield. In other places it is limestone. In no two places is the crust exactly the same, as you would find in skin. If you take a cross-section of your skin on your thumb, back, inside of the knee, and sole of your foot, you get the same cross-section. But take a cross-section of the earth's crust at any two places on the surface and it is different.

I never said the tree was dead. In fact, I never commented on it at all. The processes that form tree rings are different from the processes that form the layers within the earth. Tree rings are formed by living cells. The layers of the earth's crust are formed by different-density materials under gravity. Apples and oranges. It is you who is trying to say they are the same. BTW, the "crust" or bark of a tree is dead. Just like the stratum corneum that is the outermost layer of your skin is dead. Or didn't you know that?
  17. When looking up definitions I only saw the terms used in micro. Can you cite some papers where the term was used in botany? Thanks. I'd say they were categorized more excessively. In animals, "strain" is used for inbred lines, such as Holtzman or Sprague-Dawley rats. It can also be used for varieties generated by manipulation of ES cells that are then used to replace the ES cells in a blastocyst -- thus making a "man-made" animal. Thus, ROSA mice (that started out with the bacterial beta-galactosidase enzyme inserted into the genome and then the first ROSA were inbred) are a strain. In micro, since reproduction is asexual, of course what you get are clonal lines. However, "strain" is usually used for family of clones (clones that are genetically similar) that are genetically distinct from other families. The species name, such as Escherichia coli, is more like a genus name for sexually reproducing organisms, with the strains being the "species" within that genus.
  18. CharonY, all the terms (serovar etc) you used apply to microbiology. Most people don't associate "breed", "subspecies", etc with microorganisms but with multicellular organisms, particularly plants and vertebrates. Do microbiologists also use the other terms? I haven't seen that in the (admittedly somewhat limited) microbiology literature I have encountered.
  19. Both of these responses are irrelevant. Remember what YOU asked: "can you tell me any single thing having skin and made by nature only and is dead." Insane alien gave 2. Both are analogous to the crust of the earth. Most of the rocks in the crust are oxides, like the aluminum oxide that forms on the surface of aluminum. Or the crust is composed of granite, which is indeed cooled lava. The cooled lava is less dense than liquid lava, so it "floats". Another example would be the thin sheet of ice over lakes and streams. That is a "skin" over the water. All of these are examples of "skin" or crust being made by natural processes only, without being alive.

BTW, we also see layers in ice cores and in types of sedimentary rocks called varves. These are formed by seasonal processes (not life). In the case of the ice cores, you get a layer of dust in summer and then a new layer of ice from the winter snow. In the case of varves, it is organic material from falling leaves and decaying vegetation in the fall and then a layer of sand from inrushing streams in the spring.

The generalized "layers" of the earth are formed by simple differences in density of materials under the influence of gravity. The most dense material is in the core (liquid nickel-iron), with successively less dense materials as you move outward. The gasses of the atmosphere, of course, are the least dense. As I said, there is quite a bit of existing data that falsifies your idea. But, if you really feel you have the data that makes it valid, submit it for publication.
  20. Once you get below the category of "species", there are several names that all try to categorize differences between groups within a species: variety, breed, population, subspecies, semispecies, and race. In addition to the 2 from Futuyma's textbook I posted above, here are a few more: "Race: A vague, meaningless term, sometimes equivalent to subspecies and sometimes to polymorphic genetic forms within a population." "Variety: Vague term for a distinguishable phenotype of a species"

None of the names are specific. Notice how Futuyma says of subspecies "No criteria specify how different populations should be to warrant designation as subspecies". Biologists today usually speak of "populations". In Darwin's day they used the term "variety". Breed seems to be equivalent to "variety" and is used, not surprisingly, by human breeders. Using artificial selection, breeders make "breeds". We are, of course, most familiar with breeds of housepets like cats and dogs. And we distinguish breeds by their appearance -- their "phenotype". Great Danes look different from St. Bernards look different from Labrador Retrievers look different from Cocker Spaniels, etc.

Subspecies has criteria like variety: "populations of a species that are distinguishable by one or more characteristics". That's what variety and breed do. However, subspecies also includes, for animals, the idea that these populations are in different geographical areas: "In zoology, subspecies have different (allopatric or parapatric) geographical distributions". This might have been true of dogs in that some breeds were bred in particular geographical areas, such as the dachshund in Germany. That geographical isolation is, of course, mostly no longer true. Neighbors have different breeds of dogs as pets. For plants, a subspecies could be in the same local geographical area (since plants can't move and they may be isolated within a few discrete locations within the area).
But really, the bottom line is that the terms are so loose and ill-defined that, for biologists, they are meaningless. Breed = subspecies = variety = race = biologically useless term. The only term that has any meaning is semispecies. There you have partial reproductive isolation. Ring species, such as the Arctic gull or the California salamander, would have their individual populations be semispecies. Dogs today could (at least) be split into semispecies, since there are reproductive barriers between some of the breeds. Genetic data says dogs are 4 species. I hope that helps. If you have more questions, don't hesitate to ask.
  21. Sugars are absorbed in the jejunum. The later part of the duodenum is where the pancreatic juices and bile duct products are secreted into the intestine. The surgery only bypasses the first part of the duodenum, leaving the secretion part intact.
  22. Each one is showing reproductive isolation: the populations do not interbreed and, when they do, their offspring are not fertile. The biological species concept says nothing about "related organisms that share a more or less distinctive form". The BSC states: A species is a group of individuals fully fertile inter se, but barred from interbreeding with other similar groups by its physiological properties (producing either incompatibility of parents, or sterility of the hybrids, or both). (Dobzhansky 1935) Species are groups of actually or potentially interbreeding populations that are reproductively isolated from other such groups. (Mayr 1942)

What you seem to have done is combine the biological species concept, the phylogenetic species concept ("more or less distinctive form"), and a weird form of the evolutionary species concept ("related organisms") into one.

"Then I will tell you this, if you take a sperm of one "species" of dog and a egg from one "species" of another dog, you will produce viable offspring, as my parents are both breeders it is possible."

The dog paper was looking at genetics. I would ask your parents if they have successfully mated every breed of dog. Specifically, have they mated breeds from the different species described in the paper? Artificial insemination does not count. The genetic analysis says that this may no longer be possible. Most people who breed dogs do so only within a few closely related breeds. I have not heard of anyone breeding a Great Dane with a chihuahua or dachshund, for instance. Do your parents do this?
  23. Which means, of course, that earth still falls within the purview of "dark matter". It's baryonic dark matter. Going back to Radical Edward, the estimated amounts of baryonic dark matter were not enough, by orders of magnitude, to account for the observed galactic rotation curves. Thus the hypothesis of nonbaryonic dark matter to make up the difference. But the main point is that hypothesis has nothing to do with Big Bang. It's not necessary for Big Bang to happen.
  24. My apologies. As I read the article, that is not the case. The bypass is in the duodenum and doesn't inhibit the amount of food. If they had done a stomach banding, yes, that would limit the amount of food the person would eat. But this doesn't do that. It simply bypasses a small part of the beginning of the small intestine. The amount of incoming food is the same.