Posts posted by lucaspa

  1. Scar tissue is a specific type of tissue. Skin is composed of 2 basic layers: the dermis underlying the epidermis. The dermis is composed of fibroblasts embedded in an organized extracellular matrix, while the epidermis is composed of epidermal cells, hair follicles, and sweat glands.

     

    Scar tissue is a very disorganized tissue and does not have epidermis over it. There are no hair follicles or sweat glands. The extracellular matrix is mostly type I collagen.

     

    All the "treatments" for scar tissue seem to be trying to get the tissue to "soften" a bit and be more pliable. None of them will cause the epidermis to regrow, so none of them are going to restore the skin to its normal appearance. Sorry.

     

    On normal skin, glycolic acid would remove the dead epidermal cells that lie on the surface of the skin -- the stratum corneum. This will expose the underlying living epidermal cells (probably also taking off the top 2 or 3 cells in that layer), leaving a "softer" skin until the epidermis naturally reconstructs the overlying layer of dead cells.

     

    I can't see it doing anything really for scar tissue except perhaps using acid to break down some of the collagen on the surface.

     

    I've seen papers exploring the use of hyaluronic acid to prevent scarring during wound healing, but none on using HA on existing scars. I can't envision any possible mechanism for it.

     

    What do you mean by "prove effective"? What do you hope to gain by treating the scar? Get it to go away?

  2. The big advance in curing Type I Diabetes will be when they determine exactly what it is that causes the body's own immune system to flag the islet cells as "foreign" and kill them, and not any other cells. Until they can figure that out, it's going to be a losing battle trying to reproduce islet cells or transplant them since the faulty immune system will promptly kill them.

     

    I agree that finding the cause of the autoimmune destruction of the islet cells is going to be important. However, even when that is known, it may not be possible to prevent it. And yes, if the system is not shut down any transplanted cells will simply succumb to the same process.

     

    But there are possible ways to avoid the process. One way is to hide the transplanted cells from the immune system. This can be done by placing the islet cells in a cylinder composed of material with a pore size that only allows molecules of < 50,000 MW to pass thru the membrane. This means that nutrients can enter the cylinder and insulin can leave the cylinder, but neither immune cells nor antibodies can get into the cylinder. Thus, no possible destruction of the islet cells.
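    The size-selection idea behind such an immunoisolation membrane can be sketched as a simple cutoff rule. The molecular weights below are rough textbook figures, and the code is only a toy illustration of the logic, not a model of real membrane transport:

    ```python
    # Rough molecular weights in daltons (approximate textbook values):
    molecules = {
        "glucose": 180,           # nutrient -- needs to get in
        "insulin": 5_800,         # product -- needs to get out
        "IgG antibody": 150_000,  # immune effector -- must be kept out
    }

    CUTOFF_DA = 50_000  # pore-size cutoff from the text: < 50,000 MW passes

    def passes_membrane(mw_daltons: float) -> bool:
        """A molecule crosses the membrane only if its MW is below the cutoff."""
        return mw_daltons < CUTOFF_DA

    for name, mw in molecules.items():
        status = "passes" if passes_membrane(mw) else "blocked"
        print(f"{name}: {status}")
    # glucose: passes
    # insulin: passes
    # IgG antibody: blocked
    ```

    Immune cells, being whole cells, are orders of magnitude larger still and are likewise excluded.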

     

    The problem then becomes getting enough islet cells for every type I diabetic. Some companies are looking into using xenogeneic islet cells (such as bovine) while others are looking at means of differentiating either ES cells or adult stem cells into islet cells. There are several papers claiming that various adult stem cells are capable of differentiating into islet cells. However, no one has found a way to do this efficiently. Once that is found, then the treatment is to isolate adult stem cells from an individual (not necessarily the patient), grow them in culture, differentiate them, place them in the cylinder, and then place the cylinder in the abdominal cavity (in the omentum probably). Voila! Instant insulin-producing organ. There are, of course, several engineering hurdles to pass, but several biotech companies have appropriate membranes.

     

    On another note, I just learned of an article being published in Nature about researchers who have re-engineered exocrine cells via a small subset of transcription factors into beta-cells. How freakin' cool is that!?!

     

    It's cool. The problem here is how you have to get the transcription factors into the cells. You do that by transduction with a retrovirus, and the danger is that the retrovirus will kick the cells into being cancer cells. FDA is not going to approve a treatment unless a safer method is devised to get the genes into the cell.

     

    What is needed is the exogenous signal that tells exocrine cells during development to differentiate into islet cells. These are the genes that are turned on as a result of that signal. We need the original signal, but right now the molecular biology community is tunnel-visioned on transducing genes.

  3. You are like... so corrupt.

     

    I know. It's obvious that we would do anything for those salads and breadsticks. Especially the breadsticks. ;)

     

    It starts out by defining what it is you are looking to treat. Companies spend years researching the disease/impairment and trying to figure out what is causing it.

     

    Most of this research is actually done in academia. This type of research is what NIH gives out most of its money for. But again, much of the research figuring out the disease is done in animals. Let's take osteoarthritis for example. We can't go and harvest massive amounts of cartilage from humans in various stages of OA to look at the causes and biochemical events of the disease. People need that cartilage in their knees to walk around on. So there are animal models where the animal can be euthanized at particular points in the progression of the disease to look at the morphological, mechanical, and biochemical changes taking place. The most recent model is goats walking on concrete floors.

     

    Here is, again, where I think people are not fully grasping what pharmaceutical research is about. They don't just dump the chemical into every single animal they can find. From the computer research, they know which organ systems are targeted, which pathways it will affect, and what metabolites are "likely" to be produced when this compound is ingested by humans. They will then test the compound on animals WITH THE SAME, or INSIGNIFICANTLY different, biochemistry. So if it's a compound that is likely to be targeting the liver, they'll test it on an animal with the same liver function as a human.

     

    This is very good! I'm glad you made this point. Yes, some initial toxicology testing (particularly for carcinogenesis) might be done in mice, but the tests for subtle toxicity and efficacy are going to be done in one (or 2) appropriate animal models -- not just any animal.

     

    After this testing, analysis of the animal organs and fluids will tell them how it was metabolized. They don't just see if the animal lives and go "Let's get to testing on humans!" They enter back into that computer system what that compound did with that animal. They will record the metabolic pathways they could see and what the biochemical interactions were.

     

    Again, good point! Lots of tissues are taken from the animals and analyzed. Remember, the company has to show to the FDA both efficacy and toxicity! FDA is going to ask them about effects on organ systems other than the one the disease is working on. Particularly lung, heart, brain, kidney, and liver. In the case of Vioxx and Celebrex, there was data on these organs. I would remind everyone that Celebrex (like Vioxx, a COX-2 inhibitor) was never withdrawn from the market. The animal testing did predict both efficacy and toxicity.

     

    It's actually pretty uncommon for a drug to be stopped because it's "ineffective". If it's "ineffective" because it doesn't have the action on the specific molecule that the company was looking for, then yes, it will be stopped. But if it is ineffective for other reasons, it will be noted but human trials will continue.

     

    That first one is what we mean by "ineffective". If the drug is not efficacious in animals, it won't be tested on humans. Can you provide an example of a drug that was not efficacious ("ineffective for other reasons") that went to clinical trials?

     

    When the drugs pass the animal testing and the results show that they are not likely to be toxic to humans, the Phase I trials start, which are under SOOOOOO much scrutiny and oversight from all the regulatory agencies in the world that they are really quite safe.

     

    Some caveats here. Phase I clinical trials are done on a small number of patients: usually those for which all other treatments have failed. Thus, for a new cancer drug, usually terminally ill cancer patients are chosen. The purpose here is to look for gross toxicity, since the number of patients is too small to pick up rare toxicity. Phase I is not for efficacy. If the Phase I cancer patients for a new cancer drug show improvement, that is a bonus. But even if they show no improvement but also don't show gross toxicity, the drug will move to Phase II. This trial involves a lot more people and is looking for 1) efficacy primarily and 2) not so common side effects.

     

    But you are correct, the trials are carefully scrutinized. Or are supposed to be. Sometimes that scrutiny fails, but that is a political problem we must address and not a failure of animal testing.

     

    A drug might be perfectly safe in all the testing done, but if mixed with compounds C, X, Y, and G at certain doses that simply could not be imagined but were found in a couple of patients after being approved, it causes a serious adverse event. This drug will then be pulled.

     

    Another good point. Regulatory agencies err on the side of safety. For instance, chloramphenicol was pulled because 1 in 10,000 patients developed fatal liver complications. That's an extremely low risk, but there were other antibiotics available and therefore the risk was considered too high. However, even today there are 1 or 2 situations where chloramphenicol is used because there is no other antibiotic that will work and the risk of dying from the disease is so much greater than the risk of liver failure.
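    The regulatory logic here is just a comparison of expected risks. A toy sketch, using the 1-in-10,000 figure from above and a purely hypothetical mortality figure for an otherwise untreatable infection:

    ```python
    # Risk of the fatal drug complication, as cited above:
    p_fatal_complication = 1 / 10_000

    # With other antibiotics available, the comparator risk is essentially
    # zero, so the drug's added mortality is unjustified and it is pulled
    # from general use:
    p_with_alternative = 0.0
    pull_from_general_use = p_fatal_complication > p_with_alternative

    # With no alternative, compare against untreated mortality
    # (30% is a made-up number, purely for illustration):
    p_death_untreated = 0.30
    use_as_last_resort = p_fatal_complication < p_death_untreated

    print(pull_from_general_use, use_as_last_resort)  # True True
    ```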

     

    A drug might be found to be safe in animals, but in humans the dose needed to provide benefit compared to the dose required to cause harm could be too close for comfort. The drug will then be removed.

     

    That's the therapeutic index I was talking about. Here there are legal considerations. The drug company doesn't have absolute control over the physicians who prescribe the drug or the compliance of the patients who take it. There are many patients who think "if one pill is good, 2 pills would be twice as good and I can get over the disease twice as fast." If the TI is close to 2, that patient will take enough to get toxicity. Is the patient blamed? NO! The company would be liable. So the company doesn't want to take the liability risk and will pull the drug.
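    The therapeutic index itself is just the ratio of the toxic dose to the effective dose (commonly TD50/ED50). A minimal sketch of the double-dose scenario described above, with made-up numbers:

    ```python
    def therapeutic_index(td50: float, ed50: float) -> float:
        """TI = median toxic dose / median effective dose. Higher is safer."""
        return td50 / ed50

    # Hypothetical drug: effective at 10 mg, toxic at 19 mg -> TI close to 2.
    ed50_mg, td50_mg = 10.0, 19.0
    ti = therapeutic_index(td50_mg, ed50_mg)

    # The "two pills are twice as good" patient takes 2 x ED50:
    double_dose = 2 * ed50_mg
    print(ti)                      # 1.9
    print(double_dose >= td50_mg)  # True: the doubled dose lands in the toxic range
    ```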

     

    I just want people to understand that the animal trials aren't just "throw random chemical into a rat, goose, bunny rabbit, kitty cat, chimpanzee, toad, mouse, and dog and see what happens."

     

    Good point. That is never done. It would be impossible to get such a protocol past any IACUC I have ever dealt with. There is a section in the forms that is "Justification for the use of animals" and putting "I just wanted to see what would happen" is going to get the protocol rejected.

  4. I have to say I share your bewilderment over chemotherapy. Considering how advanced much of our medical technology has become, it seems almost a barbaric assault.

     

    The problem is that we are trying to kill human cells. It's easy when we make antibiotics for prokaryotes: there are a LOT of differences between prokaryotes and eukaryotes and therefore we can make drugs that have no effect on human cells.

     

    But what are the differences between cancer cells and normal ones? Not that many, and they are all subtle. The most easily seen difference is that cancer cells divide rapidly. Therefore the first attempt was to get drugs that interfered with this division and killed cells that were dividing.

     

    Most human cells do not divide very much in the adult. Muscle cells and osteocytes don't divide at all. Other cells, such as adipocytes, endothelial cells, nerve cells, tenocytes, etc. don't divide very much. One of these cells can go years without dividing. Unfortunately, there are a few cells in a few tissues that do divide fairly rapidly: intestinal lining cells, hair follicle cells, hematopoietic cells, etc. So these get affected, too, and we get the side effects.

     

    The real problem with cancer is that natural selection is true. All our medical advances can't get around that. We can kill off 99.9% of the tumor cells, but all we've done is ensure that the tumor grows again from cells that were resistant to the treatment.

     

    Vivisection? It is wrong because it is cruel. It is also scientifically flawed. No one can tell in advance if results in non-humans can be extrapolated to humans. No wonder that most clinical trials fail after the drugs seemed to work well in non-humans.

     

    You are confusing vivisection with drug testing. Two different things. And animal studies using "vivisection" or surgery have been very good at extrapolation to humans because the anatomy is so close.

     

    There has been so much corruption involved in medical research and drug development/marketing, and so much money has been thrown about in bribes and inducements, that the data these people produce cannot be relied upon.

     

    There is motivation for drug companies to fudge the data. Not so much for academics in medical research. There we get gigged if we don't call them as we see them. The motivation for drug companies is why there are regulatory agencies like the FDA to monitor things.

     

    If there are any areas where these can't accurately simulate a human being it must be remembered that other animals can't simulate humans with any reliability.

     

    That's not true. The reliability has not been that bad. As you note, the reliability is so good that drug companies sometimes try to hide their animal data because it is unfavorable! If the animal testing were not reliable, they wouldn't have to do that. :) Sorry, you can't have it both ways.

     

    Vivisection could be responsible for the scrapping of drugs that were ineffective in, and harmful to, non-humans but that would have been useful to humans.

     

    So, are you volunteering to test one of these rejected drugs?

     

    It's interesting that vivisectors use the word 'sacrifice' when they mean 'kill'.

     

    No, we don't. We use the word "euthanasia".

     

    Placebos have a placebo effect.

     

    That's why clinical trials are designed in such a way as to eliminate the placebo effect. Haven't you ever heard about "randomized double blind" studies? Phase II clinical trials are all randomized double blind.

     

    PC - As rats aren't humans, you can't be sure that any neurological testing or any drugs would affect humans in the same way.

     

    You can't be sure. It's very difficult in science to be sure unless you have falsified a theory. But animals can provide good models that greatly increase our chances that a drug will be both safe and efficacious when tried in humans.

     

    PC - Knowing about other animals won't help to produce a computer programme for humans. You need to know about humans. There is already an insane amount of research and most of it is not applicable to humans.

     

    1. Because of evolution, knowing about other animals does help us know about humans. We are related to other mammals by descent with modification.

    2. And that part of research "not applicable to humans" is part of the falsification I talked about above. That is the part we are certain about. However, we wouldn't know it was not applicable to humans unless we had done it first.

     

    And opposing vivisection is opposing bad science that fails to reliably find cures for humans.

     

    And yet we have all these cures because of research on animals! How about that?

     

    PC - I'm not sure what the US legislation says but the UK one is so worded that any researcher can claim that the suffering that will be involved in their research is essential to the outcome.

     

    You can do that in the USA too. It's a Category C study, in which pain relieving medication can't be used. It's very difficult to get a Category C study approved. The justification must be airtight. I speak as someone who sat on an IACUC for 8 years and have submitted my own proposals -- none of which have been Category C.

     

    As I said above, the whole system is shot through with corruption and nothing these people say can be reliably taken as gospel.

     

    But of course the AR people are all saints, with no agenda and they would never distort the truth for their cause, would they? :eyebrow:

     

    Most drugs are just slightly altered variations of existing drugs. Why is cloning done in non-humans instead of humans? One answer is that rats and monkeys never make complaints and are unable to sue.

     

    No. The major reason is the legal/ethical status of the clone: is it fully human or a piece of property? There are also technical/practical reasons:

    1. Cloning so far is very inefficient, requiring 500 to 1,000 ova for every clone produced. It's difficult to get that many human ova. Are you willing to donate yours?

    2. It turns out that some of the enzymes necessary for the initial divisions of the fertilized ovum reside in the sperm in humans. Therefore primate and human clones are more difficult to generate than rat or mouse clones.

     

    For some researchers, the aim is to stay in work and get grants - if they happen to make something useful, it's a serendipitous bonus.

     

    WOW! You really like to slander researchers! How many do you personally know?

     

    jdurg (post 216):

    'If you give a drug to an animal test subject and it dies shortly after ingesting the drug, then you have a pretty good idea that it will be toxic to humans.'

     

    PC - But you couldn't be sure until a human is given it. And you can't be sure that a drug that does no harm to a rat or a dog won't harm a human. A drug that kills another animal might be a wonder drug for humans.

     

    Again, are you willing to volunteer to take a drug that quickly killed a rat? Or are you going to think "rats and humans are both mammals and any drug that kills a rat is likely to kill me."?

     

    Dr. Dalek (post 253):

    Often rare side effects aren’t detected until the drug reaches the market.

     

    Notice that word "rare". Sometimes that does happen. But more often, the drugs do not have an unpredicted rare side effect.

     

    PC - They take it from the still beating heart of a foetus in a slaughtered cow in a slaughterhouse. It is thought that foetuses feel pain and might feel it more intently than adults. This foetus might be close to birth time. Serum-free culture should be used. Bovine culture can contain diseases and cells that could affect the outcome of experiments.

     

    Don't you think we would use serum-free media if we could? The reason people started using fetal bovine serum was because the cells died in serum-free media.

     

    PC - You can test a drug in a rat's stomach and a dog's stomach without any side-effects. But it might kill humans. You do not know until you try it in humans if the rat and dog gave the correct answers. ... Their data would be believed but the data from in silico and other non-vivisection methods wouldn't be - because for years vested interests have been spewing out propaganda.

     

    Because up until recently none of the in silico systems were accurate compared to the known data we already had from animals. If your in silico program predicts drug A will be harmless but you know from previous testing that drug A caused liver toxicity, then you don't trust the program. Duh! Remember GIGO. A computer program is only as good as the input. If that is flawed, then the output is flawed. And up until the last 5 years or so, we didn't have enough data to model complex whole animal interactions. We still don't for most things. That's why you have to use the whole animal.

     

    PC - There are humans in Africa who have been given untested drugs. After their governments were given large cash payments.

     

    So why should you object? After all, since you think animal testing can't predict what happens in humans, why not use humans? According to you, using humans as the first line of testing is the only way to go.

     

    However, you have cited a lot of "programs" but not documented any of them. Please do so.

     

    PC - GammaMambo was expressing an opinion. He or she believes it is all right to experiment on certain humans. You are of the opinion that it is all right to experiment on non-humans. That is just an opinion, too.

     

    "opinions" or positions get discussed to determine their validity. Saying something is an "opinion" doesn't exempt it from critical evaluation.

     

    lucaspa (post 296):

    'As I noted, male rats are quite capable of looking at newborn rat pups as a tasty snack.'

     

    PC - Is it normal for rats to behave this way? Would a rat in the wild do this?

     

    Yes to both. In the wild the dam finds a secluded spot away from the male: a defendable spot. However, in a cage there is no way to do this. Nests are usually in corners, but the male is right there.

     

    PC - Several sites about rats say that rats make good fathers. Of course, they are talking about rats that are not tortured and that haven't been driven insane in rat Belsens.

     

    Once the pups grow fur, I've never seen a male rat eat them. But newborn rats are vulnerable.

     

    PC - Some animals have an instinctive behaviour that makes them go still when a predator catches them - they might struggle at first but soon stop.

     

    This wasn't that. Rats fight off predators. Basically, once a rat gets used to a procedure, you can do anything you want with no risk. Even for the first couple of times (and I witnessed this), when the rats were carefully immobilized so that they would not bite the researcher, they don't flinch or show any outward signs of pain at the injection. Their entire objection is to being held.

     

    No thoughtful person would think they weren't feeling pain or alarm.

     

    You need to be careful about projecting your human perceptions onto other animals. You make a big deal about rats not being predictors of human reactions. Consistency demands that you accept the reverse, too: what happens with humans cannot be used to say other species are the same as humans.

     

    PC - Do you, lucapsa, know how to recognise pain in rats?

     

    Yes. I have seen rats in pain from infection, inadequate anesthesia, or failed bone fixation.

     

    As your job depends on causing pain or terror to animals, anything you say about vivisection should be taken with a large pinch of salt.

     

    PC, my job in no way "depends" on causing pain or terror to animals. Most of my research is cell culture. Nor do any of my animal experiments require the animal to have any more pain and/or discomfort than a human undergoing the same procedure. Therefore all my protocols include the same analgesic treatment a human patient would get.

     

    But it appears that you have no first-hand experience of animal behavior. Your position depends on animals feeling pain and you don't want to hear any contrary evidence. Therefore, by your own logic, anything you say about how much pain animals feel should be taken with a large pinch of salt.

     

    PC - We really need to be careful when considering anything a vivisector says. They make their living causing pain and fear. They will say anything to try to justify what they do.

     

    That's a convenient way to get rid of evidence against your position, isn't it? Since that same argument applies to yourself just as well, why should we pay any attention to you?

     

    PC - Why do vivisectors have to use euphemisms? When they talk about euthanising, they mean killing. It is killing, not sacrificing.

     

    I didn't say "sacrifice", did I? I said "euthanasia". And the reason is twofold:

    1. We don't like killing animals. We really don't like seeing them in pain. My first experience with animal research occurred as a junior in college. I was at Kansas U Medical Center working with a guy studying whether the drug Antabuse was an effective means of preventing alcoholics from drinking again. For the experiment he needed to know precisely how much water the rats had. The way to do this is to put measured water in a syringe; the "needle" has a ball on the end. Hold the rat, put the ball at the back of the throat, tilt the syringe and needle up, and have the needle slide down to the stomach and then inject the water. The guy showed me this very quickly (he had done this thousands of times). On the second rat, I missed the esophagus, got the trachea, and injected 15 ml of water into the rat's lungs. He thrashed for several seconds as he drowned. I threw up. It obviously haunts me today.

    2. Euthanasia is different from "killing" (so is "sacrifice"). Killing does imply pain and fear on the part of the victim. Euthanasia implies the lack of both. I like carbon dioxide inhalation. Put the rat into a box and run carbon dioxide through it. The rat quietly goes to sleep. A colleague prefers cervical dislocation in the mice he works with. Why? Because, according to him, he can dislocate the cervical vertebrae in a hundredth of a second, sparing the mouse any pain.

     

    Another euphemism, 'Final Solution', sounds nicer than the reality of what it meant.

     

    PC - Has your research found a cure? I would have you stop what you are doing. I would have you do human-based research, which would use every non-vivisection method that was needed.

     

    The articular cartilage defects were completely regenerated to the point that you can't tell where they were. Now, how can I do "human-based research" without vivisection on humans? Is it OK to do vivisection on humans? If so, why isn't it OK to do so on rabbits?

     

    PC - Sciecewiz originally said that over a million animals a month are killed in vivisection. In the UK alone, more than 3 million animals were used last year. I'm sure that at least 2 million of these will have been killed.

     

    Who is Sciecewiz and what is your source? Please give a citation.

     

    It is estimated (estimated because not all animals are recorded) that well over 100 million animals are used each year in labs. How many are killed? Probably most of them. This estimate comes from anti-vivisection advocates the BUAV and also the Hadwen Trust.

     

    And why should we trust these people? Oh yes, you depend on them and they agree with you. So of course they are trustworthy. ;)

     

    lucapsa (post 307):

    'Do you know what an Institutional Animal Care and Use Committee is?'

     

    PC - Do you know how much corruption there is in medical research and drug making? Can any committee be fully trusted when the drug companies want profits and have millions to use in bribes?

     

    You didn't answer the question. Do you know what an Institutional Animal Care and Use Committee is? Do you know the mandatory composition of these committees? Have you ever sat in on any meeting of one? If not, how can you say how much of their deliberations are due to "corruption" or whether the committee people can be trusted? BTW, one of the requirements is that committee members can receive no compensation whatsoever. The IACUC I served on for 8 years held our meetings at the local Olive Garden so that we got free lunches. Most committees don't get that.

     

    Mr Skeptic (post 310):

    'Animal testing speeds up research and in doing so saves human lives and increases human comfort.'

     

    PC - How do you know it speeds up research? How do you know it is not holding up research? Don't ask the vivisectors, they are biased.

     

    Look at the results in advances in medicine over the past 50 years.

     

    lucapsa (post 315):

    'In the United States ALL clinical research MUST go through an IRB -- Institutional Review Board. The purpose of the IRB is to safeguard the interests of the test subjects.'

     

    PC - Did you know that many, or sometimes most, members of IRBs have financial connections of some sort with drug companies?

     

    No, they don't. They can't. Anyone having connections to drug companies can't serve on an IRB. In fact, we had an example in ethics this year of a physician working for a pharmaceutical company that wanted to teach residents in clinic. This was allowed only under the condition that her prescription use was monitored to ensure that she was not overly prescribing the drugs her employer made. After a year -- during which she was clean -- she asked if she could join the IRB. She was told "absolutely not" because of the potential conflict of interest.

     

    I'm afraid the people you trust have given you some really bad information.

     

    PC - lucapsa then said that he or she doesn't know of payments for phase 1 clinical trials. The recent phase 1 trial for TGN1412 involved payments to volunteers.

     

    Please cite a source! In the USA the rules expressly forbid payment for participation in Phase I trials.

     

    Unless they are in Africa or India where they are duped into thinking they are getting some revolutionary, already tested and safe, drug.

     

    That is an ethical problem, not a scientific one. And I agree with you; this is wrong. Patients must be counseled as to the risks of the participation.

     

    PC - A certain communist country in the far east has a very high execution rate. One theory about why they sentence so many people to death is that they want to harvest their organs for transplants. I wouldn't put it past them.

     

    Which one? As you should have noted, I was arguing against using unwilling human participants in human trials. However, it is unclear why you think this is so. After all, you say the only way to test new drugs is on people. So why not use people? My position is that the ethical thing to do is test animals first.

     

    lucaspa (post 333):

    'Well, you have just admitted that you are not holding a rational discussion because you won't accept any data contrary to your view.'

     

    PC - I took lovejunkie02's meaning to be that he or she wouldn't believe any vivisector who claimed that the lab animals are treated humanely.

     

    Same thing. You won't accept any data contrary to your view. All you've given is an invalid reason why you won't accept any such data.

     

    People who experiment on anyone who can feel pain and terror would not convince me that they are humane or that they treat their victims humanely.

     

    So what about physicians? They "experiment" on you every time they treat you. By your logic, you shouldn't go to a physician. But I bet you do.

     

    Just as I wouldn't believe the rapist who says that his victims were asking for it.

     

    Apples and oranges. After all, we can consult the victim. And you can inspect the animal protocols and visit animal research facilities to check for yourself to see that the animals are treated humanely. PC, what you are forgetting is that science is public. Everyone must be able to get the same results in approximately the same circumstances.

     

    I've offered to post the IACUC forms I have to fill out. The requirements for IACUC and IRB committees are publicly available, as are the inspection criteria for animal facilities to meet AAALAC approval. Look for yourself.

     

    lucaspa (post 333):

    'The reason I ask is because the animals I work on -- and yes, I do animal research -- are treated just like human patients. Right now we are doing a bone gap model in rats. The rats are anesthetized with ketamine and acepromazine....'

     

    PC - I'm not sure that I would believe anything you say.

     

    See? Not a rational discussion. If you are close to New York, you can come watch one of the operations. Or I can give you the name of some undergrad students who have participated.

     

    How do you know those animals feel no pain? Human patients have reported feeling pain whilst under anaesthetic.

     

    Same way human patients are tested: to a mild pain stimulus. In this case a toe pinch. If the animal draws the foot away, anesthesia is not complete. Now, just what percentage of human patients have reported pain while under general anesthesia? This is not local; but general. Complete unconsciousness.

     

    Animals that feel no pain can still feel fear whilst conscious.

     

    So do humans! Don't you feel fear/anxiety before an operation? I know I have. So why do you want a condition to apply to animals that we can't satisfy for humans?

     

    lucaspa (post 333):

    'After all, we, as animals, do have to exploit other species. Isn't a farmer's field another type of cage? Would you have us give up farming?'

     

    PC - If you are talking about sheep and cows, yes, I would have us give up farming. Animal farming.

     

    How about plants? The plants are in a cage, aren't they? They are grown and then savagely killed -- oftentimes torn out by the roots.

     

    How about a lion hunting? Or even a domestic cat hunting a bird or a mouse? How much fear does the prey feel? Or pain as the carnivore bites down on the neck? You are trying to duck the issue: every species exploits other species. This is completely unavoidable in animals; every animal must at least exploit plants.

     

    mooeypoo (post 336):

    'Perhaps but the problems start with defining what a "Good Cause" is (to you it can be one thing, to me another, and we each can consider each-other's subjective 'good causes' as absolutely not worth it), and the second problem is what TYPE of actions justify what type of means.'

     

    PC - Yes. There were causes good enough for the Aztecs to sacrifice humans. There are causes good enough to lead one nation to attempt genocide.

     

    lucaspa (post 345):

    'As a member of an IACUC, we shut down the research of a scientist for not taking proper care of his animals.'

     

    PC - I have already mentioned that committees and regulatory bodies involved in this cruel business can be corrupt.

     

    That is your wishful thinking.

     

    Whatever is decided, there's no way to ensure researchers stick to the rules.

     

    Yes, there is. Did you miss my statement how we closed down a researcher for not following the rules? I've kept it for you. Animal facilities must be regularly inspected. Since IACUC members cannot receive any compensation, what is the source of corruption?

     

    Bank robbery is against a law. That doesn't stop bank robbers robbing banks. They know there is a high risk of getting caught. They do it all out in the open. The punishment if caught is a long prison sentence. Researchers have little chance of getting caught if they break their laws.

     

    Your premises are wrong. Researchers have an even better chance at getting caught if they break the rules. Remember, they have people outside their lab caring for and inspecting the animals. People whose jobs depend on the researcher following the rules. If the researcher doesn't and then they get caught in an FDA or USDA inspection, the animal facility is shut down and the caretakers are out of a job.

     

    They do it all behind closed doors.

     

    IACUC meetings are open to the public. You won't trust me, but then you say false things that anyone can look up and see are false! Who has the credibility problem here?

     

    The punishment - if any - is minor.

     

    Not for the scientist. You might perceive it that way because there is no jail time involved, but having your research shut down is a MAJOR punishment.

     

    lucaspa (post 345):

    'It's not entirely "subjective". We can define what ethical principles we agree on (and these are not derived "subjectively", either) and then reason to conclusions.'

     

    PC - I, and many others, would not agree with you on what is ethical.

     

    That does not mean it is all subjective. Either you or I might have made a mistake in either our premises or our reasoning.

     

    lucaspa (post 345):

    'What I find is that a lot of the emotion against using animals for testing comes from examples that are stated but do not exist. Some of the examples are outdated. They did exist but were the reasons regulations and IACUCs were established; to eliminate those situations.'

     

    PC - No, I don't believe that IACUCs or the FDA can eliminate those situations.

     

    Oh, but they did. For instance, one of the early pieces of evidence against using animals in research was a video showing a researcher waving a blowtorch over the skin of a pig to cause burns. No researcher could possibly perform that research today.

     

    PC - I might have mentioned this before but I don't believe what vivisectors say.

     

    Several times! However, you can read the regulations for yourself.

     

    Humans cut corners, they don't always bother to adhere to rules and regulations.

     

    But now the burden of proof is on you to back that claim. You must show specific instances where the rules are not adhered to. Please do.

     

    PC - How do you know these wonderful cures couldn't have been made without vivisection? Or that vivisection resulted in better cures being scrapped? Using non-humans is a lottery. You can't know which data will be applicable to humans. Most of it isn't.

     

    1. Because we never would have had the necessary tests for efficacy and safety without going to animals. You seem to forget (although you argue for it) that research in humans is extremely limited. There are lots of experiments we simply are not allowed to do in humans until there is animal data. So we couldn't have made it to human clinical trials without the animal data.

    2. It's possible that "better" treatments were scrapped. The system is biased towards safety at the expense of efficacy. Do you want to change that bias? Your other comments say "no".

    3. Using non-humans is NOT a "lottery". That is part of the mythology AR people must have. After all, if animal testing is predictive of human success, the AR position crumbles. So in this instance, by your own criteria, we can't trust anything you say. However, it is demonstrably otherwise. I can provide several instances where we can trace a successful treatment/drug by the scientific publications and you can see how animal testing was critical. OTOH, you must show how, in the last 50 years, the majority of new treatments/drugs were not based on animal research. Please do so.

     

    lucaspa (post 358):

    'Again, already being done! NIH comes out with Requests for Applications for NIH grants on new cell culture and computer modeling techniques to cut down the number of animals used. Go to the NIH website and look at the grants requested and awarded.'

     

    PC - I'm sure it's a drop in the ocean compared with the money spent on marketing drugs.

     

    Irrelevant answer. The original claim was that there was NO sponsored research on animal alternatives. This shows otherwise. Also, NIH funds are completely separate from money spent marketing drugs. You simply can't compare the 2. Of course the pharma companies market their drugs; companies market cars, houses, computers, and every other product. So what? It costs over $500 million to bring a drug to market. The companies have to recoup that cost or there are no new drugs.

     

    lucaspa (post 358):

    'ALL lab facilities must be accredited. One of the requirements for accreditation is policies in place that have the animal care attendants report any suffering of the animals.'

     

    PC - The same could be said about children's homes and retirement homes. From time to time, reports of abuse emerge. But the residents are not supposed to suffer abuse. Does that stop abuse? Does it heck as like!

     

    As you admitted, the policies do stop abuse! "From time to time". Yes, occasionally individuals will get around any system. By your logic, we should shut down children's and retirement homes because there will be abuse! So stop taking care of those children and old people, a very few of them will be abused! Do you see the flaw and irrationality in your argument?

     

    PC - Animals are complex and one species cannot be a reliable model for another.

     

    You've repeated this several times. Since this argument is essential to your AR stand, why should we believe you? You have motive for making this statement. As I noted in other posts, the literature is full of papers making comparisons between animal models and human conditions so yes, one species can be a reliable model for another. As just one example, guinea pigs are a reliable model for human vitamin C deficiency.

     

    When all the steps have been taken to test a drug at the cellular level and its chemistry has been analysed and it is time to test it in a whole, living organism, that organism should be a human.

     

    Really? And I suppose we are going to see you at the head of the volunteer line?

     

    lucaspa (post 362):

    'People who want to stop all animal testing must face this reality: to give up animal testing means giving up new drugs/treatments for human health and new cleaning solutions and other chemicals that make our lives easier. If you give up animal testing, you freeze our medical technology and chemical technology where it is today. Is that what they really want?'

     

    PC - New cleaning products are not needed - especially not if their development leads to the harm, pain or death of any animal. Non-humans can't be relied on to predict toxicity or efficacy. Has anyone mentioned that before?

     

    You've mentioned it, and it is still wrong. Non-humans have been used to predict both. I notice you did not mention new drugs/treatments. Since you think that there are a lot of undesirable side effects to any new drug in humans, why would any human volunteer to test a drug without animal testing first? Would you? So the end result is still a complete halt. If that is what you want, then be honest enough to say so.

     

    lucaspa (post 372):

    'That is the fallacy. Pharmacokinetics are remarkably similar between mammalian species. The distribution of metabolic routes of drugs is different, but all the routes are there in different mammalian species.'

     

    PC - Slight differences anywhere can have large implications.

     

    And they can have NO difference. Look, no one says animal testing is perfect and eliminates all risk when we go to humans. That's why we have phased human clinical trials. But before we ask a human to take the risk, the animal testing has provided data that the risk is minimal and worth it. You have a very low regard for your fellow humans if you want to just pump any new compound into them first.

     

    lucaspa (post 372):

    'That's a bare assertion. Please post the peer-reviewed scientific papers to back that up'

     

    PC - What, the peers who are vivisectors? That's rather like asking a freemason to back up the story of a fellow mason or to denounce freemasonry.

     

    The claim was about scientific data that was said to exist! Therefore we can rightfully expect to have scientific papers documenting that scientific data. Sorry, you can't claim "science" and then say that there is none!

     

    You really don't understand how science works, do you? We as scientists get fame by showing things to be wrong. Think about it. Einstein is famous for showing Newton to be wrong; Hawking is famous for showing Einstein to be wrong. Darwin is famous for showing all the Special Creationists to be wrong. It works at all levels of science.

     

    lucaspa (post 372):

    'Because of evolution, the differences between species are not as great as you make out.'

     

    PC - The differences are great enough. If they weren't you could go straight from mouse to patients .

     

    Sometimes we can. The use of adult stem cells for recovery after heart attacks did go from mice to human clinical trials. And they are working out quite well!

     

     

    lucaspa (post 372):

    'Again, untrue. Because of evolution many of the biological systems are very similar.'

     

    PC - Not similar enough when considering how a drug will work.

     

    lucaspa (post 372):

    ' The actual record is that animal efficacy is a strong predictor of human efficacy.'

     

    PC - Can you guess what I am going to say? Correct - give that person a cigar! It is not strong enough when considering what a drug will do.

     

    It is very late. I can hardly keep my eyes open. To be continued.

  5. A roundabout way of admitting that they metabolize substances differently.

     

    Not really. There are 2 claims here:

    1. Other mammalian species have metabolic routes that humans don't and humans have metabolic routes that other mammalian species don't.

    2. Both humans and other mammalian species have the same metabolic routes but that some mammalian species use the routes in different proportions than humans do.

     

    You are stating claim 1 and I am stating claim 2. Both rats and humans sulfate drugs and use the cytochrome P450 system to oxygenate them. What I am saying is that, for drug A, rats might have 75% sulfation and 25% oxidation while humans would have 75% oxidation and 25% sulfation. IF the oxidation route produced a toxic metabolite, then humans might produce enough of the metabolite to show symptoms while the rat would not.

     

    Is that clear now?
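    The arithmetic behind claim 2 can be sketched in a few lines. This is only a toy illustration using the hypothetical 75%/25% split from my example above (not measured pharmacokinetic values), assuming the toxic metabolite comes solely from the oxidation route:

```python
# Toy illustration of claim 2: same metabolic routes in both species,
# different proportions. All numbers are hypothetical, matching the
# 75%/25% example above for imaginary "drug A".

def toxic_metabolite_mg(dose_mg, oxidation_fraction):
    """Amount of toxic metabolite produced, assuming only the
    oxidation route (not sulfation) yields the toxic product."""
    return dose_mg * oxidation_fraction

dose = 100.0  # mg of drug A administered

rat_toxic = toxic_metabolite_mg(dose, oxidation_fraction=0.25)
human_toxic = toxic_metabolite_mg(dose, oxidation_fraction=0.75)

print(rat_toxic)    # 25.0 mg via oxidation in the rat
print(human_toxic)  # 75.0 mg via oxidation in the human
```

    With the same dose and the same two routes, the human ends up with three times the toxic metabolite of the rat, which is exactly how symptoms could appear in one species and not the other.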

     

    Dr Vernon Coleman has a book with 50 drugs in it that cause cancer in laboratory animals yet are on the market anyway, the official position being that those results are 'not relevant to humans', which they may or may not be and nobody actually knows without human data to corroborate them. But, given that you claim that any drug shown to cause an increase in cancer in laboratory animals can't be used in humans, why are they there?

     

    This is the law I told you about. Congress enacted laws in 1960 with the "Delaney Clause" that stated that the FDA could approve no drug or additive that caused cancer in any species at any dose. Note the "any". This began to be challenged over the sweetener added to "Tab" (which was pulled from the market). Massive amounts of the sweetener were given to rats and there was a slight increase in the rate of cancer in the rats. Very, very small, but with enough numbers it was statistically significant. However, the amount of sweetener the rats were getting would mean that humans would have to drink about 1,000 cans of Tab per day. This was decided to be unreasonable and the animal trials were decided to be unrealistic due to the massive doses of chemical -- far higher than humans would ever ingest. Since then the Delaney Clause has been modified or dropped, especially in terms of drugs. It is still in effect in terms of food additives and some pesticides.

     

     

    1. It's not a systematic review. There is no indication from the Abstract how the reviews were picked.

    2. You missed this sentence: "The poor human clinical and toxicological utility of animal models, combined with their generally substantial animal welfare and economic costs, necessitate considerably greater rigor within animal studies, and justify a ban on the use of animal models lacking scientific data clearly establishing their human predictivity or utility."

     

    So the article does not recommend discontinuing animal trials, but instead wants more rigor in the animal studies. Animal studies are still necessary, but what it is saying is that people apparently are not doing them properly.

     

     

    You seem to have misread the article:

    "An article in the prestigious science journal Nature has decried the use of mice as "models" for testing drugs intended for use in humans as "nearly useless"."

     

    This does not say that animal models are useless. It says only that, for this particular set of diseases, mice are useless; it does not say that animal testing in all forms fails to predict.

     

    This is science at work. Orthopedic surgery went through the same process in terms of animal models for cartilage repair. It turned out that a company -- Geron -- leaped from rabbits to humans (because of the profit motive) and skipped tests on larger animals. The treatment -- Carticell -- still works in a limited set of cases but not in the wider set that Geron claimed. The extrapolation from rabbits to humans was invalid. Now all treatments for cartilage repair have to go through a sheep model, which is a much better predictive model for humans. This is science looking for the best animal model, not saying that animal models are no good at all.

     

    The differences are there, they are real and they make extrapolation impossible,

     

    If extrapolation was "impossible", then NONE of the treatments that work in animals would work in humans. However, even your data (and history) shows that this isn't true.

     

    Massive amounts of damage and failure of a whole multitude of treatments that worked successfully in animal 'models', i.e. fake conditions created artificially as they do not naturally develop human diseases, in the main.

     

    No one said the system was going to be perfect. That's why there are Phase I and II clinical trials. The major purpose of the animal experiments is to eliminate those that have no chance of working in humans. Please document what you mean by "massive amounts of damage".

     

    Again, 'similar' really isn't a word in science. It is just another way of saying different.

     

    LOL! No, it's a word used in science. And you are wrong. As I noted, fracture repair in rats is "very similar" to that in humans. So is wound repair in general. The difference is not in the steps, cells, or molecules used, but instead in the timing. Fracture repair and wound healing happens about 4 times faster in rats than humans. That's a difference that can easily be compensated for in moving from rats to humans.

     

    I do not write 'misinformation'.

     

    I am documenting that it is misinformation. Plain denial won't help. I asked for the sources of your information. Significantly, you ducked and didn't provide your sources. So I'll ask again: where are you getting your (mis)information about science?

     

    Why are organ systems not available for every human organ?

    1. Most human organs involve many different systems and it's impossible to provide them all in vitro. For instance, cartilage requires mechanical stimulation and that is difficult to provide.

    2. Many of the human cells required are available in very small numbers. Remember, we have to get human cells from fresh cadavers and require the permission of the family. That severely limits the number of cells. Not many people are going to donate a full muscle or allow harvesting of their articular cartilage.

    3. In case you didn't know it, most cells (differentiated cells) have a finite lifespan. It's known as Hayflick's number. For humans, that is about 50: roughly 50 cell divisions during an entire human lifetime, after which the cells senesce and die. So if you do an organ culture of, say, blood vessel cells from a 70 year old who died of a heart attack, you get only about 5-10 cell divisions and then they die. Some cells don't divide at all. Cardiac muscle cells, for instance, don't. So it's very difficult to get an organ model for human heart: you would have to harvest large numbers of cardiac muscle cells from a lot of people right after they die. Stem cells, both adult and embryonic, may someday help this problem.

     

    Strangely, many people believe that chemo drugs will 'work', despite their being extremely toxic. :confused:

    Here we have differential toxicity. Chemotherapeutic drugs for cancer are meant to kill rapidly dividing cells. Since cancer cells are rapidly dividing, they are the targets. Cardiac muscle cells, bone cells, muscle cells, nerves, etc. are not affected. At all. Because they aren't dividing. However, there are some cells in the body that are dividing: hair follicle cells, intestinal villi cells, hematopoietic cells, cells participating in wound healing, etc. These cells are affected by chemo drugs, thus the side effects. The dosage is carefully titrated to minimize the effects on these cells (as much as possible) while providing a killing dose to the cancer. As we know, sometimes this isn't possible and the cancer kills the patient anyway.

     

    Ironically, it was animal studies that provided the initial data for the titration and knowing that the chemo drugs would kill cancer cells. :eyebrow:

     

    I dont recall evolutionary theory stating that mice/rats etc have a particularly recent 'common ancestor' to humans, yet they are predominantly used for animal experiments.

     

    In evolutionary terms, it is "recent". In terms of human lifetimes, of course, it seems a long time ago. Chimps, of course, share our most recent common ancestor. Rats and mice are used because 1) they are evolutionarily close and 2) they are small and cheap.

     

    The reasons given for the failure of 92% of drugs that enter clinical trials with positive accompanying animal safety/efficacy data are predominantly safety/efficacy issues.

     

    Of course. Those are the 2 things that you are testing: safety and efficacy. What other reason for failure would there be?

     

    This has always been the figure, more or less, so claims of 'insufficient animal experiments', especially as the numbers have gone up, not down, will not wash. Although this ploy has been used to convince the gullible and the brainless for decades.

     

    The article itself notes that this has not been the figure. Instead, there has been a 2-3 fold drop in percentage of approvals. I guess your "more or less" is very broad. And no, as I noted, the number of larger animal experiments has gone down. Very few primate experiments are done any more, both due to expense and the ethical concerns raised. Also, the expense of getting new drugs to market has quadrupled over the past 2 decades. So drug companies are taking shortcuts on cost whenever possible. That's why we see the leap from proof of concept in neurodegenerative diseases in mice straight to human trials (your reference). There should, at least, have been some cat studies in between.

     

    The 'predictive' success is abysmal. It is blind chance that anything useful for humans comes out of it. But then what do you expect when you test a drug in a species that, 1) responds unpredictably different to us to the same drug and 2) doesn't have the condition you wish to treat, just a phoney, unrelated 'replica' !!!

     

    Repeating the same fallacies won't make them true. Animal response is not "unpredictable" and the conditions are not an "unrelated replica". Again, you have to remember that all the drugs that DO work in humans went through the same pathway. And you are forgetting the drugs/treatments that were eliminated. When you include them, the predictive success rises considerably.

     

    This simply isn't true, there are a wide range of drugs on the market that cause cancer and other problems in some species or another, yet are on the market anyway. Tamoxifen springs to mind.

     

    :confused: Tamoxifen reduces cancers in animals! The rats had cancer anyway! Tamoxifen either 1) reduced the risk or 2) reduced the size of the tumor. http://www.google.com/search?q=tamoxifen+cancer+animals&rls=com.microsoft:en-us&ie=UTF-8&oe=UTF-8&startIndex=&startPage=1

     

    Again, where are you getting your information?

     

    Indeed, 92% of drugs that pass animal safety/efficacy tests fail when given to humans, on those grounds.

     

    Wait a minute. I said "That the drug is harmless in animals is not a guarantee that it is harmless in humans. " That is just one thing: safety. You are combining 2 different things: safety and efficacy. The failure due to safety issues is much smaller than 92%.

     

    There are many drugs used in humans that are useless and/or fatal in animals. Aspirin is used in humans yet is highly toxic to cats. Digitalis is useless in dogs, unless you want to raise their blood pressure, yet it has the opposite effect in humans. Morphine stimulates the CNS of cats, and some other species, yet has a depressive effect in humans.

     

    I think you are trying to say that every species of mammal will respond the same way as humans. No one made that claim. That's why we have different species for different diseases and tests.

     

    You are wrong about digitalis. It raises blood pressure: "In moderate doses digitalis slows the heart-action, increases the force of the pulse, and from these effects chiefly, raises blood-pressure." http://www.swsbm.com/FelterMM/Felters-D.pdf The idea that dogs do not mimic human action is a myth. Here is the correct information: http://www.rds-online.org.uk/pages/page.asp?i_ToolbarID=2&i_PageID=1075

     

    Well, given the fact that 98.84% of the 30,000 human diseases are not seen in other species, all of those for a start.

     

    Where did you get that figure? Think a bit about it. On the surface that claim is absurd! Since most of our infectious diseases arose by microbes leaping from a previous host to us, you know it must be wrong. A little thinking will convince you the rest of it is wrong. For instance, humans get scurvy. So do guinea pigs. Pigs have coronary artery disease. All mammalian species get cancer.

     

    Attempts at artificial recreation do not result in anything with any resemblance to the human condition.

     

    Oh good grief. Most of my papers deal with tissue engineering with adult stem cells. I and my colleagues picked animal models precisely because of their resemblance to the human condition. Let me give you just one example. I just finished a grant application to use adult stem cells to treat intervertebral disc (IVD) degeneration. The model we will use is in rabbits and involves puncturing the IVD with a needle attached to a syringe and aspirating the nucleus pulposus. In humans, the annulus fibrosus (the tissue surrounding the nucleus pulposus) will develop a crack and the nucleus pulposus will be extruded. IOW, it leaks out. The animal model has already been documented for its resemblance and similarity to the human condition: Masuda K, Aota Y, Muehleman C, Imai Y, Okuma M, Thonar EJ, Andersson GB, An HS. A novel rabbit model of mild, reproducible disc degeneration by an anulus needle puncture: correlation between the degree of disc injury and radiological and histological appearances of disc degeneration. Spine. 2005 Jan 1;30(1):5-14. Read the article for yourself. The whole point of the study was to mimic the human condition!

     

    Testing substances on mice or rats, as if they were little people 'makes no sense' yet it still continues unabated despite the evidence piling up against it.

     

    Don't make strawmen. No one said they were "little people". Instead, we recognize that they are models for humans.

     

    Yes, and you typed a lot of 'wrong words' in between also, why apologise about that one specifically?

     

    You need to show how the other words are wrong. Please go ahead.

     

    The vast majority of animal experiments are wrong. The reasons are species differences, errors in experiment design, etc. But they are wrong all the same. The small few that by chance are not wrong, are lost in amongst the many.

     

    Lost? What about that 8% (by the one figure) of drugs that get approved for human use? You call that "lost"? Look, if you really believe the vast majority are "wrong", then don't let your doctor prescribe you any drug or propose any treatment. Because all of them were worked out on animals. Do you see how ridiculous this is?

     

    As nobody knows whether or not an experiment is by the general way of things, wrong, or by pure chance, right, then all animal experiments are completely useless and non-informative.

     

    So, should we go back to all the studies that showed toxicity or no efficacy and now run them through clinical trials? After all, being "useless and uninformative" would also apply to the "failures", wouldn't it? That should be your logical position. And yes, we do know whether an experiment is "wrong" or "right". That's why we have peer-review: to check the methodology. Again, you are inconsistent. You don't mind our acceptance of animal experiments that showed safety problems or no efficacy, do you? But by your statements, those have just as much "chance" of being "wrong" as the ones that prompted clinical trials.

     

    Lucaspa: "They eliminate toxic and useless treatments and drugs before you get to human clinical trials."

     

    Aspirin is 'toxic' according to animal experiments, yet it is useful in humans and has probably saved, or at least prolonged, a fair few lives.

     

    LOL! My, you can build strawmen with the best of them, can't you? ANY chemical/drug has what is called the "therapeutic range". Below that it is not effective and above that it can be toxic. When pharmaceuticals are tested, what is looked for is the "therapeutic index" http://en.wikipedia.org/wiki/Therapeutic_index. The higher the therapeutic index, the wider the window you have between efficacy and toxicity. So yes, you can take enough aspirin to kill you. You can drink enough water to kill you. The point is that animal studies are the primary place that the therapeutic index is worked out. If the therapeutic index is 1 or less, then the potential drug is eliminated.

     

    So no, aspirin would not be eliminated. Neither was digoxin, even though its TI is 2 to 3. But what the animal tests did do was tell physicians how closely they had to monitor the dosage they gave to people.
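    As a sketch of the screening logic: the therapeutic index is the ratio of the median toxic dose to the median effective dose (TD50/ED50). The doses and cutoffs below are illustrative placeholders, not actual clinical values; only the TI <= 1 elimination rule comes from the discussion above:

```python
# Therapeutic index (TI) = TD50 / ED50.
# TI <= 1 means the drug is toxic at or below its effective dose,
# so the candidate is eliminated. A low TI (like digoxin's ~2)
# means the drug is usable but dosing must be monitored closely.
# All numeric values here are illustrative.

def therapeutic_index(td50, ed50):
    return td50 / ed50

def screen(td50, ed50):
    ti = therapeutic_index(td50, ed50)
    if ti <= 1:
        return "eliminate"      # toxic before it is effective
    elif ti < 5:
        return "narrow window"  # usable; monitor dosage closely
    return "wide window"

print(screen(td50=2.0, ed50=1.0))   # narrow window (digoxin-like, TI = 2)
print(screen(td50=0.8, ed50=1.0))   # eliminate (TI < 1)
```

    The point is that the TD50 and ED50 inputs to this kind of screen come primarily from the animal studies.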

     

    There is an extremely long and practically never ending list of other drugs or substances that would be eliminated by animal tests, that are beneficial to humans.

     

    Well, the 2 examples you gave were a) a strawman (aspirin) and b) a myth (digitalis). Would you like to try to give a valid answer?

     

    Most of the drugs they claim will be safe or efficacious fail when tried out in humans, many times with devastating effects, Vioxx, thalidomide, et al.

     

    1. First, both Vioxx and thalidomide were efficacious. Thalidomide is a potent analgesic.

    2. The "devastating" results are not that. The percentage of "thalidomide babies" or people suffering heart problems with Viox were very small. What humans define as unacceptable risk is sometimes irrational. The odds of getting heat attack from Viox are 1,000 less than my odds of being injured or killed in a car accident, for instance. Yet we continue to commute every day.

    3. The problem with thalidomide was insufficient animal testing. It was not routine at the time to test for teratogenic effects. Also, it turns out that rats and mice are resistant to the teratogenic effects. You need primates as an adequate model. Now, however, teratogenic testing is required. No one said animal testing would get every toxic effect. In the 1950s the procedure of phased clinical trials was not in place. Thus thalidomide went directly to widespread usage. Even so, phase I or II clinical trials might not have picked this up since such a small percentage of recipients would have been pregnant. Thalidomide is an example where a drug can fool any system. Life isn't totally safe. The only way to avoid having thalidomide babies would be to require primate teratogen testing or give up any new drugs altogether. Which choice do you advocate?

     

    Yes, using the scientific methods available, such as micro-dosing, which is much safer than animal toxicity data, which is worse than tossing a coin.

     

    Micro-dosing doesn't help. If the TI is less than 1, you are going to have toxic effects on humans -- perhaps even kill them -- while testing. Are you going to volunteer?

     

    Pull the other one.

     

    It's in the article. Read the whole article and not just the erroneous conclusion you got on animal welfare sites. (Yes, I see that the article is constantly quoted on all those sites; that's where the web search initially landed me as I was searching for it.)

     

    Animal experiments have increased year in, year out.

     

    Can you quote, from the scientific literature, data to support this? Of course, there are more scientists out there year by year and increasing research budgets for biomedical research, but my experience is that animal experimentation is on the decrease. People and pharmaceutical companies are looking for alternatives.

     

    This is simply another deception on behalf of the animal experimenters who believe in their practice so much they actively try to sabotage any scientific evaluation of the process!!

     

    And yet you cited a paper from Nature where there is scientific evaluation of part of the process! LOL! Undercut your own argument again. Try to get this through your head: animal experiments are costly and difficult. It is in our best interest to find a way around them. We don't because we can't -- so far. When we can, we will.

     

    There is a mountain of evidence for this, if you care to look for it:

     

    And yet there is no source even in that molehill. Supposedly 2 studies are cited, but there isn't a full citation so I can look up the original papers.

     

    Notice that they are talking about all side effects, including all the minor ones. Not "toxicity". Let's look at the list:

    "Furthermore the report confirmed that many common side-effects cannot be predicted by animal tests at all: examples include nausea, headache, sweating, cramps, dry mouth, dizziness, and in some cases skin lesions and reduced blood pressure. "

     

    Since animals can't talk, they can't tell us about nausea, headache, dry mouth, cramps, or dizziness. All of these are minor inconveniences, not life threatening. Notice that only "in some cases" were skin lesions and reduced blood pressure not predicted. Apparently sometimes the animals did develop skin lesions and sometimes the experimenter actually took blood pressures on the rats.

     

    Which again, is why we still have Phase I and II clinical trials.

     

    What about if it is safe in human fibroblasts but then toxic in some random species?

     

    It wouldn't be "some random species". Instead, it would be a species used to test efficacy of the drug or treatment. If there was a problem in one species, a second might be tried. If the drug is toxic in both (or maybe just the one) it would be discarded. For instance, suppose the drug passed the human fibroblasts, moved to rats and caused liver toxicity and failure. Goodbye drug.

     

    Given the number of drugs on the market that are removed, after some fight, this statement makes no sense. They generally hide behind animal data for as long as possible though, denying the clinical findings; see Vioxx, thalidomide, etc.

     

    The statement does make sense. Once again, your premises are in error. The "number" is pretty small. Both Vioxx and thalidomide passed the animal tests of the time. Vioxx passed because the number of cardiac problems was so small as not to be noticeable until large numbers of people were involved. When the difference is very small, you need huge numbers to detect it. And, of course, you are not talking pure science here anymore. You get into the integrity of individuals and their desire to 1) make a profit and 2) avoid loss.

     

    lucaspa: "But you are forgetting all the drugs that were eliminated along the way. If we had tested all of those in humans, then you would have found that drugs that we found harmful in animals were also harmful in humans."

     

    This list would include penicillin, aspirin, digitalis and many, many more. Possibly the entire pharmacopoeia.

     

    Not true. Unless you are making the strawman of giving a huge fatal dose instead of a therapeutic dose. And yes, you appear to be making that strawman.

     

    Plenty, digitalis (raises blood pressure in dogs), aspirin (highly toxic to cats) and lots more besides.

     

    The digitalis example is a myth; it raises blood pressure in humans, too. The cat example is not a good one. Aspirin is toxic to cats if they are given a human dose. But if aspirin is given on a mg/kg basis adjusted to the body weight of a cat, then they are OK.
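    The mg/kg adjustment is just linear scaling by body weight. A toy calculation of that arithmetic; the tablet size and body weights are illustrative assumptions, not veterinary dosing advice:

```python
# Scale a dose linearly on a mg/kg basis, as described above: compute the
# per-kilogram dose from the human case, then apply it to the animal.
def scaled_dose_mg(human_dose_mg, human_weight_kg, animal_weight_kg):
    mg_per_kg = human_dose_mg / human_weight_kg
    return mg_per_kg * animal_weight_kg

# A 325 mg human tablet for a 70 kg adult, scaled to a 4 kg cat:
print(round(scaled_dose_mg(325, 70, 4), 1))  # -> 18.6 (mg), far below a full tablet
```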

     

    Again, you seem to be making a strawman. You are picking a single species, but I was using animals. Cats are not the normal animal model for analgesics: rodents and dogs are. In those animals, aspirin is not toxic. Maybe strawmen arguments are the only ones you can make?

     

    Unfortunately, when performing animal experiments there is no way to select any small number of genuinely factual ones from the vast and highly damaging number of incorrect ones, hence why they are all collectively uninformative.

     

    You aren't doing that. You are forgetting the large amount of animal data that has been predictive in humans: either for toxicity or for efficacy. Instead, you bring up only 2 or 3 examples (which turn out to be wrong). That is selective data.

     

    INow has since retracted his/her position, and admitted that the dogs didn't have diabetes, as artificially destroying a dog's pancreas is not diabetes. It is quite different from the natural processes that accumulate over time and create the spontaneous and natural condition in humans. Hence, my 'facts' are already straight.

     

    Look at INow's later posts. No one destroyed "a dog's pancreas". Rather, the insulin-producing cells were destroyed. And that is exactly what happens in type I diabetes! The pancreatic islet or beta cells that produce insulin are destroyed by the body's immune system. No insulin production. So the issue is whether administered insulin can regulate blood sugar. All the type I diabetics who are alive today owe their treatment to these animal studies. You simply can't honestly deny it.

  6. animal testing for beauty products is totally wrong, but for diseases...i'm not sure. it is cruel, but it may be necessary, like for AIDS research.

     

    The area of beauty products and chemicals (cleaners, solvents, etc.) is the prime area where cultured human fibroblasts are rapidly replacing animal testing. It's cheaper, more sensitive, and more reliable (in addition to whatever ethical concerns there are). The effect on cell metabolism can be measured by automated systems such that large numbers of cultures can be processed in a short amount of time.

     

    For my research -- which is tissue engineering -- it is absolutely essential. There is a recent paper that illustrates this quite well. Limb ischemia (cutting off blood flow to a limb) results in 100,000 amputations in the USA alone per year! This paper used endometrial regenerative cells (ERCs or stem cells isolated from menstrual tissue discarded during menstruation) to prevent limb ischemia: http://www.translational-medicine.com/content/pdf/1479-5876-6-45.pdf

     

    Scroll down and look at the picture of the mice in Figure 2. The one on the left is the control; the one on the right is the treated. Human limbs of people suffering from limb ischemia today look like the one on the right. That's why they get amputated. The only way to show that these stem cells would prevent the picture on the right was to do the study in animals. (BTW, both animals got analgesics so that they were not in pain.) Notice this: " All animals were cared for in accordance with the guidelines established by the Canadian Council on Animal Care." These guidelines require appropriate pain-killers be used.

     

    Now, do you want to save people from amputation or not? If you value animals so highly that you don't think we can use them for our own purposes, then you have to say "no" and have people with ischemia have their arms or legs amputated. BTW, this same treatment could be used to save people from heart attacks.

  7. Where did the forefathers of the dog become mammals?

     

    Back in the Triassic. That's when a species of mammal-like reptiles became mammals. Look up "mammal-like reptiles" on a web search.

     

    Remember, dogs are part of the family Canidae, which is part of the order Carnivora, which is part of the class Mammalia. So way back in the Triassic you have a group of species called "mammal-like reptiles" that have some features of reptiles and some of mammals. One species of that group gave rise to the first mammalian species. That species, in turn, gave rise to all subsequent mammalian species by the process known as "cladogenesis" -- which is when an existing species splits in two (or more). One population of the existing species is isolated either by geography (allopatric) or lifestyle (sympatric) and transforms into a new species. So now you have 2 (or more) species where there was once one.

     

    At the beginning of the Tertiary after the extinction of the dinos, the few surviving mammalian and bird species underwent a huge cladogenesis called "adaptive radiation" because there were all those empty ecological niches once filled by dinos. That is when you see the beginnings of the families of carnivores emerge.

  8. So computer modelling is used to test for efficacy? Later you claim the opposite.

     

    The computer modeling is used as an initial test for toxicity. Drugs that are obviously toxic are eliminated: "obviously harmful drugs were eliminated"

     

    Given the undeniable differences in metabolism inter-speciem 'it's possible that a drug will metabolize to a compound that is harmful' in humans, that the animal 'models' missed, too. If there is conflict in animal data, which there often is, how does one settle this dispute before proceeding to human testing? That is to say, which is the 'authentic' predictor?

     

    The different routes of drug metabolism are known. There are differences in the major routes of metabolism, but the routes are all there. For instance, rats tend to sulfate drugs more than humans do. This means that the human P450 system might make a toxic metabolite that rat testing will miss. This will not be picked up until phase I clinical trials.

     

    As to "predictors", there are some legal constraints. For instance, if any drug shows any increase in cancer in any species, it can't be used in humans. No matter how little the increase is or how effective and necessary the drug is.

    This has caused a lot of discussion in both scientific and political circles as people come to grips with cost/benefit ratios. Otherwise, if the drug is effective in animal trials and shows promise in treating human diseases not treatable by other means, it is usually tried in Phase I clinical trials. If you have a new cancer treatment, that moves forward. If you have a modification of aspirin that is slightly more effective than aspirin, that would not move to clinical trials.

     

    Yes, and all of the above factors vary significantly and unpredictably between species, making it impossible to reliably extrapolate between them.

     

    That is the fallacy. Pharmacokinetics are remarkably similar between mammalian species. The distribution of metabolic routes of drugs is different, but all the routes are there in different mammalian species.

     

     

    Nor can they be accurately 'mimicked' in animal 'models'.

     

    That's a bare assertion. Please post the peer-reviewed scientific papers to back that up.

     

     

    No two biological systems are identical, the differences between mouse and man are much greater than the differences between two members of the same species, yet it is considered dangerous and unscientific to attempt extrapolation from, for example, adult to child, so how can it be done from laboratory animal to human patient?

     

    Because of evolution, the differences between species are not as great as you make out. The differences in fracture healing, for instance, between rats and humans are minimal. The differences between individual rats are about the same as the differences between individual humans. But the biological events -- even the cell types and molecules -- are the same.

     

    The 'live animal' experiment only tells you about its system, not systems in general, hence it is both uninformative and misleading.

     

    Again, untrue. Because of evolution many of the biological systems are very similar. For instance, the data for the Carticell treatment of articular cartilage defects was obtained in rabbits. Rabbit articular cartilage -- its structure, metabolism, damage, and repair -- is the same as in humans.

     

    Where did you get the misinformation you have?

     

    Then the sensible suggestion would be to test it for toxicity in that organ too.

     

    I was speaking of organ culture systems. These are in vitro -- in culture -- systems. Organ culture systems for every human organ are not available. So you have to go into an animal to get ALL the various organs.

     

    You appear to have contradicted jdurg, when you earlier stated that computer models are used to remove the drugs that 'will not work', i.e. efficacy testing, not toxicity.

     

    My apology for the confusion. What I meant by "will not work" in this context are those that will obviously be toxic. If the drug is toxic, it "will not work".

     

    An animal experiment can only tell you if the drug 'actually is effective' in the animal tested upon. Not whether the drug is effective in general.

     

    As several people have pointed out, this is not true. Due to evolution, there is greater similarity between species with recent common ancestors than you are giving credit for. The actual record is that animal efficacy is a strong predictor of human efficacy.

     

    ...there is no guarantee that it won't both be useless and potentially deadly in humans.

     

    No one said there was a "guarantee". You are moving the goalposts. We said that the testing was necessary to give us better predictors. If the drug turns out to be toxic in animals, it is not used in humans. That the drug is harmless in animals is not a guarantee that it is harmless in humans. That's why there are Phase I clinical trials. If the drug is useless in animals, then it is not used in humans. However, while there is a strong correlation of efficacy in animals to efficacy in humans, there is no guarantee. That's why there are Phase II clinical trials.

     

    Or to confirm seen toxicity, in the case of conflicting outcomes in different animal species? A predictor is only of any use, after all, if it gives one reliable outcome.

     

    Not to confirm toxicity. I can't think of any case where there was conflicting toxicity testing in animals and the drug went to clinical trials. Can you name an instance where this happened?

     

    As to the predictor, that is not entirely true. Because of the different emphasis in drug metabolism routes, it's possible that one species that uses a route that is minimal in humans may give a false positive. As I stated, rats tend to sulfate drugs predominantly while humans tend to use the cytochrome P450 oxidation system. The sulfated metabolite may be toxic but the oxygenated metabolite may not be. So if the drug is toxic in rats but harmless in primates, then you go ahead. Because primates share a more recent common ancestor with us, they are a better predictor. (They are also so expensive that they are rarely used as animal models.)

     

    ...rats with fake conditions that bear little or no resemblance to those natural, spontaneous diseases that occur in humans?

     

    What specific conditions and/or diseases are you thinking of? A scientist is not going to use a model that bears no resemblance to the human disease. That makes no sense. In many cases the condition must be induced, but it is done in such a way as to either mimic the human condition or be tougher than the human condition. For instance, rabbits don't spontaneously develop osteoarthritis, so when we wanted to test a treatment for osteoarthritis we had to surgically create a full-thickness defect in the articular cartilage of the rabbit knee. The defect size was chosen to be comparable to what is seen when humans present to a doctor complaining of pain in their joints.

     

    The first line contradicts the second one.

     

    Yes, I typed the wrong word at the end. Here is the corrected version:

    "Lots of "cures" out there that worked in mice, rats, or rabbits that never worked in people. But before you get to humans you do everything to ensure that the drug is both safe (the #1 priority) and effective in animals."

     

    Clearly a false dilemma, as animal tests are completely uninformative, hence the requirement for human tests and dangerously misleading in most cases.

     

    The fallacy is in your first statement. Animal tests are not "completely uninformative". They eliminate toxic and useless treatments and drugs before you get to human clinical trials. They give you an idea that a drug or treatment at least has a good chance of being efficacious in humans. If you give up animal testing, then you must either do all the testing in humans -- with all the risks that involves -- or you freeze medicine at the current levels. If you will not risk harm to animals, how can you justify risking harm to humans?

     

    92% of drugs that pass animal safety/efficacy experiments fail when given to humans on safety/efficacy related grounds. This fact would seem to invalidate your assertion that if a drug works, or is safe, in some other random species that it will 'probably' work, or be safe, in humans.

     

    Where did you get this figure? Never mind, found it. It is a news article by Anne Harding in The Scientist describing how the tightening of FDA regulations is resulting in turning down more drugs. But it has been picked up by all the animal rights pages.

     

    You're the victim of out-of-context false witness and possibly fraudulent information. Another scientific news organization wrote "The FDA was unable to identify the source of these figures for The Scientist by press time."

     

    Even if the figures are accurate, the article isn't talking about the failure of the usefulness of animal testing, but instead about the record of the FDA in granting approval. One of the problems the article points out is that companies are skimping on the animal testing! IOW, the figures are dropping because the FDA is letting companies do less animal testing than they should be! It's not that the animal testing is failing, but rather that the companies are failing to do the appropriate animal testing and rushing to clinical trials!

     

    Given that the predictive value of any given species for another is less than the toss of a coin,

     

    That's not a "given". Again, we need to know the source of this "given". The animal rights group you got it from has to document that.

     

    Presumably the 'cell cultures, tissues/organs' are human in origin.

     

    I made it clear that the fibroblasts are human. In fact, they are human foreskin fibroblasts.

     

    If they conflict with the animal data, as they invariably will, which do you go with? If, for example, tests of a chemical compound on human liver cells show that it is toxic, but when tested in dogs it is seen to be 'safe', or vice-versa, which results override the other?

     

    You are missing that this is a step-wise procedure. If the chemical shows toxicity in the human fibroblast cultures, it never goes to testing in animals. That chemical is discarded right there.

     

    Animal testing is VERY expensive. That's why the culture is used first. It's a lot cheaper. Only if the chemical/drug passes the culture test is it then used in animals. So your "problem" never arises.

     

    Every drug on the market is 'ineffective' or 'unsafe' in some species or another and very effective and extremely safe in others.

     

    If the drug is "on the market", then it is effective and safe in humans. Otherwise it wouldn't be "on the market".

     

    However, your statement is not correct. Some drugs have been shown to be unsafe in every mammalian system tested. Thalidomide is one example that comes immediately to mind. Some are safe and effective in every mammalian species tested. Morphine comes immediately to mind; it is an effective pain killer in every mammalian species tested.

     

    It isn't known until after human experiments which species is an accurate 'model' for the human response, hence why they cannot be predictive. I do not consider a 92% failure rate to be evidence of a system 'working well'.

     

    But you are forgetting all the drugs that were eliminated along the way. If we had tested all of those in humans, then you would have found that drugs that we found harmful in animals were also harmful in humans.

     

    Can you name any drugs that were ineffective and/or unsafe in animals that turned out to be effective and safe in humans?

     

    You are using "selective data".

     

    As the dogs in question did not have diabetes, I would have to say 'no'.

     

    As iNow demonstrated, the dogs did have diabetes. This is just one example where your "facts" are wrong. You need to get the facts straight before your argument is valid.

  9. The triple crown winner of this year's mating olympics may not sire the next triple crown winner. It could come from one of the deer who were knocked out in the preliminaries. A few years later, this buck's offspring wins, but his offspring aren't champions either, etc. The winner may get the choice of females, but it doesn't guarantee anything after that.

     

    Pioneer, if you look at the pedigrees of race horses, Triple Crown winners today can trace their lineage back to previous winners. That is because the artificial selection has ensured that all race horses are descended from previous winners. So the foal may not be a winner, but it will be good enough to compete. Remember, evolution happens to populations, not individuals. What you need to do is compare the times in major races now to those of 100 years ago. And look at the mean times +/- the standard deviation. Look to see if the curve has shifted. As John noted, you want to see the average time for all race horses.

     

    Maybe the slow speed of evolution demonstrates that effect.

     

    1. Recent experiments in the wild have demonstrated that natural selection can work much faster than we see in the fossil record:

    Evaluation of the rate of evolution in natural populations of guppies (Poecilia reticulata). Reznick, DN, Shaw, FH, Rodd, FH, and Shaw, RG. Science 275:1934-1937, 1997. The lay article is "Predator-free guppies take an evolutionary leap forward", pg. 1880.

     

    This is an excellent study of natural selection at work. Guppies are preyed upon by species that specialize in eating either the small, young guppies, or older, mature guppies. Eleven years ago the research team moved guppies from pools below some waterfalls that contained both types of predators to pools above the falls where only the predators that ate the small, young guppies live. Thus the selection pressure was changed. Eleven years later the guppies above the falls were larger, matured earlier, and had fewer young than the ones below the falls. The group then used standard quantitative morphology to quantify the rate of evolution.

     

    So we have a study in the wild, not the lab, of natural selection and its results. The rate of evolution was *very* fast. Evolution is measured in the unit "darwin", which is the proportional amount of change per unit time. The fish evolved at 3700 to 45,000 darwins, depending on the trait measured. In contrast, rates in the fossil record are typically 0.1 to 1.0 darwin. However, the paper cites a study of artificial selection in mice of 200,000 darwins.
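    The darwin figures above follow directly from the unit's definition. A quick sketch, assuming Haldane's standard formula (rate = ln(x2/x1) per million years); the 10% trait change is an illustrative number, not a measurement from the paper:

```python
import math

# Rate of evolution in darwins: proportional (log) change in a trait
# per million years.
def rate_in_darwins(x1, x2, years):
    return math.log(x2 / x1) / (years / 1e6)

# A 10% change in a trait over 11 years (the guppy timescale above):
print(round(rate_in_darwins(1.00, 1.10, 11)))  # -> 8665 darwins

# A factor-of-two change spread over a million years, the kind of
# interval resolvable in the fossil record, is under 1 darwin:
print(rate_in_darwins(1.0, 2.0, 1_000_000))  # -> ~0.69 darwins
```

    The contrast shows why short-term field studies report rates thousands of times higher than fossil sequences: the same amount of change is divided by a vastly shorter time.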

     

    2. So, why is the rate of evolution so "slow" in the fossil record? Two reasons:

    a. Large populations

    b. Purifying selection.

     

    Remember, natural selection comes in 3 forms: directional, purifying (or stabilizing), and disruptive. We tend to think only in terms of directional selection. When a population is well-adapted to the environment, purifying selection will keep the population the same. No change.

     

    Also, as populations get large, it takes more and more generations for a new trait to spread through the population. That slows down evolution.

     

    As Edtharan noted, most traits are polygenic (involve more than one gene) and most genes are pleiotropic (involved in more than one trait). This makes reasoning based on simple Mendelian genetics misleading.

     

    Also, as John noted, the artificial selection has been narrowly focused only on speed. But since humans don't know all the traits it takes for a horse to run fast, some of the breeding will actually select for traits that hurt speed. Look at the horse this year at the Kentucky Derby. Apparently the breeders didn't take into account changes in the strength of the bones. The horse could run fast, but the bones weren't strong enough to bear the forces on them.

     

     

     

     

    In other words, in the ideal Darwinian case, maybe the speed of evolution should be faster if we work under the assumption of a long lineage of triple crown families. But because this does not occur with any reliability, it shifts around, causing the genes to evolve much slower than expected, more in line with the slow evolutionary pace. Instead of perfection, maybe nature chooses diversity so all will evolve.

  10. And what lifeform can exist independently? Nothing that I can think of. We all depend on an environment of some sort that agrees with us.

     

     

    Maybe some evidence that cancer may be a living organism, a parasite? Or, perhaps an organism in the process of becoming 'independently' alive.

     

    Moved the goalposts. When I am talking "independent", I mean able to live without being part of a larger organism. You now try to change "independent" to mean "without anything else". Not valid.

     

    When the animal with the cancer dies, the cancer dies. Cancer is not a parasite or independent organism: it is aberrant growth of cells in a multicellular organism. It can only be kept alive outside the individual having the cancer by careful tending by scientists. Shoot, cancer isn't even infectious! You can't "catch" cancer like you can a cold. A cancer cell in my body could not live in yours. So even obligate parasites like viruses or some microbes are "independent" in terms that cancer is not.

     

    “Scientists from The Institute of Advanced Studies at Princeton and the University of California discovered that the underlying process in tumor formation is the same as for life itself—evolution.”

     

    http://www.sciencedaily.com/releases/2008/08/080801094300.htm

     

    Nice try, but it doesn't work. What the article is saying is what people in the cancer field have been talking about for 5 years or more: cancer cells are natural selection in action. In order to be "cancerous", a cell (and its descendants) must have mutations that 1) remove growth control, 2) allow it to evade the immune system, and 3) allow it to recruit blood vessels. Having all these capabilities is why cancer is relatively rare: few cells make it through the entire process without being eliminated by the environment.

     

    This doesn't make cancers a separate life form; it just means that natural selection does operate on them.

     

    Yet the seedless oranges are more successful as a human cultivar than wild oranges. Their success is directly linked to being seedless. Hence, the seedless trait is actually beneficial to them. After all, artificial selection is just natural selection in an artificial environment.

     

    None of what you said changes the fact that seedless oranges were not produced by natural selection.

     

    I appreciate you trying to find another way that seedless oranges do not falsify natural selection: being seedless benefits the trees because humans cultivate more of them than other (seed) oranges.

     

    However, it is not right to try to change the meaning of the term "artificial selection" or make artificial selection = natural selection. The processes are similar but not identical. Artificial selection is when humans do the selecting instead of the environment, not "natural selection in an artificial environment". Read Origin of Species and how Darwin described artificial selection. In fact, Darwin used "natural selection" to distinguish what happens in nature from what animal and plant breeders were doing. Yes, "selection" happens in both, but the important part is who or what is doing the selecting.

  11. This is more of an observation. At the smallest levels of matter and energy, things obey quantum laws. But as you get larger, into macro sizes, quantum behavior breaks down. For example, the pebbles on the shore of a river are not quanta. There is certainty of position, and their movement does not obey wave equations. These rocks are composed of quantum substructure, but as a composite they don't follow quantum laws.

     

    I have read that they do. It's just that, at those sizes, the wavelengths are so small as to be unnoticeable. For instance, I have read that the wavelength associated with you and me is smaller than the diameter of an atom. Much too small to be noticed or measured.
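    The scale mismatch is easy to check with the de Broglie relation λ = h/(m·v), which I'm assuming is what the quoted claim refers to; the masses and speeds below are illustrative:

```python
# de Broglie wavelength: lambda = h / (m * v). Quantum wave behavior
# scales inversely with momentum, so macroscopic objects have
# immeasurably small wavelengths.
PLANCK_H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength_m(mass_kg, speed_m_s):
    return PLANCK_H / (mass_kg * speed_m_s)

# A 70 kg person walking at 1 m/s: ~9.5e-36 m, far below atomic size.
print(de_broglie_wavelength_m(70, 1.0))
# An electron at 1e6 m/s: ~7.3e-10 m, comparable to an atom's diameter.
print(de_broglie_wavelength_m(9.11e-31, 1e6))
```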

  12. I would question how much such a safeguard really exists, because it only makes sense to use such an avenue if it works better economically overall. I would think that with more advances in molecular and cellular biology, more accurate drugs could come about. Do you think that models of such systems outside of a whole organism could work? Such as, if you could just have certain tissues or organs cloned to work with, do you think such could come to replace live organisms such as a lab rat? I would think economics sort of bars that, really, though.

     

    Foodchain, what you missed was that obviously harmful drugs were eliminated. That doesn't tell you that the drugs that pass the screening will actually benefit the patient. So, the intent is to eliminate as soon as possible -- via computer modeling -- the drugs that will not work. This is good economics.

     

    The next step, testing whether the drug is harmful, is human fibroblasts in culture. Advanced Tissue Sciences used to sell them. ATS is no longer in business, but other companies have picked up the market. This is often used as the main screen in the cosmetics and chemical industry instead of using rabbits.

     

    In the pharmaceutical industry, once the obviously harmful drugs have been eliminated, now comes animal testing for efficacy -- will the drug actually do what the scientists hope it will? A secondary purpose is toxicity -- harmful effects. It's possible that the drug will metabolize to a compound that is harmful that the computer models missed.

     

    You talk about organ systems. First, forget clones. Those are too genetically restricted; you want a wide range of genetic variability. You don't want to take a drug to extensive animal testing and then find out that it only works on that one genetic variation in the clone. That mistake has happened too many times as a drug has worked on inbred mice or rats (close genetic similarity) but not on the wider genetic variability of humans.

     

    However, animals are expensive. Very, very expensive, both to purchase and to house. Right now rats cost about $30 per rat and it costs up to $3 a day to house them. That adds up real fast. Organ culture is much, much cheaper.

     

    However, there are severe limitations with organ culture, particularly with a drug. A lot of the effectiveness of a drug depends on pharmacokinetics: amount of drug absorbed, distribution to the various organs of the body, and metabolism. All those determine the actual concentration of the drug at the particular site you want it. That can't be mimicked in organ culture. However, for toxicity testing, that would be the way to go -- the fibroblasts in culture are basically an "organ culture" system.

     

    But eventually you must go into a live animal so that you can see the integration of all the systems. Even if the drug passes toxicity testing in a particular organ culture, it may be toxic to some other organ. And then, of course, there is efficacy testing. As jdurg pointed out, computer modeling is focused on toxicity testing. Yes, before the drug is run thru those particular computer models, it is thought the drug may be effective (otherwise, why bother?), but you need the animal to tell you that it actually will be effective.

     

    And, of course, even if it is effective and safe in animals, you still go thru Phase I and II human clinical trials. Phase I to test for unforeseen toxicity, Phase II to test to see if the drug really works in humans, not just rats. Lots of "cures" out there that worked in mice, rats, or rabbits that never worked in people. But before you get to humans you do everything to ensure that the drug is both safe (the #1 priority) and effective in people.

     

    People who want to stop all animal testing must face this reality: to give up animal testing means giving up new drugs/treatments for human health and new cleaning solutions and other chemicals that make our lives easier. If you give up animal testing, you freeze our medical technology and chemical technology where it is today. Is that what they really want?

  13. Yes, researchers should stop making animals suffer from drug testing.

     

    Nan, are you aware that any animal testing must be done under appropriate pain medication? Euthanasia must be done in a painless fashion. It's part of the requirements every scientist must go thru to get permission to do animal testing.

     

    they could say that animals are close to humans and the medicine that scientists discover will save many lives in the future.

     

    We do say this. Because it is true. All the wonderful medical treatments you see today, all the "miracles" of modern medicine, are due to animal research. Do you want us to stop those? Do you want us to stop looking for cures for Alzheimer's because you don't want animals to "suffer"? In particular, think of whether you want us to stop working on a cure for a disease that your parents or your children have.

     

    There are millions of animals who have died before an experiment has reached success.

     

    And there are millions of people who die from the disease before we have success. Do you want people to keep dying?

     

    People who support and are against animal testing should make an agreement with scientists on the number of animals that go through the tests.

     

    There already is such an agreement. Every time I put in for an animal study, I must justify the number of animals I am going to use. I must justify that there is no other way to get the results.

     

    It appears that you are unaware of the existing rules and restrictions scientists operate under. Perhaps I should append a copy of the IACUC forms I, and every other researcher who uses animals, must fill out and adhere to.

     

    If researchers do something that is inappropriate with the animals, then supporters and detractors can join together against the experiments, and the anti-animal-testing supporters will not threaten scientists' lives.

     

    This already happens. When I was a member of an IACUC committee, we shut down the research of the Chairman of Pharmacology because he was 1) not adhering to the rules for care of the animals and 2) using far more animals than he had requested and said that he needed.

     

    The anti-testing side and the supporters could also ask the government to get researchers to produce new technology that will help them come up with new treatments for deadly diseases. New technology will help save animals' lives and also provide us with safe drugs. So it will be like we exchange one life to extend another.

     

    Again, already being done! NIH comes out with Requests for Applications for NIH grants on new cell culture and computer modeling techniques to cut down the number of animals used. Go to the NIH website and look at the grants requested and awarded.

     

    I disagree, experimenting on animals is not the only way to find cures, test drugs or chemicals.

     

    Most testing of new chemicals is now done on human fibroblasts in cell culture. It is less expensive than animals and you can screen a lot more chemicals that way.

     

    The caregiver is the one that tends to the animals, he is the one that sees the pain and suffering. But he/she is only a caregiver and is either too emotional or too stupid. The researcher can not say how the animal feels, they can't talk, often animals suffer in silence, or some of them whimper.

     

    ALL lab facilities must be accredited. One of the requirements for accreditation is policies in place that have the animal care attendants report any suffering of the animals.

     

    However, you are assuming that animals feel pain and suffer like we do. As you note, "often animals suffer in silence". How do you know they are suffering? If there is no outward sign of suffering, consider that they are, in fact, NOT suffering. I submit that you are projecting your own emotional state onto animals. How do you know that is valid?

     

    The next decade will be exciting, several countries are not going to use chimpanzees any more.

     

    I don't know of any medical studies that use chimps. They are simply too expensive to use and there are other, just as good but cheaper, animal models.

     

    There were quite a few GREAT scientists that did not approve of vivisection...., their career never suffered, nor their new discoveries.

     

    Vivisection is different from animal research. Name a few in the biomedical field, please.

  14. In micro as well as cell biology strains cannot easily be compared to species in animals as something as small as a single point mutation could be classified as a new strain (usually in conjunction with phenotypic change, as e.g. resistance).

     

    Possibly, but I haven't seen it. Bacterial resistance to antibiotics involves more than a single point mutation. Keep in mind that a single point mutation in the hybrid fertility genes can render a population of sexually reproducing animals a new species, too. :)

     

    However, as I have seen "strains" presented in seminars in micro, there are a cluster of genetic differences, not just one.

     

    Regarding cultivars, it is a term used mostly in agricultural contexts. There are too many to name, but here is just the first one from a simple PubMed search:

     

    J Integr Plant Biol. 2008 Jan;50(1):102-10.

    Simple sequence repeat analysis of genetic diversity in primary core collection of peach (Prunus persica).

    Li TH, Li YX, Li ZC, Zhang HL, Qi YW, Wang T.

     

    I would argue against that

     

    The early anatomists, such as Owen, worked mostly with animals and studied the various phenotypic differences quite extensively. As just one example, look at the study of cirripedia by Darwin: The Lepadidae; or, pedunculated cirripedes. [Vol. 1], The Balanidae, (or sessile cirripedes); the Verrucidae. [Vol. 2], A monograph on the fossil Lepadidae, or, pedunculated cirripedes of Great Britain. [Vol. 1] , and A monograph on the fossil Balanidae and Verrucidae of Great Britain. [Vol. 2].

     

    Just to elaborate, historically classification into strains was often done by categorizing according to certain properties, or often simply according to different isolates. In that regard strains are a subcollection within a species. Regardless whether an organism propagates sexually or asexually, the basic unit of species cannot be changed by this matter (unless we define them anew in the light of a revised evolutionary framework, which has not been done yet).

     

    Historically, assignment of species names in microbiology was done on phenotypic characterizations. As you know, under light microscopy all the rods look pretty much alike, and so do all the spheres. Thus, historically, we are stuck with species names referring to a wide range of genetic variability. It's not that the basic unit of biology and evolution isn't the species; it's that the people who first assigned "species" categories at the microbiological level did not have the tools to distinguish species and, instead, gave the name "species" to what should properly be genera or even higher taxa -- based on the genetics.

     

    However, in microbiology, strains have for quite a while been established as pure if all the members are clonal (genetically identical).

     

    Again, in the seminars I have attended, strains are established as genetically similar, not identical. What you have is a descended family of clones: first one clone, and then variations arise among its clonal descendants.

     

    So if you take a bacterial or fungal cell (or an immortalized higher eukaryotic cell for that matter) and mutate it, you create a new strain (or cell line). In higher eukaryotes, however, strains are not always clonal (established cell lines often are, though).

     

    Even cell lines are not clonal. For instance, if you go to the American Type Culture Collection and get cell lines, they are not clonal. Instead, most of them are established from particular tumors from particular individuals. But genetic analysis of cancer cell lines shows quite a bit of genetic variability within the line. Not as much as within the original tumor, but still quite a bit.

     

    To get "clonal" in cell lines, you must do additional manipulation. The most common is "limiting dilution" where you dilute the cell suspension so that you have odds of plating slightly less than 1 cell in the volume you are using for that particular cell culture well. Usually you use a 96-well plate and dilute the cell suspension so that you have about 0.8 cells per 100 ul, and then plate 100 ul per well. You then look on day 1 and eliminate any well that has more than one cell in it (this is very tedious work, BTW). Each well is a clone.
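
    To see why about 0.8 cells per well is the target, note that if cells settle into wells independently, the number of cells per well is approximately Poisson-distributed. The sketch below is only illustrative (the function name is made up; the 0.8-cells and 96-well figures are simply the ones mentioned above):

```python
import math

def limiting_dilution_odds(cells_per_well: float):
    """Poisson probabilities for the number of cells landing in one well,
    assuming cells are distributed independently at the given mean density."""
    p0 = math.exp(-cells_per_well)                   # empty well
    p1 = cells_per_well * math.exp(-cells_per_well)  # exactly one cell: a usable clone
    p_multi = 1.0 - p0 - p1                          # two or more cells: discard
    return p0, p1, p_multi

p0, p1, p_multi = limiting_dilution_odds(0.8)
print(f"empty: {p0:.3f}, single cell: {p1:.3f}, multiple: {p_multi:.3f}")
print(f"expected single-cell wells per 96-well plate: {96 * p1:.1f}")
```

    At 0.8 cells per well, roughly 36% of wells get exactly one cell, so a 96-well plate yields around 34 candidate clones; the ~19% of wells with two or more cells are the ones you have to spot and eliminate on day 1.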

     

    An alternative method is to insert a known DNA sequence via a retrovirus, still do the limiting dilution, and then use restriction enzymes to identify the insertion of the DNA sequence. This really is just a fancy way to confirm your limiting dilution.

     

    But as one can see, my point simply is that classifications on this level are mostly driven by pragmatism. Cells or organisms are given denominators just to distinguish them from other based on certain (arbitrary) properties, isolation methods, and/or genetic alterations.

     

    And I would partially agree with this point, but I would add that the original species designation was arbitrary based on the properties that could be observed under light microscopy. Therefore what we call "species" in microbiology is actually a genus or higher taxa when we get down to looking at the genetics. The real species are the "strains" of bacteria. So when we see a new strain of E. coli that can live in apple juice, what we have really seen is a new species formed.

     

    Regarding cultivars, it is a term used mostly in agricultural contexts. There are too many to name, but here is just the first one from a simple PubMed search:

     

    J Integr Plant Biol. 2008 Jan;50(1):102-10.

    Simple sequence repeat analysis of genetic diversity in primary core collection of peach (Prunus persica).

    Li TH, Li YX, Li ZC, Zhang HL, Qi YW, Wang T.

     

    The "too many" are 2996 of which 52 are reviews. However, I accept the point: cultivar is used in agriculture. I am curious: what type of PubMed search did you run? When I ran ones using "cultivar, plant" or "cultivar, agriculture" this was buried over 100 items into the search. What search terms did you use?

  15. reg ice on lake is totally different example only. because crust is covered all sides of core like tree also or any living thing but ice is on top of lake not like roll.

     

    That is only because the lake is just a part of the surface of the sphere that is the earth. The earth's crust is all around the core. About 2 billion years ago, during "snowball earth", the ice "skin" did cover all the earth. Not a living process, but a natural one.

     

    reg rock with aluminum oxide i also do not understand the correct example because all living thing has very well managed crust or skin that rock do not have well managed.

     

    Well, then, the earth's crust is not "well managed" according to you. It's a hodgepodge of different materials unlike the highly organized tissue that is your skin. In some places the crust is solid granite -- like the Canadian Shield. In other places it is limestone. In no two places is the crust exactly the same, as you would find in skin. If you take a cross-section of your skin on your thumb, back, inside of the knee, and sole of your foot, you get the same cross-section. But take a cross-section of the earth's crust at any two places on the surface and it is different.

     

    tree log has core and crust according to you it is dead.

     

    I never said the tree was dead. In fact, I never commented on it at all. The processes that form tree rings are different from the processes that form the layers within the earth. Tree rings are formed by living cells. The layers of the earth's crust are formed by materials of different density under gravity. Apples and oranges. It is you who is trying to say they are the same.

     

     

    BTW, the "crust" or bark of a tree is dead. Just like the stratum corneum that is the outermost layer of your skin is dead. Or didn't you know that?

  16. Actually this is not limited to microbiology. In botany cultivars are often used.

     

    When looking up definitions I only saw the terms used in micro. Can you cite some papers where the term was used in botany? Thanks.

     

    I am not aware of a similar typification in animals, which makes sense as they were not categorized as excessively as crops or diseases....

    :confused: I'd say they were categorized more excessively.

     

    Mostly "strain" is used to designate certain lines. The same goes for (lab-) animals.

    These are, however, usually clonal lines.

     

    In animals, "strain" is used for inbred lines, such as Holtzman or Sprague-Dawley rats. It can also be used for varieties generated by manipulation of ES cells that are then used to replace the ES cells in a blastocyst -- thus making a "man-made" animal. Thus, ROSA mice (that started out with the bacterial beta-galactosidase enzyme inserted into the genome and then the first ROSA were inbred) are a strain.

     

    In micro, since reproduction is asexual, of course what you get are clonal lines. However, "strain" is usually used for family of clones (clones that are genetically similar) that are genetically distinct from other families. The species name, such as Escherichia coli, is more like a genus name for sexually reproducing organisms, with the strains being the "species" within that genus.

  17. Well, actually those classifications are taxonomically meaningless, but are used as an ad hoc distinction in a variety of fields, usually focusing on one particular diagnostic or physiological aspect like serovars, pathovars, biovars, etc.

    They often lose their significance outside the respective fields.

     

    CharonY, all the terms (serovar etc) you used apply to microbiology. Most people don't associate "breed", "subspecies", etc with microorganisms but with multicellular organisms, particularly plants and vertebrates. Do microbiologists also use the other terms? I haven't seen that in the (admittedly somewhat limited) microbiology literature I have encountered.

  18. reg aluminum oxide is not proper example because it is part of earth only.

    reg lava it is a surplus material in the body of earth and should erupt out. this is like a peak only

     

    Both of these responses are irrelevant. Remember what YOU asked: "can you tell me any single thing having skin and made by nature only and is dead."

     

    Insane alien gave two. Both are analogous to the crust of the earth. Most of the rocks in the crust are oxides, like the aluminum oxide that forms on the surface of aluminum. Or the crust is composed of granite, which is cooled magma. The cooled rock is less dense than the liquid melt, so it "floats". Another example would be the thin sheet of ice over lakes and streams. That is a "skin" over the water.

     

    All of these are examples of "skin" or crust being made by natural processes only without being alive.

     

    BTW, we also see layers in ice cores and in types of sedimentary rocks called varves. These are formed by seasonal processes (not life). In the case of the ice cores you get a layer of dust in summer and then a new layer of ice from the winter snow. In the case of varves it is organic material from falling leaves and decaying vegetation in the fall and then a layer of sand from inrushing streams in the spring.

     

    The generalized "layers" of the earth are formed by simple differences in density of materials under the influence of gravity. The most dense material is in the core (liquid nickel-iron), with successively less dense materials as you move outward. The gasses of the atmosphere, of course, are the least dense.

     

    As I said, there is quite a bit of existing data that falsifies your idea. But if you really feel you have the data that makes it valid, submit it for publication.

  19. Please tell about "subspecies" versus a "breed"?

     

    Once you get below the category of "species", there are several names that all try to categorize differences between groups within a species: variety, breed, population, subspecies, semispecies, and race. In addition to the two from Futuyma's textbook I posted above, here are a few more:

     

    "Race: A vague, meaningless term, sometimes equivalent to subspecies and sometimes to polymorphic genetic forms within a population."

     

    "Variety: Vague term for a distinguishable phenotype of a species"

     

    None of the names are specific. Notice how Futuyma says of subspecies: "No criteria specify how different populations should be to warrant designation as subspecies". Biologists today usually speak of "populations". In Darwin's day they used the term "variety". Breed seems to be equivalent to "variety" and is used, not surprisingly, by human breeders. Using artificial selection, breeders make "breeds". We are, of course, most familiar with breeds of housepets like cats and dogs. And we distinguish breeds by their appearance -- their "phenotype". Great Danes look different from St. Bernards, which look different from Labrador Retrievers, which look different from Cocker Spaniels, etc.

     

    Subspecies has criteria like variety: "populations of a species that are distinguishable by one or more characteristics". That's what variety and breed do. However, subspecies also includes, for animals, the idea that these populations are in different geographical areas: "In zoology, subspecies have different (allopatric or parapatric) geographical distributions". This might have been true of dogs in that some breeds were bred in particular geographical areas, such as the dachshund in Germany. That geographical isolation is, of course, mostly no longer true. Neighbors have different breeds of dogs as pets.

     

    For plants, a subspecies could be in the same local geographical area (since plants can't move and they may be isolated within a few discrete locations within the area).

     

    But really, the bottom line is that the terms are so loose and ill defined that, for biologists, they are meaningless. Breed = subspecies = variety = race = biologically useless term.

     

    The only term that has any meaning is semispecies. There you have partial reproductive isolation. Ring species such as the Arctic gull or the California salamander would have their individual populations be semispecies. Dogs today could (at least) be split into semispecies, since there are reproductive barriers between some of the breeds. Genetic data says dogs are 4 species.

     

    I hope that helps. If you have more questions, don't hesitate to ask.

  20. That's an interesting clarification. If the change occurs in the duodenum, then I suppose it limits the amount of tract/time through/by which the sugars can be absorbed.

     

    Sugars are absorbed in the jejunum. The later part of the duodenum is where the pancreatic juices and bile duct products are secreted into the intestine. The surgery only bypasses the first part of the duodenum, leaving the secretory part intact.

  21. would you like to explain wut each one means? cuz u seem to like posting this on a lot of forums...

     

    Each one is showing reproductive isolation: the populations do not interbreed and, when they do, their offspring are not fertile.

     

    Definition of species...In biology, a species is, loosely speaking, a group of related organisms that share a more or less distinctive form and are capable of interbreeding and producing viable offspring...

     

    The biological species concept says nothing about "related organisms that share a more or less distinctive form." The BSC states:

     

    A species is a group of individuals fully fertile inter se, but barred from interbreeding with other similar groups by its physiological properties. (producing either incompatibility of parents, or sterility of the hybrids, or both). (Dobzhansky 1935)

     

    Species are groups of actually or potentially interbreeding populations that are reproductively isolated from other such groups. (Mayr 1942)

     

    What you seem to have done is combine the biological species concept, the phylogenetic species concept ("more or less distinctive form"), and a weird form of the evolutionary species concept ("related organisms") into one.

     

    "Then I will tell you this, if you take a sperm of one "species" of dog and a egg from one "species" of another dog, you will produce viable offspring, as my parents are both breeders it is possible."

     

    The dog paper was looking at genetics. I would ask your parents if they have successfully mated every breed of dog.

    Specifically, have they mated breeds from the different species described in the paper? Artificial insemination does not count.

     

    The genetic analysis says that this may no longer be possible. Most people that breed dogs do so only within a few closely related breeds. I have not heard of anyone breeding a Great Dane with a chihuahua or dachshund, for instance. Do your parents do this?

  22. Thanks for the elaboration, swansont.

     

    Which means, of course, that earth still falls within the purview of "dark matter". It's baryonic dark matter.

     

    Going back to Radical Edward, the estimated amounts of baryonic dark matter were not enough, by orders of magnitude, to account for the observed galactic rotation curves. Thus the hypothesis of nonbaryonic dark matter to make up the difference.

     

    But the main point is that hypothesis has nothing to do with Big Bang. It's not necessary for Big Bang to happen.

  23. Hey. I pointed that out! :D

     

    My apologies.

     

    Since Type II generally indicates that the body "just can't keep up" with the food being ingested, this suggests why gastric bypass helps. There's less incoming food for the body to break down, convert to usable energy, aka... for the body to keep up with (whether their diabetes be due to a lack of sensitivity or lack of enough production).

     

    As I read the article, that is not the case. The bypass is in the duodenum and doesn't limit the amount of food. If they had done gastric banding, yes, that would limit the amount of food the person could eat. But this doesn't do that. It simply bypasses a small part of the beginning of the small intestine. The amount of incoming food is the same.

  24. Either way, I disagree. At best, NS has had a few billion years more time to work than we have. If you gave humanity 2 billion years, we could outdo anything that natural selection has done. And, of course, NS only works on things that are self-replicating.

     

    It's not the time. It's that NS keeps track of hundreds or thousands of variables and balances cost/benefit analyses of each. Humans don't do that. When you talk of genetically engineering humans, you pick only 3-4 traits you want to change.

     

    The point of the genetic algorithm was not to make it identical to NS. It was to allow computers to design things with almost no human inputs. The most notable difference is that genetic algorithms have an arbitrary, human-designed fitness algorithm, whereas NS has only reproductive capability of a self-replicating thing in the real world as its fitness algorithm.

     

    No, the point was to make the genetic algorithm like NS. The human input is the environment in which the selection takes place. In nature, NS also has a "fitness algorithm". It's sometimes described as the "fitness peak" by evolutionary biologists. Remember, the individuals who survive and replicate have the best designs in that generation -- designs to fit the design problem posed by the environment.

     

    Also different are how variability is introduced (also arbitrary), how reproduction occurs (also arbitrary), and how selection occurs (also arbitrary).

     

    Selection is not arbitrary. That's the whole point of natural selection. The individuals who do best in the competition for scarce resources are the ones selected.

     

    What I am saying is that natural selection is a specialized type of genetic algorithm.

     

    And here I thought you were arguing that genetic algorithms are not natural selection! Now you say they are. I wish you'd make up your mind. :) Rather than this, I would say that both genetic algorithms and natural selection are Darwinian selection in their respective substrates.
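
    To make the parallel concrete, here is a toy genetic algorithm -- a minimal sketch, not any published implementation (the function and parameter names are mine). The "environment" is just a target string; variation comes from random mutation, and selection keeps the best half of each generation:

```python
import random

def evolve(target: str, pop_size: int = 50, mutation_rate: float = 0.05, seed: int = 3):
    """Toy genetic algorithm: the 'environment' is the fitness function
    (character matches to the target); variation comes from random mutation,
    and the best half of each generation 'reproduces'."""
    random.seed(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    # start from purely random strings -- no design information at all
    pop = ["".join(random.choice(alphabet) for _ in target) for _ in range(pop_size)]
    generation = 0
    while fitness(max(pop, key=fitness)) < len(target):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection
        pop = survivors + [
            "".join(random.choice(alphabet) if random.random() < mutation_rate else c
                    for c in random.choice(survivors))
            for _ in range(pop_size - len(survivors))
        ]                                          # reproduction with mutation
        generation += 1
    return generation, max(pop, key=fitness)

gens, best = evolve("natural selection")
print(f"reached target in {gens} generations: {best!r}")
```

    Nothing in the loop spells out the winning string; it only scores candidates against the environment and lets cumulative selection do the designing.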

     

    Do you have any evidence for that? I don't think that I use Darwinian selection for 1+1=2, which is something used for many designs. I don't use Darwinian selection for using the equations of physics.

     

    Think back to when you were first learning math. There were many possible answers you could provide for 1 + 1 = 2 (and you can find many of those on tests of 1st graders!) but the environment meant that you selected "2" each time because that was the "correct" answer. Now, of course, this is memorized and so you select that answer every time. When you were learning physics, you did have to select the correct physics equation in the appropriate environment -- problem you were facing. And, I bet, on some tests you chose the wrong equation for some of the problems. :) But then, you had already done some selecting in the environment of physics problems as you did practice problems. So now, in a particular environment, you go back to the selection that worked before.

     

    I won't agree that we design by natural selection until you provide some proof. Our imagination is not completely random, which is why it is better than natural selection. That's what gives us the ability to plan. We can recognize when we have part of a problem solved and then work on the missing parts. And we can design computers and computer programs to do modeling, trial and error, etc. for us.

     

    Think about a baby. It makes random noises until it hits upon variations that elicit a positive response in the environment -- parents and other people. "Mama" and "dada" are usually the first words. That environment might be provided by a parent. But when, among the variations, the baby hits the correct sound, it gets selected. And that continues as more and more words are learned. Many are initially mispronounced. My daughter did "elphedent" for a few days. Positive environment at home, but a different environment at day-care. So she selected "elephant" instead. Remember, Darwinian selection is cumulative. It builds. So by the time you hit school you have already selected a basic vocabulary. Now comes new selection of proper spelling and grammar. So as I write these sentences I am building upon all the cumulative selection that has gone before, as I choose the sentences I make in my imagination to say what I want to say (the environment I am selecting in).

     

    As I said, Darwinian selection is cumulative. So yes, when part of the design fits the environment we stop tinkering with it. So does natural selection.

     

    By inefficient, I mean that NS wastes many computations. Not that the results are inferior, but that there are ways to get similar results with less computations.

     

    And how fast can our brains compute? Also, I think you think we have to do everything from scratch. In designing these sentences, I already have the grammar and spelling down. I even have previous sentences I have written to get similar meanings down. I don't have to go back to random variations of sound, but just variations in the words and order from a limited set of them.

     

    Yes, I never said that NS was incompetent, only inefficient.

     

    Good to hear. And I said that NS is more competent than we are.

     

    Whatever the reason, NS can only work on the variability that is created by mutations. And the generation of variability is very slow compared to that created by genetic algorithms. Genetic engineering is rather slow to create variability, but it is almost always useful variability, as opposed to the mostly neutral or bad variability generated by mutation.

     

    NS also works on variations generated by recombination, and that is very large. And no, in terms of generations, that generation of variability is not that slow. That's a myth. So far, genetic engineering has been a notable failure. So if you are going on past performance, the clinical trials have been stopped because the genetic engineering has not been useful.

     

    But again, with genetic engineering "useful" is defined by you. And you are not the best judge.

     

    Which is part of natural selection. As I said, very inefficient. If you don't want deleterious mutations to be fixed by your selection algorithm, then you need a different selection algorithm.

     

    No, genetic drift is different from natural selection. Genetic drift is pure chance. Among the ways that a Hardy-Weinberg equilibrium can be disturbed, genetic drift is listed separately from natural selection -- which it is. A selection algorithm will NOT fix a deleterious mutation.
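
    The distinction shows up in a toy simulation. In a neutral Wright-Fisher model (a standard population-genetics idealization; the function name and numbers here are just illustrative), allele frequency changes purely by binomial sampling. There is no fitness term anywhere, which is what makes drift chance rather than selection:

```python
import random

def wright_fisher_drift(pop_size: int, start_freq: float, generations: int, seed: int = 1):
    """Neutral Wright-Fisher model: each generation the new allele count is a
    binomial sample from the old frequency -- pure chance, no fitness term."""
    random.seed(seed)
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        # each of the 2N gene copies is drawn at random from the previous generation
        count = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = count / (2 * pop_size)
        history.append(freq)
        if freq in (0.0, 1.0):   # allele lost or fixed by drift alone
            break
    return history

trajectory = wright_fisher_drift(pop_size=50, start_freq=0.5, generations=1000)
print(f"final frequency after {len(trajectory) - 1} generations: {trajectory[-1]}")
```

    Run it with different seeds and the allele wanders to loss or to fixation at random; nothing in the model "prefers" either outcome, unlike a selection algorithm, which would need a fitness term.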

     

    Yes, and they can vary by portion of the genome too. I never said they didn't vary, I said that the mutation rate in evolution is far, far lower than that in genetic algorithms in general.

     

    But a lot of variation comes by recombination. However, saying that genetic algorithms can have a higher variation rate is a difference in degree, not kind. It doesn't make genetic algorithms qualitatively different from natural selection.

     

    But not always. Sometimes an accident will eliminate an individual or even an entire population with a beneficial mutation. In a genetic algorithm you can ensure that the best permutations are always preserved.
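    The guarantee being described is what the genetic-algorithm literature calls elitism: the best individuals are copied into the next generation unchanged, so a good solution can never be lost to accident. A minimal runnable sketch (my own toy example; the bit-counting fitness function is an arbitrary stand-in for any real scoring function):

```python
import random

TARGET_LEN = 20

def fitness(genome):
    # Toy fitness: count of 1-bits (a stand-in for any scoring function).
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

def evolve(pop_size=30, generations=100, elite=2):
    pop = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        # Elitism: copy the best individuals unchanged, so the best
        # fitness found so far can never decrease -- unlike in nature,
        # where an accident can eliminate the fittest individual.
        next_pop = [g[:] for g in pop[:elite]]
        while len(next_pop) < pop_size:
            parent = random.choice(pop[:pop_size // 2])
            next_pop.append(mutate(parent))
        pop = next_pop
    return max(pop, key=fitness)

random.seed(0)
best = evolve()
print(fitness(best))
```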

     

    Perhaps, but again a quantitative difference, not a qualitative one. And, as I pointed out in another thread, if the population is "large" (over 100) it is unlikely that beneficial variations will be lost. As we know, 100 is not all that large for any population.

     

    Yes, whereas in genetic algorithms, they can be adjusted for optimum results.

     

    So? Quantitative difference, not qualitative.

     

    Humans set up the algorithms for fitness, creation of new permutations, mutation, genetic exchange, selection, population size, etc. Humans can also insert some non-random permutations. After that, it's all on its own.

     

    Which means that the designing is all on its own! Which is the point. Humans aren't doing the designing; Darwinian selection is. In the cases I've seen, humans have not done any interfering by non-random permutations.

     

    Ah, I was using genetic algorithm as the general term, and natural selection as a specific implementation of it. This seems correct, because a genetic algorithm has arbitrary algorithms for fitness, creation of new permutations, mutation, genetic exchange, selection, population size. Evolution and natural selection have these algorithms, but they are specific rather than arbitrary. Were you using Darwinian selection for this?

     

    I'm saying that both genetic algorithms and natural selection are substrates of Darwinian selection.

     

    Correct. And despite putting forth only a minuscule fraction of human ingenuity on genetic engineering, and for that matter, despite it being a very new science, we are already making huge improvements in various crops. More importantly, natural selection is often at odds with our own objectives.

     

    Whereupon I question our objectives. I was thinking of genetic engineering in humans. That has been a failure so far. What you call "improvements" are exactly the problem I am raising for humans: those crops do exactly what we want in a very controlled environment. Change the environment and those crops are toast. Even then, some of our genetically engineered crops are susceptible to diseases we didn't anticipate. So yes, genetically engineer corn or cotton because, if those varieties go extinct, we haven't lost anything. But do this to ourselves and the environment changes and we lose the human species. OUR species! The risk I'll accept for crop species is not a risk I think we should entertain for our own species.

     

    And I meant it in both ways. And when we genetically engineer humans to be smarter, we'll be even smarter than natural selection.

     

    Don't think so. Again, remember Thompson. There are some things humans are not good at. We can't keep track of the thousands of cost/benefit relationships that interact. We can't even get a computer to do it, much less our brains. We can modify some few traits in plants and animals but, again, if we lose those it's OK. Losing the human species to your hubris is just too much to risk.

     

    If you mean that we select the best designs we can come up with, then yes. If you mean that we create random designs and evaluate them, then no.

     

    This is the limitation of human imagination I was referring to. We may not make totally random designs in our imagination. And that may be our undoing because we miss good designs due to our limited imaginations. Our best human designers are the ones that can think "out of the box", right? That means they are ones that come closest to making "random" designs. Ones that more conventional people don't. Darwinian selection, by using "random" variations, is more likely than us to hit upon really novel designs. Which may be why we turn to Darwinian selection when we are stumped.

     

    I've addressed this before. We fall back to trial and error when we don't know a better way to proceed. Your argument contradicts itself -- we use trial and error as a last resort, because it works on any kind of problem but is extremely inefficient. But we prefer more efficient ways to solve problems if they work.

     

    As for why we use genetic algorithms rather than a human-like intelligence, it is because they are much simpler.

     

    No, it's because the other method does not work, as you noted. It has nothing to do with efficiency and everything to do with competence. When our Darwinian selection doesn't work we turn to genetic algorithms. In terms of "efficiency", you don't know how fast our brains operate and therefore how many variations we flash through in a second in our own Darwinian selection.

     

    Lies. If we didn't know how it worked, it could not have been created by a genetic algorithm designed by us. The fitness function would have said it wouldn't work, and it would have been discarded.

     

    You should be careful about throwing around that word. Especially when you haven't read the article! The "fitness function" was recognition of a spoken word! Not a mathematical function, but an actual operation in the real world. Here, I'll quote from the article:

     

    "Thompson realised that he could use a standard genetic algorithm to evolve a configuration program for an FPGA and then test each new circuit design immediately on the chip. He set the system a task that appeared impossible for a human designer. Using only 100 logic cells, evolution had to come up with a circuit that could discriminate between two tones, one at 1 kilohertz and the other at 10 kilohertz.

     

    To kick off the experiment, Thompson created a population of 50 configuration programs on a computer, each consisting of a random string of 1s and 0s. The computer downloaded each program in turn to the FPGA to create its circuit and then played it the test tones (see Diagram, below). ... By generation 2800, the fittest circuit was discriminating accurately between the two inputs, but there were still glitches in its output. These only disappeared completely at generation 4100. After this, there were no further changes.

     

    Once the FPGA could discriminate between the two tones, it was fairly easy to continue the evolutionary process until the circuit could detect the more finely modulated differences between the spoken words "go" and "stop".

     

    So how did evolution do it? If a human designer, steeped in digital lore, were to tackle the same problem, one component would have been essential--a clock. The transistors inside a chip need time to flip between on and off, so the clock is set to keep everything marching in step, ensuring that no transistor produces an output between 0 and 1. A human designer would also use the clock to count the number of ticks between the peaks of the waves of the input tones. There would be 10 times as many ticks between the wave peaks of the 1 kilohertz tone as those of the 10 kilohertz tone.

     

    In order to ensure that his circuit came up with a unique result, Thompson deliberately left a clock out of the primordial soup of components from which the circuit evolved. Of course, a clock could have evolved. The simplest would probably be a "ring oscillator"--a circle of cells that change their output every time a signal passes through. It generates a sequence of 1s and 0s rather like the ticks of a clock. But Thompson reckoned that a ring oscillator was unlikely to evolve because it would need far more than the 100 cells available.

     

    So how did evolution do it--and without a clock? When he looked at the final circuit, Thompson found the input signal routed through a complex assortment of feedback loops. He believes that these probably create modified and time-delayed versions of the signal that interfere with the original signal in a way that enables the circuit to discriminate between the two tones. "But really, I don't have the faintest idea how it works," he says. ... That repertoire turns out to be more intriguing than Thompson could have imagined. Although the configuration program specified tasks for all 100 cells, it transpired that only 32 were essential to the circuit's operation. Thompson could bypass the other cells without affecting it. A further five cells appeared to serve no logical purpose at all--there was no route of connections by which they could influence the output. And yet if he disconnected them, the circuit stopped working.

    It appears that evolution made use of some physical property of these cells--possibly a capacitive effect or electromagnetic inductance--to influence a signal passing nearby. Somehow, it seized on this subtle effect and incorporated it into the solution."

     

    "It wasn't just efficient, the chip's performance was downright weird. The current through the chip was feeding back and forth through the gates, "swirling around," says Thompson, and then moving on. Nothing at all like the ordered path that current might take in a human-designed chip. And of the 32 cells being used, some seemed to be out of the loop. Although they weren't directly tied to the main circuit, they were affecting the performance of the chip. This is what Thompson calls "the crazy thing about it."

    Thompson gradually narrowed the possible explanations down to a handful of phenomena. The most likely is known as electromagnetic coupling, which means the cells on the chip are so close to each other that they could, in effect, broadcast radio signals between themselves without sending current down the interconnecting wires. Chip designers, aware of the potential for electromagnetic coupling between adjacent components on their chips, go out of their way to design their circuits so that it won't affect the performance. In Thompson's case, evolution seems to have discovered the phenomenon and put it to work.

    It was also possible that the cells were communicating through the power-supply wiring. Each cell was hooked independently to the power supply; a rapidly changing voltage in one cell would subtly affect the power supply, which might feed back to another cell. And the cells may have been communicating through the silicon substrate on which the circuit is laid down. "The circuit is a very thin layer on top of a thicker piece of silicon," Thompson explains, "where the transistors are diffused into just the top surface part. It's just possible that there's an interaction through the substrate, if they're doing something very strange. But the point is, they are doing something really strange, and evolution is using all of it, all these weird effects as part of its system."

     

    Notice that Thompson has only a "most likely" explanation. He doesn't know.
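    The procedure the quoted article describes (50 random configuration strings, score each one on the physical chip, breed the fittest) is the standard generate/evaluate/select loop. Here is a runnable sketch of that loop; the function names and genome length are mine, and the hardware evaluation is replaced by a made-up software score, since the whole point of Thompson's experiment is that only the real chip could judge the real circuits.

```python
import random

GENOME_BITS = 64  # Thompson's real configuration strings were far longer.

def evaluate_on_hardware(genome):
    """Placeholder for downloading the bitstring to the FPGA and scoring
    how well the resulting circuit discriminates the two tones.
    Faked here with a toy score (fraction of 1-bits) so the loop runs."""
    return sum(genome) / len(genome)

def breed(parent_a, parent_b, mutation_rate=0.02):
    # Single-point crossover plus occasional bit-flips, as in a standard GA.
    cut = random.randrange(len(parent_a))
    child = parent_a[:cut] + parent_b[cut:]
    return [b ^ 1 if random.random() < mutation_rate else b for b in child]

random.seed(42)
# 50 random configuration programs, as in the article.
population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(50)]
for generation in range(200):
    scored = sorted(population, key=evaluate_on_hardware, reverse=True)
    parents = scored[:10]  # truncation selection: breed only the fittest
    population = [breed(random.choice(parents), random.choice(parents))
                  for _ in range(50)]
best = max(population, key=evaluate_on_hardware)
print(evaluate_on_hardware(best))
```

    Nothing in the loop needs to understand how a good circuit works; it only needs to measure which ones score better, which is exactly why the final design can be inscrutable to its "designer".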

     

    Now, I'll let the ad hominem pass -- this once. Next time you use the word "liar" I'll report you. :mad:

     

    To be faster, stronger, and smarter, of course. And maybe glow in the dark. And maybe have gills. Maybe redesign ourselves to live in zero gravity. Whatever we want, really. I don't want to wait around for evolution/natural selection to do its thing.

     

    I noticed you used the word "want". I asked why we needed genetic changes. You responded with want. Different things. We don't need to be "faster, stronger, smarter, glow in the dark, have gills, redesign to live in zero gravity". We have technology that will do all that. IOW, you want to play god with the human species. You want to shape them in your image -- because you want to. Never mind the good of the species.

     

    Anyhow, we can't really pre-empt natural selection. We will just provide very good variability as compared to background mutation.

     

    How naive! What you are doing is replacing ALL the alleles with the ones you want. You are reducing variation, not increasing it. And again, "very good variability" is only within your narrow parameters of what is "good".

     

    I've yet to see any of that data. I think it is just your opinion.

     

    Other examples of where Darwinian selection was used when the design problem was too tough for humans to solve:

    1. MJ Plunkett and JA Ellman, Combinatorial chemistry and new drugs. Scientific American, 276: 68-73, April 1997. Summary of article: "By harnessing the creative power of Darwinian selection inside a test tube, chemists can now discover compounds they would not have known how to make. The key is combinatorial chemistry, a process that allows them to produce and screen millions of candidate molecules quickly and systematically."

    2. GF Joyce, Directed molecular evolution. Scientific American 267: 90-97,July 1994.

    3. AL Samuel, Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3: 211-219, 1959. Reprinted in EA Feigenbaum and J Feldman, Computers and Thought, McGraw-Hill, New York, 1964 pp 71-105.

    6. CW Petit, Touched by nature: putting evolution to work on the assembly line. US News and World Report, 125: 43-45, July 27, 1998. Uses "genetic algorithms" (cumulative selection) to get designs in industry. Boeing engineers had cumulative selection design a wing for them for a jet to carry 600 passengers but have a wing the same size as a 747's.

    9. FS Santiago, HC Lowe, MM Kavurma, CN Chesterman, A Baker, DG Atkins, LM Khachigian, New DNA enzyme targeting Egr-1 mRNA inhibits vascular smooth muscle proliferation and regrowth after injury. Nature Medicine 5: 1264-1269, 1999. Used Darwinian selection to design a DNA enzyme (not found in nature) that degrades mRNA, for use in treating hyperplasia after balloon angioplasty. Humans have no idea what the nucleotide sequence of the DNA enzyme is because they didn't make it; Darwinian selection did.

    10. Breaker RR, Joyce GF.A DNA enzyme that cleaves RNA. Chem Biol 1994 Dec;1(4):223-9

    11. Ronald R Breaker, Gerald FA Joyce DNA enzyme with Mg2+-dependent RNA phosphoesterase activity Chemistry & Biology 1995, 2:655-660.

     

    You'll especially like this one:

    13. http://www.discover.com/aug_03/gthere.html?article=feattech.html Use of Darwinian selection to evolve the ability to think in computers. If humans think so well, why use Darwinian selection to get the ability to think in computers?

     

    14. JR Koza, MA Keane, MJ Streeter, Evolving inventions. Scientific American, 52-59, Feb 2003. Check out http://www.genetic-programming.com

    15. A. Thompson, P. Layzell and R. S. Zebulum Explorations in Design Space: Unconventional electronics design through artificial evolution. IEEE Trans. Evol. Comp., Vol 3, No 3, (1999) http://www.informatics.sussex.ac.uk/users/adrianth/TEC99/paper.html

     

    Do mutations also reduce genetic variability? Eugenics, artificial selection, and natural selection decrease variability. Mutation and genetic engineering increase it.

     

    Natural selection promotes variation. You need to read Chapters 14, 15, and 22 in Futuyma's Evolutionary Biology for all the details. Or start a new thread. Genetic engineering decreases variations within populations. After all, every individual has the same genetically engineered alleles.

     

    I do not think everyone should have the same alleles. I think we should create new alleles ourselves rather than wait for background mutation to do it.

     

    Please quit saying I approve of removing our genetic variability.

     

    Then you are not promoting genetic engineering. Tell me, do genetically engineered crops have more variation than wild type? Of course not. Because the alleles in every individual are replaced.

     

    If you are going to introduce alleles for "faster, smarter, stronger", then isn't everyone going to have them? Would you restrict them to a few? How would you do that? If not restricted, then everyone would have to have them in order to be equal in the society, wouldn't they? Wouldn't there be job discrimination against those individuals without the "smarter" alleles?

     

    So what do you plan, several subpopulations of humans? One with gills, one with alleles for low gravity, one that can glow in the dark?

     

    And I can see us not engineering ourselves and still going extinct, like most species have. Or, not engineering ourselves and then being outcompeted/exterminated by humans who have engineered themselves to be stronger, faster, or smarter and decided that they wanted more land.

     

    Then there goes your "increased variation" argument, doesn't it? Everyone would have the alleles. Less variation. Yes, it's possible we might go extinct. But the odds are much, much less if we let natural selection keep the variation -- and thus our ability to use lots of environments -- rather than engineer us into one niche. So we are all adapted to zero g and we lose space travel. How do those individuals live in a gravity well?

     

    I've said this before and you didn't respond. To get smarter means a bigger brain or more brain cells packed into a smaller volume. Either way, the brain's requirement for energy would go up. That's OK now in our environment of plenty of food. But suppose global warming does play havoc with crops and suddenly we have a lot less food. Now we have everyone needing more calories per day than they did and the calories are not available. Congrats. You have put H. sapiens into a situation where it starves! Whereas if we had NOT gene engineered, there would have been a few people who may be "dumb" but could survive on fewer calories and thus, H. sapiens survives.

     

    The problem is:

     

    - Natural selection (and this is a strong argument against creationism) cannot plan in advance. It can push species into evolutionary dead ends, lead to the fixation of alleles which would be deleterious in the long run, et cetera.

     

    1. Natural selection does not plan long term, but neither is the genetic engineering being advocated by Mr. Skeptic.

    2. NS does have a short term plan -- adaptation to the environment. Since it can't predict future environments, there is no long term plan.

    3. The argument is that NS is less likely to push H. sapiens into an evolutionary dead end than genetic engineering is. Where we have used genetic engineering on crops, we are pushing them to an evolutionary dead end. We keep them going by our technology.

     

    Natural selection is not always there, it can be overwhelmed by drift, draft, mutational pressure..

     

    Natural selection is always there. Like you say, drift is always there. I know of no case where natural selection is "overwhelmed" by drift. Even in founder effects, where N=2, natural selection still works to adapt the population.

     

    Technically speaking, this is not true, but it's fair to say that Darwinian selection is probably involved in pretty much all cases of adaptation.

     

    Darwinian selection is being used for the process wherever it occurs. As Dennett shows in Darwin's Dangerous Idea, Darwinian selection is an algorithm to get design. Natural selection is one application of Darwinian selection and, yes, it is responsible for all adaptations (which are designs). I am claiming that human design is Darwinian selection operating within a brain.
