CharonY

Posts posted by CharonY

  1. 4 hours ago, J.C.MacSwell said:

    Something has to be done to break through the hate, and clearly Israel leaving Hamas in charge with their current agenda in place isn't going to do it, and no one else but Israel are both capable and willing to remove them from power.

    So two civilians in the way of a Hamas terrorist and they shoot to kill, 3 in the way they wait for a better shot. If they can't get a better one they eventually feel they have to take it. They've learned from a long history, including well before the Holocaust, that if they want to exist they have to rely on themselves.

    It's far from being right but what are they to do? What can anyone on either side wanting peaceful coexistence do? Explain that while pointing fingers.

    Not that fingers don't need pointed as well.

    Well, ultimately what has to happen is that the voices of consensus builders are elevated. I.e., with the Likud and Hamas in power (and by now it has been extensively discussed how Netanyahu's anti-two-state strategy has empowered Hamas), the cycle of violence is likely only to continue. The other aspect is that of outcomes. Sure, killing folks now eliminates them as an immediate risk, but with a longer view it is abundantly clear that this also creates a vast (international) recruitment ground for Hamas and their allies. I am not saying that doing nothing is a great strategy, but we also know that lashing out violently does not make things easier to solve (just take a look at the US wars in the Middle East).

    I think the Israeli policy of isolating the West Bank is also not to be underestimated as an issue, specifically the state-supported settler violence: https://www.npr.org/2023/11/14/1212836719/ex-idf-soldier-calls-for-international-intervention-to-stop-settler-violence

    Quote

    It wasn't long after Ori Givati became a combat soldier in the Israeli army in 2010 that he began to question his mission.

    He spent much of his time not acting on specific security threats in the Israeli-occupied West Bank, but making sure "all of the Palestinians feel like they cannot lift their heads up," he told Morning Edition's Leila Fadel. [...]

    I'm talking to you now probably in the most devastating time in my life as an Israeli, as an activist, as a person, human being seeing the atrocities of Oct. 7. You know, some of my family members were texting me from their basements that there are terrorists in their home. Luckily they survived, but that was the kind of text I received on Oct. 7. But at the end of the day, I think this is precisely what we have to remember when we talk about the concept of managing the conflict. The concept that we will maintain millions of people under our military occupation indefinitely—it failed.

    We have been saying it failed because we know how it works as soldiers who were sent to maintain it, to extend it, to entrench it. It failed before Oct. 7, but now more than ever, we know it failed. That means we have to change course. Because at the end of the day it doesn't matter how many Palestinians we will kill in this war in Gaza, there will be Palestinians in Gaza. It doesn't matter how many Palestinians we kill or arrest or settlers expel from their homes in the West Bank, there will be Palestinians in the West Bank. So, the only viable future here is to change course, we don't want to see more bloodshed. Of course Israel has the right to defend itself after the atrocities of Hamas. It's not a contradiction.

    In other words, the discussion cannot only be about the current violence, but also about the paths leading to it. Again, a blame game about who is justified in what level of violence just reinforces bloodshed. The system that was implemented supposedly to protect Israel has clearly failed, and there is little reason to assume that escalating the violence will improve the situation. As many folks have stated, this is similar to the US lashing out after 9/11 and, as expected, we fail to learn from past lessons.

  2. Normally some standard DNA for QA/QC is in place to ensure that the qPCR works as expected. However, rather than just looking at pos/neg, it is often worthwhile to have a more concentrated standard (e.g. synthetic target DNA) and run a dilution series to establish PCR efficiency. This is more important for quantitative approaches, but variation between batches of master mixes or probes is not that rare. So if you have minor changes in PCR efficiency and you are operating near the detection limit anyway, it might not be that unusual to dip in and out of the detection range (and anything >40 is almost certainly a false positive). In that context it is important to consider that we are not looking at a normal but rather a Poisson distribution, which is limited by sample volume (roughly speaking, you might have somewhere between 1-20 copies in your reaction).
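    To illustrate the dilution-series idea, here is a minimal sketch with made-up copy numbers and Ct values (not your data): fit Ct against log10 of the standard concentration; the slope gives the PCR efficiency, and large deviations from ~100% point toward master mix, probe or pipetting problems.

```python
import numpy as np

# Hypothetical 10-fold dilution series of a synthetic DNA standard
# (copy numbers and Ct values are purely illustrative).
copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2, 1e1])
ct = np.array([17.1, 20.5, 23.9, 27.2, 30.7, 34.2])

# Linear fit of Ct vs log10(copies); a slope of about -3.32 corresponds
# to 100% efficiency (perfect doubling per cycle).
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0

print(f"slope = {slope:.2f}, efficiency = {efficiency * 100:.1f}%")
# Efficiencies far outside ~90-110% hint at reagent or pipetting issues
# and also shift the effective detection limit.
```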

    Since you use FAM you won't have melt curves, but you could plot the fluorescence data to see whether you have amplification or perhaps issues with noise, which might justify a shift in thresholding. I.e. check whether you can see an exponential signal and where it sits relative to the noise.
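    If you want to do that check outside the instrument software, a rough sketch could look like the following (the fluorescence values are simulated here, purely for illustration): estimate the baseline noise from the early cycles and see where, if at all, the signal crosses a threshold set well above that noise.

```python
import numpy as np

# Simulate one noisy FAM amplification curve (illustrative only): baseline
# noise plus a sigmoid that only starts rising around cycle ~33.
rng = np.random.default_rng(0)
cycles = np.arange(1, 46)
signal = 1.0 / (1.0 + np.exp(-(cycles - 35) / 1.5))
fluorescence = 0.02 * rng.standard_normal(cycles.size) + signal

# Baseline noise from the early cycles; threshold set well above the noise band.
baseline = fluorescence[2:15]
threshold = baseline.mean() + 10 * baseline.std()

above = cycles[fluorescence > threshold]
cq = int(above.min()) if above.size else None
print(f"threshold = {threshold:.3f}, first cycle above threshold: {cq}")
# If the curve never rises clearly above the noise band, a "positive" call
# around cycle 40+ is more likely noise or non-specific signal.
```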

    Considering that you are using a fixed amount of starting material and have established protocols, one would expect fairly consistent results. But if the Ct of your actual samples is also around 30-35, it suggests that out of 13 million host cells you get somewhere around 1-100 Mycos, if I understand you correctly. Is that expected?

     

  3. 35 minutes ago, julius2 said:

    Further detail on evolution:

    My hypothesis looks at what has evolved

    horse / giraffe

    horse / elephant

    These animals are roughly similar (compared to say a bird). I envisage in evolution that there might have been a "base animal" one with 4 legs, a heart, lungs, neck etc

    But then this "base animal" evolved with a "helping hand" from Time, to be a horse. And with a "helping hand" the base animal evolved to be a giraffe.

    The contribution of earth is to provide an environment with which these "changing species" can evolve. i.e. even with a "helping hand" it would still take many years to see the diversity of animals we see today. To rely on genetic mutations, natural selection - just doesn't seem feasible.

    I have looked at a phylogenetic tree which seems to branch earth life into some grouping - vetebrata / chordata, arthropoda, mollusca.

    Just to give a view of diversity of life in this world:

    lizard / crocodile

    fish / whale / seal

    cat / lion / panther

    bird / eagle /hawk

    elephant / grizzly bear / polar bear / tiger / lion

     

    Seems like a convoluted way to state that all extant species have a common ancestor somewhere. And the helping hand is a combination of selection and chance. The main issue is the direction, i.e. the assumption that it is guided toward something. 

  4. On 12/1/2023 at 9:18 AM, iNow said:

    That's surely part of it, but it seems Altman was also rather ham-fisted in an attempt to oust one of the other board members after publishing an opinion piece about the openAI company itself. Altman was securing support for her ouster from other board members, and some of them say he did not represent them correctly to others when moving to find support. 

    But Q-star is certainly another massive leap in feature/function that will be worth watching

    It moves from language prediction instead to actual reasoning, which is new and VERY different

    This is also highlighted in the New Yorker piece here: https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai

    It is a bit worrisome that a company initially set up for ethical AI development combats attempts to develop a governance system for it. It looks like OpenAI is going down the Google "don't be evil" path: move fast, break things, and let others pay for it.

  5. 16 hours ago, Alfred001 said:

    Doesn't the CI mean that the actual effect is somewhere in that range? If so, why is it then not more likely, since so much of the range is above 1 that there is an effect?

    In fact, the CI for the 12< year follow up group is 0.92-2.20 - almost entirely 1<. Can't we say there that an effect existing is significantly more likely than not existing, although it not existing (or a protective effect existing) is a possibility as well?

    You can read up the interpretation of CIs here https://en.wikipedia.org/wiki/Confidence_interval

    Specifically:

    Quote
    • A 95% confidence level does not mean that for a given realized interval there is a 95% probability that the population parameter lies within the interval (i.e., a 95% probability that the interval covers the population parameter).[18] According to the frequentist interpretation, once an interval is calculated, this interval either covers the parameter value or it does not; it is no longer a matter of probability. The 95% probability relates to the reliability of the estimation procedure, not to a specific calculated interval.[19] Neyman himself (the original proponent of confidence intervals) made this point in his original paper:[10]

      It will be noticed that in the above description, the probability statements refer to the problems of estimation with which the statistician will be concerned in the future. In fact, I have repeatedly stated that the frequency of correct results will tend to α. Consider now the case when a sample is already drawn, and the calculations have given [particular limits]. Can we say that in this particular case the probability of the true value [falling between these limits] is equal to α? The answer is obviously in the negative. The parameter is an unknown constant, and no probability statement concerning its value may be made...

    • A 95% confidence level does not mean that 95% of the sample data lie within the confidence interval.
    • A 95% confidence level does not mean that there is a 95% probability of the parameter estimate from a repeat of the experiment falling within the confidence interval computed from a given experiment.[16]
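    To make the frequentist reading concrete, here is a small generic simulation sketch (not related to the study): the "95%" describes how often the interval-building procedure captures the true parameter across many repeated samples, not the probability for any single computed interval.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
true_mean, n, reps = 5.0, 30, 10_000  # arbitrary illustrative values

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, 2.0, n)
    se = sample.std(ddof=1) / np.sqrt(n)
    t_crit = stats.t.ppf(0.975, df=n - 1)
    lo, hi = sample.mean() - t_crit * se, sample.mean() + t_crit * se
    covered += lo <= true_mean <= hi   # does this interval cover the truth?

print(f"coverage over {reps} repeats: {covered / reps:.3f}")  # ~0.95
# Any single interval either contains the true mean or it does not;
# the 95% refers to the long-run behaviour of the procedure.
```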

     

    16 hours ago, Alfred001 said:

    Why wouldn't it be if the effect is strong enough or the sample large enough?

    Because a) in terms of safety we only look for certain defined endpoints (e.g. death, cancer, etc.), so other potential effects can easily be missed, and b) experiments are set up to test the null (i.e. no effect), so it is not really possible to calculate the likelihood of there being no effect.

    For the extremes and for the short term you can establish a measure of safety (i.e. no one dying within 6 months of taking a medication). But if you want to look at all effects (liver, kidney, inflammation, immune modulation, cardiovascular health, and so on) or at effects in the long term, confounders play an increasingly bigger role (such as diet, lifestyle, age, health status, etc.). Controlling for all these factors is near impossible (there would be a near-infinite list to track for each person). I brought up the issue of diet, which has had huge cohorts and long-term data over the years, yet the effects have not been reproducible.

  6. 1 hour ago, Alfred001 said:

    So it's actual incidence of cancer, divided by person years in the study and then multiplied by 100 000? I'm not sure what you mean by normalization.

    Yes. Normalization generally means adjusting the value to a certain scale. In this case to 100,000 person-years.
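    Purely as an illustration (made-up numbers, not the study's), the arithmetic is just:

```python
# Illustrative numbers only, not taken from the study.
cancer_cases = 120        # events observed in a cohort
person_years = 250_000    # sum of every person's follow-up time in years

rate_per_100k_py = cancer_cases / person_years * 100_000
print(rate_per_100k_py)   # 48.0 cases per 100,000 person-years
```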

     

    1 hour ago, Alfred001 said:

    Why not just explain what you mean instead of getting petty?

    Because you constantly claim that you understood things perfectly, yet your questions clearly show that you don't (especially regarding basic definitions). While I am happy to teach, it is very difficult if you do not realize that you have to revise your basic assumptions. And frankly, I get enough entitlement from my students; a direct challenge often shortens things a fair bit.

     

    1 hour ago, Alfred001 said:

    Ok, but the most we can say from this study, based on that CI, is that we don't know whether there is a greater risk in metro users, not that there isn't one, and in fact, given that the great majority of the interval lies above 1, it seems much more likely that there is one than that there isn't one, no?

    No, if a difference is not significant it means that the distributions are not distinguishable from each other. It does not matter whether the mean or the CI is skewed in one direction or another; that is not something the test can tell us.

    If there is a trend, the statistical power of the cohort is insufficient to show it (and/or the effect size is too small). Also, one thing to consider is that the cohort over 15 years is likely older, and increasingly other confounders influence cancer risk, as acknowledged in the study.

    As noted, there are not really many studies that are set up to prove a non-effect (safety is usually assessed in clinical trials), and there is basically no way for any treatment to do that conclusively, especially when looking at long-term effects. What studies can do is try to detect effects (as this one does) while controlling for a set number of factors. The complexity of the matter is also why we have not figured out the perfect healthy food, for example. Likewise, there won't be risk-free medication. All we have is the weight of available evidence, never certainty. Also, it is often the case for weak effects that some studies show an effect and others don't, so evidence for one side of the argument or the other piles up over time until a tipping point for action is reached. So far the available studies show no outsized role of metronidazole in short-term harm (compared to other antibiotics), increasing evidence of general carcinogenic effects of long-term treatment with antibiotics, but also no true alternatives to antibiotic treatment.

  7. 22 hours ago, Alfred001 said:

    Yes, I understood that part. It's still the people getting the cancers. Doesn't change what's strange about it.

    It is not strange. Do you understand what person-years are and why that number is going to be larger than the number of persons? (Specifically, look at how many person-years there are in aggregate and what they normalized against.)

     

    22 hours ago, Alfred001 said:

    I understood that as well, I'm not sure what you're getting at with this part.

    If you did, then there is no reason to be confused about it. So either you did not understand it and got confused, or you understood it and are pretending to be confused. Which is it?

     

    22 hours ago, Alfred001 said:

    Yes, but that's what I'm asking, how is it possible to have as large a sample as they did and find as large an effect as they did and for it to be down to chance? 608 pairs in 15+ years and over twice as many cancers in the metro group, how could that be chance?

    Again, these are person-years, not numbers of cancer incidences. What they calculate are proportional hazard ratios, likely age-stratified (I do not recall). So the attributable risk ratio (as you acknowledge) has a huge spread, looking at the CI, and so obviously the p-value is going to be high. This is the whole purpose of statistical tests: so that we don't just look at the higher number and make inappropriate assumptions. Remember, these are matched pairs, and what it suggests is that there is going to be a big spread, which is really to be expected for rare conditions such as cancer (with a big impact from the age bracket of the matched pairs).
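    To illustrate why rare events produce such a spread (again with made-up numbers, not the study's data): even when the true risk is identical in both arms, the observed ratio of case counts jumps around a lot when only a handful of cases are expected.

```python
import numpy as np

rng = np.random.default_rng(1)
# Illustrative values: 608 subjects per arm, identical 1% true risk in both.
n_per_arm, true_risk, reps = 608, 0.01, 10_000

exposed = rng.binomial(n_per_arm, true_risk, reps)
control = rng.binomial(n_per_arm, true_risk, reps)
ratio = exposed / np.maximum(control, 1)   # avoid division by zero

print(f"median observed ratio: {np.median(ratio):.2f}")
print(f"runs with ratio >= 2: {(ratio >= 2).mean():.1%}")
# With only ~6 expected cases per arm, two-fold imbalances arise by chance
# in a sizeable fraction of runs, which is why the CI ends up so wide.
```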

    But if you are genuinely interested in statistical analyses, I suggest digging out a good textbook (and to be honest, in order to fully recapitulate the methodology I would need to do so, too, to avoid errors; I rarely use risk ratio calculations in matched cohorts).

     

    With regard to the other antibiotics: tetracycline, penicillin and nitrofuran have some history of being associated with breast cancer specifically. There have been discussions of how microflora disruptions can influence the immune system and modulate estrogen levels. But as mentioned before, there is also increasing awareness that long-term disruptions of the gut microbiota in particular, really with any antibiotic, are likely to have some impact (though the effects are likely to be very complicated; in some cases antibiotics are part of the cancer therapy).

    Ultimately, you won't find a perfectly safe antibiotic; they all carry risks. And trying to quantify them is not going to be terribly useful unless there is a huge effect size to be measured. The reason is also clear: our bodies (and microbiota) depend on an uncountable number of things that we accumulate over our lifetime. Some antibiotics might be fairly safe for some individuals, but if the same individuals take a certain drug, have a certain lifestyle or happen to have a specific type of infection, the risk on the individual level might skyrocket. There is simply no reasonable way to capture all this diversity.

    So all we can do is look at rough aggregates, and there, small differences rarely matter as the spread (or CI) is going to be very broad anyway. With regard to ABs, the most important aspect is whether they work in the first place. I.e. folks look at local resistance profiles and prescribe ABs that work. The secondary aspect is then to check whether the patient has any immediate adverse reactions to them. Long-term concerns are not unimportant, but they are generally secondary unless a smoking-gun study emerges. That will take time, and if we wait for it before treating immediate issues, we will do more harm than good.

  8. 3 hours ago, Alfred001 said:

    1,336 and 564 cancers among users and non-users in at least 15 years of followup? Doesn't that mean there were 1900 cancers in 1219 people???

    No, you misread the metric. The measure is per 100,000 person-years, not persons.

    The second thing you likely missed is the study design section, where they describe their follow-up. Specifically, the start date is when the first dose is dispensed (for the user group). The follow-up ends either with the latest known consultation or the first diagnosed case of cancer.

    3 hours ago, Alfred001 said:

    And then thirdly, 2.38x more cancer among metro users, how is that not significant? Ok, I see that the CI ranges from sub-1 to 6.12, but isn't that CI so wide as to be meaningless? And how likely is it that a 2.38x difference in 15 years is just down to chance???

    Because the result was statistically insignificant (p = .11). The statistical power of that cohort (i.e. folks who were cancer-free for over 15 years and remained enrolled in the program) is just too low to be sure that it was not a statistical fluke.
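    As a rough sanity check, one can back-calculate the uncertainty from the numbers quoted above using a normal approximation on the log scale (a sketch only; the study almost certainly used an exact or stratified method, so this will not reproduce the reported p of .11 exactly):

```python
import numpy as np
from scipy import stats

# Point estimate and upper CI bound as quoted in the thread; the lower
# bound is only described as "sub-1", so this is an approximation.
hr, upper = 2.38, 6.12

se_log = (np.log(upper) - np.log(hr)) / 1.96   # SE of log(HR), normal approx.
z = np.log(hr) / se_log
p = 2 * (1 - stats.norm.cdf(abs(z)))

print(f"approx. lower CI bound: {np.exp(np.log(hr) - 1.96 * se_log):.2f}")
print(f"z = {z:.2f}, approx. two-sided p = {p:.2f}")
# A 2.38-fold ratio with this much uncertainty is still compatible with
# chance, which is why the result does not reach significance.
```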

    The caveats are pretty much standard: having more data is of course better, but that is often not feasible, and nearly impossible for multi-year studies. Keeping folks in these programs is very, very difficult.

     

     

  9. 1 hour ago, Alfred001 said:

    The claim about other ABs causing cancer is based on two studies alone. The nurse study and the clarithromycin study. The CLA study looked at all cause mortality, not cancer and median followup was 3 years, so you're not gonna detect any cancers with that.

    No, there is far more evidence than that, dating back to the 80s. The effect size is overall weak, but it shows up fairly persistently in multiple human cohorts.

    A review summarizing some of those studies:

    https://doi.org/10.3390/cancers11081174

    A random selection of papers:

    Breast cancer and antibiotics, early study here: doi:10.1001/jama.291.7.827. Other studies found mild effects, but there are mechanistic hypotheses underpinning this relationship: https://doi.org/10.3390/cells8121642

    Relationship of AB use and colon cancer: http://dx.doi.org/10.1136/gutjnl-2016-313413; https://doi.org/10.1007/s10620-015-3828-0

    Discussion of the role of microbiota, antibiotics and cancer https://doi.org/10.1016/j.ejca.2015.08.015

    And the list goes on. 

    Stating that there are only two papers seriously misunderstands the literature. Also, considering that these effects keep popping up in various studies, the link between ABs and cancer in humans is far stronger than any short-term effect exclusive to metronidazole.

    1 hour ago, StringJunky said:

    Can I ask why you are so invested in this subject? It seems to be beyond intellectual curiosity.

    I echo this sentiment. It is unclear how this particular AB is assumed to be vastly different in risk compared to all the others.

     

  10. Yet their conclusion remains that sensible use of metronidazole is backed by evidence, in part because there is no strong evidence of added risk from metronidazole over other antibiotics for short-term use:

    Quote

    Results: At present, metronidazole resistance has not been a serious issue in Japan in large part due to its restricted use. Emerging evidence from randomized controlled trials demonstrates higher eradication rates for metronidazole than for clarithromycin, supporting its use in both first‐line and second‐line eradication therapies. Among the reported adverse effects, there has been lingering concern over the potential carcinogenicity of metronidazole in humans. However, the possibility of an increased cancer risk is not limited to metronidazole; the long-term use of antibiotics has been linked to increased risk for some site-specific cancers. However, recent prospective studies have suggested that short-term exposure to antibiotics is not associated with an increased cancer risk.
    Conclusion: Sensible use of metronidazole backed by research evidence could maximize the benefits associated with H pylori eradication in Japan.

    Also from the same paper:

    Quote

    Allowing for methodological limitations of the epidemiologic studies exploring the carcinogenic effects of metronidazole on humans, it can be concluded that the concern regarding the increased risk of cancer seems to not be limited to metronidazole; long-term antibiotic use may be associated with an increased risk of certain site-specific cancers. However, the increase in the risk of cancer associated with a short period of exposure to metronidazole, such as the 1-week period of use for H pylori eradication, is negligible.

     

  11. OK, you are doing purity checks, so high cutoffs make sense here. I assumed you had the issue even when using pure standards at higher concentrations. High Ct values beyond 40 are generally unspecific signals; i.e. non-targets could be amplified, probes break down, etc. The first thing is to check the amplification curves for shape: do the thresholds make sense?

    If running SYBR, check the melt curves. One can also run a gel to see what has been amplified, or send it for sequencing.

  12. 2 hours ago, AIkonoklazt said:


    We confirm this by doing stuff so that whenever A happens, B seems to always happen after A. Do this a lot of times (more than a few), and this is become somewhat of a "law."

    At that stage we would still think of it as correlation. At a minimum, we need a model that explains why A causes B.

  13. 3 hours ago, Phi for All said:

    What's the centrist view on LGBTQA rights? Do they think those folks should have rights some of the time? I know the Democrats have been suggesting that rights belong to everyone all the time, and the right thinks only a few people deserve them, but what's the centrist view?

    Perhaps no rights for them unless they are one of the good ones? Nudge nudge wink wink?

  14. The first thing to do is to talk to your supervisor to check what kind of quality control you are using and what the expected results are. From your description it is not clear, for example, whether your standards are extracted DNA of known quantity. A Ct of 45 is pretty much unspecific, and 35 is close to what is generally the detection limit (so roughly <10 genomic copies).

    It is advisable to start with pure DNA standards and establish a calibration curve, or at least a detection limit and PCR efficiency, so that you have an idea of what to expect. Then work your way backward to the isolation steps.
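    As a sketch of what such a calibration curve buys you (the standard Ct values below are made up): once you have the regression line, you can back-calculate roughly how many copies a given sample Ct corresponds to, which immediately shows why a Ct of 45 is not a credible detection.

```python
import numpy as np

# Hypothetical calibration data from pure DNA standards (illustrative only).
log10_copies = np.array([6, 5, 4, 3, 2, 1], dtype=float)
ct_standards = np.array([16.8, 20.2, 23.6, 27.1, 30.5, 34.0])

slope, intercept = np.polyfit(log10_copies, ct_standards, 1)

def copies_from_ct(ct):
    """Back-calculate the copy number for a sample Ct via the standard curve."""
    return 10 ** ((ct - intercept) / slope)

for ct in (35.0, 45.0):
    print(f"Ct {ct}: ~{copies_from_ct(ct):.3f} copies")
# Ct 35 corresponds to only a handful of copies (near the detection limit);
# Ct 45 extrapolates to far less than one copy, i.e. almost certainly
# non-specific signal rather than target.
```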

  15. I will also add that Europe is not a political monolith; there are quite significant differences in social policies (though the US sticks out with its healthcare system). Also, homelessness in particular is a poor example; in the US it is actually fairly low (0.18%). Of course, there can be issues in how homeless folks are counted, and collected data can be out of date. But that being said, the US is comparable to France (0.22%). Canada is doing worse with 0.36%.

    Germany has higher levels (0.41%) and the UK is around 1%. However, the latter countries also include folks threatened by homelessness or in extremely insecure conditions, which will skew the levels upward.

  16. It depends largely on what we are talking about. Very simple (and somewhat thin) organs, like bladders, might be within that timeline. But complex organs in which multiple tissues are involved in dynamic processes are still mostly at the dream stage.

    The strength of 3D bioprinting is really in creating shapes, and even then the mechanical stability can be challenging. Making them move and do complicated things reliably, that is the really, really hard part.

  17. 44 minutes ago, J.C.MacSwell said:

    Replacing the Argentinian Peso with the USD could certainly tame the mess of the 143% inflation...though of course they abdicate control of any monetary policy and make their Central bank redundant

    Be interesting to see how that works out.

    IIRC Argentina pegged its peso to the USD in the 90s (because of hyperinflation). One could look at the outcomes back then.

  18. Number 0: I am thinking of getting a psych eval. Seeing and talking to higher beings is something that would worry me quite a bit, especially combined with thinking of shooting as the first course of action.

  19. Ok, so there is at least some hint of pipetting errors in the qPCR (a delta of 1 Ct is about a 2-fold change, which should not be happening between replicates). But that does not seem to be an issue with the GAPDH. Of course, we do not know whether it is realistic for your target to be about 1000-fold different in abundance, nor whether the differences are caused by differential extraction (I would consider that somewhat unlikely, though).
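    For reference, the arithmetic behind that statement (assuming roughly 100% efficiency; a generic sketch, not your data):

```python
# A difference of 1 Ct corresponds to roughly a 2-fold change at ~100% efficiency.
def fold_change(delta_ct, efficiency=1.0):
    """Fold change implied by a Ct difference for a given PCR efficiency."""
    return (1.0 + efficiency) ** delta_ct

print(fold_change(1))    # 2.0   -> should not occur between technical replicates
print(fold_change(10))   # 1024  -> a ~10 Ct shift is about a 1000-fold difference
```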

    The main things I would suspect fall into the areas of sample handling and potential assay issues. A few things to check include:

    - quality of mRNA samples

    - was it a 2-step? Can there be issues there?

    - are the protocols well-established for the specific primer combinations? What are the PCR efficiencies for them?

    - is there a possibility of contamination? What levels do you typically have for an extract of your control sample?

    - is there a possibility of degradation?

     

  20. First, let me apologize for not downloading a document from a first-time poster, due to security concerns. But I think most of the issues can be diagnosed within a post (or with screenshots, if needed). The differences you are seeing are massive (about 1000-fold), so there is a good chance that we are not looking at a biological but rather at an analytical and/or pre-analytical issue.

    Housekeeping genes are not really as universal as they are sometimes claimed to be, but that level of change is extremely unusual. So the most likely scenarios are issues during sample prep and/or the qPCR itself. You mentioned that the Ct of your gene of interest remained stable. What is the Ct/Cq? The next thing is to inspect your curves: do you have stable amplification for all your targets? Are you using probes? If not, you could inspect the melting curves.

    Also, what is the variance of your results?
