
CharonY

Moderators · Posts: 12589 · Days Won: 123

Everything posted by CharonY

  1. This is also very much the lesson you get, growing up in Germany (well, perhaps not anymore; things are changing, unfortunately). But ultimately the perpetrators were the (great-)grandparents of the folks in class. It is trivially easy to understand that Nazis are not something alien and evil. And over the last decade or so we have seen the allure of fascism in the Western world, suggesting that the lessons were not learned.
  2. Well, it has been described (by Jewish journalists and researchers) as Jewish supremacist and fascist group. That's two for two already.
  3. There are difficult paths and none that are obvious. If the war has any chance of securing long-lasting peace, one might make the tenuous argument that the civilian deaths might have been worth it. Yet everything points to further escalation, so my question is now: what purpose does it serve besides ringing the bell for the next round of deaths? Ehud Barak has discussed the need for a path toward a two-state solution, which includes opening lines with the Palestinian Authority and showing that there is an alternative to Hamas. Instead, the settler violence in the West Bank tells folks that whatever you do, you are screwed. And at that point a blaze of glory might just sound right. And off we go to another round of bloodshed. Because we cannot think beyond an eye for an eye. Unfortunately, we have been blind for some time now.
  4. Just to clarify: what you indicate as standard in your list are the extracts from counted cells? And the difference you see is e.g. between B1 and B2, which contain template from the same sample? If the differences are all >35, you are likely looking at noise (again, check the curve to verify). However, if they are e.g. 30 and 35, and nothing is suspicious with your MM, then the most likely candidates are pipetting errors of the template. Is there a trend (i.e. is the first typically higher/lower than the other) or is it random? As part of your SOP, are you using low-binding filter tips?
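To put a number on such a duplicate discrepancy: at roughly 100% PCR efficiency each cycle is one doubling of template, so a Ct gap translates directly into an apparent fold difference in input. A minimal sketch (the well values are hypothetical, matching the 30 vs. 35 example above):

```python
# Rough check of what a Ct difference between duplicate wells implies.
# Assumes ~100% PCR efficiency (amplification factor of 2 per cycle);
# the example Ct values are hypothetical.

def fold_difference(ct_a: float, ct_b: float, efficiency: float = 1.0) -> float:
    """Apparent fold difference in starting template between two wells."""
    amp_factor = 1.0 + efficiency  # 2.0 at 100% efficiency
    return amp_factor ** abs(ct_a - ct_b)

# Duplicates at Ct 30 vs 35 imply ~32-fold more template in one well --
# far too large for true technical replicates, pointing at pipetting.
print(round(fold_difference(30, 35)))        # -> 32

# Duplicates at Ct 33.0 vs 33.5 imply only ~1.4-fold: plausible noise.
print(round(fold_difference(33.0, 33.5), 2)) # -> 1.41
```

A 5-cycle gap between duplicates is far beyond any pipetting tolerance for true technical replicates, whereas a half-cycle gap is unremarkable.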
  5. Well, ultimately what has to happen is that the voices of consensus builders are elevated. I.e. with the Likud and Hamas in power (and by now it has been extensively discussed how Netanyahu's anti-two-state strategy has empowered Hamas), the cycle of violence is likely only to continue. The other aspect is the one of outcome. Sure, killing folks now eliminates them as an immediate risk, but with a longer view it is abundantly clear that this also creates a vast (international) recruitment ground for Hamas and their allies. I am not saying that doing nothing is a great strategy, but we also know that a violent outburst does not solve things either (just take a look at the US wars in the Middle East). I think the Israeli policy of isolating the West Bank is also not to be underestimated as an issue, specifically the state-supported settler violence: https://www.npr.org/2023/11/14/1212836719/ex-idf-soldier-calls-for-international-intervention-to-stop-settler-violence In other words, the discussion cannot only be about the current violence, but also the paths leading to it. Again, a blame game about who is justified to what level of violence just reinforces bloodshed. The systems that were implemented supposedly to protect Israel have clearly failed, and there is little reason to assume that escalating the violence will improve the situation. As many folks have stated, this is similar to the US lashing out after 9/11 and, as expected, we fail to learn from past lessons.
  6. Normally some standard DNA is in place for QA/QC to ensure that the qPCR works as expected. However, rather than just looking at pos/neg, it is often worthwhile to have a more concentrated standard (e.g. synthetic target DNA) and run a dilution series to establish PCR efficiency. This is more important for quantitative approaches, but variation between batches of master mixes or probes is not that rare. So if you have minor changes in PCR efficiency and you are operating near detection limits anyway, it might not be that unusual that you dip in and out of the detection range (and anything >40 is almost certainly a false positive). In that context it is important to consider that we are not looking at a normal, but rather a Poisson distribution, which is limited by sample volume (roughly speaking, you might have somewhere between 1-20 copies in your reaction). Since you use FAM you won't have melt curves, but you could plot the fluorescence data to see whether you have amplification or perhaps issues with noise, which might justify a shift in thresholding. I.e. check whether you can see an exponential signal and where it sits relative to the noise. Considering that you are using a fixed amount of starting material and have established protocols, one would expect fairly consistent results, but if the Ct of your actual samples is also around 30-35, it suggests that out of 13 million host cells you get somewhere around 1-100 Mycos, if I understand you correctly. Is that expected?
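The dilution-series approach above can be sketched as follows: fit Ct against log10(input copies) and derive efficiency from the slope (a perfect doubling per cycle gives a slope of about -3.32, i.e. 100% efficiency). The Ct values below are hypothetical; only the method is the point.

```python
# Estimating PCR efficiency from a dilution series of a synthetic
# standard. Uses a plain least-squares fit of Ct vs log10(copies),
# so no external libraries are needed. Example Ct values are made up.
import math

# 10-fold dilution series: (input copies, measured Ct)
dilution_series = [(1e6, 18.1), (1e5, 21.5), (1e4, 24.8), (1e3, 28.2), (1e2, 31.6)]

x = [math.log10(copies) for copies, _ in dilution_series]
y = [ct for _, ct in dilution_series]
n = len(x)
slope = (n * sum(xi * yi for xi, yi in zip(x, y)) - sum(x) * sum(y)) / \
        (n * sum(xi * xi for xi in x) - sum(x) ** 2)

# Efficiency from the standard-curve slope: E = 10^(-1/slope) - 1
efficiency = 10 ** (-1.0 / slope) - 1
print(f"slope = {slope:.2f}, efficiency = {efficiency:.0%}")
# -> slope = -3.37, efficiency = 98%
```

Efficiencies well outside roughly 90-110% (slopes far from -3.32) would point at assay or batch problems rather than sample-side issues.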
  7. Seems like a convoluted way to state that all extant species have a common ancestor somewhere. And the helping hand is a combination of selection and chance. The main issue is the direction, i.e. the assumption that it is guided toward something.
  8. *Sigh* need to make faster progress on the time machine, then.
  9. This is also highlighted in the New Yorker piece here: https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai It is a bit worrisome that a company initially set up for ethical AI development combats attempts to develop a governance system for it. It looks like openai is going down the Google "don't be evil" path. Move fast, break things and let others pay for it.
  10. You can read up on the interpretation of CIs here: https://en.wikipedia.org/wiki/Confidence_interval Specifically: A 95% confidence level does not mean that 95% of the sample data lie within the confidence interval. A 95% confidence level does not mean that there is a 95% probability of the parameter estimate from a repeat of the experiment falling within the confidence interval computed from a given experiment. Because a) in terms of safety we only look for certain defined endpoints (e.g. death, cancer, etc.), so potential other effects can easily be missed, and b) experiments are set up to test the null (i.e. no effect), so it is not really possible to calculate the likelihood of no effect. For the extremes and for the short term you can establish a measure of safety (i.e. no one dying within 6 months of taking a medication). But if you want to look at all effects (liver, kidney, inflammation, immune modulation, cardiovascular health, and so on) or at effects in the long term, confounders will play an increasingly bigger role (such as diet, lifestyle, age, health status, etc.). Controlling for all these factors is near impossible (there would be a near-infinite list to track for each person). I brought up the issue of diet, which over the years has had huge cohorts and long-term data, yet the effects have not been reproducible.
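The repeated-experiments reading of a CI can be demonstrated with a short simulation (the numbers are arbitrary; the point is that the ~95% "coverage" is a property of the procedure over many hypothetical experiments, not of any single interval):

```python
# Simulate many repeated experiments; count how often the computed
# 95% CI for the mean actually contains the true population mean.
import random
import statistics

random.seed(42)
TRUE_MEAN, N, RUNS = 10.0, 30, 2000
covered = 0
for _ in range(RUNS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N)]
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    lo, hi = m - 1.96 * se, m + 1.96 * se  # normal approximation
    if lo <= TRUE_MEAN <= hi:
        covered += 1
print(f"coverage: {covered / RUNS:.1%}")  # close to 95%
```

Any single interval either contains the true value or it doesn't; only the long-run fraction is controlled.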
  11. Yes. Normalization generally means adjusting a value to a certain scale, in this case to 100,000 person-years. Because you constantly claim that you understood things perfectly, yet your questions clearly show that you don't (especially basic definitions). While I am happy to teach, it is very difficult if you do not realize that you have to revise your basic assumptions. And frankly, I get enough entitlement from my students, and a direct challenge often shortens things a fair bit. No, if a difference is not significant it means that the distributions are not distinguishable from each other. It does not matter if the mean or CI is skewed in one direction or another; that is not something the test can tell us. If there was a trend, the statistical power of the cohort is insufficient to show it (and/or the effect size is too small). Also, one thing to consider is that the cohort over 15 years is likely older, and increasingly other confounders influence cancer risk, as acknowledged in the study. As noted, there are not really many studies set up to prove a non-effect (safety is usually assessed in clinical trials), and there is basically no way for any treatment to do that conclusively, especially when looking at long-term effects. What studies can do is try to see effects (as this one does) while controlling for a set number of factors. The complexity of the matter is also why we have not figured out the perfect healthy food, for example. Likewise, there won't be risk-free medication. All we have is the weight of available evidence, never certainty. Also, it is often the case for weak effects that some studies show an effect and others don't. So evidence on one or the other side of the argument will pile up over time until a tipping point for action is reached. 
So far the available studies show no outsized role of metronidazole in short-term harm (compared to other antibiotics), increasing evidence of general carcinogenic effects of long-term treatment with antibiotics, but also no true alternatives to antibiotic treatment.
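For the normalization point above (adjusting counts to a scale of 100,000 person-years), a minimal illustration with made-up numbers:

```python
# Incidence rates use accumulated follow-up time, not head count,
# as the denominator. All numbers here are invented for illustration.

def rate_per_100k_person_years(cases: int, person_years: float) -> float:
    """Incidence rate normalized to 100,000 person-years of follow-up."""
    return cases / person_years * 100_000

# e.g. 150 cancer cases observed over 120,000 accumulated person-years
# (say, 12,000 people followed for an average of 10 years each):
print(rate_per_100k_person_years(150, 120_000))  # -> 125.0
```

This is why the person-year figures in such studies are much larger than the number of participants: each enrolled person contributes one person-year per year of follow-up.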
  12. It is not strange. Do you understand what person-years are and why that number is going to be larger than the number of persons (specifically, look at how many person-years there are in aggregate and what they normalized it against)? If you did, then there is no reason to be confused about it. So either you did not understand and got confused, or you understood and pretend to be confused. Which is it? Again, these are person-years, not numbers of cancer incidences. What they calculate are proportional hazard ratios, likely age-stratified (I do not recall). So the attributable risk ratio (as you acknowledge) had a huge spread, looking at the CI. So obviously the P is going to be high. This is the whole purpose of statistical tests, so that we don't just look at the higher number and make inappropriate assumptions. Remember, these are matched pairs, and what it suggests is that there is going to be a big spread, which is really expected for rare conditions such as cancer (with a big impact from the age bracket of the matched pairs). But if you are genuinely interested in statistical analyses, I suggest digging out a good textbook (and to be honest, in order to fully recapitulate the methodology I would need to do so, too, to avoid errors; I rarely use risk ratio calculations in matched cohorts). With regard to the other antibiotics: tetracycline, penicillin and nitrofuran have some history of being associated with breast cancer specifically. There have been discussions of how microflora disruptions can influence the immune system and modulate estrogen levels. But as mentioned before, there is also increasing awareness that especially long-term disruptions of the gut microbiota, really with any antibiotic, are likely to have some impact (though the effects are likely to be very complicated; in some cases antibiotics are part of the cancer therapy). Ultimately, you won't find a perfectly safe antibiotic; they all carry risks. 
And trying to quantify them is not going to be terribly useful unless there is a huge effect size to be measured. The reason is also clear: our bodies (and microbiota) depend on an uncountable number of things that we accumulate over our lifetime. Some antibiotics might be fairly safe for some individuals, but if the same individuals take a certain drug, have a certain lifestyle, or happen to have a specific type of infection, the risk on the individual level might skyrocket. There is simply no reasonable way to capture all this diversity. So all we can do is look at rough aggregates, and there, small differences rarely matter as the spread (or CI) is going to be very broad anyway. With regard to ABs, the most important aspect is whether they work in the first place. I.e. folks look at local resistance profiles and prescribe ABs that work. The secondary aspect is then to look at whether the patient has any immediate adverse reactions to them. Long-term concerns are not unimportant, but are generally secondary unless a smoking-gun study emerges. But that will take time. And if we wait for them before treating immediate issues, we will do more harm than good.
  13. No, you misread the metric. The measure is per 100,000 person-years, not persons. The second thing you likely missed is reading through the study design, where they describe their follow-up. Specifically, the start date is when they get their first dose dispensed (for the user group). The follow-up ends either with the latest known consultation or the first diagnosed case of cancer. Because the result was statistically insignificant (p = .11). The statistical power of that cohort (i.e. folks that were cancer-free for over 15 years and remained enrolled in the program) is just too low to be sure that it was not a statistical fluke. The caveats are pretty much standard; having more data is of course better, but often not feasible and often nearly impossible for multi-year studies. Keeping folks in these programs is very, very difficult.
  14. No, there is far more evidence of that, dating back to the 80s. The effect size is overall weak, but shows up fairly persistently in multiple human cohorts. A review summarizing some of those studies: https://doi.org/10.3390/cancers11081174 A random selection of papers: an early breast cancer and antibiotics study here: doi:10.1001/jama.291.7.827 Other studies found mild effects, but there are mechanistic hypotheses underpinning this relationship: https://doi.org/10.3390/cells8121642 Relationship of AB use and colon cancer: http://dx.doi.org/10.1136/gutjnl-2016-313413; https://doi.org/10.1007/s10620-015-3828-0; Discussion of the role of microbiota, antibiotics and cancer: https://doi.org/10.1016/j.ejca.2015.08.015 And the list goes on. Stating that there are only two papers is a serious misunderstanding of the literature. Also, considering that these effects keep popping up in various studies, the link between ABs and cancer in humans is far stronger than any short-term effect exclusive to metronidazole. I echo this sentiment. It is unclear how this particular AB is assumed to be vastly different in risk compared to all the others.
  15. Yet their conclusion remains that sensible use of metronidazole is backed by evidence, in part because there is no strong evidence of added risk from metronidazole over other antibiotics for short-term use: Also from the same paper:
  16. OK, you are doing purity checks; high cutoffs make sense here. I assumed you had the issue even when using pure standards with higher concentrations. Ct values beyond 40 are generally unspecific signals, i.e. non-targets could be amplified, probes break down, etc. The first thing is to check the amplification curves for shape: do the thresholds make sense? If running SYBR, check melt curves. One can also run a gel to see what has been amplified, or send it for sequencing.
  17. At that stage we would still think of it as correlation. We need to have a model first to explain why A causes B at minimum.
  18. Perhaps no rights for them unless they are one of the good ones? Nudge nudge wink wink?
  19. The first thing to do is to talk to your supervisor to check what kind of quality control you are using and what the expected results are. From your description it is not clear, for example, whether your standards are extracted DNA of known quantity. A Ct of 45 is pretty much unspecific, and 35 is close to what is generally the detection limit (so roughly <10 genomic copies). It is advisable to start with pure DNA standards and establish a calibration curve, or at least the detection limit and PCR efficiency, so that you have an idea what to expect. Then work your way backward to the isolation steps.
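On why detection becomes unreliable below ~10 genomic copies: the template count per well follows a Poisson distribution, so even a perfect assay misses some replicate wells simply because they contain zero copies. A small sketch (the copy numbers are illustrative):

```python
# Poisson sampling at low template concentrations: the probability that
# a reaction contains at least one copy is 1 - P(0 copies) = 1 - e^(-m),
# where m is the mean number of copies per reaction.
import math

def detection_probability(mean_copies_per_reaction: float) -> float:
    """Best-case fraction of wells containing >= 1 template copy."""
    return 1.0 - math.exp(-mean_copies_per_reaction)

for mean_copies in (0.5, 1, 3, 10):
    print(f"{mean_copies:>4} copies/rxn -> detected in at most "
          f"{detection_probability(mean_copies):.0%} of wells")
```

So near the detection limit, replicate dropout is expected from sampling statistics alone, before any assay problems are considered.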
  20. In Canada there has also been a bit of a push (provincially) to dismantle the single-payer system.
  21. I will also add that Europe is not a political monolith; there are quite significant differences in social policies (though the US sticks out with its healthcare system). Also, homelessness especially is a poor example; in the US it is actually fairly low (0.18%). Of course, there can be issues in how homeless folks are counted, and collected data can be out of date. But that being said, the US is comparable to France (0.22%). Canada is doing worse with 0.36%. Germany has higher levels (0.41%) and the UK is around 1%. However, the latter countries also include folks threatened by homelessness or in extremely insecure conditions, which will skew the levels upward.
  22. It depends largely on what we are talking about. Very simple (and somewhat thin) organs, like bladders, might be within that timeline. But complex organs in which multiple tissues are involved in dynamic processes are still mostly at the dream stage. The strength of 3D bioprinting is really in creating shapes, and even then the mechanical stability can be challenging. Making them move and do complicated things reliably, that is the really, really hard part.
  23. While true, some of the issues will be similar. The Brazilian devaluation and the inability to respond to that was at least one factor in ending convertibility, for example. Switching the currency entirely would make an exit really difficult (if not impossible).
  24. IIRC Argentina pegged the peso against the USD in the 90s (because of hyperinflation). One could look at the outcomes then.
  25. Number 0: I am thinking of getting a psych eval. Seeing and talking to higher beings is something that would worry me quite a bit. Especially combined with thinking of shooting as the first action.