Everything posted by CharonY

  1. This is also highlighted in the New Yorker piece here: https://www.newyorker.com/magazine/2023/12/11/the-inside-story-of-microsofts-partnership-with-openai It is a bit worrisome that a company initially set up for ethical AI development combats attempts to develop a governance system for it. It looks like OpenAI is going down the Google "don't be evil" path. Move fast, break things and let others pay for it.
  2. You can read up on the interpretation of CIs here: https://en.wikipedia.org/wiki/Confidence_interval Specifically: A 95% confidence level does not mean that 95% of the sample data lie within the confidence interval. A 95% confidence level does not mean that there is a 95% probability of the parameter estimate from a repeat of the experiment falling within the confidence interval computed from a given experiment.
     Because a) in terms of safety we only look for certain defined endpoints (e.g. death, cancer, etc.), so potential other effects can easily be missed, and b) experiments are set up to test the null (i.e. no effect), so it is not really possible to calculate the likelihood of no effect. For the extremes and for the short term you can establish a measure of safety (i.e. no one dying within 6 months of taking a medication). But if you want to look at all effects (liver, kidney, inflammation, immune modulation, cardiovascular health, and so on) or at effects in the long term, confounders will play an increasingly bigger role (such as diet, lifestyle, age, health status etc.). Controlling for all these factors is near impossible (there would be a near infinite list to track for each person). I brought up the issue of diet, which over the years has had huge cohorts and long-term data, but the effects have not been reproducible.
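     To make the coverage interpretation concrete, here is a minimal sketch (my own illustration, not from the Wikipedia article or any study): across many repeated experiments, roughly 95% of the intervals computed this way contain the true parameter, which is not the same as saying that 95% of the data fall inside one interval.
     ```python
     # Minimal sketch (illustrative only): coverage interpretation of a 95% CI.
     # Repeat the "experiment" many times from a known population; roughly 95%
     # of the intervals computed from those samples contain the true mean.
     import numpy as np

     rng = np.random.default_rng(0)
     true_mean, sd, n, trials = 10.0, 2.0, 30, 10_000
     z = 1.96  # normal approximation for a 95% interval

     covered = 0
     for _ in range(trials):
         sample = rng.normal(true_mean, sd, n)
         half_width = z * sample.std(ddof=1) / np.sqrt(n)
         lo, hi = sample.mean() - half_width, sample.mean() + half_width
         covered += lo <= true_mean <= hi

     print(f"Coverage over repeated experiments: {covered / trials:.3f}")  # ~0.95
     ```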
  3. Yes. Normalization generally means adjusting a value to a certain scale, in this case to 100,000 person-years (see the small worked example below).
     Because you constantly claim that you understood things perfectly, yet your questions clearly show that you don't (especially basic definitions). While I am happy to teach, it is very difficult if you do not realize that you have to revise your basic assumptions. And frankly, I get enough entitlement from my students, and a direct challenge often shortens things a fair bit.
     No, if a difference is not significant it means that the distributions are not distinguishable from each other. It does not matter whether the means or CIs are skewed in one direction or another; that is not something the test can tell us. If there was a trend, the statistical power of the cohort is insufficient to show it (and/or the effect size is too small). Also, one thing to consider is that the cohort over 15 years is likely older, and increasingly other confounders influence cancer risk, as acknowledged in the study.
     As noted, there are not really many studies set up to prove a non-effect (safety is usually assessed in clinical trials), and there is basically no way for any treatment to do that conclusively, especially when looking at long-term effects. What studies can do is try to see effects (as this one does) while controlling for a set number of factors. The complexity of the matter is also why we have not figured out the perfect healthy food, for example. Likewise, there won't be risk-free medication. All we have is the weight of available evidence and never certainty. Also, it is often the case for weak effects that some studies show an effect and others don't. So evidence on one or the other side of the argument will pile up over time until a tipping point for action is reached. So far the available studies show no outsized role of metronidazole in short-term harm (compared to other antibiotics), increasing evidence of general carcinogenic effects of long-term treatment with antibiotics, but also no true alternatives to antibiotic treatment.
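     To make the normalization concrete, a minimal sketch with made-up numbers (not taken from the study): an incidence rate is the event count divided by the total person-years of follow-up, rescaled to 100,000 person-years.
     ```python
     # Illustrative only, with made-up numbers (not from the study):
     # normalizing an event count to a rate per 100,000 person-years.
     events = 120                    # e.g. cancer diagnoses observed in the cohort
     person_years = 250_000          # total follow-up time accumulated by the cohort
     rate_per_100k = events / person_years * 100_000
     print(rate_per_100k)            # 48.0 events per 100,000 person-years
     ```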
  4. It is not strange. Do you understand what person-years are and why that number is going to be larger than the number of persons (specifically, look at how many person-years there are in aggregate and what they normalized against)? If you did, then there is no reason to be confused about it. So either you did not understand and got confused, or you understood and pretend to be confused. Which is it?
     Again, these are person-years, not numbers of cancer incidences. What they calculate are proportional hazard ratios, likely age-stratified (I do not recall). So the attributable risk ratio (as you acknowledge) had a huge spread, looking at the CI. So obviously the p-value is going to be high. This is the whole purpose of statistical tests, so that we don't just look at the higher number and make inappropriate assumptions. Remember, these are matched pairs, and what it suggests is that there is going to be a big spread, which is really expected for rare conditions such as cancer (with a big impact on the age bracket of the matched pairs). But if you are genuinely interested in statistical analyses I suggest digging out a good textbook (and to be honest, in order to fully recapitulate the methodology I would need to do so, too, to avoid errors; I rarely use risk ratio calculations in matched cohorts). A rough sketch of why rare events produce such wide intervals is at the end of this post.
     With regard to the other antibiotics, tetracycline, penicillin and nitrofuran have some history of being associated with breast cancer specifically. There have been discussions of how microflora disruptions can influence the immune system and modulate estrogen levels. But as mentioned before, there is also increasing awareness that long-term disruptions of the gut microbiota, really with any antibiotic, are likely to have some impact (though the effects are likely to be very complicated; in some cases antibiotics are part of the cancer therapy). Ultimately, you won't find a perfectly safe antibiotic; they all carry risks. And trying to quantify them is not going to be terribly useful unless there is a huge effect size to be measured. The reason is also clear: our bodies (and microbiota) are dependent on an uncountable number of things that we accumulate over our lifetime. Some antibiotics might be fairly safe for some individuals, but if the same individuals take a certain drug, have a certain lifestyle or happen to have a specific type of infection, the risk on the individual level might skyrocket. There is simply no reasonable way to capture all this diversity. So all we can do is look at rough aggregates, and there small differences rarely matter as the spread (or CI) is going to be very broad anyway.
     With regard to ABs, the most important aspect is whether they work in the first place. I.e. folks look at local resistance profiles and prescribe ABs that work. The secondary aspect is then to look at whether the patient has any immediate adverse reactions to them. Long-term concerns are not unimportant, but are generally secondary unless a smoking gun study emerges. But that will take time. And if we wait for them before treating immediate issues, we will do more harm than good.
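     As for why rare outcomes give such wide intervals, here is a rough sketch with invented counts (not the study's numbers), using the usual log-scale approximation for an incidence rate ratio. With only a few dozen events per group, the interval easily spans both protective and harmful effects.
     ```python
     # Rough sketch with invented counts (not the study's numbers): a rate ratio
     # between exposed and unexposed groups, with a 95% CI via the usual
     # log-scale (Wald) approximation. Few events -> wide interval.
     import math

     events_exposed, py_exposed = 25, 100_000       # hypothetical
     events_unexposed, py_unexposed = 20, 100_000   # hypothetical

     irr = (events_exposed / py_exposed) / (events_unexposed / py_unexposed)
     se_log = math.sqrt(1 / events_exposed + 1 / events_unexposed)
     lo = math.exp(math.log(irr) - 1.96 * se_log)
     hi = math.exp(math.log(irr) + 1.96 * se_log)
     print(f"IRR = {irr:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # ~1.25, CI roughly 0.69-2.25
     ```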
  5. No, you misread the metric. The measure is per 100,000 person-years, not persons. The second thing you likely missed is reading through the study design, where they describe their follow-up. Specifically, the start date is when they get their first dose dispensed (for the user group). The follow-up ends either with the latest known consultation or the first diagnosed case of cancer (a small sketch of how person-years accumulate under such a design is below).
     Because the result was statistically insignificant (p = 0.11). The statistical power of that cohort (i.e. folks that were cancer-free for over 15 years and remained enrolled in the program) is just too low to be sure that it was not a statistical fluke. The caveats are pretty much standard; having more data is of course better, but often not feasible and often nearly impossible for multi-year studies. Keeping folks in these programs is very, very difficult.
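     For illustration, a small sketch of how person-years accumulate under such a follow-up design (the dates and the number of subjects are invented, not the study's data):
     ```python
     # Hypothetical follow-up windows (invented dates, not the study's data):
     # each subject contributes time from their start date (first dispensed dose)
     # to their end date (last consultation or first cancer diagnosis), and the
     # cohort's person-years is the sum of those contributions.
     from datetime import date

     follow_up = [
         (date(2000, 3, 1), date(2015, 6, 30)),   # censored at last consultation
         (date(2001, 1, 15), date(2004, 9, 1)),   # ended at first cancer diagnosis
         (date(2003, 7, 10), date(2018, 2, 20)),
     ]

     person_years = sum((end - start).days / 365.25 for start, end in follow_up)
     print(f"Total person-years: {person_years:.1f}")
     ```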
  6. No, there is far more evidence of that, dating back to the 80s. The effect size is overall weak, but shows up fairly persistently in multiple human cohorts.
     A review summarizing some of those studies: https://doi.org/10.3390/cancers11081174
     A random selection of papers:
     Breast cancer and antibiotics, an early study here: doi:10.1001/jama.291.7.827
     Other studies found mild effects, but there are mechanistic hypotheses underpinning this relationship: https://doi.org/10.3390/cells8121642
     Relationship of AB use and colon cancer: http://dx.doi.org/10.1136/gutjnl-2016-313413; https://doi.org/10.1007/s10620-015-3828-0
     Discussion of the role of microbiota, antibiotics and cancer: https://doi.org/10.1016/j.ejca.2015.08.015
     And the list goes on. Stating that there are only two papers is a serious misunderstanding of the literature. Also, considering that these effects keep popping up in various studies, the link between AB use and cancer in humans is far stronger than any short-term effect exclusive to metronidazole.
     I echo this sentiment. It is unclear how this particular AB is assumed to be vastly different in risk compared to all the others.
  7. Yet their conclusion remains that sensible use of metronidazole is backed by evidence, in part because there is no strong evidence of added risk from metronidazole over other antibiotics for short-term use: Also from the same paper:
  8. OK, you are doing purity checks, so high cutoffs make sense here. I assumed you had the issue even when using pure standards at higher concentrations. Ct values beyond 40 are generally unspecific signals, i.e. non-targets could be amplified, probes break down, etc. The first thing is to check the amplification curves for shape: do the thresholds make sense? If running SYBR, check the melt curves. One can also run a gel to see what has been amplified, or send it for sequencing.
  9. At that stage we would still think of it as correlation. At minimum, we would need a model to explain why A causes B.
  10. Perhaps no rights for them unless they are one of the good ones? Nudge nudge wink wink?
  11. The first thing to do is to talk to your supervisor to check what kind of quality control you are using and what the expected results are. From your description it is not clear, for example, whether your standards are extracted DNA of known quantity. A Ct of 45 is pretty much unspecific, and 35 is close to what is generally the detection limit (so roughly <10 genomic copies). It is advisable to start with pure DNA standards and establish a calibration curve, or at least a detection limit and the PCR efficiency, so that you have an idea what to expect (a rough sketch is below). Then work your way backward to the isolation steps.
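     As a rough sketch of what that calibration looks like (the dilution series and Cq values below are invented): fit Cq against log10 of the standard's copy number; the slope gives the PCR efficiency via 10^(-1/slope) - 1, with a slope of about -3.32 corresponding to ~100%.
     ```python
     # Rough sketch with invented standard-curve data: Cq vs log10(copies).
     # A slope of about -3.32 corresponds to ~100% PCR efficiency.
     import numpy as np

     copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])      # hypothetical dilution series
     cq = np.array([18.1, 21.5, 24.9, 28.3, 31.6])     # hypothetical measured Cq values

     slope, intercept = np.polyfit(np.log10(copies), cq, 1)
     efficiency = 10 ** (-1 / slope) - 1
     print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
     ```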
  12. In Canada there has also been a bit of a push (provincially) to dismantle the single-payer system.
  13. I will also add that Europe is not a political monolith; there are quite significant differences in social policies (though the US sticks out with its healthcare system). Also, homelessness especially is a poor example; in the US it is actually fairly low (0.18%). Of course, there can be issues in how homeless folks are counted, and the collected data can be out of date. But that being said, the US is comparable to France (0.22%). Canada is doing worse with 0.36%. Germany has higher levels (0.41%) and the UK is around 1%. However, the latter countries also include folks threatened by homelessness or in extremely insecure housing conditions, which will skew the levels upward.
  14. It depends largely on what we are talking about. Very simple (and somewhat thin) organs, like bladders, might be within that timeline. But complex organs in which multiple tissues are involved in dynamic processes are still mostly at the dream stage. The strength of 3D bioprinting is really to create shapes, and even then the mechanical stability can be challenging. Making them move and do complicated stuff reliably, that is the really, really hard part.
  15. While true, some of the issues will be similar. The Brazilian devaluation and the inability to respond to that was at least one factor in ending convertibility, for example. Switching the currency entirely would make an exit really difficult (if not impossible).
  16. IIRC Argentina pegged its peso against the USD in the 90s (because of hyperinflation). One could look at the outcomes then.
  17. Number 0, I am thinking of getting a psych eval. Seeing and talking to higher beings is something that would worry me quite a bit. Especially combined with thinking of shooting as the first action.
  18. Polling on the perceived economic and crime situation suggests that facts don't really matter anymore, assuming they ever did.
  19. OK, so there are at least some hints of pipetting errors in the qPCR (a delta of 1 Ct is about a 2-fold change, which should not be happening in replicates; see the sketch below). But obviously that does not seem to be an issue with the GAPDH. We also do not know whether it is realistic for your target to be about 1000-fold different in abundance, or whether the differences are caused by differential extraction (I would consider that somewhat unlikely, though). The main things I would suspect fall into the area of sample handling and potential assay issues. So a few things to check include:
     - quality of the mRNA samples
     - was it a 2-step protocol? Can there be issues there?
     - are the protocols well established for the specific primer combinations? What are the PCR efficiencies for them?
     - is there a possibility of contamination? What levels do you typically have for an extract of your control sample?
     - is there a possibility of degradation?
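     To make the "delta of 1 Ct ≈ 2-fold" point concrete, a small sketch with hypothetical replicate Cq values (not your data) that flags technical replicates whose spread exceeds what good pipetting should allow:
     ```python
     # Hypothetical replicate Cq values (not the poster's data): a spread of 1 cycle
     # between technical replicates corresponds to roughly a 2-fold difference in
     # template, which usually points to pipetting or assay problems.
     replicates = {
         "GAPDH": [19.8, 19.9, 19.8],
         "target": [30.1, 31.1, 30.4],   # ~1-cycle spread -> suspicious
     }

     for gene, cqs in replicates.items():
         spread = max(cqs) - min(cqs)
         fold = 2 ** spread
         flag = "check pipetting" if spread > 0.5 else "ok"
         print(f"{gene}: spread {spread:.2f} cycles (~{fold:.1f}-fold) -> {flag}")
     ```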
  20. First, let me apologize for not downloading a document from a first-time poster, due to security concerns. But I think most of the issues can be diagnosed within a post (or screenshots, if needed). The differences you are seeing are massive (about 1000-fold), so there is a good chance that we are not looking at a biological but rather an analytical and/or pre-analytical issue. Housekeeping genes are not really as universal as they are sometimes claimed to be, but that level of change is extremely unusual. So the most likely scenarios are issues during sample prep and/or the qPCR itself. You mentioned that the Ct of your gene of interest remained stable. What is the Ct/Cq? The next thing to look at is to inspect your curves: do you have stable amplification for all your targets? Are you using probes? If not, you could inspect the melting curves. Also, what is the variance of your results?
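     For orientation, a minimal sketch of the standard 2^-ΔΔCt calculation with invented Cq values (not the poster's data): assuming ~100% efficiency, a ~1000-fold difference corresponds to roughly 10 cycles of shift.
     ```python
     # Invented Cq values for illustration: the standard 2^-ddCt calculation.
     # Assuming ~100% efficiency, ~10 cycles of shift corresponds to ~2^10 ≈ 1000-fold.
     cq_target_control, cq_ref_control = 24.0, 20.0
     cq_target_treated, cq_ref_treated = 34.0, 20.1

     d_ct_control = cq_target_control - cq_ref_control   # 4.0
     d_ct_treated = cq_target_treated - cq_ref_treated   # 13.9
     dd_ct = d_ct_treated - d_ct_control                  # 9.9
     fold_change = 2 ** -dd_ct
     print(f"ddCt = {dd_ct:.1f}, fold change = {fold_change:.4f}")  # ~0.001, i.e. ~1000-fold down
     ```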
  21. It is also a bit misleading, as cell types are not fixed and can become different things at different times. Mapping is a snapshot in time, but if we want to understand function, we also need to understand the underlying dynamics (for starters).
  22. Religions are big business and have been for many centuries (varying a bit by region and religion, perhaps). This was the case way before the Soviet Union (or even Russia) existed.
  23. Generally speaking, these simplifications do not do the complexity real justice. Every single cell has more different processes going on in parallel than even the most complex factory. Thinking about scope, we have about 30-ish trillion cells in our body. I.e. if you equated one cell with one factory, our body would be the equivalent of 30 trillion factories. In the world there are only about 10 million factories. Or compare it to the roughly 100-400 billion stars in the Milky Way. These are orders of magnitude off. Our brain alone has about 80-ish billion neurons and roughly a similar number of glial cells, so whatever scale you are thinking about, you likely have to expand it by a fair bit more.
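     Just to spell out the arithmetic behind those comparisons (using the rounded figures above):
     ```python
     # Rough order-of-magnitude arithmetic using the rounded figures above.
     import math

     cells_in_body = 30e12        # ~30 trillion cells
     factories_worldwide = 10e6   # ~10 million factories
     stars_milky_way = 2e11       # ~100-400 billion stars, taking ~200 billion

     print(f"cells / factories: ~{cells_in_body / factories_worldwide:.0e}")   # ~3e+06
     print(f"orders of magnitude: ~{math.log10(cells_in_body / factories_worldwide):.1f}")
     print(f"cells / stars: ~{cells_in_body / stars_milky_way:.0f}x")          # ~150x
     ```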
  24. Compared to other areas, being famous is less of an issue. We always fall back to data and experiments. Ultimately, even if folks do get defensive, the self-correction kicks in eventually. In other areas this is more commonly not the case. I.e., the system is not perfect, but at least better than elsewhere.
  25. I think pretty much all pesticides are bad for health to various degrees to begin with.