Posts posted by Cap'n Refsmmat
Looks like it's a recurrence of a problem we had a year or two ago -- a security flaw in IPB letting someone inject code into our templates and direct people to malicious ads. I've cleaned it out, but we're still not sure how it's getting in. I've set something up to detect it if it happens again.
This may be a recurrence of a problem we had a year or two ago. I'll look into it and try to track it down.
I used the email address from your SFN profile. If that's out of date, you can update it and send me a message, and I'll update the email address for your WordPress account.
Many of you are familiar with the SFN Blogs, which we set up back in 2008 to offer a free WordPress blog to every interested SFN member. Some of our bloggers became quite prolific -- swansont reached nearly 5,000 posts, with ajb trailing at nearly 500 -- but the blogging system eventually languished, failing to receive security updates or maintenance.
Today, in response to some security issues I feared might be related to our five-year-old (!) WordPress version, I upgraded the blogs system to WordPress 4.4 and migrated our most active blogs to the latest version.
This update brings many new features to WordPress: new themes, an improved dashboard, easier embedding of photos and videos, an improved editor, and much more. Unfortunately, it brings one drawback: we no longer have easy integration with the forums, so running a blog requires a separate account, and there is no longer a Blogs tab to browse the latest blog posts.
If you're interested in following the SFN bloggers, you can subscribe to the latest posts by email or by RSS. If you'd like to start a science-related blog, send me a private message and I'll get one set up for you.
Enjoy!
P.S. The import included the most active and recently updated blogs, but some may have been left out. If I skipped your blog, send me a message and I will import it -- all your content is backed up, so nothing has been lost.
Merged. I've also updated your user title to be "Formerly Transdecimal" to avoid any confusion.
I can merge this account with your old one, yes. Which username would you prefer to keep, Transdecimal or Daecon?
Also, welcome back to SFN!
Thanks, that's helpful. I still haven't figured out how the compromise happens, but hopefully I'll find something.
The SFN server doesn't respond to pings, so that wouldn't help. Man-in-the-middle attacks are fairly rare unless you're targeted by the NSA or using dodgy free wifi.
The Avast warning sounds like it thinks there's malicious JavaScript on our pages. This is related to the malware warnings we had before. I will try to hunt down the problem further, because apparently it keeps recurring.
The CC0 Public Domain Dedication renounces your copyright. You probably want to use it even if it is a bit long -- it's thoroughly reviewed by lawyers and copyright experts, so it genuinely renounces your copyright interest.
If it's too long for marketing, just say that "this software is in the public domain", and include the CC0 logo and a link to the license deed.
CC0 is already widely used for this purpose. You probably don't want to invent your own. Corporate attorneys get nervous when they see license terms they've never seen before.
My guess is that the BBCode system is getting confused. As IPB has moved more towards the WYSIWYG editor instead of BBCode, there have been more and more bugs and problems with ordinary BBCode.
I've seen trends like this before. I don't know what to make of it. Maybe the spam robots have a default and don't bother randomizing which forum they post in.
How about requiring that the history forum be about interpretations and meaning of historical events, rather than establishing their existence? Particularly for well-documented events; I'm sure there's a lot to be said about digging up evidence for certain things, but we could close Holocaust denial threads. There's nothing productive to be gained from arguing about the Holocaust's existence on a science forum, where you're certainly not going to convince anyone or produce any evidence that hasn't been produced in great detail elsewhere.
Out of curiosity, what kinds of topics would you discuss in the history forum?
Not necessarily true. The chance of rejecting a false H0 is the power of the test. It depends on how different HA is from H0. If they're not very different, it may be very difficult to tell the difference, so you have a very small chance of rejecting a false null.

This simplifies to: we very likely have a higher chance of rejecting a false [latex] H_0 [/latex] than of rejecting a true [latex] H_0 [/latex].
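The dependence of power on the gap between H0 and HA can be sketched numerically. This is a generic illustration, not code from the thread -- the one-sided z-test setup, the function name, and the example numbers are all my own choices:

```python
from statistics import NormalDist

def power_one_sided_z(mu0, mu_a, sigma, n, alpha=0.05):
    """Power of a one-sided z-test of H0: mu = mu0 vs HA: mu > mu0,
    computed under the assumption that the true mean is mu_a."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # critical value for this alpha
    crit = mu0 + z_alpha * sigma / n ** 0.5     # reject when x-bar exceeds crit
    return 1 - NormalDist().cdf((crit - mu_a) / (sigma / n ** 0.5))

# The farther the true mean sits from H0, the better the chance of rejecting:
for mu_a in (100.5, 101, 102, 104):
    print(mu_a, round(power_one_sided_z(100, mu_a, sigma=10, n=25), 3))
```

When the true mean equals the null value, the "power" is just α itself, which is the smallest rejection probability the test ever has.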
"If H0 were true, we would get this sort of data less than 5% of the time. But we got it. H0 must be wrong."

If the type I error α for a hypothesis test is 0.05 and the p-value is 0.01, we reject [latex] H_0 [/latex] because the p-value [latex] \leq [/latex] α. What is the reasoning or intuition for this?
You can think of it as nearly a proof by contradiction. If [math]p = 0[/math] exactly, then it is a proof by contradiction: if H0 were true, we would never get this data, but we did, so H0 must be false.
Another way to phrase it is "Either we're very lucky and got unlikely results, or H0 is wrong." At some point you're more willing to reject the null than assume you have incredible luck.
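That "either we're very lucky or H0 is wrong" logic can be checked with a quick simulation (a generic sketch of my own, not part of the original discussion): when H0 really is true, a test at α = 0.05 wrongly rejects only about 5% of the time.

```python
import random
from statistics import NormalDist

random.seed(1)
mu0, sigma, n, alpha = 0.0, 1.0, 30, 0.05
trials = 2000
rejections = 0
for _ in range(trials):
    # Draw a sample with H0 actually true (the true mean really is mu0)
    xs = [random.gauss(mu0, sigma) for _ in range(n)]
    z = (sum(xs) / n - mu0) / (sigma / n ** 0.5)
    p = 1 - NormalDist().cdf(z)   # one-sided p-value
    if p <= alpha:
        rejections += 1           # a "very lucky" sample: we wrongly reject
print(rejections / trials)        # hovers near alpha
```

So rejecting at p ≤ 0.05 means accepting that, in the long run, about 1 in 20 true nulls gets rejected by bad luck.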
The probability of type II error depends on the size of the true effect, so you can't calculate it. You can, however, calculate it for different assumed sizes of true effect, so you could say "If the true effect is this big, then I have a 50% chance of detecting it."

I was referring to the value β (for the test procedure on the variable being tested, p), the probability of a type II error. Type I error, α, and type II error, β, are inversely related. The statistician only has control over the type I error, α, so the statistician can control the type II error, β, indirectly by raising or lowering α. Even knowing this, in practice the statistician still won't be able to find the value of β (so far I haven't learned how to calculate β, if there even is a way)? I am still in an early part of the hypothesis testing chapter, so what I've learned so far covers the basic concepts of error and interpreting a problem.
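The α/β trade-off mentioned in the question can be illustrated with a short sketch (my own example setup, assuming a one-sided z-test with known σ and a particular assumed true mean): shrinking α raises β, and vice versa.

```python
from statistics import NormalDist

def beta_one_sided_z(mu0, mu_a, sigma, n, alpha):
    """Probability of a type II error (failing to reject H0: mu = mu0
    against HA: mu > mu0) when the true mean is assumed to be mu_a."""
    z_alpha = NormalDist().inv_cdf(1 - alpha)   # critical z for this alpha
    crit = mu0 + z_alpha * sigma / n ** 0.5     # reject when x-bar > crit
    # We fail to reject whenever the sample mean falls below the cutoff
    return NormalDist().cdf((crit - mu_a) / (sigma / n ** 0.5))

# Tightening alpha (fewer type I errors) loosens beta (more type II errors):
for alpha in (0.10, 0.05, 0.01):
    print(alpha, round(beta_one_sided_z(100, 102, sigma=10, n=25, alpha=alpha), 3))
```

Note that β is only computable once you plug in an assumed true mean; there is no single "the" value of β for a test.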
If [math]p < 0.72[/math] but we fail to reject, I wouldn't consider that a type II error. That's intentional, since we're testing for the alternative that [math]p > 0.72[/math]. A one-tailed test would specifically try not to reject when [math]p < 0.72[/math].

The book's example assumes the case where we have in fact committed a type II error: p = 0.72 is false and we fail to reject it. Then the actual value of p would be [latex] p < 0.72 [/latex] or [latex] p > 0.72 [/latex]. So the proportion of on-time flights can be greater or less than 0.72, but the book only considers the case where [latex] p > 0.72 [/latex]. So the question is: why did they choose only one of the consequences? Did they just happen to pick one example of a consequence?
The statistician does not know the probability that H0 is false; that's not what a p-value is.

When we fail to reject [latex] H_0 [/latex] given that [latex] H_0 [/latex] is false, the fact that [latex] H_0 [/latex] is false is something a statistician won't know for certain. The statistician only knows the probability that [latex] H_0 [/latex] is false and he fails to reject it.
That's not the typical practice. If the sample is inconclusive, we "accept" (fail to reject) H0. We don't have the choice to reject it -- we have no evidence to justify that choice.

So the statistician could have chosen either to accept [latex] H_0 [/latex] or to reject [latex] H_0 [/latex], because the samples he collected are inconclusive about the truth of [latex] H_0 [/latex].
Significance testing is designed to avoid rejecting H0 unless we're really sure. When the evidence is inconclusive, we don't reject.
I think you're making this much more complicated than it needs to be by interpreting "fail to reject" as "I could accept or reject." That's not the case. "Accept" and "fail to reject" are synonymous, and if you fail to reject, you don't have any choice of what to do. You fail to reject. You're done.
Failing to reject H0 does not mean that [math]p \neq 0.72[/math]; it means p could be 0.72.

The book claims that type II error, failing to reject [math]H_0[/math] when [math]H_0[/math] is false, means that the airline employees did not receive the reward they deserve. But isn't that not always true? Failing to reject [math]H_0[/math] means that [math]p < 0.72[/math] or [math]p > 0.72[/math], i.e. [math]p \neq 0.72[/math], so there could be fewer than 72% of domestic passenger flights on time as well as more than 72% on time, yet the book only considers one possibility. Is there a reason for considering only one outcome? This is repeated in other examples that analyze type I and type II errors.
If you reject H0, yes, it could be that the proportion is smaller than 0.72. But you'd use a one-sided hypothesis test which only rejects when [math]p > 0.72[/math].
2. Another example I don't quite understand:
"
The probability of a Type I error is denoted by α and is called the significance level of the test. For example, a test with α = .01 is said to have a significance level of .01. The probability of a Type II error is denoted by β.
"
"
Women with ovarian cancer usually are not diagnosed until the disease is in an advanced
stage, when it is most difficult to treat. The paper “Diagnostic Markers for
Early Detection of Ovarian Cancer” (Clinical Cancer Research [2008]: 1065–1072)
describes a new approach to diagnosing ovarian cancer that is based on using six different
blood biomarkers (a blood biomarker is a biochemical characteristic that is
measured in laboratory testing). The authors report the following results using the six
biomarkers:
• For 156 women known to have ovarian cancer, the biomarkers correctly identified
151 as having ovarian cancer.
• For 362 women known not to have ovarian cancer, the biomarkers correctly
identified 360 of them as being ovarian cancer free.
We can think of using this blood test to choose between two hypotheses:
H0: woman has ovarian cancer
Ha: woman does not have ovarian cancer
Note that although these are not “statistical hypotheses” (statements about a population
characteristic), the possible decision errors are analogous to Type I and Type II errors.
In this situation, believing that a woman with ovarian cancer is cancer free would
be a Type I error—rejecting the hypothesis of ovarian cancer when it is in fact true.
Believing that a woman who is actually cancer free does have ovarian cancer is a
Type II error—not rejecting the null hypothesis when it is in fact false. Based on the
study results, we can estimate the error probabilities. The probability of a Type I error,
α, is approximately 5/156 = .032. The probability of a Type II error, β, is approximately
2/362 = .006.
"
From the above example, a type II error exists when we fail to reject [math]H_0[/math] when [math]H_0[/math] is false. This means that, as the statistician conducting the research, the statistician could accept [math]H_0[/math] or reject [math]H_0[/math], because failure to reject [math]H_0[/math] means [math]H_0[/math] could be true or false. Therefore if by luck the statistician rejects [math]H_0[/math], then no error is made. The only error made in a type II error is if he accepts [math]H_0[/math] when it is false. So why doesn't this factor into the calculation of the type II error probability β?
I don't understand your reasoning here. Are you suggesting that when the statistician sees a statistically insignificant result, and hence fails to reject H0, the null might be true or false and hence the statistician might decide to reject anyway? Because that's not how testing is done.
I can't tell if you're trying to distinguish between "accepts H0" and "fails to reject H0". They're synonymous, though the latter is a better description.
Yeah, I think they got mixed up here. Typically you'd set up the hypotheses as they describe them, so the alternative hypothesis represents an unacceptable level of lead. When you reject the null, you know something is wrong with the water.

3. Another example:
From the book:
"
The Environmental Protection Agency (EPA) has adopted what is known as the Lead
and Copper Rule, which defines drinking water as unsafe if the concentration of lead
is 15 parts per billion (ppb) or greater or if the concentration of copper is 1.3 parts
per million (ppm) or greater. With μ denoting the mean concentration of lead, the
manager of a community water system might use lead level measurements from a
sample of water specimens to test
H0: μ = 15 versus Ha: μ > 15
The null hypothesis (which also implicitly includes the μ > 15 case) states that the
mean lead concentration is excessive by EPA standards. The alternative hypothesis
states that the mean lead concentration is at an acceptable level and that the water
system meets EPA standards for lead. (How is this correct?)
...
"
Shouldn't [math]H_a[/math] be μ < 15? Because if [math]H_a[/math] is μ > 15, then the water is still not safe by EPA standards. So we actually want [math]H_a[/math], right?
What didn't they like about MathJax?
My guess is that double spaces in your equations get replaced with non-breaking spaces, which mess with LaTeX. I can probably adjust the code to strip them out.
edit: should be fixed now:
[latex](\partial_{\mu}\phi - e A_{\mu}\phi)(\partial^{\mu}\phi - e A^{\mu}\phi) + m^2 \phi^2[/latex]
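For anyone curious, the fix amounts to normalizing the equation text before it reaches the renderer. This is a hypothetical sketch rather than the forum's actual code:

```python
# WYSIWYG editors often substitute non-breaking spaces (U+00A0) for runs
# of ordinary spaces, and LaTeX chokes on them, so strip them out before
# handing the equation source to the renderer.
def clean_equation(src: str) -> str:
    return src.replace("\u00a0", " ")

print(clean_equation("m^2\u00a0" + r"\phi^2"))  # restores an ordinary space
```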
You could, but why not let people post new threads? It's easier to spot thread titles that interest you than it is to read through a long Q&A thread to find things you can answer.
I think the key is that there is a common understanding of physics. Everyone has an intuitive understanding of the laws of motion, and everyone's heard about stars and gravity and so on. So when they start reading about relativity and quantum mechanics and find that their intuition is completely wrong, their instinct is to say "Aha! Physics is all wrong!" instead of developing new intuition.

The wording I have chosen so far already hints at a second effect, which I believe may be the most important one: common understanding of physics often has a spiritual touch (and I probably don't have to tell you how many people you'd consider crackpots run around in the esoteric and spiritual community).
Whereas if I read about some counterintuitive result in chemistry or cellular biology, I'd just think "oh, neat", because it's not counterintuitive to me -- I have no intuition for the field anyway! I can't say "no, that can't be right, it violates all common sense" because ordinary common sense doesn't have anything to say about those fields. (At least to non-experts.) But common sense does say that special relativity is absurd.
You make links using BBCode:
http://www.scienceforums.net/index.php?&app=forums&module=extras&section=legends&do=bbcode
(or the buttons in the editor toolbar)
Feel free to talk about sports in the Lounge, or sports-related science in physiology or Other Sciences.
The software can't actually tell whether you're currently looking at a thread -- it just knows when you first opened the page. So if someone starts writing a reply, wanders off to get some coffee, and comes back twenty or thirty minutes later to finish it off, the forum will think they're "offline" because they haven't loaded any new pages in a while. But they're still there, writing a post.
Apologies for the downtime today
in Forum Announcements
Posted
Please try the images again -- I think I sorted out the source of that problem.