
Leaderboard

Popular Content

Showing content with the highest reputation on 01/09/20 in Posts

  1. There is only one context in which I have ever encountered the error function, and it is the same context as here: it is the integral over the (normalized) Gaussian function. The Gaussian, in turn, is the most important function in statistics. The reason is the Central Limit Theorem: if you take a variable that is distributed according to some probability distribution and then take the sum of many such variables, the probability distribution of the sum becomes increasingly similar to a Gaussian as more variables are added (the mean of the sum is the sum of the individual means, and the variance of the Gaussian is the sum of the addends' variances). The random walk is a process in which a walker takes a number of independent steps of random length and direction. By the Central Limit Theorem, the resulting total deviation from the original location (the sum of the steps) quickly comes to look like a Gaussian. This Gaussian gives the probability density to find the walker at a certain location. To find the probability of finding the walker in a certain region, you sum up the individual probabilities of all locations in this region (i.e. you integrate over this region). When computing this integral, the solution can be expressed in terms of erf(). (A compact version of this relation is sketched just after this post.) EDIT: I'm still posting this despite just having received a pop-up saying "someone else posted probably the same thing" 😜
    2 points
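     A compact way to state the relation described above (a sketch in standard notation, not taken verbatim from the post): if each step [math]X_i[/math] of the walk has mean [math]\mu[/math] and variance [math]\sigma^2[/math], the Central Limit Theorem says that the position after n steps, [math]S_n = X_1 + \dots + X_n[/math], is approximately normally distributed with mean [math]n\mu[/math] and variance [math]n\sigma^2[/math]. The probability of finding the walker between A and B is then the Gaussian integral [math]P(A \le S_n \le B) \approx \int_A^B \frac{1}{\sigma\sqrt{2\pi n}} e^{-(x - n\mu)^2/(2n\sigma^2)}\,dx = \frac{1}{2}\left[\operatorname{erf}\left(\frac{B - n\mu}{\sigma\sqrt{2n}}\right) - \operatorname{erf}\left(\frac{A - n\mu}{\sigma\sqrt{2n}}\right)\right][/math], which is exactly the kind of integral the Wolfram Alpha link in post 5 below evaluates.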
  2. Your only questions are sarcastic and unhelpful. Your assertions have mostly been wrong; this has been pointed out to you, but you've chosen to ignore it. You're rejecting explanations without reason, simply because they don't seem intuitive to you. This isn't personal, it isn't about you. It's your approach to learning that's causing a problem in discussions. I have to ask: is there any way to reason with you on this subject, or is your incredulity always going to be an impassable obstacle? How can we turn this discussion into a meaningful one? Several people have tried explaining what mainstream science says on this subject, but it's hard to have a conversation with you when half the effort is spent trying to get your fingers out of your ears.
    2 points
  3. Which, he conceded, was wrong. Einstein evolved with evidence. You could do the same.
    2 points
  4. Just to add to this - lack of energy isn’t really the reason why no escape is possible. Even if - hypothetically - you had an infinite amount of energy available to you, there still wouldn’t be any way out. Below the horizon of a Schwarzschild black hole, the geometry of spacetime is such that all future-directed world lines of test particles must inevitably terminate at the singularity (in the classical picture of GR), because space and time are related in such a way that ageing into the future always implies an in-falling in the radial direction. This is due to the geometry of spacetime itself, not a lack of acceleration from your thrusters. The best you could do is prolong your inevitable fate by firing your thrusters really hard, which slows down the radial in-fall but cannot stop it - even remaining stationary at some r=const is not possible. The only way to escape such a black hole would be to time-travel backwards into the past, which, to the best of our current knowledge, is not physically possible. (The metric argument behind this is sketched just after this post.)
    2 points
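     For readers who want to see where this comes from, a rough sketch in standard textbook form (not taken from the post itself): the Schwarzschild line element in Schwarzschild coordinates is [math]ds^2 = -\left(1 - \frac{r_s}{r}\right)c^2\,dt^2 + \left(1 - \frac{r_s}{r}\right)^{-1}dr^2 + r^2\,d\Omega^2[/math]. Outside the horizon ([math]r > r_s[/math]) the [math]dt^2[/math] term carries the timelike sign; inside the horizon ([math]r < r_s[/math]) the factor [math]1 - r_s/r[/math] changes sign, so the [math]dr^2[/math] term becomes the timelike one. Decreasing r then plays the role that advancing time plays outside, which is the formal sense in which "ageing into the future implies falling inward".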
  5. Ctrl-Z just cost me a long post that I put a lot of time into and will absolutely not type in again just to be fooled by my muscle memory. Key points:
     1) Consider erf() just as the number-crunching function that computers provide you with to calculate integrals over Gaussians. Not much more. The explicit relation between Gaussians and erf() is https://www.wolframalpha.com/input/?i=integrate+1%2F((2+pi)^(1%2F2)+sigma)+e^(-(x%2Fsigma)^2%2F2)+from+A+to+B
     2) Here's some Python code to play around with if you want. It plots:
     a) A single random variable, "number of bugs a programmer fixes each day".
     b) The resulting "number of bugs fixed per programmer per year", which is a sum of 200 random variables from a) and itself a random variable. Key observation: the distribution looks very different - and very Gaussian.
     c) The probability to "fix at most N bugs per year", which is the same as "fix 0 to N bugs per year", which is the same as "the sum of probabilities to fix 0 ... N bugs per year", which indeed is pretty much the same as the integral over [0; N]. The resulting curve, as a function of N, looks ... *surprise* ... like the error function. (A direct comparison with the analytic erf() curve is sketched right after this post.)

     import numpy as np
     import seaborn as sns
     import matplotlib.pyplot as plt

     # We visualize the distributions by drawing a large sample of random variables.
     SAMPLE_SIZE = 10000

     def randomFixesPerDay():
         # Number of bug fixes per day is a random variable that becomes 0, 1 or 2.
         return np.random.randint(3)

     def randomFixesPerYear():
         # Number of bug fixes per year is a random variable that is the sum of
         # 200 (= workdays) random variables (the bug fixes each day).
         return np.random.randint(3, size=200).sum()

     # Experimental probability for # bug fixes per day
     dailyDistribution = [randomFixesPerDay() for i in range(SAMPLE_SIZE)]
     sns.distplot(dailyDistribution, kde=False, norm_hist=True, bins=[0, 1, 2, 3])
     plt.title('Probability of Bug-fixes per Day: 1/3')
     plt.xlabel('# Bugs')
     plt.ylabel('Probability')
     plt.show()

     # Experimental probability for # bug fixes per year
     annualDistribution = [randomFixesPerYear() for i in range(SAMPLE_SIZE)]
     sns.distplot(annualDistribution, kde=False, norm_hist=True, bins=np.arange(150, 250))
     plt.title('Probability of Bug-fixes per Year\n(note smaller value on y-axis)')
     plt.xlabel('# Bugs')
     plt.ylabel('Probability')
     plt.show()

     # Integral [0; x] over annualDistribution looks like the error function
     xValues = np.arange(150, 250)
     yValues = [len([value for value in annualDistribution if value <= x]) / SAMPLE_SIZE
                for x in xValues]
     plt.plot(xValues, yValues)
     plt.title('Integral: Probability of fixing [0; N] bugs per year\n(i.e. "not more than N")')
     plt.xlabel('x')
     plt.ylabel('Probability')
     plt.show()
    1 point
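     As a follow-up to the script above, the empirical curve from part c) can be compared directly against the analytic prediction. This is only a sketch, and it assumes the names xValues, yValues and plt from the code above are still in scope; the daily distribution is uniform on {0, 1, 2}, so its mean is 1 and its variance is 2/3, giving a yearly mean of 200 and yearly variance of 200*(2/3).

     from math import erf, sqrt

     # Theoretical parameters of the yearly sum (200 workdays, uniform on {0, 1, 2}):
     # mean = 200 * 1 = 200, variance = 200 * 2/3, standard deviation = sqrt(400/3).
     mu = 200.0
     sigma = sqrt(200.0 * 2.0 / 3.0)

     # Analytic CDF of the approximating Gaussian, written in terms of erf().
     analytic = [0.5 * (1.0 + erf((x - mu) / (sigma * sqrt(2.0)))) for x in xValues]

     plt.plot(xValues, yValues, label='empirical (simulation)')
     plt.plot(xValues, analytic, '--', label='0.5 * (1 + erf(...))')
     plt.title('Empirical integral vs. analytic erf() prediction')
     plt.xlabel('N')
     plt.ylabel('Probability')
     plt.legend()
     plt.show()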
  6. One positive outcome of the Black Death hundreds of years ago is a more effective immune response carried in genes of the descendants of survivors. One might speculate that despite the mass death toll of the infection, subsequent generations "died less" than would have occurred if the pandemic had been circumvented by the eradication of the responsible pathogen. But I'd consider mosquito, housefly, tick....
    1 point
  7. "Mosquitoes" is not a species. Mosquito covers about 3500 species.
    1 point
  8. This is from my commented code, so someone with more statistics expertise than I have can verify, but.... Say the mean height of a population of people is 70 inches, and the standard deviation is 4.3 inches. Compute the probability that an individual of that population has a height greater than 70.5 inches: L = 70 (the mean); m = 70.5 (the threshold); s = 4.3; t = (L - m)/sqrt(2)/s; P(x > 70.5) = erf(t)/2 + 0.5 = 0.453716 = 45.37%. BTW, rand/srand has been mentioned. Keep in mind that those functions emit pseudo-random numbers with a flat distribution. To achieve a Gaussian distribution you would use the Central Limit Theorem (already mentioned) by adding a series of rand() return values, which in the limit should be normal. (Both points are checked numerically in the sketch just after this post.)
    1 point
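     Both parts of this post are easy to check numerically. Below is a small Python sketch (Python rather than C, to match the code earlier in the thread): it reproduces the 45.37% figure with math.erf, then builds approximately Gaussian numbers by summing flat random values, as described. The choice of 12 summands is just a common convention (it makes the variance of the sum exactly 1) and is not taken from the post.

     import random
     from math import erf, sqrt

     # 1) Verify the height example: X ~ N(mean=70, sd=4.3), P(X > 70.5).
     mean, sd, threshold = 70.0, 4.3, 70.5
     t = (mean - threshold) / (sqrt(2.0) * sd)
     p_greater = erf(t) / 2.0 + 0.5
     print(f"P(X > {threshold}) = {p_greater:.6f}")   # approx. 0.4537

     # 2) Central Limit Theorem trick: summing flat (uniform) random numbers
     #    gives an approximately normal result. Summing 12 values from [0, 1)
     #    and subtracting 6 gives mean 0 and variance 1, since the variance of
     #    a single uniform value on [0, 1) is 1/12.
     def approx_standard_normal():
         return sum(random.random() for _ in range(12)) - 6.0

     sample = [approx_standard_normal() for _ in range(10000)]
     sample_mean = sum(sample) / len(sample)
     sample_var = sum((x - sample_mean) ** 2 for x in sample) / len(sample)
     print(f"sample mean ~ {sample_mean:.3f}, sample variance ~ {sample_var:.3f}")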
  9. In C/C++ there are the randomization functions srand() and rand() http://www.cplusplus.com/reference/cstdlib/srand/ rand() gives an integer in the range 0 to RAND_MAX. So if you want a value in a different range [MIN, MAX], you have to rescale it, e.g. MIN + rand() % (MAX - MIN + 1) for integers, or MIN + (MAX - MIN) * (rand() / (double)RAND_MAX) for a continuous range. Python has equivalent functions in its pseudo-random number generator (PRNG) module: https://docs.python.org/3/library/random.html (A short sketch of the Python equivalents follows just below.)
    1 point
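     For completeness, a minimal sketch of the Python counterparts mentioned here (the seed and the ranges below are placeholders chosen purely for illustration):

     import random

     random.seed(42)                  # analogous to srand(): seed the PRNG (seed value is arbitrary)
     print(random.randint(0, 99))     # uniform integer in [0, 99], both ends inclusive
     print(random.uniform(1.5, 3.5))  # uniform float in [1.5, 3.5]
     print(random.gauss(0.0, 1.0))    # normally distributed float, mean 0, standard deviation 1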
  10. Also from Wikipedia: "...error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and variance 1/2, erf(x) is the probability of Y falling in the range [−x, x]." I'm no expert, but I believe the utility of erf() lies in the fact that, using simple arithmetic, one can derive probabilities for an arbitrary mean and standard deviation. The mean is fairly obvious: evaluating erf at (x − 3) moves the centre to x = 3. The standard deviation involves a scale factor on the argument, if I recall correctly. So let's say you have a measurement with mean = 10 and standard deviation = 1; you can use erf to answer questions like: what is the probability that a given measurement is between 9.9 and 10.1? (The shift-and-scale relation is written out just after this post.) I wrote some code a while back to generate random numbers given a mean and standard deviation and used erf() to check the results.
    1 point
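     The shift-and-scale idea in this post can be written out explicitly (a standard result, not quoted from the post): for [math]X \sim N(\mu, \sigma^2)[/math], the probability of landing within a distance a of the mean is [math]P(\mu - a < X < \mu + a) = \operatorname{erf}\left(\frac{a}{\sigma\sqrt{2}}\right)[/math], which reduces to the Wikipedia statement when [math]\mu = 0[/math] and [math]\sigma^2 = 1/2[/math]. For the example given (mean 10, standard deviation 1), the probability of a measurement between 9.9 and 10.1 is [math]\operatorname{erf}(0.1/\sqrt{2}) \approx 0.08[/math].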
  11. Hi Dag, how is your applied maths? The error function has a much wider application than just in statistics and probability, so I will deal with that first. As you note, it is defined by a definite integral, derived from the definite integral [math]\int_{-\infty}^{\infty} e^{-t^2}\,dt = \sqrt{\pi}[/math] This integral can be linked to the binomial theorem, the beta and gamma functions and some other special functions defined by integrals, and the fact that it is finite is of great use. Because it is the integral of a continuous function that remains finite, shorter intervals than plus/minus infinity also possess this property, and this is employed in defining the error function. (The resulting definition is written out just after this post.) This integral also finds use in deriving inverse Laplace transforms, so it can be used to complete the solution of partial differential equations - for example in solving the Telegrapher's equation and the heat equation. So it pops up in widely separated and surprising places, particularly as some of the variables concerned are discrete, as with the gamma and binomial functions. Normalising the above integral (setting it to one) yields the probability connection to a random variable, x, since the probability over all x must sum to 1. As to a random variable, this is a variable which can take any of a range of values, continuous or discrete, such that relative frequency is interpreted as the probability. So this is how the error function is connected to both statistical and analytical functions.
    1 point
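     For reference, the definition alluded to above, in its standard form (not quoted from the post): restricting the integral to a finite interval and normalising against [math]\sqrt{\pi}[/math] gives [math]\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}}\int_0^x e^{-t^2}\,dt[/math], so that [math]\operatorname{erf}(0) = 0[/math] and [math]\operatorname{erf}(x) \to 1[/math] as [math]x \to \infty[/math] (the factor of 2 accounts for using only half of the symmetric integration range).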
  12. Incorrect. You seem to be rather persistent. Have you thought about how much you could learn (from the members here) about physics if you tried to listen to what the mainstream has to offer?
    1 point
  13. The (1,3)-dimensionality of spacetime has a fairly privileged character; you can’t just add or take away macroscopic dimensions and still expect everything to work as before. Adding an extra macroscopic spatial dimension would be really bad news - there would no longer be any inverse-square laws; gravitational orbits would be unstable; electromagnetism would no longer be described by Maxwell’s equations; atomic orbitals would look very different or not exist at all; and so on. (The dimensional argument behind the inverse-square point is sketched just after this post.)
    1 point
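     A quick way to see the inverse-square point (a standard Gauss's-law argument, not taken from the post): in D spatial dimensions, the flux from a point source spreads over a sphere whose surface area grows like [math]r^{D-1}[/math], so the field strength falls off as [math]F \propto \frac{1}{r^{D-1}}[/math]. With D = 3 this gives the familiar inverse-square laws; with D = 4 gravity and the Coulomb force would instead fall off as [math]1/r^3[/math], which is what spoils the stability of orbits and atoms.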
  14. I think that your thinking requires more rethinking...
    1 point
  15. ! Moderator Note OK, then. Ignore moderators at your own risk. Closed
    1 point
  16. Do we understand the essential ontology of anything? All we can do is surround a thing with words without actually ever capturing and consuming it to the extent of understanding its underlying nature; we are limited by our available means of expression and symbology.
    1 point
  17. Do you have any evidence about the speed of distant galaxies that eludes everyone else? See, anyone can say anything if they choose, including those who say there is no universe, just God's hard drive. Why do you tolerate people telling you that you do not really exist? PS. I do not believe in God's hard drive; that belief goes to great physicists who say they do not believe in God, and then say God made everything. Is being that silly fun? Funny how Einstein claimed that nothing was moving, that the universe was a static bubble.
    -2 points
