I am part of a research project on the intuitive understanding of probabilities in contour plots, and we are seeking participants for a short online experiment.

Participation is completely **anonymous** and will take **less than 15 minutes of your time**. Just use the browser of your laptop or desktop PC (no mobile devices).

With your participation in the experiment you directly contribute to current basic research at Friedrich Schiller University Jena and the German Aerospace Center. We would be very happy if you could support our work.

Thank you!

]]>I have just started going through a Python programming book, and one of the functions presented is the error function. I understand it is not really necessary for me to understand the mathematics behind it, but I would like to anyway. From looking at it, reading a bit about it, and watching a video on how it is derived, my current understanding is that the function computes the probability that a random variable (if the assumptions of normal distribution, standard deviation and expected value are all met) can be found within [-x, x].

I think two things are unclear to me. Firstly: what exactly is a random variable? The wiki article and some other websites talk about it as if it is just any variable determined by chance, such as the roll of a die (I presume this cannot be used with the error function due to the lack of a normal distribution). However, I don't understand how this could be used (so most likely I am misinterpreting the explanations on the internet). Let's say I measure how tall some people are, and the people I choose are randomly picked with no selection bias. That random variable's value would be... what?

Secondly, I don't understand the usage of x in this case (erf(x)). The function's domain is -infinity to infinity, but for values of x around -3 or 3 we already have almost a 100% chance of finding our random variable. In my imagination those numbers would be arbitrary, and I can't see how one could use them (let's say we measure length in centimetres; why is the chance that the random variable will always be present erf(3) = 0.9999779?).
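For what it's worth, the connection to the normal distribution can be checked numerically: erf really is just the integral of a Gaussian bell curve, and the factor sqrt(2) is what links it to the standard normal. A minimal sketch (the trapezoidal integration is only there to show erf isn't magic):

```python
import math

def erf_by_integration(x, n=10_000):
    """Numerically integrate (2/sqrt(pi)) * exp(-t^2) from 0 to x (trapezoidal rule)."""
    h = x / n
    total = 0.5 * (math.exp(0.0) + math.exp(-x * x))
    for i in range(1, n):
        t = i * h
        total += math.exp(-t * t)
    return (2.0 / math.sqrt(math.pi)) * h * total

print(math.erf(3))             # ≈ 0.9999779
print(erf_by_integration(3))   # agrees to many decimal places

# Relation to the normal distribution: for a standard normal variable Z,
# P(-a <= Z <= a) = erf(a / sqrt(2)).  E.g. the familiar "68% within 1 sigma":
print(math.erf(1 / math.sqrt(2)))   # ≈ 0.6827
```

So erf(x) is the probability that a normal variable lands within x *standard-deviations-times-sqrt(2)* of its mean, which is why the units (centimetres or anything else) drop out: x is measured in rescaled standard deviations, not in centimetres.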

I hope the explanation of my thought process and (possibly false) assumptions is enough for someone to point out the faults in my reasoning.

I would like to know how I could apply this function and when it is useful. On Wikipedia it is written: "this is useful, for example, in determining the bit error rate of a digital communication system." But that doesn't (yet) make it clear for me :/

Hope someone can help, and please forgive my ignorance on the subject.

-Dagl

]]>
Graphically, projecting normals to the centres of the lines *A-A''*, *B-B''*, *C-C''* (also *p-p''*) and finding where they cross at point *r* gives me the centre of the single rotation I seek.

Question: how do I do that mathematically, given only *p* = (7.160299318411282, 0), rotation1 = -360/21° and rotation2 = 18°?

Caveat: the above procedure does not work for all combinations of two rotations. E.g., in the next image *p* = (50, 0) and the rotations are (-45° & +45°), which results in the normals to the bisectors all being parallel!

I know that affine transformations using homogeneous coordinates can be composed [https://en.wikipedia.org/wiki/Transformation_matrix#Composing_and_inverting_transformations], but I am stuck on how to utilise that here, as in the environment in which I am doing this (Lua embedded in a FEA package) I only have two mechanisms available: rotation about a point and translation in the XY plane.

Question 2: assuming that I get a solution to Q1 above, is my only option for dealing with the Caveat case to compare the angles of rotation and do something different if they are equal and opposite?
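One way to do Q1, sketched below (I'm assuming the first rotation is about *p* and the second about the origin; swap the centres if your setup differs): write each rotation as a homogeneous matrix, multiply them, and solve the 2×2 linear system (I − R)c = t for the fixed point c of the combined transform. That system is singular exactly when the two angles cancel, which is your Caveat case and also answers Q2: no comparison of angles is needed, just a determinant check.

```python
import math

def rot_about(cx, cy, deg):
    """Homogeneous 3x3 matrix for rotation by `deg` degrees about point (cx, cy)."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    # x' = R(x - centre) + centre, so the translation part is t = centre - R*centre
    tx = cx - (c * cx - s * cy)
    ty = cy - (s * cx + c * cy)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def matmul(A, B):
    """3x3 matrix product; A is applied AFTER B."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def fixed_point(M, eps=1e-12):
    """Solve (I - R) c = t for the centre of the combined rotation.
    Returns None in the degenerate case (angles cancel -> pure translation)."""
    a11, a12 = 1.0 - M[0][0], -M[0][1]
    a21, a22 = -M[1][0], 1.0 - M[1][1]
    det = a11 * a22 - a12 * a21
    if abs(det) < eps:
        return None                      # Caveat case: the normals are parallel
    tx, ty = M[0][2], M[1][2]
    return ((tx * a22 - a12 * ty) / det, (a11 * ty - a21 * tx) / det)

# Assumed order: rotation1 about p first, then rotation2 about the origin.
combined = matmul(rot_about(0.0, 0.0, 18.0),
                  rot_about(7.160299318411282, 0.0, -360.0 / 21.0))
print(fixed_point(combined))             # centre of the single equivalent rotation

# The Caveat combination (-45° then +45°) comes out as None:
print(fixed_point(matmul(rot_about(0.0, 0.0, 45.0), rot_about(50.0, 0.0, -45.0))))
```

In your FEA environment the composed result can then be realised with the tools you have: a single "rotation about a point" when `fixed_point` returns a centre, or a single translation (read off from the matrix's last column) when it returns None.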

Thanks.

]]>

Can anyone clarify this for me, please?

]]>

I saw this article on Cha Cha that said it was about nine millimeters, but that CAN'T be right!

How thick is a sheet of printer paper?

]]>
I would like to know whether or not there is a statistic that can differentiate between the case at the top left versus the top right. Clearly R² does not do so. One could plot the residuals, and the non-random distribution sometimes becomes apparent. However, what I was hoping to find is some number, preferably one that would be calculated by a statistics program, that could be compared in the two situations. I am reading Motulsky's book Intuitive Biostatistics (that is where I first saw the Anscombe quartet), but I have not found anything in his book yet. I am presently using ProStat, which has both a calculation of COD (which I am pretty sure is R²) and a calculation of "Corrl", which is said by the user manual to indicate "how closely the two variables approximate a linear relationship to each other." I note the presence of squared differences in the numerator of COD, which are not found in Corrl.
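One number that does distinguish the two cases is a statistic computed on the residuals ordered by x, e.g. the Durbin–Watson statistic: values near 2 mean patternless residuals, values well below 2 mean the smooth, systematic pattern of the curved data set. A sketch on Anscombe's first two data sets (I don't know whether ProStat reports it, but many regression packages do):

```python
def ols(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx

def r_squared(xs, ys):
    b, a = ols(xs, ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - sum(ys) / len(ys)) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

def durbin_watson(xs, ys):
    """DW on residuals ordered by x: ~2 = no pattern, well below 2 = systematic curvature."""
    b, a = ols(xs, ys)
    e = [y - (a + b * x) for x, y in sorted(zip(xs, ys))]
    return sum((e[i] - e[i - 1]) ** 2 for i in range(1, len(e))) / sum(v * v for v in e)

x  = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
y1 = [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]  # scattered
y2 = [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]  # smooth curve

# R^2 is (by Anscombe's construction) essentially identical...
print(r_squared(x, y1), r_squared(x, y2))        # both ≈ 0.666
# ...but Durbin-Watson separates the two cases clearly:
print(durbin_watson(x, y1), durbin_watson(x, y2))
```

Durbin–Watson was designed for serial correlation, but sorting the residuals by x turns "systematic curvature" into exactly that; a more formal alternative is a runs test or a lack-of-fit test on the residuals.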

Joe has investments in Company A, Company B, and Company C.

Joe is fated to earn $25.00 from Company A within 2 days from now.

Joe is fated to earn $45.00 from Company B within 3 days from now.

Joe is fated to earn $100.00 from Company C within 5 days from now.

Joe is fated to earn no more than $26.00 from Company C and Company B on day 1 (1 day from now).

Joe is fated to earn at least $14.00 from Company A and Company C on day 2 (2 days from now).

Joe has to earn twice the amount of money on the first day than the second day from Companies A, B, and C and twice the amount of money on the second day than the third day from Companies A, B, and C. This can be expressed algebraically as Joe earning x money on day 3 (3 days from now), 2x money on day 2 (2 days from now), and 4x money on day 1 (1 day from now).

Joe can earn whatever amount of money (that satisfies the other conditions) from Companies A, B, and C on day 4 and day 5 (4 and 5 days from now).

What is the lowest amount of money Joe can earn on day 1 (1 day from now) from Companies A, B, and C? Explain your reasoning.

P.S. How come there don't seem to be any good formulas to use for this question?
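On the P.S.: this is a linear-programming-style problem, so there is no single closed formula, but the constraints can be chained. Under the (assumed) reading that daily earnings are nonnegative, that "within N days" means fully paid out by day N, and that the day-1 and day-2 conditions apply to the companies' combined amounts: Company B must pay b1 + b2 + b3 = 45 with b1 ≤ 26 (day-1 cap shared with C), b2 ≤ 2x − 14 (A and C must supply at least 14 of day 2's total of 2x), and b3 ≤ x; hence 45 ≤ 26 + (2x − 14) + x, so x ≥ 11 and day 1 ≥ $44. A sketch that encodes every stated condition and checks a schedule achieving $44:

```python
def feasible(a, b, c):
    """a, b, c: per-day earnings (days 1..5) from Companies A, B, C."""
    day = [a[i] + b[i] + c[i] for i in range(5)]
    x = day[2]                                          # day-3 total
    return (
        all(v >= 0 for v in a + b + c)
        and sum(a) == 25 and a[2] == a[3] == a[4] == 0  # $25 from A within 2 days
        and sum(b) == 45 and b[3] == b[4] == 0          # $45 from B within 3 days
        and sum(c) == 100                               # $100 from C within 5 days
        and b[0] + c[0] <= 26                           # B+C cap on day 1
        and a[1] + c[1] >= 14                           # A+C floor on day 2
        and day[0] == 4 * x and day[1] == 2 * x         # 4x / 2x / x pattern
    )

# x = 11 (day 1 = $44) is attainable, e.g. with this schedule:
a = [18, 7, 0, 0, 0]
b = [26, 8, 11, 0, 0]
c = [0, 7, 0, 50, 43]
print(feasible(a, b, c), sum(v[0] for v in (a, b, c)))  # True 44
```

The chaining argument above shows no smaller day-1 total can satisfy all the conditions at once, so under this reading the answer is $44.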

]]>

Using PSPP, I was doing some basic linear regressions, examining the following correlation:

I included data from over 100 countries and looked at both baseline values and values 15 years later, calculating the differences in value for both the independent and dependent variable and plotting them in a graph.

Results were as follows:

A weak R-squared value, which was nonetheless highly significant (P < 0.0001), with a negative trendline. No confounding was detected from other variables.

I found a "strong" correlation between baseline values of the dependent variable and its successive changes in value during the 15-year follow-up period. The R-squared value was > 0.7. There was a positive correlation: higher changes during follow-up were related to higher baseline values.

My problem is as follows:

Most values for the dependent variable dropped over the 15-year follow-up period. When I added baseline values for the dependent variable to the model, there was no noteworthy correlation left between the independent and dependent variables (P > 0.50).

**Would it be correct to assume the negative correlation between the independent and dependent variables was (probably) caused by the strong correlation between the two values for the dependent variable?**

Subgroup analyses of the correlation between the independent- and dependent variable after 15 years showed the following:

- Decreases in values for the independent variable were not linked to changes of the dependent variable.

- Increases in values for the dependent variable were not linked to changes of the independent variable.
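What you describe looks like the classic baseline-versus-change artefact (mathematical coupling / regression to the mean): because change = follow-up − baseline contains −baseline, change scores are automatically correlated with baseline even when nothing real is going on; the sign and strength depend only on the variances and on which way round the change is defined. A small simulation with pure noise and no true effect shows how strong that built-in correlation is:

```python
import math
import random

random.seed(42)

n = 10_000
baseline = [random.gauss(0, 1) for _ in range(n)]
# Follow-up shares only half its signal with baseline; the rest is independent noise.
followup = [0.5 * b + random.gauss(0, 0.5) for b in baseline]
change = [f - b for f, b in zip(followup, baseline)]

def corr(u, v):
    m = len(u)
    mu, mv = sum(u) / m, sum(v) / m
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    return cov / math.sqrt(sum((a - mu) ** 2 for a in u) *
                           sum((b - mv) ** 2 for b in v))

# change = -0.5*baseline + noise, so a strong built-in correlation appears
# even though no variable here has any real effect on any other:
print(corr(baseline, change))   # theory: -0.5 / sqrt(0.5) ≈ -0.71
```

So a baseline-change correlation with R² > 0.5 can arise from the arithmetic alone, which is consistent with the independent variable's apparent effect vanishing once baseline is in the model.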


]]>

I need your help to calculate (approximate) a double series, as shown in the attached file.

Thank you so much.

Best,

Steve.

double_series.pdf

]]>This problem arises in data compression: consider the bits that make up a file (or a substring of bits of the file) and treat them as a number (i.e. the bits are the binary representation of this number). If we could write a pair function+input(s) whose output happens to be the substring, the whole substring could be replaced by the function+input(s).

I've thought of expressing the number as sums (or differences) of relatively big powers of prime numbers. Is this a good approach? If not, what would be a good one? And how should I proceed?

Motivation for the question: a simple function like raising the nth prime number p to a power S can result (depending on the values of p and S) in various outputs, each of which is unique (given that any number has only one prime factorization). If we pick p = 17 and S = 89435, for example, that's computationally trivial to compute (it takes logarithmic time) and will result in a somewhat gigantic number. We can then generate a file whose bits are the same as the binary representation of this number (or at least some of the bits are). (This is just a rough example.) The problem is going the other way: given a bit string (hence, a number), how do we express this specific bit string with fewer bits (very few, actually) through a function that results in the number?
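The forward direction is indeed cheap, as a quick check with Python's big integers shows, but a counting (pigeonhole) argument explains why the reverse cannot work in general: k-bit descriptions can name at most on the order of 2^k outputs, so almost all n-bit strings need close to n bits no matter what function family you pick; a scheme like p^S only ever lands on the vanishingly rare strings that happen to be prime powers.

```python
import math

p, S = 17, 89_435
N = pow(p, S)                            # the "forward" direction: trivial

# The output is huge, but its size is predictable: about S * log2(p) bits.
print(N.bit_length())                    # ≈ 365,563 bits
print(math.floor(S * math.log2(p)) + 1)  # the same number, from the formula

# Describing the pair (p, S) takes only a couple of dozen bits:
desc_bits = p.bit_length() + S.bit_length()
print(desc_bits)                         # 22

# Pigeonhole: there are 2^n strings of n bits, but fewer than 2^(k+1)
# descriptions of at most k bits, so for k = n - 10 at most one string in
# 512 is compressible by even 10 bits, whatever the description scheme.
```

This is the basic fact behind Kolmogorov complexity: any lossless scheme that shortens some inputs must lengthen others, so the interesting question is which *structured* subsets of inputs (text, images, ...) a given function family can reach cheaply.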

Any ideas/answers/comments are welcome!

]]>f(x) = [f(x) + f(-x)]/2 + [f(x) - f(-x)]/2 = f_even(x) + f_odd(x)
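The identity is easy to sanity-check numerically; for example, for f(x) = eˣ the even part is cosh(x) and the odd part is sinh(x). A minimal sketch:

```python
import math

def even_part(f):
    """Even component of f: unchanged when x -> -x."""
    return lambda x: (f(x) + f(-x)) / 2

def odd_part(f):
    """Odd component of f: flips sign when x -> -x."""
    return lambda x: (f(x) - f(-x)) / 2

f = math.exp
for x in (-2.0, 0.5, 1.7):
    # The two parts always sum back to f ...
    assert math.isclose(even_part(f)(x) + odd_part(f)(x), f(x))
    # ... and for exp they are exactly cosh and sinh:
    assert math.isclose(even_part(f)(x), math.cosh(x))
    assert math.isclose(odd_part(f)(x), math.sinh(x))
```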

So, for even functions, the odd part equals zero, and vice versa. It may be surprising that such simple logic demonstrates a truth that can seem counterintuitive: every function splits into an even and an odd part. It is interesting, however, that symmetry in the microscopic world, for example in the world of elementary particles, is exact, while in the macroscopic world, for example in biology, it is only approximate. Why is that?

By that I mean that while a hydrogen molecule is perfectly symmetrical, consisting of two identical atoms, our bodies are not perfectly symmetrical, nor can we produce any macroscopic object that is perfectly symmetrical. Is there a mathematical explanation for that fact, or does this question belong in a philosophy forum?

]]>

Take the formula for momentum, p = mv, for example: you have (say) kg times m/s. I know how to interpret m/s: for every second that passes by, so many meters are traversed. But what does kg·m mean? For every second, there are so many kilogram-meters. But what is a "kilogram-meter"?

]]>

I wonder if anyone might have anything to say on the subject and whether it can be shown in more detail how this is the case.

]]>

In the machine learning library scikit-learn for Python, the logistic regression function has an argument "class_weight". When you set a higher class weight for a class while fitting the logistic model, you get higher predictive accuracy for that class. I wish to know the mathematical principle behind setting class_weight. Is it related to modifying the target function of logistic regression (https://drive.google.com/open?id=16TKZFCwkMXRKx_fMnn3d1rvBWwsLbgAU)?
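Yes — as I understand it (a sketch of the idea, not scikit-learn's exact internals), class_weight simply multiplies each sample's contribution to the log-loss by the weight of its class, so the fitted coefficients trade errors on the up-weighted class against errors on the others:

```python
import math

def weighted_log_loss(y_true, p_pred, class_weight):
    """Binary log-loss where each sample's term is scaled by its class's weight.
    This is the modification to the target function: sum_i  w_{y_i} * loss_i."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        loss = -math.log(p) if y == 1 else -math.log(1 - p)
        total += class_weight[y] * loss
    return total

y = [0, 0, 1, 1]
p = [0.1, 0.2, 0.9, 0.4]          # the last class-1 sample is poorly predicted

base = weighted_log_loss(y, p, {0: 1.0, 1: 1.0})
up1  = weighted_log_loss(y, p, {0: 1.0, 1: 5.0})
print(base, up1)   # up-weighting class 1 makes its mistakes cost 5x more
```

Minimising the up-weighted objective pushes the decision boundary toward fewer class-1 errors, which is the accuracy gain you observed; scikit-learn's `class_weight='balanced'` option chooses the weights as n_samples / (n_classes * n_samples_in_class).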

Thank you in advance.

]]>The methods are forward Euler, backward Euler, Crank–Nicolson, Runge–Kutta and Adams–Bashforth. I tried to find an analytical solution for each equation first, then compare the results with the exact values at each time step. However, solving the equations is so difficult. Is there any way to choose a method without solving the equations?

df/dt = f^(1/3) · (1/4t)

df/dt = 2t

df/dt = −e^(f·t)

df/dt = f · (∂²f/∂x²)
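You can compare methods without any analytical solutions by measuring the empirical order of convergence: halve the step size and watch how fast the error shrinks against a reference run. A sketch on the second equation, df/dt = 2t with f(0) = 0 (its exact solution f = t² is used here only to make the check self-contained):

```python
def forward_euler(g, f0, t0, t1, n):
    """First-order explicit method: f_{k+1} = f_k + h * g(t_k, f_k)."""
    h = (t1 - t0) / n
    t, f = t0, f0
    for _ in range(n):
        f += h * g(t, f)
        t += h
    return f

def rk4(g, f0, t0, t1, n):
    """Classical fourth-order Runge-Kutta."""
    h = (t1 - t0) / n
    t, f = t0, f0
    for _ in range(n):
        k1 = g(t, f)
        k2 = g(t + h / 2, f + h * k1 / 2)
        k3 = g(t + h / 2, f + h * k2 / 2)
        k4 = g(t + h, f + h * k3)
        f += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return f

g = lambda t, f: 2 * t                  # df/dt = 2t, exact value f(1) = 1
for n in (10, 20, 40):
    err = abs(forward_euler(g, 0.0, 0.0, 1.0, n) - 1.0)
    print(n, err)                       # error halves as n doubles: first order
print(abs(rk4(g, 0.0, 0.0, 1.0, 10) - 1.0))   # essentially zero here
```

For the other equations, use a very fine-step run as the reference instead of the exact solution; stiffness (the likely issue for df/dt = −e^(f·t)) shows up as explicit methods diverging unless h is made tiny, while implicit methods such as backward Euler or Crank–Nicolson stay stable.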

]]>

Given that x and y are positive integers such that x²y² and xy³ have a greatest common factor (GCF) of 27, which of the following could be a value of y?

The available answers are 81, 27, 18, 9 and 3

I ruled out 81, 27, 18 and 9 as values of y because in all cases y² would be greater than 27, so the GCF could not be 27. It appears the intended answer is y = 3.

However, if y = 3, then x²y² becomes 9x². In order for this to have a GCF of 27, x² must contain at least one factor of 3.

If x² contains a single factor of 3 (or any odd number of them), then x must contain a factor of √3, and x will not be a positive integer, which violates the problem statement.

If x² contains two factors of 3 (or any even number of them), then x will contain one or more factors of three, and the GCF will be at least 81, which violates the problem statement.

What am I missing? Any thoughts?
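Your reasoning can be checked by brute force. Since gcd(x²y², xy³) = x·y²·gcd(x, y), the value 27 forces y² to divide 27, leaving only y = 1 or y = 3 as candidates, and (as you argue) y = 3 turns out to be impossible too. A quick search over small x and y (a check on the reasoning rather than a proof, though the factorisation argument closes the gap):

```python
import math

# Collect every y for which SOME positive integer x gives GCF(x^2*y^2, x*y^3) = 27.
possible_y = set()
for y in range(1, 100):
    for x in range(1, 2000):
        if math.gcd(x * x * y * y, x * y ** 3) == 27:
            possible_y.add(y)
            break

print(sorted(possible_y))   # only y = 1 works (with x = 27); none of the listed answers do
```

So you are not missing anything: under the literal reading of "greatest common factor of 27", none of the five offered answers is actually attainable, and the question as printed appears to be flawed (or intends "27 is *a* common factor" rather than "the greatest").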

Thanks

]]>Or does it posit that ONE monkey, given infinite time, will produce an error-free complete works?

If it is the first scenario, I can see this happening, and I could actually imagine it working with just a large number of monkeys and a long time.

If it's the second scenario, I have a problem, and not just because it is totally counter-intuitive or because infinite time is impossible to comprehend.

I realise that monkeys and typewriters are (fun) devices used to help think about complex theories pertaining to probabilities, infinity, etc., but I do have some serious questions:

Are the monkeys to be thought of as random letter generators?

If the experiment were posed as this: an infinite number of monkeys, given infinite time, could produce exactly the same sequence of randomly generated letters as another monkey had produced (the same number of characters as the complete works of Shakespeare). I can understand that although this would be highly unlikely with a billion monkeys and a billion years, it WILL happen given infinity.

This is random letter generators producing a random sequence.

Shakespeare is not a random sequence.

I would argue that it is IMPOSSIBLE for a random letter generator to produce a work of non-randomly generated language of that length, even given infinity. I accept that I don't know where the cut-off point in length would be.

Why am I wrong? I assume that I am!
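The flaw is in "IMPOSSIBLE": whether the target text is random or meaningful never enters the mathematics. A uniform random generator assigns every fixed sequence of length L the same tiny but nonzero probability, and any fixed nonzero probability, repeated independently forever, succeeds with probability 1. A sketch of the arithmetic (illustrative lengths, not the actual corpus):

```python
# Chance that at least one of N independent L-letter blocks of uniformly random
# letters equals a FIXED target string. Only the target's LENGTH appears in the
# formula -- "Shakespeare is not a random sequence" changes nothing here.
def p_at_least_one(L, N, alphabet=26):
    p_single = alphabet ** (-L)      # probability that one block matches exactly
    return 1 - (1 - p_single) ** N

print(p_at_least_one(5, 10**9))      # short target, many tries: ~1.0
print(p_at_least_one(50, 10**9))     # longer target: ~0 for this N, but still > 0
```

Since p_single > 0 for every finite L, the expression 1 − (1 − p_single)^N tends to 1 as N grows without bound; lengthening the target only pushes the required N to an absurdly larger (but still finite on average) scale. "Will not happen in any feasible time" is true; "impossible" is not.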

]]>Before anything: I am a 50-year-young man with Asperger's and ADHD who considers himself a free-thinking Catholic.

This question is in two parts.

And my sincere apologies for the length of the second one. It might seem like a wall of words, but I saw no other way to properly post it without changing anything in the question's content, interpretation, or meaning.

1) First, a personal question; this is to see just how far math and science are related in your person:

"How important is mathematics to you, what place has mathematics to you, and how far does mathematics have an impact on your thinking/accepting?"

2) Here we have the actual math question, the one I warned you about:

a) Could you please calculate the odds, preferably as a percentage, of just how big the chance is that the universe and everything in it is actually random luck, and thus not created by whom- or whatever a "creator" might be (be it God, Yahweh, Jehovah, Allah, ..., an intellectual energy form, or an alien playing The Sims; I mention the alien to show that I leave it completely open what this creator might be, as I have not a single clue, NOT to make a mockery, and also, in my own unusual and autistic way (Asperger's with ADHD, note), to make sure I do not insult anyone in any way)? By "the universe and everything in it" I mean the universe from the Big Bang to this day and age and everything that comes with it: the placing of all bodies; the creation of the elements; the creation of our solar system; the perfect positioning of Earth in the solar system, which has the perfect sun to sustain life (as we know it, for the record); the forming of Earth; the first start of life on Earth (from that simple single cell up to the enormous variety of life: plant, animal, fish, insect, and so on, through all time periods); the complete evolution of life, with human origins and evolution of greatest importance in this part; the way ecological systems work; and how nature has been able to withstand (albeit losing this battle ever more swiftly) the pollution and destruction by human(oid) hands throughout all eras of humanoid existence.

Thank you very much.

Jack says: I don't know all the digits, but James doesn't either.

James says: I don't know all the digits, but John doesn't either.

John says: I just found it, and Jack should have just found it as well.

What are the four digits?

]]>

"If K ≠ 0 at a point P on a surface, show that there is a neighborhood of P in which the points can be put into a 1-1 correspondence with the spherical image of the neighborhood (see Problem 9.57)."

Note: K is the Gauss curvature.
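For what it's worth, a sketch of the standard argument (assuming the book's convention that the Gauss curvature is the determinant of the differential of the Gauss map N : S → S²):

```latex
% Sketch, assuming K = \det(dN_P) for the Gauss map N : S \to S^2.
Since $K(P) = \det(dN_P) \neq 0$, the differential $dN_P$ is nonsingular.
By the inverse function theorem, $N$ restricted to some neighborhood $U$
of $P$ is a diffeomorphism onto its image $N(U) \subset S^2$; in
particular it is a bijection, i.e.\ the required one-to-one
correspondence between $U$ and its spherical image.
```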

]]>