I am modelling the length of the growing season for 120 different years based on estimated daily temperature. The temperature is estimated from a periodic function f(x) = D + A*sin(B(x + C)), where the constants B and C are the same every year and A and D vary.

So I would like to know when the area under each function equals 1200 as shown in the figure. I think it can be solved by finding the upper limit b in equation 1 when the lower limit a is known. For the year 1990 it would look like equation 2, and b will take a value between a and 365. Do you know how I can calculate this using Excel?
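In case it helps, here is a minimal sketch in Python of the same calculation, with made-up example constants (the A, B, C, D and a values below are placeholders, not the 1990 figures): the antiderivative of f is known in closed form, so the upper limit b can be found by bisection.

```python
import math

# Hypothetical example constants -- A and D vary by year, B and C are fixed.
# These are placeholders, not the actual 1990 values.
A, D = 8.0, 10.0
B = 2 * math.pi / 365
C = -80.0
TARGET = 1200.0
a = 100.0          # known lower limit (day of year)

def F(x):
    # Antiderivative of f(x) = D + A*sin(B*(x + C))
    return D * x - (A / B) * math.cos(B * (x + C))

# With these constants f(x) >= D - A > 0, so the area F(b) - F(a) grows
# monotonically in b, and bisection on [a, 365] finds the b with area TARGET.
lo, hi = a, 365.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if F(mid) - F(a) < TARGET:
        lo = mid
    else:
        hi = mid
b = 0.5 * (lo + hi)
print(b, F(b) - F(a))
```

In Excel itself the equivalent can be done with Goal Seek (What-If Analysis): put the antiderivative difference F(b) - F(a) in one cell and ask Goal Seek to set it to 1200 by changing the cell holding b.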

Thanks in advance,

]]>


https://en.wikipedia.org/wiki/Number_line

Is it possible to find points corresponding to infinitesimals on a number line? I mean finding an infinitesimal between two neighbouring points (between two real numbers).

I am assuming that every point is surrounded by a neighbourhood. I got this idea of neighbouring points from John L. Bell's book A Primer of Infinitesimal Analysis (2008).

On page 6, he mentions the concept of an 'infinitesimal neighbourhood of 0'. But I think he would not consider his infinitesimals as points, because on page 3 he writes that "Since an infinitesimal in the sense just described is a part of the continuum from which it has been extracted, it follows that it cannot be a point: to emphasize this we shall call such infinitesimals nonpunctiform."

]]>

\[ f''(x) \approx \frac{f(x+2\,dx) - 2f(x+dx) + f(x)}{(dx)^2} \]

I am using a finite difference approximation called "second-order forward" from the link below; I use dx instead of h:

https://en.wikipedia.org/wiki/Finite_difference#Higher-order_differences
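As a quick sanity check, here is a small Python sketch of that formula applied to sin(x), whose second derivative is -sin(x) (the step size is an arbitrary choice):

```python
import math

def second_forward(f, x, dx):
    # "Second-order forward" finite difference for the second derivative:
    # f''(x) ~ (f(x + 2dx) - 2 f(x + dx) + f(x)) / dx^2
    return (f(x + 2 * dx) - 2 * f(x + dx) + f(x)) / dx**2

approx = second_forward(math.sin, 1.0, 1e-4)
exact = -math.sin(1.0)   # second derivative of sin is -sin
print(approx, exact)
```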

]]>I was looking at the post about the reliability of published research,

and I was wondering if we can know, a posteriori, whether a study is dependable or not.

Do you know of any statistical test that would improve our understanding of an already-published paper?

In biology I have seen a lot of people running three or four different tests to see if their results are meaningful. Isn't that a kind of fraud?

Thank you very much. ]]>

I'm in an online debate with someone about some deep mathematical concepts. My opponent was trying to convince me that you can have a sequence of numbers for which there is a first element and a last element, but no second element and no second-to-last element (where the sequence contains more than 2 elements). I thought that was absurd until he gave me an example: all the real numbers between 0 and 1.

It definitely has a first member (0) and it definitely has a last member (1), but after 0 there is no "next" real number. Likewise, there is no real number that comes just before 1. Yet there are obviously real numbers between 0 and 1.

That stumped me until I figured that it couldn't possibly count as a sequence, because sequences must consist of well-defined, discrete elements and real numbers aren't well-defined or discrete. I thought that was a terrible way of putting it, so I looked up the definition of sequences online, and the key word I found was "enumerable". The members of a sequence must be enumerable. And I don't believe the reals are enumerable.

But then he came up with this other example: take the sum \(\sum_{i=1}^{n}\frac{9}{10^i}\). If you define each member of the sequence as the value of this sum for every value of n > 0 and order them by each incremental value of n, then you will have the sequence (0.9, 0.99, 0.999, ...). And if you allow n = \(\infty\), then we know this sum equals 1. Therefore, 1 is the last value in the sequence, so the sequence starts with 0.9 and ends with 1. Furthermore, each member is well defined and discrete: we know each member from the sum \(\sum_{i=1}^{n}\frac{9}{10^i}\) and the value of n. Yet it has no second-to-last member.
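For what it's worth, the finite partial sums are easy to list numerically (a quick sketch):

```python
# Partial sums of sum_{i=1}^{n} 9/10^i for n = 1..7: 0.9, 0.99, 0.999, ...
terms = [sum(9 / 10**i for i in range(1, n + 1)) for n in range(1, 8)]
print(terms)
```

Every finite partial sum stays strictly below 1; appending the limit 1 as a final element gives an ordered list of order type ω + 1 rather than an ordinary sequence indexed by the natural numbers.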

Is this a legitimate example of a sequence?

]]>

For example:

\[ \lim_{x \to 0} \frac{\sin(x) - x}{x^3} \]

This is usually solved by applying L'Hôpital's rule three times, and the answer is -1/6.
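A quick numerical sketch agrees (the Taylor expansion sin x = x - x³/6 + O(x⁵) gives the same -1/6):

```python
import math

def g(x):
    return (math.sin(x) - x) / x**3

val = g(1e-2)   # evaluate at a small x near 0
print(val)      # close to -1/6
```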

]]>

https://en.wikipedia.org/wiki/Logistic_map

Here is my question: does the logistic sequence for some chosen irrational parameter reach every real number inside a real interval, or is it always just a subset?
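For reference, a minimal sketch of iterating the map (the parameter and starting point below are arbitrary choices):

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); arbitrary chaotic parameter r = 4
r = 4.0
x = 0.1234          # arbitrary starting point in (0, 1)
orbit = []
for _ in range(1000):
    x = r * x * (1 - x)
    orbit.append(x)
print(min(orbit), max(orbit))   # the orbit stays inside [0, 1]
```

Note that a single orbit is a countable set of points, so it can at most be dense in an interval; it can never equal the whole interval.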

(I hope I'm in the right section.)

thanks !

]]>Is the Fourier transform of a real function still always real? I supposed the idea was that the imaginary component decays to 0 as you take the integral from -infinity to infinity, so that it evaluates to a single finite real number. Or does the output of the Fourier transform of a real-valued function not need to be real? And why do I generally see absolute values in arguments proving well-definedness, if complex functions have even more possible values they can take? Taking an absolute value only tells you the magnitude, one number out of uncountably many possibilities.
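A small pure-Python sketch of the discrete analogue may help: the DFT of a real signal is generally complex, not real, but it satisfies the Hermitian symmetry X[k] = conj(X[N-k]) (the example signal is an arbitrary choice):

```python
import cmath

def dft(x):
    # Naive O(N^2) discrete Fourier transform
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

signal = [0.0, 1.0, 2.0, 1.0, 0.0, -1.0, -2.0, -1.0]  # arbitrary real-valued signal
X = dft(signal)

# The transform of a real signal is generally complex, but Hermitian-symmetric:
# X[k] equals the conjugate of X[N-k].
asymmetry = max(abs(X[k] - X[len(X) - k].conjugate()) for k in range(1, len(X)))
print(asymmetry)                            # ~0 up to rounding
print(any(abs(z.imag) > 1e-6 for z in X))   # True: the output has imaginary parts
```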

]]>I've been getting into the concept of hyperreal numbers lately, and I've got tons of questions. What I understand about the hyperreals is that they are numbers larger than any real number or smaller than any real number. I'm sure you can imagine how counterintuitive this sounds to someone like me who's new to the concept. It's like talking about numbers greater than infinity. I always thought that was impossible. So it shouldn't be surprising that someone like me would have a ton of questions. I'll start with a couple.

1) Assume that R is a hyperreal number greater than any real number. What does 2 x R equal? It's clear what 2 x n means where n is a real number because there is a 0 value for reference--i.e. 2 x n is a number twice the distance from 0 as n is from 0. But do the hyperreals have their own 0 point? How could they if they are greater than any real number (I realize some hyperreals are smaller than any real number, but for this question I'm only focused on the infinitely large hyperreals)? If 2 x R means twice the distance from 0 the real number as R is from 0 the real number, then you get a number another infinite distance away--sort of like a hyper-hyperreal number. <-- Does that make sense? Do the infinitely large hyperreals have their own infinity beyond which are numbers that are hyperreal even to the hyperreals?

2) I remember watching a Vsauce episode on YouTube where Michael Stevens explained the difference between cardinals and ordinals, which as I understand it is the difference between numbers that represent quantities and numbers that represent orders. He explained that while there is no cardinal number greater than infinity, you could talk about ordinal numbers greater than infinity. He didn't explicitly link ordinals to hyperreals, but it seemed like the same idea. He stressed that since ordinals don't stand for quantities, you cannot use ordinals to speak of "how much" something is, but simply whether they come "before" or "after" another number. Is this true of hyperreals as well? If so, this would seem to imply that there is no 0 point on the hyperreal number line, as that would mean you could quantify any hyperreal number R (the ones greater than infinity). Its quantity would just be how many whole hyperreal numbers it is away from "hyper-zero" (just as we say the number 5 represents the quantity of whole numbers it is away from 0). But if there is no such "hyper-zero" number, then there isn't a reference point relative to which we can say "how much" a hyperreal number (greater than infinity) represents (except that it's greater than any real number). We could still quantify the difference between any two (greater than infinity) hyperreal numbers. So we could say R+3 is 3 greater than R, but without knowing how much R really is, we don't really know how much R+3 is either. So I guess the question is: should hyperreal numbers greater than infinity be thought of as ordinals only--they represent orders of number, not quantities--or is there a way of talking about their quantities as well?

I'll stop there for now. Thanks for any forthcoming responses.

]]>

Consider the sum of the p-th powers of the terms of an arithmetic progression:

[math]\sum_{i=1}^n x_i^p = x_1^p + x_2^p + x_3^p + \cdots + x_n^p[/math]

The general equation for the sum is given as follows

[math]\sum^{n}_{i=1} {x_i}^p=\sum^{u}_{m=0}\phi_m s^{2m}\frac{[\sum^{n}_{i=1} x_i] ^{p-2m}}{n^{p-(2m+1)}}[/math]

where: [math]p-(2m+1)\ge 1[/math] if p is even and [math]p-(2m+1)\le 1[/math] if p is odd, [math]\phi_m[/math] is a coefficient, [math]\sum^{n}_{i=1} {x_i}[/math] is the sum of the n terms, [math]u=\frac {p-1}{2}[/math] for odd p and [math]u=\frac {p}{2}[/math] for even p, and s is the common difference between terms (i.e. [math]s=x_{i+1}-x_{i}[/math]).

Below are the equations for p=2-7

[math]\\\sum_{i=0}^{n}x_{i}^{2}=\frac{\left [ \sum_{i=0}^{n}x_{i} \right ]^2}{n}+\frac{n(n^2-1)s^2}{12}\\\\\\\sum_{i=0}^{n}x_i^3=\frac{\left [ \sum_{i=0}^{n}x_i \right ]^3}{n^2}+\frac{(n^2-1)s^2\left [ \sum_{i=0}^{n} x_i\right ]}{4}[/math]

The value s is the common difference of successive terms in the arithmetic progression and [math]\sum_{i=0}^{n}x_i[/math] is the sum of the arithmetic terms. The beauty of this equation is that when you set n=2, it describes Fermat's Last Theorem in polynomial form, and if you set p to be negative, you can get a new form of the Riemann zeta function.
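As a sanity check, here is a quick numeric verification of the p = 2 and p = 3 identities (treating the sums as running over the n terms of the progression; the values of n, x₁ and s below are arbitrary choices):

```python
# Numeric check of the p = 2 and p = 3 identities for an arithmetic progression
n, x1, s = 7, 3, 2                      # n terms, first term x1, common difference s
xs = [x1 + i * s for i in range(n)]
S1 = sum(xs)

lhs2 = sum(x**2 for x in xs)
rhs2 = S1**2 / n + n * (n**2 - 1) * s**2 / 12
assert abs(lhs2 - rhs2) < 1e-9

lhs3 = sum(x**3 for x in xs)
rhs3 = S1**3 / n**2 + (n**2 - 1) * s**2 * S1 / 4
assert abs(lhs3 - rhs3) < 1e-9
print(lhs2, lhs3)
```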

Here, you can see how the coefficients are repetitive:

[math]\\\sum_{i=0}^{n}x_i^4=\frac{\left [ \sum_{i=0}^{n}x_i \right ]^4}{n^3}+\frac{(n^2-1)s^2\left [ \sum_{i=0}^{n}x_i \right ]^2}{2n}+\frac{n(3n^2-7)(n^2-1)s^4}{240}\\\\\\\sum_{i=0}^{n}x_i^5=\frac{\left [ \sum_{i=0}^{n}x_i \right ]^5}{n^4}+\frac{5(n^2-1)s^2\left [ \sum_{i=0}^{n} x_i\right ]^3}{6n^2}+\frac{(3n^2-7)(n^2-1)s^4\left [ \sum_{i=0}^{n}x_i \right ]}{48}[/math]

[math]\\\sum_{i=0}^{n}x_i^6=\frac{\left [ \sum_{i=0}^{n}x_i \right ]^6}{n^5}+\frac{5(n^2-1)s^2\left [ \sum_{i=0}^{n}x_i \right ]^4}{4n^3}+\frac{(3n^2-7)(n^2-1)s^4\left [ \sum_{i=0}^{n}x_i \right ]^2}{16n}+\frac{n(3n^4-18n^2+31)(n^2-1)s^6}{1344}\\\\\\\sum_{i=0}^{n}x_i^7=\frac{\left [ \sum_{i=0}^{n}x_i \right ]^7}{n^6}+\frac{7(n^2-1)s^2\left [ \sum_{i=0}^{n}x_i \right ]^5}{4n^4}+\frac{7(3n^2-7)(n^2-1)s^4\left [ \sum_{i=0}^{n}x_i \right ]^3}{48n^2}+\frac{(3n^4-18n^2+31)(n^2-1)s^6\left [ \sum_{i=0}^{n}x_i \right ]}{192}[/math]

Perhaps, by looking at this new formulation, someone could work out an alternative, shorter proof of Fermat's Last Theorem.

]]>I realise that there can be multiple answers; I'm after the smallest integers that produce c.

e.g. given c = 1.05, A/B = 21/20.
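One way to sketch this in Python is `Fraction.limit_denominator`, which returns the closest fraction to c with a bounded denominator (the bound below is an arbitrary choice):

```python
from fractions import Fraction

def smallest_ratio(c, max_den=1000):
    # Closest fraction to c with denominator at most max_den; for a value
    # such as 1.05 this recovers the smallest A/B producing it.
    return Fraction(c).limit_denominator(max_den)

print(smallest_ratio(1.05))   # 21/20
```

This finds the smallest-denominator fraction matching the floating-point value of c; if c is given to only a few decimal places, a modest bound like 1000 is enough.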

]]>

Here is possibly a neat "pattern" I've come across when studying prime numbers, or at least a different way of bucketing them. I'm looking to see if anyone can help me explain it, because I'm having a hard time wrapping my head around it. It may be that I've found something that is trivially explained by some known information I just don't have or am not seeing. The pattern emerges when you cut prime sieves of length N > 3 into segments of 6 after separating the first 3 numbers (1, 2 and 3). I'll explain below.

We'll be working with prime sieves that have the following properties:

- We sieve on intervals [1, N] where N > 3. The examples work out nicely if (N - 3) % 6 = 0,
- In this sieve we'll represent a prime number at index 'i' with digit '1', and a composite number with digit '0'
- We end up creating a string of '0's and '1's of length N that represents the primality of the number located at index 'i'.

Here is a sieve up to N=45, first separated by a segment of length 3 since 1, 2, and 3 are prime, and then subsequent segments of length 6.

111 - 010100 - 010100 - 010100 - 010000 - 010100 - 000100 - 010100 ...

I've created these sieves all the way up to the 1 millionth prime number. The interesting thing that emerges is there are only 4 unique segments that ever show up:

- 010100 - Segment that includes a twin prime (position 2 and 4).
- 010000 - Segment that includes a single prime at position 2.
- 000100 - Segment that includes a single prime at position 4.
- 000000 - Segment that includes no primes at all.

There is never a case where the number represented at the 6th position in a segment is prime, EVEN THOUGH this digit position always represents an odd integer. There seems to be something interesting about grouping by 6. Even more interesting: as N gets larger, the distributions of "010000" and "000100" seem to get closer and closer to equivalent (each approximately 16% when sieved up to the millionth prime).

The reason I'm calling this the hidden "DNA" of prime numbers is the similarity to DNA in biology, i.e. the four letters ACGT. There are four "letters" that emerge. It's even more interesting to encode the patterns into actual letters and then view the "strands". It'd be interesting to find patterns in the way the segments group together, and to see if there are emergent rules to the chaos.

I have Jupyter notebooks full of information related to this if anyone is interested. Otherwise, I'm keen to hear other input on what I'm looking at.
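For anyone who wants to reproduce the four segments, here is a minimal sketch (the sieve bound is an arbitrary choice satisfying (N - 3) % 6 = 0):

```python
def prime_bits(N):
    # Sieve of Eratosthenes over [1, N]; following the post's convention,
    # index 1 is also marked '1' so the first segment reads "111".
    is_prime = [True] * (N + 1)
    is_prime[0] = is_prime[1] = False
    p = 2
    while p * p <= N:
        if is_prime[p]:
            for m in range(p * p, N + 1, p):
                is_prime[m] = False
        p += 1
    return ''.join('1' if (i == 1 or is_prime[i]) else '0' for i in range(1, N + 1))

N = 6003                      # arbitrary bound with (N - 3) % 6 == 0
bits = prime_bits(N)
head, rest = bits[:3], bits[3:]
segments = [rest[i:i + 6] for i in range(0, len(rest), 6)]
uniq = set(segments)
print(head, sorted(uniq))
```

The restriction to four patterns has a short explanation: every prime greater than 3 is ≡ 1 or 5 (mod 6), and those residues are exactly the 2nd and 4th slots of each chunk; the 6th slot is always 6k+9, which is divisible by 3 and therefore never prime.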

]]>

\[ f'(h(t)) \, h'(t) = f(h(t) + \alpha) \]

where the left-hand side comes from applying the chain rule to f(h(t)). Is there a substitution that will transform this differential equation into the following form?

\[ f'(w) = f(w + \alpha) \]

It seems reasonable, but I am not finding an easy way to do it.

]]>

Apparently this can be used for coding purposes in creating a set of numbers. This and several other specific designs!

If we take a four-letter word and assign each letter a value equal to its position in the alphabet, we can create a set of values by placing the numbers in a C formation.

For example, with the word "scam" we get the values 19, 3, 1 and 13. Now if we place these values in a C formation, we can cross-reference them, or by variations define a specific set of values.

With the word "scam" in a C formation, we can use an X alignment to create two values, 3+13 and 19+1, to give the 2 values!
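The "C formation" isn't fully specified, but the letter-value and cross-sum steps as described can be sketched like this (purely illustrative):

```python
word = "scam"
# Position of each letter in the alphabet: s=19, c=3, a=1, m=13
values = [ord(ch) - ord('a') + 1 for ch in word]

# "X alignment": cross the 2nd letter with the 4th, and the 1st with the 3rd
cross1 = values[1] + values[3]   # 3 + 13 = 16
cross2 = values[0] + values[2]   # 19 + 1 = 20
print(values, cross1, cross2)
```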

NUMBERS !

]]>

If Obj1 has weight X_{1}=0.7 from Method1 and weight Y_{1}=0.5 from Method2,

and similarly Obj2 has weight X_{2}=0.5 from Method1 and weight Y_{2}=0.7 from Method2.

My objective is to rank Obj1 and Obj2 according to their weight values determined from Method1 and Method2.

Can anyone tell me a defined mathematical formula to get

Rank of Obj1 = ?

Rank of Obj2 = ?
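There is no single defined formula for this; one common choice (an assumption on my part, not something stated in the post) is to rank the objects under each method separately and then average the ranks:

```python
# Rank aggregation by average rank (an assumed method, not from the post):
# rank within each method, then average; lower average rank = better.
weights = {
    "Obj1": {"Method1": 0.7, "Method2": 0.5},
    "Obj2": {"Method1": 0.5, "Method2": 0.7},
}

methods = ["Method1", "Method2"]
avg_rank = {}
for obj in weights:
    ranks = []
    for m in methods:
        # rank 1 goes to the highest weight under method m
        ordered = sorted(weights, key=lambda o: -weights[o][m])
        ranks.append(ordered.index(obj) + 1)
    avg_rank[obj] = sum(ranks) / len(ranks)

print(avg_rank)
```

Note that with these particular weights the two objects are perfectly symmetric, so any method-agnostic aggregation (average rank, average weight, Borda count) ends in a tie, and some tie-breaking rule is needed.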

]]>Here is a math question:

First I'm going to define some things (some names may already exist that I don't know of, so please take my definitions into consideration).

- let's call p[n] the nth-rank prime number: p[0]=1, p[1]=2, p[2]=3, p[3]=5, etc.

- as you know, each integer > 0 can be written as a product of integer powers of prime numbers. Let's call it the "prime writing" of a number; I'll write u[n] for the exponents

so for any integer X we have

X = product( p[n] ^ u[n] )

- we can extend this to rational numbers, simply by allowing u[n] <0
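The "prime writing" of an integer or rational, i.e. the exponent vector u[n], can be sketched by trial division (illustrative only):

```python
def prime_writing(num, den=1):
    # Exponent map {p: u} such that num/den = product of p**u over primes p;
    # denominators contribute negative exponents, as in the post.
    exps = {}
    def factor(n, sign):
        d = 2
        while d * d <= n:
            while n % d == 0:
                exps[d] = exps.get(d, 0) + sign
                n //= d
            d += 1
        if n > 1:
            exps[n] = exps.get(n, 0) + sign
    factor(num, +1)
    factor(den, -1)
    return {p: e for p, e in exps.items() if e != 0}

print(prime_writing(12))        # {2: 2, 3: 1}
print(prime_writing(12, 35))    # {2: 2, 3: 1, 5: -1, 7: -1}
```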

My question is: can we define a set of irrational numbers in ]0 ; 1[ that extends p[n] to n < 0 and that are the building blocks for irrational numbers? Let's call them subprimes.

Those numbers would have the following properties:

- they are not powers/products of primes and other subprimes, and of course not integer powers of some other real number (other than themselves)

Are they already known ? Do they exist ? How to construct them ?

I have some (very faint) clue :

When you raise these numbers to positive powers, you get closer and closer to 0. So the closer you get to 0, the more likely you are to find a power of a bigger subprime, so the density must decrease closer to 0. You get some sort of sieve, but closer and closer to 0. ]]>

\[ \forall x: \text{ if } 0 < x < \tfrac{\pi}{2} \text{ and } |x-a| < c, \text{ then } \left|\left(\sqrt{\sin x}+1\right)^2 - \left(\sqrt{\sin a}+1\right)^2\right| < b \] ]]>

I read an article about infinities, and as always, I don't get it.

The writer says : "℘(ℕ)" and "ℕ" are not in bijection..

but, it seems easy to me to create a bijection :

You take the binary representation of a number, and for each 1 digit you take the integer corresponding to its rank:

0 <=> {}

1 <=> {0}

2 <=> {1 }

3 <=> {0 ; 1 }

4 <=> { 2 }

...

259 <=> { 0; 1 ; 8 }

..

etc and so on

You have an integer for each set of integers and vice versa; isn't that a bijection?
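The encoding described can be sketched directly:

```python
def to_set(n):
    # Bit positions of n that are 1, e.g. 259 = 2^8 + 2^1 + 2^0 -> {0, 1, 8}
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

def to_int(s):
    # Inverse map: a set of positions back to the integer
    return sum(1 << i for i in s)

print(to_set(259))        # {0, 1, 8}
print(to_int({0, 1, 8}))  # 259
```

The catch is that every natural number has finitely many binary digits, so this map only reaches the finite subsets of ℕ; an infinite subset such as the even numbers corresponds to no integer, which is where a bijection with ℘(ℕ) breaks down.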

So what did I get wrong?

]]>

I was wondering if someone could please troll me about the reason why this doesn't disprove the mathematics of probabilities, since the outcome of flipping more and more coins in a row approaches closer and closer to half of the coins being heads or tails.
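A quick simulation sketch of that convergence (seeded so the run is reproducible):

```python
import random

random.seed(0)   # fixed seed so the demo is reproducible
props = {}
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    props[n] = heads / n
print(props)
```

The proportion of heads drifts toward 1/2 as n grows (the law of large numbers); this is a statement about proportions in the long run, not about any particular finite run coming out exactly half heads.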

]]>
Consider any total linear ordering, <*, of the reals. To make it simpler, consider <* for S = {x : 0 < x}. At this point we don't know whether <* is a well-ordering or not. I will show by mathematical induction that a well-ordering of S must produce a countable number of minimums for a particular collection of subsets of S. Then I'll show that all numbers z must be in this collection or set of minimums. Thus, the conclusion must be that if **R** can be well-ordered it must be a countable set, and we know this is not true.

*The above is a preliminary test before going further to make sure my topic does not get closed*