I've been getting into the concept of hyperreal numbers lately, and I've got tons of questions. What I understand about the hyperreals is that they include numbers larger than any real number and numbers smaller than any real number. I'm sure you can imagine how counterintuitive this sounds to someone like me who's new to the concept. It's like talking about numbers greater than infinity; I always thought that was impossible. So it shouldn't be surprising that I have a ton of questions. I'll start with a couple.

1) Assume that R is a hyperreal number greater than any real number. What does 2 x R equal? It's clear what 2 x n means where n is a real number, because there is a 0 value for reference--i.e. 2 x n is a number twice the distance from 0 as n is from 0. But do the hyperreals have their own 0 point? How could they, if they are greater than any real number (I realize some hyperreals are smaller than any real number, but for this question I'm only focused on the infinitely large hyperreals)? If 2 x R means twice the distance from 0 the real number as R is from 0 the real number, then you get a number another infinite distance away--sort of like a hyper-hyperreal number. <-- Does that make sense? Do the infinitely large hyperreals have their own infinity beyond which are numbers that are hyperreal even to the hyperreals?

2) I remember watching a Vsauce episode on YouTube where Michael Stevens explained the difference between cardinals and ordinals, which as I understand it is the difference between numbers that represent quantities and numbers that represent orders. He explained that while there is no cardinal number greater than infinity, you can talk about ordinal numbers greater than infinity. He didn't explicitly link ordinals to hyperreals, but it seemed like the same idea. He stressed that since ordinals don't stand for quantities, you cannot use ordinals to speak of "how much" something is, but only whether it comes "before" or "after" another number. Is this true of hyperreals as well? If so, this would seem to imply that there is no 0 point on the hyperreal number line, as such a point would let you quantify any hyperreal number R (the ones greater than infinity). Its quantity would just be how many whole hyperreal numbers it is away from "hyper-zero" (just as we say the number 5 represents the quantity of whole numbers it is away from 0). But if there is no such "hyper-zero" number, then there isn't a reference point relative to which we can say "how much" a hyperreal number (greater than infinity) represents (except that it's greater than any real number). We could still quantify the difference between any two (greater than infinity) hyperreal numbers. So we could say R+3 is 3 greater than R, but without knowing how much R really is, we don't really know how much R+3 is either. So I guess the question is: should hyperreal numbers greater than infinity be thought of as ordinals only--representing orders, not quantities--or is there a way of talking about their quantities as well?

I'll stop there for now. Thanks for any forthcoming responses.

---

Define the sum of p-th powers of the terms of an arithmetic progression as follows:

[math]\sum_{i=1}^n x_i^p = x_1^p + x_2^p + x_3^p + \cdots + x_n^p[/math]

The general equation for the sum is given as follows:

[math]\sum^{n}_{i=1} {x_i}^p=\sum^{u}_{m=0}\phi_m s^{2m}\frac{[\sum^{n}_{i=1} x_i] ^{p-2m}}{n^{p-(2m+1)}}[/math]

where: [math]p-(2m+1)\ge 1[/math] if p is even and [math]p-(2m+1)\le 1[/math] if p is odd, [math]\phi_m[/math] is a coefficient, [math]\sum^{n}_{i=1} {x_i}[/math] is the sum of the n terms, [math]u=\frac {p-1}{2}[/math] for odd p, [math]u=\frac {p}{2}[/math] for even p, and s is the common difference between successive terms (i.e. [math]s=x_{i+1}-x_{i}[/math]).

Below are the equations for p = 2 through 7.

[math]\\\sum_{i=1}^{n}x_{i}^{2}=\frac{\left [ \sum_{i=1}^{n}x_{i} \right ]^2}{n}+\frac{n(n^2-1)s^2}{12}\\\\\\\sum_{i=1}^{n}x_i^3=\frac{\left [ \sum_{i=1}^{n}x_i \right ]^3}{n^2}+\frac{(n^2-1)s^2\left [ \sum_{i=1}^{n} x_i\right ]}{4}[/math]

The value s is the common difference of successive terms in the arithmetic progression, and [math]\sum_{i=1}^{n}x_i[/math] is the sum of the terms. The beauty of this equation is that when you set n=2 it describes Fermat's Last Theorem in polynomial form, and if you set p to be negative you get a new form of the Riemann zeta function.
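The p = 2 and p = 3 identities can be spot-checked numerically. Here is a minimal sketch (the test values n = 8, a = 3.0, s = 2.5 are just my example, with terms x_i = a + (i-1)s for i = 1..n):

```python
# Spot-check of the p = 2 and p = 3 power-sum formulas for an
# arithmetic progression x_i = a + (i - 1) s with n terms.
n, a, s = 8, 3.0, 2.5
xs = [a + i * s for i in range(n)]       # the n terms of the progression
S1 = sum(xs)                             # sum of the terms
lhs2 = sum(x ** 2 for x in xs)
rhs2 = S1 ** 2 / n + n * (n ** 2 - 1) * s ** 2 / 12
lhs3 = sum(x ** 3 for x in xs)
rhs3 = S1 ** 3 / n ** 2 + (n ** 2 - 1) * s ** 2 * S1 / 4
print(abs(lhs2 - rhs2) < 1e-8, abs(lhs3 - rhs3) < 1e-8)   # → True True
```

Changing n, a, and s leaves both checks true, which is consistent with the identities holding for any arithmetic progression.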

Here, you can see how the coefficients are repetitive:

[math]\\\sum_{i=1}^{n}x_i^4=\frac{\left [ \sum_{i=1}^{n}x_i \right ]^4}{n^3}+\frac{(n^2-1)s^2\left [ \sum_{i=1}^{n}x_i \right ]^2}{2n}+\frac{n(3n^2-7)(n^2-1)s^4}{240}\\\\\\\sum_{i=1}^{n}x_i^5=\frac{\left [ \sum_{i=1}^{n}x_i \right ]^5}{n^4}+\frac{5(n^2-1)s^2\left [ \sum_{i=1}^{n} x_i\right ]^3}{6n^2}+\frac{(3n^2-7)(n^2-1)s^4\left [ \sum_{i=1}^{n}x_i \right ]}{48}[/math]

[math]\\\sum_{i=1}^{n}x_i^6=\frac{\left [ \sum_{i=1}^{n}x_i \right ]^6}{n^5}+\frac{5(n^2-1)s^2\left [ \sum_{i=1}^{n}x_i \right ]^4}{4n^3}+\frac{(3n^2-7)(n^2-1)s^4\left [ \sum_{i=1}^{n}x_i \right ]^2}{16n}+\frac{n(3n^4-18n^2+31)(n^2-1)s^6}{1344}\\\\\\\sum_{i=1}^{n}x_i^7=\frac{\left [ \sum_{i=1}^{n}x_i \right ]^7}{n^6}+\frac{7(n^2-1)s^2\left [ \sum_{i=1}^{n}x_i \right ]^5}{4n^4}+\frac{7(3n^2-7)(n^2-1)s^4\left [ \sum_{i=1}^{n}x_i \right ]^3}{48n^2}+\frac{(3n^4-18n^2+31)(n^2-1)s^6\left [ \sum_{i=1}^{n}x_i \right ]}{192}[/math]

Perhaps, by looking at this new formulation, someone could work out an alternative, shorter proof of Fermat's Last Theorem.

---

I realise that there can be multiple [sic] answers; I'm after the smallest integers that produce c.

e.g. given c = 1.05: A/B = 21/20.
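For what it's worth, a terminating decimal like 1.05 reduces to the smallest A/B mechanically; a minimal Python sketch using the standard library's Fraction, which reduces to lowest terms:

```python
from fractions import Fraction

# A terminating decimal is exactly representable as a fraction, and
# Fraction reduces it to lowest terms, giving the smallest A and B.
c = "1.05"
f = Fraction(c)
print(f.numerator, f.denominator)   # → 21 20
```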

]]>

Here is possibly a neat "pattern" I've come across when studying prime numbers, or at least a different way of bucketing them. I'm looking to see if anyone can help me explain it, because I'm having a hard time wrapping my head around it. It may be that I've found something that is trivially explained by some known information I just don't have or am not seeing. The pattern emerges when you cut prime sieves of length N > 3 into segments of 6 after separating off the first 3 "prime" numbers (1, 2 and 3). I'll explain below.

We'll be working with prime sieves with the following properties:

- We sieve on intervals [1, N] where N > 3. The examples work out nicely if (N - 3) % 6 = 0,
- In this sieve we represent a prime number at index 'i' with the digit '1', and a composite number with the digit '0',
- We end up creating a string of '0's and '1's of length N, where the digit at index 'i' represents the primality of the number 'i'.

Here is a sieve up to N=45, first separated into a segment of length 3 (since 1, 2, and 3 are counted as prime), and then subsequent segments of length 6.

111 - 010100 - 010100 - 010100 - 010000 - 010100 - 000100 - 010100 ...

I've created these sieves all the way up to the 1 millionth prime number. The interesting thing that emerges is that there are only 4 unique segments that ever show up:

- 010100 - Segment that includes a twin prime (position 2 and 4).
- 010000 - Segment that includes a single prime at position 2.
- 000100 - Segment that includes a single prime at position 4.
- 000000 - Segment that includes no primes at all.

There is never a case where the number represented at the 6th position in a segment is prime, EVEN THOUGH this digit position always represents an odd integer. There seems to be something interesting about grouping by 6. Even more interesting, as N gets larger the distributions of "010000" and "000100" seem to get closer and closer to equal (each approximately 16% when sieving up to the millionth prime).
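To make the bucketing concrete, here is a small sketch (my own code, not the original notebooks) that builds the sieve string, splits off the first block of 3, and cuts the rest into segments of 6. It deliberately keeps 1 marked as '1' to match the convention above:

```python
# Build the primality string for 1..N (keeping 1 marked '1' to match
# the post), split off the first 3 characters, then cut into 6s.
def segments(N):
    is_p = [True] * (N + 1)
    is_p[0] = False                      # 1 is left marked, as in the post
    for i in range(2, int(N ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, N + 1, i):
                is_p[j] = False
    bits = "".join("1" if is_p[i] else "0" for i in range(1, N + 1))
    return bits[:3], [bits[3:][k:k + 6] for k in range(0, N - 3, 6)]

head, segs = segments(45)
print(head, segs[:4])       # → 111 ['010100', '010100', '010100', '010000']
print(sorted(set(segments(9_999)[1])))   # only the four segment types appear
```

Each segment covers 6k+4 .. 6k+9, so positions 1, 3, 5 are even and position 6 is 6k+9 = 3(2k+3), which is why only positions 2 and 4 can ever hold a prime.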

The reason I'm calling this the hidden "DNA" of prime numbers is the similarity to DNA in biology, i.e. the four letters ACGT: there are four "letters" that emerge here too. It's even more interesting to encode the patterns into actual letters and then view the "strands". It'd be interesting to find patterns in the way the segments group together, and to see if there are emergent rules to the chaos.

I have Jupyter notebooks full of information related to this if anyone is interested. Otherwise, I'm keen to hear other input on what I'm looking at.

---

f'(h(t))*h'(t) = f(h(t) + \alpha)

where the left-hand side is the derivative of f(h(t)) with respect to t, by the chain rule. Is there a substitution that will transform this differential equation into the form

f'(w) = f(w + \alpha)

? It seems reasonable but I am not finding an easy way to do it.

---

Is the Fourier transform of a real function always real? I suppose the idea is that the imaginary component decays to 0 as you take the integral from -infinity to infinity, so that it evaluates to a single finite real number. Or does the output of the Fourier transform of a real-valued function even need to be real? And why do I generally see absolute values in proofs of well-definedness, if complex functions have even more possible values they can take? Taking an absolute value only tells you about the magnitude, and uncountably many complex numbers share any given magnitude.
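A quick numerical probe of the question (my own sketch, using NumPy's discrete FFT as a stand-in for the continuous transform): a real but non-symmetric signal has a transform with nonzero imaginary parts, but real input does force the Hermitian symmetry X(-k) = conj(X(k)):

```python
import numpy as np

# Real, non-symmetric input: the transform is NOT real, but it is
# Hermitian-symmetric, i.e. X[N-k] = conj(X[k]).
t = np.linspace(0.0, 1.0, 64, endpoint=False)
x = np.exp(-5.0 * t)                     # real-valued, not even-symmetric
X = np.fft.fft(x)
print(np.max(np.abs(X.imag)) > 1e-6)     # → True (imaginary parts survive)
print(np.allclose(X[1:], np.conj(X[::-1][:-1])))   # → True (Hermitian)
```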

---

Apparently this can be used for coding purposes in creating a set of numbers. This, and several other specific designs!

If we take a four-letter word and assign each letter a value equal to its position in the alphabet, we can create a set of values by placing the numbers in a C formation.

For example, the word scam has values 19, 3, 1 and 13. Now if we place these values in a C formation, we can cross-reference, or by variations define, a specific set of values.

With the word scam in a C formation, we can use an X alignment to create two values, 3+13 and 19+1, giving the 2 values!
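A tiny sketch of the scheme as I read it (the function and the diagonal pairing below are my own illustration of the "X alignment"):

```python
# Map each letter to its position in the alphabet, then cross-sum the
# diagonals of the four values ("X alignment").
def letter_values(word):
    return [ord(ch) - ord("a") + 1 for ch in word.lower()]

vals = letter_values("scam")
print(vals)                                    # → [19, 3, 1, 13]
print(vals[1] + vals[3], vals[0] + vals[2])    # → 16 20
```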

NUMBERS !

---

If Obj1 has weight X_{1}=0.7 from Method1 and weight Y_{1}=0.5 from Method2,

and similarly Obj2 has weight X_{2}=0.5 from Method1 and weight Y_{2}=0.7 from Method2.

My objective is to rank Obj1 and Obj2 according to their weight values determined from Method1 and Method2.

Can anyone tell me a defined mathematical formula to get:

Rank of Obj1 = ?

Rank of Obj2 = ?
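There is no single canonical formula here, but one common approach (my assumption, not something given above) is to aggregate the two weights, for example by their mean, and rank by the aggregate. Notably, with these particular numbers the mean produces a tie, which shows why the choice of aggregation rule matters:

```python
# Rank objects by the mean of their weights from the two methods
# (the averaging rule is an assumed choice, not a given formula).
weights = {"Obj1": (0.7, 0.5), "Obj2": (0.5, 0.7)}
score = {name: (x + y) / 2 for name, (x, y) in weights.items()}
ranking = sorted(score, key=score.get, reverse=True)
print(score)     # both average to 0.6: a tie, so a tiebreak rule is needed
print(ranking)
```

A weighted mean, or ranking by Method1 with Method2 as tiebreak, would be equally defensible; the data alone does not pick one.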

---

Here is a math question:

First I'm going to define some things (some of these may already have names that I don't know of, so please take my definitions into consideration).

- let's call p[n] the nth-rank prime number: p[0]=1, p[1]=2, p[2]=3, p[3]=5, etc.

- as you know, each integer > 0 can be written as a product of integer powers of prime numbers; let's call this the "prime writing" of a number. I'll write the exponents u[n]

so for any integer X we have

X = product( p[n] ^ u[n] )

- we can extend this to rational numbers simply by allowing u[n] < 0
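The "prime writing", extended to positive rationals with negative exponents, can be sketched like this (the trial-division implementation is my own minimal version):

```python
from fractions import Fraction

# Return the exponent map {prime: u[n]} of the "prime writing"
# X = prod(p ** u) for a positive rational X, using trial division.
def prime_exponents(q):
    q = Fraction(q)
    expo = {}
    for num, sign in ((q.numerator, 1), (q.denominator, -1)):
        d = 2
        while d * d <= num:
            while num % d == 0:
                expo[d] = expo.get(d, 0) + sign
                num //= d
            d += 1
        if num > 1:
            expo[num] = expo.get(num, 0) + sign
    return expo

print(prime_exponents(12))               # → {2: 2, 3: 1}
print(prime_exponents(Fraction(9, 10)))  # → {3: 2, 2: -1, 5: -1}
```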

My question is: can we define a set of irrational numbers in ]0 ; 1[ that extend p[n] to n < 0 and are the building blocks for irrational numbers? Let's call them subprimes.

Those numbers would have the following properties:

- they are not powers/products of primes and other subprimes, and of course not integer powers of some other real number (other than themselves)

Are they already known ? Do they exist ? How to construct them ?

I have a (very faint) clue:

When you raise these numbers to positive powers, you get closer and closer to 0. So the closer you look to 0, the more likely you are to find a power of a bigger subprime, and so the density of subprimes must decrease closer to 0. You get some sort of sieve, but concentrated closer and closer to 0.

---

https://en.wikipedia.org/wiki/Number_line

Is it possible to find points corresponding to infinitesimals on a number line? I mean finding an infinitesimal between two neighbouring points (between two real numbers).

I am assuming that every point is surrounded by a neighbourhood. I got this idea of neighbouring points from John L. Bell's book A Primer of Infinitesimal Analysis (2008).

On page 6, he mentions the concept of 'infinitesimal neighbourhood of 0'. But I think he would not consider his infinitesimals as points, because on page 3 he writes that "Since an infinitesimal in the sense just described is a part of the continuum from which it has been extracted, it follows that it cannot be a point: to emphasize this we shall call such infinitesimals nonpunctiform."

---

I was looking at the post about the reliability of published research.

And I was wondering if we could know, a posteriori, whether a study is dependable or not.

Do you know any statistical test that could improve our understanding of an already published paper?

In biology I have seen a lot of people doing three or four different tests to see if their results are meaningful. Isn't that a kind of fraud?

Thank you very much.

---

\[ f''(x) \approx \frac{f(x+2\,dx) - 2f(x+dx) + f(x)}{(dx)^2} \]

I am using a finite difference approximation called "second-order forward" from the link below; I use dx instead of h:

https://en.wikipedia.org/wiki/Finite_difference#Higher-order_differences
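As a sanity check, here is a small sketch of the second-order forward difference applied to a function with a known second derivative (f = sin, so f'' = -sin; the step 1e-4 is my choice):

```python
import math

# Second-order forward difference:
# f''(x) ≈ (f(x + 2 dx) - 2 f(x + dx) + f(x)) / dx^2
def second_forward(f, x, dx):
    return (f(x + 2 * dx) - 2 * f(x + dx) + f(x)) / dx ** 2

approx = second_forward(math.sin, 1.0, 1e-4)
print(abs(approx - (-math.sin(1.0))) < 1e-3)   # → True
```

Note the truncation error of this forward stencil is only O(dx), so dx cannot be made arbitrarily small before floating-point cancellation in the numerator takes over.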

---

For all x: if 0 < x < π/2 and |x - a| < c, then |((sqrt sin x) + 1)^2 - ((sqrt sin a) + 1)^2| < b.

---

I read an article about infinities, and as always, I don't get it.

The writer says: "℘(ℕ)" and "ℕ" are not in bijection.

but it seems easy to me to create a bijection:

You take the binary writing of a number, and for each 1 you take the integer rank where it occurs:

0 <=> {}

1 <=> {0}

2 <=> {1}

3 <=> {0 ; 1}

4 <=> {2}

...

259 <=> {0 ; 1 ; 8}

...

and so on

You have an integer for each set of integers and vice versa; isn't that a bijection?
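For what it's worth, the mapping is easy to code, which makes it clearer what it does and does not cover (this sketch is my own):

```python
# Integer -> set of ranks of its 1-bits, and back again.
def to_set(n):
    return {i for i in range(n.bit_length()) if (n >> i) & 1}

def to_int(s):
    return sum(1 << i for i in s)

print(sorted(to_set(259)))     # → [0, 1, 8]
print(to_int({0, 1, 8}))       # → 259
# Every integer has finitely many 1-bits, so this pairs N with the
# FINITE subsets of N only; an infinite subset like the even numbers
# corresponds to no integer.
```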

So what did I get wrong?

---

I was wondering if someone could, please, troll me about why this doesn't disprove the mathematics of probability, since as you flip more and more coins in a row, the proportion of heads (or tails) gets closer and closer to half.
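A quick simulation of the claim in question (my own sketch; the seed is arbitrary), showing the proportion of heads tending toward 1/2 as the number of flips grows:

```python
import random

# Flip n fair coins and report the proportion of heads for growing n.
random.seed(1)
for n in (100, 10_000, 1_000_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(n, heads / n)
```

The proportion converges even though the absolute difference between the head and tail counts typically keeps growing, which is the usual source of confusion here.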

---
Consider any total linear ordering, <*, of the reals. To make it simpler, consider <* on S = {x : 0 < x}. At this point we don't know if <* is a well-ordering or not. I will show by mathematical induction that a well-ordering of S must produce a countable number of minimums for a particular collection of subsets of S. Then I'll show all numbers z must be in this collection, or set of minimums. Thus the conclusion must be that if **R** can be well-ordered, it must be a countable set, and we know this is not true.

*The above is a preliminary test before going further to make sure my topic does not get closed*

I'm a civil engineer and completed my MSc (Maths), focusing on numerical study, 10 years ago. After my semi-retirement as a result of my financial freedom, I have been studying some practical maths problems for fun.

Recently I've been trying to model and solve a 2-digit lottery drawing game, and I failed. It's purely my imagination, since I didn't see this anywhere, but who knows, it may exist?

Suppose we have a lottery game of 2 digits, drawn from 2 separate but identical electric drums, as lottery companies always have. Each drum consists of 10 balls, numbered from 0 to 9; the balls are drawn as a pair and the drawn balls are then replaced. In one game, 12 pairs of numbers are drawn as winning numbers, every Saturday and Sunday.

E.g.

A particular Saturday: 09, 21, 04, 31, 48, 61, 00, 32, 99, 98, 11, 99

Sunday: another 12 pairs of numbers

My question is: if you have the results of the last 1000 games, how do you calculate the most probable drawn numbers (one or two pairs) for the next drawing?

Any idea?
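One thing that is straightforward to compute from the last 1000 games is a frequency table of drawn pairs (the two games below are hypothetical data for illustration). Whether the most frequent pairs are genuinely "most probable" next time is exactly the open question: for fair, independent draws with replacement, every pair remains equally likely.

```python
from collections import Counter

# Tally how often each pair appeared across past games and list the
# most frequent ones (the history here is made up for illustration).
history = [
    ["09", "21", "04", "31", "48", "61", "00", "32", "99", "98", "11", "99"],
    ["05", "21", "77", "31", "02", "61", "18", "32", "99", "40", "11", "63"],
]
counts = Counter(pair for game in history for pair in game)
print(counts.most_common(3))
```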

---

I'm a freshman in university and I'm studying computer science and engineering; this will be my second year of studying. We don't have calculus as a mandatory class, but I can take it as an elective.

Is calculus necessary for my future as a student, and would it help me in data science or AI? That's what I'm really interested in, and I want to work in one of those fields. Would calculus make my education easier in the future and in my work? In my next semesters I want to take just artificial intelligence and data science classes (Data Mining, Data Science, Machine Learning, etc.). Is calculus used there?

Thanks for your time reading and answering my question.

[math](x+5)^x=7[/math].

I know that to solve it I can use the omega function (the Lambert W function); however, I can't understand how it works. So I'm asking for help from those of you who know how to solve this kind of equation (showing as many steps as you can), in order to give me a reference point for further study of this topic.
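While waiting for a Lambert W derivation, a numerical reference value helps check any closed-form attempt. A minimal bisection sketch (taking logs of both sides; the bracket [0, 2] is my own choice):

```python
import math

# Solve (x + 5)^x = 7 by bisection on g(x) = x*ln(x + 5) - ln(7),
# the log of both sides, which shares the same positive root.
def g(x):
    return x * math.log(x + 5) - math.log(7)

lo, hi = 0.0, 2.0          # g(0) < 0 < g(2) brackets the root
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        hi = mid
    else:
        lo = mid
print(round(lo, 6))        # root is near 1.08
```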

---
---

Here is the pattern required to enumerate the reals (that is, to "prove" that they are *countable*).

I don't for a moment suggest that Cantor's method is at all sensible or correct. In fact, it is the most unmathematical hand-waving you could imagine.

**Commercial link removed by moderator**