About renerpho

  • Birthday 04/12/1987

Profile Information

  • Location
  • Favorite Area of Science
    Mathematics, physics, astronomy

renerpho's Posts

  1. In that case, you should doubt your data first; if that doesn't help, you can still doubt the literature. Maybe the formula is not the one you actually need? Maybe your data is flawed, or there was a problem with your setup? Or there's a problem in the calculations that I missed. Don't just assume that the literature is wrong. Or maybe you're misinterpreting the literature, and the result you observed is actually what should happen? Check again whether the conclusions you drew are right. Can you repeat the experiment? Can you ask someone else who has actually done the experiment?
  2. If you can compute the Galois group of the septic equation [math]x^{7}+x^{5}+a_0=0[/math], there is a nice test: The septic is solvable by radicals if and only if its Galois group is either the cyclic group of order 7, the dihedral group of order 14, or the metacyclic group of order 21 or 42. Septics that have the Galois group [math]L(3,2)[/math] of order 168 can be solved using elliptic functions. All other septics (with Galois groups of higher order, 2520 or 5040) cannot be solved with radicals or elliptic functions alone. Unfortunately, computing the Galois group of a septic equation is difficult, and I don't know of any general algorithm. Giving a negative answer to the question whether the function [math]y=x^{7}+x^{5}[/math] has an inverse that can be expressed by radicals is equivalent to finding a single number [math]a_0[/math] (a rational number is sufficient) such that the Galois group of the septic equation has order >42. I am pretty sure that this is the case for almost all choices of [math]a_0[/math], even though I don't have a single example. EDIT: I would go even further and conjecture that there does not exist a polynomial of degree >4 for which the inverse function can be expressed with radicals alone. (All known examples of such polynomials that can be solved that way require special values of [math]a_0[/math].) Maybe somebody can provide a source for a proof of that "conjecture"? EDIT2: As the OP only asks how the inverse can be found, and not whether it can be found using radicals alone, here is an answer: The general septic equation can be solved using hyperelliptic curves, so you can make use of that to give an expression for the inverse function in terms of hyperelliptic functions.
Maybe someone has a link that gives a formula for the general solution of the septic; then you could apply it to the particular example with [math]a_7=a_5=1[/math] and [math]a_6=a_4=a_3=a_2=a_1=0[/math] to get an expression for the inverse function.
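One caveat worth checking before applying the group-order criterion: for some values of [math]a_0[/math] the septic is not even irreducible, so the criterion would have to be applied to its factors. The choice [math]a_0=1[/math] is my own example, verified by exact polynomial division:

```python
# Verify that x^7 + x^5 + 1 factors as (x^2 + x + 1)(x^5 - x^4 + x^3 - x + 1),
# so for a_0 = 1 the septic is reducible (a primitive cube root of unity w
# satisfies w^7 + w^5 + 1 = w + w^2 + 1 = 0, which is what suggests the factor).

def poly_divmod(num, den):
    """Divide polynomials given as coefficient lists, highest degree first."""
    num = list(num)
    quot = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]
        quot.append(coeff)
        for i, d in enumerate(den):
            num[i] -= coeff * d
        num.pop(0)
    return quot, num  # quotient, remainder

septic = [1, 0, 1, 0, 0, 0, 0, 1]   # x^7 + x^5 + 1
divisor = [1, 1, 1]                 # x^2 + x + 1
q, r = poly_divmod(septic, divisor)
print(q)                            # [1.0, -1.0, 1.0, 0.0, -1.0, 1.0]
print(all(c == 0 for c in r))       # True: the remainder vanishes
```

So [math]a_0=1[/math] is no counterexample candidate; the interesting values are those where the septic stays irreducible.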
  3. I'm not entirely sure about the correct sign (the question is not clear about that). So it is [math]\vec{q}(n)=\vec{q}(0) \pm \frac{2n}{3} (\vec{a}+\vec{b}+\vec{c})[/math], depending on the direction in which the force acts. I will leave the vector arrows aside from now on; the following refers to vectors in [math]\mathbb{R}^{2}[/math]. The continuous system can be modelled analytically, too. Note that [math]q=q(t)[/math]. You have [math]F(t,q)=q \pm \frac{2}{3} (a+b+c)[/math]. From Newton's second law, [math]F(t,q)=m \cdot \ddot{q}[/math], where [math]m[/math] is the mass of the test particle at [math]q[/math]. So the differential equation for the system becomes [math]m \cdot \ddot{q} - q = \pm \frac{2}{3} (a+b+c)[/math]. The two equations to solve are [latex]\begin{pmatrix} m \ddot{x}-x \\ m \ddot{y}-y \end{pmatrix}=\pm \frac{2}{3} \begin{pmatrix} a_x+b_x+c_x \\ a_y+b_y+c_y \end{pmatrix}[/latex]. The solution of this system is [latex]\begin{pmatrix} x \\ y \end{pmatrix}=\begin{pmatrix} v_1 \cdot e^{\frac {t}{\sqrt{m}}}+v_2 \cdot e^{-\frac {t}{\sqrt{m}}}+v_3 \\ w_1 \cdot e^{\frac {t}{\sqrt{m}}}+w_2 \cdot e^{-\frac {t}{\sqrt{m}}}+w_3 \end{pmatrix}[/latex], with constants [math]v_i,w_i[/math] depending on the initial choice of [math]a,b,c,q(0)[/math]. The question does not mention the mass of the test particle, but as it is based on forces, the mass is an important factor for the continuous system.
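The claimed solution can be spot-checked numerically: for [math]m\ddot{x}-x=C[/math], the constant term must be [math]v_3=-C[/math], and the exponential terms are annihilated by [math]m\frac{\mathrm{d}^2}{\mathrm{d}t^2}-1[/math]. A minimal sketch, with sample values of my own choosing for [math]m[/math], [math]v_1[/math], [math]v_2[/math] and the sum [math]a_x+b_x+c_x[/math]:

```python
# Check via finite differences that x(t) = v1*e^{t/sqrt(m)} + v2*e^{-t/sqrt(m)} + v3
# satisfies m*x'' - x = C, provided the constant term is v3 = -C.
import math

m, v1, v2 = 2.0, 0.3, -0.7
C = 2.0 / 3.0 * (1.0 + 2.0 - 0.5)   # (2/3)(a_x + b_x + c_x) for sample a, b, c
v3 = -C                              # particular solution of m*x'' - x = C

def x(t):
    s = math.sqrt(m)
    return v1 * math.exp(t / s) + v2 * math.exp(-t / s) + v3

h = 1e-4
for t in (0.0, 0.5, 1.3):
    xdd = (x(t + h) - 2.0 * x(t) + x(t - h)) / h**2   # central second difference
    assert abs(m * xdd - x(t) - C) < 1e-5
print("ok")
```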
  4. [latex]\begin{pmatrix} -1 \\ 2 \end{pmatrix}[/latex] [math]e^{\frac {t}{\sqrt{m}}}[/math] [latex]\begin{pmatrix} m \ddot{x}-x \\ m \ddot{y}-y \end{pmatrix}=\pm \frac{2}{3} \begin{pmatrix} a_x+b_x+c_x \\ a_y+b_y+c_y \end{pmatrix}[/latex]
  5. I understand it as follows: You start with a vector [math]\vec{q}(0)[/math] and (simultaneously) apply the forces [math]\frac{2}{3} (\vec{a}-\vec{q}(0))[/math], [math]\frac{2}{3} (\vec{b}-\vec{q}(0))[/math] and [math]\frac{2}{3} (\vec{c}-\vec{q}(0))[/math]. You repeat this [math]n[/math] times. That means that in the n-th step you apply the forces [math]\frac{2}{3} (\vec{a}-\vec{q}(n-1))[/math], [math]\frac{2}{3} (\vec{b}-\vec{q}(n-1))[/math] and [math]\frac{2}{3} (\vec{c}-\vec{q}(n-1))[/math] to [math]\vec{q}(n-1)[/math] to get [math]\vec{q}(n)[/math]. You can give an explicit formula for the result: [math]\vec{q}(n)=\vec{q}(0) + \frac{2n}{3} (\vec{a}+\vec{b}+\vec{c})[/math]. All the intermediate terms cancel out nicely.
  6. Actually, we think that the primes behave essentially like a pseudo-random number sequence (with a few known differences that are already well understood). The Riemann hypothesis would confirm some of that (at least in parts). It would allow us to make a lot of predictions about the behaviour of primes (because many methods used to study random number sequences could then be used to tackle prime numbers). It's a common misconception that the Riemann hypothesis would reveal hidden patterns in the prime numbers. The opposite is true: The reason why there are so many unproven conjectures about primes is that we don't know whether there are any fancy, hidden patterns.
  7. Hey steq, The calculations look fine, including the integration part. Your Excel sheet seems to calculate what is given in formulas (5) and (6), although I have no clue whether the result makes sense physically. I suggest testing it with an alternative integration method (try the rectangle rule), just to see whether the integration method has a significant effect.
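The suggested cross-check looks like this in code. The integrand here is a stand-in (I don't have steq's data), but the idea carries over: if two different rules agree to several digits, the integration scheme is probably not the source of the problem.

```python
# Compare the rectangle (midpoint) rule with the trapezoidal rule on a
# sample integrand; the exact value of the integral of sin over [0, pi] is 2.
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

mid = midpoint(math.sin, 0.0, math.pi, 1000)
trap = trapezoid(math.sin, 0.0, math.pi, 1000)
print(mid, trap)   # both very close to 2
```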
  8. Hello Pawel. The minimum [math]Q_{min}=\max(KB-1,0)[/math] is obtained as follows: [math]\sum_{i=1}^{K}|B-A_i|\geq 0[/math] is trivial, and this bound is reached if all [math]A_i[/math] are equal to [math]B[/math]. If [math]KB-1>0[/math], then the minimum is not reached at 0, but at [math]\sum_{i=1}^{K}|B-A_i|\stackrel{|x|\geq x}{\geq}\sum_{i=1}^{K}(B-A_i)[/math] [math]=KB-\sum_{i=1}^{K}A_i \stackrel{\sum_{i=1}^{K}A_i=1}{=}KB-1[/math]. This minimum value is reached if and only if [math]A_i\leq B\ \forall i[/math]. In the case [math]KB-1<0[/math], you have [math]B<\frac{1}{K}[/math], so at least one of the [math]A_i[/math] is larger than [math]B[/math] (by the pigeonhole principle), which means that [math]KB-1[/math] can't be reached. In that case, the minimum is 0. For the maximum [math]Q_{max}=1+B(K-2)[/math]: [math]\sum_{i=1}^{K}|B-A_i| \stackrel{extend}{=}\sum_{i=1}^{K}(B+A_i)-\sum_{i=1}^{K}(B+A_i-|B-A_i|)[/math] [math]=KB+1-\sum_{i:A_i>B}(B+A_i+B-A_i)-\sum_{i:A_i\leq B}(B+A_i-B+A_i)[/math] [math]=KB+1-\sum_{i:A_i>B}2B-\sum_{i:A_i\leq B}2A_i[/math]. This becomes maximal if you have [math]A_i=0[/math] for all [math]A_i\leq B[/math], when it equals [math]KB+1-2B\cdot \#\{i:A_i>B\}[/math]. If one of the [math]A_i[/math] is equal to [math]1[/math] (and all other [math]A_i[/math] are [math]0[/math]), this takes its maximum value [math]KB+1-2B \cdot 1=1+B(K-2)[/math].
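The two bounds are easy to sanity-check by Monte Carlo sampling. This sketch (my own, not part of the original post) assumes the [math]A_i[/math] are nonnegative with [math]\sum_i A_i=1[/math] and [math]0\leq B\leq 1[/math], as in the derivation above:

```python
# Monte-Carlo check of  max(K*B - 1, 0) <= sum_i |B - A_i| <= 1 + B*(K - 2)
# for A_i >= 0 with sum A_i = 1 and 0 <= B <= 1.
import random

random.seed(1)
K = 5
for _ in range(10000):
    raw = [random.random() for _ in range(K)]
    s = sum(raw)
    A = [r / s for r in raw]        # random point on the probability simplex
    B = random.random()
    Q = sum(abs(B - a) for a in A)
    assert max(K * B - 1.0, 0.0) - 1e-12 <= Q <= 1.0 + B * (K - 2) + 1e-12

# The maximum is attained when one A_i equals 1 and the rest are 0:
B = 0.3
A = [1.0] + [0.0] * (K - 1)
assert abs(sum(abs(B - a) for a in A) - (1.0 + B * (K - 2))) < 1e-12
print("bounds hold")
```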
  9. Perhaps - but notice that, even if your formula is related to Fermat's Last Theorem for the case [math]n=3[/math], it cannot attack the general case. And there already are easy, elementary proofs for the case [math]n=3[/math].
  10. Primes of that form are quite rare; see https://oeis.org/A004023 for the numbers of the form [math]111 \dots 111=\frac{10^n-1}{9}[/math] that are prime. It turns out that this number is prime (or, for the largest entries, at least a probable prime) for n = 2, 19, 23, 317, 1031, 49081, 86453, 109297 and 270343.
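The small cases are quick to reproduce. The sketch below uses a Miller-Rabin test with a fixed base set that is known to be deterministic for numbers below about 3.3e24, which comfortably covers repunits up to [math]n=23[/math]:

```python
# Find which repunits (10^n - 1)/9 with n <= 30 are prime.

def is_prime(n):
    """Miller-Rabin, deterministic for n < 3.3e24 with these bases."""
    if n < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in bases:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

repunit_prime_n = [n for n in range(1, 31) if is_prime((10**n - 1) // 9)]
print(repunit_prime_n)   # [2, 19, 23] -- matching the start of OEIS A004023
```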
  11. For real numbers [math]x,y>1[/math], the solutions to the equation [math]x^y=y^x[/math] are given by the trivial [math]x=y[/math] and the more interesting [math]y=\frac{-x}{\ln(x)}W\left ( \frac{-\ln(x)}{x} \right )[/math], where [math]W[/math] is the product log (Lambert W) function, see https://en.wikipedia.org/wiki/Lambert_W_function Examples: [math]x=3\textup{, }y\approx 2.47805 \dots[/math] [math]x=4\textup{, }y=2[/math] [math]x=5\textup{, }y\approx 1.76492 \dots[/math] For [math]0<x \leqslant 1[/math], there is only the trivial solution. For negative [math]x[/math], the term [math]x^y[/math] does not define a real number unless [math]x[/math] and [math]y[/math] are both integers. The only nontrivial solutions for [math]x<0[/math] are [math](x,y)=(-2,-4)[/math] and [math](x,y)=(-4,-2)[/math] (assuming you are only interested in real solutions). This is equivalent to saying that [math]2^4=4^2[/math] is the only nontrivial pair of solutions in [math]\mathbb{N}[/math]. A proof of the formula involving the product log function can be found here: http://mathforum.org/library/drmath/view/66166.html
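The formula is easy to evaluate numerically. Libraries like SciPy provide `scipy.special.lambertw`, but a hand-rolled Newton iteration for the principal branch keeps this sketch self-contained:

```python
# Solve x^y = y^x for the nontrivial y > 1 given x > e, via
# y = -x/ln(x) * W(-ln(x)/x), with W computed by Newton's method.
import math

def lambert_w(z):
    """Principal branch of W (solves w*e^w = z) by Newton's method."""
    w = 0.0
    for _ in range(100):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (1.0 + w))
    return w

def nontrivial_y(x):
    return -x / math.log(x) * lambert_w(-math.log(x) / x)

print(nontrivial_y(4.0))   # close to 2.0
print(nontrivial_y(3.0))   # close to 2.47805
```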
  12. (1) An idea to reduce the amount of guesswork in my previous ansatz: Because the infinite sum converges (this is easy to show), [math]Q_1(m)[/math] has to be a constant, and it will be equal to the value of the infinite sum. That's because [math]\lim_{m\to\infty} \frac{Q_1(m)2^m+Q_2(m)}{2^m} = \lim_{m\to\infty} Q_1(m)[/math], and as [math]Q_1[/math] is a polynomial, this limit only exists if [math]Q_1[/math] is constant, in which case it equals the value of the infinite sum. If you already suspect the infinite sum to be equal to 26, then you can save some work by setting [math]a=26[/math], reducing the number of linear equations to 4. (2) Here is an alternative ansatz that avoids induction (at the cost of being less elementary). But it is more powerful, because it can solve an infinite class of similar problems, and more elegant, because there's no need for any "guesswork". Let [math](p,q)[/math] be a pair of real numbers with [math]q>1[/math]. Notice that [math]\sum_{n=1}^{\infty }\frac{n^p}{q^n}=\sum_{n=1}^{\infty }{\frac{(1/q)^n}{n^{-p}}}[/math]. Because of [math]q>1[/math], that sum converges. We are going to evaluate it by turning the problem into one about power series; the series involved is the one that defines the polylogarithm [math]\textup{Li}_{s}(x)[/math]. Definition: [math]\textup{Li}_{s}(x):= \sum_{n=1}^{\infty }\frac{x^n}{n^s}=x+\frac{x^2}{2^s}+\frac{x^3}{3^s}+\dots[/math] With that, we get [math]\sum_{n=1}^{\infty }\frac{n^p}{q^n}=\textup{Li}_{-p}\left (1/q \right )[/math]. Set [math]\left ( p,q \right )=\left ( 3,2 \right )[/math] and we get the expression [math]\sum_{n=1}^{\infty }\frac{n^3}{2^n}=\textup{Li}_{-3}(1/2)[/math]. Even though the polylogarithm cannot be expressed in terms of elementary functions in the general case, it can be shown to be a rational function if [math]s[/math] is a nonpositive integer, for example [math]\textup{Li}_{-3}(x)=\frac{x(1+4x+x^2)}{(1-x)^4}[/math].
This can be derived via the expression [math]\textup{Li}_{-n}(x)=\left ( x\frac{\mathrm{d}}{\mathrm{d}x} \right )^{n} \left (\frac{x}{1-x} \right )[/math] for [math]n=0,1,2, \dots[/math], which itself follows directly (by induction over [math]n[/math]) by applying the operator [math]x\frac{\mathrm{d}}{\mathrm{d}x}[/math] to both sides of the equation [math]\frac{x}{1-x}=\textup{Li}_{0}(x)[/math] (the well-known Taylor series of [math]\frac{x}{1-x}[/math]) [math]n[/math] times: each application turns [math]\frac{x^k}{k^s}[/math] into [math]\frac{x^k}{k^{s-1}}[/math]. All this leads to [math]\textup{Li}_{-3}(1/2)=26[/math], giving the desired value. With the same method, you can show results like [math]\sum_{n=1}^{\infty }\frac{n^2}{2^n}=6[/math], [math]\sum_{n=1}^{\infty }\frac{n^4}{2^n}=150[/math] or [math]\sum_{n=1}^{\infty }\frac{n^5}{4^n}=\frac{4108}{243} \approx 16.905 \dots[/math] All of these can be shown by the induction method, too - but the computations involved become extremely ugly very, very fast. The formula [math]\sum_{n=1}^{\infty }\frac{n^p}{q^n}=\left. \left ( x\frac{\mathrm{d}}{\mathrm{d}x} \right )^{p} \left (\frac{x}{1-x} \right ) \right | _{x=1/q}[/math] for [math]p=0,1,2, \dots[/math] is much easier to evaluate.
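The closed form [math]\textup{Li}_{-3}(1/2)=26[/math] can be checked directly against partial sums (the tail of [math]\sum n^3/2^n[/math] is negligible well before 200 terms):

```python
# Compare the rational closed form Li_{-3}(x) = x(1 + 4x + x^2)/(1 - x)^4
# at x = 1/2 against a long partial sum of n^3 / 2^n.
x = 0.5
li_minus3 = x * (1 + 4 * x + x * x) / (1 - x) ** 4
partial = sum(n ** 3 / 2 ** n for n in range(1, 200))
print(li_minus3, partial)   # both very close to 26
```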
  13. Remark: Bailey, Borwein et al. (2006) give a nice heuristic argument for why this formula might be true, related to double Euler sums and 4-dimensional geometry, as well as quantum physics. See p. 11 of http://crd-legacy.lbl.gov/~dhbailey/dhbpapers/tenproblems.pdf.
  14. If you want to keep the proof elementary, you will have to put some ideas into it. Here is one possible approach: Notice that your sum is of the form [math]\sum_{n=1}^{m}\frac{P(n)}{2^n}[/math], where [math]P[/math] is a polynomial. You start your proof with an educated guess that the sum will be of similar form, namely [math]\sum_{n=1}^{m}\frac{n^3}{2^n}\stackrel{?!}{=}\frac{Q_1(m)2^m+Q_2(m)}{2^m}[/math], where [math]Q_1[/math] and [math]Q_2[/math] are themselves polynomials (you include a polynomial term multiplied by [math]2^m[/math] to increase your chances of success). There is no guarantee that this will succeed, but it's a starting point. So you proceed by trial and error. You can be quite confident that [math]Q_2[/math] will be of at least the same degree as [math]P[/math] (sums and integrals don't tend to decrease the degree of the polynomials involved). So your first attempt is the simplest possible, where [math]Q_1[/math] has degree 0 (turning it into a constant, possibly 0) and [math]Q_2[/math] has degree 3. This leads you to [math]\sum_{n=1}^{m}\frac{n^3}{2^n}\stackrel{?!}{=}\frac{{a}2^m+{b}m^3+{c}m^2+{d}m+e}{2^m}[/math] for some real numbers [math]a,b,c,d,e[/math]. Evaluate at [math]m=1,\dots,5[/math] and you get a system of 5 linear equations in 5 variables. That means that IF your guess was correct, then this will lead you to the only possible solution. That's the one presented in my previous post, and once you have found it, you can prove it by induction. And indeed, you will find that [math]\begin{pmatrix}2 & 1 & 1 & 1 & 1 \\ 4 & 8 & 4 & 2 & 1 \\ 8 & 27 & 9 & 3 & 1 \\ 16 & 64 & 16 & 4 & 1 \\ 32 & 125 & 25 & 5 & 1\end{pmatrix}\begin{pmatrix}a \\ b \\ c \\ d \\ e\end{pmatrix}=\begin{pmatrix}1 \\ 10 \\ 47 \\ 158 \\ 441\end{pmatrix}[/math], with the unique solution [math]\begin{pmatrix}a \\ b \\ c \\ d \\ e\end{pmatrix}=\begin{pmatrix}26 \\ -1 \\ -6 \\ -18 \\ -26\end{pmatrix}[/math], immediately resulting in the formula shown to be correct earlier.
Notice that you still have to prove it by induction, because so far this only shows that the claim is correct for [math]m=1,\dots,5[/math]. The very same trick will work for many summation formulas that are usually proved by induction.
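The induction target from the solved system can be checked with exact rational arithmetic for many values of [math]m[/math] at once (the induction proof is still needed for all [math]m[/math], but this rules out slips in the linear algebra):

```python
# Exact check that  sum_{n=1}^{m} n^3/2^n = (26*2^m - m^3 - 6m^2 - 18m - 26)/2^m
# holds for m = 1..50, using rationals so there is no rounding error.
from fractions import Fraction

partial = Fraction(0)
for m in range(1, 51):
    partial += Fraction(m ** 3, 2 ** m)
    closed = Fraction(26 * 2 ** m - m ** 3 - 6 * m ** 2 - 18 * m - 26, 2 ** m)
    assert partial == closed
print("verified for m = 1..50")
```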