PeterJ

Twin Primes Conjecture


Where p1 = 31, 287 is not in R. The definition for R is p1^2 to p2^2.

 

This is a different definition for R from what you've told us previously. You defined R as the difference between p_1^2 and p_2^2, which in the case of 31 and 37 works out to be 408, which means 287 is in R. Furthermore, imatfaal used the same definition for R, and now you've said his math is correct aside from the 3p vs. 6p thing you mentioned.

 

However, assuming this new definition is what you meant, there are still relevant multiples of primes greater than 6 in the range, for example 979, which aside from 1 and 979 only has factors 11 and 89 and is equal to 6(163) + 1.

 

Not upset, but frustrated. It is such a simple idea, but somehow I cannot get it across. My fault I'm sure.

 

No worries. Though I'd imagine that if the proof were based on a simple idea, it would have already been found, there's a chance you're right and we're all wrong. But it doesn't seem that way so far.

 

It was presented rigorously. But he found that the original calculation left open the theoretical possibility of a highest twin prime, even though the likelihood was approximately zero.

 

I think I've heard before of statistical arguments in favor of the twin prime conjecture. However, obviously that sort of argument doesn't constitute proof.

 

How can a limit reach infinity? Here it just grows larger. As p grows larger the limit cannot fall or stay the same.

 

I'm not sure what you mean here. If the lower limit is never infinity, then for arbitrarily large ranges, the possibility still exists that only a finite number of twin primes exist. Even if the lower limit in some range is Graham's number or something, that means there could possibly be Graham's number plus two twin primes, and no more.

Edited by John


This is a different definition for R from what you've told us previously. You defined R as the difference between p_1^2 and p_2^2, which in the case of 31 and 37 works out to be 408, which means 287 is in R.

Sorry, but the definition for R is the range between the squares of consecutive primes. It always has been. It is the entire basis for the calculation.
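For concreteness, this definition of R can be sketched in a few lines of Python (an illustrative sketch, not from the thread; `is_prime` and `next_prime` are my helper names):

```python
def is_prime(n):
    """Trial division; adequate for the small primes in this discussion."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def next_prime(n):
    """Smallest prime strictly greater than n."""
    n += 1
    while not is_prime(n):
        n += 1
    return n

# R is the range between the squares of consecutive primes p1 and p2.
p1 = 31
p2 = next_prime(p1)            # 37
R = range(p1 ** 2, p2 ** 2)    # 961 .. 1368, i.e. 408 numbers
print(len(R), 287 in R)        # 408 False
```

On this reading R for 31 and 37 runs from 961 to 1368, so 287 is indeed outside it, which is the point being argued above.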

 

 

Furthermore, imatfaal used the same definition for R, and now you've said his math is correct aside from the 3p vs. 6p thing you mentioned.

He used the correct range. I don't understand your objection.

 

 

However, assuming this new definition is what you meant, there are still relevant multiples of primes greater than 6 in the range, for example 979, which aside from 1 and 979 only has factors 11 and 89 and is equal to 6(163) + 1.

Clearly I have explained this very badly. There may be an arbitrarily large quantity of multiples of primes >6 in R. How could it be otherwise?

 

 

 

John - "I'm not sure what you mean here. If the lower limit is never infinity, then for arbitrarily large ranges, the possibility still exists that only a finite number of twin primes exist. Even if the lower limit in some range is Graham's number or something, that means there could possibly be Graham's number plus two twin primes, and no more."

 

The limit is a lower limit for twin primes. As p increases this limit (may) approach infinity and R (will) approach infinity. But there can never be an infinite quantity of twin primes in R. I think you are not seeing what I'm saying yet.

 

R is a well-defined range, and the highest prime having a multiple in R can be defined. This allows us to calculate (albeit in a very sloppy way) the relationship between available locations for twin primes and the quantity of multiples available to fill those locations as p increases. This is simply true. What is not certain is whether the calculation can be made sufficiently accurate to allow us to predict where this relationship goes.

Edited by PeterJ


Sorry, but the definition for R is the range between the squares of consecutive primes. It always has been. It is the entire basis for the calculation.

 

 

He used the correct range. I don't understand your objection.

 

 

Clearly I have explained this very badly. There may be an arbitrarily large quantity of multiples of primes >6 in R. How could it be otherwise?

 

Yeah, I slightly misread imatfaal's post. The reason I bring up primes larger than 6 is that, previously, you claimed we shouldn't need to consider 2R/(6p) for p > 7, or (later) primes greater than sqrt(p_2). That's what I was addressing here.

 

John - "I'm not sure what you mean here. If the lower limit is never infinity, then for arbitrarily large ranges, the possibility still exists that only a finite number of twin primes exist. Even if the lower limit in some range is Graham's number or something, that means there could possibly be Graham's number plus two twin primes, and no more."

 

The limit is a lower limit for twin primes. As p increases this limit (may) approach infinity and R (will) approach infinity. But there can never be an infinite quantity of twin primes in R. I think you are not seeing what I'm saying yet.

 

The entire point of the twin prime conjecture is that the lower limit does go to infinity, i.e. there are infinitely many twin primes. If the possibility exists that there aren't infinitely many, then the conjecture isn't proven.

 

R is a well-defined range, and the highest prime having a multiple in R can be defined. This allows us to calculate (albeit in a very sloppy way) the relationship between available locations for twin primes and the quantity of multiples available to fill those locations as p increases. This is simply true. What is not certain is whether the calculation can be made sufficiently accurate to allow us to predict where this relationship goes.

 

Indeed, though I'm not sure how easy it is to find said highest prime for very large ranges.

Edited by John


It's trying to use the damn things that's frustrating. I like the ideas but can't do the arithmetic.

 

But your sums are spot on. This is the calculation. Almost. A problem with your version is that you divide by 3p instead of using 6p and doubling the result. It looks the same on paper but it's not in practice. My way gives a lower total of 19. So 19 products and 19 locations gives 0 as a lower limit for TPs.

 

I promise you without any fear of being wrong that x/3 and 2(x/6) are always equal, for all numbers. You might have noticed some rounding errors - but the actual maths must be the same. It would be earth-shattering if this were not the case.
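This can be checked directly: in exact arithmetic x/3 and 2(x/6) agree, but if each count is rounded down separately (as a spreadsheet tally of whole multiples effectively does) the two can differ by one, which is where the discrepancy above comes from (a sketch; the ranges are my choice):

```python
from math import isclose

# In exact arithmetic x/3 and 2*(x/6) are identical, as imatfaal says.
for x in range(1, 1000):
    assert isclose(x / 3, 2 * (x / 6))

# But if each term is rounded down first, the two can differ by one:
diffs = [x for x in range(1, 100) if x // 3 != 2 * (x // 6)]
print(diffs[:5])   # [3, 4, 5, 9, 10]
```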

 

[attachment: spreadsheet screenshot]

 

Before I say anything about why the result is zero, which is a massive underestimate, can I just check something with you. It will save a lot of time if the result is not what I expect. When I did the spreadsheet calcs they came out fine, but it's all a bit messy for very small primes.

 

Do you have a spreadsheet set up? It sounded like it. Could you do me a favour and do the calculation once (but using 6p as the divisor of R) for two much larger primes (not so large it becomes a nuisance)? Your result will be trustworthy. My calcs worked, but maybe I built in an error. I did them a long time ago.

 

If the result is less than one then you will have demonstrated mathematically that I am an idiot. My approach would still be sound, but the calculation would be a failure.

 

Will do

[attachments: two spreadsheet screenshots]

 

 

As you can see - not a lot of success - the cumulative total of 2*(R/6p) for all relevant p outstrips the number of possible sites very quickly.

 

I will send you the Excel files (well, it's a conversion to Excel from OpenOffice, which is my spreadsheet of choice) if you would like to play around with it.

 

 

 

 


Sorry John, but I really do not understand what you are saying.

 

Imatfaal - I'm not that bad at arithmetic. As you say, it's the particular circumstances that make the two calcs different.

 

 

Give me a little time to think. There's something here that I don't understand. I've done this calculation many times and wouldn't have posted anything here if I hadn't.

 

I'll just check that we're doing the same calculation. If we are then I might have to leave the forum and never come back.


Right. I'm up to speed.

 

I have no idea how I made such a stupid mistake. It beggars belief. My calculation is clearly not up to the job. This puzzles me, because a few years ago I spent a lot of time on this and it all seemed to work fine. Oh well. My apologies for wasting so much of your time, and many thanks imatfaal for sticking with it.

 

But there is still hope for the idea, and I'd like to explore it a bit further, just to see how much else I've got wrong.

 

The problem that has to be solved for my idea to work is getting rid of all the duplication errors when counting products in R. My calc. takes no account of these. I can see that they can be calculated in principle, but actually doing it is beyond me.

 

For example, when counting the products of 7 in R we can deduct those that are joint product of 5 and thus have already been counted. For products of 7, 2 in every 210 numbers are joint products of 5. This sort of calculation can be done for all the relevant prime factors but it's massively complex. I end up with fancy hierarchical chains of reciprocals that quickly get out of control. It can be done in theory, however, and that is the main thing.

 

The principle is based on this thought. Suppose p1, p2 are large and (say) 100 numbers apart. Let us calculate (even if it takes a month) that the behaviour of the multiples of the relevant primes is such that there must be at least two twin primes in R. We then know that the density of products of primes below p2 is insufficient to prevent there being infinite twin primes.

 

When we increase p1 by one step, on average R will be larger. Let's say the next prime is also 100 numbers away. R will now be many times greater than the previous R.

 

So when we increase p by one step, so that the old p2 now becomes the new p1, we already know that there are more than two twin primes in R - unless, that is, p1 produces enough products to prevent more twins from occurring. Let's say p2 is 10^8 + 1. In this case it produces two 'relevant' products in every 6(10^8 + 1) numbers. This is not very many. What I was trying to do was show that p2 can never produce enough products to eliminate all the twin primes in R, since this seems bound to be true whether or not we can calculate it. R grows ever larger with each increase in p1, and the increase may be arbitrarily large, while the products of p2 become ever more sparse as R becomes larger.

 

Say there are 10 twin primes in R. We can just count them off a table of primes. Because the primes become more sparse as they become larger, a step increase in p1 will on average make R larger, often a lot larger, while the new p1 will be the only prime factor available to increase the density of products in R. This will produce 2 in every 6p1 numbers, less any joint products, of which there will be many. If there were 10 twin primes in the previous R, then (on average) there will be many more than 10 in the next R, but only one more prime factor.
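The claimed pattern - each step in p tends to enlarge R and, with it, the count of twin primes in R - can at least be checked empirically for small primes (a brute-force sketch of my own, an illustration rather than a proof):

```python
def is_prime(n):
    """Trial division; fine at this scale."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twins_in_R(p1, p2):
    """Twin prime pairs lying in R = [p1^2, p2^2)."""
    lo, hi = p1 ** 2, p2 ** 2
    return [(n, n + 2) for n in range(lo, hi - 2)
            if is_prime(n) and is_prime(n + 2)]

primes = [p for p in range(5, 50) if is_prime(p)]
for p1, p2 in zip(primes, primes[1:]):
    print(p1, p2, p2 ** 2 - p1 ** 2, len(twins_in_R(p1, p2)))
```

For example the R between 5^2 and 7^2 contains the twins (29, 31) and (41, 43); the R between 7^2 and 11^2 contains four pairs.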

 

Do you see what I'm getting at?

 

If the three consecutive primes we're using are 101, 201, 301 (let's say they're primes), then a step increase in p will increase R fourfold and likewise (on average) the quantity of twin primes in R. Meanwhile the density of products in R will increase by only 2/(201*6) = 1/603, less any joint products of lower primes, which gives a grand total of 1. All the rest of the products of the new p1 have already been counted.

 

This is all very vague but there does seem to be a mechanism at work that might allow us to rule out the idea of a highest twin prime.

 

Or have I got this all wrong as well?

Edited by PeterJ


Right. I'm up to speed.

 

I have no idea how I made such a stupid mistake. It beggars belief. My calculation is clearly not up to the job. This puzzles me, because a few years ago I spent a lot of time on this and it all seemed to work fine. Oh well. My apologies for wasting so much of your time, and many thanks imatfaal for sticking with it.

The primes are infinitely fascinating and I enjoy writing little Excel formulae - so my time was anything but wasted.

 

But there is still hope for the idea, and I'd like to explore it a bit further, just to see how much else I've got wrong. .

 

The problem that has to be solved for my idea to work is getting rid of all the duplication errors when counting products in R. My calc. takes no account of these. I can see that they can be calculated in principle, but actually doing it is beyond me.

Peter - I know from philosophy you understand the importance of clear and precise definitions. What I did was to count or enumerate the prime divisors of each number that was 6n+/-1. For each 6n I had a cell on the spreadsheet that asked "does this number -1 divide by 5 evenly; if not, does this number +1 divide by 5 evenly?" The next cell to the right asked the same question but dividing by 7, the next by 11.... This is a brute-force approach - which can be brilliant in disproving a universal theory but is useless in proving one. To be able to extend your argument you need to be able to calculate or predict the number of prime composites at positions 6n+/-1 from just the information that P1 = x and P2 = y.
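That spreadsheet procedure translates to a few lines of Python (a sketch; the candidate-prime list and function name are mine, not imatfaal's):

```python
# For each site 6n, ask which candidate primes divide 6n - 1 or 6n + 1,
# mirroring the row of spreadsheet cells described above.
CANDIDATES = [5, 7, 11, 13, 17, 19, 23, 29, 31]

def divisors_at_site(n6):
    hits = []
    for p in CANDIDATES:
        if (n6 - 1) % p == 0 or (n6 + 1) % p == 0:
            hits.append(p)
    return hits

# 1001 = 7 * 11 * 13 and 1003 = 17 * 59, so the site 1002 is hit four times.
print(divisors_at_site(1002))   # [7, 11, 13, 17]
```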

 

 

For example, when counting the products of 7 in R we can deduct those that are joint product of 5 and thus have already been counted. For products of 7, 2 in every 210 numbers are joint products of 5. This sort of calculation can be done for all the relevant prime factors but it's massively complex. I end up with fancy hierarchical chains of reciprocals that quickly get out of control. It can be done in theory, however, and that is the main thing.

Your proposition that it can be done in theory is not accepted.

 

We know of consecutive primes that are of the order of 10^200 - that is a gobsmackingly big number, and the range between their squares will be of a similar order. To put the size of the job of enumerating the prime composites of those two consecutives in context - if every single atom of the universe performed one computation per second for the entire predicted life of the universe, we would fall short by many orders of magnitude. It is impossible to do this by brute force - and even if we could, bearing in mind the nature of primes, we would be no closer to a proof of the conjecture.

 

 

The principle is based on this thought. Suppose p1, p2 are large and (say) 100 numbers apart. Let us calculate (even if it takes a month) that the behaviour of the multiples of the relevant primes is such that there must be at least two twin primes in R. We then know that the density of products of primes below p2 is insufficient to prevent there being infinite twin primes.

 

When we increase p1 by one step, on average R will be larger. Let's say the next prime is also 100 numbers away. R will now be many times greater than the previous R.

 

So when we increase p by one step, so that the old p2 now becomes the new p1, we already know that there are more than two twin primes in R - unless, that is, p1 produces enough products to prevent more twins from occurring.

But the "unless ..." bit is the problem

 

Let's say p2 is 10^8 + 1.

Just as an aside, 100,000,001 is divisible by 17.

In this case it produces two 'relevant' products in every 6(10^8 + 1) numbers. This is not very many. What I was trying to do was show that p2 can never produce enough products to eliminate all the twin primes in R, since this seems bound to be true whether or not we can calculate it. R grows ever larger with each increase in p1, and the increase may be arbitrarily large, while the products of p2 become ever more sparse as R becomes larger.

I see where you are going with the idea. But the unique (I think) property of primes is that you cannot make this assumption - just because it did that for all the numbers below p2 is no guarantee that it will continue. No one seriously thinks the twin prime conjecture is false - but we cannot find a way to prove it, even though we have overwhelming evidence that it is almost certainly true.

 

Say there are 10 twin primes in R. We can just count them off a table of primes. Because the primes become more sparse as they become larger, a step increase in p1 will on average make R larger, often a lot larger, while the new p1 will be the only prime factor available to increase the density of products in R.

But the number of times that 5 will divide cleanly into one of the 6n+/-1 sites in the larger range (p2->p3) is not determined (or we cannot yet show it is) by the number of times it divided into one of the 6n+/-1 sites in the smaller range (p1->p2).

 

It varies - just a little bit. It varies because the end points of our range are dictated by primes - it doesn't vary hugely, but enough to make it a bit of a guess.

 

Here are the figures for primes up to 101:

 

[attachment: spreadsheet screenshot]

 

 

 

This will produce 2 in every 6p1 numbers, less any joint products, of which there will be many. If there were 10 twin primes in the previous R, then (on average) there will be many more than 10 in the next R, but only one more prime factor.

 

Do you see what I'm getting at?

 

If the three consecutive primes we're using are 101, 201, 301 (let's say they're primes), then a step increase in p will increase R fourfold and likewise (on average) the quantity of twin primes in R. Meanwhile the density of products in R will increase by only 2/(201*6) = 1/603, less any joint products of lower primes, which gives a grand total of 1. All the rest of the products of the new p1 have already been counted.

 

This is all very vague but there does seem to be a mechanism at work that might allow us to rule out the idea of a highest twin prime.

 

Or have I got this all wrong as well?

 

Yes and no. You need something more than a heuristic - and to predict the number of relevant prime composites you need a way of distinguishing between instances where there are only two divisors and where there are more.

 

If your 6n+/-1 is divisible by 5 then you know this is the lowest prime factor. But if you know for instance that it is divisible by 17, then in order to compute a density you need to know if this is the lowest prime factor (i.e. to be sure it is not divisible by 5, 7, 11 or 13). That is the number you need - you can be safe in estimating that 2 in 5 sites of 6n+/-1 are divisible by 5; but to do the same for 7 you need to find the 7s and remove the fives. And that is complex.
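The "find the 7s and remove the fives" step is ordinary inclusion-exclusion: count the multiples of 7 in the range, then subtract the multiples of 35. A sketch for the R given by p1 = 31, p2 = 37 (my choice of example, not from the thread):

```python
def count_multiples(lo, hi, d):
    """Multiples of d in the half-open range [lo, hi), without enumeration."""
    return (hi - 1) // d - (lo - 1) // d

lo, hi = 31 ** 2, 37 ** 2          # R runs 961 .. 1368
sevens = count_multiples(lo, hi, 7)    # 58 multiples of 7
joint = count_multiples(lo, hi, 35)    # 12 of them are also multiples of 5
print(sevens - joint)                  # 46
```

Extending this beyond two primes means alternating sums over products of prime subsets, which is exactly the "hierarchical chains of reciprocals" complexity described earlier in the thread.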

 

Even if you do this - it is merely evidence, not proof. You need a way to generalise it - to show that it always works.


The primes are infinitely fascinating and I enjoy writing little Excel formulae - so my time was anything but wasted.

 

Peter - I know from philosophy you understand the importance of clear and precise definitions. What I did was to count or enumerate the prime divisors of each number that was 6n+/-1. For each 6n I had a cell on the spreadsheet that asked "does this number -1 divide by 5 evenly; if not, does this number +1 divide by 5 evenly?" The next cell to the right asked the same question but dividing by 7, the next by 11.... This is a brute-force approach - which can be brilliant in disproving a universal theory but is useless in proving one. To be able to extend your argument you need to be able to calculate or predict the number of prime composites at positions 6n+/-1 from just the information that P1 = x and P2 = y.

 

It's okay. I'm predicting, not factoring. Because the relevant products of primes only occur at 6np+/-p, I was once able to build an Excel prime checker that outstripped the machine. My basic approach is not wrong, it just might not work in this context.

 

Your proposition that it can be done in theory is not accepted.

 

We know of consecutive primes that are of the order of 10^200 - that is a gobsmackingly big number, and the range between their squares will be of a similar order. To put the size of the job of enumerating the prime composites of those two consecutives in context - if every single atom of the universe performed one computation per second for the entire predicted life of the universe, we would fall short by many orders of magnitude. It is impossible to do this by brute force - and even if we could, bearing in mind the nature of primes, we would be no closer to a proof of the conjecture.

 

Good point. But we wouldn't ever need to do this. The idea is to prove a principle, and this can be done using small primes. My proposition is that the calculation can be done for small primes, which is all we'd need to do.

 

 

 

But the "unless ..." bit is the problem

 

Just as an aside, 100,000,001 is divisible by 17.

I see where you are going with the idea. But the unique (I think) property of primes is that you cannot make this assumption - just because it did that for all the numbers below p2 is no guarantee that it will continue. No one seriously thinks the twin prime conjecture is false - but we cannot find a way to prove it, even though we have overwhelming evidence that it is almost certainly true.

 

What assumption? Is it not inevitable?

 

 

But the number of times that 5 will divide cleanly into one of the 6n+/-1 sites in the larger range (p2->p3) is not determined (or we cannot yet show it is) by the number of times it divided into one of the 6n+/-1 sites in the smaller range (p1->p2).

 

It varies - just a little bit. It varies because the end points of our range are dictated by primes - it doesn't vary hugely, but enough to make it a bit of a guess.

 

True. In the case of all p it can vary by +/-1. This is what I meant about it being a bit messy for small primes. But one product more or less doesn't matter where R is large. Don't forget I'm after a limit, not an exact count.

 

 

Damn. Text box gone again.

 

Imatfaal - If your 6n+/-1 is divisible by 5 then you know this is the lowest prime factor. But if you know for instance that it is divisible by 17, then in order to compute a density you need to know if this is the lowest prime factor (i.e. to be sure it is not divisible by 5, 7, 11 or 13). That is the number you need - you can be safe in estimating that 2 in 5 sites of 6n+/-1 are divisible by 5; but to do the same for 7 you need to find the 7s and remove the fives. And that is complex.

 

Me - Exactly. It can be done, but it's a pain. But then, I don't need to do all the calculations. I only need to make the calculation good enough to work, not completely accurate.

 

Imatfaal - Even if you do this - it is merely evidence, not proof. You need a way to generalise it - to show that it always works.

 

Me - This is an ambiguous issue for me. It must be true that the products of primes that occur at 6n+/-1 do so at 6np+/-p, and thus can be measured and counted, but I would have no idea how to go about proving this. I would probably have to draw a picture.
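The 6np+/-p claim does hold, and the reason is short: every prime p > 3 is congruent to +/-1 (mod 6), so a multiple pk is congruent to +/-1 (mod 6) exactly when the cofactor k is. A brute-force check over a finite range (my sketch, not a general proof):

```python
# A multiple of a prime p > 3 sits at a position 6n +/- 1 exactly when its
# cofactor k is itself of the form 6m +/- 1, i.e. the multiple is 6mp +/- p.
for p in (5, 7, 11, 13, 101):
    for k in range(1, 500):
        multiple_at_site = (p * k) % 6 in (1, 5)   # pk is of the form 6n +/- 1
        cofactor_at_site = k % 6 in (1, 5)         # k is of the form 6m +/- 1
        assert multiple_at_site == cofactor_at_site
print("verified for p in (5, 7, 11, 13, 101)")
```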

Edited by PeterJ


It's okay. I'm predicting, not factoring. Because the relevant products of primes only occur at 6np+/-p, I was once able to build an Excel prime checker that outstripped the machine. My basic approach is not wrong, it just might not work in this context.

But your predictions are wrong

 

Good point. But we wouldn't ever need to do this. The idea is to prove a principle, and this can be done using small primes. My proposition is that the calculation can be done for small primes, which is all we'd need to do.

You need to be able to do it for small primes and show that it must extend to large. Any formula provided must be able to extend, in theory at least, to those ridiculously large primes.

 

 

What assumption? Is it not inevitable?

I can show things for numbers which are perfect squares. I can highlight a property they share, describe how it flows from their nature as a perfect square, and show it applies to the perfect square x^2 and to its next neighbour (x+1)^2. This can be done for sequences such as Fibonacci and many others - it cannot be done for primes. It is almost certainly true - mortgage-gambling so - but that does not mean it is inevitable, just very very likely.

 

True. In the case of all p it can vary by +/-1. This is what I meant about it being a bit messy for small primes. But one product more or less doesn't matter where R is large. Don't forget I'm after a limit, not an exact count.

Yes, I understand that - however, even with a fuzziness or a fudge factor your initial idea does not work. Even if it did work, +/-1 will in extreme cases be enough to produce a false result - and that puts the kibosh on the whole thing.

 

 

Damn. Text box gone again.

 

Imatfaal - If your 6n+/-1 is divisible by 5 then you know this is the lowest prime factor. But if you know for instance that it is divisible by 17, then in order to compute a density you need to know if this is the lowest prime factor (i.e. to be sure it is not divisible by 5, 7, 11 or 13). That is the number you need - you can be safe in estimating that 2 in 5 sites of 6n+/-1 are divisible by 5; but to do the same for 7 you need to find the 7s and remove the fives. And that is complex.

 

 

 

Exactly. It can be done, but it's a pain. But then, I don't need to do all the calculations. I only need to make the calculation good enough to work, not completely accurate.

 

But it can only be inaccurate in certain prescribed ways - an overestimate of twins can occur only in very constrained circumstances. And I must repeat, it doesn't even get close at present.

This is an ambiguous issue for me. It must be true that the products of primes that occur at 6n+/-1 do so at 6np+/-p, and thus can be measured and counted, but I would have no idea how to go about proving this. I would probably have to draw a picture.

 

 

Measured, yes - counted, yes again. This is what I have done to provide the figures for you in the posts above. But you need to be able to accurately predict. Predictions using a statistical likelihood will have a systematic uncertainty - and this uncertainty will be enough to render the theory invalid.


But your predictions are wrong

 

I don't think so. They are not accurate enough, which is not quite the same thing.

 

You need to be able to do it for small primes and show that it must extend to large. Any formula provided must be able to extend, in theory at least, to those ridiculously large primes.

 

Of course. I know what needs to be done.

 

 

I can show things for numbers which are perfect squares. I can highlight a property they share, describe how it flows from their nature as a perfect square, and show it applies to the perfect square x^2 and to its next neighbour (x+1)^2. This can be done for sequences such as Fibonacci and many others - it cannot be done for primes. It is almost certainly true - mortgage-gambling so - but that does not mean it is inevitable, just very very likely.

 

I don't get this. My predictions are inevitable. They are not statistical or probabilistic. They are just not very accurate yet.

 

Yes, I understand that - however, even with a fuzziness or a fudge factor your initial idea does not work. Even if it did work, +/-1 will in extreme cases be enough to produce a false result - and that puts the kibosh on the whole thing.

 

Not at all. As p grows larger the fuzziness matters less and less.

 

But it can only be inaccurate in certain prescribed ways - an overestimate of twins can occur only in very constrained circumstances. And I must repeat, it doesn't even get close at present.

 

Yes. An overestimate of twins would be useless.

 

 

Measured, yes - counted, yes again. This is what I have done to provide the figures for you in the posts above. But you need to be able to accurately predict. Predictions using a statistical likelihood will have a systematic uncertainty - and this uncertainty will be enough to render the theory invalid.

 

But I'm not using statistics. I'm using the rules that govern the behaviour of prime products, which are predictable to infinity and back.

 

Maybe we're still slightly at cross-purposes.


I don't think so. They are not accurate enough, which is not quite the same thing.

I provided your predictions for a few pairs of consecutive primes - they were miles out. Your method will always end up with less than zero as the answer to the number of positions minus the number of predicted composites.

 

Of course. I know what needs to be done.

 

 

 

I don't get this. My predictions are inevitable. They are not statistical or probabilistic. They are just not very accurate yet.

But your predictions are incorrect - and they are not proven, merely asserted.

 

Not at all. As p grows larger the fuzziness matters less and less.

Again, you would have to prove that - not intuit and assert it. And I am not sure that you are correct. If you take 2 x Range / (6*7), your idea will, for large enough numbers, predict the correct number of multiples of 7 in the range. But around a third or so of them are also multiples of 5 and thus need to be ignored. I think that the percentage that must be ignored remains the same or even grows as the range increases. Your idea of 2 x Range / (6*prime) must and will predict the correct number of multiples in the range - BUT BUT BUT - it does not take account of doubling up.

 

For example, 1002 +/- 1 -> 1001 (143*7, or 91*11, or 77*13) or 1003 (59*17) - so you count 4 multiples, but you only need one to rule out 1002 +/- 1. So you massively overestimate the number of sites with composites - such that your answer is very negative and of no use.
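This overcounting can be made concrete by comparing the raw number of (prime, site) hits with the number of distinct sites actually ruled out (a sketch of my own; the range is the R for 31 and 37, my choice of example):

```python
def hits_vs_sites(lo, hi, primes):
    """Raw multiple count vs distinct 6n +/- 1 sites ruled out."""
    raw = sites = 0
    for n6 in range(lo, hi, 6):       # walk the sites 6n in the range
        divs = [p for p in primes
                if (n6 - 1) % p == 0 or (n6 + 1) % p == 0]
        raw += len(divs)              # what summing 2R/(6p) effectively counts
        sites += bool(divs)           # one hit suffices to rule the site out
    return raw, sites

primes = [5, 7, 11, 13, 17, 19, 23, 29, 31]
raw, sites = hits_vs_sites(966, 1369, primes)
print(raw, sites, raw - sites)        # the surplus is the overestimate
```

The site 1002 alone contributes four to `raw` but only one to `sites`, which is exactly the point being made above.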

 

 

Yes. An overestimate of twins would be useless.

 

 

 

But I'm not using statistics. I'm using the rules that govern the behaviour of prime products, which are predictable to infinity and back.

But the rules that you have made do not work - and to make them work you need to use a number density / statistical approach, and that is not extendible.

 

 

 

 

Maybe we're still slightly at cross-purposes.

 

Which bit of the two examples in the post above did I get wrong? The rules are simple enough once I realised what you meant and were looking at - but they are wrong; they trivially predict all prime composites in the positions 6n+/-1, and this answer doesn't determine the number of twin primes.


I don't think so. They are not accurate enough, which is not quite the same thing.

Counting primes is neither horseshoes nor hand grenades. There really aren't any points for 'close' when doing proofs like this. Look how long it took for the proof of Fermat's Last Theorem. There were many, many attempts that were 'close' but failed in some small way. When doing a proof, it needs to be ironclad, not 'close'.


Yes, of course a proof has to be ironclad. I've not suggested otherwise.

I provided your predictions for a few pairs of consecutive primes - they were miles out. Your method will always end up with less than zero as the answer to the number of positions minus the number of predicted composites.



As it stands, yes. Why would it be impossible to make it work?

But your predictions are incorrect - and they are not proven, merely asserted.



The original calculation was wildly incorrect. My prediction as to where the relevant products of primes occur is solid as a rock.

Again you would have to prove that - not intuit and assert it. And I am not sure that you are correct. If you take 2 x Range / (6*7), your idea will, for a large enough range, predict the correct number of multiples of 7 in the range. But around a third or so of them are also multiples of 5 and thus need to be ignored, and I think that the percentage that must be ignored remains constant or even grows as the range increases. Your idea of 2 x Range / (6*prime) must and will predict the correct number of multiples in the range - BUT BUT BUT - it does not take account of doubling up.



It may yet be impossible for some reason to do what I'm suggesting. Certainly the calculation you mention doesn't work. But you've given me no reason to believe no calculation could ever work if it is made sufficiently sophisticated. The relevant products of 7 will occur as stated, and the joint products with 5 will occur twice in every 210 numbers. It is always possible to account for doubling with smaller primes, albeit that for a specific range there will be an error term for reasons we've discussed.

For example 1002 +/-1 -> 1001 (143*7 or 91*11 or 77*13) or 1003 (59*17) - so you count 4 multiples - but you only need one to rule out 1002 +/-1. So you massively overestimate the number of sites with composites - such that your answer is very negative and no use.


Yes. This is why I'm exploring how to improve it.



But the rules that you have made do not work - and to make them work you need to use a number density / statistical approach, and that is not extendible



I can't see how a statistical approach could ever work.

What might work, it seems to me, is a study of the behaviour of the products of primes. This is entirely predictable.

Suppose we draw an empty number line and mark a zero point. Then we place a circle with a 6" circumference on the line, touching it at zero, and mark the circle where it meets the line. Roll the circle back 1" and mark the circle where it meets the line, then forward to do the same. Now we have a circle marked off at zero, +1 and -1 (or at 4, 6 & 8 o'clock). If we roll the circle up the line, then each time the centre mark is on the line the two other marks will identify 6n+/-1.

Now we have identified all the 'relevant' numbers. All we would need to do to create the twin prime sequence is to scale this circle for each prime. So to identify the products of 5 we would increase the circumference of the circle to 30 and roll it up the line, marking off 30n+/-5. These are the only products of 5 that have any bearing on the twin primes. For relevant products of 7 the circle needs a circumference of 42. Etc.

For joint products we can do the same. For 5,7 the circumference would be 210. For 5, 7 & 11 it would be 2310. Etc etc.
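The circle construction can be checked directly. A sketch (mine, not from the thread), assuming the natural reading that the marks for a prime p fall at 6pk +/- p, with k = 0 allowed so that p itself is marked:

```python
# For m coprime to 6, the multiples of m that land on a 6n +/- 1 site are
# exactly the marks 6*m*k +/- m of a circle of circumference 6m; joint
# products of 5 and 7 use circumference 6*5*7 = 210.

def relevant(m, limit):
    """Multiples of m below limit that sit on a 6n +/- 1 position."""
    return {x for x in range(m, limit, m) if x % 6 in (1, 5)}

LIMIT = 2000
for p in (5, 7, 11):
    marks = set()
    for k in range(LIMIT // (6 * p) + 2):
        for s in (p, -p):
            x = 6 * p * k + s
            if 0 < x < LIMIT:
                marks.add(x)
    assert marks == relevant(p, LIMIT)  # circle marks = relevant multiples

# joint products of 5 and 7 (circumference 210) are exactly the relevant
# multiples of 35
assert relevant(5, LIMIT) & relevant(7, LIMIT) == relevant(35, LIMIT)
print(sorted(relevant(35, 300)))
```

The joint-product check is the 210 circle from the post: below 300 the marks are 35, 175 and 245.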

I'm not suggesting that this is a particularly useful thing to do, but I am suggesting that this mechanical and predictable behaviour can be measured, allowing us to make predictions about the density of twin primes.

This is because for any finite qty of prime factors the density of primes relative to these prime factors can be calculated, and if the correct range is chosen the prediction will be accurate. If the density is high enough then there will be twin primes in the range.

The key point is that this is not a one-off calculation. For any finite qty of prime factors the density of primes relative to them can be predicted to infinity. The products form a combination wave that repeats endlessly with a fixed wavelength. So where R is carefully chosen we can predict quite a lot.

I may have confused the issues by mentioning the idea of simply counting twin primes in R from a table of primes. This is not what I'm suggesting. But I find it interesting that whatever the qty of twin primes in R turns out to be, it will not be less for the next R, and the next p will only add one product in R that is relevant. This and similar thoughts seem to offer a way in to the problem.

Edited by PeterJ


Yes, of course a proof has to be ironclad. I've not suggested otherwise.

As it stands yes. Why would it be impossible to make it work?

I think so, yes. This is the fundamental buggeration factor that exists when dealing with primes - what happened between p1 and p2 is likely to happen between p17 and p18, but proving it has turned out to be impossible so far.

 

The original calculation was wildly incorrect. My prediction as to where the relevant products of primes occur is solid as a rock. .

It's an observation that is trivially simple - not a prediction. This is what I keep getting at.

1. All numbers are primes or multiples of primes

2. the multiples of any (prime or composite) number will occur with a frequency of 1/that number.

3. the density of multiples of that number that land on a site of the form 6n+1 (or 6n-1) is thus 1/(that number *6) - or zero

4. you are looking at the two sites either side of 6n, so the final answer is 2/(that number *6)
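Rule 4 can be compared with an exact count. A quick sketch (mine, not from the thread) of the check:

```python
# Rule 4 predicts that multiples of a prime q > 3 landing on 6n +/- 1 sites
# have overall density 2/(6q) among the integers. Compare with an exact count.

def count_hits(q, N):
    """Exact number of multiples of q up to N at 6n +/- 1 positions."""
    return sum(1 for m in range(q, N + 1, q) if m % 6 in (1, 5))

N = 1_000_000
for q in (5, 7, 11, 13):
    predicted = 2 * N / (6 * q)
    actual = count_hits(q, N)
    # exactly two hits fall in each period of 6q, so the error is at most 2
    assert abs(actual - predicted) <= 2
    print(q, actual, round(predicted, 1))
```

Since exactly two multiples of q land on sites in each period of 6q, the estimate 2N/(6q) is off by at most a couple of hits for any N.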

 

 

It may yet be impossible for some reason to do what I'm suggesting. Certainly the calculation you mention doesn't work. But you've given me no reason to believe no calculation could ever work if it is made sufficiently sophisticated. The relevant products of 7 will occur as stated, and the joint products with 5 will occur twice in every 210 numbers.

 

How exactly do you intend to remove the products of 7 that are also products of 5 - some are products of both 7 and 5 at the same time, whilst others are products of 5 and x or 7 and y.

 

It is always possible to account for doubling with smaller primes, albeit that for a specific range there will be an error term for reasons we've discussed.

Why is it possible?

I can't see how a statistical.approach could ever work.

If you can show that your maximum possible error still implies your theory then that is normally acceptable

 

 

 

What might work, it seems to me, is a study of the behaviour of the products.of primes. This is entirely predictable.

 

But they are not entirely predictable! They are frustratingly unpredictable - given p1 we cannot predict p2. We can identify sites that are likely to be primes through prediction (Mersenne) - but some are prime and others are not.

 

 

Suppose we draw an empty number line and mark a zero point. Then we place a circle with a 6" circumference on the line, touching it at zero, and mark the circle where it meets the line. Roll the circle back 1" and mark the circle where it meets the line, then forward to do the same. Now we have a circle marked off at zero, +1 and -1 (or at 4, 6 & 8 o'clock). If we roll the circle up the line, then each time the centre mark is on the line the two other marks will identify 6n+/-1.

 

This we can do - mathematically it is the two series 6n+1 and 6n-1

 

 

Now we have identified all the 'relevant' numbers. All we would need to do to create the twin prime sequence is to scale this circle for each prime. So to identify the products of 5 we would increase the circumference of the circle to 30 and roll it up the line, marking off 30n+/-5. These are the only products of 5 that have any bearing on the twin primes. For relevant products of 7 the circle needs a circumference of 42. Etc.

 

Again - this is trivial.

 

 

 

For joint products we can do the same. For 5,7 the circumference would be 210. For 5, 7 & 11 it would be 2310. Etc etc.

 

Wrong.

 

6n = 174 6n+1 is 175 = 5*5*7

6n = 204 6n+1 is 205 = 41*5, 6n-1 is 203 = 29*7

6n = 216 6n+1 is 217 = 31*7, 6n-1 is 215 = 43*5

 

These are completely different. The first is cyclical - but the number of cycles grows massively. The second two depend on the distribution of primes, and this is neither predictable nor cyclical.

 

 

I'm not suggesting that this is a particulalrly useful thing to do, but I am suggesting that this mechanical and predictable behaviour can be measured and allow us to make predictions about the density of twin primes.

 

But as I have been saying from the beginning - primes are neither mechanical nor predictable. You are moving from the agreed (all numbers are primes or multiples of primes AND multiples are cyclical) to the unproven (primes are cyclical).

 

 

This is because for any finite qty of prime factors the density of primes relative to these prime factors can be calculated, and if the correct range is chosen the prediction will be accurate. If the density is high enough then there will be twin primes in the range,

 

Two things. How can they be calculated? - your attempts are failing so far, and your ideas for extending them are just wrong. And even if you do come up with a density, it must either be exact (not very accurate, but exact) or you need to prove that even at maximal inaccuracy it never produces a twin prime where there shouldn't be one.

 

The key point is that this is not a one off calculation. For any finite qty of prime factors the density of primes relative to them can be predicted to infinity, The products form a combination wave that repeats endlessly with a fixed wavelength. So where R is carefully chosen we can predict quite a lot.

 

I may have confused the issues by mentioning the idea of simply counting twin primes in R from a table of primes. This is not what I'm suggesting. But I find it interesting that whatever the qty of twin primes in R turns out to be, it will not be less for the next R, and the next p will only add one product in R that is relevant. This and similar thoughts seem to offer a way in to the problem.

 

"it will not be less for the next R" this is demonstrably false.

 

1 1 2 2 4 2 7 2 4 8 2 11 7 3 11 13 13 5

 

This is the number of twin primes between the squares of consecutive primes - the trend is certainly upward, but "it will not be less for the next R" is clearly refuted.
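The quoted sequence can be recomputed. A sketch (mine, not from the thread); the endpoint convention may not match imatfaal's exactly - his list appears to include one earlier interval - but the non-monotone behaviour shows up either way:

```python
# Count the twin-prime pairs (p, p+2) lying strictly between the squares of
# consecutive primes, to check that the counts are not monotone.

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

primes = [n for n in range(2, 70) if is_prime(n)]
counts = []
for a, b in zip(primes, primes[1:]):
    lo, hi = a * a, b * b
    counts.append(sum(1 for p in range(lo, hi)
                      if p + 2 < hi and is_prime(p) and is_prime(p + 2)))
print(counts)  # with this convention, starts [1, 2, 2, 4, 2, 7, ...]
```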


I think so, yes. This is the fundamental buggeration factor that exists when dealing with primes - what happened between p1 and p2 is likely to happen between p17 and p18, but proving it has turned out to be impossible so far.

 

Okay. There are plenty of things that are not predictable about the primes and plenty that are. As you say, there are many buggeration factors. But I'm not looking for an algorithm to predict primes. I'm looking for a calculation, however complex it may be, to predict products of primes.

 

It's an observation that is trivially simple - not a prediction. This is what I keep getting at.

1. All numbers are primes or multiples of primes

2. the multiples of any (prime or composite) number will occur with a frequency of 1/that number.

3. the density of multiples of that number that land on a site of the form 6n+1 (or 6n-1) is thus 1/(that number *6) - or zero

4. you are looking at the two sites either side of 6n, so the final answer is 2/(that number *6)

 

Sorry - I can't understand point 3 here. It seems a strange calculation. But yes, to say that (6n x 5 x 7)+/-35 gives all the joint products of 5 & 7 that occur at 6n+/-1 is a trivial observation. Still, it does look like a prediction to me.

 

 

How exactly do you intend to remove the products of 7 that are also products of five - some are products of both 7 and 5 at same time whilst others are products of 5 and x or 7 and y

 

If we count the products of primes starting with 5, then when we count for 7 we just need to remove the previous products of 5, of which there will be 2 in every 210 numbers. It does not matter about any larger prime factors; they will be dealt with in their turn. So for joint products of 11 and any smaller primes we would need to deduct 2 in (6x11x5) + 2 in (6x11x7). This exhausts the relevant products of 5, 7 & 11. For my R this calc will be approximate, because the end points of R do not line up with the combination wave of the products. For the range from 6n x 5*7*11 to 6(n+1) x 5*7*11, however, it will always be exact for any n. (This has to be adjusted where n=0.)
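The successive-removal bookkeeping being described is inclusion-exclusion, and over one full period of the combination wave it is exact. A sketch (mine, not from the thread) for the primes 5, 7 and 11:

```python
# Over one full period of length 6*5*7*11 = 2310, count the 6n +/- 1 sites
# divisible by at least one of 5, 7, 11 two ways: directly, and by
# inclusion-exclusion with 2 hits per period 6p, 6pq, 6pqr.

from itertools import combinations

PRIMES = (5, 7, 11)
PERIOD = 6 * 5 * 7 * 11  # 2310

sites = [m for m in range(PERIOD) if m % 6 in (1, 5)]

# direct count: one per struck site
direct = sum(1 for m in sites if any(m % p == 0 for p in PRIMES))

# inclusion-exclusion over subsets of {5, 7, 11}
ie = 0
for r in range(1, len(PRIMES) + 1):
    for combo in combinations(PRIMES, r):
        prod = 1
        for p in combo:
            prod *= p
        ie += (-1) ** (r + 1) * 2 * (PERIOD // (6 * prod))

print(direct, ie)
```

Both counts agree (290 of the 770 sites in the period are struck), which is the sense in which the calculation is exact over a whole wavelength and only approximate over an arbitrary R.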

 

 

If you can show that your maximum possible error still implies your theory then that is normally acceptable

 

Good. This is what I was banking on. This is an important point dealt with.

 

 

But they are not entirely predictable! They are frustratingly unpredictable - given p1 we cannot predict p2. We can identify sites that are likely to be primes through prediction (mersenne) - but some are prime and others are not.

 

But they are entirely predictable. I want to argue this point. Perhaps 'predictable' has a mathematical meaning I'm misusing. I'd say they are predictable because the products of primes are predictable. It is just that the calculation is difficult. When I say 'predictable' I just mean that we could write out the prime sequence without having to do any factorisation.

 

If we can say that the primes >3 occur only at 6n+/-1, is this not a prediction, albeit a very trivial one?

 

 

Wrong.

 

6n = 174 6n+1 is 175 = 5*5*7

6n = 204 6n+1 is 205 = 41*5, 6n-1 is 203 = 29*7

6n = 216 6n+1 is 217 = 31*7, 6n-1 is 215 = 43*5

 

These are completely different. The first is cyclical - but the number of cycles grows massively. The second two depend on the distribution of primes, and this is neither predictable nor cyclical.

 

175 = 210-35. Predictable.

 

205 = (6nx5x41)+/-205. Predictable. (It is irrelevant to anything that 41 is a divisor of 205 unless we are counting products of 41. When we are, we would want to deduct the previous products of 5. These occur at (6nx5x41)+/-205. Here 'n' can be zero, which produces the number 205. Note that the variable 'n' means the calc is infinitely repeatable.)

 

217 = (6nx31x7)+/-217, with n = 0. Predictable.
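The pattern being used here - that the relevant multiples of any m coprime to 6 fall at (6n x m) +/- m, with n allowed to be zero - holds in general and can be checked. A sketch (mine, not from the thread):

```python
# For any m with gcd(m, 6) = 1, the multiples of m that land on a
# 6k +/- 1 site are exactly 6*n*m +/- m for n = 0, 1, 2, ...

from math import gcd

def by_sites(m, limit):
    """Brute force: multiples of m below limit at 6k +/- 1 positions."""
    return {x for x in range(m, limit, m) if x % 6 in (1, 5)}

def by_pattern(m, limit):
    """The same set generated as 6*n*m +/- m (n = 0 gives m itself)."""
    out = set()
    for n in range(limit // (6 * m) + 2):
        for x in (6 * n * m + m, 6 * n * m - m):
            if 0 < x < limit:
                out.add(x)
    return out

for m in (5, 7, 5 * 41, 31 * 7):
    assert gcd(m, 6) == 1
    assert by_sites(m, 20000) == by_pattern(m, 20000)

print(205 in by_pattern(5 * 41, 20000), 217 in by_pattern(31 * 7, 20000))
```

Both 205 = 5 x 41 and 217 = 7 x 31 come out as the n = 0 case of their own patterns, matching the examples above.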

 

(Text box failure)

 

Imf- "But as I have been saying from the beginning - primes are neither mechanical nor predictable. You are moving from the agreed (all numbers are primes or multiples of primes AND multiples are cyclical) to the unproven (primes are cyclical)"

 

PJ - Primes are clearly not cyclical. But any finite qty of primes will produce a combination wave of products which is precisely predictable and which repeats forever on its wavelength.

 

Imf - "it will not be less for the next R" this is demonstrably false.

 

1 1 2 2 4 2 7 2 4 8 2 11 7 3 11 13 13 5

 

This is the number of twin primes between the squares of consecutive primes - the trend is certainly upward, but "it will not be less for the next R" is clearly refuted.

 

PJ - Apologies. I meant to say 'on average'. For a start, sometimes next R is a lot smaller than previous R.

 

The first R that I'm concerned with yields 4, then 2, 7 etc as per your list. The trend is bound to be upwards but yes, it can be smaller or larger in any instance. As p grows larger the difference between previous/next R will become ever larger and the trend will become ever more obvious. It is possible, I think, that there may come a point where from then on the next R will always contain more twins than previous R, but that's a guess.

 

This list suggests that I have correctly seen the mechanism that causes this upward trend for twins in R. My original calculation was much too simple to show it, although it illustrates the idea, but despite your invaluable assistance I still cannot see why a decent mathematician wouldn't be able to make it accurate enough. Maybe I'm still missing something.

 

(By the way, please be wary if you read my posts from the email notifications. I tend to make mistakes and have to edit them out, and I may have two or three goes at it.)

 

 

Edited by PeterJ


 

Okay. There are plenty of things that are not predictable about the primes and plenty that are. As you say, there are many buggeration factors. But I'm not looking for an algorithm to predict primes. I'm looking for a calculation, however complex it may be, to predict products of primes.

I understand the goal. Although I am beginning to think that using this interface to quote messages may be more difficult than solving the most famous problem in maths.

 


 

If we count the products of primes starting with 5, then when we count for 7 we just need to remove previous products of 5, of which there will be 2 in every 210 numbers. It does not matter about any larger prime factors.

OK - I was visualising you removing all the factors of 7 that clashed with factors of five at one fell swoop; but not in successive stages. I will have to think


I understand the goal. Although I am beginning to think that using this interface to quote messages may be more difficult than solving the most famous problem in maths.

 

Yes!! Very annoying. In any case email discussions with strangers can be unexpectedly difficult. We probably could have sorted all this out in ten minutes over a pint.

 

But some of the crossed lines will be my fault. I'm still figuring out how to explain how I'm coming at this problem, and I keep leaving out things I should have mentioned.

 

So yes, successive stages, calculating products of p working from 5 up to p1. This needs to be done only once, and then as p increases we just add one new prime factor to the calc, while the previous result still applies as to density but has to be scaled for the change in the size of R.

 

Eg. Once we've calculated the density of products for the factors up to p1, then for next R the density is the same but over a different range, and there will be one more factor in R to take into account, which is p1 x p2. All other products of p1 are products of previous primes and already counted.

 

So if there are 10 primes in R, and if next R is larger than previous R, then there must be >10 primes in next R, less one to account for the new p1. There are some provisos to add to this, but as a general rule it seems correct.
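One way to check this machinery end to end: inside R = (p1^2, p2^2), every composite has a prime factor <= p1, so sieving the 6n +/- 1 sites by the primes from 5 to p1 leaves exactly the primes in R, and the surviving adjacent pairs are the twin primes. A sketch (mine, not from the thread) for p1 = 31, p2 = 37:

```python
# Inside R = (p1^2, p2^2) every composite has a prime factor below p2, hence
# <= p1; sieving the 6n +/- 1 sites by the primes 5..p1 therefore leaves
# exactly the primes in R, and surviving adjacent pairs are twin primes.

def is_prime(n):
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

p1, p2 = 31, 37
small = [p for p in range(5, p1 + 1) if is_prime(p)]

survivors = [m for m in range(p1 * p1 + 1, p2 * p2)
             if m % 6 in (1, 5) and all(m % p for p in small)]

assert all(is_prime(m) for m in survivors)  # the sieve leaves only primes

twins = [(m, m + 2) for m in survivors if m + 2 in survivors]
print(twins)
```

The twin pairs this produces for (961, 1369) include (1019, 1021) and (1061, 1063); the density question is whether enough sites always survive, which is what remains unproven.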

 

 

 

 

 


 

 

 

 

 



Hey Imatfaal - have you given up in despair? I'd like to get to the point where I can be sure my approach doesn't work, and I'm not quite there yet.


No - just too busy to give enough time to consider properly. I am of the diametrically opposite view, and feel I am not quite there in creating the convincing argument for you that you are wrong. :)


Fair enough. Strangely, I'd be very pleased if you could convince me you're right. Then I could forget the whole thing. But it niggles away, the idea that this might work. Thanks for clearing up a lot of my errors anyway.

