
TakenItSeriously

Senior Members
  • Posts

    511
  • Joined

  • Last visited

Everything posted by TakenItSeriously

  1. It's highly unlikely. There are too many ways that people can make mistakes when pursuing original work. It's not like making a common mistake in a math problem, where rigor and methodology funnel people's thinking into making common mistakes. There are far too many ways to go wrong when original thinking is required. In fact, it's quite the opposite: I've often duplicated some past discovery that has already been accepted. From my point of view, that's a validation of my work, because it was work I discovered independently, which is fine as a reinforcing experience. I don't get any credit for that work, of course, but getting credit has never been at the top of my goals. Preventing others from stealing credit for themselves, however, is another matter. Edit to add: Strange, I forgot to include the quote, but when I tried to edit it in, it wouldn't allow the edit and instead treated it like adding another post.
  2. Why would you ever truncate a 9? The proper practice would be to round it off to 10.
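To make the distinction concrete, here is a minimal Python sketch of truncating versus rounding a value that ends in 9 (the value 9.9 is just an illustrative choice, not from the original post):

```python
import math

# Truncation simply drops everything after the integer part, while
# rounding moves to the nearest integer, so a trailing 9 behaves very
# differently under each operation.
x = 9.9

truncated = math.trunc(x)  # drops the .9, leaving 9
rounded = round(x)         # rounds up to 10

print(truncated, rounded)  # 9 10
```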
  3. That's funny, because it should make you think he doesn't understand monkeys. No one here does, at least not enough to make any kind of meaningful predictions. First of all, it was never a scientific claim about monkeys. I'm sure it was just a dramatization of the absurdity of infinity. But popularized claims take on an assumption of fact because they've been repeated so many times. The fact is there is no real data on monkeys' typing ability, AFAIK. But even if you assume truly random key punches by the monkey, that's still not enough to make it a reasonable problem. Languages have inherent biases as well that change the problem; that's what makes codebreaking possible. But no one knows that kind of information for old English. So now we have to assume random vs. random, which amounts to just calculating the number of permutations for a long string of characters. But that's probably no longer within the spirit of the OP. There is a massive difference when saying anything about infinity, which should have no bearing on reality in finite time.
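As a rough sketch of the "random vs. random" calculation described above (assuming a uniform 26-letter alphabet, which is an idealization, since both real typing and real language are biased):

```python
# If each keystroke is uniform over a 26-letter alphabet, the chance
# that n consecutive keystrokes reproduce a given phrase is (1/26)**n,
# so the expected number of independent attempts is 26**n.

def expected_attempts(phrase: str, alphabet_size: int = 26) -> int:
    """Expected number of uniform random strings before one matches."""
    return alphabet_size ** len(phrase)

# Even a short phrase is astronomically unlikely to appear by chance:
print(expected_attempts("tobeornottobe"))  # 26**13, roughly 2.5e18
```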
  4. Well, this problem hasn't been solved yet, at least not to the degree of an efficient solution, since the million-dollar prize is still up for grabs. When I take on a new unsolved problem that has been considered this tough, I make it a point to know as little as possible about the methods that were attempted before. It only biases how I look at a problem with how someone else looked at it. Not that I'm knocking any previous work; it's just my way. Besides, for solutions I've discovered in the past, I've always found the keys to the solution in some unrelated field. For example, to me this is just a problem of routing a circuit in series as efficiently as possible, and that was my job for a very long time in high-speed digital design. I also took some hints from the concept of the path of least inductance, even though that involves loops of least loop area instead of loops of shortest path. It still provided me with some abstract sense to approach it from the outside in, though I still don't know why yet. I'm sure it'll come to me eventually. (Edit to add: it just came to me. It wasn't the path of least inductance, it was the transformation from the path of least resistance, i.e. shortest path, to the path of least inductance, i.e. smallest loop area, only in reverse. Part of the paradigm shift from DC to AC. It's actually kind of cool how that worked out.) Other examples: I solved primality following a proper analog of harmonic waves. For the original Twin Paradox solution, I used my experience with EM waves/fields in HSDD, and in that field time and space are always linked, which solved the crux of the problem for me. My solution for the balance paradox came from Einstein's light clock derivation, except it had an asymmetry to it that was like the asymmetry of the Twin Paradox.
I could go on, but suffice it to say, I can't think of a problem I've solved that hasn't had some basis in an unrelated field, and searching for proper analogs to problems I recognize is just an automatic habit for me, 24/7. It's like maintaining my own personal programmer's toolbox for problem solving.
  5. No, that's how long it took single-celled bacteria to produce Shakespeare. Not a fan of the Many Worlds Interpretation of Quantum Mechanics, I take it? BTW, MW would involve a finite number of universes, I'm pretty sure. There are a finite number of possible superposition states. For example, you can't use the gamma function to create a smooth curve in combinatorics solutions that could imply infinite outcomes. It would have to be a weighted average of distinct possible solutions for each fundamental particle. BTW, it seems like a brute-force problem for a really long password to me, which would be a really sick factorial problem that could never be solved. So a good question might be: which would be the dominating number, the factorial of characters in the numerator or the number of possible universes in the denominator?
  6. From Wikipedia: The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in operations research and theoretical computer science. An Original Solution: copyright: October 19, 2017 Author: Paul Ikeda Version: V3.0.0.0 Any thoughts or questions?
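For anyone who wants to experiment with the quoted definition, here is a minimal brute-force reference solver (the 4-city distance matrix is an invented example, not data from the post). It is exact but runs in O(n!) time, which is precisely why the problem is considered hard:

```python
from itertools import permutations

def shortest_tour(dist):
    """Exact TSP by exhaustive search over every possible tour."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    # Fix city 0 as the start to avoid counting rotations of one tour.
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Illustrative symmetric distance matrix for 4 cities:
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
print(shortest_tour(dist))  # minimum tour cost is 18
```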
  7. I know this question refers to another site that is similar in function to ScienceForums for disseminating information and understanding, though they follow a different format and cover a broader scope of topics. I imagine that many who post here also post as experts at the other site. I just now discovered that all links to threads on their server are timing out and wanted to confirm with others that their site is down, and ask if there is any news about this. Sorry if this is not an appropriate topic, but I think such an outage would qualify as news. Update: it seems to be running again after being down for 30-60 minutes in California. One strange thing was that I was able to reply to what I thought was a new question at the top of my "can you answer this" list, asking if Quora was down. I replied confirming that I was having timeout issues, thinking that perhaps only new questions were accessible, then noticed a reply to that question that was two years old!? That was when I noticed that all my links to past notices were working again. Strange.
  8. I believe the OP was referring to General Relativity, not Special Relativity. Special Relativity is a theory about relative inertial frames of constant velocity. Gravity can be treated as curved local space in GR, but it can also be modeled as local accelerated non-inertial frames in terms of the Equivalence Principle. In other words, SR is about relative constant velocity while gravity is about relative local acceleration.
  9. Wow, it seems that I was wrong about other posters' opinions of this guy, which I thought unfairly characterised his ideas based on this thread alone. I now assume others were characterising his posts based on his adolescent behavior in past threads. Too bad, really, since I thought many of his ideas were correct, but his arguments seemed to be nonsensical and baseless. My mistake. Regardless of any merit his ideas may have, his adolescent responses prohibit any attempt at serious discussion.
  10. Thanks for clarifying. I'm relieved to hear that the computer mathematics major later gave way to computer science, as opposed to being something new, since I just don't see a hybrid logic/math approach working to a very wide extent.
  11. I assume this is a case of cross-posting, but if you include my complete statement, I did say that the two are complementary, which implies that they are not incompatible. So yes, I agree, computer science does make liberal use of mathematics in various functions or algorithms, though it's all applied within a logical framework, so I still think that they should not be confused with each other. For instance: i = i + 1 is a trivial concept in computer science that does not make sense in mathematics.
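A minimal sketch of why `i = i + 1` is sensible in a program: in code it is an assignment (a state change over time), not an equation to be solved:

```python
# In a program, `i = i + 1` means: read the current value of i, add 1,
# and store the result back into i. Read as a mathematical equation,
# i = i + 1 has no solution; read as a state change, it is perfectly
# well defined.
i = 0
i = i + 1  # i is now 1
i = i + 1  # i is now 2
print(i)   # 2
```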
  12. While I'm unfamiliar with Computer Mathematics, I don't think it should be confused with Computer Science, which is based on logic, not math. While all math is founded on logical premises, generally speaking, logic and math are not the same thing. In fact, in general, I'd describe them as being pretty much opposite disciplines, though in a complementary way.
  13. Logically, zero does not have a "value" in the same sense that whole numbers have distinct "values". For example, Johnny may have 1 penny, or 9 pennies, or any whole number of pennies, but having zero pennies simply means that he does not have any pennies. In other words, having zero pennies is the logical negative of having some whole number of pennies; therefore the word "value" is being applied in two different contexts. It's the same concept as how prime numbers are defined in the negative form, as what they are not, which is the source of so much confusion as far as primes being deterministic or not. I.e., primes should be defined as numbers that are not composite numbers, because composite numbers are all 100% deterministic based on their prime factors. Prime numbers are therefore only deterministic as far as being the range of all numbers left over after determining all composite numbers within a range.
  14. Continuing on with the Prime Factor Harmonic Matrix. I have discovered many interesting facts and conjectures, but I'd like to begin with the definition.

    • Prime numbers are conventionally defined in the negative: they are natural numbers greater than 1 that are not divisible by any smaller whole number that is > 1.

    However, as I argued in elementary school, this is a negative definition, of what something is not, which makes it definitive but non-deterministic: you can't look at a large odd number and just say it's prime with much certainty.

    Therefore, in order to define it in the positive, we need to define composite numbers, not prime numbers.

    • Composite numbers are defined as natural numbers N greater than 1 that are evenly divisible by a smaller prime factor, such that:

    1 < SPF and SPF² ≤ N

    where SPF ≡ the Smallest Prime Factor of N.

    Finally, we can redefine prime numbers as:

    • Prime numbers are all natural numbers that are > 1 and are not composite numbers.

    Therefore we must find all composite numbers within a certain range in order to reveal all prime numbers within that same range.

    Finding composite numbers using this definition is how the Sieve of Eratosthenes works.

    The rows below mark the multiples of each prime across the numbers 1-25; the bottom row shows the smallest prime factor (SPF) of each composite, and the unmarked (xx) columns beyond 1 are the primes:

     n : 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
    ×5 : xx xx xx xx xx xx xx xx xx 05 xx xx xx xx 05 xx xx xx xx 05 xx xx xx xx 05
    ×3 : xx xx xx xx xx 03 xx xx 03 xx xx 03 xx xx 03 xx xx 03 xx xx 03 xx xx 03 xx
    ×2 : xx xx xx 02 xx 02 xx 02 xx 02 xx 02 xx 02 xx 02 xx 02 xx 02 xx 02 xx 02 xx
    SPF: xx xx xx 02 xx 02 xx 02 03 02 xx 02 xx 02 03 02 xx 02 xx 02 03 02 xx 02 05
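The marking procedure described above can be sketched as a standard Sieve of Eratosthenes (a minimal Python version; the limit of 30 is just an illustrative choice):

```python
# Cross out every multiple of each prime; whatever is never crossed
# out is prime. This is exactly the "find all composites to reveal
# the primes" definition given above.

def sieve(limit: int) -> list[int]:
    """Return all primes up to and including `limit`."""
    is_composite = [False] * (limit + 1)
    primes = []
    for n in range(2, limit + 1):
        if not is_composite[n]:
            primes.append(n)
            # Start marking at n*n: any smaller multiple of n was
            # already marked by a smaller prime factor.
            for m in range(n * n, limit + 1, n):
                is_composite[m] = True
    return primes

print(sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```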

    1. Strange

      https://en.wikipedia.org/wiki/Sieve_of_Eratosthenes

      Also, your two definitions of prime numbers are identical in meaning (extraneous words removed for clarity):

      "natural numbers greater than 1 that are not divisible by any whole number" == "natural numbers that are > 1 and are not a composite number"

       

  15. Ahh, I see, you mean as in a point in space where gravity from all bodies cancels out to zero. It seems possible, but it also seems impossible to measure, which is kind of like saying it's analogous to a Heisenberg state. Since gravity reaches us from the observable Universe, it would seem that position and momentum vectors would lose meaning in a local non-inertial frame if taken in the context of the observable-universe frame, just like they do at the quantum scale as we observe it in the context of the human domain. Therefore it would be impossible to say with any certainty. Edit to add: It's interesting to note, however, that this is similar to the problem of finding the primality of large numbers, where the complexity becomes too great to predict in any meaningful amount of time due to the scale of the domain of large numbers, which makes the universe look like the quantum scale, though probably not literally. However, I did find a way to find the primality of large numbers which should work in near real time once the data is compressed and saved. So who knows?
  16. If, by flat, you mean having no thickness, everything that falls into a black hole becomes flat on the surface of the event horizon and stays there for a long, long time due to time dilation. Since the EH is a sphere, I suppose you could say that there is an infinitesimal amount of curvature to the flatness. If you want an example of the least amount of curvature, IDK, how about an electron on the surface of the largest SMBH in the Universe.
  17. Questioning the rest mass of a photon is a moot point, since it's more like a hypothetical state that doesn't ever exist relative to any observer. The reason why is that all photons are always moving at the speed of light no matter how fast the observer moves, which is one of the two premises of SR. It's this premise that causes space and time to contract or dilate when viewed from a different inertial reference frame.
  18. Sorry about taking so long before replying, but since your question involved an opinion on your own project, I was compelled to finish my own solution before looking at any other solution. It's nothing personal or ego-driven; it's just my way. I have a strong belief that for solving known problems with standard solutions, fine, load up on the knowledge. However, when solving unsolved problems that have been tested countless times before without success, I feel the less I know about traditional methods or traditional thinking, the better. That's because traditional thinking is obviously not the proper solution for that particular problem, yet it could be a line of thinking that could bias my own line of thinking, which I have great faith in. I'm sure far better mathematicians have tried and failed, so why would I try the same math, which I'd be a novice at? Especially when my gift is in logic-based solutions. And since using logic for problem solving outside of computer science has been practically unheard of in the past few centuries, my guess is my success isn't all due to my own gifts but half due to a serious lack of others trying logic-based solutions. So I might be a big fish in a tiny puddle. Who knows? Also, I'm not a fan of focusing too hard on multiple lines of thinking at one time. I'm best when taking single lines of thinking through, one at a time, in a methodical way. Focusing on multiple alternatives is just a terrible way to find results, at least for me. Humans don't think in parallel as far as I know; if they do, then I'm more handicapped than I realized. So my method involves avoiding all external input like the plague until I've either solved the problem or run out of ideas. At that point I'm happy to review other methods. So now that I'm done with my excuses: I did have a chance to look at your challenge post. As far as solving primes, I was skeptical, because it never addressed any of the problems common to Large Number Theory.
However, I did find the idea that all points could take a deterministic yet random-like path to a single point intriguing. The only part I didn't like is the result being 1, which seems like a solution looking for a problem that doesn't exist, or at least nothing that sounds familiar to me. If that could be modified somehow so that all paths deterministically lead to a string of 1's, as in Mersenne primes, then you may really be onto something. My ultimate goal was always to find an unbounded method for data compression. Previously I was convinced it was connected to Mersenne primes. More recently, however, the current solution has me looking harder at any prime, considering I have a built-in hyper-efficient indexing system for all primality data, and transforming to the closest prime number has to be a lot easier (smaller key) than transforming to the nearest MPN. So I'm now split between MPNs or primes in general for data transformation. The idea is simple: a big information string plus a small key transforms to either a big prime or a big MP, either of which can be accessed from any point in spacetime using my Prime Factor Harmonic Matrix. All that's required is transmitting the small key. The practical applications are huge and future applications are revolutionary. So if you did solve that universal deterministic path to Mersenne primes, then that would answer my question and we should definitely talk, as we could each have 1/2 of a really huge deal. Or I could just be wrong. Risk/reward seems pretty heavy on the reward side and light on the risk side if you ask me.
  19. Post deleted by author as it kept drifting too far off topic.
  20. After reading the abstract of the paper you linked to, I can see that there are hidden assumptions that the reader knows what the fundamental contexts are, so I need to verify that we are on the same page before going further. Are we talking about deriving probability distributions as a function? From which frame of reference are they correlated? A single tester's PoV, or a third-person PoV of both testers? Taking the third-person PoV is a very dangerous exercise. I think there are some pitfalls when trying to do this for certain kinds of functions from a third-person PoV, because discrete values get to be very confusing since they can't always be treated as either dependent or independent values. In fact, you could say that is the reason we should get the 25% value when both testers measure the same spin angle, because the results, in that case, must be completely dependent upon each other. In the other two cases, the results are partially dependent on each other. However, the strange part is that this is also how classical systems work. It's why probability theory could never be completely definitive and was considered to be impossibly complex. I assume it's because they could never make sense of the results, but also because it required infinite recursion to get a definitive result without cheating by not calculating them in real time. Besides, computers didn't even exist back then, although I suppose you'd need a quantum computer to fully realize real-time results.
  21. I never pointed out a problem in this thread; I was only asking if there were anomalous test results, particularly with regards to the percentages between testers when both testers measured the same spin orientation being 25% instead of the intuitive 33% that many would have expected. I asked because that is the non-intuitive result that I predicted based on a logical conclusion of my own, and I have no access to any of the results myself since I'm not a physicist or student, so I can't see if my predictions had any validity to them. But I never did receive an answer to that question, other than an insult and replies that kept changing the subject. I think that many posted here with a preconceived notion of some kind of agenda on my part, probably based on a previous post I made about a problem with invalid premises of expected results in Bell's Inequalities, which I revealed in a thread in the Speculations forum, and which I still stand by. I should also point out that even in the other thread I never had any problems with the QM predictions that were made, nor did I ever intimate that I had any disagreement with QM predictions on local realism, other than that conclusions should not be based on Bell's Inequalities. If you're asking my personal opinion on whether there is some kind of instantaneous interaction between the two testers, I would say that there does seem to be evidence of such interaction, if the widely reported generalizations of results I've seen are correct about how test results change based on the order and choices of the two testers. I hope that clarifies things. BTW, I don't see that as being inconsistent with information not exceeding the SoL if using the right model to explain it, but that's another topic.
  22. This is where my thinking tends to become somewhat controversial. I personally believe that collaborative efforts between specialized individuals with different strengths would far exceed the sum of three individuals who each try to be expert in all roles. In fact, I believe it would improve performance exponentially given the right combination of specializations. For physics, I believe it should include a core of three individuals, each with a different specialty, aside from whatever individuals commonly make up a research team: one specialized in knowledge-based skills, one specialized in math-based skills, and one specialized in logic-based skills. Each individual would focus more on contributing based on their individual strengths, but in close collaboration with each other, they could each benefit by learning the skills of the other two far more effectively through one-on-one or three-way interaction.
  23. Before looking up your link, I will try to answer your first question. It's because math and logic may solve the same problem, but to do so they always need to answer different questions. For example, logic may answer the question of how, while math may answer how much. Math tends to quantify a problem while logic tends to clarify a problem. This is why the logical model is always the model that explains the mechanism behind how something works, or describes the chain of cause and effect, which math cannot do. However, math is the model that provides the definitive proofs, while logic can only provide definitive invalidation. Also, math may be validated through experimental evidence whereas logic usually cannot, which is usually because logic tends to quantify in relative terms such as greater than or less than, such as how Bell's Inequalities work. Also, because they are derived through different means, a mathematical solution and a logical solution may cross-validate each other. This is why math and logic go hand in hand and act as complementary pairs: they each do completely different jobs where one cannot replace the other. Beyond this, logic may go further than math in many respects. So math can be thought of as thinking inside the box while logic is thinking outside the box. For example, all original solutions are found through logic, not math. New math cannot be derived from old math; algebra could not be derived from arithmetic. All forms of math were originally derived from some logical premise, such as using diminishing rectangles to define the area under a curve, which is how calculus was derived. Math may only solve problems in a forward fashion, which is much like deductive logic, with each being definitive in its own way. However, logic may also solve problems that math cannot, such as solving problems abductively or inductively, like solving a problem backwards the way a detective solves a murder after the fact.
Or solving black-box problems, such as reverse-engineering a semiconductor chip, solving a theory across unobservable domains, or solving how a hidden subconscious mechanism works. These are not necessarily definitive, but they end up providing different aspects of a problem that turn out to be the same solutions from different perspectives. For example, in high-speed digital electronics, you can think of the return path for a high-speed digital signal over a ground reference plane as taking the path of least impedance, or taking the path of lowest loop inductance, or following the path of greatest capacitance. If you understand each mechanism independently, they each sound like completely different mechanisms. However, they are all self-consistent with each other, and in reality are all based on the same relationships, from loop to field to wave to induced charged-particle oscillations which in turn induce fields, only from different perspectives in electromagnetic field theory. The same thing is true for logical models of relativistic effects, where length contraction is relativistic blueshift in front of a moving ship while time dilation is relativistic redshift behind a moving ship. All relativistic properties can be broken down in the same way, as looking forwards or backwards from a moving frame, or looking at the moving frame from in front or from behind. I assume the same was true when converging five different string theories into a single M-theory. One final difference is that with math, the deeper one goes, the more complex it becomes. With logic, the deeper one goes, the simpler a problem becomes. This is why math is important at the beginning of a theory, where answering each question seemed to create even more questions through divergence and brought up all of the necessary questions that needed to be answered.
To finish a theory, or to achieve convergence by finally answering all of the outstanding questions in a self-consistent manner and providing a single self-consistent model of everything, logic begins to dominate over math. For a theory that includes untestable domains, which more scientists are beginning to believe will be required for a consistent TOE, inductive/abductive logic is required in an end scenario, because hidden domains require black-box solutions, while loops may require reverse or backwards-looking logic, and math is no longer testable in hidden domains. Apparently a TOE can be solved, but not proven, using math alone with string theory, but then we have no model of understanding, so string theory only benefits string theory. Applied science through engineering needs logical models to understand, design, and innovate around. Logic shows how everything is obvious, or at least relatively simple, in hindsight, while nothing is obvious or simple in foresight. It's why all of my solutions to unsolved problems are really quite simple, or sometimes even seem like common sense, yet they went undiscovered for hundreds or thousands of years before I solved them. However, this does not necessarily mean they were easy logic problems to solve in most cases, and some problems took years or even decades for me to find a simple solution. Often the time it took depended more on what problems I had solved before, as opposed to the complexity of the problem itself. I could explain it further, but that's another long topic. Both Einstein and Hawking believed that a TOE would be a beautiful, simple model of everything that anyone could understand.
  24. I prefer to refrain from commenting on the math behind QM, as I don't think I could be entirely objective in any critique based on what I've seen. Consider it my waning attempt to refrain from being indelicate, which is beginning to reach its limits. To that end, I will only add that mathematicians should refrain from using logic-based arguments, IMO. Math and logic are opposites in terms of the kind of evidence that they each provide, and Bell's Inequalities is a logic-based theory, not a mathematically based theory.
  25. No, I wasn't thinking of anything when I clicked on the link that Rob had provided. If the link was to an incorrect reference, then I have no control over that. As for the idea that "any two statistical averages can be correlated": it's a radical point of view which requires some extreme justification, as I'm sure you're aware. Can statistics be intentionally obfuscated in order to misinterpret data? Yes, absolutely; political parties and/or big business do this all the time under the guise of mathematical evidence. Nevertheless, that doesn't mean that statistics cannot be validly applied to provide meaningful results. It only proves that math can be obfuscated to provide any kind of results that unqualified people are susceptible to believing in. It's the kind of false result which would become immediately transparent under properly applied logic.