Everything posted by RuthlessOptimism

  1. You basically did not understand anything that I wrote. The point is not the complete absence of expertise; it is that a "box" can be thought of as any body of knowledge. Archaeologists, for example, are experts in their field, but they are probably not capable of building satellites by themselves. That requires knowledge outside of their "box", from someone else's. It is obviously far less likely that archaeological discoveries (once again, only one specific example) made as a result of satellite photos would ever have been made if satellites had not been invented. These discoveries required "thinking outside of the box of archaeology".
  2. There are many examples of this occurring in science and engineering. 1.) Bombardier beetle: this beetle's defence mechanism of shooting toxic superheated liquid at predators is inspiring new jet engine designs for re-lighting engines at high altitude; by studying things within the field of biology, aeronautics engineers could potentially solve problems in ways they might never have imagined. 2.) Physics of erections: this one is definitely weird, but it is another example similar to the first, working in the opposite direction. Biologists long thought that erections work simply through blood vessels inflating with blood, but that is not the whole story. If you know a little bit about materials engineering and mechanics of materials, you know that if you cross-hatch a fabric it becomes stronger, whereas if you have two layers of fabric with a parallel weft (or is it weave? I forget) the resulting composite is not very strong; it's bendy and flexible. This is basically what happens when any animal that has a penis gets an erection: there are two layers of fibrous tissue whose "grain" at first runs parallel, and when the penis goes erect these grains shift to become antiparallel, giving the tissue more strength and making it rigid. An example of engineers providing insight to biologists. 3.) The work of Srinivasa Ramanujan Iyengar. 4.) Wheelbarrows are still a good example. 5.) Ancient myths and their relevance to archaeology: long dismissed by archaeologists as complete hearsay, up until not many decades ago; geologic and archaeologic evidence mounts in support of at least some version of the myths of Atlantis (Santorini is a good candidate) and El Dorado actually being true. It is probable that many ancient myths contain some amount of truth in them. 6.) Related to 5.), the Hanging Gardens of Babylon: likely found by an American spy satellite (while they were relatively new pieces of equipment) in southern Iran, as opposed to Babylon, which is something like 300 miles away; the Greeks often confused these two places (weird but true). These are just a few examples from history that show that it absolutely is possible for people with no knowledge of a field to significantly contribute to it. In fact it happens all of the time, and it will continue to happen with greater frequency the larger the knowledge base we acquire, because the minimum level of training and specialization needed to be deemed "competent" or an "expert" in a field will continue to grow. There is only so much time in the day to work with, and room in your brain for information. And yes, absolutely there are situations where an expert would never think of some possible solutions because they require prerequisite knowledge from other fields that the expert has little to no knowledge of; in some cases they might not even know that something is a field, that people study such things. Sometimes solving a problem, if you have the right experience or equipment or both, is as simple as the right person knowing that it exists in the first place. If you read my original post I specifically said an expert in a field that is not obviously related; I did not say someone who knew nothing at all. Thinking outside the box is necessary. Ideas, societies, and people evolve just as much as organisms do, in the same way and for the same reasons.
There is a constant struggle to maintain and keep what has previously worked in the past, because it worked and made you or an idea fit for the environment. But environments are not static, they constantly change; if you do not instigate or allow for some minimum amount of change or growth over time, the organism will die. The key is to balance the uncertainty of new ideas, which may or may not improve things, against upholding the status quo, which in the long run will probably become less and less effective. No one has all of the answers to everything, but I think everyone probably has a small piece of them.
  3. I don't think this phrase has been misinterpreted or misapplied at all. I think it is definitely true that the more you learn about a certain subject or idea, you cannot help but become biased. Sometimes I think the best people to ask about different types of problems are people who are experts in fields that don't seem applicable to the problem at hand. They ask questions and think about the problem in ways that an expert in a "relevant" field never would, and this might provide the new insight or breath of fresh air necessary to arrive at a solution. Obviously this doesn't always work. But one humorous example already exists in human history, and that is the wheelbarrow. From what I've read the wheelbarrow did not exist in Europe until around 1300 AD. Why? There is absolutely no reason other than that no one had the bright idea to combine a lever and a wheel; people in this area at this time clearly had the "technology" to create wheelbarrows, but they never appear in art / pictures of farming or writing about farming until after this period. I am not sure (it would require some checking), but I don't think the Egyptians even had wheelbarrows, yet they built the great pyramids. People have estimated that wheelbarrows could have saved humans millions of man-hours of labor, and maybe even people's lives due to a decrease in injuries from the hard labor involved in farming with hand tools. I remember someone describing a person's brain as being like channels for water, or thoughts, to flow through, carved out by previous thoughts, knowledge and experience. The water or thoughts take the path of least resistance. When we are young it's like our brains are smooth sand, but as we get older they are like a tortuous network of canyons, and in this state it is unlikely for water to find a new path to travel down. A fairly esoteric description, but I think this is completely true.
  4. I have two jokes that I learned a while ago. 1.) "The number you have dialed is imaginary, please hang up, rotate your phone by 90 degrees and try again". 2.) "A mathematics professor finished his very long, complicated derivation with a flourish of the chalk on the chalkboard, saying: "and so this result is obvious". A student stood up in class and asked: "Is it obvious?". The professor opened his mouth to speak, then stopped; he began again and stopped again. He began to stroke his beard, eventually pacing back and forth across the front of the room muttering to himself, and eventually he left the room and came back with a coffee. When he had finished his coffee a twinkle entered his eye, he looked up with a smile on his face and triumphantly said: "yes"."
  5. Hello, an interesting few ideas occurred to me recently so I am posting them here. I will eventually get back to the idea(s) of the solar wind at some later date; I have to, as those idea(s) are related to other ideas at the end of this post. This post is basically a link to Imgur, which contains an album of pictures of text. I know this is an odd thing to do, I just don’t have time right now to re-format this, the equations especially, and this is simply more convenient for me (thank you if you read it). http://imgur.com/a/KOdtT
  6. Those are all good examples of how the system can be corrupted; I am not trying to make the point that it is currently not corrupted. What I am trying to say is that while our current system of money / economics is systemically flawed, it does in principle have at least one good characteristic: the fact that it is relatively easy to acquire capital to fund business ventures compared to other systems. EDIT: Furthermore, thinking about the other half of my post makes one wonder whether this is even a necessary characteristic for an economic / money system to have. There is some interesting data showing that few events in the world create economic growth as much as a transformative technology / technological revolution, and as I wrote before, laissez-faire economic controls often don't create those, or if they do it is usually by complete accident.
  7. In my opinion there is nothing inherently evil about money, even fiat currency. As other people have said, money was simply invented because it is convenient. There are many different concepts of “money” that would fulfill our needs for such a tool. Of course we’re probably all familiar with the idea that gold is the only thing whose value is fixed, but that is not necessarily true. Gold is not very useful for a whole lot; you can’t eat it, and you can’t make any useful tools out of it because it is too malleable. The only reason it can be seen as having a “secure” value is that there is a finite amount of it, and of course because people want it (for some reason?). Fiat currency actually has a genuine advantage over a currency that has a fixed amount, like gold. When you are looking for capital to embark on some new economic undertaking you could (though it might be somewhat unlikely) run into the very real problem that no one has any money that is free right now to lend or give to you. Your project comes to a dead stop. With a fiat currency or fractional reserve system you can basically never run into this problem, because every time a banking institution comes across an investment opportunity that they think will pay off / justify its existence, they can just create new money, poof, out of thin air. A problem with this system is that it obviously slowly devalues money in general. This devaluation is even worse when an investment completely fails. In general everyone ends up “paying” for failure. Exponential growth of investments in such a system is unfortunately required for the system to remain stable. If you constantly create new money, you have to make back all of that money plus extra or you are not outpacing the inflation (slow devaluation of all money) that is a fundamental property of this system. Neither system is perfect; it’s basically a situation of picking your preferred poison. I think the problem is that a lot of people don’t understand the way that trade, money and debt work, and there is a lot of intentional misinformation / people lying in wait to screw other people over. That would probably still be true in any economic system, but the fact that ours is already complex and difficult to control doesn’t help. I watched a talk at my university a little while ago about something related but not completely on topic. The talk was about what kind of role private investment and public support play in the adoption of new “transformative technologies”. A few examples of “transformative technologies”: atmospheric flight, electricity, computers, internal combustion engines, etc. Definition of “invention vectors”: research and development (prototyping engines, understanding physical laws regarding electricity / magnetism), building of infrastructure (cars need roads to drive on, they are pretty much useless without them), and mass production / commercialization (the transformative technology is turned into a product sold to consumers). What was really interesting was the results that these researchers (economists and sociologists) came up with. They identified around 12 (I think) transformative technologies since the dawn of human civilization, and the private sector (private investment from companies and private interest groups for profit) had a significant influence in all three invention vectors for only one of them. This transformative technology was electricity: the first commercial electrical grid around New York, from the hydro dams at Niagara Falls.
What is amazing, though, is that the private sector only played a significant role in at most one of the invention vectors of all other eleven transformative technologies. This is interesting because people always tout “free market economics”, or laissez-faire, as the ultimate method of economic control: that when people only do what is in their best interest, good things rise to the top. The actual data does not support this. Most transformative technologies only exist because they provided a strategic military advantage to whatever nation pursued them, and so they were pursued. Economic growth as a result of these transformative technologies eventually finding commercialized uses in society happened, most of the time, completely by accident. A very interesting and funny example actually comes from electricity. The first electricity grids were an entirely commercial venture in all invention vectors. Afterwards, though, the American government wanted to bring electricity to rural areas, basically as an act of charity to bring these areas into the 20th century, and so they did. What is funny is that once people on farms had electricity they began to realize that it is useful for a whole lot of things: automatic milking machines, electric machinery, etc. Productivity of farms increased dramatically, creating economic growth, but this was completely not the original intent of the expansion of the energy grids. So in my opinion not only is money not evil, it is also incompetent.
  8. I don't know about humans, but there are many species of animal that have fairly skewed sex ratios. For example some crocodiles can have as large a ratio as 10 female eggs to one male. People think this helps them grow their population: if you want to create a large population you don't really need a whole lot of males, as one male can fertilize many females. I think it could be difficult to figure out what the "natural" sex ratio of humans is, absent any cultural or political influence. There are of course all kinds of interesting books and documentaries on sexual inequality in modern times and throughout history. I highly recommend "The Sexual Paradox" by Susan Pinker. She is a developmental psychologist who studies how people acquire / express gender and how different genders are viewed / treated in society. Also worth reading about are the "weregilds". It's been so long since I read about that, but there has been some interesting work done regarding it. The weregilds were a set of laws in early Germanic peoples' culture that basically laid out different fines for doing things like stealing, murdering someone, etc. What is interesting is that there are a lot of different sources from different time periods regarding what weregilds were, so you can see these laws change over time. Also there were different fines assigned to the murder or capture of different people; the fine was not the same if you killed someone's son as opposed to their daughter. There is no politically correct way to say this, but you can basically see the "value" of men and women within their society change throughout history, inferred as a result of different changes and outside influences; for example, the onset of industrialization made the fines for women drop dramatically. You can also see a reversal in how dowries worked at about the same time.
  9. Sorry, I don't really know the answer to that question. In your original question you asked about "interference", which implies that something is interfering with the signal quality of cellphones, radios, etc. In that specific case, as long as the electronics are shielded well (swansont described one way people do that) they will operate just fine; you will simply not be able to communicate or send / receive electronic messages through the air very well, because the data will be unrecoverable with so much noise superimposed on top of it. My guess is that in principle basically anything conductive can act like an antenna and absorb energy from electromagnetic waves. If there is a small piece of an electronic device that is acting like an antenna (intentionally or otherwise), and the energy it is absorbing becomes too high for any circuit it is connected to to handle, the circuit will burn itself out.
  10. If you are strictly talking about noise / interference and not some kind of EMP, then yes, absolutely. I was just finishing a project in Digital Signal Processing yesterday; part of the project involves us filtering noise out of a signal. You can think of the EM interference as basically just being a loud background noise that washes out the intended signal. EM waves in this case are completely analogous to sound waves; for example, it's hard for you to talk to your friend at a normal volume of voice at a rock concert. You could try to create a bandpass filter, a device that would block frequencies other than the dominant frequency of your friend's voice, but if there are frequency components of the noise equal to your friend's voice's dominant frequency, you will still not recover your friend's voice cleanly, perhaps not at all if the noise at that frequency is loud enough (has enough power). Noise in terms of electronic communications is literally noise: it doesn't actually do anything to the electronics itself, it just washes out the received signal.
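A minimal sketch of that bandpass idea in Python with NumPy/SciPy. All of the numbers here (sample rate, tone frequency, band edges, noise level) are made up purely for illustration and are not from the project mentioned above.
[code]
# A stand-in "voice" tone buried in loud broadband noise, then a bandpass filter
# centred on the tone. Noise that falls inside the passband survives the filter,
# which is the limitation described in the post.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 8000                                   # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1 / fs)               # one second of samples
tone = np.sin(2 * np.pi * 300 * t)          # the "friend's voice" at 300 Hz
noise = 2.0 * np.random.randn(t.size)       # loud broadband noise washing it out
received = tone + noise

# Fourth-order Butterworth bandpass: passes 250-350 Hz, attenuates the rest.
b, a = butter(4, [250, 350], btype="bandpass", fs=fs)
recovered = filtfilt(b, a, received)

print("noise power before filtering:", np.var(received - tone))
print("residual error after filtering:", np.var(recovered - tone))
[/code]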
  11. Good point, I never thought of that. I'll need to do some more reading on the subject and come back to this idea later. Astrophysics / astronomy is probably the area where my knowledge is most lacking. It is interesting that the theory fails here, since it works so well with everything else. I'm still convinced it's probably right. *Edit: Crazy idea. It definitely has no basis / motivation other than that it would make this whole idea "work", but one of my favorite quotes is: "The day before anything was a major breakthrough it was a crazy idea". In measurements of the solar wind performed by spacecraft mostly outside of the influence of the earth's magnetic field (where the wind is net neutral), can the devices doing the measuring actually measure the direction of individual particles? Or do they just assume that the particles are all moving the same way? I'll look into this later, but the idea is that with regard to electrical current, the direction the charge is moving matters just as much as the polarity of the charge. By our convention positive charge produces a current in the same direction as it is moving (conventional current), but negative charge produces a current in the opposite direction to its motion. If the differently charged particles are moving in the same direction with about the same flux then there is no net current; if they are moving in opposite directions then the net current is actually doubled. This would obviously lead to the question of where all this negative charge is coming from, if not the sun. I have no idea, it's just a thought.
  12. Today I upload another mathematical exploration regarding Tetryonics, this one regarding electromagnetic interactions and gravity. Definitions: [latex] I = [/latex] current (amperes), [latex] J = [/latex] current density (amperes per meter squared), [latex] A = [/latex] area (meters squared), [latex] R = [/latex] resistance (ohms, assumed for simplicity to be constant even though this is a very untrue assumption), [latex] r = [/latex] radius from source (meters), [latex] V = [/latex] voltage (volts). Now I cite two different sources: [A] T.K. Gaisser and T. Stanev, “Cosmic Rays”, (Bartol Research Inst., Univ. of Delaware); [B] Y.M. Wang, “On The Relative Constancy of The Solar Wind Mass Flux At 1 AU”, Space Science Division, Naval Research Laboratory, Washington, DC, The Astrophysical Journal Letters, 715:L121-L127, 2010. In [A] it is stated that of the particles comprising the solar wind that reach the earth’s upper atmosphere, 79% are free protons, and in [B] that the mass flux density of the solar wind at a distance of one astronomical unit (1.5x10^11 meters, the average distance from the sun to earth) ranges between 2x10^12 and 4x10^12 (particles/(m^2*sec)). Since 79% of the particles reaching the earth’s upper atmosphere are free protons, there is by definition a net current flowing through our solar system, outward from the sun. Ohm’s law: [latex] V = IR [/latex] (1). In our case, since we have a measure of the flux density of the current at a given distance from our source, we reformulate this as [latex] V = JAR [/latex] (2). Assuming an equal area at two separate distances from the source and a constant resistance, or impedance, of the vacuum (it is definitely finite): [latex] V_2 - V_1 = \left(J_2 - J_1\right)AR [/latex] (3). The measure of intensity (of any quantity) follows an inverse square law with respect to distance from its source (assuming the flux out of the source is spherically symmetric): [latex] \frac{\text{Intensity}_1}{\text{Intensity}_2} = \frac{r_2^2}{r_1^2} [/latex]. We solve for [latex] J_2 [/latex] and substitute into equation (3), obtaining [latex] V_2 - V_1 = \left(\frac{r_1^2}{r_2^2}-1\right)J_1 A R [/latex]. Thus we have defined a voltage distribution throughout the solar system due to the predominantly positive charge comprising the solar wind, in terms of [latex] r_1 [/latex], [latex] J_1 [/latex], [latex] R [/latex], [latex] A [/latex]. If we plug in values, letting A = 1 meter squared, J1 = 2x10^12, r1 = 1.5x10^11, and R equal to the value stated on Wikipedia, 376.73, we can let the change in voltage equal 1 volt and solve for the distance required to create that, or r2. What I got is 1.5x10^11, or another astronomical unit. It's not very much but it's still there. An interesting result of this is that it shows we are likely sitting inside of a gigantic and extremely powerful electric field, but it is basically not measurable because voltage is always measured relative to something and the gradient is so small. An additional way that we can infer that this electric field is extremely large and powerful is the current density, 2x10^12 amps per meter squared!? Who has ever heard of that before, lol.
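A minimal sketch, in Python, that only evaluates the relation written above (equation (3) with the inverse-square substitution), using the same numbers quoted in this post; it makes no claim beyond that arithmetic.
[code]
# Solve V2 - V1 = (r1^2 / r2^2 - 1) * J1 * A * R for r2, given a 1 volt change,
# with the values quoted in the post (R is the impedance of free space).
r1 = 1.5e11        # metres, one astronomical unit
J1 = 2e12          # flux density at 1 AU, as used in the post
A = 1.0            # square metres
R = 376.73         # ohms
dV = 1.0           # volts, the chosen potential difference

r2 = r1 / (1.0 + dV / (J1 * A * R)) ** 0.5
print(r2)  # numerically indistinguishable from r1 at this precision, ~1.5e11 m
[/code]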
  13. Hello, this is a very odd question that I was wondering if anyone could share some insight on. I've been reading about the history of science lately, in particular how Joseph John Thomson discovered the electron / cathode rays. The description of the device(s) he used is that he basically had an evacuated glass tube (vacuum tube) with a spark gap in it, and if you apply a potential difference across that gap, electrons are ejected out of the material comprising the spark gap, creating a "cathode ray". This effect is enhanced if the metal comprising the spark gap is hot. My question is what would happen if the metal were instead cold, like extremely cold, to the point that it was superconducting (assuming the metal is made from the right material)? Basically I am just curious as to how a spark gap, or even a capacitor, behaves inside of a circuit once it becomes superconductive (for no particular reason, this is just interesting). Thank you.
  14. “Does being an atheist make you close minded?” I don’t think being either atheist or religious makes you close minded. It is my personal experience that people in general just don’t like being “wrong”. I think if someone believes in anything strongly enough, then even the existence of dissenting opinions can basically be seen as an attack against their belief, especially if that belief is regarding forbidden behavior, thoughts or ideas. The simple fact that others can believe in something other than one’s own belief, or live contrary to one’s own rules, without fear, guilt, or shame causes some people a great deal of mental anguish. It forces them to question the validity of their belief (which is extremely important to them) being a universal truth. In a weird way I think it is completely understandable that people tend to lash out at others when put in that situation. As I said, I think this basic idea is applicable to any system of belief, cultural practice, or even form of governance. Neither science nor religion as institutions are perfect; no institution is perfect. Say you are a scientist and you have spent a year of your life working on theory or device X, and at the end of that year some plucky new young scientist comes along with theory or device Y that completely outperforms X; you are obviously not going to be happy. How you deal with this situation I think is far more dependent upon what kind of person you are than upon your belief or disbelief in a higher power. The exact same thing could be said of someone training to become a priest, who studies theology for hours a day for a year and one day meets someone who says that religion is a waste of time, or the source of most of the world’s problems. Religious people created stoning as a method of punishment / execution; scientists made napalm. Religious people in the dark ages burned people who studied science at the stake as heretics; if you’ve ever watched the documentary “Who Killed The Electric Car?”, or know the story about how the entire anti-vaccine movement started (a scientist faking results), you would know that scientists and engineers engage in morally bankrupt practices for their own gain almost as much as anyone else. Institutions are created and run by people, and to quote Harper Lee, “people is people no matter what kind of people they are”. I think good and evil, right and wrong, stupidity and brilliance are all normally distributed, just like everything else. I think people seem to forget that social skills, like tolerance, patience, even empathy to a degree, are skills. They require practice.
  15. Hello, I have come up with an interesting explanation for electric potential in terms of Tetryonics. This explanation is helpful to demonstrate how the expressions of physical laws contained in Tetryonics are equivalent to almost all expressions for these same physical laws in the standard model. We will start with the equation for the potential due to a point charge: [latex] V=\frac{kQ}{r} [/latex]. It is helpful for our understanding to recognize that, geometrically speaking, (1/r) corresponds to the measure of curvature of a circle, and that we can define, for a fixed r, a circle on which all points have equal potential. A diamond can be inscribed within a circle, and so for a sequence of equipotential circles, decreasing at a unit value, we can inscribe a sequence of equipotential diamonds also decreasing at a unit value. Because a diamond can be inscribed in or circumscribed about a circle (a simplistic argument), diamonds have the same total curvature as circles. Thus by interpreting 1/r as being a measure of total curvature we can generalize potential to any geometry of equal total curvature: [latex] V=(\text{Curvature}) \cdot k \cdot Q [/latex]. The math is basically satisfied, but there are a few conceptual points to clear up. Something I have found that people often seem to forget about voltage, charge and electric fields is that they are only ever measured relative to something. In the case of charge this is an opposite charge, or a relative absence of charge (i.e. there is less positive charge here than there). In the case of the electric field it is measured relative to a test charge, and in the case of voltage relative to ground (or another relatively lower potential which is itself relative to ground). In the theory of Tetryonics voltage is a separation of charge. Opposite charges experience an attractive force due to the mutual reinforcement of the energy momenta within each charge’s charge field geometry that are pointing toward the other charge, or an overlap of different energy geometries. Potential energy exists when there is an equal or greater force that keeps these opposite charges held apart. I am not going to talk in detail about the special case of conduction through a material here. The description as to how / why conduction is different is actually contained in my previous post about epsilon and ooh. The basic idea is that during conduction there are essentially two paths for energy to flow through in order to neutralize the separation of charge. One path is “easier” for energy to flow through because it has a higher epsilon-ooh product, producing a larger distance per volume; thus the energy is “more free” to expand and neutralize through the conductive path. For point charges in free space it helps to think about the discretized form of the properties of voltage described in Tetryonics through some terminology and ideas used in the mathematics of analysis and set theory. In this picture we have defined a 2d open “ball” of radius r centered on a point P; this is the white circle. An open ball is simply a circle of points that excludes its boundary. Inside of this open ball is another point P1; this is the smaller blue circle (it’s not actually a point because the graphical drawing software I used doesn’t have the ability to make points). P1 is the center of another open ball of radius R.
This blue open ball defines an equipotential circle around its source charge P1; the potential on this circle is just above zero. Generally speaking we can make it as close to zero as we want, and here we are defining voltage in the way usually done in the standard model. Now we have two point charges, about each of which are equipotential circles. These equipotential circles would be just above zero (once again via the standard model definition) if these charges were separated very far apart. And now we want to answer the question of what the potential of our blue-encircled charge is relative to our white-encircled test charge. We know that in Tetryonics force is proportional to the overlap of charge fields, and force is related to potential energy. We can redefine the voltage of the blue charge relative to the white charge by the overlap of their charge geometries and then, as in the usual formulation in the standard model, normalize this with respect to the magnitude of the white charge’s charge. Calculating the area of the overlap is simple; there are many known geometric relations to exploit in doing so (a small numerical sketch of these overlap expressions is given at the end of this post). [latex] \text{Area of overlap of two circles} = \frac{1}{2}\sqrt{(-d+r+R)(d-r+R)(d+r-R)(d+r+R)} [/latex] We could also work with the geometry of equilateral triangles if we wish to. [latex] \text{Area of inscribed diamond} = \frac{4 r^2}{\sqrt{2}} [/latex] [latex] \text{Area of circle} = \pi r^2 [/latex] [latex] \text{Circular area to diamond area} := \frac{4 r^2}{\sqrt{2}\,\pi} [/latex] [latex] \text{Area of overlap of two diamonds (or equilateral triangles)} = \frac{\left(r+R-d\right)^2}{2} [/latex] The important aspect of these equations is their dependence on the distance of separation. With these relations we can turn any normal standard model problem of finding the potential between two charges as a function of their separation into a Tetryonic model problem by equivalently asking what the area of overlap is of the lowest equipotential “enclosure” (I guess you could call it) of each source, such that this enclosure is sitting at just above zero (hence it’s the least one), and of course renormalizing with respect to your new geometry. This shows that the very simple case of modelling two point charges in Tetryonics and the standard model is going to produce the same qualitative result, and will even produce the same numerical result upon simply using the right correction factor. But we can also extend this to charge distributions. Now we have two source charges, P1 and P2, inside of our original open ball. If we want to calculate the potential due to geometric overlap of a test charge placed nearby, we do it the same way we would in the standard model: by the superposition of the potential of each individual charge. We calculate the overlap with the teal circle, then add that to the overlap of the blue circle, and we can see that depending on how we situate our test charge, as we move it closer to the open ball at the center we will generate more overlap at different rates; i.e. if we came in from the upper left or bottom right the overlap (potential) would grow slower than if we came in from the bottom left or the top right (this is assuming both charges in the center open ball have the same polarity and are not a dipole, but it will also produce the same results for dipoles as the standard model).
This can be generalized to an arbitrary number of charges and charge signs and / or magnitudes. I don’t have an example of this, but it would be feasible to also go in reverse: start with a known potential distribution and then find the radius of curvature of the osculating circle for the lowest equipotential line (in this case the rightmost one) at a sampling of points across it; the center of the osculating circle should lie on top of your charge distribution and correspond to a point source. This Laplace solver picture is bad because of the way it is plotted: because the top and bottom are held at 0 voltage boundary conditions, the plot is distorted. Now you might be wondering what this is useful for. Well, the answer is that it is not useful for anything, yet. It is simply a demonstration as to how you can arrive at the same results using Tetryonics as with the standard model. People keep thinking that Tetryonics changes too much to ever be accepted, but I think it changes very little. I think what Kelvin, the theory’s author, has discovered is something that was basically hiding under the surface of the standard model all along.
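A minimal sketch in Python that evaluates the overlap expressions exactly as written in this post (they are taken from the text above, not from a geometry reference), for two "equipotential enclosures" of radii r and R whose centres are a distance d apart.
[code]
import math

def circle_overlap(d, r, R):
    # expression quoted above: 0.5 * sqrt((-d+r+R)(d-r+R)(d+r-R)(d+r+R))
    return 0.5 * math.sqrt((-d + r + R) * (d - r + R) * (d + r - R) * (d + r + R))

def diamond_overlap(d, r, R):
    # expression quoted above: (r + R - d)^2 / 2
    return (r + R - d) ** 2 / 2

# The example radii and separations are arbitrary; the point is only how the
# overlap grows as d shrinks, which the post equates with potential.
for d in (1.9, 1.5, 1.1):
    print(d, circle_overlap(d, r=1.0, R=1.0), diamond_overlap(d, r=1.0, R=1.0))
[/code]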
  16. Hello, I am back again. This time I have come to share a prediction made by Tetryonics that in theory is actually testable, but actually testing it involves a bunch of different problems. The first major problem is the usual one with Tetryonics, and that is a lack of computational math. So the idea is once again best described through pictures. The first picture is regarding a phenomenon I have actually worked with in a lab before, dielectrophoresis. Basically you can force dielectric materials to move (it's sort of like conduction) through an electric field, so long as that electric field has a high curvature. This can involve the creation of voltage gradients in excess of 1000 volts, but with clever microfabrication / device design, voltages as low as 25 volts are feasible. What Tetryonics predicts, and this is not really that revolutionary an idea, is that the presence of an electric or magnetic field can alter the trajectory of light. This is obviously possible in high-ooh materials via, for a magnetic example, the Faraday effect (occurring inside of lead-doped glass), but it has never been witnessed in free space, although it might be possible even in the standard model if the field intensity is extremely high. I am thinking that you might not need an extremely high field intensity if the geometry employed is correct, though it still may need to be very large. This is the first major problem: there is no way to compute just how high the field intensity might need to be. But once again, in pictures, the basic idea works like this: the reason I think the field intensity might not have to be extremely high is the same reason it does not have to be for dielectrophoresis. In dielectrophoresis, as well as in this phenomenon, it is the gradient in intensity that is actually important (the textbook dielectrophoretic force expression is quoted after this post for reference). In the most general terms a gradient in energy density through space produces a force; true for gravity, true for DEP, true for electrical conduction. Now this does not completely decouple the magnitude of the phenomenon (deflection of the light's path) from the magnitude of the field intensity, but it does decrease the dependence. You can artificially increase the gradient of the field intensity by bending the field lines; this can be accomplished by using "pointy" field sources at acute angles to each other, and situating the object to be actuated just outside of the center of the two sources (producing actuation away from or toward the sources in a direction perpendicular to the tangent of the field lines). However the maximum value for the gradient can still only be at most the magnitude of the field intensity, i.e. this is the unrealistic case where the field completely drops from its maximum value to zero over a very short distance (like a reverse step function). The second major problem with testing this is that it can only be done by matching the polarization of the light being used to the field geometry used. This means you can only do it with a laser; this is not a real problem since even a dollar store one might work, as it is the magnetic field intensity that matters and not the light intensity. The problem is you must know the polarization of the beam beforehand. This is not that much of a problem either, though, since you can simply try both of the geometries displayed in the picture (this is assuming the laser comes out either vertically or horizontally polarized and not at some weird angle..).
The third problem is that even if you were to produce a deflection of the beam it might be extremely small, say on the order of microns (I really have no idea). There are some ways to get around this problem, however. The first is to focus the beam through a lens before it reaches the region of highest field gradient; if the spot size of the beam is very large any deflection of it will be "washed out", so ideally you would focus the beam as tightly as possible (say 1 micron, with a microscope objective of some kind). The second method is to attempt to measure deflection very far away from the magnets; in order to do this it would be helpful to recollimate the beam after having focused it through the magnetic field. The final method is to bounce the re-collimated beam off of a mirror at a very wide angle (as close to 180 degrees as feasible), in order to help amplify any deflection. Finally you could employ microfabrication to create a very tiny array of photodetectors, but this would obviously require a lot of time / money, etc. If someone could actually demonstrate this (I plan to eventually try, but have no money for materials right now), it would be pretty amazing, because from what I gather the general consensus among academia is that this is impossible.
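For reference only (this expression is not from the post above): the standard time-averaged dielectrophoretic force on a small spherical particle of radius a, which shows explicitly that the force scales with the gradient of the squared field rather than with the field magnitude alone:
[latex] \vec{F}_{DEP} = 2 \pi \varepsilon_m a^3 \, \mathrm{Re}\!\left[ \frac{\varepsilon_p^* - \varepsilon_m^*}{\varepsilon_p^* + 2 \varepsilon_m^*} \right] \nabla \left| \vec{E}_{rms} \right|^2 [/latex]
Here [latex] \varepsilon_p^* [/latex] and [latex] \varepsilon_m^* [/latex] are the complex permittivities of the particle and the surrounding medium, and the bracketed term is the Clausius-Mossotti factor.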
  17. Ok, I have once again sort of answered my own question. This reference: Curvatures of Smooth and Discrete Surfaces, John M. Sullivan, Discrete Differential Geometry, Vol. 38, 175-188, 2008, describes how one can preserve the Gauss-Bonnet theorem for discrete surfaces: [latex] \int\int K \, dA = \sum_{p \in D} K_p [/latex] i.e. the integral of the Gauss curvature is equal to the summation of the curvature defined at every vertex of a surface composed of flat triangular faces. The curvature at each vertex is defined as [latex] K_p = 2 \pi - \sum_{i} \theta_i [/latex] where [latex] \theta_i [/latex] are the interior angles of the triangular faces that meet at the vertex p, as if they were all flattened into a plane, ending up with a bunch of triangles all meeting at a common point. This gives a value of four pi for my example of 3 tetrahedrons joined along an edge at a central axis, but I am not sure if this special case is actually applicable to what the author has defined. In my example I take the center vertex, where we really have one vertex of each individual tetrahedron meeting on the central axis, as having 9 triangles, each with interior angles of 60 degrees. Thus we end up with: [6*(360 - 3*60)] + [2*(360 - 9*60)] = (6*180) - (2*180) = 4*180 degrees = 4 pi. But obviously 9 triangles can't meet at a central point without overlap. As I've said, I am not sure if that breaks what the author has defined, but it did give me the answer I was hoping for, lol.
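A minimal sketch in Python of the discrete Gauss-Bonnet sum described above, K_p = 2*pi minus the sum of the triangle corner angles meeting at each vertex, using the corner counts from this post (all corners are equilateral-triangle angles of 60 degrees).
[code]
import math

def vertex_curvature(n_corners, corner_angle=math.pi / 3):
    # K_p = 2*pi - (sum of corner angles meeting at the vertex)
    return 2 * math.pi - n_corners * corner_angle

# Single regular tetrahedron: 4 vertices, 3 corners each -> total 4*pi.
single = 4 * vertex_curvature(3)

# The post's example of 3 tetrahedra joined along one edge, using its own
# counting: 6 vertices with 3 corners plus 2 axis vertices with 9 corners.
joined = 6 * vertex_curvature(3) + 2 * vertex_curvature(9)

print(single / math.pi, joined / math.pi)  # both print 4.0
[/code]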
  18. For anyone who might have been interested in this question, I answered it myself, sort of. This source has a more general definition of the Euler characteristic of a surface, as well as the Euler characteristic of "polyhedral sets": "A New Look at Euler's Theorem for Polyhedra", Branko Grünbaum and G. C. Shephard, The American Mathematical Monthly, Vol. 101, No. 2, Feb 1994. The reason I write "sort of" is that it is unknown to me how this particular definition, which is slightly different from the one I am familiar with, affects the Gauss-Bonnet theorem. For example, using their approach you would instead calculate the Euler characteristic of a cube to be 1 (as opposed to 2), but the joining of an arbitrary number of cubes along edges (or faces) also gives 1. But what I still wonder is whether or not the arbitrary number of cubes joined along edges or faces still has 4 pi total curvature. Does anyone know?
  19. I have worked with nanoparticle solutions before as an undergraduate student researcher, and the first thing I think you should know is that working with nanoparticles is sometimes fairly difficult. A lot of nanoparticles are extremely toxic; for example, in the case of chromium(IV) oxide an acute dose in the regime of micrograms can cause immediate death. Read a lot of literature about nanoparticle safety first. The reason I say "a lot" is that there is a lot of conflicting information out there regarding safety standards and chronic effects. Aside from the ones that pose an immediate and serious danger to your health, the chronic effects of nanoparticle exposure for many materials are unknown, in the sense that literally no one knows what they do to you. Most MSDS data for threshold limit values of exposure are actually wrong: the values quoted are often extrapolations from data for microparticles, and the key interest in nanoparticles and nanosystems is that their chemical and physical properties are often vastly different from those of other size regimes. Work with nanoparticles is usually limited to occurring inside of an airtight glovebox. There are a few other practical aspects of working with nanoparticles you will want to consider / look into. The first is that they are extremely messy; a lot of them stick to absolutely everything, so that when you work with them it is difficult not to contaminate basically everything around the workspace. You will need to invest time in developing rigorous clean-up protocols. The second is that developing a stable colloidal solution of nanoparticles, or nanoparticles homogeneously dispersed in a fluid, is also usually very difficult. This source you might find "a little bit" helpful: Horiba Scientific: (White Paper) Dispersing Powders in Liquid for Particle Size Analysis. But the creation of a stable colloidal solution is sometimes only possible through a lot of trial and error. In general, fluids that are highly polar are the best candidates, as a lot of different nanoparticles acquire a large electrostatic charge and polar fluids are able to most effectively screen charge interactions between the particles. Water is particularly useful because its charge screening abilities can be enhanced by changing its pH. You potentially have a lot of work ahead of you; good luck.
  20. Hello, I have been looking around lately for literature about a specific topic but have not found anything directly relevant to what I am looking for, so I was wondering if anyone had any "keywords" regarding this topic for me to try out. Basically, if we have a polyhedron and we join it to another polyhedron along an edge or face, does this change its Euler characteristic? I would like to find some kind of proof that the Euler characteristic stays the same no matter how many joined polyhedra you have, even with the joining possibly occurring sometimes between edges, other times between faces, maybe also between completely different polyhedra. Does anyone know if this already exists somewhere and what field it would be a part of? It's fairly simple to try an example of a "cluster" of tetrahedrons joined along a single edge. For N tetrahedrons joined along a single edge we have X = V - E + F, with 2 = 2(N + 2) - 6(N + 1) + 4(N + 1), with the obvious limit that N has to be less than or equal to six. Also, it's important to note that the center axis that the tetrahedrons in this example are joined along is counted as a different edge in between each of the tetrahedrons (is it correct to do that?). For a chain of tetrahedrons joined by their faces we have: 2 = (N + 4) - 3(N + 2) + 2(N + 2). You could easily prove these as being true for integers via induction, but what I am interested in is the actual geometries.
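A minimal symbolic check in Python of the two Euler-characteristic expressions exactly as written above, so the identity holds for every N rather than just a few hand-tried values. Note this only checks the algebra; it says nothing about whether the vertex, edge and face counts themselves are the right way to count the joined solids, which is the open question in the post.
[code]
import sympy as sp

N = sp.symbols("N", positive=True, integer=True)

edge_joined = 2 * (N + 2) - 6 * (N + 1) + 4 * (N + 1)   # tetrahedra sharing an edge
face_joined = (N + 4) - 3 * (N + 2) + 2 * (N + 2)       # tetrahedra chained by faces

print(sp.simplify(edge_joined), sp.simplify(face_joined))  # both simplify to 2
[/code]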
  21. This is a very interesting idea, Conway, but I don't think it actually is all that different from what has already been created in mathematics. I have two points that I would like to add to this discussion. The first is what I think is a very intuitive physical description of how multiplication works in the mathematical system you have defined, which I would like you to either agree or disagree with as to its accuracy. Say you have a "saw tooth" ruler, a ruler with spikes sticking up out of it every one centimeter, with a total of, say, 10 spikes. You have another saw tooth ruler with ten total spikes, each spaced 1 centimeter apart. Multiplication of ten by 2 is basically like taking both rulers and fitting them together so that each spike now fills the 1 centimeter gap between the other ruler's respective spikes. Now along the single length of the combined rulers you have 20 spikes. This doesn't work out exactly like it should, because the total length of the combined ruler pair will have increased by a length of 2 centimeters, since one tooth will hang off each end, but the idea that one value is fitted between the values defined in the other number's space, without altering the space, is what I am trying to physically demonstrate. Multiplication makes the values in a space more "dense", division makes them less dense. Does that make sense to you? The second point is that this idea simply reminds me a lot of vectors and imaginary numbers. The idea that an imaginary number has a magnitude and a phase seems similar to your idea of value and space. The magnitude is basically its "value", and the phase its "space". The idea in relative mathematics that you can partially define the addition, subtraction, multiplication of numbers etc. relative to either the value or space of a partially defined number reminds me a lot of that. It's basically like saying we can add two imaginary numbers, A: we know its magnitude and phase, B: we know its phase but not its magnitude; we then know that the addition A+B will have a phase equal to the addition of the phases of A and B, and its magnitude will be some number relative to the magnitude of A. Or say if we multiply A and B, we instead know that the resulting magnitude is some multiple of A with the phase of A+B. You might instead want to take a look at the axioms defined for imaginary numbers rather than real ones; I am not sure, but they might be more relevant to what you are doing.
  22. I suggest you read about the field of mathematics known as "differential geometry". I am not an expert on this topic; I simply took an introductory course on it a few years ago at my university and have been tinkering with it on a project I have been working on lately. One of the more interesting and important aspects of this field is the description of surfaces in terms of intrinsic coordinates. The simplest example to explain how this works is a sphere. Basically any 3d surface can be described solely in terms of its two principal curvatures, usually denoted k1 and k2 for short. A sphere has constant curvature in every direction, hence why it is the simplest example. Now there is something known as the Gauss-Bonnet theorem, which states that the total curvature, if you integrate the Gaussian curvature (the product of k1 and k2) over the entire surface, will always equal 2pi*X, where X is the surface's Euler number. Euler numbers of surfaces are constants that define what "family" the surface belongs to; for example, all of the Platonic solids (and the sphere) have X = 2. So a sphere has four pi total curvature. In order to describe how we can use coordinates intrinsic to the surface of a sphere, it is helpful to imagine the sphere as having a "frame" of two perpendicular circles, one horizontal, the other vertical. If you were to travel around the horizontal circle you can obviously go through a maximum rotation of 2 pi radians, same for the vertical one. Thus we can define points on the sphere by associating them with different values of the rotation along each circle. Obviously there is a whole bunch of spatial information lost, with regards to "where is this point in space?" and "how is the sphere oriented in space?". But this is one method of describing surfaces in terms of solely themselves, i.e. uncoupled from the space they are embedded in.
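A minimal numerical check in Python of the Gauss-Bonnet statement above for the simplest case, a sphere of radius a: the Gaussian curvature is K = 1/a^2 everywhere, and integrating it over the surface gives 2*pi*X = 4*pi no matter what a is.
[code]
import math

def total_curvature_of_sphere(a, n_theta=400, n_phi=400):
    K = 1.0 / a**2                  # Gaussian curvature of a sphere of radius a
    d_theta = math.pi / n_theta
    d_phi = 2 * math.pi / n_phi
    total = 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_theta                      # polar angle at cell centre
        dA = a**2 * math.sin(theta) * d_theta * d_phi    # surface area element
        total += K * dA * n_phi                          # K and dA independent of phi
    return total

for a in (1.0, 2.5, 10.0):
    print(a, total_curvature_of_sphere(a) / math.pi)     # ~4.0 for every radius
[/code]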
  23. Hello, I am back again with another intriguing development. Progress is slow for me right now because I am currently enrolled in school. This post is regarding some tinkering I have been doing with epsilon and ooh (someone who studies ancient history once told me that “mew” is actually supposed to be pronounced “ooh”; “mew” is an English bastardization of it, and I can’t break the habit of calling it ooh now). I’ve been doing this to once again figure out how to manipulate Maxwell’s equations to give a good time domain model of individual quanta. What are epsilon and ooh in Tetryonics exactly? One is obvious, the other is a little bit odd. I kind of hate simply manipulating units, but for now it’s the only thread I have to tug on really. epsilon = (amps^2)*(sec^4)/(kg*(m^3)). There is another reparametrization in Tetryonics that is sometimes useful: Amps = kilograms/sec, i.e. mass of charge carrier(s) per second. Using this reparametrization we continue on: = ((kg/sec)^2)*(sec^2)/(kg*(m^3)) = kg/(m^3). In terms of electrical charges and the physical description of what permittivity is, this makes sense. Permittivity is a measure of how easy it is to store electrical energy. It does this by polarizing / orienting dipoles. So if we have a lot of mass of possible charge carriers per volume, then we have a high permittivity and we can polarize a lot of material and store a lot of electrical energy. But we can also do this: = (Energy/c^2)/(m^3) = (Energy/c^2)/(c^4) = density of 2d planar energy per relativistically normalized spherical volume. This is another equation full of subtlety, and like most things in Tetryonics it is best explained by a picture; unfortunately I don’t have a picture for this, so you’ll just have to imagine things in your mind. The energy per c^2 is the energy per 2d radial area. The c^4 signifies a 3d spherical volume. If we want, we can construct the “frame” of a sphere out of two perpendicular 2d circles. This gives us horizontal and vertical polarizations of light, for example. But we really could construct a sphere out of an infinite number of different circles, all at different angles to each other, centered on the sphere’s center. This is what this equation, or epsilon, is describing. Epsilon, at least the value that is currently measured for epsilon naught, is strangely enough not a fundamental property of anything; it is merely a correction factor for the local energy density of our region of the universe. We know that waves turn towards regions where they are moving slower, and that light moves slower through high-permittivity mediums relative to vacuum. Epsilon is basically a measure of the “electrical stiffness” of a region of space. A spring with a high k constant (basically epsilon) can store more energy, but it is harder to compress, because it has a high energy density. This is exactly what epsilon is, because energy has momentum associated with it, and it is the overlap of different energy geometries and their momentum that creates forces. Now for ooh: ooh = Newtons/Amps^2 = kg*(m/sec^2)*(1/Amps^2) = (Energy/c^2)*(m/sec^2)*(1/amps^2) = (Energy*m/((c^2)*(sec^2)))*((sec/kg)^2) = (Energy/c^2)*((c^2/Energy)^2) = m*(c^2/E) = meters * ((radial relativistically normalized area) / (energy)). The problem is I don’t really know what ooh by itself means. It’s a measure of how much mass decreases with distance, I guess? It sort of makes sense, because of the idea that in Tetryonics you have an associated mechanism for redshift.
Energy expands and covers more area as it radiates, so that if you have a source and a destination that are stationary relative to each other but extremely far away from each other, the energy emitted from the source will have shifted to the red by the time it reaches its destination. This once again does not supersede the idea of Doppler shift; it occurs simultaneously with Doppler shift. What is odd is that this is only associated with ooh, and that it is associated with ooh at all; the idea makes sense, what I can’t make sense of is why this idea shows up here specifically. The product of ooh and epsilon is supposed to equal the speed of light squared: [Epsilon]*[ooh] = c^2 = [(Energy/c^2)/(c^4)]*[m*(c^2/E)] = m/c^4. Distance per relativistically normalized spherical volume?? In this case the product under Tetryonic reparametrizations for mass and amperes gives us something weird, something that once again only kind of makes sense. I don’t have any proof of this yet, but what I think is going on is that if you have a sphere that is being traced out by four oppositely travelling beams of light, then for a given volume they will have travelled a certain distance. The product of epsilon and ooh then does give us a constant ratio regarding the speed of light, but the order doesn’t match; it’s not c^2, not really. Also there is no time dependence, but time dependence I don’t think is really a problem (this is the thing I don’t have proof for) because of ideas from differential geometry. I’ve talked before about how the Gauss-Bonnet theorem is essentially built into our current formulation of electromagnetism, with far-reaching and subtle implications. For all of the machinery of the Gauss-Bonnet theorem and differential geometry to work we are dependent upon a few important assumptions about the vector fields / surfaces that they are operating on. The important one in this case is that all curves in our space travel at “unit speed”, or the magnitude of the tangent vector to them is always equal to 1 everywhere along them. This is why I think the time dependence disappears: it is essentially redundant due to the construction of the geometries we are working with anyway, since energy only travels at one speed during transverse propagation, the speed of light.
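For comparison only, and not as part of the Tetryonic derivation above: in ordinary SI electromagnetism the measured vacuum constants are related to the speed of light by c = 1 / sqrt(epsilon_0 * mu_0), which a two-line Python check confirms numerically.
[code]
# Numerical check of the textbook SI relation c = 1 / sqrt(epsilon_0 * mu_0),
# using the standard constant values shipped with SciPy.
from scipy.constants import epsilon_0, mu_0, c

print(1.0 / (epsilon_0 * mu_0) ** 0.5)  # ~299792458 m/s
print(c)                                # 299792458.0 m/s
[/code]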
  24. Hello, I am back again to share something. I have been thinking about how to construct math to describe 3d conical quanta in the theory of Tetryonics, and I have come up with a preliminary result that is kind of interesting. What I was trying to do is create a model for electrostatic charge out of Maxwell’s equations that would describe an electrostatic field as being composed of two changing magnetic fields that cancel each other out at every point in time and space but still leave behind a measurable electrostatic charge field. What I ended up with almost works; it is a way to describe an average charge density as being created by the curl of a magnetic field. I don’t know if this is something already known about in the standard model, maybe someone here can answer me that question; it does almost seem like it should be a pretty basic result. The point where this result fails to do what I want it to is that it is not composed of two magnetic curls. I don’t know how to describe the way that this result is not complete. If you curl your right hand with your thumb pointing toward you, then from the perspective of someone looking at you, you are actually curling your fingers to the right instead of the left. Though you are still using your right hand, you can never actually see this phenomenon yourself, only imagine it; if you walk slowly around in a circle and continue this exercise, you always see your hand performing this process in the same configuration. As noted in my previous post, this is basically what an electrostatic field in Tetryonics is doing: curls curling the same way but with opposite orientation cancel each other as effectively as curls that curl in opposite directions with the same orientation. This result does not describe that phenomenon, but I would like to find some way to make it do that. On to the math… (This might be painful to read; I had everything all nicely made in Word, but the forum won't let me copy that into a post.) Maxwell’s equations tell us that [latex] \nabla \times B = u \left( J + e \frac{\partial E}{\partial t} \right) = C = (A+B) [/latex] and [latex] \nabla \cdot E = \frac{p}{e} = D [/latex] We would like to show (somehow) that all or part of C = D, subject to some kind of condition: (A+B) = D. The simplest condition imaginable is multiplication by some term Y. This can be thought of as equivalent to integration, if we change our condition somewhat such that instead we show that all or part of dC = dD. This condition could take alternate forms, such as A or B = 0 with the other term multiplied by dY and integrated. The point of this is to find all possible conditions such that a curling, or twisting, magnetic field can create a static charge distribution, and to check whether any of these conditions actually make some kind of physical sense. If one of them does, it could provide insight as to how to model 3d quanta in time, as well as their interactions. p/e = (e*u)*((d/dt)E)*Y (Q/(m^3))*((kg*(m^3))/((A^2)*(sec^4))) = (c^2)*(d/dt)(kg*m/((sec^3)*A))*Y Where c is the speed of light, and e and u are the electric permittivity and magnetic permeability respectively.
(Q/1)*(1/((A^2)*(sec^4))) = (c^2)*(m/((sec^4)*A))*Y
(Q/1)*(1/(A*(sec^4))) = (c^2)*(m/(sec^4))*Y
(Q/1)*(1/A) = (c^2)*(m/1)*Y
(Q/1)*(1/(Q/sec)) = (c^2)*(m/1)*Y
sec = (c^2)*m*Y

Remember that in Tetryonics we do this: c^2 -> m^2 (radial, relativistically normalized units). So

sec/(m^3) = Y
dt/(dV) = dY

This makes some kind of sense because of the divergence theorem and the time derivative of the electric field: if we integrate with respect to time and then normalize with respect to volume, we get an average charge density out of term B. Now let's do the same to term A (multiplying both sides by e and using e*u -> c^2 as before):

p/e = u*J*Y
Q/(m^3) = (c^2)*J*Y
Q/(m^3) = (c^2)*(A/(m^2))*Y
Q/(m^3) = A*Y
Q/(m^3) = (Q/sec)*Y
sec/(m^3) = Y
dt/dV = dY

We get the exact same Y. Therefore

[(Del) X B]*Y = (Del) . E = p/e

where *Y is the operator

[C]*Y = (d/dV) integral(C, dt)

taken over one complete period of the cyclic C. (I know there is nothing in the math so far to inherently suggest this; I am using assumptions about what this is eventually going to be applied to.) Although it would take some actual proof and figuring out, we can intuitively guess that this operator has an inverse (due to the properties of electric and magnetic fields, as well as of derivatives and integrals, i.e. the fundamental theorem of calculus):

[D]*Y^-1 = (d/dt) integral(D, dV)

Discussion: As noted before, it would take some work, which I am as yet unsure how to do, to investigate further properties of this operator.

- It should have an inverse, but the order in which you perform the integration and the partial differentiation might matter; I am not sure.
- I also don't think this operator is necessarily commutative, though it is probably associative.
- A simple thought experiment to check that the idea makes sense is a straight current-carrying wire. The current creates a uniform curl along the length of the wire; let's assume the current is constant. Our dV in this case becomes dx (a differential of length), and since a constant current gives no time-varying electric field we can ignore that part. If we integrate the curl created by the current along the wire's length we end up with an average density of charge inside the wire. (A small numerical sketch along these lines appears just after this list.)
- I predict, though once again without proof (it does make sense from my own familiarity with manipulating vector integrals in school assignments), that if you have a curl that varies along a single dimension, say a current-carrying wire where the current decreases with distance, and you integrate it along that dimension, you will end up with a cone-like volume of average charge density.
- With regard to the conical quanta, this math does not provide a derivation of conical quantization of electric and magnetic fields; it simply provides a clue as to how we might be able to model them. The derivation would still probably have to be similar to what I described before, since one of the largest differences between Tetryonics and the standard model is that fields have finite dimensions in Tetryonics, and this is a direct result of the fact that energy is quantized. In Tetryonics, instead of having discrete imaginary particles as force / energy / information carriers, we have regular tessellations.
- As noted before, this math is not a complete description of Tetryonic quanta in time.
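Purely to make the definition of the *Y operator concrete, here is a minimal sympy sketch of a toy one-dimensional stand-in, where the volume coordinate is replaced by a length x as in the wire thought experiment; the particular input field and the names J0, L, omega are hypothetical choices of mine, and nothing here proves anything about charge:

import sympy as sp

x, t = sp.symbols('x t', real=True)
J0, L, w = sp.symbols('J0 L omega', positive=True)

# Toy stand-in for the curl term: a cyclic, current-like density that falls off
# linearly along the length coordinate x, with one full period T = 2*pi/omega.
C = J0 * (1 - x / L) * (1 + sp.cos(w * t))

T = 2 * sp.pi / w

# The proposed operator *Y: integrate over one complete period in time,
# then differentiate with respect to the volume-like coordinate (here just x).
Y_of_C = sp.diff(sp.integrate(C, (t, 0, T)), x)
print(sp.simplify(Y_of_C))  # -> -2*pi*J0/(L*omega), an x-independent "time-averaged density"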
One of the most important results I was trying to create with this, but have not yet succeeded at, is to show within the framework of the standard model that you can build a mathematical model of electrostatic charge from (I don't know the proper way to describe this, so let's say) the superposition of two oppositely facing curls of dynamically changing magnetic fields. You could then describe an electrostatic field as being created by two changing magnetic fields whose magnetic components have destructively interfered with each other so as to make them not feasibly measurable, while the electric field remains measurable.
  25. Hello, I am back with what I think is a major breakthrough to do with Tetryonics. This post is long, but I think you will find it is worth it. This is something I thought up myself; I have notified the theory's author about it but have yet to hear back from him. I think people may find it particularly interesting because it lets us model Tetryonic quanta in time and in three dimensions, something that before was not entirely clear how to do.

The solution is very simple: in order to model the 2d quanta in time and in three dimensions, all we have to do is rotate them about a central axis. The result in 3d is this. This is a negative quantum.

There is a very important conceptual idea that has to be elaborated here, with interesting, sometimes bizarre and far-reaching consequences. This quantum, as previously defined, is a perfect quantum inductor, and as a result it obeys the Left Hand Rule (LHR). If you curl your left hand's fingers in the direction of the arrows, your thumb will point toward the tip of the cone; this is also the direction of the quantum's magnetic moment, and the curling of your fingers represents the twisting electric field. As perfect quantum inductors these quanta induce themselves: they consist of a changing electric field which creates a changing magnetic field, which creates the changing electric field again, and so on. It is also important that, as with the 2d representation, each quantum has an associated velocity vector, pointing in the 2d case from the base of the triangle to its tip and in the 3d case from the base of the cone to its tip. The twisting electric field creates the magnetic field as well as a vector velocity that causes the quantum to move.

The above is the image of a positive quantum; it obeys the Right Hand Rule (RHR). Unfortunately there are a few errors in the pictures I have created for this post, and I will not have access to the software to fix them until Monday. I accidentally made the quanta the wrong color: to sync with the 2d geometry the red one should be positive and the blue one should be negative, and should also be black. Everything still works fine; it's just that the color mismatch is a bit jarring when you are trying to think of the 2d and 3d representations together (sorry).

As a result of applying the LHR and RHR, the interaction of quanta becomes, in a very odd way, uncoupled from their charge and magnetic moment; you will see that the direction of their twist and their linear momentum become the only important things to consider. Over the next few pictures we will see how these quanta can qualitatively interact in three dimensions. Note that the pictures have had their components "exploded" to make it easier to see what is going on. In reality the only way a force is ever established between quanta, or energy transferred between geometries, is when the geometries overlap; we will come back to this in a little bit because it has very important implications for time dilation and phase matching.

In this picture the twists are in opposite directions. The resulting order of magnetic poles going from left to right is therefore (S-N)-(S-N), an attractive configuration: the magnetic poles will want to align with each other. Due to their opposite color the quanta are also obviously oppositely charged, which also produces attraction. Their linear momenta are also parallel.
In this case the interaction reinforces each of their linear momenta; they are in a state that I think is equivalent to constructive interference.

In this picture the twists are in the same direction. The resulting order of magnetic poles going from left to right is therefore (N-S)-(S-N), a repulsive configuration: the magnetic poles should want to deflect each other. I am guessing the outcome will depend on each field geometry's respective energy, i.e. the larger-energy field will dominate the interaction because it has more inertia. (A toy bookkeeping sketch of these twist and pole-ordering rules appears a little further below.)

You may be wondering what I mean by "field" when we are supposed to be talking about quanta. While each of these represents a single quantum of energy, we know the geometry is fractal in 2d. As a result, as we will see in more detail later, the interactions and properties of the smallest quanta are duplicated by larger geometries. Essentially we can define 1 in our system of math to be any square number: we can build an equilateral geometry out of 4 ones, one row of 1 quantum and another row of 3 quanta, but we could redefine our 1 to be a group of 4 quanta, giving us a row of a single N^2 = 4 "quantum" and a row of three N^2 = 4 quanta beneath it (it makes sense if you draw a picture).

Getting back to the interaction in the picture: the magnetic fields want to deflect, the electric fields are of opposite polarity and want to attract, and the linear momenta are antiparallel and convergent. I think the quanta should therefore want to pass through each other, though it is not entirely clear, since this is actually a special case of the interaction in which each field has equal energy. You would have to create some kind of simulation to really see what happens, but the exciting part of this new development is that I think the tools for actually accomplishing that are now very close to being realized.

This is essentially the same picture as the previous one, except that the quanta are now on opposite sides of each other. This picture has an error in it: the blue cone's rotational arrows should show a spin going the opposite direction from what is drawn. Each cone's spin then goes in the same direction, and the magnetic poles progress from left to right as (S-N)-(N-S); once again these should want to repel / deflect each other. The electric fields are once again opposite and attractive, but the linear momenta are divergent. I am once again not entirely sure what the NET interaction should be; it would depend on which field has more energy. It is interesting to note, however, that this is the same geometry as a photon, but in 3d. Photons obviously radiate at the speed of light. It is not obvious what the difference is between opposing KEM fields of opposite charge and photons, other than that a photon's geometry has two equal and opposite KEM fields, whereas for the KEM fields of colliding or interacting particles this is not necessarily so.

This is a tilted picture of a three-dimensional KEM field. KEM fields are supposed to form by the inductive coupling of quanta; here we see that this must mean they have twists going in the same direction. This is a three-dimensional KEM field as viewed from the top. We can see that if we align all of the 3d quanta along a single plane and look at the KEM field perpendicular to that plane, we see the 2d geometries already defined by Kelvin. This 3d interpretation does not replace the 2d one; it is more of a complementary viewpoint.
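Here is the toy bookkeeping sketch mentioned above. The encoding is my own illustrative assumption, not Tetryonics' actual math: each quantum gets a charge sign and a "twist" axis (circulation taken counter-clockwise around the twist vector, the usual right-hand convention), and the tip / magnetic moment / momentum direction follows from the RHR for positive quanta and the LHR for negative ones.

import numpy as np

def tip_direction(charge_sign, twist):
    """Direction of the cone tip / magnetic moment / linear momentum.
    Positive quanta follow the RHR (tip along +twist),
    negative quanta follow the LHR (tip along -twist)."""
    twist = np.asarray(twist, dtype=float)
    axis = twist / np.linalg.norm(twist)
    return axis if charge_sign > 0 else -axis

def magnetic_configuration(tip_left, tip_right, separation=np.array([1.0, 0.0, 0.0])):
    """Classify the pole ordering of two cones laid out along `separation`
    (left to right): parallel moments give a head-to-tail (S-N)-(S-N)
    ordering (attractive), antiparallel moments put like poles face to face."""
    sep = separation / np.linalg.norm(separation)
    left, right = np.dot(tip_left, sep), np.dot(tip_right, sep)
    return "attractive (S-N)-(S-N)" if left * right > 0 else "repulsive (facing like poles)"

x = np.array([1.0, 0.0, 0.0])

# Opposite twists, opposite charges: moments come out parallel -> attractive.
print(magnetic_configuration(tip_direction(-1, -x), tip_direction(+1, +x)))
# Same twists, opposite charges: moments come out antiparallel -> repulsive.
print(magnetic_configuration(tip_direction(-1, +x), tip_direction(+1, +x)))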
As I continue explaining how we end up with time-domain modeling capabilities from a 3d structure, you will see that a 2d geometry is essentially the same as analyzing a system in terms of its frequency domain and energy spectral density, whereas viewing the geometry in 3d is like analyzing the system in its time domain.

A 2d geometry for energy always bothered me, because electrostatic fields should be 3d, since that is what is measured of them. The thought used to be that the diamond shape of an electrostatic field is simply a slice, or cross-section, of the real field. Thinking about how, if it is just a slice, the field should really be everywhere in 3d space at once, I thought back to first-year calculus and the idea of rotating graphs about an axis, and decided that the quanta are probably in fact cones. But then, if they are cones, how do we keep all of the good stuff that Kelvin already discovered about the 2d geometry? It was not clear to me for a while, until the very simple answer hit me like a lightning bolt one day while I was thinking about time dilation.

Suppose we have a circle of radius r and a concentric circle of radius R, with R > r, a point p on the circle of radius r, and a point P on the circle of radius R, with p and P both lying along the radius line of length R. If we now cause these circles to rotate with a few restrictions, we can basically create a system that models time dilation. The restrictions are that p and P must always lie along R, and that p rotates with constant tangential velocity regardless of R's length. Because p rotates at constant velocity, as we lengthen R the point P takes longer and longer to go through a full rotation as compared to p: time dilation.

Now here is where it gets really interesting. If we say that KEM fields rotate about their central axis, with the restriction that the quanta immediately next to / along this axis rotate at one constant speed, then as the KEM field gets bigger its outer tips have a longer period of revolution than the center. Thus large energy geometries, corresponding to large velocities, produce time dilation at the quantum level. This is a picture of a 3d KEM field from the back, generally displaying what I am talking about. Since every quantum in the KEM field spins in the same direction, where they meet they should essentially act like gears with intertwined teeth, causing them to rotate about the central axis of the KEM field. What is really cool is that if we average out the location of the energy over a single period, we should end up with a 3d geometry that is once again basically a cone. So larger geometries behave exactly like smaller geometries: in 2d the geometry of energy is fractal, and in 3d I don't know if it still satisfies that definition, but it sort of does.

It gets even more interesting when we think about how forces are transferred. We know forces are the result of the overlap of opposing or parallel energy momenta, creating states of convergent or divergent energy momenta. Suppose a KEM field rotates, and here I am including KEM fields that are part of larger composite geometries like electrostatic fields, photons and even Matter. What ends up happening is that force is not transferred continuously; it is transferred on the ticks of each field's energy clock. When these energy clocks tick in opposite directions, the rate of ticking of the composite system is very fast.
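Before going on, here is a minimal numerical sketch of one possible reading of the rotating-circles picture, under an assumption of mine made purely for illustration: every point of the geometry moves at the same constant tangential speed, so the period of a full revolution grows linearly with radius, the "bigger geometry, slower clock" behaviour described above.

import numpy as np

# One possible reading of the rotating-circles picture (an illustrative
# assumption, not the exact construction above): every point moves at the same
# constant tangential speed v, so the period of a full revolution grows
# linearly with radius -- larger geometries tick more slowly.
v = 1.0                                   # constant tangential speed, arbitrary units
radii = np.array([1.0, 2.0, 5.0, 10.0])   # radii of the "outer tips"
periods = 2 * np.pi * radii / v
for r, T in zip(radii, periods):
    print(f"radius {r:5.1f} -> period {T:7.3f}")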
Returning to the energy clocks: if one energy field is much larger than the other, then the rate of ticking depends on how big the gap between them is; however, because we know that energy is transferred in sequentially odd-numbered jumps, the energy will "flow" from the large field to the small one.

Now, you may be thinking, or have been for a while, that I seem to have missed an EXTREMELY important point to do with self-inductance. A self-inducing field like a photon has both negative and positive components of polarity over time. Well, I claim that so too do these energy cones, just not in the same way. For example, if we look at the red cone head on, its twist is clockwise, say defining a negative electric charge. But if we now look at the same negative red quantum from behind, its twist is counter-clockwise. Yet a counter-clockwise twist "belongs" to a positive blue quantum as viewed from the front. It almost seems like there is a gap in logic here, because if we applied the RHR to this twist it should indicate that the momentum goes the opposite way, but in fact it doesn't, and the reason why is very interesting. Charge is only measured relative to another charge. Just as with the 2d electrostatic fields that had positive and negative sides to the 2d quanta, we noted that this in fact didn't matter, because the overlap of each respective side produced the same interaction as long as we kept them separate. This is also true of our left and right hands. This is what I was talking about at the beginning when I said that fields are now only loosely coupled to magnetic and electric field polarity; it is the direction of twist and linear momentum that is truly important.

I would like to leave you all with a few final thoughts on a different but related idea. I believe there has been a serious error in the formulation of electromagnetics. Now, it is somewhat ridiculous of me to claim this since I have only just started my third-year engineering electromagnetics course, but allow me to take you on a bit of a long walk of reasoning. The error is an assumption that was made implicitly, without anyone even realizing it. Electric and magnetic fields are obviously spherically symmetric, and electrons and quarks, as far as anyone can tell, are point particles with no structure. A spherical coordinate system is defined by two angles and a distance (theta, phi, rho). Because EM fields are spherically symmetric, I believe an implicit assumption has crept in that the phase of the electric field Ephase, the phase of the magnetic field Mphase, and the magnitude of interaction (force) F correspond to coordinates like this:

Ephase -> theta, Mphase -> phi, F -> rho

But this is not necessarily true, even if the fields are spherically symmetric. I have written before about the Gauss-Bonnet theorem. I won't go into a proof of it here or into too much detail; the basics should suffice. The basics are that any closed orientable surface without holes or handles has the same total curvature: if you place a normal vector at every point of such a surface, taken together at infinity they define a sphere. As such, any sufficiently small field source will have spherical symmetry, because the total curvature, the integral of the product of the tangential curvature k1 and the torsional curvature k2 (the Gaussian curvature), comes to four pi over the entire surface no matter the surface's size. (A quick numerical check of this four-pi statement is sketched below.)
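A minimal numerical sketch of that last statement: the Gaussian curvature of a closed genus-zero surface integrates to 4*pi regardless of its size or proportions. The ellipsoid and its semi-axes here are an arbitrary example.

import numpy as np

# Check that the Gaussian curvature K of a closed genus-zero surface integrates
# to 4*pi, independent of size and shape (Gauss-Bonnet). Example surface: an
# ellipsoid with arbitrary semi-axes a, b, c.
a, b, c = 1.0, 2.0, 3.5

# Midpoint grid in the usual spherical-angle parametrization.
n_theta, n_phi = 400, 800
dtheta, dphi = np.pi / n_theta, 2 * np.pi / n_phi
theta = (np.arange(n_theta) + 0.5) * dtheta
phi = (np.arange(n_phi) + 0.5) * dphi
T, P = np.meshgrid(theta, phi, indexing="ij")

# Points on the ellipsoid.
x, y, z = a * np.sin(T) * np.cos(P), b * np.sin(T) * np.sin(P), c * np.cos(T)

# Tangent vectors and the area element |r_theta x r_phi| dtheta dphi.
r_t = np.stack([a * np.cos(T) * np.cos(P), b * np.cos(T) * np.sin(P), -c * np.sin(T)], axis=-1)
r_p = np.stack([-a * np.sin(T) * np.sin(P), b * np.sin(T) * np.cos(P), np.zeros_like(T)], axis=-1)
dA = np.linalg.norm(np.cross(r_t, r_p), axis=-1) * dtheta * dphi

# Gaussian curvature of an ellipsoid at the point (x, y, z).
K = 1.0 / (a**2 * b**2 * c**2 * (x**2 / a**4 + y**2 / b**4 + z**2 / c**4) ** 2)

print(np.sum(K * dA), 4 * np.pi)   # both come out to ~12.566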
Thus I believe a more correct interpretation is this:

Ephase -> k1, Mphase -> k2, F -> overlap (of neighboring field geometries)

The electric and magnetic fields are symmetric with respect to each other over two "indices of symmetry", k1 and k2, whose total value is conserved across an infinite number of different 3d geometries. So when we try to define a single quantum of charge we have to be much more careful than implicitly assuming that it is enough for the total "symmetry" to be conserved. As I explained in my first post, for the 2d frequency domain the only geometry capable of producing the normal distribution is triangles, so it is a far more useful candidate than anything else, unless of course someone discovers some other subtle nuance of physics beyond the inclusion of geometry at the quantum level. Thank you.