pioneer

Everything posted by pioneer

  1. Yes it does. This type of model would be the base layer, to help us see how the cell integrates in space and time. To be useful, these results would be translated into the existing science to give specific chemicals. What it brings to the table that is different is the ability to exploit the nervous system for medical treatment. The nervous system implies memory-based tissue throughout the body, near all the cells of the body, more or less. There is not enough chemical transmission from much of this local nervous tissue to exploit it with conventional wisdom, but at the level of hydrogen bonding we have a way to transmit effect. If we look at the body and its cells, three types of important tissue are everywhere: circulatory, lymphatic and nervous. The current state of the art does a good job with the circulatory and lymphatic via medicine. The need for empiricism exists because the third tissue is not exploited. This can be addressed with an H-style analysis. It is sci-fi at this time, but say the kidney was sick: we trace its nerve connections to the larger branches that connect to the spine. Based on kidney-nervous profiles of H-potentials that stem from optimum kidney function, we transmit these artificially into the nervous control system of the kidney and induce the kidney back to health. Since the H effect has a biochemical parallel, this may still require medicines to work, and may act more like a way to increase their effectiveness in difficult situations. That is way in the future. For now we need to crawl with the basics.
  2. Here is an observation. The EM, weak and strong nuclear forces each attract matter in their own ways, give it velocity, and output energy as the system moves toward lower force potential. If gravity is a force, what type of energy does gravity output when the system moves toward lower gravitational force potential? Let me rephrase the question with observations. Say we have a cloud of gas, like a nebula. It is so spread out that the GR effects are very small at the very beginning. The falling toward the center of gravity lowers the gravitational potential within the nebula, which causes the system-wide GR effect to increase. Based on the other forces, this suggests that a lowered system-wide gravity potential results in higher GR. This lower energy state, called higher GR, increases the potential with the remaining mass, causing it to fall toward the center of gravity even faster. It is somewhat analogous to chilling an ice core more and more, causing water to condense faster and faster on the core. In other words, the remaining sparse matter is at the same original energy; only the lower end of the potential (center of gravity and GR) has gotten lower in energy, thereby increasing the attraction potential with the sparse mass. The question is, since the higher GR at the center of gravity implies lower system potential, what kind of energy is being released into the universe? If I were to speculate: since one does not measure any traditional energy output from gravity, and since the goal of lowest gravity potential is a zone of higher GR, the energy output should have some type of connection to relativity, i.e., virtual energy. Is it possible that the theory of dark matter/energy is describing the virtual energy that outputs from gravity when gravitational potential lowers to create zones of higher GR? I realize this sounds backwards to traditional wisdom, but do an energy balance. If higher GR meant higher potential, that would mean matter gravitating toward higher potential. Where would the energy come from to push matter up the energy hill toward higher potential? Gravity would have to be due to some type of external push that is endothermic. With higher GR at lowest potential, the sparse matter will flow in the direction of lower energy without requiring any assistance but lowering potential.
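To put a number on the energy released when gravitational potential lowers, here is a minimal sketch. It is standard Newtonian self-gravity arithmetic, offered only to size the effect; the solar-mass cloud and the parsec-scale radii are illustrative assumptions, not values from the discussion above.

```python
# Sketch: energy released by gravitational contraction of a uniform cloud,
# using the standard binding energy U = -3*G*M^2/(5*R). The numbers below
# (a solar-mass cloud, parsec-scale radii) are illustrative assumptions only.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
PARSEC = 3.086e16    # meters per parsec

def energy_released(mass_kg, r_initial_m, r_final_m):
    """Energy (J) given off as a uniform sphere contracts from r_initial to r_final."""
    u = lambda r: -3 * G * mass_kg**2 / (5 * r)
    return u(r_initial_m) - u(r_final_m)   # positive when r_final < r_initial

# Example: a 1 solar-mass nebula contracting from 1 pc down to 0.01 pc.
print(f"{energy_released(M_SUN, 1.0 * PARSEC, 0.01 * PARSEC):.3e} J")
```

Conventionally this released energy shows up as heat and radiation as the cloud contracts; the question above is whether anything beyond that is output.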
  3. I am not saying that H is the only reason for cellular activity. The more well-established traditions of biochemistry play their own role. But H is the organizing factor that integrates the cell. Empiricism is currently required because this organizing factor is not included as part of the analysis. Let me present a bulk observation to show both effects working together. The most ATP-energy-intensive process of the cell is the pumping of ions, especially by the Na-K pumps. In neurons, this reaches 90% of the cell's energy. It has to be of primary importance to be given so much energy. The effect is to set up a charge gradient across the exterior membrane of the cell, with the outside positive and the inside negative. At the level of water, this means the H of water will have more potential outside the membrane, due to the competition with the exterior positive charge, and less potential inside the membrane, due to the helpful effect of the inside negative charge. Relative to the outside H of water, this high-potential H effect will conduct away from the cell into the exterior water, until it reaches steady-state H potential with the exterior environment. This implies an electrophilic potential gradient that is highest at the exterior of the cell and decays at some distance into the exterior water. This is the universal way for a cell to begin attracting food, which is high in electron density, at a distance, well before the transport proteins do their thing. The transport protein will cherry-pick from this generic abundance of electron density. The reduced material also implies a source of lowered-potential H that can help lower the outside H potential, even when electron density is not very obvious. If you look at a transport protein, its equilibrium position in the membrane will require that it set itself up to minimize its potential across the membrane. That is why the business end of the transport protein always ends up in the correct orientation. It is not random. It too is held together with hydrogen bonding, and with things that have an impact on the potential within all this hydrogen bonding. It offers a way to conduct H potential from the higher-potential outside of the cell toward the lower-potential inside. This occurs during transport. Both effects are occurring, i.e., chemical and H potential, at the same time, with the H effect helping the more traditional chemical observation. When something is transported, the cationic potential stored within the membrane is used as the source of energy. This means the outside of the membrane, near the transport protein, will temporarily become less positive, since this potential is being used as transport energy. This little zone will be induced to a lower H potential than the bulk exterior cell membrane during a transport cycle. The exterior surface of the membrane has a time-average bulk positive charge that is superimposed with smaller islands of dynamically lowered H (charge) potential. The result is a very complex interference H signal being transmitted into the exterior water that is both generic for bulk food, but also semi-specific due to the H-bonding interference patterns in the exterior water from very specific types of protein effects. This does not discount the importance of the traditional wisdom; it adds a prescreening tool that makes the traditional wisdom more effectively directed and much less empirical. One may say that even if H potentials are occurring, these are small. How can very large effects occur that require far more energy?
The easiest observations are thunderstorms and lightning. With lightning, one is dealing with positive charge connected to H, creating amplified effects. The hydrogens are small as individuals, but because they work as an integrated team they can display far more strength than expected. That is why ATP energy can break bonds an order of magnitude stronger. It uses the help of the hydrogen-bonded protein structures. The energy of ATP is a little stronger than the best hydrogen bonds. That is no coincidence! It is just enough to get the integrated H-bonding to discharge effects. The energy level of ATP should have made this obvious decades ago.
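To anchor the membrane charge gradient in familiar numbers, here is a minimal sketch using the standard Nernst equation, which is textbook electrochemistry separate from the H-potential interpretation; the ion concentrations are typical textbook values for a mammalian neuron, assumed for illustration.

```python
import math

# Sketch: Nernst equilibrium potentials for the gradients the Na-K pump
# maintains. Concentrations (mM) are typical textbook values for a neuron.

R = 8.314      # gas constant, J mol^-1 K^-1
F = 96485.0    # Faraday constant, C mol^-1
T = 310.0      # body temperature, K

def nernst_mV(z, conc_out_mM, conc_in_mM):
    """Equilibrium potential (mV) for an ion of charge z across the membrane."""
    return 1000 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

print(f"E_K  = {nernst_mV(+1, 5.0, 140.0):+.1f} mV")   # K+:  5 out / 140 in
print(f"E_Na = {nernst_mV(+1, 145.0, 12.0):+.1f} mV")  # Na+: 145 out / 12 in
```

The roughly -89 mV and +67 mV equilibrium potentials are the conventional measure of the cationic potential stored across the membrane that the transport cycle draws on.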
  4. It is possible to model the cell in terms of one variable, i.e., hydrogen bonding. To make this possible, hydrogen bonding needs to be revised. To build some background for this revision, consider the two bases Cl- and OH-. Both have one extra negative charge, but OH- is a stronger base. The reason is that charge alone is not sufficient to explain relative basicity. One also needs to include the effect of the magnetic fields around the atoms to get the entire electromagnetic effect. A magnetic field is generated by a charge in motion. Writing these anions in the simple way above does not give one a good feel for the fact that this extra negative charge, or electron, is moving at about 1/14 the speed of light. At that speed it is giving off a magnetic field as it circulates within these anions. With the electromagnetic force, a unified force composed of both electrostatic (charge) and magnetic (charge in motion) aspects, the relative basicity implies that although both give off the same electrostatic forces, i.e., both have an extra electron, the Cl- is more stabilized because its extra magnetic stability lowers the effect of the negative charge. Or, it has a lower electromagnetic potential and therefore lower basicity. This electromagnetic (EM) effect is the basis for electronegativity. Atoms with the highest electronegativity have better magnetic addition, which in turn lowers the impact of the electrostatic repulsion between electrons. When Cl- gains its extra electron, the octet is full, allowing extra magnetic addition that lets the Cl compensate for the extra charge. That being said, let us look at the most important molecule of life, i.e., water. Because the O atom is more electronegative than H (offers much better magnetic addition), the water molecule will develop a slight dipole, with the O becoming negative and the H becoming positive. Although the charges are equal and opposite, the EM force fields coming from each side are not the same. The H carries more EM potential than the O. It has to, since O is more electronegative and stabilized the extra charge. An easier way to see this is to consider the molecule HCl, or hydrochloric acid. This molecule is a strong acid with a very weak conjugate base. That means the H side has far more potential than the Cl side, even though the dipole charge is equal and opposite. It is not the charge dipole that is causing this disparity in potential, but the magnetic addition. With respect to H2O, a similar effect occurs due to the much higher electronegativity of the oxygen. The O has excellent magnetic addition, which diminishes the EM potential of the slight negative charge. The H is left holding the primary burden of the potential, since it has both positive charge exposed and has lost magnetic addition to the O. When an H-bond forms, the H has more to gain than the O, or else O would not have taken the extra charge in the first place within the single H2O molecule. As such, when looking at H-bonds, one only needs to consider how much residual potential is left in the H, since it is the one carrying the burden. A hydrogen bond will minimize potential if the H-bond is linear at a critical bond length. Any deviation away from this optimum will mean the hydrogen is carrying some residual potential. The reason is that hydrogen bonds have partial covalent character. The straight bond angle of 180 degrees is needed to optimize magnetic addition.
It is due to the magnetic force following the right-hand rule of perpendicular effect. If the angle is off, one can't get the magnetic fields to add perfectly. The result is that the H will retain some of its residual EM potential. If you look at the average run-of-the-mill enzyme, a large proportion of the H-bonds are not at 180 degrees. This stores H potential, which amounts to electrophilic potential within the structure of the enzyme. One way the enzyme tries to lower this is to pull the reactant into an excited state in an attempt to feed electron density to the hungry H. This mechanism is very generic, with enzymes developing lock-key specificity so the generic need of the H can lead to very specific results. If we look at ice, the hydrogen bonds are all, more or less, formed in the perfect way, with the vast majority at minimum potential. Even though the perfect ice crystal, as a system, is at minimum energy, because the O still has the highest electronegativity, the H are still not at their minimum potential. The ice is at minimum potential with respect to the system of H and O, and with respect to the hydrogen bonds, but the hydrogen still has potential. As an analogy, picture two lionesses sharing a piece of meat. The stronger lioness will get more of the meat. The two-lioness system will end up at minimum potential, even though the stronger lioness will get more. As such, even though the system is at minimum potential, this does not mean the weaker lioness is as full as the dominant lioness. In this case, minimum system potential will still leave the weaker lioness hungry. If H were the only atom in the universe, H2 would be closer to minimum potential relative to the EM needs of H. It is not competing with the highly electronegative oxygen but with a similar lioness that will share equally. The relatively low electronegativity of C, which is close to that of H, offers H the best way to reach a state of minimal potential. Again, I am not talking about system potential in our environment with the hungry O; I am talking about the goal of lowering H potential to a minimum using the ideal system for the H. The sun, by helping to make reduced C, or C-H, i.e., photosynthesis, essentially helps the H get about as low in potential as possible, in spite of the needs of the O system. The O wants to increase the H potential so it becomes part of H2O, with much higher potential. If we look at the H in cells, they have a wide range of potential relative to the ideal EM needs of H in its own perfect system. The perfect hydrogen bond is in the middle, at minimum system potential, but will leave the ideal H with some potential relative to the ideal. A free H+ is at the highest potential and will lower its potential as the hydrogen bond gets more and more perfect in distance and angle. Reduced H, such as C-H, is a state where the hydrogen drops below system potential to where its own potential is much closer to the ideal minimum. Unfortunately for H, this creates a potential to be oxidized so hydrogen can carry the burden. When modelling the cell in terms of the one variable H, it is easier if one thinks in terms of needs or potential in H instead of the usual O. It makes things much easier to conceptualize and model. I am not suggesting converting all of chemistry to H normalization, only the life sciences.
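To make the geometry argument concrete, here is a minimal toy sketch; the functional form and the constants are invented purely for illustration, not fitted to any data, and the ideal bond length is a typical textbook O...O distance.

```python
# Toy sketch of the idea that an H-bond carries residual potential when it
# deviates from the ideal geometry. The quadratic form and constants are
# invented for illustration; they are not fitted to any real data.

IDEAL_ANGLE_DEG = 180.0   # linear donor-H...acceptor arrangement
IDEAL_LENGTH_A = 2.8      # typical O...O donor-acceptor distance, angstroms

def residual_potential(angle_deg, length_A, k_angle=0.002, k_length=5.0):
    """Arbitrary-unit residual potential; zero only at the ideal geometry."""
    return (k_angle * (angle_deg - IDEAL_ANGLE_DEG) ** 2
            + k_length * (length_A - IDEAL_LENGTH_A) ** 2)

# A near-ideal ice-like bond vs. a strained bond inside an enzyme.
print(residual_potential(179.0, 2.80))   # ~0: little stored potential
print(residual_potential(155.0, 3.05))   # larger: stored electrophilic potential
```

An ice-like bond scores near zero, while a strained enzyme bond stores the residual potential the argument above attributes to the hungry H.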
  5. This is not true in every case, although it is true in many cases. Since a correlation can't tell the difference between the two, and because correlation is vulnerable to political spin, it is treated like one size fits all, even though this is entirely irrational. Correlations are vulnerable to a certain degree of irrational spin. Even if one uses the documented cases, how is the effect of air pollutants taken into consideration, since many people who do not smoke will also develop cancer, including lung cancer? Is this lumped into the effect and spun to create the illusion that smoking is doing that too? Secondhand smoke may be the spin correlation that was needed to account for some of the unexpected exceptions. How does one factor out the outgassing from carpeting, diesel fuel, and interior auto vinyl from secondhand smoke? One does not have to when it comes to correlation; spin will do just fine. I am not suggesting that hydrogen atoms have some internal something that allows them to communicate with other hydrogen atoms. It is done with electromagnetic forces and subtle energy like radio waves. If we have a gas in the atmosphere, radio waves can cause simple changes. But the large biomaterials in the cell are too bulky to get much bulk effect. This weak energy tends to have more of an impact at the weak level of the hydrogen atoms and on the hydrogen bonds. That is what NMR does. I will continue this discussion of empirical correlation and political spin. But I am also going to begin a topic, in general chemistry, that will deduce a new way to look at hydrogen bonding, which will make it possible to model the cell and multicellular organisms in terms of only one variable.
  6. I hope I was clear that the four axes are equally separated in 3-D space using the tetrahedral angle of 109.5 degrees between all four axes. I don't recommend converting the existing math into this system. But specialty tools sometimes can be used in unique situations. Part of the reason I presented this was connected to using math to support theory in science. Math is clean and pure, but it is also a horse that can be led anywhere one desires, depending on the theory. In the tetrahedral system, since there is no negative, negative charge would have to be modelled with positive parameters. It would have three or four aspects, which will differ from positive charge: (a,0,0,0) and (0,b,c,d), respectively. If we let the math lead the theory, then either the x,y,z or the a,b,c,d system is leading knowledge down the path of an illusion. This problem is especially common in physics, where dimensions are increasing. The math is a faithful workhorse, but theory can lead it anywhere. So if the math adds up, it is still not certain the result reflects reality. In the western religious tradition, God is considered a trinity. If we assume a type of symbolic parallel between God and reality, this would imply that x,y,z is the correct system for modelling reality. The tetrahedral system will leave one hanging in a state of suspension. You'll gain wisdom but pay a price. One needs to look at the tetrahedral system as a specialty tool.
  7. A way to discuss P&R and science in a scientific way is to correlate these to the two hemispheres of the brain. Science is more left hemisphere. Religion and faith are more right hemisphere. Philosophy tries to build a bridge between both sides of the brain. Let me better illustrate how the two hemispheres differ. The left is more differential, while the right is more spatial or integral. As an example, say we came across a new type of yellow we never saw before. The right side of the brain would allow us to know it is a type of yellow. The left side would have noticed this new yellow because of its difference, but the right side will be the first to lump it into some type of common category. The spatial nature of the right side puts similar memories in the same piles, such that the new yellow will end up in the yellow pile, allowing us to see similarity. The left may then decide to give it a name, like lemon yellow. This will help store the memory in the left side's differential database. The next time we see that lemon yellow, we will be able to access it with either side of the brain. One can see the difference between science and religion. Science works hard to differentiate data but doesn't like nebulous theory. The nebulous theory is an unknown yellow that doesn't yet have a name. Religion works with nebulous concepts that are hard to pin down in a differential way. At the level of the right hemisphere, religious people know it is a yellow, but that is too nebulous for science, which prefers it be more differential. If you look at the evolution of computers, computers can do anything the differential or left side of the brain can do, such as memorizing and logic. But computers can't do the creative things the right side can do. This side of the brain uses a different approach to get its results. It uses a very fast language that is not easy to translate using regular language, since the yellow pile may have hundreds of different but similar data points, all with one name: yellow. A religious person may equate all of creation with God, with a wide range of differential data all combined in this memory. It is pregnant with meaning but just too 3-D and fast for translation. For example, if we recorded 10 minutes of audio and then played it back in 1 minute, all the data would be there, but it would be too fast to get more than bits and pieces. If ten different people listened to it, they might each get something entirely different, with none getting the gist. That is the problem with the right side of the brain: it is too fast. Typically it works best when a bit or piece just pops into consciousness so it can be stored in the left and then subjected to further differential analysis. If we take our audio tape and play it slower, it gets easier for everyone to create a consensus of thought. The left side plays slower, making it easier. The right is too fast and tends to get stuck in the nebulous. Religion is an area of knowledge that allows some conscious access to the right. If you compare evolution to creationism, one can get an idea of the relative memory speed going on in either side. The slower differential left side of the brain will tend to stretch things out so it can see more details. The right side is fast, dense and compact and will tend to compress. Many religious people can sense the fast memories playing, but it is too fast to translate easily and is often associated with the above-human. It tends to gain collective left-hemisphere translation through traditions.
The way I look at it, the right side of the brain is more powerful than the left side, but it is too fast to use effectively, making it less functional. It is like having a Corvette on ice. A golf cart might work better on ice. Language is much better suited to the functioning of the left hemisphere. The paradox is that someone with a high level of left differential will have loads of good data automatically placed into many 3-D piles in the right side. The right side has already integrated complex ideas for them, which, if they could access them, could be very beneficial. But by specializing they have very little conscious functionality in the right to gain access. On the other hand, the religious person may develop a strong awareness of many of the 3-D piles in the right side, but because they don't use the left enough to extend the differential database, the piles are skimpy and may not be progressed enough to be very useful for extended adaptation. The debate between science and religion is between the two hemispheres of the brain, with each seeing the world differently due to functionality.
  8. If we started with a stationary rocket, to make it reach relativistic speeds we would need to input energy, such as fuel. The SR effects that result will be dependent on how much energy we add to the system. The reason is that energy is needed to create relativistic mass, while relativistic mass is not dependent on relative reference but on energy input. Let me give an analogy. It is called the 2-parameter SR workout. We go to a track and sit in a chair. We then focus on the fastest runner and, using relative reference in space-time, we pretend to be moving such that the runner appears stationary. Does this relative motion give us exercise? This is an illusion due to using only two out of three parameters of SR. If we use 3-parameter SR, which includes relativistic mass, then there is only one moving reference, which is the runner. We are stationary by this 3-parameter standard, with relative reference more of a mind game. That being said, if we look at 3-parameter SR objects, the amount of SR is directly related to the amount of energy. Using 3-parameter SR and energy as the guide, it is not relative but absolute. This energy not only makes relativistic mass but also an absolute amount of space-time effect. Again, relative reference ignores energy dependence, making the space-time effect relative. If we include energy, the space-time effect isn't relative anymore but absolute, with a very intimate connection to the amount of relativistic mass. In that respect it is like GR requiring mass to work, with space-time alone not sufficient to create the entire effect. So if we have large SR objects in space, they define x amount of energy. The amount of space-time effect is directly related to this energy. The type of ripples in space-time it can create is also energy dependent. If we add energy, then its absolute amount of SR will increase. This is the state of the moving objects in the expanding universe. We need to add energy to make them move at faster and faster speeds. The ripples in space-time will change as time moves forward, since they are accelerating. If we look at GR, the opposite is in effect. Gravity contains the most potential energy when the mass is all spread out. The mass lowers potential as the matter compresses and space-time contracts. This is the opposite of SR, which has more space-time effect at higher potential. So if the universe is expanding space-time at an accelerated rate via GR, this would be endothermic and would require a source of energy input. This is consistent with the accelerated movement of SR objects also being endothermic. Either way, this will require a powerful source of energy to be possible. One simple explanation for the energy to create this effect is an output that results as GR increases, due to loss of gravity potential in the mass. If you think about it, such an output will induce a uniform expansion of the universe, relative to the galaxies, since these are the basic units of gravity currency in the universe. It would only work if gravity has an effective range somewhere at the level of galaxies. The effective range of GR only works within this galaxy range, but the energy output keeps going outward as an energy to fuel the acceleration. The acceleration induces an evolving space-time ripple interference. Gravity is an odd duck compared to the other three forces, in the sense that the other three forces give off some type of measurable energy when they lower potential. For example, the EM force will create motion and give off energy when the potential lowers.
Gravity potential will create motion, like the EM force, but does it give off some form of energy? It is not obvious, such that maybe this energy is different, i.e., dark energy. Virtual energy would be the easiest way to transfer potential between GR and SR, since relativity creates virtual mass/energy effects.
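For the energy bookkeeping behind the "3-parameter" point, here is a minimal sketch using only standard SR kinematics, to show that the space-time effect tracks the energy put in; the 0.9c speed and 1 kg mass are arbitrary illustrative choices.

```python
import math

# Sketch: standard SR bookkeeping for the "energy in, space-time effect out"
# point. The kinetic energy needed to give rest mass m a Lorentz factor gamma
# is (gamma - 1) * m * c^2.

C = 2.998e8  # speed of light, m/s

def gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def kinetic_energy_J(mass_kg, v):
    """Energy input required to accelerate mass_kg from rest to v."""
    return (gamma(v) - 1.0) * mass_kg * C ** 2

v = 0.9 * C
print(f"gamma at 0.9c       : {gamma(v):.3f}")                 # ~2.294
print(f"KE for 1 kg at 0.9c : {kinetic_energy_J(1.0, v):.3e} J")
```

In this bookkeeping, only the object that received the fuel carries the extra energy, which is the asymmetry the runner analogy is pointing at.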
  9. There is an easier, more rational way to express the living state, which I have been working on for the past two decades. It is possible to express the complexity in the cell in terms of one variable, which is hydrogen bonding. Essentially, chemical complexity can be reduced to this one common variable. This one variable is in all proteins, RNA, DNA and water. Bulk structure defines the hydrogen bonding states. Perturbing these hydrogen bonding states in simulation allows us to predict the type of structure needed to achieve a particular hydrogen effect. The current theory places the DNA at the top of the pile. But in reality the DNA is the hard drive that stores memory, while the H is the CPU that allows the hard drive to coordinate with the rest of the cell. The DNA defines a dynamic H-bonding environment both within itself and within the water that surrounds it. If the aqueous H environment around the DNA changes, due to directed perturbations in the cytoplasm, the equilibrium H-bond nature of the DNA will cause it to assume this new induced dynamic state. One may ask how it is possible for hydrogen to interact throughout the cell in a coordinated way. The easiest way to explain this is NMR. With NMR we use radio waves to vibrate and identify H within the living state. What radio waves bring to the table is a certain degree of transparency. One possible explanation is that the hydrogen makes use of transparent radio energy so it can coordinate H in spite of all the big atoms. The H-bond approach allows a way to address the effect of the brain and nervous system on cellular differentiation control in the human body. The nervous tissue near the cells in the body implies memory tissue near nearly all cells in the body. These cellular control memories are a new frontier that can be exploited, but they require addressing in terms of H-bonds. These nervous memories do not have enough chemical output to explain a control connection using existing theory. But at the level of hydrogen bonding potential, these effects are much more rational. If one wishes to address the brain, it is far easier using one variable. In my experience the hydrogen potential effect always occurs in gradients, from the simple molecule, to organelles, to cells, to multicellular structures, to entire multicellular organisms. As such, modelling the brain simply amounts to defining the potential gradient hierarchy, which can be logically inferred. This model is highly scalable, even beyond the range of genetic theory. For example, learning is only indirectly connected to genetics, in that genetics defines the chemicals behind the mechanism of learning but does not place any practical limits on the type of content one wishes to learn. But at the level of hydrogen bonding potentials, different memories should have different potential signatures in the brain, and can therefore impact the types of potential gradients for additional thought processing. Where there is a will there is a way is connected to H potentials. I can think about food in my imagination and make myself hungry. I essentially use H-potential to tweak the DNA in cells so they output hunger chemicals. Humans are not limited to genetic hunger cycles but can tweak the DNA at will using neural transmissions. The top of the H-bond CPU in the human body lies with the brain. The hierarchy from there works almost like a hologram, with lower and lower levels using the same basic gradient potential schema. That is the simplicity of the model.
It only uses one variable and one basic gradient potential schema at any level of life. This allows one to do much in one's head before needing to go to the lab.
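Since NMR is the bridge being leaned on here, a minimal sketch of the standard proton Larmor relation (textbook NMR physics, independent of the one-variable model itself) shows the radio frequencies at which H actually responds:

```python
# Sketch: Larmor frequency of protons in a magnetic field -- the standard NMR
# relation nu = gamma_bar * B, with gamma_bar ~ 42.577 MHz/T for 1H.

GYROMAG_1H_MHZ_PER_T = 42.577  # proton gyromagnetic ratio divided by 2*pi

def larmor_MHz(field_T):
    """Radio frequency (MHz) at which 1H nuclei resonate in a field of field_T tesla."""
    return GYROMAG_1H_MHZ_PER_T * field_T

for b in (1.5, 3.0, 11.7):          # clinical MRI and lab NMR field strengths
    print(f"{b:5.1f} T -> {larmor_MHz(b):7.1f} MHz")
```

These resonances sit squarely in the radio band, which is the transparency point made above; whether cells exploit it for coordination is the speculative part.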
  10. If you look at CO2, it is a linear molecule, O=C=O, which can store and reflect energy in the atmosphere by its bending, vibration and rotation. It has been shown to have a significant impact on global warming. If we react CO2 with H2O we get H2CO3, or carbonic acid. One may notice that the formerly linear CO2 will change its bond angle from 180 to 120 degrees when it reacts with water and forms carbonic acid. As such, its degrees of freedom for storing energy are different. It can no longer rotate except as part of H2CO3, while its vibrational and bending energy levels are now also different and work in conjunction with the H2O aspect. The CO2 will now share its stored energy with the H2O. As such, because there is always water in the atmosphere, sticky but reversible collisions between CO2 and H2O will alter the time-average energy-storing/reflective capacity of CO2, transferring energy to H2O. The metastable formation of H2CO3 will be exothermic. The reversal energy needed to reform CO2 and H2O goes into both entities, leaving CO2 with less energy than it started with. The H2O can get rid of its higher energy when it forms clouds and condenses into rain. As the amount of CO2 increases, the warming causes more water in the atmosphere due to the increased ease of evaporation. This will reduce the effective heat-storing capacity of the average CO2 molecule, since the concentration of water will rise faster than the CO2 concentration.
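The degrees-of-freedom change can be counted with the standard 3N-5 (linear) versus 3N-6 (nonlinear) vibrational mode rule; here is a minimal sketch applying it to the molecules above:

```python
# Sketch: vibrational degrees of freedom via the standard 3N-5 (linear) /
# 3N-6 (nonlinear) mode count, applied to the molecules discussed above.

def vibrational_modes(n_atoms, linear):
    """Number of vibrational modes for a molecule of n_atoms atoms."""
    return 3 * n_atoms - (5 if linear else 6)

print("CO2  :", vibrational_modes(3, linear=True))    # 4 modes
print("H2O  :", vibrational_modes(3, linear=False))   # 3 modes
print("H2CO3:", vibrational_modes(6, linear=False))   # 12 modes
```

The separate molecules carry 4 + 3 = 7 vibrational modes; bound as nonlinear H2CO3 they carry 12, which is the redistribution of energy-storing freedom described above.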
  11. The point I was trying to make is that both GR and SR cause changes in local space-time. GR does it stationary, while SR does it with velocity, but the effect on space-time should be very similar. Both are present in the universe, especially in light of gravity and the rate of expansion that is inferred from the red shift. GR and SR space-time effects should interfere and add/subtract effects. As such, if gravity is GR, can the SR interference effect make gravity's GR indeterminate with respect to locating distant objects? Martin placed the effective range of gravity at 16 billion light-years. This should be too high if the SR interference effect is valid. If the 16 billion light-years is indeed correct, then GR alone is not sufficient to define gravity. It would have to have another component, which can act independently of SR interference in space-time when gravity gets low in magnitude. One cannot have it both ways, since these are mutually exclusive. The way the universe is laid out with respect to its basic unit of currency, galaxies, combined with the relatively narrow bandwidth of galaxy size and the bulk accelerated expansion relative to this galaxy size, strongly suggests that gravity does have a relatively low effective range, somewhere in the order of magnitude of galaxy size. This limited effective range could be explained by the SR space-time noise created by the bulk accelerated expansion of the universe. This keeps GR consistent with gravity without needing any extra addendum. The extra addendum would only be needed if the 16 billion light-year estimate of effective and determinate gravity were correct, which the observational layout and expansion of the universe do not appear to support. One has to add fudge factors to keep the range at 16 billion light-years in spite of galaxies flying apart. Let me add something extra at this point. If the universe is undergoing an accelerated expansion, it will be bigger and expanding faster tomorrow. Also, it was smaller and expanding slower yesterday. If we draw a line or curve through these three data points and extrapolate back in time 15 billion years, the early universe should have been very small, with only a small expansion velocity, but would also be expanding at an accelerating rate. The logical extrapolation of the curve back to near the beginning, through these three points, implies a gentle beginning for the universe via a slow expansion that gradually speeds up, i.e., the universe did not begin with a bang but with a gentle push. One can use complex curves to get whatever one wants, but the simplest curve is often closer to the truth. What this implies is that the post-beginning of the universe was almost pure GR, with gravity having maximum range. As the expansion accelerates, SR noise begins to appear, a little at first, and increases. This would cause the extreme determinate range of GR to get less and less. The result should be the uniform expansion becoming slightly discontinuous, due to the low SR noise helping to form the bulk superstructure. As the expansion continues to increase the SR noise, the superstructure GR begins to see the noise and breaks into smaller and smaller substructure until galaxies appear. One does not really need density perturbations, just SR noise due to an accelerated expansion, to make galaxies via gravity indeterminacy, with the increasing SR noise increasingly limiting the effective range. The next question is: why the accelerated expansion of the universe and the increase in universal SR?
To answer this we need to look at one basic observation stemming from particle accelerator data. It has to do with the observation that the substructure of, say, a proton does not last when it is outside the confines of the proton, compared to within a proton. Without getting fancy but staying simple, the easiest explanation is that the substructure within the confines of a proton is time dilated. Once we break the time dilation, the substructure loses its immortality. In other words, what we see in a particle accelerator is its true life expectancy. But when time dilated within the confines of a proton, it appears to last for billions of years in our earth reference. This time dilation was given to it during the early part of creation. This substructure time dilation within a proton can't be due to GR, since the mass is too small. As such, it needs to be an SR-type effect due to a state that is close to C, i.e., almost energy. If we compare the 10-20 billion year life of a proton to the nanosecond life of the substructure once it leaves the proton, V would have to be only a tiny blip below C to generate that much time dilation, i.e., almost energy. A simple lowering of the SR value within the subparticles should generate the massive collective SR output needed for expansion. Because SR noise lowers the effective range of gravity, the SR output from the innards of protons, etc., can get less and less and still keep the acceleration going, since gravity becomes less and less a factor. As such, as the universe evolves and bleeds the innards of SR, there is less left over at the same time it takes less and less to keep the expansion going. This post merged; strange synchronicity. I have a tendency to extrapolate, so forgive me for now. But the basic question still remains: are the ripples in space-time due to GR and SR essentially the same, especially after they leave the moving source? If so, do these add/subtract or create interference that makes the space-time effects from gravity's GR see noise at long distances? It seems logical, although it may be very hard to prove with experiment. But it does create reasonable doubt with respect to the practical range of gravity. Looking at the universe, gravity is indeed showing practical effects within the scale of galaxies, since these zones compress, rotate, etc. But beyond that, the universe is expanding with respect to the galaxies. One may say it is the sheer distances that cause this problem. But if we extrapolate back in time, when things were much closer, why didn't the universe expand only relative to the superstructure, which was once closer? I did speculate about the substructure of the proton having its innards time dilated. No other form of composite in chemistry leads to the chemical substructure vaporizing faster than the superstructure. We can plasma water into O and H and this substructure will last just as long. Do it to a proton and the time parameter of the innards is very different. Rather than speculating dark matter and energy as something new, bleeding off innard SR can do what we need. The SR output should be relativistic mass (virtual), or dark matter, and the combination of time and distance relativity forms virtual frequency/wavelength, or dark energy. It is consistent, but allows us to get to a possible source faster, which in turn may allow us to simulate dark matter and energy in the lab.
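To see how tiny that blip would have to be, here is a minimal sketch applying straight SR arithmetic to the admittedly speculative numbers above (a ~1 nanosecond intrinsic lifetime stretched to ~10 billion years):

```python
# Sketch: SR arithmetic on the speculative numbers above -- what Lorentz
# factor stretches a ~1 ns intrinsic lifetime to ~10 billion years?

SECONDS_PER_YEAR = 3.156e7

lifetime_free_s = 1e-9                          # nanosecond, as assumed above
lifetime_dilated_s = 1e10 * SECONDS_PER_YEAR    # ten billion years

gamma = lifetime_dilated_s / lifetime_free_s
# For large gamma, 1 - v/c is approximately 1 / (2 * gamma^2).
one_minus_beta = 1.0 / (2.0 * gamma ** 2)

print(f"required gamma : {gamma:.3e}")          # ~3.2e26
print(f"1 - v/c        : {one_minus_beta:.3e}") # ~5e-54, a tiny blip below C
```

So under these assumptions the internal state would have to sit within about 5 parts in 10^54 of the speed of light, which is the "almost energy" claim in numbers.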
  12. I am not suggesting moving away from x,y,z, but this system requires using negative numbers to populate the entire coordinate system. The four axes of the tetrahedral system are not perpendicular, but are 109.5 degrees to each other. As such, there is no need for negative numbers, since what should be the negative of, say, a can be expressed with the other three positive axes due to their 109.5-degree angle. I am not a mathematician and my skills are very atrophied. But this grid does lead to some very interesting implications, like avoiding the square root of -1. Here is an interesting thought. Say the fathers of mathematics had chosen this tetrahedral system instead of Cartesian coordinates; the theory of negative and positive charge would have never occurred. The positive charge may have been defined as (a,0,0,0) and the negative charge as (0,b,c,d). It is interesting to speculate that physics would now be very different when it comes to the EM force. Science would be looking for the three or four parameters of charge, and may actually find them. With the current system, plus and minus are good enough. They would make use of the system and work with what they had. The tetrahedral system is interesting for speculation, but would not be easy to use after centuries of convention that uses 3-D. It could open many cans of worms that may be better left in the can. For example, space-time is 4-D; would that simply fit the tetrahedral axes, or still need to be treated as an extended system with time out of the loop? Philosophically it is interesting in that without plus and minus there is no hard philosophical polarization of positive and negative, good-evil. Rather, polarization is simply the opposing ratio of otherwise positive parameters. This is sort of like the subjectivity of right and wrong that is the study of philosophy.
  13. Some areas of science are restricted to empiricism because the theory is not good enough to bump these areas into the next level. I have no problem with the methods and the data, only that empirical correlations are vulnerable to political spin and therefore have an irrational aspect to them, which makes them less than fully rational. For example, hypothetically, if I were to suggest that aerobic exercise lowers the health risks of cigarette smoking, one could run the tests, gather the data and find at least some examples that would fit the correlation. Under the current political climate this would be downplayed, since the political spin wants to ban cigarettes and this would be counterproductive. In other words, empiricism is vulnerable to subjectivity. Subjectivity defeats the purpose of the age of enlightenment, which was to use reason to help us avoid the subjective spin that kept society in the dark ages. One cannot subjectively spin the hard number for the speed of light. This scientific fact is insulated from the subjective spin arena. This is the highest standard, in line with the age of reason and fact. Empiricism is essentially a throwback to alchemy, when data was collected using the best they could do and then massaged subjectively to form theories which, although they could predict, were forever out of touch with reality. Spin was then used to give them an extra edge. Einstein sort of warned us when he said he did not believe God chose to play dice with the universe. Empiricism is sort of like gambling, the good gambler doing his research to shift the odds in his favor. But no gambling system is 100% reliable. It is based partially on science and partially on luck and hunches. The best it can do is beat the house. Rational theory not only beats the house but can approach 100%. Rational theory is not gambling anymore, but a sure thing. It is not allowed to play in the casinos of subjective opinion because it wins all the time. One advantage of empiricism is that it is labor intensive, so it does create a lot of jobs. I have no problem with that. But rational science should take a stance, call it tough love, to encourage a weaning away from science gambling and its black market connection to political spin. The life sciences and things that branch off from there are almost always dependent on empiricism. Physics and chemistry were able to jump to the next level, although not 100%. Their influence is the primary rational part of the life sciences. But beyond that, the life sciences are much closer to 0% than they are to 100% with respect to rational prediction. This makes a big chunk of science vulnerable to the masters of image and spin. These people are not going to change their natures, so science needs to take away its vulnerability. Science is looking for truth in nature. If one accepts progress as the best one can achieve, with a bunch of bull along the way, then one is probably a scientific gambler who is happy enough to just beat the house. That is not acceptable to the age of reason. Part of the problem the life sciences face is very complex systems. It was a little easier for chemistry and physics to isolate phenomena, since few of these have the same level of integrated complexity. The theory in the life sciences is not advanced enough to deal with this complexity. Maybe by default they are doing the best they can do. There is a way to make this complexity much less complex, but it is not gambling. Imposing a higher standard is one way to create that necessity.
There is no need for this advancement if gambling is allowed to be the standard. The machine is not broken until one notices oil leaking from the motor.
  14. Here is something that popped into my head a few months ago that I would like to share with the community. This simple system allows one to solve the square roots of negative numbers without imaginary numbers. What it involves is using a tetrahedral coordinate system that has four axes, a, b, c and d, instead of x, y and z. What this does is avoid the need for negative numbers, since the negative of a can be expressed as positive values of b, c and d. It just takes movement between the two systems to get rid of those pesky negative square roots.
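Here is a minimal sketch of one way such a system could work; the specific axes and the conversion are assumptions chosen for illustration, not something fixed by the idea itself. Four unit axes pointing at the corners of a regular tetrahedron sum to zero, so the negative of any one axis is a positive combination of the other three, and every point can be given non-negative coordinates:

```python
import numpy as np

# Sketch of one possible tetrahedral coordinate system (an illustrative
# construction; the idea above does not pin one down). Four unit axes point
# at the corners of a regular tetrahedron and sum to zero, so the negative
# of any axis equals a positive combination of the other three.

AXES = np.array([
    [ 1.0,  1.0,  1.0],
    [ 1.0, -1.0, -1.0],
    [-1.0,  1.0, -1.0],
    [-1.0, -1.0,  1.0],
]) / np.sqrt(3.0)

assert np.allclose(AXES.sum(axis=0), 0.0)    # a + b + c + d = 0

def to_tetrahedral(xyz):
    """Non-negative (a,b,c,d) weights reproducing the Cartesian point xyz."""
    w = AXES @ np.asarray(xyz) * 0.75        # projections onto the four axes
    w -= w.min()                             # shift along the null direction (1,1,1,1)
    return w                                 # all entries >= 0

def to_cartesian(abcd):
    return np.asarray(abcd) @ AXES

p = np.array([1.0, -2.0, 0.5])
w = to_tetrahedral(p)
print(w)                                     # four non-negative weights
print(np.allclose(to_cartesian(w), p))       # True: round trip recovers xyz
```

Adding a constant to all four weights moves along the null direction (1,1,1,1), which leaves the Cartesian point unchanged; that redundancy is what lets every coordinate stay positive.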
  15. Religion can be discussed in a scientific way if we include it as part of the evolution of the human mind. The body still has traces from our evolutionary past, such as wisdom teeth, which don't always drop. Religion is very old and was part of the mental exercise that appeared with the rise of civilization, all the way to the past hundred years or so. It was the wisdom teeth of the ancient mind. Nowadays these religious wisdom teeth come in for many people, but not for all. Many go to the social dentist of pop philosophy to get them yanked, to circumvent potential problems according to temporal wisdom without much history. One has to consider that anything repetitively used by the human mind over thousands of years is bound to have some type of permanent impact on the human psyche that cannot be erased in a couple of generations. Whatever impact it had will probably just find another outlet. Maybe the fascination with celebrities is an unconscious substitute. Other effects can be projections, such as the belief in UFOs, a source of higher beings analogous to tech-angels with anti-gravity wings. There may even be a rehash of religion into other forms of knowledge. For example, physics postulates other dimensions beyond our four dimensions. This is just spiritual rehash, which has been around thousands of years. After pulling their wisdom teeth, many forget to give credit where credit is due. There is a saying that fanaticism compensates for doubt. This is a two-edged sword that works both for and against something one is fixated on. Someone who is hell-bent on the destruction of religion is trying to fight their own sense of inner doubt. Maybe their yanked wisdom teeth left behind roots. It won't be easy to remove thousands of years of religious repetition in one lifetime. One won't be able to change one's ancient body except with artificial aids. One can see the effect of steroids. Maybe spiritual steroids will have a long-term negative impact on the health of the mind.
  16. This dual standard in science is connected to the different perfection constraints placed on different types of science. Let me explain this with an example. If one were to propose a theory for a phenomenon in nature which can explain the observational data and be supported by the math, it will last until data appears that creates contradictions. The theory either has to be able to evolve and accommodate the new data or it becomes obsolete. Theory is subject to a very high standard, which is good. If we look at empirical correlations, which are a different breed of theory, the standards are much more lax. For example, the empirical data equates cigarette smoking with lung cancer, etc. I am not condoning smoking, only using this as an example. If I went into the population, I could find dozens, if not thousands, of exceptions to this rule. If this were a theory, it would be forced to accommodate this conflicting data or be given the boot to the obsolete pile. But the standard for empirical theory or correlation is very slack. In spite of thousands of conflicting data points, this particular correlation doesn't have to evolve, nor is it told to go to the obsolete pile. Empiricism is held to a much lower standard than math or theoretical science, yet it is still considered first-string science. How did empiricism get grandfathered in to such a lax standard? Let me reverse the situation. From now on, say, when making new natural theory, one only needs to be 75% perfect to be acceptable. So there are flaws and exceptions; other areas of science are getting away with it, so what's the big deal? So what if there are dozens of exceptions; the new lax standard only requires that the theory sort of fit, and it is allowed to stay without forced progression by a higher standard. Let us reverse this again. So cigarette smoking creates cancer. That may be true in many cases, but I just found 100 exceptions to this correlation. For the good of science, I am going to hold your feet to the fire of the higher standard, which is placed on many other areas of science. If you don't want the boot, you need to evolve the correlation. We would need to skinny down the correlation/theory to include only data covered under the scope of the correlation, while data that does not apply has to be put into another correlation/theory pile. The higher standard wants perfection, not a bunch of included errors that don't have to be addressed due to some type of grandfather clause of lax standards. If we don't get perfection, we will call it minor-league science or pseudo-science. If one wishes to make the big leagues, one needs to play at that level. One group should not have it so lax while another group is forced to hard labor by a much higher standard of perfection. In defense of empiricism, maybe the lax standards are indicative of the state of the art in some areas of science needing to evolve to the next level. They may just need a good theoretical push.
  17. Making the clouds should not be too hard. The sun will evaporate the water and it will spread out to fill the sky. Water has hydrogen bonding that will make clouds form even without gravity. If it is cold enough high in the atmosphere, the water vapor will condense into rain. Since there is no gravity, the rain would stay in the clouds unless you create a device to make the rain come down. The natural way would be to use the chemical potential between your salty ocean and the pure water in the cloud. The minerals in the ocean water will act like an osmotic wick from cloud to ocean. Eventually you set up man-made waterfalls from the sky that become sort of self-perpetuating, wicking water toward the oceans. The changing of the seasons will affect the waterfall rate, with higher flow in the warming summer and larger currents in the rivers and canals, providing a constant source of pure rain water for the habitable land.
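For the osmotic-wick part, here is a minimal sketch of the driving potential available, using the standard van 't Hoff relation pi = iMRT; the seawater-like salinity is a typical textbook figure assumed for illustration.

```python
# Sketch: van 't Hoff osmotic pressure pi = i*M*R*T between seawater-like
# brine and pure water. The salinity is a typical textbook figure.

R = 8.314           # gas constant, J mol^-1 K^-1
T = 293.0           # temperature, K
i = 2               # NaCl dissociates into two ions
C_SEAWATER = 600.0  # mol/m^3 (~0.6 M NaCl, typical for seawater)

pi_Pa = i * C_SEAWATER * R * T
print(f"Osmotic pressure: {pi_Pa / 1e5:.1f} bar")   # ~29 bar
```

That is on the order of 30 bar, roughly a 300 m water column, which gives a feel for the potential difference such a wick would be tapping.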
  18. Science tries to deal with facts, while politics deals with images. A scientist will try to prove x=y. The politician can make x=y or not. It all depends on where the cards need to fall to get or remain in office. The problem science faces is that politicians hold the purse strings, so if these bosses of science want x to equal y or not, science needs to get with the program or else risk losing funding. One can buy expert testimony to support any image the politicians need in science. Right now the global warming image is supposed to equal y, because politicians can get more political mileage out of that result. Doom and gloom makes them look strong and gives them a reason to raise taxes and interfere in the lives of the citizens. So the funding gets slanted to make this image appear like a reality. The pie is not shared equally to seek the truth but to fulfill the image needed by the politicians. Science is not always altruistic when the bosses are politicians. Many will cater to needed images, especially if they aspire to political promotions.
  19. There are two sides of the brain. The left is more differential, while the right is more spatial. Creationism comes from the right side of the brain. An attempt is being made to market it as left side, which is why science, which is left-side oriented, can see that there is something wrong. If one uses the right side of the brain to sense it, the result is different. Let me give an example of the distinction between the two sides of the brain. If we saw an unknown shade of yellow, the right side of the brain would allow us to know it is yellow even without any education. The right side groups similar memories, and by its placement in the "yellow" group we would know it was a shade of yellow. The left side is more differential, and without formal labelling the left side would see it as an unknown until it is labelled. The right side would tell us it is a type of yellow, but the left does not yet have a specific label to express it. After we call it lemon yellow, this differential memory is stored in the left, so the next time we see it, we can access it with either side of the brain. Creationism is a right-side memory that has been given a left-side label that is not consistent with science, data and logic. An analogy is labelling the unknown yellow "fire engine red." One can call it anything one wants, but this particular label would be unsettling to the left side of the brain. The left side already has other reds in the differential database, and that particular label will create data conflicts. If it were called symbolism, there would be less left-side conflict, since this would seem appropriate. As such, the new yellow called Creationism needs a better label that is more settling and consistent with the left-side database. In other words, it is a 3-D memory that can be accessed with the right side but needs a better translation to be acceptable to left-side science. The physics of physical creation just does not sit well in the left side. It is not crazy to sense this 3-D memory grouping; it is only difficult to translate in the proper way for the needs of the left side. It is a very fast memory that cannot be easily expressed with the slow left-hemisphere languages of culture. That is why it is taken literally: to avoid humans messing up and giving a misleading translation. For example, if we recorded a 10-minute presentation and then played it back in 1 minute, it would sound like noise, with only bits and pieces coming to consciousness. In spite of this gibberish translation, it contains all the same data as the original speech. It would appear not to convey any meaning to the left side because the speed is too fast to properly translate. One would have to slow it down, to say 5-7 minutes, to get more out of it. But even then, one would get dozens of different opinions on what it said. Creationism is one consensus translation of this fast 3-D language. When a good left-side translation is reached, science will be quite surprised at the knowledge this symbolism contains. Compressing 15 billion years into 7 days gives one an idea of the relative memory speed between left and right (relative and not absolute). That is why one can only feel this type of fast 3-D memory as a type of nebulous intuitive feeling. Being able to use the 3-D memory in a conscious way is the future of the human mind.
  20. Let me rephrase the question. If gravity is GR, which amounts to effects in space-time, then as we move away from a source, the space-time effect decreases with distance. SR, or mass in motion, also causes effects in space-time. Do these two space-time effects interact like waves, such that long-range space-time effects from gravity blend with SR space-time effects (waves), making the effect from the GR source indeterminate? As an analogy, picture a basketball in the middle of a lake, bobbing up and down making waves. This is the gravity source, with the waves in the water analogous to space-time effects that diminish with distance. At a far enough distance the waves will get very subtle. If a bunch of water bugs were swimming near this distant zone, their little waves would create interference patterns with the almost decayed waves from the basketball. If we didn't know the position of the basketball and had to guess from all these waves, the interference could make its position indeterminate. As we go closer to the basketball, its waves are stronger, such that interference is less of a factor, allowing the position of the basketball to become much more determinate. With respect to gravity, will SR space-time waves make distant sources of gravity lose track of objects due to SR space-time interference? The answer to this question tells us whether gravity equals GR, or whether gravity equals GR plus something more. In other words, if gravity equals GR, then there should be GR/SR interference that will limit the effective distance of GR sources to much shorter than 16 billion light-years. If there is no impact in spite of SR space-time interference, then gravity would have to have something extra beyond GR space-time effects that can pass through space-time interference and remain determinate. In other words, if 16 billion light-years is true, in spite of SR interference from fast large sources, then gravity is more than just GR. Or GR is only one effect stemming from gravity and not the only effect. The other effect ignores the SR interference in space-time and can track all the way to 16 billion light-years. This brings up the next point, which makes use of the assumption that gravity equals GR without anything extra. As an analogy, if we take a stationary positive charge, it will give off an electrostatic force field. If we give it motion, a secondary magnetic force field will appear. The two fields represent a unified force with two aspects. Is it possible that gravity acts the same way, with a stationary source giving off only GR? If we give it motion, its analogous magnetic field (so to speak) is SR-based space-time, for a unified gravity field that is a space-time combo of GR and SR. Extending the analogy, if two positive charges are in motion, the relative motion will dictate whether their magnetic fields add or subtract. In the case of unified (GR/SR space-time) gravity, i.e., a moving GR source, the Doppler shift appears to indicate that masses moving away from each other will see diminished, or red-shifted, GR due to the effect of the SR (magnetic analogy) component of the dual space-time force. This should be the second part of the indeterminacy, beyond SR noise. So an accelerated expansion with GR-SR space-time should take less and less energy to accelerate the faster the expansion, due to the red shift in the effective GR-SR space-time (gravity). Using conservation of energy, this implies more energy going into the effective range of gravity.
This expands the local space-time reference allowing it to catch up to the expansion and put on the brakes, i.e., conservation of relativity.
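Here is a toy numerical sketch of the basketball-on-a-lake analogy above. This is not a GR calculation; the 1/r amplitude falloff, the source strength, and the background ripple level are all assumptions chosen for illustration:

    import numpy as np

    # Toy sketch of the basketball-on-a-lake analogy (not a GR calculation).
    # A source wave whose amplitude is assumed to fall off as 1/r is compared
    # against a fixed background of small random ripples ("water bugs").
    rng = np.random.default_rng(0)

    SOURCE = 1.0   # source wave strength at r = 1 (arbitrary units, assumed)
    NOISE = 0.01   # background ripple amplitude (assumed)

    for r in (1, 10, 50, 100, 500):
        signal = SOURCE / r                        # assumed 1/r decay
        measured = signal + NOISE * rng.standard_normal()
        print(f"r={r:4d}  signal={signal:.4f}  measured={measured:+.4f}  "
              f"signal/noise={signal / NOISE:6.1f}")

    # Around r ~ 100 the signal drops to the noise floor (signal/noise ~ 1):
    # past that point the measured wave says almost nothing about the
    # source's position -- the 'indeterminate' regime the post describes.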
  21. The question I would like to pose is: does a source of gravity have an effective or practical range within our current expanding universe? Here is my logic. If gravity implies GR, GR will cause a contraction in space-time that decays with distance from the object. The universe is also in motion, with many objects moving at relativistic speeds. This SR will also create perturbations in space-time. As such, when the effect of gravity gets really low, at long distances, background SR should create space-time noise that makes the effect of weak, distant gravity indeterminate. In a static universe this would not be the case. But in a dynamic universe we have a velocity- or SR-based source of space-time ripples that shouldn't look too different from weak GR space-time ripples. An effective gravity range, set by the onset of SR indeterminacy, would imply that matter within the expanding universe should clump via gravity locally, within its effective range, with little impact on global movement due to the noise. The accelerated expansion means the noise is getting louder, focusing gravity's practical determinacy even closer and condensing the galaxies more and more. This is not to say gravity doesn't reach beyond that range, only that it becomes indeterminate or indecisive. (A back-of-envelope version of this range argument follows below.) This resolves the paradox of how an accelerated expansion can lead to a universal contraction. The localization of effective gravity means the local space-time reference is extending further and further into space due to distance contraction and time dilation in its reference. Eventually the local reference will catch up to the perimeter and put on the brakes. Black holes may be the goal of the accelerated expansion, generating a local reference that can see beyond the perimeter of the expansion.
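The same range argument as a back-of-envelope sketch. The 1/r signal scaling and the numeric noise floors are assumptions chosen for illustration, not values from any theory:

    # Back-of-envelope 'effective range' under the post's assumptions:
    # if a source's space-time signal scales as A / r and the SR background
    # sits at a noise floor N, the signal sinks into the noise roughly where
    #     r_eff = A / N
    A = 1.0                           # source strength (arbitrary units, assumed)

    for N in (1e-4, 1e-3, 1e-2):      # progressively louder noise floors
        r_eff = A / N
        print(f"noise floor {N:.0e} -> effective range {r_eff:8.0f}")

    # A louder noise floor shrinks r_eff, matching the post's claim that an
    # accelerated expansion (more SR noise) 'focuses' gravity more locally.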
  22. Science is partly fact, partly logical deduction, partly theory and partly empiricism. The speed of light is a fact. That light travelling for one year covers one light-year is a logical mathematical deduction. The big bang is a theory, a reasonable explanation of the observed data. "Eating chips will make you fat" is an empirical correlation, since it is not 100% reliable: some people can eat chips and stay thin. Although there are four distinctions, confusion can and does occur when one of the four is marketed as another. Empirical correlations are never 100%, but they are often treated as facts, so even people who can eat chips and stay thin are tricked into believing they will gain weight, and everyone can pretend the correlation is a fact. That the earth has an iron core is a theory treated like a fact, even though there is no hard data to support it; it is based on logical deductions using other theories, which still add up to another theory. Even facts can be proven false. For example, up to 1930 there were eight planets, and this was treated like a fact. Then Pluto came along and that fact changed into a new fact. Now that fact has changed again. Some facts need to come with a disclaimer until a steady state is reached. If there is a hierarchy of science, fact is number one. Logical deduction, which includes math, is number two. Theory is number three, since a good theory can open the mind to new ways of looking at things even if it eventually becomes obsolete. Empirical correlation is fourth, since it is only partially reliable and often has very limited extended functionality compared to theory. The tendency is to try to push lower orders up to higher orders for fun and profit. All in all, science is a process leading to a better understanding of nature. It is not perfect, but it tries to move toward numbers one and two.
  23. The natural goal of sex is procreation. The pleasure is the carrot on the string that leads the horse to water (procreation). In spite of inhibitions and fears, the carrot is designed to be so tasty that it can lead the horse to the needed goal. The modern relativity of instinct views the carrot as the goal, thereby allowing procreative aberrations to be called normal human behavior. This orientation allows the carrot to lead one to clean water, a mucky swamp or even a sand pit, as long as you get the carrot in the end. If one treats the carrot as the goal, rather than the incentive toward clean water only, then bestiality can use all the same arguments that are used for homosexuality: the pleasure behavior can be associated with genetics, it can show telltale signs from a young age, it has precedent in history, and it even has precedent in nature (dogs humping human legs). If babies, instead of the carrot of pleasure, are the natural goal, then bestiality, homosexuality and pedophilia are all lumped together as unnatural, since they lead to polluted water. I don't care either way, as long as things are consistent across the board. An analogy is eating. The pleasure of eating is the carrot, while the clean water is the body's natural need for nutrients and energy. If the carrot is the goal and not the incentive, the body can be damaged.