Posts posted by Aethelwulf

  1. This is becoming increasingly frustrating; it's starting to feel like I'm talking to a wall. I've already tried to explain to you (multiple times) why what you suggest is invalid. You need to specify how the mass is changing FIRST, and then you can determine the metric it produces. Talking about a changing SC metric is an oxymoron. I don't know how else I can explain it to you.

    Also, GR is a classical theory, and I don't see at all how this ties in with QM.


    You said something about the time derivatives being zero; I'd like to see some math so I can work with it.



    As for quantum mechanics, I believe it would say that a metric like the Schwarzschild metric varies in time. As with any spacetime metric, the energies associated with a metric generally change.




  2. When you say "we have" it implies that this is something that's been detected. Planck particles are entirely hypothetical. What "we have" is a concept.


    I said theoretically-speaking.


    Is there any evidence that it exists?


    I just don't want the audience to think it is not theoretically possible.

  3. Well, I am going to be frank: either way of writing the metric is fine. That is really not the point, however.


    The whole of this discussion is whether you can vary the metric energy. I have shown you can. You say what is the point? I say it is a prediction of quantum mechanics. That's the point.


    Show me some precise calculations of your approach and prove to me you cannot vary the energy. Then I will decide how valid a statement it is.

  4. There are rumours that the announcement of the experimental discovery of the Higgs boson will be made this year. It seems that the LHC has found a Higgs particle in the region of 125 GeV.


    It will be surprising if they have. Rumours are just that though... rumours. I'll be very interested to see if they hold any truth.

  5. The deBroglie wavelength is not the same thing as the wave function.




    I know that. Perhaps I should have made a distinction; I was just trying to get him to mull over other things as well.





    A cat has not wavefunction. The moon has not wavefunction...

    Can you rephrase this... are you saying, does a cat not have a wavefunction, or are you implying it does not have one?



    Could this ever make "Time travel" into the Past possible though?


    I mean - does the Past still exist: is there somewhere or other a place, a world of solid objects, where the Past is still happening? (A guy called O'Brien keeps asking me this).


    The 64 million dollar question. I'd say the effects of distortions would not be great enough to allow time travel into the past; there is also the chronology protection conjecture. You can cause some weird effects within the present time frame, however. Just as when you put one clock in the basement of a building and another on the roof, you can measure a very small time difference.


    But my saying the effects will never cause large ripples is a personal belief; there is no reason for me to think that other than preference.

  7. I have been exploring a possibility and wanted to know what others' thoughts were. I have been trying to mathematically compose a theory which treats the very beginning of space (which, according to current belief, would involve a time dimension) as highly unstable due to the uncertainty regarding matter and the space between particles. In short, there was little to no space at all in the beginning, meaning that particles were literally stacked on top of each other. This completely violates the uncertainty principle, and I conjecture it caused ''space to grow exponentially'' between particles to allow them degrees of freedom and to bring a halt to the violation of the quantum mechanical principle.


    Of course, how do you speak about space or even time if neither existed fundamentally? Fotini Markopoulou has been using a special model. In her recent ideas, she proposes that space is not fundamental.


    In her model, simply put, particles are represented by points, which are nodes that can be on or off, representing whether the nodes are actually interacting. Only at very high temperatures does spacetime cease to exist, and many of us will appreciate this as geometrogenesis. The model also obeys causal dynamical triangulation, a major part of loop quantum gravity, which must obey the triangle inequality in some spin-state space. Spin-state spaces may lead to models we can develop from the Ising model, or perhaps even the Lyapunov exponent, which measures the separation of objects, preferably in some Hilbert space. We may in fact be able to do a great many things.


    Heisenberg uncertainty is a form of the geometric Cauchy–Schwarz inequality, and this might be a clue to how to treat spacetime as highly unstable at the very early beginning, when temperatures were very high.




    Since Markopoulou's work suggests that particles exist in Hilbert spaces in some kind of special sub-structure before the emergence of geometry, I can now approach my own theory and answer it in terms of the uncertainty principle using the Cauchy–Schwarz inequality, because from this inequality one can derive the triangle inequality.


    So, this is my idea. Space and time emerged between particles because particles could not be allowed to remain confined infinitely close to other particles; the uncertainty principle forbade it and created degrees of freedom in the form of the vacuum we see expanding all around us.


    The mathematical approach



    So let me explain how this model works. First of all, it seems best to note that in most cases we are dealing with ''three neighbouring points'' on what I call a Fotini graph. Really, the graph has a different name and is usually denoted by something like [math]E(G)[/math], sometimes called graphical tensor notation. In our phase space we will be dealing with a finite number of particles [math]i[/math] and [math]j[/math], but keep in mind that the neighbouring particles usually number at minimum three, and that each particle should be seen as a configuration of spins; this configuration space is called the spin network. I should perhaps add that to any point there are two neighbours.


    Of course, as I said, we have two particles in this model [math](i,j)[/math], probably defined by a set of interactions [math]k \equiv (i,j)[/math] (an approach Fotini has made in the form of on-off nodes). In my approach we simply define it with an interaction term:


    [math]V = \sum^{N-1}_{i=1} \sum^{N}_{j=i+1} g(r_{ij})[/math]


    I have found it customary to place a coupling constant [math]g[/math] here for any constant forces which may be experienced between the two particles across the distance, a semi-metric which mathematicians often denote as [math]r_{ij}[/math].
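    As a concrete illustration of the double sum above, here is a small Python sketch; the particle positions and the coupling function g are made-up placeholders, not quantities from the post.

```python
import itertools
import math

def pairwise_potential(positions, g):
    # Total interaction energy V = sum over all pairs (i, j) of g(r_ij),
    # mirroring the double sum above.
    total = 0.0
    for (i, p_i), (j, p_j) in itertools.combinations(enumerate(positions), 2):
        r_ij = math.dist(p_i, p_j)  # Euclidean separation r_ij
        total += g(r_ij)
    return total

# Toy example: three particles with an inverse-distance coupling g(r) = 1/r.
V = pairwise_potential([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], lambda r: 1.0 / r)
```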


    If [math]A(G)[/math] is the set of adjacent vertices and [math]E(G)[/math] is the set of edges in our phase space (to get some idea of this space, look up causal triangulation and how particles would be laid out in such a configuration space), then


    [math](i,j) \in E(G)[/math]


    It so happens, that Fotini's approach will in fact treat [math]E(G)[/math] as assigning energy to a graph


    [math]E(G) = \langle \psi_G | H | \psi_G \rangle[/math]


    which most will recognize as an expectation value. The Fotini total spin state space is


    [math]H = \bigotimes^{N(N-1)/2} H_{ab}[/math]


    Going back to my interaction term, the potential energy between particles [math](i,j)[/math], or all [math]N[/math] particles due to pairwise interactions, involves a minimum of [math]\frac{N(N-1)}{2}[/math] contributions, and you will see this term in Fotini's previous yet remarkably simple equation.


    [math]K_N[/math] is the complete graph on the [math]N[/math] vertices in a Fotini graph, i.e. the graph in which there is one edge connecting every pair of vertices, so there is a total of [math]\frac{N(N-1)}{2}[/math] edges and each vertex has a degree of [math](N-1)[/math].
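    The edge count of the complete graph is easy to check numerically; a minimal Python sketch (the choice N = 5 is arbitrary):

```python
import itertools

def complete_graph_edges(n):
    # All edges of the complete graph K_n on vertices 0..n-1:
    # one edge per unordered pair, N(N-1)/2 edges in total.
    return list(itertools.combinations(range(n), 2))

edges = complete_graph_edges(5)
degree = sum(1 for e in edges if 0 in e)  # each vertex has degree N - 1
```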


    Thus we will see that to each vertex [math]i \in A(G)[/math] there is always an associated Hilbert space, and I write this as


    [math]H_G = \bigotimes_{i \in A(G)} H_i[/math]


    From here I construct a way to measure these spin states in the spin network, still speaking about two particles [math](i,j)[/math], by measuring the force of interaction between these two states as


    [math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \hat{n}[/math]


    where [math]\hat{n}[/math] is the unit vector. The angle between two spins in physics can be calculated as [math]\mu(\hat{n} \cdot \sigma_{ij}) \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \mu(\frac{1 + \cos \theta}{2})[/math]
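    Since the force is built from the radial derivative of V, a numerical sketch may help; the central-difference scheme and the 1/r test potential are my own illustrative choices, not from the post.

```python
def radial_force_magnitude(V, r, h=1e-6):
    # Central-difference estimate of dV/dr, the radial factor that
    # multiplies the unit vector n_hat in the force equation above.
    return (V(r + h) - V(r - h)) / (2.0 * h)

# Toy potential V(r) = 1/r, so dV/dr = -1/r^2 = -0.25 at r = 2.
f = radial_force_magnitude(lambda r: 1.0 / r, 2.0)
```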


    Thus my force equation can take into account a single spin state, but denoted for two particles [math](i,j)[/math] as we have been doing, it can describe a small spin network


    [math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \sigma_{ij})^2 = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mathbf{I}[/math]


    with a magnetic coefficient [math]\mu[/math] on the spin structure of the equation and [math]\mathbf{I}[/math] is the unit matrix.
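    The collapse of [math]\mu(\hat{n} \cdot \sigma_{ij})^2[/math] to the unit matrix rests on the identity that the square of n̂·σ is the identity for any unit vector n̂, which can be checked directly; a small stdlib-only Python sketch (the angles are arbitrary test inputs):

```python
import math

# Pauli matrices as nested lists of complex numbers.
SX = [[0, 1], [1, 0]]
SY = [[0, -1j], [1j, 0]]
SZ = [[1, 0], [0, -1]]

def mat_mul(a, b):
    # Product of two 2x2 matrices.
    return [[sum(a[r][k] * b[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def n_dot_sigma(theta, phi):
    # n_hat . sigma for a unit vector n_hat given in spherical angles.
    nx = math.sin(theta) * math.cos(phi)
    ny = math.sin(theta) * math.sin(phi)
    nz = math.cos(theta)
    return [[nx * SX[r][c] + ny * SY[r][c] + nz * SZ[r][c] for c in range(2)]
            for r in range(2)]

m = n_dot_sigma(0.7, 1.3)
square = mat_mul(m, m)  # equals the 2x2 identity for any unit n_hat
```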


    I now therefore write a new form of the force equation I created with an interaction term, as I came to the realization that squaring everything would yield (with our spin states)


    [math]-\frac{\partial^2 V^2 (r_{ij})^2}{\partial^2 r^{2}_{ij}} \mu(\hat{n} \cdot \vec{\sigma}_{ij})^2[/math]


    [math] = -\frac{\partial^2 V^2 (r_{ij})^2}{\partial^2 r^{2}_{ij}} \begin{bmatrix}\ \mu(n_3) & \mu(n_{-}) \\ \mu(n_{+}) & \mu(-n_3) \end{bmatrix}^2[/math]


    Sometimes it is customary to represent the matrix in this form:


    [math]\begin{bmatrix}\ \mu(n_{3}) & \mu(n_{-}) \\ \mu(n_{+}) & \mu(-n_{3}) \end{bmatrix}[/math]


    As we have in our equation above. The entries here are just shorthand notation for some mathematical tricks. Notice that there is a magnetic moment coupling on each state entry. We will soon see how you can derive the Larmor energy from the previous equation.


    Sometimes you will find spin matrices not with the magnetic moment description but with a gyromagnetic ratio, so we might have


    [math]\frac{ge}{2mc}(\hat{n} \cdot \sigma_{ij}) = \begin{bmatrix}\ g \gamma(n_3) & g \gamma(n_{-}) \\ g \gamma(n_{+}) & g \gamma(-n_3) \end{bmatrix}[/math]


    The compact form of the Larmor energy is [math]-\mu \cdot B[/math], and the negative sign will cancel against the negative sign in my equation


    [math]-\frac{\partial^2 V^2 (r_{ij})^2}{\partial^2 r^{2}_{ij}} \mu(\hat{n} \cdot \vec{\sigma}_{ij})^2[/math]


    [math]= -\frac{\partial^2 V^2 (r_{ij})^2}{\partial^2 r^{2}_{ij}} \begin{bmatrix}\ \mu(n_3) & \mu(n_{-}) \\ \mu(n_{+}) & \mu(-n_3) \end{bmatrix}^2[/math]


    The [math]L \cdot S[/math] part of the Larmor energy is in fact more or less equivalent to the spin notation expression I have been using, [math](\hat{n} \cdot \sigma_{ij})[/math], except that when we transpose this over to our own modified approach, we will be accounting for two spins.


    We can swap our magnetic moment part for [math]\frac{2\mu}{\hbar Mc^2 E}[/math] and what we end up with is a slightly modified Larmor Energy


    [math]\Delta H_L = \frac{2\mu}{\hbar Mc^2 e} \frac{\partial^2 V^2 (r_{ij})^2}{\partial^2 r^{2}_{ij}} (\hat{n}\cdot \sigma_{ij}) \begin{pmatrix} \alpha \\ \beta \end{pmatrix}[/math]


    ''This is madness!'' I can hear people shout. In the Larmor energy equation we don't have [math](\hat{n}\cdot \sigma) \begin{pmatrix} \alpha \\ \beta \end{pmatrix}[/math]; we usually have [math](L\cdot S)[/math]?


    Well yes, this is true, but we are noticing something special. You see, [math](L\cdot S)[/math] is really


    [math]|L| |S| \cos \theta[/math]


    This is the angle between two vectors. What is [math](\hat{n}\cdot \sigma) \begin{pmatrix} \alpha \\ \beta \end{pmatrix}[/math] again? We know this: it calculates the angle between two spin vectors as


    [math]\frac{1 + \cos \theta}{2}[/math]


    So by my reckoning, this seems a perfectly consistent approach.


    Now that we have derived this relationship, it adds some texture to the original equations. If we return to the force equation, one might want to plug in some position operators, so we may describe how far particles are from each other by calculating the force of interaction. But as we shall see soon, if the lengths of the triangulation between particles are all zero, then this must imply the same space state, or position state, for all of your [math]N[/math]-particle system. We will use a special type of uncertainty principle to denote this, called the triangle inequality, which speaks about the space between particles.


    As distances between particles reduce, our interaction term becomes stronger as well; the increased force between particles comes at the cost of extra energy being required. Indeed, for two particles [math](i,j)[/math] to occupy the same position [math]x[/math] requires a massive amount of energy, perhaps something on the scale of the Planck energy, but I have not calculated this.


    In general, fundamental interactions do not come in from great distances and focus to the same point, or along the same trajectories; this is related to what is called Liouville's theorem. Of course, particles can be created from a point, but that is a different scenario. Indeed, in this work I am attempting to build a picture which requires just that: the gradual separation of particles from a single point by a vacuum appearing between them, forced by a general instability caused by the uncertainty principle in our phase space.


    As I have mentioned before, we may measure the gradual separation of particles using the Lyapunov exponent, for which the separation is given as


    [math]|\delta(t)| = \epsilon e^{\lambda t}[/math]
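    To make the exponential-separation idea concrete, here is a sketch using the doubling map x → 2x mod 1, a textbook chaotic system with Lyapunov exponent ln 2; the map and the starting values are my illustrative choices, not from the post.

```python
def doubling_map_separation(x0, eps, steps):
    # Evolve two nearby trajectories of the doubling map x -> 2x mod 1
    # and return their final separation. For this map the separation
    # grows like eps * e^(lambda * t) with lambda = ln 2.
    a, b = x0, x0 + eps
    for _ in range(steps):
        a = (2.0 * a) % 1.0
        b = (2.0 * b) % 1.0
    return abs(b - a)

sep = doubling_map_separation(0.2, 1e-9, 10)  # grows by a factor of about 2**10
```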


    and for previously attached systems emanating from the same system, we may even speculate on the importance of the correlation function


    [math]\langle \phi_i, \phi_j \rangle = e^{-mD}[/math]


    where [math]D[/math] measures the distance. Indeed, you may even see the graphical energy in terms of the Ising model, which measures the background energy to the spin state [math]\sigma_0[/math]; said more correctly, the background energy


    [math]\sum_N \sigma_{(1,2,3...)}[/math]


    acts as a coefficient of sigma zero. Thus the energy is represented by a Hamiltonian of spin states


    [math]\mathcal{H} = \sigma(i)\sigma(j)[/math]
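    A minimal sketch of this Ising-style energy; the triangle of three spins below matches the three-neighbour picture, while the spin values and unit coupling are illustrative assumptions.

```python
def ising_energy(spins, edges, J=1.0):
    # Ising-style Hamiltonian H = J * sum over edges of sigma_i * sigma_j,
    # with spins sigma in {-1, +1}.
    return J * sum(spins[i] * spins[j] for (i, j) in edges)

# Three spins on a triangle: contributions (1)(-1) + (-1)(1) + (1)(1) = -1.
H = ising_energy([1, -1, 1], [(0, 1), (1, 2), (0, 2)])
```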


    Now, moving on to the implications of the uncertainty principle in our triple-intersected phase space (with adjacent edges sometimes given as [math](p,q,r)[/math]), there is a restriction that [math](p+q+r)[/math] is even and that none is larger than the sum of the other two. A simpler way of explaining this inequality is to state that [math]a[/math] must be less than or equal to [math]b+c[/math], [math]b[/math] less than or equal to [math]a+c[/math], and [math]c[/math] less than or equal to [math]a+b[/math].
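    These admissibility conditions are simple to encode; a minimal sketch, assuming integer edge labels:

```python
def admissible(p, q, r):
    # The edge conditions described above: p + q + r must be even,
    # and no label may exceed the sum of the other two
    # (the triangle inequality).
    return ((p + q + r) % 2 == 0
            and p <= q + r
            and q <= p + r
            and r <= p + q)
```

    For example, (2, 2, 2) is admissible, while (1, 2, 2) fails the parity condition and (2, 2, 6) fails the triangle inequality.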


    It actually turns out that this is really a basic tensor algebra relationship of the irreducible representations of [math]SL(2,C)[/math], according to Smolin. If the length of each edge is necessarily zero, then we must admit some uncertainty (an infinite degree of uncertainty) unless some spacetime appeared between each point. Indeed, because each particle at the very first instant of creation occupied the same space, we may presume the initial conditions of the big bang were highly unstable. This is true within the high temperature range and can be justified by applying a strong force of interaction in my force equation. The triangle inequality is at the heart of spin networks and current quantum gravity theory.


    For spins that do not commute, i.e. they display antisymmetric properties, there are a number of ways of describing this with some traditional mathematics. One way will be shown soon.


    Spin has close relationships with antisymmetric mathematical properties. An interesting way to describe the antisymmetric properties between two spins, in the form of Pauli matrices attached to particles [math]i[/math] and [math]j[/math], is as an action on a pair of vectors, taking the vectors in question to be spin vectors.


    This is actually a map, taking the form of


    [math]T_x M \times T_x M \rightarrow R[/math]


    This is a map acting on a pair of vectors. In our case, we will arbitrarily choose these two to be eigenvectors, derived from studying spin along a certain axis. Here, our eigenvectors will be along the [math]x[/math] and [math]z[/math] axes, which will always yield the corresponding spin operator.


    [math](d \theta \wedge d\phi)(\psi^{+x}_{i}, \psi^{+z}_{j})[/math]


    with an abuse of notation in my eigenvectors.


    It is a 2-form (or bivector) which results in


    [math]= d\theta(\sigma_i)\, d\phi(\sigma_j) - d\theta(\sigma_j)\, d\phi(\sigma_i)[/math]


    This is a result where [math]\sigma_i[/math] and [math]\sigma_j[/math] do not commute.


    The following work will demonstrate a way to mathematically represent particles converging to a single point, and will highlight why uncertainty at the big bang is inherently important.


    We should remind ourselves that there are three neighbours which form a triangle in our phase space. Our original phase space was constructed from Fotini's approach for pairwise interactions, which had the value [math]\frac{N(N-1)}{2}[/math]. It is still quite convenient not to involve any other particle yet, just our simple two-particle system; more specifically, two quantum harmonic oscillators. It seems a normal approach, according to Fotini, to take the energy of the system as a pair of interactions given as [math](i,j) \equiv k[/math], where [math]k \in \mathcal{I}[/math] and [math]\mathcal{I}[/math] is the set of interactions. Using this approach, I construct a Hamiltonian which has the physics of describing the convergence of two oscillators to a single neighbouring point/position. First I begin with the simple form of the Hamiltonian


    [math]\mathcal{H} = \sum_i E_{i_{(x,y)}} + \sum_{k \in \mathcal{I}} h_k + x \Leftrightarrow y[/math]


    where [math]h_k[/math] is a Hermitian operator. This equation describes the Hamiltonian of our pairwise interactive system, which can be exchanged for particle [math]i[/math] satisfying, say, position [math]x[/math] and particle [math]j[/math] in position [math]y[/math]. These two particles form two sides of the triangle, so if we invoke the idea of two particles converging to a single point, space position [math]z[/math], then it will follow the transformation [math](x,y) \rightarrow z[/math]. Before I do this, since I am working in a phase space with potentially the model known as the spin network, it might concern me to change the energy term in the Hamiltonian to [math]\sigma(i)\sigma(j)[/math], which is just the Ising energy. So our Hamiltonian would really look like:


    [math]\mathcal{H} = \sum_{ij}\sigma(i)\sigma(j) + \sum_{k \in \mathcal{I}} h_k + x \Leftrightarrow y[/math]


    Now, for a Hamiltonian describing two particles converging to the adjacent edge [math]E(G)[/math] we should have


    [math]\mathcal{H} = \sum_{ij}\sigma(i)\sigma(j) + \sum_{k \in \mathcal{I}} h_k + (x,y) \Leftrightarrow z[/math]


    As one of a few possibilities; there are six possible solutions in all for different coordinates. The spins in our space assign energy to our particles [math](i,j)[/math]; in fact, perhaps a very important observation of the model we are using is that energy is assigned to points in the space we are dealing with. As has been mentioned before, if [math]A(G)[/math] are adjacent vertices and [math]E(G)[/math] are the neighbouring edges, then on each edge there is some energy assigned in our Hilbert space. It seems, then, that you can really only deal with energy if there are adjacent vertices and neighbouring edges to think about. Remember, I am saying it might be possible to state that the uncertainty principle could have tempted spacetime to expand, but this was because there was really no spacetime, no degrees of freedom for energy to move in, which seems to be the way nature intended. So if there are no degrees of freedom, we cannot really think about energy normally in our model, since we define energy as assigned to points in a Hilbert space, which deals with a great many more particles/points. But for this thought experiment we have chosen two particles and another possible position for convergence, so the equation


    [math]H = \sum_{ij}\sigma(i)\sigma(j) + \sum_{k \in \mathcal{I}} h_k + (x,y) \rightarrow z[/math]


    Actually looks very innocent. But it cannot happen in nature, not normally: nature strictly refuses to let two objects converge to a single point like [math](x,y) \rightarrow z[/math]. One way to understand why is the force required to make two objects with angular momentum occupy the same region of space. I won't recite it again right below my OP, but my force along a spin axis could determine such a force, or at least the force required to do so, which in hindsight seems impractical thinking about it. But it does give us some insight into what kind of conditions we might think about mathematically if somehow the singularity of the big bang can be overcome with some solution. My force equation, with the spin between two vectors, would state that as the angle between the vectors closed in to complete convergence, the force should increase exponentially. I haven't come to an equation which describes this exponential increase; however, I do know that this is what experimentation would agree on.


    The same is happening in our Hamiltonian. The force equation, with its rapid increase of energy, is proportional to the Hamiltonian experiencing an increase of energy from the spin terms [math]\sigma(i)\sigma(j)[/math] through its drastic transformation [math](x,y) \rightarrow z[/math]. In field theory, this would be the same as saying that the distortions of spacetime of some quantum field(s) are converging to a single point in spacetime.


    Let's study this equation a bit more:


    [math]\mathcal{H} = \sum \sigma (i) \sigma (j) + \sum_{k \in \mathcal{I}} h_k + (x,y) \rightarrow z[/math]


    What we have in our physical set-up above is some particle oscillations which, presumably under a great deal of force, are measured to converge to position [math]z[/math]. In our phase space, we are using the triangulation method of dealing with the organization of particles. At [math]z[/math] we may assume the presence of a third spin state; let's denote it as [math]\sigma_z \in (-1,+1)[/math], which seems a favourable way to mathematically represent the spin state of a system, meaning quite literally ''the spin state at vertex z''. [1] Let us just quickly imagine that at any of the positions [math](x,y,z)[/math], to make any particle move to another position which a particle is already inhabiting requires a force along a spin axis. (I can't stress enough this is not what happens in nature; this is only a demonstration to explain things better later. Sometimes working backwards from illogical presumptions can lead to a better argument.) The calculation to measure the angle between two spin states is


    [math]\mu(\hat{n} \cdot \sigma_{ij}) \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \frac{1 + \cos \theta}{2}[/math]


    Thus my force equation can take into account a single spin state, but denoted for two particles [math](i,j)[/math], so you may deal with either spin respectively.


    [math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mu(\hat{n} \cdot \sigma_{ij})^2 = \frac{\partial V(r_{ij})}{\partial r_{ij}} \mathbf{I}[/math]


    But perhaps more importantly, you may decompose the equation for both particles. Let us say particle [math]i[/math] is in position/vertex [math]x[/math] and particle [math]j[/math] is in position [math]y[/math], meaning our final spin state is [math]z[/math]. In the force equation, making all lengths of your phase space go to zero means that you are merging your spin states together. Hopefully this can be intuitively imagined, but here is a good diagram provided by wiki: http://en.wikipedia....pin_network.svg If we stood at the z-vertex and made the xy-vertices merge to the zth, [math](xy) \rightarrow z[/math], then obviously the lengths of each side would tend to zero. This means that, whilst the force between particles may increase by large amounts, the angle between the vectors also goes to zero. The unit vector which separates particles from an origin on an axis will also tend to zero. Indeed, if you draw a graph and make the [math]xy[/math]-axes the two lengths of both particles [math]i[/math] and [math]j[/math], where the origin is vertex spin state [math]z[/math], then making the lengths go to zero would be like watching the [math]xy[/math] axes shrink and fall into the origin. So when complete convergence has been met, the force equation has been mangled completely of its former glory. We no longer have an angle separating spin states, nor can we speak about unit vectors, because they have shrunk as well. Using a bit of calculus, we may see that


    [math]F_{ij} = \frac{\partial V(r_{ij})}{\partial r_{ij}}\lim_{\hat{n} \rightarrow 0} \hat{n}[/math]


    Then naturally it follows that the force once describing the separation of particles no longer exists, because anything multiplied by zero is of course zero. Here we have violated some major principles of quantum mechanics, namely the uncertainty principle and the fact that particles do not converge like this. Making more than one particle occupy the same space is like saying that either particle has a definite position, and this, from the quantum mechanical cornerstone the uncertainty principle, is forbidden. May we then speculate that the universe was born of uncertainty? Uncertainty has massive implications for statistical physics. In the beginning of the universe, most physicists would agree that statistical mechanics dominates; quantum mechanics is, after all, a statistical theory at best. Perhaps, then, there is no better way to imagine the beginning of the universe than through the eyes of Heisenberg.


    [1] - http://www.math.bme....swork/ising.pdf

  8. They are equivalent and equivalently useless.




    You're the one who decided to write the SC metric in terms of energy, not me. As I've said before, I don't see the point.




    I've already tried to explain this to you: the metric is related to the energy-momentum distribution in spacetime by the Einstein field equations. The SC metric is solved from the EFE's using the assumption that it does not change over time. If the central mass is changing, then the metric produced by that mass is not the SC metric. This doesn't mean that the central mass isn't allowed to change, it just means that the metric it produces will be different from the SC metric.


    You can't just say "so now we vary the energy..." It doesn't work that way.







    They still look significantly different to me.








    See if you can spot the differences (there are quite a few).


    It's not useless. You said it yourself: it is a self-energy. Are you going to try and wheel out the idea that a metric does not have an energy? If that is your position, I can tell you right now that physics rightly disagrees with that claim. As for the metric, I see the last terms seem different; this is because of the phi. To make this clearer, because you are stuck on a notational problem, my phi can be replaced with


    [math]r^2 d\Omega[/math]


    It means the same thing. My term absorbs the rest of your terms. OK?


    Now, for the rest...


    where [math]d\Omega[/math] is basically [math]d\phi^2 + \sin^2\phi \, d\theta^2[/math]


    That is, hands up, my fault for making it confusing, but I am well aware of your presentation.


    Now, the metric you say is


    ''I've already tried to explain this to you: the metric is related to the energy-momentum distribution in spacetime by the Einstein field equations. The SC metric is solved from the EFE's using the assumption that it does not change over time. If the central mass is changing, then the metric produced by that mass is not the SC metric. ''

    I feel something is wrong with this statement. Any metric, including the SC metric, will change over time. Especially in the case of an integral over time, even an SC metric should not remain constant. That goes against the statement you said yourself you agreed with.


    I have never heard of relativity deriving the metric from non-changing values of energy, but I can say right now that I believe this to be a faulty premise. I think that any metric will change in time, whether it is your normal spacetime metric, a Schwarzschild metric, or even this latter metric solved for a locally flat neighbourhood, which, by the way, is basically mathematically the same thing as the kind of flat spacetime you deal with when looking in any direction of this metric.

  9. Okay... so what? All you had to do was substitute M=E/c2 to get it in terms of rest energy. I don't see how this is useful.




    I never mentioned rest mass. My point is that the energy distribution in the SC metric can't change, or else it ceases to be the SC metric.




    He describes the energy associated with test particles in SC spacetime, not some sort of "metric self-energy" like you seem to be implying. Of course, there is automatically an energy-momentum distribution associated with any metric given by the field equations. The SC metric is a vacuum solution, so all of the components of the stress-energy tensor (including energy density) are zero.


    I also fail to see how this relates to your original post, or any of mine.


    No, I did not just need to replace M with E/c^2... that's rubbish. I replaced G/c^2 with GM/E. That is slightly different.


    ''Yes, you can get the SC metric in terms of rest energy if you wanted to. ''


    Sorry, you said rest ''energy''... But we are not talking about a massless system, so I feel you are being pedantic.


    ''He describes the energy associated with test particles in SC spacetime, not some sort of "metric self-energy" like you seem to be implying. Of course, there is automatically an energy-momentum distribution associated with any metric given by the field equations. ''


    My approach is much different from his... however, you claimed that you cannot vary such an energy. I challenge that claim from my derivation. I am asking you why any metric, not just this one, cannot be varied.


    It only seems right from the principles of quantum mechanics that you can vary any energy in a metric. If this claim comes from relativity as you specified, then relativity is demonstrably wrong, since metrics like spacetime are composed of fields which must vary in energy; fields are not generally static, except in special cases.






    Written in the form you have it, it should look like this:


    [math]c^2 d\tau^2 = \left(1 - \frac{2G}{c^2} \frac{M}{r}\right) c^2 dt^{2} - \frac{dr^2}{1 - \frac{2G}{c^2} \frac{M}{r}} - r^2 d\phi^2 - r^2 \sin^2\phi \, d\theta^2[/math]


    Yes, that is right. I changed it well before you posted; look back, please.


    And since you have not provided a reference when asked, NO ONE has ever derived a metric like mine. I have certainly never come across one.

  10. In general, yes, the metric can be time dependent. It doesn't make any sense to talk about changes in the central mass in the Schwarzschild metric because it is derived using the explicit assumption that it does not change over time. If the metric is t-dependent then it is not the Schwarzschild metric.




    That's not the Schwarzschild metric. I've never seen anything that looks like that before. The Schwarzschild metric is:


    [math]ds^2=\left (1-\frac{r_s}{r} \right )dt^2-\left (1-\frac{r_s}{r} \right )^{-1}dr^2-r^2d\phi ^2-r^2sin^2\phi d\theta ^2[/math]


    where [math]r_s=\frac{2GM}{c^2}[/math]
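    As a quick numerical sanity check of that definition (a sketch using rounded physical constants, not values taken from this thread), the Schwarzschild radius of the Sun comes out to roughly 3 km:

    ```python
    # Schwarzschild radius r_s = 2GM/c^2, with rounded constants
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    M_sun = 1.989e30   # solar mass, kg

    r_s = 2 * G * M_sun / c**2
    print(f"r_s for the Sun ~ {r_s:.0f} m")  # roughly 2.95 km
    ```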





    Yes, you can get the SC metric in terms of rest energy if you wanted to.




    No, you're not allowed to do this. If the energy distribution is changing then you need to go back to the Einstein field equations, insert this information into the stress-energy tensor, then re-solve the equations to obtain a completely new metric. What you're doing doesn't make sense.




    That pdf never once mentions any change in central mass in the SC metric. It is certainly possible to talk about the energy of particles in SC spacetime (so long as they're not energetic enough to disturb the metric), but this says nothing about what you're proposing.


    I have seen no Schwarzschild metric written in terms of energy the way I have written it, and as for speaking of a rest mass... I don't see your point. The point is that my metric is written in terms of energy. Can you find me any literature that has written it like I have?


    Secondly, I never said that the paper spoke of varying the energy. I am saying that speaking of an energy for a metric is not unheard of.


    As for how the Schwarzschild metric is written, I can find where I copied this particular metric down if you give me time. I read many papers; it will take time.


    You need to keep in mind that I wrote the original terms in the metric wrong; I had an extra dt on the left-hand side of the equation which should not have been there. I made this clear. Does this clear up your problem? I changed that before you even posted that last post.


    Right, the metric I have is fine. Some authors write dτ² with c = 1; mine does not set c = 1. The thing which troubled you was the extra dt term, which shouldn't have been there, but I explained this. The metric is fine otherwise.

  11. It occurred to me a week ago how to vary the energy of a Schwarzschild metric.



    The Energy Changing in a Schwarzschild Metric


    It is not obvious how to incorporate an energy into the Schwarzschild metric unless you derive it correctly. The metric will be presented in the following way:


    [math] c^2 d\tau^2 = (1 - 2\frac{GM}{\Delta E} \frac{M}{r} )c^2 dt^{2} - \frac{dr^2}{(1-2\frac{GM}{\Delta E} \frac{M}{r})} - r^2 d \phi^2 - r^2 \sin^2\phi d\theta^2[/math]


    This will be interpreted as


    [math] c^2 d\tau^2 = (1 - 2\frac{GM}{E - E'} \frac{M}{r} )c^2 dt^{2} - \frac{dr^2}{(1-2\frac{GM}{E - E'} \frac{M}{r})} - r^2 d \phi^2 - r^2 \sin^2\phi d\theta^2[/math]


    And this metric is dimensionally consistent for calculating energy changes within a metric. Usually we treat the spacetime metric as a fabric that can store energy. This can be a way of treating a metric with a kind of energy variation, consistent perhaps with a radiating body.



    The Schwarzschild solution is (as a necessary assumption) t-symmetric, so it doesn't make any sense to talk about changes in the central mass. A changing mass distribution will produce an entirely different solution to the EFE's.


    I'm also not really sure how you arrived at that metric. It looks pretty nonsensical to me.


    It should be entirely consistent to talk about a changing energy in a metric, considering that energy can be stored in a metric, just as in spacetime. Spacetime is made of fields which store energy; the two concepts cannot be separated.


    Knowing that the metric can be written as [math] c^2 d\tau^2 = (1 - 2\frac{G}{c^2} \frac{M}{r} )c^2 dt^{2} - \frac{dr^2}{(1-2\frac{G}{c^2} \frac{M}{r})} - r^2 d \phi^2 - r^2 \sin^2\phi d\theta^2[/math]


    Then from the gravitational parameter equation


    [math]\frac{G}{c^2}E = GM[/math]


    one can get


    [math]\frac{G}{c^2} = \frac{GM}{E}[/math]


    from this you simply replace the two [math]\frac{G}{c^2}[/math] terms in the metric with this representation




    and vary the energy in the metric.
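    The substitution rests only on the rest-energy relation E = Mc². A trivial numerical check (a sketch with rounded constants and an arbitrary test mass, not values from the thread) confirms the two coefficients are identical:

    ```python
    # Check that G/c^2 equals GM/E whenever E = M c^2 (rest energy)
    G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
    c = 2.998e8     # speed of light, m/s
    M = 5.0         # arbitrary test mass, kg

    E = M * c**2            # rest energy of the test mass
    lhs = G / c**2          # coefficient in the standard form of the metric
    rhs = G * M / E         # coefficient in the energy form

    assert abs(lhs - rhs) / lhs < 1e-12  # identical up to rounding
    ```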


    It is certainly possible, according to this paper, to talk about the energy of a Schwarzschild metric



    I had a couple of extra dt's in there which shouldn't have been there, if that added to the confusion. Sorry about that... it is quite a cumbersome equation to write out.

  12. Time has been presented by some as a measure of change.

    So the question goes like this:

    can change happen without time?

    That is not a question about simultaneity (can a change happen in zero time), that's another question.


    The question is: if you take an event A and say it changes into an event B, do we need time for the change to happen?


    For example, in mathematics, one can take an equation and let it evolve over a full page of equations, following a sequence that can go forward and backward. Isn't that an example of change without time?


    According to Julian Barbour, all there is, is change without time.


    This comes from the timeless interpretation of GR, where the Wheeler-DeWitt equation permits a vanishing time derivative.
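    For anyone unfamiliar, the point is that the Wheeler-DeWitt constraint annihilates the wavefunction, so any Schrödinger-type evolution is trivially stationary (a standard textbook statement, sketched here, not specific to Barbour):


    [math]\hat{H}\Psi = 0 \;\;\Rightarrow\;\; i\hbar \frac{\partial \Psi}{\partial t} = \hat{H}\Psi = 0[/math]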




  13. m_0 = 0 for a photon.


    The way light has mass is to introduce a new definition for mass, e.g. m=E/c^2


    Personally, I think it's best to keep any redefinition of mass to a minimum. Photons have no mass; if they did, the mass would be very, very small, something on the order of magnitude of [math]10^{-51}[/math] kg.
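    For comparison, the "effective mass" m = E/c² mentioned above can be computed for a visible-light photon (a sketch with rounded constants; note this is a different quantity from the hypothetical rest-mass bound of 10⁻⁵¹ kg):

    ```python
    # "Effective mass" m = E/c^2 of a 500 nm photon (not a rest mass)
    h = 6.626e-34    # Planck constant, J s
    c = 2.998e8      # speed of light, m/s
    lam = 500e-9     # wavelength, m

    E = h * c / lam      # photon energy, J
    m_eff = E / c**2     # equivalently h / (lam * c)
    print(m_eff)         # on the order of 1e-36 kg
    ```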

  14. That have zero mass in what sense?



    Your use of the term "mass" is inconsistent. When you choose to say that light has zero mass, then you have chosen to use the term "mass" to mean "proper mass". The proper mass of a particle does not change its value with speed. Therefore the assertion that Einstein showed that a massive particle can't travel at c because its mass would become infinite is an invalid statement.


    What is mass?


    It is a measure of inertia, of gravitational mass. It is a property of objects moving below the speed of light.


    An object which moves at lightspeed lacks all of the above. The fact that a particle with mass would require an infinite amount of energy means that this is an ''unrealistic'' proposition. No material particle gains an infinite amount of energy, so as a limit, it works well as an explanation of why particles with mass can never move exactly at the speed of light.


    The assertion that Einstein made was not wrong as far as we know, because the closer you accelerate a particle to the speed of light, the more energy is required... eventually more energy than there is in the visible universe, which is why we cannot get a particle with mass to that speed.
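    The divergence is easy to see numerically. The total energy is E = γmc² with γ = 1/√(1 − v²/c²); a quick sketch (natural units, for illustration only) shows each step toward c costing dramatically more energy:

    ```python
    import math

    # Relativistic energy E = gamma * m * c^2 grows without bound as v -> c
    m, c = 1.0, 1.0   # natural units, for illustration only
    energies = []
    for beta in (0.9, 0.99, 0.999, 0.9999):
        gamma = 1.0 / math.sqrt(1.0 - beta**2)
        energies.append(gamma * m * c**2)
    print(energies)   # each step toward beta = 1 costs far more energy
    ```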


    Maybe some might find my newest post here a little interesting. It attempts to explain mass as a charge of the field. http://www.scienceforums.net/topic/66985-a-forgotten-theory-of-mass/page__p__683191#entry683191


    Charge is simply given by the coefficients of the Lie algebra. But as my post states, mass also has very close connections to electromagnetic phenomena.

  15. It occurred to me a week ago how to vary the energy of a Schwarzschild metric.



    The Energy Changing in a Schwarzschild Metric


    It is not obvious how to incorporate an energy into the Schwarzschild metric unless you derive it correctly. The metric will be presented in the following way:


    [math] c^2 d\tau^2 = (1 - 2\frac{GM}{\Delta E} \frac{M}{r} )c^2 dt^{2} - \frac{dr^2}{(1-2\frac{GM}{\Delta E} \frac{M}{r})} - r^2 d \phi^2 - r^2 \sin^2\phi d\theta^2[/math]


    This will be interpreted as


    [math] c^2 d\tau^2 = (1 - 2\frac{GM}{E - E'} \frac{M}{r} )c^2 dt^{2} - \frac{dr^2}{(1-2\frac{GM}{E - E'} \frac{M}{r})} - r^2 d \phi^2 - r^2 \sin^2\phi d\theta^2[/math]


    And this metric is dimensionally consistent for calculating energy changes within a metric. Usually we treat the spacetime metric as a fabric that can store energy. This can be a way of treating a metric with a kind of energy variation, consistent perhaps with a radiating body.

  16. Light follows what are called geodesics. Light couples to the distortions of spacetime. But yes, gravity affects spacetime, causing curvature, and that curved geometry determines the paths that massless radiation follows, called geodesics. I'd write out the maths, but I think an explanation without it suffices.
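    For completeness, the standard statement (textbook GR, sketched here) is that light follows null geodesics: paths obeying the geodesic equation with a vanishing spacetime interval,


    [math]\frac{d^2 x^{\mu}}{d\lambda^2} + \Gamma^{\mu}_{\alpha\beta}\frac{dx^{\alpha}}{d\lambda}\frac{dx^{\beta}}{d\lambda} = 0, \qquad ds^2 = 0[/math]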
