# New discrete theory

## Recommended Posts

There are some common ideas, like using a grid and rules. States are updated step by step.

But there's a difference: Edward Fredkin uses deterministic rules, whereas nokton theory uses some kind of probabilistic rules.
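To make the contrast concrete, here is a minimal sketch of both update styles on a 1-D grid of binary cells. The specific rules are hypothetical illustrations chosen for simplicity (a rule-90 cellular automaton and a noisy variant of it), not taken from Fredkin's model or from nokton theory.

```python
import random

def deterministic_step(cells):
    """Fredkin/Wolfram-style update: the next state is a fixed function
    of the neighbourhood (here, XOR of the two neighbours: rule 90)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

def probabilistic_step(cells, p=0.9):
    """Stochastic update: each cell keeps the deterministic result with
    probability p, otherwise it flips."""
    nxt = deterministic_step(cells)
    return [c if random.random() < p else 1 - c for c in nxt]

grid = [0, 0, 0, 1, 0, 0, 0]
print(deterministic_step(grid))  # -> [0, 0, 1, 0, 1, 0, 0]
```

Run twice on the same grid, the deterministic rule always gives the same successor; the probabilistic rule generally does not, which is the essential difference being discussed.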

There have been a lot of attempts to develop theories like this (Stephen Wolfram is perhaps the most high profile). As far as I am aware none of them have achieved anything significant.

##### Share on other sites

I agree with you.

But what Wolfram and Fredkin have in common is the use of non-probabilistic rules, like cellular automata.

##### Share on other sites

Can you explain more?

Unfortunately the builders pulled out my line along with the phone and internet a couple of days ago.

I have now got a temporary fix in place and I see that things have moved on a bit.

Yes but I need to know where to start.

In other words I don't know your level of mathematical knowledge.

Do you know the difference between a function and an operator, or that, whilst the solution to a function equation is a value, the solution to an operator equation is a function?

If not I will start by explaining this.
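As a quick illustration of that distinction (my own example, not part of the explanation promised above): a function equation is solved by a number, an operator equation by a function.

```latex
% Function equation: the unknown is a number.
x^2 - 4 = 0 \quad\Rightarrow\quad x = \pm 2
% Operator equation (here the operator is d/dx): the unknown is a function.
\frac{d}{dx}f(x) = f(x) \quad\Rightarrow\quad f(x) = C e^{x}
```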

##### Share on other sites

I'm a graduate. I know exactly what a function is. For operators I have an idea.

##### Share on other sites

Hi studiot, I see you are connected to the forum, can you respond to my last post?

##### Share on other sites

Consider this function, which was first enunciated by Gauss well before any quantum or atomic theory.

$f(t) = \frac{1}{{\sqrt {2\pi } }}\frac{1}{\tau }{e^{\left( { - \frac{{{t^2}}}{{2{\tau ^2}}}} \right)}}$

Now take the Fourier transform

$g(w) = \frac{1}{2\pi \tau }\int\limits_{ - \infty }^\infty e^{\left( - \frac{t^2}{2\tau ^2} \right)} e^{\left( - iwt \right)}dt$

Completing the square and executing some algebra leads to

$= \frac{e^{\left( - \frac{\tau ^2 w^2}{2} \right)}}{\sqrt {2\pi } }\frac{1}{\sqrt {2\pi } \tau }\int\limits_{ - \infty }^\infty e^{\left( - \frac{\left( t + i\tau ^2 w \right)^2}{2\tau ^2} \right)} dt$

The integral on the right can be shown to equal one by complex integration so

$g(w) = \frac{1}{{\sqrt {2\pi } }}{e^{\left( { - \frac{{{\tau ^2}{w^2}}}{2}} \right)}}$

Which is of the same form (in w) as the original function in t.

This is, of course, the normal or Gaussian distribution in statistics.

The spread or uncertainty for each is

$\Delta t = \tau$ and $\Delta w = 1/\tau$

$\Delta w\Delta t = 1$

Which is an uncertainty theorem.

A physicist will tell you that if t is time and w is frequency of an electrical impulse, then the above pair tells us that the narrower an electrical impulse, the greater the spread of its frequency components.

She might also say that in classical wave theory wave number k and position x are similarly related so

$\Delta k\Delta x = 1$
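The transform pair derived above can be checked numerically. The sketch below evaluates the Fourier integral for the Gaussian f(t) by direct quadrature (the value of τ is an arbitrary choice for the check) and compares it with the closed form g(w) from the derivation.

```python
import numpy as np

# Verify the pair from the derivation above:
#   f(t) = exp(-t^2/(2 tau^2)) / (sqrt(2 pi) tau)
#   g(w) = (1/sqrt(2 pi)) * integral f(t) exp(-i w t) dt
#        = exp(-tau^2 w^2 / 2) / sqrt(2 pi)
tau = 0.7                                  # arbitrary spread for the check
t = np.linspace(-30.0, 30.0, 200001)       # wide, fine grid for quadrature
dt = t[1] - t[0]
f = np.exp(-t**2 / (2 * tau**2)) / (np.sqrt(2 * np.pi) * tau)

for w in [0.0, 0.5, 1.3]:
    numeric = np.sum(f * np.exp(-1j * w * t)) * dt / np.sqrt(2 * np.pi)
    closed = np.exp(-tau**2 * w**2 / 2) / np.sqrt(2 * np.pi)
    assert abs(numeric - closed) < 1e-8    # transform matches the closed form

print("transform pair verified")           # prints "transform pair verified"
```

With Δt = τ and Δw = 1/τ the product ΔwΔt = 1 follows identically, which is the dimensionless uncertainty relation stated above.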

##### Share on other sites

Thank you, now I understand the origin of the Heisenberg inequality.

My question is: does the same kind of inequality exist for this "new nokton theory"?

##### Share on other sites

Doesn't the very nature of these 'noktons' contradict the uncertainty principle?

In the uncertainty principle the values taken by the two operators are allowed to vary continuously, but the value taken (or assumed) by one affects the value allowable for the other. Since their deltas have an inverse relationship, we can say that the larger one delta is, the smaller the other is, but there is (in mathematical theory) no upper or lower limit to either.

Discretisation of the values (quantisation in physics) by introducing lower (and, by implication, upper) limits changes things, and is currently the subject of some debate.

Is reality discrete or continuous?

Professor Shahn Majid of London University has published an interesting book, collecting thoughts from many famous scientists and mathematicians, on this matter.

On Space and Time

Shahn Majid

Cambridge University Press

Edit: a couple of interesting points about the version of the uncertainty theorem in my last post.

The theorem above is purely numerical and has no units, whereas the Heisenberg theorem has units.

My presentation was unusual and constructed to avoid quantum theory for demonstration purposes.

An interpretation of the theorem that is often given is that it applies to processes involving the composition/convolution of two operators, say AB.

If the order is important, that is if AB is not equal to BA, then (AB - BA) is not zero and the relation can be derived from this.

Physically, this allows for the fact that if you first fix the momentum and then measure the position where this occurs, you will obtain a different result from fixing the position and then measuring the momentum at that position.
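Non-commutation is easy to exhibit with small matrices. The sketch below is a toy illustration only (two Pauli matrices standing in for A and B, not the position and momentum operators themselves), showing that AB - BA need not vanish.

```python
import numpy as np

# Two 2x2 matrices standing in for the operators A and B:
A = np.array([[0, 1], [1, 0]], dtype=complex)   # sigma_x
B = np.array([[0, -1j], [1j, 0]])               # sigma_y

# The commutator AB - BA measures how much the order matters.
commutator = A @ B - B @ A                      # equals 2i * sigma_z

print(np.allclose(commutator, 0))               # prints False: order matters
```

Because the commutator is nonzero, applying A then B is genuinely different from applying B then A, which is the algebraic fact behind the uncertainty relation described above.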

This is where the confusion arises leading to the misunderstanding that it is only a measurement issue and not inherent in the theory.

Edited by studiot
