dt ramblings


Johnny5


What do we do about functions that are added together? Well, we can use a really simple fact about the derivative of this function:

 

[math]\frac{dy}{dx} = \frac{d}{dx}(x^2+5x+6) = \frac{d}{dx}(x^2) + \frac{d}{dx}(5x) + \frac{d}{dx}(6) = 2x + 5[/math].

 

We can just split up the derivative into sums of little bits that we know how to differentiate.
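
As a quick numerical sanity check (a Python sketch; the point x = 2 and the step h below are arbitrary choices, not part of the lesson), the difference quotient of x^2 + 5x + 6 with a small step should land close to 2x + 5:

[code]
# Difference-quotient check of d/dx (x^2 + 5x + 6) = 2x + 5.
# The step h is an arbitrary small number, not an infinitesimal.

def f(x):
    return x**2 + 5*x + 6

x, h = 2.0, 1e-6
approx = (f(x + h) - f(x)) / h   # finite difference quotient
exact = 2*x + 5                  # differentiating term by term
print(approx, exact)             # approx is about 9.000001, exact is 9.0
[/code]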

 

Sidenote for the interested reader: Try to prove this using first principles - it's not all that hard!

 

Theorem:

 

[math] \frac{d(A+B)}{dt} = \frac{dA}{dt} + \frac{dB}{dt} [/math]

 

To prove the theorem, it suffices to prove the following statement is true:

 

[math] d(A+B) = dA + dB [/math]

 

It suffices, because if the previous statement is true, and dt is nonzero, then we can divide both sides by dt, to obtain the theorem we are attempting to prove.

 

Let

 

[math] Q = A+B [/math]

 

Thus, we are attempting to prove that:

 

[math] dQ = dA + dB [/math]

 

Definition:

 

[math] dQ \equiv Q_2 - Q_1 [/math]

 

Where we have used subtraction to define the differential of quantity Q. Q1 is the value of Q at one moment in time, and Q2 is the value of Q at the very next moment in time. The amount of time which has passed is called a time differential, and is denoted as dt.

 

By definition:

 

[math] d(A+B) \equiv (A_2 + B_2) - (A_1 + B_1) [/math]

 

Where A1 is the value of quantity A at some moment in time, and B1 is the simultaneous value of quantity of B. A2 is the value of A at the very next moment in time, and B2 is the simultaneous value of B.

 

Now, we can just use the field axioms of algebra at will, to prove what we want to. In particular, using just those axioms, we can prove conclusively that:

 

 

[math] (A_2 + B_2) - (A_1 + B_1) = (A_2 - A_1) + (B_2 - B_1) [/math]

 

Now, by definition A2-A1 is dA, and B2 - B1 is dB, therefore:

 

 

[math] (A_2 + B_2) - (A_1 + B_1) = dA + dB [/math]

 

Thus, we have shown this:

 

[math] d(A+B) = dA + dB [/math]

 

Now, divide both sides of the statement above, by dt, to obtain:

 

[math] \frac{d(A+B)}{dt} = \frac{dA}{dt} + \frac{dB}{dt} [/math]

 

which is the theorem. QED
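
As a rough numerical sketch of the identity just proved (the sample values of A and B below are arbitrary), the differences simply rearrange:

[code]
# d(A + B) = dA + dB for two quantities sampled at two moments t1 and t2.

A1, A2 = 3.0, 3.7     # value of A at t1, then at the very next moment t2
B1, B2 = 10.0, 9.4    # simultaneous values of B

dA = A2 - A1
dB = B2 - B1
dQ = (A2 + B2) - (A1 + B1)   # differential of Q = A + B

print(dQ, dA + dB)            # both are 0.1, up to floating-point rounding
[/code]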

 

The result is generalizable too.

 

Suppose that you have:

 

d(A+B+C)

 

Let D=B+C

 

So you have

 

d(A+D)

 

By the theorem just proven you know this:

 

d(A+D) = dA+dD

 

Therefore:

 

d(A+D) = dA+dD = dA + d(B+C)

 

By the theorem just proven you know that:

 

d(B+C) = dB + dC

 

Hence, if you know the theorem just proven, then you can figure out that:

 

[math] d(A+B+C) = dA + dB + dC [/math]

 

In which case you can learn this:

 

[math] d [\Sigma_{n=1}^{n=N} Q_n ]= \Sigma_{n=1}^{n=N} d[Q_n] [/math]
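
The same thing can be checked numerically for a whole list of quantities (a small Python sketch; the values below are arbitrary): the change in the sum equals the sum of the changes.

[code]
# d[ sum of Q_n ] = sum of d[Q_n] for N quantities sampled at two moments.

Q1 = [2.0, -1.0, 5.5, 0.25]   # values of Q_1 ... Q_N at one moment
Q2 = [2.5, -1.5, 6.0, 0.25]   # their values at the very next moment

d_of_sum = sum(Q2) - sum(Q1)                    # differential of the sum
sum_of_d = sum(b - a for a, b in zip(Q1, Q2))   # sum of the differentials

print(d_of_sum, sum_of_d)    # both equal 0.5
[/code]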

 

 

 

Dave, if you object to this method of proof, please let me know why.

 

Thank you



My qualm with it is that you can't just "divide" by dt. I know that we all do, but it's not proper - d/dt is an operator, and as such you can't really mess around with it that much.

 

A much nicer (and quicker) way of doing it is just to use the definition of the derivative, which I haven't really put down.

 

[math]\frac{d}{dx} f(x) = \frac{f(x+h) - f(x)}{h}[/math]

 

A simple re-arrangement can give you the answer.
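
For the interested reader, the re-arrangement might run along these lines, writing f and g for the two functions being added:

[math] \frac{(f+g)(x+h) - (f+g)(x)}{h} = \frac{f(x+h) - f(x)}{h} + \frac{g(x+h) - g(x)}{h} [/math]

Letting h tend to zero on both sides then gives the sum rule for derivatives.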


My qualm with it is that you can't just "divide" by dt. I know that we all do, but it's not proper - d/dt is an operator, and as such you can't really mess around with it that much.

 

A much nicer (and quicker) way of doing it is just to use the definition of the derivative, which I haven't really put down.

 

[math]\frac{d}{dx} f(x) = \frac{f(x+h) - f(x)}{h}[/math]

 

A simple re-arrangement can give you the answer.

 

Well dave, you are missing the limit concept; you have to take the limit as h goes to zero, don't you?

 

And h is called the "step size" in the finite discrete difference calculus, right?


I'm certainly not missing the limit concept. Yes, the idea is to divide by some infinitesimally small quantity, but I'm trying to teach/give examples of using proper mathematical notation.


I thought you put that definition in the second lesson, and it was implied in the first.

What does [math] d [\Sigma_{n=1}^{n=N} Q_n ]= \Sigma_{n=1}^{n=N} d[Q_n] [/math] mean?

 

"differential of a sum equals the sum of the differentials"

 

d(A+B+C+D+E) = dA+dB+dC+dD+dE

 

That kind of thing.

 

 

[math] d [\Sigma_{n=1}^{n=N} Q_n ] [/math]

 

The differential of the sum, from n equals one to n equals N, of Q sub n.

 

[math] \Sigma_{n=1}^{n=N} d[Q_n] [/math]

 

The sum, from n equals one to n equals N, of the differential of Q sub n.

 

When you see the capital Greek letter sigma written that way, it just means repeated addition, and is called summation notation. But you knew that; this is for anyone reading who didn't...


I'm certainly not missing the limit concept. Yes, the idea is to divide by some infinitesimally small quantity, but I'm trying to teach/give examples of using proper mathematical notation.

 

So how am I to interpret h?

 

From finite discrete difference calculus, we have:

 

Definition: [math] \Delta f(x) = f(x+h) - f(x) [/math]

 

where Delta is the difference operator, and h is the step size.
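
A minimal Python sketch of that difference operator (the function f and the step sizes below are arbitrary choices):

[code]
# Forward difference operator: Delta f(x) = f(x + h) - f(x), with step size h.

def forward_difference(f, x, h):
    """Return Delta f(x) for step size h."""
    return f(x + h) - f(x)

def f(x):
    return x**2 + 5*x + 6

print(forward_difference(f, 2.0, 1.0))    # 10.0 for a finite step
print(forward_difference(f, 2.0, 0.001))  # about 0.009001, roughly f'(2) * h
[/code]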


I don't really get where you're coming from, to be honest. I've said it twice now: d/dt is an operator. It's not a fraction. You can't divide everything by dt. It's a nice, shorthand way of writing things down, and it's the right kind of idea, but that doesn't change the fact that it doesn't actually make sense from a strictly mathematical point of view.


I don't really get where you're coming from, to be honest. I've said it twice now: d/dt is an operator. It's not a fraction. You can't divide everything by dt. It's a nice, shorthand way of writing things down, and it's the right kind of idea, but that doesn't change the fact that it doesn't actually make sense from a strictly mathematical point of view.

 

I know it's an operator, and I know it's not a fraction.

 

I don't see why I cannot divide whatever I wish by dt, given that it's

 

1. nonzero

 

But at any rate, why doesn't it make sense from a strictly mathematical point of view?


Sure, you could: but what is dt? It's meaningless by itself.

 

Time can only flow in one direction. In other words, dt is strictly positive.

 

So specifically...

 

[math] dt = t_2 - t_1 [/math]

 

Where t1 is the value of the time coordinate of some three dimensional rectangular coordinate system, at some moment in time, and

 

t2 is the value of the time coordinate of that three dimensional rectangular coordinate system, at the very next moment in time.

 

By saying that it's at the very next moment in time, we ensure that:

 

1. not (t1 = t2)

 

So we can divide by dt, and we ensure that:

 

2. t2 > t1

 

So that dt is positive.

 

I have analyzed dt as much as is possible; the analysis used only the concept of subtraction.


How can dt be a number if it's supposed to be an infinitesimal quantity?

 

Let

 

[math] dt = 1 [/math]

 

There you go, now it's a number.

 

For the purposes of doing physics, with t representing time, I would think that you need units. One state change. One time unit. Something like that.

 

hence

 

[math] dt = 1 TU[/math]

 

TU = time unit, one state change.

 

At some moment in time, the universe is in state S1.

 

At the very next moment in time, the universe is in state S2.

 

Things moved, the state changed, there was a change of state.

 

From a purely abstract point of view...

 

[math] \Delta S = S_2 - S_1 [/math]

 

If S2=S1 then nothing moved relative to anything else, and time hasn't passed :)

 

 

So dt > 0 if and only if ΔS > 0.


[math] dt = 1 TU[/math]

 

TU = time unit, one state change.

 

:confused:

 

Let's go through this. You've stated already that you'd like to declare:

 

[math]\Delta f(x) = f(x+h) - f(x)[/math].

 

Okay. I can deal with that. Now, the idea is that, yes, you have some small change [math]h = \Delta t[/math]. So, we divide through by this:

 

[math]\frac{\Delta f(x)}{\Delta t} = \frac{f(x+h) - f(x)}{h}[/math].

 

Now, taking [math]\Delta t \to 0[/math],

 

[math]\lim_{\Delta t \to 0} \frac{\Delta f(x)}{\Delta t} = \frac{df}{dt}[/math].

 

The idea being that you divide through by some finite amount to start off with, and as you let that finite amount get arbitrarily small, we obtain the derivative.

 

Setting dt = 1 just doesn't make sense.
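
A small numerical sketch of that limiting process (the function f(t) = t^2 and the point t = 3 are arbitrary choices): divide a finite change in f by a finite Delta t, then let Delta t shrink, and the quotients settle down to the derivative.

[code]
# Delta f / Delta t approaches df/dt = 6 at t = 3 as Delta t gets small.

def f(t):
    return t * t

t = 3.0
for dt in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    quotient = (f(t + dt) - f(t)) / dt   # a finite difference quotient
    print(dt, quotient)                  # prints roughly 7.0, 6.1, 6.01, ...
[/code]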


:confused:

 

Let's go through this. You've stated already that you'd like to declare:

 

[math]\Delta f(x) = f(x+h) - f(x)[/math].

 

Okay. I can deal with that. Now, the idea is that, yes, you have some small change [math]h = \Delta t[/math]. So, we divide through by this:

 

[math]\frac{\Delta f(x)}{\Delta t} = \frac{f(x+h) - f(x)}{h}[/math].

 

Now, taking [math]\Delta t \to 0[/math],

 

[math]\lim_{\Delta t \to 0} \frac{\Delta f(x)}{\Delta t} = \frac{df}{dt}[/math].

 

The idea being that you divide through by some finite amount to start off with, and as you let that finite amount get arbitrarily small, we obtain the derivative.

 

Setting dt = 1 just doesn't make sense.

 

 

Actually, I don't want to declare that:

 

[math] \Delta f(x) = f(x+h) - f(x) [/math]

 

That's already been defined by others; I didn't say I want to do it that way. I really just want to see what you have to say on the matter. As a matter of fact, I define the infinitesimal difference operator using first order logic, but that's my own business.

 

I didn't say that I wanted to set dt = 1; you did.

 

Plus, you are ignoring whether you take the limit from the left or the right. All I wanted to do was answer your question as to how to prove the following fact:

 

[math] \frac{d(A+B)}{dx} = \frac{dA}{dx}+ \frac{dB}{dx} [/math]

 

Now I'm just trying to understand what your objection is to the method of proof I used. I just want to prove the fact rapidly, using only algebra. I know that I can do it quickly and elegantly using the infinitesimal difference.

 

And as you saw, I removed dx from the denominator first.


 

[math]\frac{\Delta f(x)}{\Delta t} = \frac{f(x+h) - f(x)}{h}[/math].

 

Now, taking [math]\Delta t \to 0[/math],

 

[math]\lim_{\Delta t \to 0} \frac{\Delta f(x)}{\Delta t} = \frac{df}{dt}[/math].

 

Yes, this is the classical definition of the derivative.


Johnny, what the hell are you talking about? Now isn't the time to introduce differentials or 1-forms into this topic. Stop it, please. Dave was doing a good job of discussing derivatives without your off-topic intrusion into unnecessary, not to say inconsistent, parts of mathematics.

 

In post 13 you declare dt = 1. At no point does Dave declare that dt is a number; he sticks with the proper idea that d/dt is an operator.

 

Then in post 17 you state that Dave wanted it to be a number. Well, he didn't; you set it to be one, despite the fact that, when treated properly, it is a differential, not a number. You are not treating it properly, by the way. In fact, I get the impression that this is all kind of new to you and you're not well versed in these ideas.

 

The point is that derivatives are about limits, not difference equations. There are formal similarities, but that is all. Indeed, the differences are so large that they ought not to be discussed in the same arena at this stage.

 

Forget the "classical" definition of derivative, it is THE definitoin of derivative, and fortunately limits commute with finite sums (when everything exists).


Actually, I don't want to declare that:

 

[math] \Delta f(x) = f(x+h) - f(x) [/math]

 

That's already been defined by others; I didn't say I want to do it that way. I really just want to see what you have to say on the matter. As a matter of fact, I define the infinitesimal difference operator using first order logic, but that's my own business.

 

I've never heard of the infinitesimal difference operator myself, but sure.

 

I didn't say that I wanted to set dt = 1; you did.

 

No, I objected to the fact that you wanted to set [math]dt = t_2 - t_1[/math]. dt is supposed to be infinitesimal, and as such it can't be defined like this.

 

Plus, you are ignoring whether you take the limit from the left or the right.

 

No, I'm not. If you want to take a limit from the right or left, then the notation used is commonly [math]\lim_{h\to 0^{+}}[/math] and [math]\lim_{h\to 0^{-}}[/math]. This is the "proper" definition of the limit:

 

[math]\lim_{x\to c} f(x) = l \Leftrightarrow \forall \epsilon > 0 \ \exists \delta > 0 \text{ such that } 0 < |x-c| < \delta \Rightarrow | f(x) - l | < \epsilon[/math].

 

As you can see, we're taking values of x in a neighbourhood of c, not just to the left or right.
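
A rough numerical sketch of that definition (my own choice of example: f(x) = x^2, c = 2, l = 4, and delta = epsilon/5, which works whenever epsilon is at most 1): for each epsilon we exhibit a delta and spot-check points on both sides of c.

[code]
# Spot-checking the epsilon-delta condition for lim_{x -> 2} x^2 = 4.
# delta = eps / 5 suffices when eps <= 1, since |x^2 - 4| = |x - 2||x + 2| < 5*delta.

def f(x):
    return x * x

c, l = 2.0, 4.0

for eps in [0.5, 0.05, 0.005]:
    delta = eps / 5
    # sample points with 0 < |x - c| < delta, on both sides of c
    xs = [c - delta / 2, c - delta / 10, c + delta / 10, c + delta / 2]
    ok = all(abs(f(x) - l) < eps for x in xs)
    print(eps, delta, ok)   # ok is True for every sampled point
[/code]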

 

All I wanted to do was answer your question as to how to prove the following fact:

 

[math] \frac{d(A+B)}{dx} = \frac{dA}{dx}+ \frac{dB}{dx} [/math]

 

Now I'm just trying to understand what your objection is to the method of proof I used. I just want to prove the fact rapidly, using only algebra. I know that I can do it quickly and elegantly using the infinitesimal difference.

 

And as you saw, I removed dx from the denominator first.

 

I've given you my reasoning :)


I've never heard of the infinitesimal difference operator myself, but sure.

 

 

 

No, I objected to the fact that you wanted to set [math]dt = t_2 - t_1[/math]. dt is supposed to be infinitesimal, and as such it can't be defined like this.

 

 

It is infinitesimal, as t2 and t1 are supposed to be adjacent moments in time, in some chosen frame of reference.


You're thinking of this from a purely physical point of view. Define "adjacent" in known mathematical terms.

 

No problem.

 

 

Let A, B denote two moments in time.

 

If there is a moment in time M, such that either

 

A before M and M before B

 

OR

 

B before M and M before A

 

Then it is NOT the case that A and B are adjacent moments in time.

 

On the other hand, if there is no moment in time M such that

 

A before M and M before B

 

OR

 

B before M and M before A

 

And additionally, not (A simultaneous B), then A, B are adjacent moments in time.

 

You can now use first order logic to symbolize this appropriately.

 

Let me try it.

 

Let the domain of discourse be the set of moments in time.

 

Let A,B be elements of the domain of discourse.

 

We can easily prove trichotomy now.

 

Trichotomy: A simultaneous B or A before B or B before A.

 

Definition:

 

In the following definition, let it be the case that A before B.

 

A,B are adjacent moments in time if and only if

 

not [ ∃M (A before M and M before B) ].
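
As a rough sketch of how that definition behaves in a discrete model (an assumption of mine for illustration: moments of time are integers, and "before" just means <), adjacency can be checked mechanically in Python:

[code]
# Toy discrete-time model: moments are integers, "A before B" means A < B.
# A and B are adjacent exactly when no moment lies strictly between them.

def adjacent(a: int, b: int) -> bool:
    if not a < b:                               # require "A before B"
        return False
    return not any(a < m < b for m in range(a + 1, b))

print(adjacent(3, 4))   # True: no moment lies strictly between 3 and 4
print(adjacent(3, 5))   # False: the moment 4 lies between them
[/code]

Whether time really can be modelled discretely like this is, of course, the point under discussion below.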


Let A, B denote two moments in time.

 

If there is a moment in time M, such that either

 

A before M and M before B

 

OR

 

B before M and M before A

 

Then it is NOT the case that A and B are adjacent moments in time.

 

So, what's a "moment in time" defined mathematically? If you're talking about "time" being represented by the set of reals - which we have to really, otherwise the notion of a limit wouldn't really make sense - then your definition would imply that we can never have two "adjacent moments in time". Indeed, it's quite a nice example of the point I'm trying to get across.

 

You have to realise that from a purely mathematical point of view, we don't care about time or any other physical quantity.


So, what's a "moment in time" defined mathematically?

 

First of all, there is no reason to define "moment in time" mathematically, because you can define it operationally. There is nothing forbidding it, I suppose, but I don't suggest it. But I know the answer you want is 'point'. Certainly not 'point' in any spatial sense, though.

 

At any rate, time cannot be modelled by the real numbers, because between any two reals there is another real.
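
For instance (a one-line check of that density claim), the midpoint always lies strictly between two distinct reals:

[math] t_1 < \frac{t_1 + t_2}{2} < t_2 \quad \text{whenever } t_1 < t_2 [/math]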

 

So in the case of 'time', that would mean there was no 'very next' moment in time, but in reality there really was a next moment in time. That is one of many reasons you cannot use the real number system to model time.

 

Kind regards

