timo

Senior Members
  • Content Count

    3397
  • Joined

  • Last visited

Community Reputation

554 Glorious Leader

About timo

  • Rank
    Scientist

Profile Information

  • Location
    Germany
  • Interests
    Math, Renewable Energies, Complex Systems
  • College Major/Degree
    Physics
  • Favorite Area of Science
    Data Analysis
  • Biography
    school, civil service, university, public service, university, university, research institute (and sometimes "university", as of lately)
  • Occupation
    Ensuring a steady flow of taxpayer money to burn

Recent Profile Visitors

22838 profile views
  1. There are two approaches here, the formal one and the brain-compatible one. 1) Formally: Realize that there are hidden coordinate dependencies. You are probably looking to construct a function p(y). Since y is the coordinate at the lower side, you have p(y) at the lower edge and p(y+dy) at the upper. The latter is (possibly) slightly different from p(y) (because of the displacement dy). If you call the difference dp, then p(y+dy) = p(y) + dp(y) = p + dp (note that dp can and will be negative). 2) Brain-compatible: Put the equation first and then define the variables: The forces on top and bottom should cancel out. The upward force is the pressure force p*A from below. The downward force is the pressure force p2*A from above the small fluid element plus the weight dw of the small fluid element. Since we are talking about infinitesimal coordinates, and since p(y) should be a function, it makes sense to set p2 = p + dp, which you can then integrate over.
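To make the force balance explicit (a short sketch; the fluid density [math]\rho[/math] and the cross-section A of the fluid element are my notation, not from the thread): with the element's weight [math] dw = \rho g A \, dy [/math], the balance of upward and downward forces reads [math] p A = (p + dp) A + \rho g A \, dy \quad \Rightarrow \quad dp = -\rho g \, dy [/math] Integrating from 0 to y (for constant density) then gives the familiar [math] p(y) = p(0) - \rho g y [/math].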
  2. In my opinion, the content you listed is below the minimum required for AI (not really sure what "Data Science" is, except for a popular buzzword that sounds like Google or Facebook). More precisely: Apply these topics to multi-dimensional functions and you should have the basis of what is needed for understanding learning rules in AI. However: All of the content you listed is the minimum to finish school in Germany (higher-level school that allows applying to a university, that is), even if you are planning to become an art teacher. And Germany is not exactly well known for its students' great math skills. The course looks like a university-level repetition of topics you should already know how to use, i.e. a formally correct treatment of things that were taught hands-on before. I do not think a more rigorous repetition of topics will help you much, since you are more likely to work on the applied trial&error side. Bottom line: If you are already familiar with all the topics listed, I think you can skip the course. If not, your education system may be too unfamiliar to me to give you any sensible advice. Btw.: University programs tend to be designed by professionals. So if a course is not listed as mandatory, it is probably not mandatory.
  3. By this standard, physics is not very strange most of the time.
  4. The equation is not particular to Compton scattering. It is the relation between momentum and energy for any free particle (including, in this case, electrons). I am not sure what you consider a derivation or what your skill level is. But maybe this Wikipedia article, or at least the article name, is a good starting point for you: https://en.wikipedia.org/wiki/Energy–momentum_relation
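For reference, the relation in question is [math] E^2 = (mc^2)^2 + (pc)^2 [/math] with m the mass and p the momentum of the free particle. For a particle at rest (p = 0) it reduces to the famous E = mc^2, and for a massless particle such as a photon it reduces to E = pc.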
  5. There is no law of conservation of mass. Quite the contrary: The discovery that mass can be converted to energy, and that very little mass produces a lot of energy, was a remarkable finding of early 20th-century physics. The best-known use is nuclear power plants, where part of the mass of decaying uranium is converted to heat (and then to electricity). The more modern, but from your perspective even more alien view is that mass literally is a form of energy (I tend to think of it as "frozen energy"). In that view, you can take the famous E=mc^2 literally. There is a law of conservation of energy, but energy can be converted between different forms. In your example, it is converted from kinetic energy of the photons to mass-energy of the electron and the positron (and a bit of kinetic energy for both of them). Note that the more general form of E=mc^2 is E^2 = (mc^2)^2 + (pc)^2 with p the momentum of the object - it simplifies to the more famous expression for zero momentum. I say this to make the connection to your other question where you asked about this equation.
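As a quick worked number (a standard textbook value, not from the thread): creating the electron-positron pair requires at least the pair's rest energy, [math] E_{min} = 2 m_e c^2 \approx 2 \times 0.511 \, \text{MeV} = 1.022 \, \text{MeV} [/math] so the process can only happen if the photons supply at least roughly 1 MeV of energy; anything beyond that goes into the kinetic energy of the pair.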
  6. This post is a bit beyond the original question, which has been answered - as Psi being a common Greek letter to label a wave function and wave functions being used to describe (all) quantum mechanical states. I do, however, have the feeling that I do not agree with some of what your replies seem to imply about superposition, namely that it is a special property of a state. So I felt the urge to add my view on superposition. Fundamentally, superposition is not a property of a quantum mechanical state. It is a property of how we look at the state - at best. Consider a system in which the space S of possible states is spanned by the basis vectors |1> and |2>. We tend to say that [math] | \psi _1 > = (|1> + |2>)/ \sqrt{2} [/math] is in a superposition state and [math] | \psi _2 > = |1>[/math] is not. However, [math] |A> = (|1> + |2>) / \sqrt{2} [/math] and [math] |B> = (|1> - |2>) / \sqrt{2} [/math] are just as valid basis vectors for S as |1> and |2> are. In this basis, [math] | \psi _2 > = (|A> + |B>)/ \sqrt{2} [/math] is the superposition state and [math] | \psi _1 > = |A>[/math] is not. There may be good reasons to prefer one basis over the other, depending on the situation. But even in these cases I do not think that superposition should be looked at as a property of the state, but at best as stemming from the way I have chosen to look at the state. Personally, I think I would not even use the term superposition in the context of particular states (although a search of my older posts may prove that wrong :P). I tend to think of it more as the superposition principle, i.e. the concept that linear combinations of solutions to differential equations are also solutions. This is kind of trivial, and well known from e.g. the electric field. The weird parts in quantum mechanics are 1) the need for the linear combination to be normalized (at least I never could make sense of this) and 2) that states that seem to be co-linear by intuition are perpendicular in QM. For example, a state with a momentum of 2 Ns is not two times the state of 1 Ns but an entirely different basis vector. Superposition in this understanding almost loses any particularity to QM. Edit: Wrote 'mixed' instead of 'superposition' twice, which is an entirely different concept. Hope I got rid of the typos now.
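To spell out the basis change (a quick verification using the definitions above): [math] (|A> + |B>)/ \sqrt{2} = \left( (|1> + |2>) + (|1> - |2>) \right) / 2 = |1> = | \psi _2 > [/math] while [math] | \psi _1 > = (|1> + |2>)/ \sqrt{2} = |A> [/math] holds by definition. So each of the two states is a plain basis vector in one basis and an equal-weight superposition in the other.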
  7. To turn rotational energy into electric energy you can indeed use dynamos, as you already assumed. Or better: Dynamo-like devices. The general term seems to be https://en.wikipedia.org/wiki/Electric_generator. Essentially, i.e. from a physics perspective, electric generators move magnets around in the vicinity of looped electric wires ("electromagnets"). This induces a current in the wires. Doing this moving around in a controlled way generates a controlled current. Sidenote: I thought the general term for a dynamo is a "turbine". But according to Wikipedia that term refers only to the moving part. Still, turbines are so closely related to electric power generation that looking up that concept may be relevant, too.
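The underlying law is Faraday's law of induction (standard physics, not specific to any particular generator design): the voltage induced in a coil of N turns is [math] U = -N \, \frac{d\Phi}{dt} [/math] where [math]\Phi[/math] is the magnetic flux through a single loop. Moving the magnet changes the flux over time, and that change is what drives the current.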
  8. Thermodynamics, in its general meaning, is always equilibrium thermodynamics. Calculations of processes assume that the systems go through a series of equilibrium states during the process, which is called a quasi-static process.

Reversibility is not required for studiot's entropy change equation in the first post. The equation is fully applicable to bringing two otherwise isolated systems with different temperatures into thermal contact, which I will use as an example: In the theory of thermodynamic processes, both systems' states change to the final state through a series of individual equilibrium states (*). Because of their different temperatures and conservation of energy (and because/if the higher-temperature system is the one losing heat to the colder one), the sum of entropies increases. In the final state, both systems can be considered as two sub-volumes of a common system that is in thermal equilibrium. Since the common system is in equilibrium (and isolated), it has a defined entropy. Since entropy is extensive, it can be calculated as the sum of the two entropies of the original systems' end states.

As far as I understand it, removing barriers between two parts of a container is essentially the same as bringing two systems into thermal contact, except that the two systems can exchange particles instead of heat. The two systems that are brought into contact are not isolated - they are brought into contact. If one insists on calling the two systems a single, unified system right after contact, then this unified system is not in an equilibrium state (**). And I believe this is exactly where your disagreement lies: Does this unified, non-equilibrated state have an entropy? I do not know. I am tempted to go with studiot and say that entropy in the strict sense is a state variable of thermal equilibrium states - just from a gut feeling. On the other hand, in these "bring two sub-volumes together" examples, the sum of the two original entropies under a thermodynamic process seems like a good generalization of the state variable and converges to the correct state value at the end of the process.

(*): I really want to point out that this is merely a process in the theory framework of equilibrium thermodynamics. It is most certainly not what happens in reality, where a temperature gradient along the contact zone is expected.

(**): In the absence of a theory for non-equilibrium states this kind of means that it is not a defined thermodynamic state at all. But since there obviously is a physical state, I will ignore this for this post.
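As a concrete instance of the thermal contact example (a standard textbook calculation; the constant heat capacity C is my simplifying assumption): bring two identical bodies at initial temperatures T_1 and T_2 into contact. Conservation of energy gives the final temperature T_f = (T_1 + T_2)/2, and the total entropy change is [math] \Delta S = C \ln \frac{T_f}{T_1} + C \ln \frac{T_f}{T_2} = C \ln \frac{T_f^2}{T_1 T_2} [/math] which is positive whenever T_1 differs from T_2, since the arithmetic mean T_f exceeds the geometric mean [math]\sqrt{T_1 T_2}[/math].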
  9. I assume you refer to my magnet example: The force between two magnets depends not only on their location, but also on their orientation. Take two rod magnets NS in one dimension which are one space (here I literally mean the space character) apart. In the case NS NS they attract. If one is oriented the other way round, e.g. NS SN, they repel each other. In other words: Their force depends not only on their location, but also on their orientation (the equations in 3D are readily found via Google, but the common choice of coordinates may not obviously relate to what you describe). So for calculating forces or energies of magnets, you need their orientation as an additional parameter. This orientation can be expressed as a unit vector (and to relate to my first post: since this is a geometric and not an integration topic, unit vectors are better suited than angles).
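For reference, the standard point-dipole result (which is what the Google search should turn up; treating each rod magnet as a dipole with moment [math]\vec m_i[/math] is my simplification): the interaction energy of two magnetic dipoles separated by [math] \vec r = r \hat r [/math] is [math] U = \frac{\mu_0}{4 \pi r^3} \left[ \vec m_1 \cdot \vec m_2 - 3 (\vec m_1 \cdot \hat r)(\vec m_2 \cdot \hat r) \right] [/math] Note that the orientations enter only through dot products with unit vectors, which illustrates why unit vectors are the natural parameters here.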
  10. It is hard to tell without knowing your "subject" or the variable. For spherical coordinates of a single location, the unit direction vector indeed supplies no additional information to the location vector. Maybe the direction refers to something other than the location? Like the state of a small magnet, which (ignoring momenta) is defined by its location and its orientation at this location. Or much simpler: Direction refers to the direction of travel. Another idea: your position and direction could be data in a data set processed on a computer. Then they might just be in there for the convenience of the user, or as a (possibly overambitious) performance optimization for calculations that only need the direction and want to skip the normalization step. Just semi-random ideas.
  11. Using unit vectors to define directions indeed contains the same information as using appropriate angles. And either can be used in conjunction with a radial coordinate to define location (but only the version with the angles is called spherical coordinates). Either version can be more appropriate for a practical problem, I think. In my experience, direction vectors tend to be more useful for trigonometric/geometric questions, and spherical coordinates tend to be more useful for integrals.
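To make the correspondence explicit (using the common physics convention with polar angle [math]\theta[/math] and azimuth [math]\varphi[/math]): a unit direction vector can be written as [math] \hat n = ( \sin\theta \cos\varphi , \; \sin\theta \sin\varphi , \; \cos\theta ) [/math] so the two angles and the two independent components of the unit vector (the third being fixed by normalization) carry exactly the same information.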
  12. All of what you said is at least arguably true (including the part about technological alternatives, which I did not cite). I took the battery example from the introductory slides of a lecture on renewable energies, where it was meant as an approximation and a starting point for the students to possibly try objections on. One of the key factors for the calculation is, of course, the capacity required. For simplicity (and the ready availability of a suitable plot from the same lecture), let's restrict this to fully-renewable systems: The required capacity relative to the load is indeed influenced by the size of the area - it goes down with area, just as you stated. Perhaps not down to 1 day. While the "night with no wind" is a picture that everyone understands easily, it is too simple to explain why we need storage. "Several days with not enough wind in the region" is much better, but less intuitive. And I have seen cases in which evidence suggested that in an economically-optimized calculation the storage demand is influenced by the annual fluctuations in renewable generation. It also strongly depends on the amount of renewable generation: If you accept that storage demand is driven by prolonged times of insufficient renewable generation, rather than complete absence, then it is clear that this demand gets smaller if you install more generation than the total electric energy demand suggests. Simply said (and ignoring power limits and efficiency losses for now): You have a trade-off between extra generation costs and extra storage costs. The image above is based on 8 years of historic weather and load data with some fixed assumptions about the share of wind and PV generation (usually 1:2 or 1:3 in terms of energy) and a fixed spatial distribution of the PV panels and wind parks. Black curves correspond to isolated German systems, the blue curves are the corresponding extremes of loss-less, capacity-unbound electricity transmission in Europe (defined as roughly the EU). The horizontal axis is the potential for electricity generation relative to the demand, the vertical one the required capacity for useable energy in units of days of mean power demand. The solid curves correspond to a storage that is perfectly efficient and not limited by input power. Detailed calculations of optimal scenarios, which also consider topics like adaptive demand that I did not cover in these posts to keep complexity down, end up with a generation ratio of around 1.1 to 1.3. So 7 days for the Ger scenario and 4 days for the Eur scenario may look realistic. The dashed curves correspond to 65% efficiency loss on power intake (35% return efficiency), which is realistic for chemical storage. The most prominent effect is that the location of the diverging capacity requirements shifts from 1.0 to some larger value. Lastly, the dotted lines show the capacity requirements with the additional constraint that the maximum charging power is 50% of the mean load power. As I hopefully made clear by now, the amount of required capacity depends on a lot of factors (some discussed here, and some more). More complex calculations tend to find a mix of long-term storage (cheap capacity, bad efficiency) and short-term storage (expensive capacity, good efficiency), but the mix depends on a lot of details, e.g. the assumed technology costs, which directly play into the trade-off between extra installation and extra storage.
The 30 days I used were the result of one of these calculations; I might indeed want to do the Europe equivalent alongside to see the results there. I strongly doubt the 1-day capacity requirement for reasons hopefully made a bit clearer in my statements above. But even if the calculations came up with 16 G€/a, which is about 16% of the total cost of electricity including grid fees and taxes, there is one thing that I want to comment on that may or may not be clear to everyone: That number comes on top of the other costs; it is not the new total cost. With the ridiculously high number I estimated, the difference is irrelevant (13 times as expensive vs. 14 times as expensive), but for smaller numbers this should be considered. I agree that there is a multitude of storage solutions. My current favorite (just for the coolness) is underwater pump storage, which may in fact be the technical realization of the compressed air bags you linked to. For me, who usually considers the energy system from an abstracted point of view, all of the alternatives to synthetic chemical fuels fall into the "limited by capacity costs" class of short-term storage and become just another technical realization of a battery. Redox-flow batteries are the best candidate for not being capacity-limited that I currently see (but have not investigated). For a company in the market that wants to optimize revenue on a percentage margin, the choice of technology may of course be very relevant. But they usually have a very different approach to decision making in the first place.
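To put the "days of mean power demand" unit into absolute numbers (a back-of-envelope sketch; the mean German load of roughly 60 GW, i.e. about 500 TWh per year, is my assumption, not a number from the lecture): one day of mean load corresponds to about [math] 60 \, \text{GW} \times 24 \, \text{h} \approx 1.4 \, \text{TWh} [/math] so the 7-day and 30-day capacities discussed above translate to roughly 10 TWh and 40 TWh of usable storage, respectively.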
  13. Not 100% certain what you mean. "Can't be contradicted" is not the same as "proven", especially in math. Any unproven conjecture in math, say an unproven Millennium Prize Problem, is proof of this (silly pun intended): If they could be contradicted (more specifically: if we knew a contradiction to the statements), then they would be proven wrong. Accepting an argument as true is a much stronger statement than not contradicting it. Taking the liberty to modify your statement to "a proof is an argument that everyone [sane and knowledgeable implied] has to agree on", that is not too far away from what I said. The main question is where "everyone" lies in the range from "everyone in the room" to some infinity-limit of everyone who has commented and may ever comment on the statement. This limit would indeed be a new quality that distinguishes proofs from facts (to use the terms swansont suggested in this thread's first reply). I could understand if people chose this limit as a definition for a proof. But it looks very impractical to me, since I doubt you can ever know if you have a proof in this case (... but at least you could establish as a fact that something is a proof ... I really need to go to bed ...). Fun fact: For my actual use of mathematical proofs at work, "everyone" indeed means "everyone in the room" in almost all instances. For me, the agreement of that audience would not be enough to call something a fact. (Okay... off to bed, really....) It is, indeed. Maybe with an extra grain of elitism for not needing observations but relying on the thoughts of peers alone.
  14. If you accept discovery of a particle as proof for a particle, then that is pretty much the usage of the terms in particle physics, where "evidence" is a certain amount of statistical significance (conventionally three sigma) and "discovery" is a certain larger amount (five sigma) (example link for an explanation: https://blogs.scientificamerican.com/observations/five-sigmawhats-that/). That usage of the terms is, however, very field-specific. Other fields may have problems quantifying statistical evidence. When I was a young math student, I thought about the same question. Then in a seminar a professor asked me "what is a proof?" and I told him something about a series of logical arguments that start from a given set of axioms. His reply was roughly "err .. yes, that too, ... maybe. But mostly it is an argument that other people accept as true". I think his understanding of "proof" may be the better one (keeping in mind that "other people" referred to mathematicians in this case, who tend to be very rigorous/conservative about accepting things as true).
  15. Absolutely. Adding up thin shells to create the full sphere is actually a very common technique to access the volume, i.e. [math] V(r) = \int_0^r A(r') \, dr'[/math] Well, picking up on the integration example: The average distance <r> of a particle from the center of a sphere of radius R is [math] <r> = \frac{1}{V(R)} \int_0^R r \cdot A(r) \, dr = \frac{1}{V(R)} \int_0^R r \cdot 4\pi r^2 \, dr = \frac{1}{\frac 43 \pi R^3} \left[ \pi r^4 \right]_0^R = \frac 34 R[/math] (modulo typos: TeX does not seem to work in preview mode ... EDIT: And apparently not in final mode. The result of the calculation above is <r> = 3/4 R). There is a general tendency that the higher the dimension, the more likely a random point in a sphere lies close to the surface. There is a famous statement in statistical physics that in a sphere with 10^23 dimensions, effectively all points lie close to the surface. In all sensible definitions of volume and area I am aware of (at least in all finite-dimensional ones), volume has one more dimension of length than area. Hence, their quotient indeed has dimensions of length. I don't think the quotient itself has a direct meaning. But there are theorems such as: the sphere is the shape that maximizes the V/A ratio for a fixed amount of V or A. I already commented on the dimensionality. But I still encourage you to just play around with other shapes: Cubes are the next simple thing, I believe.
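To back up the "close to the surface" statement with a one-line calculation (a standard argument, using only the scaling [math] V_d(r) \propto r^d [/math] of the d-dimensional ball): the fraction of the volume lying within a relative distance [math]\epsilon[/math] of the surface is [math] 1 - (1 - \epsilon)^d \rightarrow 1 \quad \text{for} \quad d \rightarrow \infty [/math] For example, for d = 100 and [math]\epsilon = 0.05[/math], about 99.4% of the volume already sits in that outer shell. And for the suggested cube of side a: V/A = a^3/(6 a^2) = a/6, compared to r/3 for a sphere of radius r.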