timo

Senior Members
  • Content Count: 3406
  • Joined
  • Last visited
  • Days Won: 1

timo last won the day on January 9

timo had the most liked content!

Community Reputation: 558 Glorious Leader

About timo

Profile Information

  • Location
    Germany
  • Interests
    Math, Renewable Energies, Complex Systems
  • College Major/Degree
    Physics
  • Favorite Area of Science
    Data Analysis
  • Biography
    school, civil service, university, public service, university, university, research institute (and sometimes "university", as of late)
  • Occupation
    Ensuring a steady flow of taxpayer money to burn

Recent Profile Visitors: 23120 profile views

  1. I am not sure I understand the question. I am not even sure that the premise is true (did Alchemy and Chemistry even exist at the same time?). But as a hint to what I guess may be the answer you are looking for: Do you understand why turning lead into gold is not within the scope of Chemistry?
  2. An operator f(...) is linear if f(A+B) = f(A) + f(B) and f(a*A) = a*f(A), where in your case the addition is the addition of two vectors and the multiplication is multiplication of a vector by a real number. The same statements written for a matrix M, vectors x, y, and a scalar a: M(x+y) = Mx + My, M(a*x) = a*(Mx). When interpreted as an operator V -> V, a matrix is always linear. But it should be straightforward to show that explicitly for your given matrix by starting from one side of the two defining equations and rearranging until you get the other side.
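     If it helps to see the check in code rather than on paper, here is a minimal sketch (the 2x2 matrix below is just a placeholder, since I don't know the matrix from your exercise): it verifies both defining equations numerically for arbitrary vectors and an arbitrary scalar.

     import numpy as np

     # Placeholder matrix - substitute the matrix from your exercise here.
     M = np.array([[2.0, 1.0],
                   [0.0, 3.0]])

     rng = np.random.default_rng(0)
     x = rng.normal(size=2)   # arbitrary vector
     y = rng.normal(size=2)   # another arbitrary vector
     a = 1.7                  # arbitrary scalar

     # Additivity: M(x + y) == Mx + My
     print(np.allclose(M @ (x + y), M @ x + M @ y))   # True

     # Homogeneity: M(a*x) == a*(Mx)
     print(np.allclose(M @ (a * x), a * (M @ x)))     # True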
  3. Yes - if the height of people follows a normal distribution, or is at least well approximated by one (I am not sure to what extent it is). Very specifically, if you assume a Gaussian with 1.70 m as the mean height and 0.15 m as the standard deviation, then the chance that a person is 1.90 m or taller (i.e. lies in the region [1.9; infinity)) is roughly 10 %: https://www.wolframalpha.com/input/?i=integrate+1%2F((2+pi)^(1%2F2)+0.15)+e^(-((x+-+1.7)%2F0.15)^2%2F2)+from+1.9+to+infinity (note that I put explicit numbers here - if you replace 1.9 with x, you'll see the erf() again in the expression for the solution).
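     For reference, the same number can be computed without WolframAlpha using nothing but the standard library's erf(); the 1.70 m mean and 0.15 m standard deviation are the same assumed values as above:

     from math import erf, sqrt

     mean = 1.70       # assumed mean height in metres
     sigma = 0.15      # assumed standard deviation in metres
     threshold = 1.90

     # P(height >= threshold) for a Gaussian, expressed via the error function
     z = (threshold - mean) / (sigma * sqrt(2))
     p = 0.5 * (1 - erf(z))
     print(p)          # roughly 0.09, i.e. about 10 %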
  4. Ctrl-Z just cost me a long post that I had put a lot of time into and will absolutely not type again, only to be fooled by my own muscle memory. Key points:

     1) Consider erf() just as the number-crunching function that computers provide you to calculate integrals over Gaussians. Not much more. The explicit relation between Gaussians and erf() is https://www.wolframalpha.com/input/?i=integrate+1%2F((2+pi)^(1%2F2)+sigma)+e^(-(x%2Fsigma)^2%2F2)+from+A+to+B

     2) Here's some Python code to play around with if you want. It plots
        a) A single random variable "number of bugs a programmer fixes each day".
        b) The resulting "number of bugs fixed per programmer per year", which is a sum of 200 random variables from a) and itself a random variable. Key observation: The distribution looks very different, and very Gaussian.
        c) The probability to "fix at most N bugs per year", which is the same as "fix 0 to N bugs per year", which is the same as "the sum of the probabilities to fix 0 ... N bugs per year", which indeed is pretty much the same as the integral over [0; N]. The resulting curve, as a function of N, looks ... *surprise* ... like the error function.

     import numpy as np
     import seaborn as sns
     import matplotlib.pyplot as plt

     # We visualize the distributions by drawing a large sample of random variables.
     SAMPLE_SIZE = 10000

     def randomFixesPerDay():
         # Number of bug fixes per day is a random variable that becomes 0, 1 or 2.
         return np.random.randint(3)

     def randomFixesPerYear():
         # Number of bug fixes per year is a random variable that is the sum of
         # 200 (= workdays) random variables (the bug fixes each day).
         return np.random.randint(3, size=200).sum()

     # Experimental probability for # bug fixes per day
     dailyDistribution = [randomFixesPerDay() for i in range(SAMPLE_SIZE)]
     sns.distplot(dailyDistribution, kde=False, norm_hist=True, bins=[0, 1, 2, 3])
     plt.title('Probability of Bug-fixes per Day: 1/3')
     plt.xlabel('# Bugs')
     plt.ylabel('Probability')
     plt.show()

     # Experimental probability for # bug fixes per year
     annualDistribution = [randomFixesPerYear() for i in range(SAMPLE_SIZE)]
     sns.distplot(annualDistribution, kde=False, norm_hist=True, bins=np.arange(150, 250))
     plt.title('Probability of Bug-fixes per Year\n(note smaller values on the y-axis)')
     plt.xlabel('# Bugs')
     plt.ylabel('Probability')
     plt.show()

     # Integral over [0; x] of annualDistribution looks like the error function
     xValues = np.arange(150, 250)
     yValues = [len([value for value in annualDistribution if value <= x]) / SAMPLE_SIZE
                for x in xValues]
     plt.plot(xValues, yValues)
     plt.title('Integral: Probability of fixing [0; N] bugs per year\n(i.e. "not more than N")')
     plt.xlabel('x')
     plt.ylabel('Probability')
     plt.show()
  5. There is only one context in which I have ever encountered the error function, and it is the same context as here: it is the integral over the (normalized) Gaussian function. The Gaussian, on the other hand, is the most important function in statistics. The reason is the Central Limit Theorem: If you take a variable that is distributed according to some probability distribution, and then take the sum of many such variables, the probability distribution of the sum becomes increasingly similar to a Gaussian as the number of addends grows (the mean value of the sum is the sum of the individual means, and the variance of the Gaussian is the sum of the addends' variances). The random walk is a process in which a walker takes a number of independent steps of random length and direction. By the central limit theorem, the resulting total deviation from the original location (the sum of the steps) will soon look like a Gaussian. This Gaussian gives the probability density to find the walker at a certain location. To find the probability of finding the walker within a certain region, you sum up the individual probabilities of all locations in this region (i.e. you integrate over this region). When computing this integral, the solution can be expressed in terms of erf(). EDIT: I'm still posting this despite just having received a pop-up saying "someone else posted probably the same thing" 😜
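     To make that concrete, here is a minimal simulation sketch (the step distribution and the numbers are made up for illustration): it sums many independent random steps and plots the distribution of end positions, which comes out looking Gaussian, exactly as the central limit theorem promises.

     import numpy as np
     import matplotlib.pyplot as plt

     rng = np.random.default_rng(42)
     N_WALKERS = 10000   # number of independent walks
     N_STEPS = 1000      # steps per walk

     # Each step is uniformly distributed in [-1, 1]; the end position is the sum of the steps.
     steps = rng.uniform(-1, 1, size=(N_WALKERS, N_STEPS))
     positions = steps.sum(axis=1)

     plt.hist(positions, bins=60, density=True)
     plt.title('End positions of {} random walks after {} steps'.format(N_WALKERS, N_STEPS))
     plt.xlabel('Position')
     plt.ylabel('Probability density')
     plt.show()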
  6. You indeed have an infinite number of points on a finite line segment. The length of a line segment determines if it is finite or not, not the number of points contained (which is always infinite for line segments with non-zero length - and therefore a pretty useless measure).
  7. You correctly state that there may not be a unique answer, since A and B may both be multiplied by any non-zero number. That raises the question: What is your point? Contrary to your claim, your thread title does not completely describe the problem. As a rough guess at what you meant: There are numbers that cannot be represented as fractions of integers. The most prominent cases are pi (as in "ratio of circle circumference to diameter") and e (as in the exponential function).
  8. Random comments on your seemingly random questions: 1) Particle physics is indeed looking at debris to a very large extent. However, people are not looking for new objects in the debris. They look at the content and distribution of the debris and compare it with the predictions of the different mathematical models. 2) The reference to "statements about their encounters" does not refer to particle collisions (caveat: I am interpreting a single sentence out of context here - but modern particle physics did not exist during Einstein's lifetime anyway). It refers to a key concept in relativity: comparing situations at different locations is tricky. It is not required that the objects in question are elementary particles that collide. The famous spacefaring twins meeting each other after their space travel (or lack thereof) would be a typical situation that the statement refers to.
  9. I did an interview for a newspaper article a couple of years ago. A PR colleague and I talked with a journalist for about an hour in the morning. The journalist already had a vague idea about the topic and we essentially had a chat about it. The journalist then sent me the draft article in the evening. I sent him correction proposals, which were all accepted. Nothing spectacular on this level. Still, there were a few takeaways from this experience:

     1) It is the journalist's story. When you get the interview request you feel very important and at the center of things. But in reality you are just helping the journalist to write the article. This is also why I said that I sent "correction proposals". The journalist is not required to send you the story beforehand or to get your permission for publication.

     2) I originally sent elaborate explanations with my corrections, explaining in which context the statement would be correct and when it would be misleading. I ran my corrections through our head of PR. He said something memorable along the lines of: "That guy is a poor devil, a freelancer being paid per article written. Just send him corrections that he can accept or decline and don't cause him extra work." The point is: As a scientist you may be excited about the topic, and of course you expect the journalist to be excited, too. In reality, to the journalist your topic could just as well be an orphan kitten that has been adopted by a dog: it is a story that fills the next article.

     3) We had prepared lots of great diagrams, but the journalist insisted on a photo of me instead. My face is completely irrelevant to the science, and even inappropriate given that our results were achieved by a team. It is a manifestation of the journalist's rule "no news without a face". Since I experienced this from the producing side, I often find myself re-discovering it: when a minister proposes something (that his employees worked out), when the director of a research institute is asked for an expert opinion about a topic (which he bases on the work of the people actually doing the science - his employees), or (a current example of discussion in my family an hour ago) when the main discussion of the climate conference is how Greta Thunberg looked at Donald Trump.
  10. There are two approaches here, the formal one and the brain-compatible one. 1) Formally: Realize that there are hidden coordinate dependencies. You are probably looking to construct a function p(y). Since y is the coordinate at the lower side, you have p(y) at the lower edge and p(y+dy) at the upper one. This is (possibly) slightly different from p(y) (because of the displacement dy). If you call the difference dp, then p(y+dy) = p(y) + dp(y) = p + dp (note that dp can and will be negative). 2) Brain-compatible: Put the equation first and then define the variables: The forces on top and bottom should cancel out. The force pushing up is the pressure force p*A from below. The force pushing down is the pressure force p2*A from above the small fluid element plus the weight dw of the small fluid element. Since we are talking about infinitesimal coordinates, and since p(y) should be a function, it makes sense to say that p2 = p+dp, which you can then integrate over.
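      If it helps, here is the force balance from approach 2) written out explicitly (the notation is mine, not necessarily your textbook's: rho is the fluid density, A the cross-sectional area of the element, g the gravitational acceleration, and y points upwards): [math] pA = (p + dp)A + \rho g A \, dy \;\;\Rightarrow\;\; dp = -\rho g \, dy \;\;\Rightarrow\;\; p(y) = p_0 - \rho g y [/math] where p_0 is the pressure at y = 0.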
  11. In my opinion, the content you listed is below the minimum required for AI (I am not really sure what "Data Science" is, except for a popular buzzword that sounds like Google or Facebook). More precisely: Apply these topics to multi-dimensional functions and you should have the basis of what is needed for understanding learning rules in AI. However: All of the content you listed is the minimum needed to finish school in Germany (the higher-level school that allows applying to a university, that is), even if you are planning to become an art teacher. And Germany is not exactly well known for its students' great math skills. The course looks like a university-level repetition of topics you should already know how to use, i.e. a formally correct treatment of things that were taught hands-on before. I do not think a more rigorous repetition of topics will help you much, since you are more likely to work on the applied trial & error side. Bottom line: If you are already familiar with all the topics listed, I think you can skip the course. If not, your education system may be too unfamiliar to me to give you any sensible advice. Btw.: University programs tend to be designed by professionals. So if a course is not listed as mandatory, it is probably not mandatory.
  12. By this standard, physics is not very strange most of the time.
  13. The equation is not particular to Compton scattering. It is the relation between momentum and energy for any free particle (including, in this case, electrons). I am not sure what you consider a derivation or what your skill level is. But maybe this Wikipedia article, or at least the article name, is a good starting point for you: https://en.wikipedia.org/wiki/Energy–momentum_relation
  14. There is no law of conservation of mass. Quite the contrary: The discovery that mass can be converted to energy, and that very little mass produces a lot of energy, was a remarkable finding of physics in the early 20th century. The most well-known use is nuclear power plants, where part of the mass of decaying uranium is converted to heat (and then to electricity). The more modern, but from your perspective even more alien, view is that mass literally is a form of energy (I tend to think of it as "frozen energy"). In that view, you can take the famous E=mc^2 literally. There is a law of conservation of energy, but energy can be converted between different forms. In your example, it is converted from the energy of the photons to the mass-energy of the electron and the positron (and a bit of kinetic energy for both of them). Note that the more general form of E=mc^2 is E^2 = (mc^2)^2 + (pc)^2, with p the momentum of the object - it simplifies to the more famous expression for zero momentum. I mention this to make the connection to your other question, where you asked about this equation.
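      As a small worked example of taking this literally in your scenario: the rest mass of an electron corresponds to about 0.511 MeV of energy, and so does that of a positron, so the photons must bring along at least [math] E_{min} = 2 m_e c^2 \approx 1.022 \; \mathrm{MeV} [/math] of energy for pair production to be possible; anything above that threshold shows up as kinetic energy of the pair.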
  15. This post is a bit beyond the original question, which has been answered - Psi is a common Greek letter used to label a wave function, and wave functions are used to describe (all) quantum mechanical states. I do, however, have the feeling that I do not agree with what some of the replies seem to imply about superposition, namely that it is a special property of a state. So I felt the urge to add my view on superposition. Fundamentally, superposition is not a property of a quantum mechanical state. It is a property of how we look at the state - at best. Consider a system in which the space S of possible states is spanned by the basis vectors |1> and |2>. We tend to say that [math] | \psi _1 > = (|1> + |2>)/ \sqrt{2} [/math] is in a superposition state and [math] | \psi _2 > = |1>[/math] is not. However, [math] |A> = (|1> + |2>) / \sqrt{2} [/math] and [math] |B> = (|1> - |2>) / \sqrt{2} [/math] are just as valid basis vectors for S as |1> and |2> are. In this basis, [math] | \psi _2 > = (|A> + |B>)/ \sqrt{2} [/math] is the superposition state and [math] | \psi _1 > = |A>[/math] is not. There may be good reasons to prefer one basis over the other, depending on the situation. But even in these cases I do not think that superposition should be seen as a property of the state, but at best as stemming from the way I have chosen to look at the state. Personally, I think I would not even use the term superposition in the context of particular states (although a search through my older posts may prove me wrong :P). I tend to think of it more as the superposition principle, i.e. the concept that linear combinations of solutions to differential equations are also solutions. This is kind of trivial, and well known from e.g. the electric field. The weird parts in quantum mechanics are 1) the need for the linear combination to be normalized (at least I never could make sense of this) and 2) that states which seem to be co-linear by intuition are perpendicular in QM. For example, a state with a momentum of 2 Ns is not two times the state with 1 Ns but an entirely different basis vector. In this understanding, superposition almost loses any particularity to QM. Edit: Wrote 'mixed' instead of 'superposition' twice, which is an entirely different concept. Hope I got rid of the typos now.
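      If you want to see the basis-dependence in numbers, here is a minimal numpy sketch (the two-level system and the labels are my own, not from the thread): the state |1> has a single non-zero coefficient in the {|1>, |2>} basis but two equal coefficients in the {|A>, |B>} basis, i.e. whether it "is a superposition" depends on the basis chosen.

      import numpy as np

      # Basis {|1>, |2>} as column vectors
      ket1 = np.array([1.0, 0.0])
      ket2 = np.array([0.0, 1.0])

      # Alternative basis {|A>, |B>}
      ketA = (ket1 + ket2) / np.sqrt(2)
      ketB = (ket1 - ket2) / np.sqrt(2)

      psi = ket1   # the state |psi_2> = |1> from the post

      # Coefficients of psi in each basis (inner products with the basis vectors)
      print(psi @ ket1, psi @ ket2)   # 1.0 0.0            -> "not a superposition"
      print(psi @ ketA, psi @ ketB)   # 0.707... 0.707...  -> "a superposition"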