Posts posted by timo

  1. Joigus already mentioned it implicitly, but I think it's worth pointing it out explicitly:

    When it comes to the exact integral, the different methods with equal-width rectangles all approach the same limit as dx->0 (with some exceptions that are not relevant here). And this common limit is called "the integral". For many important functions, e.g. polynomials, we know how to compute the limit exactly. In these cases we don't even care about the rectangle construction and just jump to the known solution - which does not depend on the exact rectangle method that has been used.

     

    Now: When it comes to functions for which we do not have a known solution, we often have to fall back to what is called "numerical integration". In this case, we use a single, small dx, but we do not take the limit dx->0. Instead, we brute-force the approximation by summing up all the individual rectangle results (computers are very good at doing stupid, repetitive tasks very quickly). In this case, the method you propose (f(x0)/2 + f(x1)/2) is indeed considered superior to the simplest approximation (f(x0)); it is known as the trapezoidal rule. The calculation I showed in my previous post still holds, but N now is a fixed number that does not become arbitrarily large. In practice, the method you proposed is usually the simplest choice that someone with a bit of knowledge about numerical integration will use. Numerical integration routines built into programming languages or software libraries will often use even more complicated rules to calculate each rectangle (arguably not even a rectangle, but still dx-sized segments and a representative mean function value for each segment).
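
    If you want to see the difference in code, here is a minimal sketch (in C++, since that language comes up later on this page anyway; the choice of f, N and the integration range is arbitrary and just for illustration):

    #include <cmath>
    #include <cstdio>

    // A function without an elementary antiderivative, so numerical
    // integration is a natural choice here.
    double f(double x) { return std::exp(-x * x); }

    int main() {
        const int N = 1000;             // number of dx-sized segments (fixed, no limit taken)
        const double a = 0.0, b = 1.0;  // integration range
        const double dx = (b - a) / N;

        double left = 0.0, trapezoid = 0.0;
        for (int i = 0; i < N; ++i) {
            const double x0 = a + i * dx;
            const double x1 = x0 + dx;
            left      += f(x0) * dx;                  // simplest rule: f(x0) per segment
            trapezoid += 0.5 * (f(x0) + f(x1)) * dx;  // your rule: (f(x0) + f(x1))/2 per segment
        }
        std::printf("left rule:      %.10f\n", left);
        std::printf("trapezoid rule: %.10f\n", trapezoid);
        return 0;
    }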

     

    Bottom line: Don't worry if you don't understand everything in this post. My point is: Your idea about improving the rule to calculate the integral is actually very good. It does not matter much for the definition of the integral (well .. it does in the sense that the definition of the integral would be broken if it gave a different result). But for numerical integration on a computer, your idea is actually very relevant.

     

  2. Yes, both methods approach the same limit. In this case, you can explicitly write that down: Assume you integrate from 0 to 1, and you split the range into N intervals of equal length. In the first case, the integral is approximated as
    [math]I_{1, N} = \frac 1N \sum_{i=0}^{N-1} f(i/N) = \frac 1N  \left( f(0) + f(1/N) + f(2/N) + \dots + f((N-1)/N) \right)[/math].
    In the second case, the integral is approximated as
    [math]I_{2, N} = \frac 1N \sum_{i=0}^{N-1} \frac 12 \left( f(i/N) + f((i+1)/N) \right) = \frac 1N \left( \frac 12 f(0) + f(1/N) + f(2/N) + \dots + f((N-1)/N) + \frac 12 f(1) \right) [/math].
    If you compare the terms, you notice that
    [math]I_{1, N} - I_{2, N} = \frac 1N \frac 12 (f(0) - f(1) ) = \frac{f(0) - f(1)}{2N}.[/math]
    So whatever finite numbers f(0) and f(1) are, the difference between the two ways to approximate the integral becomes tiny when N becomes large enough.
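
    If you prefer to check this numerically rather than algebraically, a quick sketch (C++; f and N are arbitrary choices for illustration) confirms the identity up to floating-point rounding:

    #include <cmath>
    #include <cstdio>

    double f(double x) { return std::exp(x) + x * x; }  // any finite function will do

    int main() {
        const int N = 100;
        double i1 = 0.0, i2 = 0.0;
        for (int i = 0; i < N; ++i) {
            i1 += f(double(i) / N) / N;                                 // first method
            i2 += 0.5 * (f(double(i) / N) + f(double(i + 1) / N)) / N;  // second method
        }
        // Both printed numbers should agree up to floating-point rounding.
        std::printf("I1 - I2            = %.12f\n", i1 - i2);
        std::printf("(f(0) - f(1))/(2N) = %.12f\n", (f(0.0) - f(1.0)) / (2.0 * N));
        return 0;
    }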

     

    Btw: This editor is horrible: Preview should preview the rendered TeX, not show me the raw TeX I typed for different screen sizes. I want my editor from ten years ago back.

     

  3. On 3/18/2021 at 10:23 PM, MigL said:

    Post quotes from the Bible (on a science forum) and you will be ignored [...].

    I was already chuckling when you posted this, and the thread seems to prove this part of your prediction wrong. In my experience, it is the non-scientific content that gets the most attention on sfn. Probably because it is easier to respond to. I certainly put less effort into this post than into science-related posts. Possibly even less than into my one-liner, the first reply in this thread, which I wrote back when I thought this was a genuine question.

  4. I helped my colleagues move a Volkswagen E-Up ~50 km between two cities about eight years ago - highway in one direction, smaller roads in the other. The car was used in a field test in the city I lived in, so it made sense for me to simply commute by car instead of by train. I drove on a cold but typical German winter day. Turning on the heating approximately halved the remaining range, and I ended up turning the heating on and off periodically during the trip. I found that experience quite impressive back then, because I had not been aware of such basic issues as heating before. I imagine that there is room for improvement when it comes to thermal insulation, and that newer cars perform better (the E-Up is an electric variant of a combustion engine design, so heating may not have been a big consideration in the design). But that's speculation.

  5. 4 hours ago, MigL said:

    Noah's flood has been proposed as an ancient memory of the Zanclean flood of 5 million years ago, which flooded the Mediterranean basin.
    I find it hard to believe that our ancestors of 5 million years ago (Australopithecus, I believe) could preserve such 'memories' through story-telling, since Homo sapiens only emerged (in Africa) about 300 000 years ago.

    I also think the bandwidth to transfer the satellite images that show the whole earth being flooded was very limited back then 📧.

     

    On topic, in case it wasn't clear by now: Since all the water for flooding already is on earth, and already presses on the ground (including the ice) as weight, you should expect no significant effect on the stability of the ground when it rains. Also, as studiot said, if all ice melted, the water would not cover all of the land. Here's the first Google hit I found regarding this: https://www.nationalgeographic.com/magazine/article/rising-seas-ice-melt-new-shoreline-maps (seeing the maps, it is kind of funny that Australia is one of the few remaining coal power fans in the world).

  6. What you describe seems more like a method for calculation to me than an actual measurement. You can indeed calculate the volume of a body by taking a surrounding volume and then subtracting the parts of the surrounding volume that do not belong to the body (in your language: the volume filled with gap values). This may in some cases be an efficient method, e.g. for a block of stone with a cylindrical hole. In the general case, if you have a generic way to calculate the amount of gaps, then you could probably use that same method to calculate the volume of the body in the first place. This, as you already mentioned, is the tricky part.

     

    The most generic way to approach this is dividing a volume into tiny blocks of simply-calculated sub-volumes and approximating the total volume as the sum of these blocks. For example, for your stone you could use small cubes and use laser scans to determine whether a cube contains rock or not. If you make these blocks finer and add a few Greek letters, you get what in mathematics is called "integration", which is the theoretical basis for such measurements. For many semi-geometric structures you can get good results by gluing together and subtracting known geometric shapes (as in the case of the block with a cylindrical hole).
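
    As a toy illustration of the tiny-blocks idea (a sketch only: a sphere stands in for the stone, and a simple inside/outside formula replaces the laser scan):

    #include <cmath>
    #include <cstdio>

    // Stand-in for the scan: is this point inside the unit sphere?
    bool inside(double x, double y, double z) {
        return x * x + y * y + z * z <= 1.0;
    }

    int main() {
        const int n = 200;          // small cubes per axis in the bounding box [-1,1]^3
        const double h = 2.0 / n;   // edge length of one small cube
        long long filled = 0;
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                for (int k = 0; k < n; ++k) {
                    // test the center of each small cube
                    if (inside(-1.0 + (i + 0.5) * h,
                               -1.0 + (j + 0.5) * h,
                               -1.0 + (k + 0.5) * h))
                        ++filled;
                }
        const double pi = std::acos(-1.0);
        std::printf("estimated volume: %.6f (exact 4*pi/3 = %.6f)\n",
                    filled * h * h * h, 4.0 * pi / 3.0);
        return 0;
    }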

  7. 30 minutes ago, SergUpstart said:

    And the main problem of the Sahara is water.

    I have never heard that mentioned as an issue for actual projects. But yes, maybe. How, specifically, do you think that a lack of water is a problem for solar power projects?

  8. Or you take the engineering approach and just read off the provided numbers: https://globalsolaratlas.info/map. If I remember correctly, the tool even has a "mark an area and integrate" functionality.

     

    I'd like to say something constructive here. But I find it hard to make out what this thread is about, or to add anything meaningful on this very vague level. I mean: Yes, solar panels generate electricity. Yes, you can put them on rooftops. And yes, there is lots of sun in the equatorial regions. And to Swansont's post: Yes, there are problems in the details. Transport and storage are somewhat generic problems, and they are at least easy to handle - any scenario calculation in the planning phase will implicitly include them. Problems rather specific to solar power in the Sahara region seem to be sand, corruption, and a perception of modern-day colonialism when rich white guys try to tell Africans what they should be doing.

     

    The idea of exploiting the solar power opportunities in the Sahara region is obviously not new. My personal favorite in the "think big" category is a world grid with a solar power belt around the whole equator, btw. In Germany, the Desertec initiative was very well known. They planned to generate electric power in Africa and export it to Europe (sounding like modern-day colonialism: check). To my knowledge, the project died in 2014 when most major industry partners quit. I don't know why it failed, but the common rumors are about the drop in renewable energy generation costs within Europe and worries about generating your power in regions that are considered politically unstable (-> the Arab spring and the civil wars that followed and are still ongoing).

  9. 2 hours ago, Jon O'Starr said:

    Assume black holes attract more matter than contributes to their increase in mass, and that this surplus mass is transformed into energy. As electromagnetic energy it could not be emitted by the black hole, but as quantum entities could it tunnel out? If it did this, why has no one detected it?

    Maybe it's not electromagnetic and we do not currently have the means for its detection.

    Maybe it appears as randomly spread, “photon sized” white holes and the energy is dissipated in the creation of the space needed for said white hole.

    (Most heretically, maybe we did but misinterpreted it as CMB.)

    When it comes to experiments on black holes, it gets a bit tricky. You cannot produce them in the lab, and none of these things exists naturally on earth. In fact, we have put a lot of effort into even detecting anything in the universe that we concluded/agreed must be a black hole. While double-checking whether even that has happened yet, I ran into an article claiming that 2019 was the first time we got an image of a black hole (https://www.sciencemag.org/news/2019/04/black-hole). Don't pin me down on the accuracy of that statement, but my point is: It is already pretty challenging to detect black holes in the first place, so you cannot expect these objects to be as well researched experimentally as, e.g., a laser diode.

     

    I don't think there is any accepted idea of how something could escape the event horizon of a black hole. I am not sure if even tunneling would work, but that is certainly an interesting mathematical question. Today, Hawking radiation is, I think, a widely accepted mechanism by which black holes emit energy (in addition to the normal jet emissions). I've never bothered to read up on it, but popular scientific depictions describe it as an effect happening outside of the event horizon.

     

    As for white holes: Formally, they appear if you allow the distance-to-center coordinate to become negative, which creates a second volume of spacetime with properties similar to the original one, except that nothing can enter the event horizon (whereas before, nothing could exit from there). You can fantasize about this being a white hole, the exit of a black hole. And you can fantasize about making this passable, which is then called a wormhole. Usually, you'd have exactly one white hole exit per black hole. But well, maybe something interesting happens if you make the radius coordinate a complex value. I don't quite see how that would relate to the CMB. Also, keep in mind that the CMB is pretty well understood and, in contrast to black holes, experimentally measured with extreme precision (https://en.wikipedia.org/wiki/Cosmic_microwave_background#/media/File:Cmbr.svg) - with the caveat that it has only been measured from within a single solar system.

  10. 2 minutes ago, BillyFisher said:

    I still haven't found an answer to your question: What's really going on?

    There are lots of answers to that question. For some, the answer is that there is a pedophile ring drinking children's blood in the backroom of a pizzeria (*). If you haven't found your answer in this thread, you should invest the time to be a bit more specific about the question.

     

    (*) Rumor has it that this actually is some people's answer to the question of what climate science is about, too.

  11. I understand that you are saying that light travels faster than sound or nerve signals in the body. From that you jump to "I think that the point when the action is created is ‘ground zero’ and might be faster than light as it takes up no time. It’s hard to explain but I think you get the idea". Well, ... I do not get the idea. I even had to look up the term "ground zero". But it did not help to know that it means a point on the surface that a bomb explodes over.

     

    I do not understand what you are trying to say, so it is hard to give constructive advice here. I think it could make sense to consider whether "the point where the action is created" really has a speed, and what that speed would be. If it has no speed, then you cannot compare it to the speed of light. Note that "speed" in this context means distance traveled per unit of time, not "time it takes for something to happen".

  12. The quoted part of the text you provided seems correct to me. In particle physics, there is a concept of a parton. A parton is the thing that does the core interaction when a proton is shot at something else in a particle collider ("core interaction" being the part of the process with the highest energy, the one that you draw Feynman diagrams for). Experimental physicists have a very pragmatic approach to these partons: They define a probability to get a certain parton (a quark or a gluon) with a given momentum from the proton. These probabilities can be taken into account when simulating/calculating collider events. The probability function is called the parton distribution function (PDF). These PDFs can be measured in experiments, and they also contain heavier quarks. Just Google for them yourself; the first hit I found (no guarantee of quality) is figure 1 of http://www.scholarpedia.org/article/Introduction_to_Parton_Distribution_Functions.

    So from the perspective of someone doing particle physics experiments, it is probably correct to say that a proton contains all kinds of stuff. There are a lot of reasons I can think of why that could be missing the big picture (is this an effect of perturbation theory? How is the probability to get heavier quarks related to the CKM matrix? How is the remnant of the interaction, the underlying event, handled? ...), but I lack both the time and the skill to write about this.

     

    Bottom line: Saying that a proton is more complicated than three quarks held together by a gluon pit is correct. There is at least one mathematical model that describes it as a magical box containing random objects, which may well be the most-used mathematical proton model in the world. I agree with the author that, specifically to understand LHC physics, "it is three quarks" is not enough. I don't think that "plus zillions of gluons and quark-antiquark pairs" is the key to enlightenment, either.

  13. 9 hours ago, Airbrush said:

    It seems to me that a "scientist" is someone that does "science" professionally, doing research, teaching, or other projects, not as merely a hobby. [...] Does anyone know of important science that came from non-professional scientists?

    The explanation of the photoelectric effect and the theory of relativity would match that criterion. I kind of thought that Isaac Newton had a job as master of coin or something like that, but I could not verify that. I am not aware of any more recent examples - not even the proverbial exceptions that prove the rule. That makes me wonder to what extent "only scientists make contributions to science" is a tautology (i.e.: science is defined as "what professional scientists do"). For example, you could argue that Mark Zuckerberg has started the largest sociological experiment in the history of mankind. But the people credited for scientific contributions are the hitchhiking university scientists that write papers about it.

  14. Not sure I understand what you are asking for - I certainly don't know what you mean by empirical cycle or retrospective study. But to me, the general approach seems to be:

    1) Define one or more quantitative measures for the sale of phishing tools, e.g. number of sales, volume of sales, number of different products offered, number of different products sold.

    2) Find data sources from which you can determine these measures. Note: The actual process may be doing this step first and then defining measures that you have data for - it was just easier for me to describe the steps in this order.

    3) Plot the measures over time in a suitable time binning (bonus points: with statistical error bars).

    4) Define the time that you count as "Corona pandemic" and see if there are any visible trends in your graph.

    4a) Alternatively, just test a few assumptions. E.g., for a suitable binning, fit two different constants to the data for the non-Corona and the Corona time intervals and check if these constants look significantly different (bonus points: calculate a statistical measure of how different they are; see the sketch at the end of this post).

     

    I imagine step 2 to be the hardest by far. Despite often planning to do so, I have never tried to navigate around the dark web. But the term already indicates that it will not be easy to get reliable overview data from it - especially since you are trying to monitor activities that are at least borderline illegal.
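
    That said, here is a rough sketch of what step 4a could look like once you have data (C++; the monthly counts are made-up placeholders, not real numbers):

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Mean and standard error of the mean for one time interval.
    void stats(const std::vector<double>& v, double& mean, double& sem) {
        mean = 0.0;
        for (double x : v) mean += x;
        mean /= v.size();
        double var = 0.0;
        for (double x : v) var += (x - mean) * (x - mean);
        var /= v.size() - 1;  // sample variance
        sem = std::sqrt(var / v.size());
    }

    int main() {
        // Made-up monthly sales counts, purely to illustrate the method.
        std::vector<double> preCorona = {12, 15, 11, 14, 13, 16};
        std::vector<double> corona = {19, 22, 18, 25, 21, 23};

        double m1, s1, m2, s2;
        stats(preCorona, m1, s1);
        stats(corona, m2, s2);

        // Difference of the two fitted constants in units of the combined
        // error: a crude significance measure (the "bonus points" part).
        const double z = (m2 - m1) / std::sqrt(s1 * s1 + s2 * s2);
        std::printf("pre-Corona: %.1f +- %.1f, Corona: %.1f +- %.1f, difference: %.1f sigma\n",
                    m1, s1, m2, s2, z);
        return 0;
    }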

  15. I don't think I understand anything you just wrote. But I'll give it a try:

    - "Does -f''(x) imply -1?": -f''(x) means -1*f''(x), if that was your question. It does not mean that f(x) = -1 or f''(x) = -1, if that was the question.

    - "Does linear differential refer to a "radius"?": Differential equations are a special type of equation that relates functions and their derivatives. They are one of the most important mathematical concepts in physics. "Linear" is just a mathematical property of such an equation.

    - "When you say normalization is ignored, is that for all QM waves?": Usually, wave functions must meet a requirement that they are "normalized". In my example, I ignored this requirement because it is irrelevant for the point I wanted to highlight.

    - "After all the electron isn't that easy to figure out, plus the nature of its movement is not yet confirmed...": I kind of disagree with this statement. Moreover, I think it has very little to do with the question of why the constant Pi shows up in the context of QM.

     

    I think my first attempt to answer your question did not go very well. So let me try an alternative story:

    In many important cases, wave functions can be expressed as sine and cosine functions. The Pi comes from the 2*Pi periodicity of these functions (if you expressed the arguments in terms of degrees, you'd get 360s popping up in the equations, I guess). For example, if a wave function looks like f(x) = sin(x), it has a wavelength (the distance after which it repeats itself) of 2*Pi. If the function should have wavelength 1, it would look like f(x) = sin(2*Pi*x).

  16. Wave functions are solutions of linear differential equations. Mathematically, the solutions of such linear differential equations typically contain sines and cosines (or, equivalently, exponential functions with imaginary exponents). That is where the Pi comes in.

     

    Example: Imagine the Schroedinger equation for a free particle were f(x) = -f''(x), where f''(x) is the second derivative of f(x) with respect to x. A possible solution to this is f(x) = sin(x) (normalization ignored for this example). The wavelength of this wave is 2*Pi.
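
    Spelled out, the general solution of this toy equation, and the way a wavelength other than 2*Pi would enter, look like this:
    [math]f(x) = -f''(x) \quad\Rightarrow\quad f(x) = A \sin(x) + B \cos(x),[/math]
    which repeats after [math]2\pi[/math]. To get a wavelength [math]\lambda[/math] instead, rescale the argument:
    [math]f(x) = \sin\left(\frac{2\pi x}{\lambda}\right) \quad\text{solves}\quad f''(x) = -\left(\frac{2\pi}{\lambda}\right)^2 f(x).[/math]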

  17. I find it really hard to get a coherent picture of the opening post. I see three different aspects that trigger different tones of response, some of which are pretty redundant with the replies already given. So maybe I'll just briefly touch on all three aspects to show why at least I have problems getting a clear picture of this thread.

     

    First, there is the relatively long explanation about sums or averages not giving the full information about the individual components that contribute to them. That is correct, mathematically trivial, and well known to everyone working in any field of complex systems. It is also pretty banal, and applies to pretty much every science- or society-related number you ever hear in the TV news: the gross domestic product, the number of Covid-19 infections, salaries in the IT sector, the time that kids spend on social media, ... Now, admittedly, there are a lot of people who, for different reasons, appear to limit the discussion of a topic essentially to these numbers. So for this aspect of the opening post I am torn between a sarcastic "great work, Sherlock" and an honest "it is great that you are aware that this one number is not the full picture". I think the relative volume of this sub-optimal example pushed a few people towards the former reaction.

     

    Second, there is the aspect of the specific role of an average temperature in climate science, or more specifically its role in the climate change debate. For me, this would be a great topic for debate and learning. I worked as a scientist in a somewhat related field for several years, and still my understanding of it is very basic and with a lot of "that's how I imagine it is". I'll not formulate a coherent story for this post, but just throw in a few imho relevant pieces: In the context of the greenhouse gas effect, the average temperature is a very sensible, experimentally-measurable observable with some weaknesses (energy stored in the oceans). Climate scientists don't model average temperatures but create sets of future scenarios for the evolution of complex systems. The evaluation of these scenarios cannot be reduced to a single number that tells you how good or bad a scenario is. What you can do is group your scenarios according to some meaningful parameter, see what typical scenario effects are for that parameter, and then have some delegates haggle over how bad you want it. Remember: The problem with climate change is not the increase in the mean temperature, but the increase in extreme weather conditions, the change in habitability on the planet, the self-reinforcing mechanisms (loss of reflective ice, melting of permafrost, methane emissions from the oceans), and possibly a bit of land loss from rising sea levels.

     

    And finally, there is the third aspect of the opening post which really turns me off: the first half of the first sentence and the last sentence. Thanks to them, a post with potential for an interesting discussion comes in a wrapping that says "troll, ignorant, or political agenda inside" to anyone with a bit of experience in social media. So despite giving the OP a huge benefit of the doubt with the time I put into this post, I don't want to leave them without comment: 1) "Climate scientists are concerned with deviations in the average global temperature": No, they are mostly not. Type "climate science" into Google and check out what they do. 2) "Has there been any research in this area [of what is really going on]?": Yes. There is a complete scientific discipline called climate science that is concerned with these questions.

  18. 10 hours ago, Sensei said:

    Correct. Using built-in C/C++ srand()/rand(). But you are free to use alternative pRNG..

    Pick up one

    https://en.wikipedia.org/wiki/List_of_random_number_generators

    C++ supports state-of-the-art random number generation. So it would be easiest to use a C++ compiler for the code (I expect it will compile the C parts just fine) and pick an RNG that is provided by the language's standard library. So: Pick one: http://www.cplusplus.com/reference/random/ .

     

    I do not expect that the choice of the RNG matters in this case. But it is a good habit to never run a Monte Carlo simulation without a good RNG, and including one is really easy in most programming languages.
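
    A minimal sketch of what that looks like with the C++ standard library (the distribution bounds are placeholders for whatever the simulation actually needs):

    #include <cstdio>
    #include <random>

    int main() {
        // Mersenne Twister: a solid default generator from <random>,
        // seeded from the system's entropy source.
        std::mt19937 gen(std::random_device{}());
        std::uniform_real_distribution<double> uniform(0.0, 1.0);  // placeholder bounds

        for (int i = 0; i < 5; ++i)
            std::printf("%f\n", uniform(gen));
        return 0;
    }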

  19. Do I understand correctly that the issue you see is the following: one unit in the x-direction is a different number of pixels (or cm on paper, if you printed it out) than one unit in the y-direction. Is that what you mean? That is indeed the case. I have never considered it a problem. In my experience, this is the default behavior of most plotting engines. It is normal that, in addition to looking at the shape of curves, you also have to look at the numbers on the axes (small effects can often look large if you just zoom into the graph). And I do think there are more use cases for having different spacings in the x- and y-directions than for having the same spacing. In fact, it is very common that the x- and y-values are not even comparable (e.g. one can be a time and the other a number of people).

    If you really want identical spacings, I think you can at least approximate that by setting the ranges by hand and forcing the aspect ratio or the size of the picture to fit the ranges (same height and width if the x- and y-ranges are the same, double width if the x-range is double the y-range, etc.). To set the range by hand, you can use

    curve(x^3, -3, 3, ylim=c(-3,3)); grid()

    Not sure how to set the window size by commands, but it should be possible. For an approximate solution, if you use RStudio (which I really like for working with R), you can just resize the plot window to have the aspect ratio you want and then export the image.
