Everything posted by taeto

  1. You may or may not want to distinguish notationally between inertial confinement fusion (ICF) and magnetic confinement fusion (MCF).
  2. That is what I mean by a language problem. Your table contains "the primordial numbers", and 48 and 81 are two of the numbers in that table, so clearly they are examples of such numbers. You also say that there are two kinds of composite numbers: some that are "raised" (prime powers) and others that are "composite" (not prime powers). Instead of saying that certain composite numbers are not "composite" because they are "raised", why not call the latter kind "unraised" or "true composite" numbers? And why the first 10 numbers of each kind? Why not the first 12, or 100? Are you aware of Skewes's number in the theory of prime numbers: https://en.wikipedia.org/wiki/Skewes's_number ?
  3. Maybe the terminology does not work so well when transported from French into English. The expression "the 40 primordial numbers" is puzzling: at first you might think it means the numbers 0, 1, ..., 39, but we see 48 and 81 presented as examples. And "composite number" is already used for every non-prime number larger than 1, so it is unfortunate to attach the same term to two different and closely related concepts. On the positive side, it might be worthwhile to find something interesting about the density of natural numbers that have no square factors; I am not aware of much that is known.
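    Just as an illustration of what one could compute, here is a minimal sketch in Python (NumPy and the particular bounds are my own choices, nothing from the thread) that counts the square-free numbers up to a bound N and estimates their density empirically.

        import numpy as np

        def squarefree_density(N):
            # Sieve out every multiple of a square d^2, then count what is left in 1..N.
            squarefree = np.ones(N + 1, dtype=bool)
            squarefree[0] = False
            for d in range(2, int(N ** 0.5) + 1):
                squarefree[d * d :: d * d] = False
            return squarefree[1:].sum() / N

        for N in (10 ** 3, 10 ** 5, 10 ** 7):
            print(N, squarefree_density(N))   # the estimates settle down as N grows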
  4. No, D cannot be negative. And the sum for C is \(\sum_i \frac{1}{n_i-1}\), where \(n_i\) is the number of elements in the \(i\)'th data set. In our case each T contains 4 numbers, so we add 1/3 three times and the sum comes to 1. Then, once you have that sum, you subtract \(\frac{1}{n-3}\), where \(n\) is the sum of the \(n_i\) and 3 is the number of data sets. Sorry that this was not so clearly written.
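    Just to spell that out with the numbers from these tables (three data sets of four values each):
    \[ C = 1 + \frac{\left(\frac{1}{3}+\frac{1}{3}+\frac{1}{3}\right) - \frac{1}{12-3}}{3(3-1)} = 1 + \frac{1 - \frac{1}{9}}{6} = 1 + \frac{8}{54} \approx 1.148. \]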
  5. If we take Bartlett first, then the purpose of the test is to figure out, for several sets of data, and assuming that each set is normally distributed, whether they also have the same variance. Looking at the second table, it could represent four sets of data, one for each row, each containing three values, for T1, T2 and T3. Or, more likely, it represents three sets of data, each containing four values. Let us say that the second table represents experiments in which, for each of three temperatures T1, T2, T3, four measurements were made. Then for T1 the measured values were 2.42, 2.83, 2.25, 3.02, for T2 they were 3.05, 2.21, 2.18, 2.35, and for T3 they were 1.95, 2.23, 2.54, 2.56.

    We can calculate the estimated variance of each of these samples in the standard way, as 1/3 of the difference between the sum of the squares and four times the square of the average. I trust that this is familiar to you? Then we have three estimated variances V1, V2, V3, one for each T. We also have to compute the pooled estimate of the common variance V, in case they are actually all equal. That will be \(V = \frac{1}{12-3} \sum_{i=1,2,3} (4-1)V_i\), where the 3 means that we have three data sets, the 4 means that we have four data points in each set, and 12 is the total number of data points in the table. I have not made the computations, since I have no good calculator handy, and I would probably make confusing mistakes, sorry.

    Finally you have to compute the Bartlett test statistic itself. First we need the number \( D = (12-3)\log V - \sum_{i=1,2,3} (4-1)\log V_i.\) We can see from the formula above for V that it would be a pretty good match if all of V1, V2, V3 were the same, because then V would be equal to each of them, and this \(D\) would be zero. So a small value of \(D\) is good. To compute the final test statistic we also need \( C = 1 + \frac{\sum_{i=1,2,3} \frac{1}{4-1} - \frac{1}{12-3}}{3(3-1)}.\) The statistic becomes \(B = D/C\). Now you have to check \(B\) against a \(\chi^2\) distribution with 3 - 1 = 2 degrees of freedom.

    Having typed all of that, maybe it is not as easy as I first thought. But try to compute as many of the numbers as you are able to.
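    In case it helps to check the hand computation, here is a minimal sketch in Python (assuming NumPy and SciPy are available, which is my assumption and not part of the assignment) that runs through exactly the quantities described above, V1, V2, V3, V, D, C and B, for the three temperature samples; the last line uses SciPy's built-in Bartlett test only as a cross-check.

        import numpy as np
        from scipy import stats

        T1 = [2.42, 2.83, 2.25, 3.02]
        T2 = [3.05, 2.21, 2.18, 2.35]
        T3 = [1.95, 2.23, 2.54, 2.56]
        groups = [np.array(t, dtype=float) for t in (T1, T2, T3)]

        k = len(groups)                           # 3 data sets
        n = np.array([len(g) for g in groups])    # 4 values in each set
        N = n.sum()                               # 12 values in total

        V_i = np.array([g.var(ddof=1) for g in groups])  # estimated variances V1, V2, V3
        V = ((n - 1) * V_i).sum() / (N - k)              # pooled common variance V

        D = (N - k) * np.log(V) - ((n - 1) * np.log(V_i)).sum()
        C = 1 + (np.sum(1.0 / (n - 1)) - 1.0 / (N - k)) / (3 * (k - 1))
        B = D / C

        p = stats.chi2.sf(B, df=k - 1)            # compare B to chi^2 with 3 - 1 = 2 degrees of freedom
        print("B =", B, "p =", p)
        print(stats.bartlett(T1, T2, T3))         # cross-check against SciPy's own implementation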
  6. Don't be silly. I am happy you point out the obvious connection. When I see things from a math viewpoint, I sometimes forget the, well, obvious 🧐 At least Bartlett is not so difficult, so let us go through it 🙂. But first, what is the meaning of your first table? We see five columns T1 to T5 (where T stands for "Temperature") and eight rows with an entry for every column. Do you expect that you have to test whether all 40 entries in the rows and columns follow the same normal distribution? Or just that the entries in the same column follow a normal distribution, possibly not the same for every column? Or the entries in the same row? And then, what is the corresponding meaning of the second table? The two tables are not related in any way, by coming from the same experiment or anything like that, is that right?
  7. Yes, it seems to be what the assignment is about. But sangui said that he cannot do the math, and the math differs a lot. Bartlett is a standard \(\chi^2\) test, whereas the Shapiro-Wilk test statistic W needs looking up in a table. That makes the mechanics a little different. The value of W is actually not quite easy to compute by hand, so it would be a place to start, if necessary.
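    If a computer is allowed, the value of W can at least be checked against software. A minimal sketch in Python (assuming SciPy is available; the data below just reuse the T1 sample from the other table as an example, not as part of the assignment):

        from scipy import stats

        T1 = [2.42, 2.83, 2.25, 3.02]   # example data only; substitute the sample to be tested
        W, p = stats.shapiro(T1)        # Shapiro-Wilk statistic W and its p-value
        print("W =", W, "p =", p)       # a small p-value speaks against normality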
  8. Could you show some example of what you are unable to do? Shapiro-Wilk and Bartlett tests are not really that closely related. So if you bunch them together like this, it may mean that you are in need of some of the basic concepts that underlie them both.
  9. It is fairly safe to say that if they are aware of one having happened, then it is because they have seen/detected it. Whether it has happened before is something we cannot know unless we detect it.
  10. Abstract: Complex bound states of magnetic excitations, known as Bethe strings, were predicted almost a century ago to exist in one-dimensional quantum magnets. The dispersions of the string states have so far remained the subject of intense theoretical studies. Here, by performing neutron scattering experiments on the one-dimensional Heisenberg–Ising antiferromagnet SrCo2V2O8 in high longitudinal magnetic fields, we reveal in detail the dispersion relations of the string states over the full Brillouin zone, as well as their magnetic field dependencies. Furthermore, the characteristic energy, the scattering intensity and linewidth of the observed string states exhibit excellent agreement with our precise Bethe–ansatz calculations. Our results establish the important role of string states in the quantum spin dynamics of one-dimensional systems, and will invoke studies of their dynamical properties in more general many-body systems.

    Bera, A.K., Wu, J., Yang, W. et al. Dispersions of many-body Bethe strings. Nat. Phys. (2020).
  11. Funny how Freud sometimes takes control of one's fingertips.
  12. Hi Zulfi. Your numbers will work just as well. But for the original proof in the book, the idea is that there are m small items at the beginning, all of the same size, and up to m larger items at the end, also all of the same size. When the sizes are additionally chosen so that one large and one small item fit together in one bin with no spare room, then we can be absolutely sure that the value of OPT is exactly m. When you choose varying sizes, as you do, it becomes more tricky to check that OPT does not end up below m.

    We do not know how many bins A uses for the first m items. The algorithm can do whatever it wants. All we know is that A must use between m/2 and m bins to pack the first m items (if we assume that three of them will not fit into the same bin). Whichever number of bins it uses, we call that number b. When it says "At this point in the algorithm", they mean: after the first m items have appeared, and we are still waiting for possibly more to arrive. At that point you have m items, any two of which fit together in a single bin. So the optimal number of bins would be m/2, in the special case where no more items come in. Remember that the algorithm does not know when the stream of items to be packed will stop. In your example, after the arrival of the two small items, the optimal way to pack just those items requires only one bin.

    No problem. Pretty much stuck to this chair anyway.
  13. Yes, you can choose to pack the new item in any one of the existing bins, or in a new bin. Once you have packed an item in a bin, you are not allowed to move it to another bin; it has to stay until the end in the bin where it was first packed. Which is why you cannot in general achieve the optimal solution.

    Now I suppose that \(m\) is the number of items of size less than, but close to, \(1/2\). Their combined size is nearly \(m/2\), and a bin can only hold items whose sizes add up to at most \(1\). Then clearly you need at least \(m/2\) bins. In fact you can get away with using exactly \(m/2\) bins, since the items can be paired off so that each pair has total size less than \(1\).

    I read this to mean that if we observe what A is doing as it receives the initial \(m\) items, each of size \(< 1/2\), we will see that \(b_1\) bins are used to pack just one item each, and \(b_2\) bins are used to pack two items. This is different from what you are saying, since I assume that all the \(m\) initial items have size \(< 1/2\); the larger items, if any, will arrive only later. The calculations are used to compare the number \(b\) of bins that A uses to the optimal number \(m/2\).

    A must use a balanced approach, that is, it has to pack a certain number of bins with two items and a certain number with only one item. Even though each bin has room for two items, it would be a mistake to pack two items into every bin, because it would make A perform poorly if and when \(m\) further items arrive, each of size \(> 1/2\). It would also be a mistake to pack each bin with only a single item, since if no large items arrive (A is not allowed to know the total number of items), A will have used \(m\) bins where the optimum solution is half as many, and that is bad performance.
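    To spell out the balancing act in symbols, here is a sketch of the standard counting argument, using the \(b_1, b_2\) notation from above; it is only the skeleton, with the asymptotic subtleties ignored. After the first \(m\) small items we have \(b = b_1 + b_2\) bins and \(b_1 + 2b_2 = m\) packed items, so \(b_1 = 2b - m\). If A is to stay below a ratio of \(4/3\) in both possible continuations, then
    \[ \text{stream stops here:}\quad \frac{b}{m/2} = \frac{2b}{m} < \frac{4}{3} \;\Longrightarrow\; b < \frac{2m}{3}, \]
    \[ m \text{ large items arrive:}\quad \frac{b + (m - b_1)}{m} = \frac{2m - b}{m} < \frac{4}{3} \;\Longrightarrow\; b > \frac{2m}{3}, \]
    where the second line uses that each large item either drops into one of the \(b_1\) bins holding a single small item or opens a new bin. The two requirements contradict each other, and that contradiction is the heart of the lower bound.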
  14. He is looking at the case where, for a long time, you get items of the same size just less than a half, say 0.499. You keep getting those, and you have to decide whether to pack some of them together into one box, because two of them will fit, or to just leave some of them in half-packed boxes, because maybe items will come later to fill up those boxes. Now in our case, the sequence I2 just means a sequence of items of size a little more than a half, at most 0.501. So if you left any box which is only filled to 0.499 of its capacity, then you can drop one of these items in there to fill the box. But if you do not have such a box ready, then you have to open a new box to accommodate it.
  15. Who expects the Roman inquisition, indeed.
  16. If there exists a solution for sure, then there has to be an assumption which says, or implies, that the matrix on the right-hand side of the equation is invertible. Something which would allow us to make such a determination is missing from your original explanation.
  17. Being called "not Bruno or Galileo" might have been too harsh.
  18. Remember that scene I do (embarrassingly). What is the idea here? Galaxies were previously mistaken for stars, and now they are "missing" because no stars are left in their places, only galaxies? If you go "oh, there is no star left in this location, only a galaxy", is it not too much of a giveaway that maybe it wasn't a star to begin with?
  19. Pangaea ?

    Maybe. But it is gone now and can no longer answer.
  20. It all sounds right. Which text is this from? And how is the performance guarantee defined in that text? If we assume m = 2 in that same proof, why would it not prove that the bound is at least a factor of 3/2 by the same argument, except with 4/3 replaced by 3/2? The inequality which reads 2b/m < 4/3 in that proof would then read b < 3/2. And the inequality which reads (2m - b)/m < 4/3 would read (4 - b)/2 < 3/2, that is, b > 1. Since b is an integer, this is a contradiction. It is the same as the mock argument which I gave above in another post. I suspect that your recollection is top notch. However, I fear the source of your information cannot necessarily be trusted. The paper https://arxiv.org/pdf/1807.05554.pdf gives more details on background and more recent bounds. Their new lower bound for the asymptotic competitive ratio is larger than 1.54278.
  21. Darn it, you had to do that. I did not even click the video, and I am still going to hear Road to Nowhere inside my head for at least the rest of the day. I will take one hour off.
  22. Me too. Maybe there are variations in the definitions of the competitive ratio. It just seems too easy to show a bound of 3/2 in place of 4/3 if you do not require the asymptotic limit. It is the value 4/3 which makes sense asymptotically. Also I expressed it wrong: you can exceed the ratio 4/3 infinitely often, so long as you approach it in the limit. This has no implications for this problem, which deals with assuming an A that has ratio strictly less than 4/3.
  23. Knowing that algorithm A always spends less than \(4n/3\) boxes to pack items that require at most \(n\) boxes, you can think about what A must do when you feed it items one by one in the sequence shown in the figure.

    What will A do with the first two items of size \(1/2-a\)? If A places them in different boxes, then A uses 2 boxes, but obviously the optimal solution for this short sequence is 1 box. In that case A would have used 2 times as many boxes as needed, which is more than 4/3. We deduce that A must place both of those items in the same box. This is still a dangerous strategy, because the third and fourth items to arrive could both be of size \(1/2+a\). Then they do not fit together in a single new box, so you have to open two new boxes to place them in. You have now spent 3 boxes on packing four items which could have been packed into 2 boxes, by fitting one small and one large item together in each of two boxes. The solution that A gets overshoots the optimal solution by a factor of 3/2 in this case, which is also more than 4/3.

    Now it seems a done deal: whatever A does with the first four items must cause it to use at least 50% more boxes than actually needed. But there is one special thing about the analysis of approximation algorithms: you only have to consider solution values, in this case the number of boxes used, that are very large. Then you consider the number of boxes that A gets forced to use versus the optimal number, in the limit as the number \(m\) of items increases. At least the way to think about what A must do is similar to how you think about inputs with only a small number of items; on its own, though, that overlooks the special aspect of the analysis of approximation algorithms, namely that the ratio is asymptotic. You may exceed the ratio for some inputs, but only for a bounded size of the solution values.
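    To make the adversarial sequence concrete, here is a small sketch in Python; it uses first-fit purely as a stand-in for A (the choice of first-fit is mine, for illustration only), feeds it m items of size 1/2 - a followed by m items of size 1/2 + a, and compares the number of boxes used against the optimum for the prefix alone and for the full sequence.

        def first_fit(items, capacity=1.0):
            # Online first-fit: place each item into the first box it fits in, else open a new box.
            boxes = []
            for size in items:
                for box in boxes:
                    if sum(box) + size <= capacity + 1e-12:
                        box.append(size)
                        break
                else:
                    boxes.append([size])
            return boxes

        m, a = 100, 0.001
        small = [0.5 - a] * m   # any two of these fit together in one box
        large = [0.5 + a] * m   # no two of these fit together; one fits with one small item

        prefix = first_fit(small)
        full = first_fit(small + large)
        print(len(prefix), "boxes for the small items alone; optimum is", m // 2)
        print(len(full), "boxes for the full sequence; optimum is", m)

    First-fit pairs the small items and so matches the optimum on the prefix, but it then pays for that on the full sequence, which is exactly the trade-off described above.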
  24. I believe that the stars referred to in that post were actually stars that have recently appeared. Not disappeared. Inb4: no, there will not be any new stars appearing right here inside our solar system during our lifetime, killing us all.