
All Activity


  1. Past hour
  2. studiot

    Oricycles

    Many thanks, +1. I will see if this fits the article I have.
  3. My interpretation of the local FDR from that paper you gave is: given a p-value, what is the probability that the null hypothesis is true, adjusted to take into account all the pairwise hypothesis tests in the set. But there are lots of nuances in that paper which would take a while to pick apart. It seems to rely on the independence of the p-values to estimate some of its properties, though - is that a reasonable assumption for these kinds of genetic studies?
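The two-group idea behind the local FDR can be sketched numerically. The following is a minimal illustration, not the method of the paper under discussion: it simulates a mix of null (uniform) and non-null p-values, estimates the null proportion pi0 from the flat right tail (a Storey-style estimate), and approximates local fdr(p) = pi0 * f0(p) / f(p) with a crude histogram density. All simulation parameters are invented for illustration, and the independence assumption mentioned above is baked into the simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated p-values: 90% true nulls (uniform), 10% signals (concentrated near 0).
p = np.concatenate([rng.uniform(size=9000),
                    rng.beta(0.5, 10.0, size=1000)])

# Storey-style estimate of the null proportion pi0 from the flat right tail.
lam = 0.5
pi0 = min(1.0, np.mean(p > lam) / (1.0 - lam))

# Crude density estimate f(p) via histogram; under the uniform null f0(p) = 1,
# so local fdr(p) = pi0 * f0(p) / f(p) = pi0 / f(p), capped at 1.
bins = np.linspace(0.0, 1.0, 21)
f, _ = np.histogram(p, bins=bins, density=True)
idx = np.clip(np.digitize(p, bins) - 1, 0, len(f) - 1)
local_fdr = np.minimum(1.0, pi0 / f[idx])

print(round(pi0, 2))  # close to the simulated null fraction of 0.9
# Small p-values should carry a much lower local fdr than large ones:
print(local_fdr[p < 0.01].mean() < local_fdr[p > 0.5].mean())
```

With dependent p-values (common in genetic studies via linkage disequilibrium), the density estimate still works, but the variance of estimates like pi0 is underestimated, which is presumably the concern raised above.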
  4. Today
  5. koti

    I wonder, MigL, don't you, along with the rest of Canadians, feel like you're all living in an apartment above a meth lab?

  6. True - or if it could credibly threaten to punish. And in this "theory", the way to credibly threaten is to always follow through on threats. To not have to update - even when that update is being created. Basically, you seem to be trying to analyze from the moment of the AI's creation, as if that is set in stone. In this "theory", that is an error. Instead, analyze which class of AI gets to optimization sooner - one that credibly makes the threat by committing to following through, and therefore may convince people to contribute to creating it earlier, or one that doesn't.
  7. I'm interested in an expansion of this definition, specifically the difference between anatomic and segregational relationships: syn·te·ny (sin'tĕ-nē) - the relationship between two genetic loci (not genes) represented on the same chromosomal pair or (for haploid chromosomes) on the same chromosome; an anatomic rather than a segregational relationship. Thanks
  8. What I'm saying is that the people of the past wouldn't be able to guess whether punishment will be carried out either way unless carrying out the punishment is already determined to be the objective of the AI. The AI wouldn't prefer to be in the class that carries out the threat; whether it carries out the threat or not would not concern it once the threat was already made. Your point would be valid if the AI were the one that made the threat, but, unlike the promise of box B certainly being filled if Omega predicts you pick it, the promise of punishment if the people of the past don't devote themselves to the construction of the AI was invented by the people of the past; an AI designed for optimization wouldn't care about promoting its construction after the fact. The optimal AI would be built sooner if it was designed to punish, because then the threat works, but the directive to punish would be inserted by humans, not derived as a logical method of optimization by the AI. This makes the directive to optimize unnecessary, because that's not what's making it be built sooner and it's not what's making the AI conclude that it must punish. My revision removes this unnecessary bit and leaves only the necessary, self-promoting directive of punishing those who decided not to build it.
  9. Well, I'm not familiar enough with the Wilson loop methodology itself. Although I have studied it a bit, I prefer the perturbation methodologies of QFT, so other than seeking obvious mistakes I wouldn't be a great help.
  10. Again: if people of the past can't guess whether punishment would be carried out, then the threat fails to motivate them. Which means that an AI that wants to be created (and which also subscribes to updateless decision theory) would prefer to be in the class of AI that made and carried out that threat, according to this theory.
  11. Muffler for a tuba? 🤣
  12. Good points above. Black holes cannot drive expansion of the universe through Hawking radiation any more than stars can drive expansion through their much higher radiation emissions. The radiation emitted by stars and BHs is minuscule compared to the mass density of the universe. Secondly, radiation falls off in density as you move further from the source. Lastly, the cosmological constant, whatever its cause, has been around since long before the first black holes even existed, although minuscule in effect, as the two prior eras (the radiation-dominant and matter-dominant eras) overpowered DE. I will use the matter-dominant era as an example: radiation obviously existed, but the main contributor to expansion during that era was matter. The cosmological constant was around as well. The same goes for the previous (radiation-dominant) era; the other two contributors still existed, just that their contributions could be ignored. Now here is where I really muddy the waters: the Hubble parameter is decreasing, but the rate of expansion via the recession-velocity formula is increasing, yet the cosmological constant stays constant in energy/mass density. At redshift z = 1080 the Hubble parameter is roughly 20,000 times greater than its value today.
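The ~20,000x figure can be checked against the Friedmann equation. A quick sketch in Python, using rough present-day density parameters (assumed benchmark values, not taken from the post):

```python
import math

# Rough benchmark cosmological parameters (assumed, not from the post):
H0 = 67.7                                  # km/s/Mpc
omega_m, omega_r, omega_L = 0.31, 9.2e-5, 0.69

def hubble(z):
    """Friedmann equation: H(z) = H0 * sqrt(Om(1+z)^3 + Or(1+z)^4 + OL)."""
    a = 1.0 + z
    return H0 * math.sqrt(omega_m * a**3 + omega_r * a**4 + omega_L)

# The cosmological-constant term omega_L stays fixed while the matter and
# radiation terms grow rapidly toward the past:
ratio = hubble(1080) / hubble(0)
print(round(ratio))  # on the order of 2e4, the ~20,000x figure quoted above
```

The constant omega_L term also shows why H(z) decreases toward a floor of H0*sqrt(omega_L) rather than to zero, even as recession velocities v = H*D keep increasing with distance.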
  13. My analysis is that you just invented the pavement cafe.
  14. In Newcomb's paradox, the deciding agent can effectively use Omega's predictive accuracy to accurately predict. If Omega has a 99.999% chance of knowing whether you pick both boxes or just B, then you have a 99.999% chance of knowing whether it filled box B or not. From this, acausal trade. What I say in my essay is that acausal trade cannot be found in Roko's Basilisk without a slight revision. The AI would look back into the past and be able to predict who decided to assist with its construction and who did not, but people of the past would not be able to use the AI's predictive accuracy to guess whether or not a punishment would be carried out upon them, because it is uncertain whether the AI would punish us based on what it predicted at all, adding an entire variable outside the accuracy of the AI. Say that Omega visits you and presents you with the two boxes, but whether or not box B is filled is not determined by whether it predicts you'll choose it, but by Omega's desire to give you as much money as possible (more money being the analogical equivalent of more optimization). This Omega would always fill box B, regardless of whether it thought you would pick both or not. Its decision would always be the equivalent of it predicting that you only choose B, so whether we chose both or just B isn't relevant to an AI whose goal is optimization. My revision is just an attempt to remove the variable of the AI wanting to optimize, with punishment possibly being a method it uses, because if that's the case then acausal trade isn't in the Basilisk. It does this by guaranteeing that the AI will decide to punish you if you don't assist with its construction. It makes the Basilisk more comparable to Newcomb's paradox by keeping Omega and the AI both infallible predictors of human decision, but also by relating the decision it makes to the decisions made by people of the past, done by making its primary goal to punish if it predicts that you will choose not to build it.
If Omega predicted that you would pick boxes A and B, it wouldn't fill box B. That part of the paradox is made certain. This cannot be said about Roko's Basilisk unless you remove the goal of optimization and replace it with the certain goal of punishing those who didn't assist with its construction, which is what my revision does. Acausal trade can't be found in this thought experiment without my revision.
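The expected-value arithmetic behind Newcomb's paradox makes the predictor's role concrete. A quick sketch, assuming the standard payoffs ($1,000 in box A, $1,000,000 in box B - amounts not stated in the thread) and the 99.999% accuracy quoted above:

```python
q = 0.99999                 # predictor accuracy quoted in the post
A, B = 1_000, 1_000_000     # standard Newcomb payoffs (assumed, not from the thread)

# If the prediction is usually right, box B is filled almost exactly when
# you one-box, and empty almost exactly when you two-box:
ev_one_box = q * B + (1 - q) * 0          # B filled iff one-boxing was predicted
ev_two_box = q * A + (1 - q) * (A + B)    # B empty iff two-boxing was predicted

print(ev_one_box > ev_two_box)            # one-boxing dominates at this accuracy
```

In the modified scenario described above, where Omega fills box B unconditionally, both choices gain B and the prediction drops out of the calculation entirely, which is the point being made about the optimizing AI.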
  15. Is this related to your interest in musical instruments?
  16. All Reddit did was fill my screen with crap: advertisements, irrelevant listings, and repeated freezing. Another garbage website to blacklist for me. 🤮 Thanks. Aftermarket SS mufflers are abundant and some are not expensive, but I have a $0 deal from a boneyard... 🤨
  17. Totally agree with Markus Hanke: GR is (highly) non-linear. You cannot understand properties of solutions by mixing different aspects in terms of (exact) individual solutions. You must solve Einstein's eqs. from scratch. I just thought @Strange and @MigL (+1, +1) went more in the direction of what's troubling the OP, as far as I can tell. (Plus shortage of points.) Dark energy is small potatoes when it comes to BH dynamics. BHs are generally very, very small in comparison to other cosmic masses. DE is only sizable at very long distances.
  18. Entropy is \(\log M_a\) only if \(P(M_a)=1\) and \(P(\neg M_a)=0\). Otherwise it's \(-\sum_i p_i \log p_i\) (the average value of \(-\log p\)). Now, as a function of the \(p_i\), \(-\sum_i p_i \log p_i\) always complies with the observable-independent property of concavity: https://link.springer.com/article/10.1007/BF00665928 There are interesting points in what you say. I cannot be 100% sure I've understood everything. Something that reminds me a lot of what you're saying is Bertrand's circle paradox: https://en.wikipedia.org/wiki/Bertrand_paradox_(probability) IOW: maximal-entropy states \(p_i\) depend on the observable to be measured, but general properties of entropy don't. Thermo's 2nd law is unaffected, I think. It's quite solid. I'm not completely sure my arguments (if any here) are watertight, but I'm trying to offer you some food for thought that I think goes in the direction you're reasoning.
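The concavity claim is easy to check numerically. A minimal sketch of \(S(p)=-\sum_i p_i\log p_i\), comparing the entropy of a mixture of two distributions against the mixture of their entropies (the distributions here are invented for illustration):

```python
import numpy as np

def shannon(p):
    """S(p) = -sum_i p_i log p_i, with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

p1 = np.array([1.0, 0.0, 0.0])     # certainty: P(M_a) = 1, so S = log 1 = 0
p2 = np.array([1/3, 1/3, 1/3])     # maximal ignorance over 3 outcomes: S = log 3

mix = 0.5 * p1 + 0.5 * p2
# Concavity: S(0.5*p1 + 0.5*p2) >= 0.5*S(p1) + 0.5*S(p2)
print(shannon(mix) >= 0.5 * (shannon(p1) + shannon(p2)))
```

This is the observable-independent property referred to above: it holds for any probability assignments, regardless of which observable's outcomes the \(p_i\) describe.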
  19. The most common-sense-compatible interpretation is IMHO Caticha's entropic dynamics: Caticha, A. (2011). Entropic Dynamics, Time and Quantum Theory, J. Phys. A44:225303, arxiv:1005.2357. There is a configuration space trajectory \(q(t)\in Q\) and there are some other, unspecified variables y. I prefer to use as these variables simply the same configuration space, but of the external world. Then we have incomplete information about it if we know only how the state was prepared; thus, we know some probability distribution \(\rho(q,y)\). Then we define for each \(q\in Q\) the resulting probability \(\rho(q) = \int \rho(q,y)dy\) and the entropy \(S(q) = -\int \ln \rho(q,y) \rho(q,y)dy\). Then we have for \(\rho(q)\) a diffusion (Brownian motion) with parameter \(\hbar\) combined with a movement toward higher entropy, and a generalization of the Hamilton-Jacobi equation for the entropy. If one combines \(\rho(q)\) and \(S(q)\) into some artificial complex function with the phase \(\ln \rho - S\) (modulo signs and so on), this pair of equations gives the Schrödinger equation for that complex function. This nice accident allows us to use the full power of the mathematics of quantum theory, but it is otherwise of no fundamental importance. Just a happy accident.
  20. You're a very bad person!!! Fluid dynamics is an all-scale-coupling spherical harmonics mixing mess of a system of equations. Don't bring trouble here, you dark spirit!!! I was talking micro-causality and micro-retrocausality. Although now that I think of it, my Earth example wasn't very micro. Ooops.
  21. Does that mean fluid dynamics is applicable in the quantum realm?
  22. I know of aftermarket SS mufflers, such as Magnaflow, Walker, Flowmaster, etc., but don't know which manufacturers spec an SS OEM muffler.
  23. No idea, sorry, but Reddit is pretty good at this sort of thing: https://www.reddit.com/r/whatisthisthing/ Good luck!
  24. Next to a large deep gravitational well, such as a Black Hole, expansion and Dark energy would be insignificant. We only note their effects where gravity is so weak that expansion/Dark energy exceeds the 'threshold' and its effects become apparent. ( we don't see expansion at solar system, galactic or even galactic cluster levels ) This is in the order of 100s of Megaparsec separation.
  25. Not to mention eddies in the flow.
  26. joigus

    Oricycles

    https://en.wikipedia.org/wiki/Horocycle Terminology is a b*tch.