Delta1212

Senior Members
  • Posts: 2767

Everything posted by Delta1212

  1. Yeah, that headline is somewhat misleading. It's talking about an epigenetic process that regulates gene expression but doesn't actually do anything to the underlying sequence of base pairs.
  2. He might be talking about epigenetic effects, but calling that a change to the DNA is a bit of a stretch.
  3. I suppose you could have the "only affected by adjacent cells" thing be a design decision rather than an intrinsic part of the simulation, and then allow for an additional layer of cells to hold information on the quantum state of entangled particles rather than having that information stored locally. The cell at whatever location each particle is measured at would be updated from the master cell, with the Uncertainty Principle acting as a fudge to mask the processing time required to update each cell from the master cell. (There's a toy sketch of this layout after this list.)
  4. It's possible something else will come after us, but it will most likely be starting from scratch. On the off chance that intelligence arises again on Earth, or arises close enough to find Earth in the window of time where evidence of us still exists, it's unlikely that it would learn anything from us, except possibly a few tricks from anything mechanical that is still in good enough shape to be usable, or at least in good enough shape to point someone in the right direction as to its function. Anything written or recorded is liable to be unreadable.
  5. There is no reason we would be able to detect the update speed of the simulation. Unless, I suppose, the "universe" is being simulated across a distributed network of processors running asynchronously, so that updates in one area have to propagate out through the network. In that case, the speed of light would be less the actual "front" of an update and more a fudge to make sure there were no dependencies between processors at opposite ends of the network, preventing inconsistencies arising from latency in the network. Quantum entanglement makes this seem somewhat unlikely, though, since it's exactly the sort of phenomenon that such a set-up would presumably be seeking to avoid, and it would probably best be covered by a global variable defining the state of all entangled pairs. In any case, if you're not running it as multiple interconnected simulations in parallel, but updating every part of the simulation before moving on to the next "tick," then the time it takes the simulation to progress from tick to tick would not be detectable inside the simulation in any way. Time proceeds at a rate of one second per second, no matter how long it takes each second to render. (A minimal sketch of the tick idea is after this list.)
  6. Seconding this. Knowledge builds upon itself. The more you know, the better off you will be. Asking which subject is most important is like asking which load-bearing beam is most necessary for holding up the building: a bit pointless, since it'll collapse if any of them are removed.
  7. Oh yes, as StringJunky says, I'm not claiming this is a place we're at now, nor that we're going to reach it tomorrow. It's just that it seems like getting from where we are now to that point is more a matter of scaling and refinement of techniques than of such an AI requiring the development of some heretofore completely unknown technology. To give an analogy, if such an AI is our moonshot, then we're currently in the early days of Wernher von Braun rather than at the Wright Brothers stage.
  8. You say that like there is a difference in kind rather than simply one of degree.
  9. My point is that, long term, I don't think that's necessarily true. We crossed the point four years ago where neural networks became capable of finding meaningful patterns in large data sets without being told ahead of time what patterns they were looking for or what each individual element of data even represented. Researchers built robust facial recognition software by showing a neural network a large number of images with and without faces, but without having to tell it, for each input, which images contained faces and which didn't. Or, same project but more amusingly and therefore getting more play in the press at the time, they showed it tons of hours of YouTube videos and it learned to recognize cats without having been prompted to do so. Source for both: https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf (There's a small sketch of this kind of unsupervised pattern-finding after this list.)

     In the time since then, AI has been moving further and further into areas that are considered more "art" than science (a category which, somewhat ironically, in this discussion includes science itself): those areas of human endeavor that are acknowledged to be at least somewhat rules-based on a superficial level, but that contain such a level of complexity that it takes a degree of creativity to succeed at them rather than rote rule-following. AlphaGo was well covered in the press, and I believe there was a discussion on this site about it at the time? And, of course, the next challenge it is tackling will be StarCraft, which may not have quite the same cultural cachet, but certainly ups the "muddle" factor as far as optimal moves go, and does maintain a population of very high-level players to measure ability against.

     What I haven't seen quite as widely reported on, and which frankly I find even more compelling in terms of "computers doing jobs that require both knowledge and creative input in order to be successful," is the recent update to Google Translate. Languages are a hobby of mine, and I'm very familiar with Google Translate and its limitations. I used it quite frequently as a tool, and using it properly as a tool required knowing exactly what its limits were, where it would break, and what kinds of things it had problems handling, so that I could rework the sentences I needed translated to avoid things I knew would break it. I should say I *was* very familiar with Google Translate, because earlier this year they switched over from the previous, phrase-based algorithm to a neural network they built for the purpose. The improvement in the translations is remarkable. It used to give fairly broken translations for any sentences more complex than very short, simple phrases. They were very helpful for figuring out the meaning of long articles in languages you don't speak, but no one would ever have mistaken them for human translations or looked to them for any kind of nuance beyond a general overview of whatever was being talked about.

     I just went to the homepage of Spiegel Online, ripped the first paragraph I saw off the site, and dropped it into Google Translate. Here is the German:

     Der Anschlag von Berlin wirft viele Fragen auf. In der Politik wird über schärfere Sicherheitsmaßnahmen diskutiert: Diese Vorschläge stehen zur Debatte - von Fußfesseln über "spezielle Erstaufnahmeeinrichtungen" bis Videoüberwachung.

     Here is what Google came up with:

     The Berlin attack raises many questions. Politicians are discussing more stringent security measures: These proposals are being debated - from ankle cuffs to "special first-time devices" to video surveillance.

     Under the old system, that would have been a jumbled mess. And while the new system is not completely flawless, the translations mostly read as something an actual human would have written, rather than something you would get by looking up each word of a sentence in a dictionary, and there is a level of abstraction to the translation that is remarkable. A more literal translation of that second sentence would be "In politics, more stringent security measures are being discussed." It means essentially the same thing as "Politicians are discussing more stringent security measures," but it changes the subject and switches from passive to active voice, resulting in a slightly more natural-sounding English sentence than the direct translation. Achieving that while maintaining the semantic equivalence of the sentences is seriously impressive. Anyone who doesn't think this has major implications for the ability of neural networks to produce output that requires creative input doesn't really understand how translation works.

     I'm not claiming that machines will eventually take every job that a human currently does. My contention is that there is no job currently done by humans that can definitively be labeled as safe from automation, in the sense that a machine could never perform at, near, or above human level at the task. This includes science, which at its core is the attempt to find high-level patterns in massive amounts of raw data. That is exactly the kind of work machines are already good at doing, and scaling up neural networks has, thus far, resulted not merely in gains in processing speed but also in the degree of abstraction the networks are able to handle.

     I am, again, speaking in principle. I don't think there is any job that a machine, simply scaling from what we have now, cannot do in principle. Whether it will be economical to employ a machine capable of doing a given job instead of every human currently doing that job is another question entirely. For low-level service jobs, it probably will be, and even for some jobs that require advanced education, especially, for instance, in the medical field, where diagnostic AIs are already beginning to outpace human doctors in accuracy. It may be that some tasks require a level of abstraction such that it simply isn't worth the money to run a neural net of the size required to do the job properly versus what it would cost to pay a human to do it. But I'm not going to hazard a guess as to exactly which professions will wind up falling into that category.
  10. Have you seen Rogue One yet? It makes a pretty compelling case for a future in which actors are no longer necessary, at least for film. And it's fairly likely that we're coming up on a point where AI will be able to replace a lot of service sector jobs. That would have been pure fantasy a few years ago, but the advances that have been made in artificial neural networks in that time have pretty well convinced me that there is no job, in pretty much any field, that you can guarantee is impossible to automate any longer. Any job a human can do, a computer can do, or will eventually be able to. The question is how soon that happens and how economical it will be to have a machine do it vs an actual person.
  11. What does "halfway" mean in this context? Half implies a midpoint between start and finish. You've sort of defined a finish line, but where are we starting the count from? Half the time it will take to reach that point? Measured from when? The beginning of human history? That's quite a while. From the beginning of industrialization? The written word? The invention of the computer? Or are we halfway in terms of the knowledge we need, so that the accelerating accumulation of knowledge means we're actually quite close? But then how are you measuring our current degree of knowledge, and how are you extrapolating how much we need? Are we halfway on a linear scale? A log scale? And, frankly, I'm not even entirely sure what you're getting at or how Twitter signals the end of science, unless you're talking about the sociological phenomenon whereby the proliferation of social media as a primary information distribution outlet has resulted in a fracturing of our shared definition of reality and an accelerated politicization of the essence of truth, such that all scientific knowledge is up for debate on the basis of personal opinion rather than evidence.
  12. Why are you dividing by a million?
  13. *Google "David Hilbert"* "German mathematician" "Died: 1943" Welp, that answers all the questions I'm of a mood to have answered right now.
  14. I was initially going to say that $100 isn't exactly life-changing money and I'm not desperate for cash by any measure, so I'd probably go with B regardless of what the odds were, unless you got down to some incredibly minuscule percentage that really no longer made any mathematical sense. Then I reframed the question in my mind as what the odds of winning the million would have to be for me to be willing to purchase a ticket for $100. And after reflecting on that, I think I converge on a similar answer, with what "feels right" being approximately the break-even value. I'd trade $100 for a 1 in 10,000 shot or better at a million dollars. Those are long odds, but not so long I wouldn't take them. Up it by a factor of 10 to 1 in 100,000 and it no longer feels worth the money. So anything better than a 0.01% chance and I'd probably hit B. (The break-even arithmetic is sketched after this list.)
  15. Sure, you can invoke them as part of a conspiracy theory. And I'm sure you could build a conspiracy theory around God or religion if you wanted to. There probably are plenty of conspiracy theories about religions and religious institutions. But that doesn't make the idea of God itself a conspiracy theory.
  16. Yes, but that's my point. There are a lot of conspiracy theories about the CIA. That doesn't make the CIA itself a conspiracy theory. Similarly, while religion may represent a conspiracy of people who all share a theory with one another, that is not generally what is meant by a conspiracy theory, which is a theory about the existence of a conspiracy, not a conspiracy of people with a theory.
  17. But isn't the point of a conspiracy theory that the theory is that there is a conspiracy? There's no doubt that the groups exist in this case, just doubt about what they are promoting. It'd be like everyone knowing there was an organization devoted to framing terrorists for bringing down the World Trade Center, but nobody agreeing on whether the Towers actually fell at all. It's sort of an inverted conspiracy theory in that sense. Instead of a publicly visible result with a hidden organization, which most people don't believe exists, representing the "real" explanation of what happened, you have very public organizations purporting to hold the truth about something whose existence most people can't agree on. You can certainly use God as an explanation for things in the same way that a conspiracy theory is used as an explanation for things, but by that metric, belief in the CIA would be a conspiracy theory.
  18. Begin as you mean to continue?
  19. Isn't "non-genetic cultural effect" redundant?
  20. If you want to post a personal theory of any kind, you need to provide the evidence for it. If it was locked before you could provide any, then you didn't provide it in the first place and were breaking the rules. It's not physically possible to lock a thread before the evidence for the theory being presented is provided if you are going about creating threads the way you are supposed to.
  21. You're a rulebreakerist, swansont.
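
A toy sketch of the layout described in post 3, with every detail invented purely for illustration: a grid of cells that would only ever update from their neighbors, plus a separate "master" registry that holds the shared state of each entangled pair, so that state never has to live in the local cells at all.

```python
# Hypothetical sketch only: the grid size, Cell class, and registry are all
# invented names, not any real simulation framework.
import random

random.seed(0)
GRID = 16

class Cell:
    def __init__(self):
        self.value = random.random()
        self.entangled_id = None   # key into the master registry, if any

# The "additional layer": one authoritative record per entangled pair.
master_registry = {}               # pair_id -> shared quantum state

def entangle(cell_a, cell_b, pair_id):
    cell_a.entangled_id = pair_id
    cell_b.entangled_id = pair_id
    master_registry[pair_id] = None      # state undetermined until measured

def measure(cell):
    """Measuring either cell resolves the pair in the master registry;
    the partner cell is then updated from the master, not from neighbors."""
    pair_id = cell.entangled_id
    if master_registry[pair_id] is None:
        master_registry[pair_id] = random.choice(("up", "down"))
    return master_registry[pair_id]

grid = [[Cell() for _ in range(GRID)] for _ in range(GRID)]
entangle(grid[0][0], grid[15][15], pair_id=0)
print(measure(grid[0][0]), measure(grid[15][15]))   # always agree
```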
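A minimal sketch of the single-threaded "tick" point from post 5: however long a tick takes to compute outside the simulation, clocks inside it only ever see one simulated second per tick. The sleep call is just a stand-in for a variable per-tick compute cost.

```python
import time

simulated_seconds = 0
for tick in range(3):
    start = time.time()
    time.sleep(0.1 * (tick + 1))   # pretend this tick was expensive to render
    simulated_seconds += 1         # inside the simulation: exactly one second
    wall = time.time() - start
    print(f"tick {tick}: {wall:.2f} outside seconds -> 1 simulated second")

print("simulated clock:", simulated_seconds, "s, however long the ticks took")
```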
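For post 9, a minimal sketch of unsupervised pattern-finding: a tiny linear autoencoder that recovers hidden low-dimensional structure from unlabeled data. The data, layer sizes, and training loop are all invented for illustration; this shows the general idea behind the cited paper, not a reconstruction of Google's actual network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled synthetic data: 100-dimensional points that secretly lie near a
# 3-dimensional subspace. Nothing below tells the network that.
basis = rng.normal(size=(3, 100))
data = rng.normal(size=(1000, 3)) @ basis + 0.05 * rng.normal(size=(1000, 100))

# Tiny linear autoencoder: encode 100 -> 3, decode 3 -> 100.
W_enc = rng.normal(scale=0.1, size=(100, 3))
W_dec = rng.normal(scale=0.1, size=(3, 100))

lr = 1e-3
for step in range(2000):
    hidden = data @ W_enc                 # encode
    recon = hidden @ W_dec                # decode
    err = recon - data                    # reconstruction error
    # Gradient descent on squared error, normalized by sample count.
    grad_dec = hidden.T @ err / len(data)
    grad_enc = data.T @ (err @ W_dec.T) / len(data)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

# The error drops as the encoder converges on the hidden 3-D structure,
# without ever having been told what pattern to look for.
print("final reconstruction MSE:", np.mean((data @ W_enc @ W_dec - data) ** 2))
```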
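Finally, the break-even arithmetic behind post 14, as a quick sanity check (the dollar figures come from the post; the code is just illustration): a $100 ticket is mathematically fair exactly when p × $1,000,000 = $100, i.e. at 1 in 10,000, which is the same threshold the post arrives at by feel.

```python
# Break-even odds for a $100 ticket on a $1,000,000 prize:
# the ticket is "fair" when p * prize == price.
price, prize = 100, 1_000_000
breakeven = price / prize                      # 1e-4, i.e. 1 in 10,000
print(f"break-even probability: {breakeven} (1 in {prize // price:,})")

for odds in (1_000, 10_000, 100_000):          # a 1-in-odds chance of winning
    ev = prize / odds                          # expected value of the ticket
    print(f"1 in {odds:>7,}: EV ${ev:>9,.2f} vs ${price} price")
```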