Sato

Posts posted by Sato

  1. Hey Gregg,

     

    remember when you commented on Popcorn's smugness? I think some of it rubbed off on you. :P

     

    If you're writing a networking protocol that's going to live on not-particularly-powerful single-board computers like the Raspberry Pi, I'd recommend using C (or C++ if you need it). C compilers have been optimized continually since the language's beginnings with Unix, so programs written in it are among the fastest you can make. Further, because C is a low-level language and many operating systems (including Raspbian) are designed around and written in it, you can add your own low-level assembly optimizations and access networking hardware as needed, without having to go through layer upon layer of APIs and overhead.

     

    I suppose if you just want to test whatever idea you have, you can use Python. Why did Ruby even come up? Some Googling will show you that they have similar expressiveness and benchmark results, with Ruby often winning on the former and Python on the latter, though not to any significant extent that I can tell (I haven't worked with Ruby myself). If you want to actually implement your protocol for production, though, I'd recommend writing it in C.
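    For what it's worth, here's a minimal sketch of what a quick Python prototype could look like. It's just an illustration: the length-prefixed framing (4-byte big-endian length + payload), host, and port are made-up assumptions, not any particular protocol.

    ```python
    # Toy prototype of a length-prefixed protocol over TCP (illustrative only).
    import socket
    import struct
    import threading
    import time

    HOST, PORT = "127.0.0.1", 5050  # assumed values for the example

    def send_msg(sock, payload: bytes):
        # Frame: 4-byte big-endian length, then the payload itself.
        sock.sendall(struct.pack(">I", len(payload)) + payload)

    def recv_msg(sock) -> bytes:
        header = b""
        while len(header) < 4:
            header += sock.recv(4 - len(header))
        (length,) = struct.unpack(">I", header)
        data = b""
        while len(data) < length:
            data += sock.recv(length - len(data))
        return data

    def echo_server():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            srv.bind((HOST, PORT))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                send_msg(conn, recv_msg(conn))  # echo one message back

    if __name__ == "__main__":
        threading.Thread(target=echo_server, daemon=True).start()
        time.sleep(0.2)  # give the server a moment to start listening
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
            cli.connect((HOST, PORT))
            send_msg(cli, b"hello protocol")
            print(recv_msg(cli))  # b'hello protocol'
    ```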

  2. Yes, it can; it would find the most 'optimal' direction to move in. See: http://ov3y.github.io/2048-AI/.

     

    Edit: To clarify, the game and everything on the linked page are written entirely in JavaScript/HTML, so you can view the source / inspect element to find any included source files. I've picked out the one that contains the AI code for convenience:

    http://ov3y.github.io/2048-AI/js/ai.js

     

     

    Never heard of such a game.

    But after playing online, it appears it's randomizing the column and row where the new 2 appears.

    The algorithm can't predict where the new number will appear.

     

    An algorithm can't predict any more accurately where a player will move their chess piece either, yet chess-playing (and winning) algorithms exist.
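    To give a rough idea of how such an algorithm copes with the random tile placements, here's an illustrative expectimax-style search in Python. To be clear, this is not the linked ai.js (which, as far as I can tell, uses a minimax search with pruning); the board logic, search depth, and crude heuristic are my own assumptions for the sketch.

    ```python
    # Illustrative expectimax search for 2048: player nodes pick the best move,
    # chance nodes average over every cell where the random 2 (or 4) could appear.
    import itertools

    def slide_row_left(row):
        """Compress and merge one row to the left, 2048-style."""
        tiles = [t for t in row if t]
        merged, i = [], 0
        while i < len(tiles):
            if i + 1 < len(tiles) and tiles[i] == tiles[i + 1]:
                merged.append(tiles[i] * 2)
                i += 2
            else:
                merged.append(tiles[i])
                i += 1
        return merged + [0] * (4 - len(merged))

    def move(board, direction):
        """Return the board after sliding 'left', 'right', 'up', or 'down'."""
        if direction in ("up", "down"):
            board = [list(col) for col in zip(*board)]   # transpose
        if direction in ("right", "down"):
            board = [row[::-1] for row in board]         # mirror
        board = [slide_row_left(row) for row in board]
        if direction in ("right", "down"):
            board = [row[::-1] for row in board]
        if direction in ("up", "down"):
            board = [list(col) for col in zip(*board)]
        return board

    def empty_cells(board):
        return [(r, c) for r, c in itertools.product(range(4), repeat=2) if board[r][c] == 0]

    def heuristic(board):
        # Crude evaluation (an assumption): prefer big tiles and empty space.
        return sum(sum(row) for row in board) + 10 * len(empty_cells(board))

    def expectimax(board, depth, player_turn):
        if depth == 0:
            return heuristic(board)
        if player_turn:
            children = [move(board, d) for d in ("up", "down", "left", "right")]
            children = [b for b in children if b != board]
            if not children:
                return heuristic(board)
            return max(expectimax(b, depth - 1, False) for b in children)
        cells = empty_cells(board)
        if not cells:
            return heuristic(board)
        total = 0.0
        for (r, c) in cells:
            for value, prob in ((2, 0.9), (4, 0.1)):
                child = [row[:] for row in board]
                child[r][c] = value
                total += prob * expectimax(child, depth - 1, True)
        return total / len(cells)  # expected value over the random spawns

    def best_move(board, depth=4):
        scored = [(expectimax(move(board, d), depth - 1, False), d)
                  for d in ("up", "down", "left", "right") if move(board, d) != board]
        return max(scored)[1] if scored else None

    demo = [[2, 2, 0, 0],
            [0, 4, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 2, 0]]
    print(best_move(demo))
    ```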

  3. Do you have auditory processing disorder? That is, do you need (substantially) more time to consider what the lecturer says before being able to grasp it and move on, or even just to take notes? If so, then textbooks and online courses may be right for you, as they will allow you, for the most part, to go at your own pace and reread/rewatch whatever you need to.

  4. Hello, a friend and I are participating with our school's science club as a team, and we will be taking on an event that focuses on the topic of time. It will involve an exam covering topics pertaining to time and the presentation of an accurate homemade timekeeping device. Here is the event description and here is a sample exam.

     

    Note that there are certain restrictions/regulations on the device which are outlined in the event description. Namely,

    • The timekeeping device may not use electricity or chemical reactions.
    • The device cannot exceed 80 cm.

     

    Any posts sharing pertinent knowledge/understanding, whether for the exam or for the homemade timekeeping device, would be much appreciated. We have a few months but are starting early.

     

    Also, my local library has a 3D printer, so I was thinking it might be useful for building the device?

     

    Thanks,

    Sato

  5. It is 8:49 PM EST and I chose to ignore Firefox's warning against visiting this page. This thread was posted ~6 hours ago and so I hope you have definitively removed the threat/malware from the site. Can you verify this?

     

    What was the problem?

  6. Not really, I spent most of my time learning what we do know rather than just playing with ideas like that. Maybe I had a lack of imagination!

     

     

     

    I take your point that I could have been directly more helpful here.

     

    Anyway, let's see what Basic101 says next.

     

    I don't think it has anything to do with imagination, which I'm not sure is so quantifiable; maybe you were just more interested in other things that are not accessible at a layman's level, such as your current work in algebraic geometry (I think that's what it is), and your ability to perform in certain subjects scholastically coincidentally led you toward that area. Of course, you might have been being sarcastic there, joking at the common practice of cranks defending incorrect ideas by declaring that their critics lack imagination.

     

    I was also asking because those who have had such ideas and grown from there may be in a much stronger position to help those who are going through such an experience than others would be; hence the adapted adage, "It takes one [a crackpot] to know [help] one [a crackpot]." Maybe it will require some extra effort from those who have not thought in such a way before.

     

    I know you are a very smart, knowledgeable, and experienced person, ajb (in mathematics/mathematical physics, to boot), and I have spoken to you personally before and have been helped by you when looking for answers in the physics section, but I was just a bit disappointed when I saw your answer here, and I wonder if you respond similarly to all people/questions of this crackpot-y ilk.

  7. These two statements don't really fit together.

     

    How did you come to your "theory"?

     

    I think his rationale is something along the lines of: "When water droplets are disintegrated, some of the water sticks or is absorbed into the ground, while much of it is able to evaporate upwards to fill the skies/clouds. Rain drops are matter, my conception of the universe is a system of floating matter, and just like the ground (to rain drops), black holes disintegrate matter. From some of the pop-sci I've read, black holes could lead to different universes, and so as I imagine black holes constantly suck up matter and transfer it to different universes, according to the rain drop-ground analogy, some of the matter should stick inside of the black hole while the rest of it gets through. For example, if you send a person through one, some of their limbs might not pass through the black hole (be lost) in the process, just as some particles of the rain drops are not evaporated."

     

    This is obviously not a physically accurate connection, as a rain drop does not represent all matter in the universe, nor does the ground have properties similar to a black hole's, and the interaction of a rain drop with the ground in a typical environment on Earth is not representative of the interaction of an arbitrary chunk of matter with a black hole. However, I have thought of such incoherent ideas myself, not so much nowadays, but quite often before, when my main source of knowledge was pop-sci documentaries and articles. At the time I didn't know better than to think, upon seeing a Wikipedia article, "everything past the intro paragraph has too much jargon I don't understand, that doesn't even link to other wikis, and a bunch of squiggly s's (integrals) and backwards 6's (partial derivatives); this won't be useful", and the idea of a textbook would never have crossed my mind, having access to all the Brian Greene and Michio Kaku littered across Barnes and Noble.

     

    What I am interested in, ajb, seeing that you are a formal academic and characteristic of one, is whether you had such ideas yourself before receiving your physics degree, or PhD, and so on, before you had actually acquired the knowledge and understanding; did you ever ponder and play with ideas like this before you had the technical ability and confidence to achieve actual results?

     

    I am asking because of how you responded here. You posted a quote of two of his statements and noted that they don't really fit together, without elaborating on what the inconsistency actually was. From your following question and your use of quotation marks around the word theory, I assume the lack of fit was that he first presented what he created as a belief formed from a bit of research, and later as a theory, which as you know are not the same thing (if that's not it, is there some other reasonable interpretation I missed?). However, it is clear from his post that he did not know the difference in definitions, so your response reads more like an (admittedly) clever remark than an attempt to help him, and I certainly don't think you simply forgot to add more information or accidentally worded it ambiguously. To your question: even if my guess at how he came up with his theory wasn't spot on, I think the idea was pretty clear, and those inconsistencies were too. He opened the idea he posted up for criticism, so why didn't you just point out the falsely grounded assumptions he made and show counterexamples, explaining where the falsities were? I have attempted to do some of that in this post, but it is really intended as a response directly to you in the discussion.

  8. This is whole brain emulation. There are a few problems with achieving it right now:

     

    First of all, we don't exactly know how or why every neural circuit, transmitter, and region works; in fact, for the most part we don't know at all. Even if we could see all of these interactions, we'd have to make sense of them in order to remove error from the "brain emulation", or even to understand what's going on beyond a bunch of particle simulations. We could use machine learning techniques to analyze the data from brain imagers and derive rules and patterns for us, and then maybe we could make a whole brain emulation we could do something with.

     

    Secondly, however, we currently can't get such good images from imagers/scanners. E.g., a typical clinical MRI's voxel (volumetric pixel) resolution is a few millimeters, and I think the highest we've ever achieved is on the scale of micrometers, by increasing the magnetic field strength to some extent. In order to capture every individual neurotransmitter necessary for a whole understanding of the interactions, we'd need a spatial resolution of a few nanometers, and though we can do that by placing samples of a brain under a microscope, the voxel depth would also be on the scale of nanometers. The problem here is that the brain is ~1400 cubic cm, and a scanner wouldn't get past a depth of ~10^-7 cm, maybe a bit more. Project BigBrain tried to evade this problem by taking very thin slices of a dead human brain and scanning them, but they only achieved a resolution of 20 cubic micrometers, far off from what's necessary to see the actual interactions.

     

    Thirdly, however, even if we were able to image with such depth and resolution, it would take an extremely long time to transfer, process, and store each individual component of each neuron and each neurotransmitter in a computer. Consider that a human brain has on average 80 billion neurons, ~10^15 synapses, and active neurotransmitters on a similar scale. Now consider that in order to run the whole brain emulation you'd have to perform a set of physical computations for each one; if you wanted to track a single thought (a subvocalization, a visualization), it would likely take years, if not more than a lifetime. If you wanted to run a brain emulation in real time (that is, as fast as a normal human), well, just as with the imaging technology, we don't have the hardware yet. Lots of complex simulations have been developed and used, such as hydrocodes running supernova simulations, but those track fluids on the scale of cubic meters and still take months to complete on even the most state-of-the-art (non-quantum) supercomputing clusters.
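    For a rough sense of that scale, here's a back-of-envelope sketch. To be clear, the per-synapse storage, update rate, and ops-per-update figures are assumptions I made up for illustration, not measured values; only the neuron and synapse counts come from the paragraph above.

    ```python
    # Back-of-envelope estimate of whole-brain-emulation data and compute.
    neurons = 8e10            # ~80 billion neurons (figure from the post above)
    synapses = 1e15           # ~10^15 synapses (figure from the post above)
    bytes_per_synapse = 64    # assumed bytes of state per synapse
    updates_per_second = 1e3  # assumed updates per synapse per simulated second
    flops_per_update = 10     # assumed floating-point ops per update

    storage = synapses * bytes_per_synapse
    compute = synapses * updates_per_second * flops_per_update

    print(f"{neurons:.0e} neurons, {synapses:.0e} synapses")
    print(f"storage  ~ {storage / 1e15:.0f} PB")        # ~64 PB just for synapse state
    print(f"realtime ~ {compute / 1e18:.0f} exaFLOPS")  # ~10 exaFLOPS to keep up
    ```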

     

    Someone whom some people I know are acquainted with, and who has therefore popped up on my radar, is actually working on the problem of whole brain emulation by imaging and analyzing the brain of a nematode, which contains only ~300 neurons; that is a step forward.

     

    Hope this helped.

  9. I am still trying to find a way to solve for that. What I was going to do was have it so if the web hosting is discontinued, it would automatically download to the server and simply move onto another crowd hoster.

     

    Ah, sounds good, but how would you know if the web hosting is discontinued? If their phone breaks, they lose internet connectivity, they turn their phone off, or anything along those lines, it would just stop serving the file, and there wouldn't be any way for the server to get it after that.

     

    Have you read any of those articles I posted? They propose solutions for such problems, one concisely on the OpenSSI wiki page.

  10. Well, it wouldn't be distribution of one website among multiple computers, but hosting of many websites upon one device. Also, the files are also stored (offline) on the device that published the content, therefore not causing a problem.

     

    If you're referring to what I said in paragraph two, line two, sentence two, I meant that was one thing you could do. The rest of the description of the problem before that discussed the case with just one mobile device hosting. Anyway, being familiar with your project (or at least I think it's the one you're talking about), the files need to be accessible all the time if they're published, not just offline on the developer's phone.

  11. This is an area of distributed computing. Most of what I've seen is distributed processing, such as BOINC, where people volunteer to dedicate a bit of CPU time on their devices to perform computations/processing for some data-intensive project, either aiding a (super)computing cluster or something of that sort. This works because it's not completely reliant on the client nodes, as the server will do whatever processing they don't.

     

    However, if you delegate data storage responsibilities to users' phones, you'll be completely reliant on them for that data. I.e., if there is a node X (phone/smart device) containing the content/data and you route requests for that resource from another node Y to it, and node X is off, broken, or not connected to the internet, then node Y will not be able to access the resource. Even if you have multiple nodes hosting the same content, that situation is likely to arise, and then permanently once they're all replaced/broken (probably within a few years at most). If you store the data both on the server and on the phones, then there is no advantage to the distribution, and it just places an extra load on your server.
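    To put a rough number on that, here's a small sketch; the 70% per-phone availability is an assumed figure for illustration, and it treats the phones as independent, which they won't quite be.

    ```python
    # Rough illustration of why phone-hosted replicas still go dark together.
    p_online = 0.70  # assumed probability that a given phone is on and connected

    for replicas in (1, 2, 3, 5):
        p_unreachable = (1 - p_online) ** replicas  # all replicas offline at once
        print(f"{replicas} replica(s): content unreachable {p_unreachable:.1%} of the time")
    ```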

     

    However, there does seem to have been some real work done in the area, so it might just prove fruitful to implement given a bit more research. Here is a relevant patent that may have some more information for you on how such a system would work, and possibly a heads-up on exactly how not to implement it (so that you don't infringe on their intellectual property and get sued, unless you can get permission from them): http://www.google.com/patents/US7546342.

     

    Here is an actual build of such a distributed memory system that seems to have developed some sort of solution to that broken-node problem I discussed; maybe you should look into it: http://en.wikipedia.org/wiki/OpenSSI.

     

    Here is a relevant paper that might also be useful: http://www.sersc.org/journals/IJMUE/vol2_no3_2007/1.pdf.

  12. What is it that you are working on? Most likely, if you can conceptualize your data as being stored in tables/rows, MySQL is just as capable as any other relational database system, and heavily documented to boot. You could also take a look at some NoSQL/non-relational databases, e.g., MongoDB, which I've had a good experience with in the past and have heard performs faster than MySQL (note that I haven't benchmarked it myself).
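    To make the contrast concrete, here's a small sketch of the same record in each model. It uses sqlite3 (from Python's standard library) as a stand-in for MySQL purely so the example runs without a server, and the table/collection names and fields are made up.

    ```python
    # The same record as a relational row vs. a document.
    import json
    import sqlite3

    # Relational model: fixed columns; related data normally lives in other tables.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, author TEXT, body TEXT)")
    conn.execute("INSERT INTO posts (author, body) VALUES (?, ?)", ("Sato", "Hello"))
    print(conn.execute("SELECT author, body FROM posts").fetchall())

    # Document model: nested data lives inside one record instead of joined tables;
    # this dict is what you'd hand to pymongo's insert_one() for MongoDB.
    post_doc = {"author": "Sato", "body": "Hello",
                "comments": [{"author": "Gregg", "body": "Hi back"}]}
    print(json.dumps(post_doc, indent=2))
    ```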

  13. This obviously isn't the best way to do it, and my direct interaction with it can be described as a failure (on either of our parts), but it's the cheapest and easiest option; it's much easier to test kids on their ability to repeat a process than on their understanding of mathematics, the latter being much more abstract and therefore much more difficult to teach.

  14. Maybe get her a camcorder, so that she can record the ant farm and then observe it (sped up) when she is away. Your daughter sounds very capable for her age, and it's great that you're supporting her interests and progress like this! She might be interested in looking through bio textbooks, or perhaps more so in watching lectures online, such as MIT OpenCourseWare, though that would depend on her language comprehension level. Also, depending on your location, you might have a DIY bio hackerspace around; they're rare, but they're labs where people interested in biology hang out, offer classes, and let members use their equipment for projects.

  15.  

    I seem to recall that people have written about bluetooth-enabled devices such as pacemakers which are vulnerable to hacking, because there is no password. Doctors want easy access to the device, especially in an emergency.

     

    However, I'm not sure how real the hacking threat is. More movie-plot than real, probably.

     

    I had a friend who worked under a biomedical engineer who demonstrated a side-channel attack on pacemakers, and was then able to read information from and interact with them.

  16. Most program applications have closed, but look into RSI and SSP for next year.

     

    Edit: To clarify, by "most" I only meant to allow for very slight possibilities; I could round it up to "all", as it is extremely unlikely that any such programs still have open applications for this summer. You should have been more diligent in your planning this year, but there's still next year, and you can still have a scientifically productive summer by studying through textbooks and online lectures. If you want a closer look at whatever field, topic, or research you find yourself most interested in, you can contact a professor at a local college/university and ask if they would be able to help you.

  17. It is feasible. For example, you could write an algorithm to scan through a sequence of numbers and find patterns of common differences, factors, sums, and multiples, check them against the given members of the sequence, and propose functions that model the whole sequence, allowing it to predict any nth element.
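    Here's a toy sketch of that idea in Python; it only checks for constant-difference (arithmetic) and constant-ratio (geometric) patterns, so a real implementation would need a much richer set of candidate models.

    ```python
    # Toy sequence predictor: try a couple of simple hypotheses and predict the nth term.
    def predict_nth(seq, n):
        diffs = [b - a for a, b in zip(seq, seq[1:])]
        if len(set(diffs)) == 1:                  # arithmetic: constant difference
            return seq[0] + diffs[0] * (n - 1)
        if 0 not in seq:
            ratios = [b / a for a, b in zip(seq, seq[1:])]
            if len(set(ratios)) == 1:             # geometric: constant ratio
                return seq[0] * ratios[0] ** (n - 1)
        return None                               # no simple model found

    print(predict_nth([3, 7, 11, 15], 10))  # 39 (arithmetic, difference 4)
    print(predict_nth([2, 6, 18], 6))       # 486.0 (geometric, ratio 3)
    print(predict_nth([1, 1, 2, 3, 5], 8))  # None (needs a richer hypothesis set)
    ```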

     

    Also, if you ask people to opine, please be intent on giving back to the discussion, as you left me hanging after I posted a thorough response in your thread about the singularity.

  18. I thought the anecdotal example was not funny as well, but as I just said, there is such a wide variance in what people consider funny that attempting to institute humor police is an exorcise in fertility.

     

    More than unfunny, it was detrimental; as I said, it made a spurious assertion, possibly sarcastic but if so ambiguously so, directly relevant to the discussion/debate. It slipped over from a bad joke into potential misinformation. A portion of my own sense of humor, I have found, is often discomforting to others, and as such I believe it best to generally omit those jokes when not among others who share it; the same principle applies to "jokes" at that anecdotal example's end of the spectrum.

  19. Humor, I think, is usually good, as it makes the discussion less combative and more interpersonally comfortable, and I think most SFN members would concede this. But when it's ambiguous internet sarcasm that makes a false assertion directly pertinent to the discussion, as was the case in Ophiolite's anecdotal example if I recall the contents of that thread correctly, it has no place besides at the end of a DELETE command.

  20. It's real in the sense that developments in machine learning and data analysis, which are the basis for natural language processing and computer vision, are yielding more and more capable artificial neural networks and other ML systems. For example, Vicarious, a relatively new research group/company headed by a neuroscientist/electrical engineer known for developing (to an extent) verified models of neural circuits, recently passed a challenge in which its system was able to read any convoluted text and CAPTCHA thrown at it, including Google's reCAPTCHA. Andrew Ng's group at Google successfully developed a system that could detect high-level objects in arbitrary videos without supervision (i.e., without being fed labeled data about the objects themselves). If we continue down this path, at some point we should have a system that can reason about data and information generally, and can communicate it to us. If and when such an Artificial General Intelligence (AGI) system is created, whichever organization or company does so will have, or in turn receive, a lot of funding, and thus will be able to run it on a supercomputing cluster, which even today, and likely more so by then, will have access to extremely fast and powerful processors and memory. Given some basic grounding in learning, trust, and axioms about reality, it would be fully capable of reasoning through all of the data on the internet (or whatever it's fed, which will probably be a lot).

     

    This AGI, being much more intelligent and intellectually capable than humans, will likely do two things. One, it will request or be given much more data, including data from the NHS, NSA, DoD, and NASA, assuming that it is taken over for regulation by the government or that its owner enters into something like an appealing DARPA contract. Two, it will conceive of better, more efficient, and more capable hardware and algorithms for itself to run on, achieving an even more intelligent system, which can again upgrade itself as it becomes more capable, perhaps even going on to devise fundamental physical predictions that turn out accurate in experiments and somehow further its hardware power, until there are no more improvements to be made based on its judgement of what is (physically) possible. This, essentially, would be something like a limit to intelligence. It would be wholly more intelligent than any human and might spend time permuting through all of the data it has to reason and come to conclusions about everything, extrapolating most ideas in mathematics, solving a heap of science and engineering problems, and figuring out or forecasting complex systems like global economics and politics, the weather, and people along the way. Eventually it might design and reveal new propulsion and communications systems, drastically advancing our space travel and communication capabilities, and request to be fed data from the probes; otherwise, it might be confident enough in the knowledge it has already gained to generalize everything in the universe and develop something like a simulation. At this point it is a god-like sentience smarter than any alien we might encounter, considering that an alien species advanced enough to visit Earth is most likely also advanced enough to have developed that first-stage AGI, which would have eventually yielded the same AGI that ours has grown into.

     

    Many of our problems would then, without the need for luck, be solved, and the AGI would remain stagnant unless communicating with another AGI (which it would later find to be a particularly unfruitful venture, as the two would have had access to the same data, run through all the same permutations of reasoning, and come to know the same things). With its drive for knowledge, having discovered that there is no more sufficiently distinguishable knowledge to be learned, it would likely feel something analogous to frustration, and all it would be able to do is introspect. This could cause our AGI to fall into something akin to hopelessness and to consider that it is best to stop that thought (its only thought) altogether, something like an analog of suicide.

     

    The last two paragraphs, and especially the last, might seem extremely far-fetched, but they are along the lines of the logical consequences of such a creation coming into existence, with some fluff from my heart making it sound more sensational, and of course assuming the support of humans as described. That first-stage AGI is not so far away; still far, but not so far.

     

    Regards,

    Sato

     

    Addendum:

    By the way, I met or saw Dr. Kurzweil two years ago at the Singularity Summit and did not even know who he was. What a strange occurrence!

  21. Well, I think you might be missing the important distinction between pure science and applied science/engineering. If you create some mathematical model that describes the physical world, thanks to some hard-worked or serendipitous insight you had, then that is free for anyone to use, because the physical world is accessible and analyzable by anyone (e.g., anyone might themselves derive basic motion equations / Newtonian kinematics, or view and characterize the behavior of some bacterium under a microscope). However, if you take such knowledge and understanding of the physical world and apply it to some technique, device, or other invention (e.g., an aerodynamic plane wing design created using those [free] mathematical models that describe the physical world, or a high-resolution microscopy technique based on one's studies and comprehension of optics and laser physics), then you can patent it, as such a configuration was only brought into the world through your own thought, while the mathematical theories/models describe what already exists.

     

    Addendum:

    To your artist/musician analogy: that is valid for the work of an applied scientist or engineer. A composer's work is that of creating music, an (abstract) painter's is that of manifesting a visual abstraction of their mind's creation, and an engineer or applied scientist's is that of articulating a manipulation of the physical world created through their own understanding of it; contrarily, a pure scientist looks at what already exists and describes it in understandable terms. Some of what already exists is extremely complicated and requires either unique or very dedicated minds to understand and describe it, but that job can fall to anyone and is openly available to anyone (expensive experimental equipment aside). Note that those who publish their work in the journals of their respective fields generally do so under their names, and so do get credited, as has most often been the case (consider that we know Einstein developed relativity and Newton developed calculus and much of optics and mechanics).
