Everything posted by CharonY
-
Why you have to be so careful accepting answers from AI
The scaling argument makes perfect sense, though I suspect there will be some nuance regarding which activities require the support of agriculture and which do not. I am guessing that in most cases it wouldn't be a yes/no answer, but rather a matter of scale. We do have evidence of very early crafting and arts, but more complex arts really could only develop once food wasn't the key limiting factor of survival, I would guess. But regarding wars, there are (oral) records of First Nations in North America. While some developed agriculture, others were largely dependent on hunting. I would suspect that the scope of such conflicts was a bit more limited, but it could be interesting to follow up. That being said, I suspect that it really depends on what we consider a war. If that is any large-scale aggression between communities, it has likely happened throughout our history (and that of our ancestors, considering that our chimpanzee cousins do it, too). Military specialization (e.g. making shields and building weapons specifically for use against humans) was also very prominent among First Nations, including hunting communities, as they had developed a highly sophisticated system to sustain themselves rather successfully (which is one of the explanations why some First Nations never developed large-scale agriculture).
-
Why you have to be so careful accepting answers from AI
There is one more thought on this, now that I think about it. I have been talking with researchers who have collaborations with China. What I found interesting is that it seems that in China, AI is intended to be used as a tool, and a lot of money is being put into operationalizing AI, e.g. for robotics or to solve very specific questions. Even in the educational sector, their implementation of AI seemed far more geared toward supporting learning (e.g. dedicated tools to reinforce training elements, rather than giving answers). Meanwhile, in the West AI is often framed as a thinking tool, with the ultimate goal of developing it into AGI. I found the perspective quite striking, and to me the Chinese approach seemed more grounded. Or at least I have an easier time wrapping my head around it without having layers of hype on top. I am curious, how do you see it? Edit: I should add that I am aware that the Chinese path could, at least in part, be the result of the government being afraid that AI could be a tool used against them, but to my mind it still presents itself as a more rational model, regardless of the underlying motivation.
-
Why you have to be so careful accepting answers from AI
Ah, I read "supporting" in the text as a form of prerequisite. My bad.
-
Why you have to be so careful accepting answers from AI
Intuitively I would have thought that language predates agriculture. There are societies that largely live from hunting and have developed fairly complex societies, though there are limits in community size and specialization, and in the associated forms of technology development, of course.
-
Why you have to be so careful accepting answers from AI
I think there are two elements to it. Historically, the development of abstract language was probably the biggest change in human history. Other developments, such as writing, had a huge impact and offloaded some of the effort of oral memorization, but those are still entirely human activities. Even where writing affected memorization, writing itself became a human activity. Here, by contrast, the activity is offloaded wholesale; there can be entire loops without any input from humans, and the role for humans keeps getting smaller. That, I think, is entirely new, and we really don't know what to do with it. The stated goal of AGI is basically to make human thinking obsolete. Nowhere in this scenario do I see what the place of humans would then be. Sometimes abundance or related ideas get thrown in, but those are more independent economic discussions, only peripherally related to AI.
-
Why you have to be so careful accepting answers from AI
No, it is both. Folks do have positive experiences, though at least in my neck of the woods it depends on who you talk to. For example, for those doing more teaching it is considered more of a pain. For certain types of researchers it is a good copywriter. For students it is the best thing ever, though the pain of learned incompetence will come much later. But the viral stuff comes from having AI infused into every electronic device, along with, loosely, the following talking points: Don't worry about cost and resource use; AGI will solve all our problems, so don't even think about regulating the system, the benefits will outshine all possible negatives. So really, don't think about regulating it. Also: here is your email/pdf, do you want me to summarize it? You really don't want to do all the work, now, do you? Look, it is just a harmless chatbot; don't think about what folks can use it for. After all, it is pretty much a fait accompli at this point, so there is really no use discussing ethical or other use anymore.

It came in fast, and while the companies seemed to stress ethical use at the beginning, it moved so fast in being integrated that there is little to no thought about the consequences on any level. We are in the midst of a great experiment where we are going to figure out, for the first time, what happens if we take an aspect that we often use as the defining factor of humanity and offload it to an external system for efficiency's sake. There have been cataclysmic developments in the past, such as the invention of writing and other physical record-keeping, but those happened over a long time frame. Now, the companies are pushing for a massive acceleration by having a popular product, so that folks get used to its use without thinking of consequences. The last technical development I can think of with similar impact was the combination of cell phones and social media, and that was still way slower than what we see here.

And still, we are only starting, probably way too late, to do something about the former. In my mind, and seeing the last few years of students, it is like giving free candy to everyone while not thinking about the incoming diabetes crisis. Edit: I also think the term "evangelize" is exactly right. And that worries me, too. Mixing religious fervor with something that is being embedded in almost all aspects of life is something I am skeptical about. And this is from someone who always had a deep love of tech and what it can do. But largely, I was thinking about it in this framework: AI is not sold to us as a tool, and certainly not as a precision tool. And in fact it is not used like that, either. It is being used to offload the process of thinking. It has been used to make folks feel less lonely. It fills emotional and intellectual gaps. I think where people are right is that at least some folks are not thinking about it as a tool. It has become an emotional and intellectual crutch.
-
Why you have to be so careful accepting answers from AI
Yes, but the framing is different. With a jackhammer, you are supposed to learn how to use it before doing so. And if you mess up, you often face somewhat immediate consequences. AI, by contrast, is marketed as something that spares you from thinking or learning; it will do it for you. Also, it is everywhere, normalizing even the stupidest interactions. I think I would be at least as annoyed if someone constantly shoved a jackhammer into my face and told me to use it for everything. My point is, perhaps, that it is being sold precisely as not being like any other tool.
-
Why you have to be so careful accepting answers from AI
Perhaps even worse. It is not only a stochastic parrot, it is a stochastic parrot in a mirror. It creates an illusion of something that is not really there but seems realistic enough that the user will project their own thoughts onto it. Their thoughts are then reinforced by what they consider to be external, but, as mentioned in the previous post, it is fundamentally mostly a conversation with yourself. This in itself is not necessarily bad, as it can help shape your arguments. But it falls apart if folks don't realize that, because of the way they are using it, it is not really an external agent; it is there to react to your prompts. I see it quite a bit with my students, who use it to gain confidence in their reasoning, but it fails to grasp the gaps in that reasoning, and very frequently results in overinterpretation and ultimately false conclusions. The utility of this tool unfortunately scales with expertise.
-
Why you have to be so careful accepting answers from AI
I will add that, especially with regard to journal articles, they rarely provide answers as such (except for very specific things). They add evidence of varying quality to discussions. How they then contribute to answering a question depends on the expertise of the person who uses the papers. And interestingly, in some fields with restrictive or at least well-defined knowledge frameworks (much of medicine and engineering, for example), AI will likely perform as well as or better than humans. Conversely, in other areas with significant gaps (much of cellular and molecular biology), the undocumented expertise of humans is what differentiates them from AI. I.e. someone working in the field is much better at evaluating the strength of presented evidence, often due to undocumented cues.

Finally, the single biggest issue I see is accountability. A reporter/journalist who keeps getting things wrong can easily be classified as incompetent in the field. Similarly, for those who write journal articles, folks in the community can get a sense of which groups are really good at delivering high-quality research, and who puts out everything that crosses their minds. For AI that doesn't work. Some models are better than others, but even if they are great in one area, they may suck in a different one. And each update can make them better in one area but break them in yet another. Ultimately, I think it boils down to how we trust anyone or anything. We can direct trust to individuals, as we can look at specific track records, hold them accountable, and/or directly interact with them. Humans are entities that we somewhat understand, if only by extrapolating from knowledge about ourselves. AIs are largely opaque; they might change at any given minute and are fully beholden to their owners, who can change the output at their leisure (with Musk's Grok being a prominent example). I trust Steve with vector modeling; he seems to understand it really well. I don't trust Steve with understanding cell lines. With AI, you have to extend the trust to the company and to the whole concept of how AI generates answers.
-
Why you have to be so careful accepting answers from AI
Why? The math department has Steve. If you need maths, you go to the basement and bring him coffee and treats. You just have to follow protocol. First you ask a question and Steve explains. Then you just stare at him. And he will explain again. Just slower. Then, very importantly, you have to look like a deer in headlights. He will sigh and then show you. I think he secretly likes that. He also likes sugary treats. Steve is a good guy and doesn't overheat like our computer. This is because Steve is in the basement and hides from the sun. And the outside. His wife said we shouldn't feed Steve after midnight. Steve might be a Mogwai. I prefer Steve over the computer.
-
Pseudo-oppositionist spoilers in autocracies
Troll/propagandist, there is little difference between the two nowadays. A tell is characterizing Navalny's anti-corruption actions as "going out of control". From the viewpoint of Putin that might be true, but for an observer, not so much.
-
The special relationship...
Would depend on the definition, but if you are thinking of the Byzantine Empire, it was IIRC largely conquered by the Ottomans. The Vatican was more of a state within that context, after the power center moved away from Rome. Not particularly, but the deference you sense is more something that is projected by Trump rather than the US population. He loves the idea of hierarchy and the notion that some folks are just born "better". I think among Europeans there is some hope that he would make some inroads into modifying Trump's behavior, though I think there is scant chance of that.
-
Your Brain: Perception Deception, PBS Nova (2023): S50(EP9)
I think a problem with that explanation is that it is mechanistically vague. The issue, of course, is that (at least when I read about it) the mechanisms themselves were not really known, and might still not be. A focus at that time was on the better-understood elements, such as the anatomy of connections, thus following the potential pathways of information. The attractive element is that in both structures there are areas that directly map the visual field, which makes objective tracking of visual areas feasible. The big issue is that this phenomenon reaches into the realm of subjectivity and consciousness, where it is much less understood how neural correlates create subjective awareness. From what I remember, the early focus was on connections that bypass V1, which creates an attractive anatomical model of these other circuits that add up to our total perception of things.

Going back to the idea of a tuning fork: from what I remember, I think this might be a bit of an overstatement, or at least it might require qualification. It might be a matter of how we define data, but even at the sensory layer, stimuli are heavily modulated, and the anatomical structure and connections themselves are doing a lot of filtering and signal modulation. In that model, it seems that the thalamus is an extension of that? I.e. the distinction would be more a matter of the amount of filtering, rather than filtered vs. unfiltered (as anything leaving the sensory layer would be processed somehow). Very, very vaguely, I think I had a discussion with a prof back in the day about the role of the thalamus as a signal relay, and about how the way the signals are distributed (I suspect that is what you might mean by tuning fork?) could affect how the signals are then subjectively perceived.

One could argue (and I am fully speculating here, based on vague decades-old memory) that some signals might carry motor-relevant information, but depending on how they are routed (the way I imagined it was a splitting of the signal across different pathways), some elements get suppressed in terms of conscious perception while others are heightened. The former could trigger motor responses by suppressing or delaying slow, conscious processing (e.g., if you need to dodge something). And subcortical structures such as the thalamus can themselves be tuned to send signals one way or another (e.g., if you are relaxed vs. in fight-or-flight mode). But again, this is really not even a student-level explanation, as I stopped reading in this field a long time ago, to some regret.
-
“The Star Mangled Spanner”
I quickly googled for a map and came up with this. Essentially, the lanes relying on the Suez Canal route are impacted. Previously, when the Panama Canal was blocked, the lanes through Suez intensified, and during heightened conflicts in the Middle East in the past (and present) the reverse happened. But it is less about transport through the Strait of Hormuz and more about shipping through the region (I think).
-
“The Star Mangled Spanner”
I don't think that it is meant as an alternative, rather a response to the global rerouting of lanes. This is all speculation, but the escalation likely also impacts the Suez Canal. From what I understand, much of the Asia-Europe-US routing goes through either Pajamas or the Snooze canal.
-
"With A Strange Device"
Intuitively, the absolute number is so small that I am not entirely sure whether rigorous applications of statistics really provide that much insight. At those levels I would (perhaps wrongly) assume that stochastic effects would dominate, even if the pool was much smaller.
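To illustrate the point about stochastic effects dominating at small absolute counts, here is a minimal sketch (my own illustration, not from the original post; the rate of 3 events per window is a hypothetical number). It draws repeatedly from a Poisson distribution with a small mean and shows how large the relative spread is, so any single small observed count carries little information.

```python
import math
import random

random.seed(42)

def poisson_draw(lam: float) -> int:
    """One Poisson-distributed draw via Knuth's multiplication method."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# Hypothetical scenario: the true rate is 3 events per observation window.
draws = [poisson_draw(3.0) for _ in range(10)]
print(draws)  # the spread is large relative to the mean of 3
```

The point of the sketch: with counts this small, individual observations routinely differ from the true rate by a factor of two or more, which is why formal statistics on a handful of events adds limited insight.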
-
Your Brain: Perception Deception, PBS Nova (2023): S50(EP9)
So I vaguely recall that maybe around 2000-ish I found some papers discussing the role of the thalamus in blindsight, and that because of what they found, they postulated different types of it. I am moderately sure that I put at least one of those on my to-read pile back in my office. Maybe I can find those papers online again (or maybe they have been overturned at this point).
-
Why is there a Great Divide between animal designs? Never read anything about this anywhere!
True, though with the exception of echinoderms, I think the symmetry is generally maintained in other groups, even as adults. But as with many phenotypic classification schemes, things might be weird.
-
Why is there a Great Divide between animal designs? Never read anything about this anywhere!
And existing designs can place a constraint on subsequent ones. This is how genetic relationships work; i.e. traits that are complex cannot be easily undone or reversed. But to specifically address questions like this: as exchemist points out, this is because the animals you are referring to all have a common ancestor which shared the same body plan. But I think you might be a bit confused about this point: you seem to assume a hierarchy here (above and below something), but this is an inaccurate way to see things. A cat is not above an ant, for example. Evolutionarily, everything that exists at a given time exists in parallel (to state the obvious) and is not hierarchically ordered. What you might be thinking of is how far back the lineages of ant and cat split, which would be about 700 million years ago. I.e. there was an ancestor 700 million years ago that split into different lineages which, over time, became what we now see as ant or cat.

So based on that, there is no sudden cut-off in that perspective. However, what has happened is that at some point animals with the body plan you mentioned evolved, and they split into further and further species, but they did not all come into existence at the same time. Tetrapoda (four-limbed vertebrates) evolved around 390 million years ago and are the ancestors of amphibians and amniotes, including dinosaurs as well as mammals. Now, if you go back to lineages that existed in parallel with or earlier than tetrapods, you will find many other designs. You mentioned insects, which are derived from arthropods, which go back about 540 million years. Cephalopods (the group to which the octopus belongs) developed in parallel around a similar time. All these groups belong to the bilateria, animals with a bilateral symmetry, whose first members are estimated to go back about 700 million years. So we have a long history of animals that are not four-limbed, but that have some basic symmetric body shape.

Besides bilateria, other lineages include porifera, ctenophora, cnidaria and placozoa. Those have a very different structure and include sponges, jellyfish, corals, comb jellies and so on. These are the weirdos that folks might not intuitively recognize as animals, and some are just blobs. So going back to the why: it is history and relatedness, but there is no sudden cut-off point as such, only a point where the body scheme first existed. If it still exists now, that just means that its descendants survived to this day.
-
Your Brain: Perception Deception, PBS Nova (2023): S50(EP9)
I have not seen that episode, but was it about blindsight? I.e., damage to the primary visual cortex that removes the ability to consciously perceive things while they are still processed unconsciously (e.g. in the form of avoiding objects or reflexively reacting to movement)? There have also been variations thereof, where patients for example cannot perceive color, but notice differences in wavelengths. I remember having read about that a long time ago during undergrad and being immensely fascinated by those studies. It was presented, IIRC, as a way in which the brain processes cues in parallel at various levels, rather than in neat, distinct areas. Though I am not sure whether additional studies have changed that view since then.
-
“The Star Mangled Spanner”
I have some dread about that. Even in the best-case scenario, it won't wash everything away. There will be a higher chance of some corrections, but the fissures are so large that I don't see any mending happening. OTOH, things could go sideways. And what does that tell you then? Or what syndromes are. Though to be fair, I think it is something else. Those who are MAGA and are willing to suffer seem able to endure it, as long as someone else they don't like suffers more. At this point, I have a very hard time trying to see them as victims.
-
How a Janet Jackson song crashed laptops for 9 years
Also, I think newer HDDs have implemented vibration detection and related safety features, which might have been absent in older systems.
-
"With A Strange Device"
Doesn't help that the FBI is actively purging competence from its ranks.
-
How a Janet Jackson song crashed laptops for 9 years
Now I wonder whether my laptop kept crashing while I was writing my thesis because I was accidentally cursing at the right frequency...
-
“The Star Mangled Spanner”
I am looking forward to discovery. This is the point where you should ask yourself: why do you know that? And it really boils down to two things. In this part here, you basically say that the media are untrustworthy. So you eliminate them as sources of information and supplant them with trust in Trump. I.e. you exchange trust in an information ecosystem (i.e. media) for trust in a person who, demonstrably, throughout his career has lied to further his personal goals. This is certainly a choice. A choice that abolishes accountability for those in power, a choice that selectively ignores information, a choice that weakens democratic systems (which rely on accountability). It is certainly not a choice that I would make. But perhaps I am just not understanding things properly, and perhaps we can explore this issue a bit more systematically.

Generally speaking, there are at least three key elements of trust. The first is goodwill or fairness. This is rooted in aspects of transparency showing that the party is acting with benevolence and following procedural justice. In short, it should signal that processes are open and oriented to the benefit of the people, rather than, for example, personal benefit. Media fall flat in some aspects, as they are a business. On the other hand, legacy media still have some transparency in editorial decisions, where they at least have to signal journalistic integrity. Some have failed in that aspect, to a large part due to failures of their owners (say, Bezos), so there are certainly deductions to be made. But there are also other information sources, such as academic ones. Here, transparency is mostly gained through processes like peer review, academic exchange, and providing specific sources for each and every claim. How does Trump's presidency stack up against that? Has it shown goodwill to the people over personal interest? Has it shown transparency and clarity in its actions?

Second, you need competence and integrity. This part demonstrates that one has the competence to deliver what is being promised. This includes technical ability as well as adherence to prior stated values. The values of journalistic integrity are well known, and while there is some faltering, senior journalists at least try to maintain their reputation, as it secures their livelihood. The same goes for academics, as a lack of integrity is usually the end of a career (unless they decide to go on the right-wing circuit). So how does the presidency stack up to that, especially in the face of documented lies and misdirections (even in court)?

And third, there is the aspect of accountability and governance. To build trust, you have to demonstrate that there is an accountability structure that will keep you in check, so that folks can trust your actions. How does that pan out, especially when Trump has repeatedly stated that he won't take accountability for any failures, only for successes? I think addressing these questions would provide some more insight into the motivation for trusting the government.

For that specific element, I wonder if you have sources. From what I read, there was an initial positive response when folks thought that there was a plan for sustained regime change, but since then I have only seen one poll where Iranian Americans were identified, showing broad opposition, mostly related to civilian casualties and uncertainty about goals (i.e. cutting through all three of the above elements). About a third somewhat or strongly supported it, which is about the same size as the group that strongly opposed it. I suspect a lot depends on whether they still have friends and relatives in the region.