Quantum Prompting (statistics and some speculations with proof)


Popcorn Sutton

This conversation took place on April 14, 2012.

 

The original poster was interested in a statistical theory of grammar and posted the following, quoting one of his fellow members. He said:

 

 

Ok, first I'm going to read and comment on this post, then I will use it as discourse for a theoretical analysis.


And I am trying to bridge the gap as well as possible. Hopefully this discourse analysis will help to show how it will work.



We can't always find a link between sound and meaning. The sound "j" doesn't mean anything to me, but I still use it. Or maybe the meaning of a sound is actually correlated with the environments it falls in, and in the case of "j," we have no immediately accessible meaning because it falls in so many environments. Maybe the most probable meaning of "j" (in my idiolect) is "jar." It's statistics.

Quote: Evidence 2 (specific): The word "oink" sounds like a pig. Therefore, it must mean "pig" in some sense. Clearly it's related.
The word "oink" doesn't sound like a pig, but it is the word that is relevant to the sound that pigs make. So yes, it does mean "pig" in some sense; the two are clearly related.


There may be a relation there; the relativity is within a minimal environment, but there is definitely a correlation. These words are also correlated through relevance. The word "hissing" occurs relative to occurrences of "snake," and therefore "hissing" occurs within the environment of "snake" and vice versa. This is a pattern of language.
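The kind of co-occurrence counting described here (how often "hissing" falls within the environment of "snake") is easy to make concrete. Below is a minimal sketch in Python; the three-word window and the toy corpus are my own assumptions for illustration, not anything from the original post.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=3):
    """Count how often each unordered word pair falls within the
    same `window`-word environment."""
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[frozenset((w, tokens[j]))] += 1  # unordered pair
    return counts

# Hypothetical toy corpus: "snake" and "hissing" co-occur three
# times within a three-word window.
corpus = "the snake was hissing and the snake kept hissing".split()
counts = cooccurrence_counts(corpus, window=3)
print(counts[frozenset(("snake", "hissing"))])  # 3
```

The window size is the "environment" the poster keeps referring to; widening it makes more pairs count as correlated.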
Quote: Conclusion: I've explained how to approach the sound-meaning relationship. I've shown several clear specific examples of how sound and meaning are related. Therefore, my theory is right, we can explain all language in this way, and every other theory (Minimalism, for example) is wrong.

That's a fallacious overgeneralization. I see what you're getting at, and I have the theory in my head. Understand that the only reason I am responding, drawing out diagrams, making it clearer, and filling the gaps is for the benefit of humanity and, more importantly, my own benefit. I want my phone and my computer to literally be my best friend. And for humanity's sake, our children could grow up with a best-friend ear attachment that listens to everything and tells them the best choice, so they can make educated decisions, which will lead to better survivability. All for longevity of life.



That, my friend, will come later, in the analysis of the discourse.

Quote: (*Disclaimers: 1) This metaphor is not fallacious. This is not proof. This is an example of how your argument sounds to us. 2) Some researchers actually believe claims like these, so I'm not just making up a silly example-- note that my point is not about the content of the argument but about the style of the argument.)

Right, it's fallacious because it appeals to belief; it also appeals to popularity (to an extent), and it's biased.



Statistics of how many times something occurs, and how many times it occurs within the environment of other things.
Quote: so we can't say "well there just isn't evidence for that". If you're really saying that language is just a result of probabilities of people saying it... well... of course. The immediate future is completely predictable from an omniscient knowledge of the moment. That's not language-specific. It's also not realistically computable. And it doesn't explain why language is more interesting than, say, the way that flocks of birds fly as a group. That might just be statistics too. Why are you studying language, then, not birds?

I have studied birds, and there is a correlation between their flight pattern and fuel efficiency, so there are statistics there as well.

Because that is not my field of expertise, and I think this will benefit everyone. Really, I'm not a selfish or self-centered guy; if you hung out with me in real life, you would see that. I'm logical, to say the least.

Quote:
Someone else wrote: Your point seems to be just that the things people say are non-random. Well...yes. Is anyone disputing that? Has anyone ever disputed that?
And more specifically, XXXX, how does your theory actually explain language, beyond the fact that everything relates to everything? We're missing the link between the details and the "everything=everything" claim.

Take Einstein's popular theory of relativity and translate it to linguistics: you get relevance. Take quantum theory and translate it to linguistics: you get probabilities. Connect the particles (tagged with knowledge; P600) and you get a probabilistic link. The knowledge then becomes maximized.

OK, the next post will be an analysis of this post as discourse.

 

On to the next post.

 

 

Ok, take the previous post as the ONLY context the computer will have. Here are some examples of input and output.

Input: What do you mean by relevant?
Output: (Examining from the maximal environment down to the minimal, we get a match with "what do" [in "what does it take..."] and count that occurrence. Unfortunately, no other examples of "what do" can be found, so we search for "what," which is more probable, and notice that the most probable correlation is between the word "what" and "I," so we prompt "I" in the output.)

I

(Then we search for the maximal environment minus "what." What I notice is that in the input, the word "do" co-occurs with the word "mean" within a three-word environment, and it looks to be approximately the same in the discourse, so prompt the word "mean.")

I mean

(So far so good. We take away the "what" and the "do" from the maximal environment and search for "you mean by relevance," or at least type "you mean by relevance" into the find bar until we find the most maximal environment. "You" turns out to be the maximal environment, and in this case "you" occurs most frequently with "theory," so prompt the word "theory.")

I mean theory

(Ok, getting a bit ungrammatical, but nothing that a grammar check can't handle. There's also a link between "you," "theory," and "is," so prompt "is.")

I mean theory is

(On to the next maximal environment. I see that the word "mean" occurs within the same relative environment as "relationship," so I prompt "relationship.")

I mean theory is relationship

("By" does not occur at all, so on to "relevance," which occurs only twice, and both times it occurs equally often with the words "word" and "relative," so prompt "word" and "relative.")

I mean theory is relationship word relative

(This is before the grammar check. I assume the grammar is checked during the enumeration, so I may have skipped that part, but let's just do a grammar check now instead. To do this, we search by the most maximal environment and see which other words in the enumeration fall with them and where they fall relative to each other.)

Unfortunately, given only the previous post as discourse, I cannot find a concrete way to establish grammaticality from the enumeration. It seems that because "I" and "mean" don't fall within the same relative two-word environment, the procedure falls back to the closest environment, which, using this discourse, would change the order of "I" and "mean" to "mean" and "I." Given enough context, though, and using this discourse as relevant data for the computation, I believe that after the grammar check, the sentence would be something like this.

I mean, the theory is about word relationships.

And if it didn't prompt that perfectly reasonable grammatical response containing the probabilistically enumerated words, then we have the option of correcting it, which will then make that phrase more probable.

What are your thoughts?

I'm also considering deletion as probable, but only when there is grammatical excess that makes for a less probable response; if an enumerated word is required for a more probable response, then the procedure will have to add dummy words. If all the words in the original enumeration were used, it could be something like this.

I mean, the theory is about word relationships relative to each other.
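The back-off procedure walked through above (match the most maximal environment, fall back to a shorter one when there is no match, and prompt the word that most frequently co-occurs) can be sketched in a few lines of Python. This is my own reading of the steps, not the poster's code; the three-word window and the tie-breaking by first occurrence are assumptions, and there is no grammar check.

```python
from collections import Counter

def most_probable_neighbor(word, discourse, window=3):
    """Return the word that most often co-occurs with `word`
    within a `window`-word environment of the discourse, or
    None if `word` never occurs (forcing a further back-off)."""
    counts = Counter()
    for i, w in enumerate(discourse):
        if w != word:
            continue
        lo, hi = max(0, i - window), min(len(discourse), i + window + 1)
        counts.update(t for t in discourse[lo:hi] if t != word)
    return counts.most_common(1)[0][0] if counts else None

def prompt(input_words, discourse, window=3):
    """For each input word, prompt its most probable co-occurring
    word from the discourse; words with no occurrence are dropped,
    as with "by" in the walkthrough."""
    out = []
    for w in input_words:
        n = most_probable_neighbor(w, discourse, window)
        if n is not None:
            out.append(n)
    return out

# Hypothetical toy discourse for illustration.
discourse = "you mean mean theory".split()
print(most_probable_neighbor("you", discourse))  # mean
print(prompt(["relevant"], discourse))           # [] -- never occurs
```

Run over a real discourse, `prompt(["what", "do", "you", "mean", "by", "relevant"], discourse)` would produce an unordered enumeration like "I mean theory is relationship word relative," which is exactly where the missing grammar check would have to take over.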

 

 

I want to discuss this post, especially in relation to current knowledge, quantum entanglement, and the quantum mind hypothesis. It's been quite some time since this post, and obviously the poster has convinced a lot of people of his method. Whether it's useful or not is a completely different topic, though. Personally, I don't think it's going to be used, at least for a little while (maybe a couple hundred years), because implementing this method in code, in the very basic sense, means raising a child.

 

Regardless, I thought you guys might like to see.


Quantum prompting is different. The effect is that because a point of interest is entangled, it seems to emit light or radiate in some way that affects its surroundings, providing context based on proximity in space and time. All I can really do is describe it at this point and play with my toy program. What happens is that the poi (point of interest) sort of bubbles up in your nerves at the point where the hair meets the receiving end. It's a weird hypothesis, but I'd really like to explain it.

Here's some math to link it (remotely) with quantum entanglement.

 

When you flip two coins, because of entanglement, once you see that one is heads (that was a 50% chance), you know that the other one is tails (100%). Now think of having 10 neurons, each with different states (like heads, tails, sideways, upside down, etc.). When you learn that one neuron is heads (a 1/10 = 10% chance), you know that none of the rest are heads (100%), so the chance of any particular state among the remaining nine is now 1/9 ≈ 11.11%. This is like adding more output. Then we figure out the next one is tails; none of the others can be heads or tails (100%), so the chance of one being upside down is now 1/8 = 12.5%. The emergence looks like this (and so does the output):

 

[H, T, U]

 

If, however, the chances were different, so that two separate sequences of likelihood become connected, the output will change. Say you have a sequence of 10%, 11.11%, 12.5%, but then more states get added (i.e., 10 more); now the chances are 1/17 ≈ 5.9%, 1/16 ≈ 6.2%, 1/15 ≈ 6.7%, 1/14 ≈ 7.1%, and so on, all the way until one beats 12.5%; then the emerged unit becomes part of the output. So the output would look like this:

 

[H, T, U, C], but the emergence looks like this:

 

[H, T, U, P, J, K, Y, C]
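The arithmetic above can be checked with a short script. This is a numerical sketch only, assuming equally likely states that are eliminated one at a time; it says nothing about actual quantum entanglement.

```python
def elimination_chances(n_states, n_observed):
    """After each of `n_observed` observations of `n_states`
    equally likely states, the chance of any particular
    remaining state: 1/n, 1/(n-1), 1/(n-2), ..."""
    return [1 / (n_states - k) for k in range(n_observed)]

# 10 states, three observations: 1/10, 1/9, 1/8
chances = elimination_chances(10, 3)
print([round(c, 4) for c in chances])  # [0.1, 0.1111, 0.125]

# Now connect the second sequence: after the third observation,
# 10 more states join the 7 remaining ones, giving a pool of 17.
# Count how many further eliminations it takes before the chance
# of a single remaining state beats the earlier 12.5%.
k, pool = 0, 17
while 1 / (pool - k) <= 1 / 8:
    k += 1
print(k, round(1 / (pool - k), 3))  # 10 0.143
```

So the sequence 1/17, 1/16, 1/15, ... first exceeds 12.5% at 1/7 ≈ 14.3%, which is the point where, in the poster's terms, the emerged unit would join the output.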

