Leaderboard


Popular Content

Showing content with the highest reputation on 11/16/20 in all areas

  1. 2 points
    That could work, if you use the right primers, of course. There are multiple ways to approach this issue, all with different advantages and disadvantages. It also depends a bit on the subject. But given the details you have provided, I don't see why a PCR would not work. Edit: It would not work if you for some reason have a mixed sample, for instance.
  2. 1 point
  3. 1 point
    Generally speaking, we simply don't have time to investigate all our biases in anything close to a scientific manner, never mind a rigorous one. It's at best some degree of quality control sampling, then (hopefully) left to our good judgement gained from our experiences. Obviously guarding against confirmation bias is a good idea. Try to be accepting of people as much as possible...even if they're wearing a Toronto jersey....😄
  4. 1 point
    Hi, my friend, you are right: the author showed some stuff which was a joke. I discussed it with Ghideon as well. So we have to improve the response received from the Markov chain. You are saying that GPT-3 also does the same thing, and Ghideon is saying that we can improve its response. So let's try to improve the response from the Markov chain instead of focusing on NNs. <but you will still have to train it on that dataset> You mean that instead of using the War & Peace stuff, I have to focus on my domain-related messages? God bless you. I will develop some messages and then ask you people about their chaining. Zulfi.
  5. 1 point
    It seems my response in your other chatbot thread (intents classification) led you to only half the correct solution. In the other thread you had a link to a data set which provided you with both input parameters and a series of responses that fit them. I suggested a Markov chain as a much simpler way to map those input phrases to output phrases than an ANN, but you will still have to train it on that dataset (and likely format said data in a way the chain can learn to hop from the correct state to the next.) EDIT If you're actually looking to properly model intent (I assumed you were looking for homework help) then that is a topic of ongoing research. GPT-3 is little more than a statistical model that, although a lot more complex, is similar to a Markov chain in that it maps each word to the next based on probabilities. It's just that GPT-3 has 175 billion parameters while people tend to use Markov chains with, like, 3 parameters. GPT does not understand intent any more than a Markov chain does.
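    To make the "maps each word to the next based on probabilities" point concrete, here is a minimal word-level Markov chain sketch. The toy corpus and function names are made up for illustration; a real chatbot would train on the dataset from the other thread instead.

    ```python
    import random
    from collections import defaultdict

    def train_markov(corpus, order=1):
        """Build a transition table: state (tuple of words) -> list of observed next words."""
        chain = defaultdict(list)
        words = corpus.split()
        for i in range(len(words) - order):
            state = tuple(words[i:i + order])
            chain[state].append(words[i + order])
        return chain

    def generate(chain, start, max_words=20):
        """Walk the chain from a start state, sampling each next word by observed frequency."""
        state = start
        out = list(state)
        for _ in range(max_words):
            nexts = chain.get(state)
            if not nexts:
                break
            word = random.choice(nexts)  # duplicates in the list make frequent words likelier
            out.append(word)
            state = tuple(out[-len(state):])
        return " ".join(out)

    # Toy corpus standing in for the real training data:
    corpus = "the chatbot answers the user and the user thanks the chatbot"
    chain = train_markov(corpus)
    print(generate(chain, ("the",)))
    ```

    With `order=1` the chain has one-word states; GPT-3 differs mainly in scale and in conditioning on far longer contexts, not in the basic "predict the next token" idea.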
  6. 1 point
    It's up to you to decide, this is your fantasy.
  7. 1 point
    I’d think it depends on the species, however, Wikipedia says this in general: https://en.wikipedia.org/wiki/Millipede#Reproduction_and_growth
  8. 1 point
    Is this a joke? Anyway: Yes. What have you tried so far? How did you extract the intent of the user (I guess you did not yet)? What data was used to train the bot to provide answers? What did you use to capture the dynamic aspects of the response? The response to "What is the date today" is dynamic and handling that has similarities to a customer asking about the price of an item (as you stated in the opening post about groceries).
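    The questions above (extracting intent, then handling dynamic responses like the current date or a price lookup) can be sketched with a toy keyword-overlap classifier. The intent names, keyword sets, and handlers here are all hypothetical placeholders, not anything from the poster's actual bot:

    ```python
    from datetime import date

    # Hypothetical intents: each maps to a small keyword set.
    INTENTS = {
        "date": {"date", "today", "day"},
        "price": {"price", "cost", "much"},
    }

    def classify(message):
        """Pick the intent whose keywords overlap the message the most (None if no overlap)."""
        tokens = set(message.lower().split())
        best, score = None, 0
        for intent, keywords in INTENTS.items():
            hits = len(tokens & keywords)
            if hits > score:
                best, score = intent, hits
        return best

    def respond(message):
        """Static intents return canned text; dynamic ones compute the answer at reply time."""
        intent = classify(message)
        if intent == "date":
            return f"Today is {date.today().isoformat()}"  # recomputed on every request
        if intent == "price":
            return "Let me look up that price for you."    # would query a price table
        return "Sorry, I did not understand."

    print(respond("What is the date today"))
    ```

    The point is the split: "date" and "price" are classified the same way, but their responses come from a handler rather than a fixed string, which is the dynamic aspect asked about above.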
  9. 1 point
    I would rather have watched the video ...
  10. 1 point
    It's such a pity that Andromeda doesn't actually look like this to the naked eye.
  11. 1 point
    From any given distance to a black hole, the 'gravitational pull' does not change at all as a BH gets smaller. If our sun were to suddenly become a BH, there would be no change to its gravitational pull on us. Gravity gets stronger as you move closer to a center of mass. Imagine the strength of gravity on the surface of the sun; that is the strongest you can feel the sun's gravity. But if the sun turned into a BH, you could get much closer to its center of mass, since its 'surface' (the event horizon) is now far closer to its center of mass. THAT is where you would feel a stronger gravitational pull.
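    The argument above follows directly from Newton's inverse-square law, g = GM/r²: the pull depends only on the mass and your distance from it, not on whether the mass is a star or a black hole. A short numerical sketch (standard constants, illustrative distances):

    ```python
    # g = G * M / r^2: pull at a fixed distance depends only on mass and distance.
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # solar mass, kg
    AU = 1.496e11      # Earth-Sun distance, m
    R_SUN = 6.957e8    # solar radius, m

    def g(mass, r):
        """Newtonian gravitational acceleration at distance r from a point mass."""
        return G * mass / r**2

    # Same pull at Earth's orbit whether the Sun is a star or a same-mass BH:
    print(g(M_SUN, AU))           # ~0.0059 m/s^2 either way

    # At the old solar surface: the strongest pull the normal Sun can exert.
    print(g(M_SUN, R_SUN))        # ~274 m/s^2

    # A BH's 'surface' is far smaller, so you can get much closer, e.g. R_SUN/100:
    print(g(M_SUN, R_SUN / 100))  # inverse-square: 100^2 = 10,000x stronger
    ```

    Very near the event horizon Newtonian gravity stops being accurate and general relativity takes over, but the inverse-square scaling already captures why "closer to the center of mass" is what makes the pull stronger.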
  12. 0 points
    No. He wrote several, and jolly good ones too. But he wasn't the only one, not even the first. If you don't know what a word like "literally" means, don't use it.