Posts posted by PoetheProgrammer

  1. 3 hours ago, swansont said:

    Considered by whom? (citation needed)

    It’s such a common relation that I’m surprised you’ve never heard of it.
    http://feature.politicalresearch.org/whats-the-matter-with-secularism

    https://www.vice.com/en/article/3k7jx8/too-many-atheists-are-veering-dangerously-toward-the-alt-right

    https://www.rysec.org/event/atheism-and-the-alt-right/

    Namely, it’s a pretty straightforward path from the New Atheist Movement of the 2000s -> “Intellectual Atheism” like Sam Harris -> the Alt-Right.

     

    1 hour ago, John Cuthber said:

    From what I have seen, "Alt Right" at least pretend to be Christians.

    They claim to be theists- they go to church and say that Trump (or whoever) is God's chosen president.

    I’d argue that is just American conservatives (the line is frankly blurry regardless), while the Alt-Right tends to reject the religious aspects, or at least did initially.

     

    With that said, most atheists are still likely to be left-leaning; it’s just not as clear-cut as it was one or two decades ago.

  2. 2 hours ago, ScienceNostalgia101 said:

    I still suspect it's not just Trump voters who'd condemn atheists for being left-wing, but to a lesser extent centrists and independents as well; you'll note that Joe Biden seems far more in favour of religion than, let's say, Bernie Sanders. (Glad as I am Biden won instead of Trump.)

    Not sure this is true anymore, as the new atheist movement is considered a predecessor to the alt-right. I don’t think religion is a good indicator of political leanings in the present day, and it only ever was one because the USSR was officially an atheist state.

  3. Yes, sorry if I wasn’t clear initially. You asked for guidance on building this LSTM, but that’s a big ask, and I was advising you to implement smaller networks on your own to build the knowledge and intuition required to implement this paper. An LSTM is effectively four networks, so building a CNN is definitely a first step regardless. In the process you may find that a simpler network (or a combination of them) will suffice for the problem you are trying to solve.

    In general, if you want help implementing something, it needs to be smaller than a research paper. At least take a first pass at it and come back when you run into trouble or get stumped. As it is, it reads like asking someone to do quite a bit of work for you rather than asking for help.

  4. I’d say there is, or should be, a distinction between sacrificial suicide and other kinds. If you are killing yourself to prevent information from getting out, or even to escape a gruesome torture in which you’d likely die anyway, then that may be morally correct. If you are down on your luck and take your own life, it’s definitely less so, and we should actively prevent this when possible.

    Of course, you could argue that some people’s lives are gruesome just by virtue of being alive, such as people dealing with great amounts of pain or people who have gotten a bum rap all around since birth, and this is the reason Ethics is the only class I ever dropped.

  5. LSTMs are a couple of years away from cutting edge. This is a big ask, but let me ask: have you implemented a convolutional neural network (CNN) before? An LSTM is basically a recurrent net whose cell is built from four small networks (that’s a simplification, please don’t take it too literally): one for input, one each for short- and long-term memory, and finally an output network (see the sketch below). As I said, this is a complex network, so if you haven’t built a CNN you need to learn how to do that before scaling up.
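
    To make the four-networks point concrete, here is a minimal sketch of a single LSTM cell step in NumPy. All names, shapes, and weights are illustrative, not taken from any particular paper or library:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, b):
        """One time step of an LSTM cell; W stacks all four gate weights."""
        H = h_prev.size
        z = W @ np.concatenate([x, h_prev]) + b  # all four gates at once
        f = sigmoid(z[:H])       # forget gate: what to drop from long-term state
        i = sigmoid(z[H:2*H])    # input gate: how much new info to admit
        g = np.tanh(z[2*H:3*H])  # candidate values for the cell state
        o = sigmoid(z[3*H:])     # output gate: what to expose as output
        c = f * c_prev + i * g   # long-term memory (cell state)
        h = o * np.tanh(c)       # short-term memory (hidden state)
        return h, c

    # Toy usage with random weights: 3 inputs, hidden size 4, five time steps.
    rng = np.random.default_rng(0)
    I, H = 3, 4
    W, b = rng.normal(scale=0.1, size=(4*H, I+H)), np.zeros(4*H)
    h, c = np.zeros(H), np.zeros(H)
    for x in rng.normal(size=(5, I)):
        h, c = lstm_step(x, h, c, W, b)
    print(h)
    ```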

    Also, what is this for? LSTMs tend to be best for time-series analysis (stock prices, sales data, etc.), so there may be a simpler network you could use for your problem.

  6. IIRC (this may not be true, but it is for most dynamic console commands) top uses ncurses, which is a TUI library. It’s fairly easy to do this with ncurses, though my only experience with it is in C, not Java; you may want to see if there are bindings for that. The old-school way is to simply redraw the entire console with only the characters you want changed. Normally you’d check how tall the current console is and ‘\n’ your way to a clear screen before redrawing, but you can use the clear command if you want (most languages have a system library that lets you call commands programmatically).
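
    A rough sketch of the clear-and-redraw approach (shown here in Python rather than Java; shelling out to the system clear command is the only platform-specific part):

    ```python
    import os
    import time

    def redraw(lines):
        # Old-school approach: wipe the screen, then repaint everything.
        os.system("cls" if os.name == "nt" else "clear")
        print("\n".join(lines))

    # Toy "top"-like loop that repaints once per second.
    for tick in range(5):
        redraw([f"tick: {tick}", "press Ctrl+C to quit"])
        time.sleep(1)
    ```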

  7. I’m not a physicist, but a simple google search claims that antimatter is believed (though not yet proven) to interact gravitationally the same as normal matter. Normally when a theory like this pops up, my first thought is, “surely someone with more expertise has thought of this, and there’s a good reason the theory isn’t widespread.” That isn’t always the case, and people thinking like you are the reason new theories sometimes do come out of left field, but it’s worth trying to disprove yourself first anyway.

    https://en.wikipedia.org/wiki/Gravitational_interaction_of_antimatter


    While the consensus among physicists is that gravity will attract both matter and antimatter at the same rate that matter attracts matter, there is a strong desire to confirm this experimentally. That said, simple algebra shows that the two positive-energy photons produced by the electron/positron annihilations frequently observed in nature are extremely strong evidence that antimatter has positive mass and thus would act like regular matter under gravity.
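
    To spell out that algebra (a sketch, assuming the pair annihilates roughly at rest so momentum terms can be ignored):

    $$ e^- + e^+ \to \gamma + \gamma, \qquad m_{e^-}c^2 + m_{e^+}c^2 = 2E_\gamma $$

    Each photon carries $E_\gamma \approx 511\ \text{keV} > 0$, so

    $$ m_{e^+} = \frac{2E_\gamma}{c^2} - m_{e^-} = m_{e^-} > 0, $$

    i.e. the positron’s mass comes out positive and equal to the electron’s.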

  8. 8 hours ago, Ghideon said:

    I agree. I think my question was not clear; I meant more a chain of dialog contexts* spanning several turns. Assuming parsing user input and calling backend bot logic is taken care of, what is a good way to conclude that the user is discussing the same topic or has moved on to a new topic? Example: the user has added items to the basket and wishes to pay. The dialogue moves on to handle checkout. In checkout it may be more likely that the user will ask about deliveries than about items in stock, and the bot may learn that to allow for better predictions. I have used similar things in some frameworks, but the implementation was a black box. As Zak seems to be starting more from scratch, your opinion on implementations could be interesting. I also note that this topic is huge and the research is ongoing. Prototypes I did last year are probably obsolete by now.

    I would say that, if your goal is domain-specific, as in the sales example we’re rolling with, you would need some hard-coded primitives/nouns that you can push onto a “topic stack.” By that I mean chains are fine for just handling responses, but you will need some logic that isn’t machine learning. Mainly, though, my point is that moving from discussing the items to checkout doesn’t actually change the topic; it pushes a new, derived topic onto the “stack,” which in reality would be managed by a higher-level chain than the markov chain.

    So perhaps you have a high-level chain that learns how users hop from discussing the item to the checkout, and as it sees you moving topics, this high-level model moves the “chatbot” to a chain trained on checkout topics (which would still have to delegate to some logic that e.g. checks the weight and/or freshness of the item and can then give proper answers to the questions one would have at checkout).

    As you say, this stuff is ongoing research, but you’ll definitely need to stack machine-learning techniques alongside old-school search to both keep track of the “topic stack” and answer within it correctly (rough sketch below). That’s how I’d go about solving the OP’s problem, but from what I’m gathering from his post, this is not a weekend project.
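
    A toy sketch of that “topic stack” idea; the keywords, topics, and class names are all made up for illustration:

    ```python
    from collections import defaultdict

    class TopicStack:
        def __init__(self):
            self.stack = ["browsing"]  # start in a base topic
            # high-level "chain": counts of observed topic-to-topic hops
            self.transitions = defaultdict(lambda: defaultdict(int))

        def observe(self, utterance):
            """Naive topic detection via hard-coded keyword primitives."""
            keywords = {"pay": "checkout", "checkout": "checkout",
                        "deliver": "delivery", "stock": "browsing"}
            for word, topic in keywords.items():
                if word in utterance.lower():
                    self.push(topic)
                    break
            return self.stack[-1]

        def push(self, topic):
            current = self.stack[-1]
            if topic != current:
                self.transitions[current][topic] += 1  # learn common hops
                self.stack.append(topic)               # derived topic on top

    stack = TopicStack()
    print(stack.observe("I want to pay for my basket"))  # -> checkout
    print(stack.observe("When will it be delivered?"))   # -> delivery
    ```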

  9. 8 hours ago, Ghideon said:

    I have a followup question on that. In your opinion, is that a viable approach in a more general application than zak100's example? Assume we have working input parsing, tokenising, stemming, intent and entity extraction etc. in place. To perform back-end calls (bot actions) and generate output we need to track the current state or context (I've seen different words used). Example: we have the following (sketchy) dialogue: (chatbot output in italics)

    "How much does 5 bananas cost?"
    "5 bananas costs 4€"
    "I want to buy them"
    "I have added 5 bananas to the basket"
    "Do you have apples in stock?"
    "Yes we have apples in stock"
    "What is the price of them?"
    "One apple costs 0.35€"


    The reasonable answer would be to present the price of apples, not the price of the bananas in the shopping basket. Would Markov Chain be useful to handle the user switching the context in the example above? Reason for asking; I have tried things related to this in more high level frameworks where the underlaying mechanism was not exposed. Your response to zak made me interested in possible implementations.
    (Note that the above is just a quick example; it could be reasonable for the bot to answer "I want to buy them" with "I did not understand that, please try something else" or "sorry, we do not have 'them' in stock")

    On its own, probably not. You could for sure use a markov chain to get “price n apples” from the phrase “price of 5 apples,” and it’d be trivial to allow the same or a related chain to keep track of a state (so as to disregard the bananas), but you would then need to delegate the actual price lookup to an inventory system, etc. A chain is just that: a chain of words, and it learns to hop to the most likely next word based on the previous N words. By the time you implemented a chain to extract that kind of data, it wouldn’t be a markov chain.

    You could use a chain like that to process words and e.g. extract nouns and things about them (“price n apples”), but if you want real conversation you’ll need an object hierarchy to process nouns and a system to learn all the things they do; e.g. “price of bad apple” needs to know apples go “bad” as in rot. To model proper human language, such a hierarchy would need to be fairly complex and self-building.

     

    Edit: it would need to be self-building if the intent isn’t to spend years manually writing out all these things.
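
    To illustrate the state-tracking half (separate from any markov chain), here is a hypothetical sketch that resolves “them” to the most recently mentioned item; the inventory and rules are made up:

    ```python
    ITEMS = {"apples": 0.35, "bananas": 0.80}  # assumed inventory: item -> price

    class DialogueState:
        def __init__(self):
            self.last_item = None  # most recently mentioned item

        def resolve(self, utterance):
            words = [w.strip("?!.,") for w in utterance.lower().split()]
            for item in ITEMS:
                if item in words:
                    self.last_item = item  # explicit mention wins
                    return item
            if {"them", "it"} & set(words):
                return self.last_item      # pronoun: fall back to context
            return None

    state = DialogueState()
    state.resolve("Do you have apples in stock?")
    item = state.resolve("What is the price of them?")
    print(item, ITEMS[item])  # -> apples 0.35
    ```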

  10. I haven’t used a markov chain in years, so I don’t know a good tutorial, but the first google result was a python library: https://github.com/jsvine/markovify
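
    Basic usage looks roughly like this (assuming you supply a plain-text corpus file; see the library’s README for the real details):

    ```python
    import markovify

    # Train a word-level chain on a plain-text corpus.
    with open("corpus.txt") as f:
        model = markovify.Text(f.read(), state_size=2)

    # Generate a few sentences; make_sentence() can return None when it
    # fails to build one that differs enough from the training text.
    for _ in range(3):
        print(model.make_sentence())
    ```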

    May I ask what you tried with regard to neural networks? There are lots of ANN architectures; something like a simple CNN will probably get you nowhere fast, but RNNs will far exceed the capabilities of a markov chain.

  11. It seems my response in your other chatbot thread (intents classification) led you to only half the correct solution. In the other thread you had a link to a data set which provided you with both input parameters and a series of responses that fit them. I suggested a markov chain as a much simpler way to map those input phrases to output phrases than an ANN, but you will still have to train it on that dataset (and likely format said data in a way the chain can learn to hop from the correct state to the next).

    EDIT

    If you’re actually looking to properly model intent (I assumed you were looking for homework help), then that is a topic of ongoing research. GPT-3 is little more than a statistical model that, although a lot more complex, is similar to a markov chain in that it maps words to the next based on probabilities. It’s just that GPT-3 has 175 billion parameters, while people tend to use markov chains with, like, 3. GPT does not understand intent any more than a markov chain does.

  12. The simple answer is making near-identical copies of itself, though simple proteins aren’t life, so life would be a group of proteins working in conjunction, able to generate near-exact copies of the entire protein chain. The atomic structure of that is likely near infinite in terms of how it can be made up (although probably small in terms of how such things can arbitrarily form in our universe without some intelligent designer, i.e. humans making proteins in a lab).

  13. If it’s homework, or something that absolutely requires a statistical model, I would recommend a markov chain. It’s about the simplest one that’ll work for you, as you just train it to jump from one state to another (autocorrect used to be a markov chain until a few years ago), and it can be implemented in a few dozen lines of code (sketch below). Neural nets are sexier but very difficult to get right with language unless you have a lot of time (the complexity compounds quickly).
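
    For scale, here is a minimal word-level markov chain; the training sentences are placeholders:

    ```python
    import random
    from collections import defaultdict

    def train(sentences, order=1):
        """Map each state (tuple of `order` words) to observed next words."""
        model = defaultdict(list)
        for sentence in sentences:
            words = sentence.split()
            for i in range(len(words) - order):
                model[tuple(words[i:i+order])].append(words[i+order])
        return model

    def generate(model, order=1, length=10):
        state = random.choice(list(model))  # random starting state
        out = list(state)
        for _ in range(length):
            options = model.get(tuple(out[-order:]))
            if not options:
                break                       # dead end: no known next hop
            out.append(random.choice(options))
        return " ".join(out)

    model = train(["the cat sat on the mat", "the dog sat on the rug"])
    print(generate(model))
    ```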

  14. FYI forum rules say we shouldn’t need to click links or watch videos to participate.

    I’m not sure a model would be best here, as that’ll get complex /fast/. However, it is possible to build a neural network that takes the input and maps it to the given outputs. How you would build such a model is a topic in and of itself, but you could start with just an RNN where you’ve mapped all English words to numbers and set up the input and output layers to take/output the binary representation of those numbers. You could do a CNN, but you’d need each layer to take in and output entire sentences, and unfortunately I think you would mostly get gibberish or overfitting. Neither is desired.

    It would be simpler to just build a simple “expert machine,” since you have the data set available and could trivially map inputs to the outputs provided within the set (sketch below). For instance, when a user inputs “Hi there,” you look up which pattern it matches in the data set and then return one of the responses from that same node, e.g., “good to see you again!”
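
    A hypothetical sketch of that lookup, with the pattern-to-responses table standing in for the real data set:

    ```python
    import random

    # Stand-in for the data set: each known pattern maps to its responses.
    RESPONSES = {
        "hi there": ["Good to see you again!", "Hello!"],
        "how are you": ["Doing well, thanks for asking."],
    }

    def reply(user_input):
        key = user_input.lower().strip("!?. ")
        options = RESPONSES.get(key)
        return random.choice(options) if options else "Sorry, I didn't get that."

    print(reply("Hi there"))  # -> one of the greetings for that pattern
    ```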

  15. I’d break it down into individual states (you almost certainly need to regardless) and pull some polling data as a start to get probabilities. You’ll also need data about each state’s views on the issues at hand in order to properly account for them; otherwise you’re just guessing. I would start by modeling an individual state (e.g. Texas) and building a backtester to verify results. Once you get that down, add a couple of other states to make sure you aren’t overfitting, and scale up from there (toy sketch below).
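
    A toy Monte Carlo version of the state-by-state idea; the win probabilities and electoral-vote counts are made-up placeholders, not polling data:

    ```python
    import random

    # Assumed inputs: per-state P(candidate wins) and electoral votes.
    STATES = {"Texas": (0.35, 40), "Florida": (0.50, 30), "Ohio": (0.45, 17)}

    def simulate(trials=10_000):
        total = sum(ev for _, ev in STATES.values())
        wins = 0
        for _ in range(trials):
            votes = sum(ev for p, ev in STATES.values() if random.random() < p)
            wins += votes > total / 2  # majority of these states' votes
        return wins / trials

    print(f"estimated win probability: {simulate():.2%}")
    ```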
