Artificial Consciousness Is Impossible


AIkonoklazt


10 hours ago, moreno7798 said:

As stated by Blake Lemoine, he was not having a conversation with just the text chatbot; he was accessing the system for generating chatbots, which, by his words, is a system that processes all of Google's data acquisition assets, including vision, audio, language, and all of the internet data assets available to Google. What do you make of Blake Lemoine?

If he was accessing other 'processes' then he was not dealing with LaMDA. 

If he has been giving information out about Google's inner workings I'm not surprised he had to leave, I'm sure he violated many agreements he made when signing up with them. But given what he believed about the AI, he did the right thing. I don't know anything more about him than that. 


On 8/15/2022 at 10:40 AM, Prometheus said:

If he was accessing other 'processes' then he was not dealing with LaMDA. 

If he has been giving information out about Google's inner workings I'm not surprised he had to leave, I'm sure he violated many agreements he made when signing up with them. But given what he believed about the AI, he did the right thing. I don't know anything more about him than that. 

I may be veering a little off topic here, but perhaps it would be beneficial to understand him a bit more, to have a more rounded opinion about why (or how) he came to the sentience conclusion. I found that the H3 podcast did an in-depth interview about his background. It's interesting.

 


  • 5 months later...
On 6/22/2022 at 1:17 AM, StringJunky said:

You are here to defend your thesis, and that means not to keep using your article as its own evidence. Your approach is getting annoying. The best explanation for consciousness I've seen involves sufficient system integration and sufficient complexity. What is 'sufficient'? We can't know until we can observe and quantify it. That is the nature of an emergent property.

(I left because people were starting to make personal accusations, and the recent hoopla surrounding LLMs reminded me of this forum so now I'm back, and speak of the devil- look at the latest replies mentioning that guy from Google...)

I wasn't using my article as evidence- I just don't want to reprint my own argument over and over, which is usually what happens when people don't bother reading the whole thing before responding. It happens a LOT, everywhere I go.

As I've already argued in my article, emergence via complexity doesn't cut it as an argument when you can compare a non-conscious system that's allegedly much more complex with something far less complex that is nevertheless conscious.

On 8/20/2022 at 9:44 PM, moreno7798 said:

I may be veering a little off topic here, but perhaps it would be beneficial to understand him a bit more, to have a more rounded opinion about why (or how) he came to the sentience conclusion. I found that the H3 podcast did an in-depth interview about his background. It's interesting.

 

I'm sure it's interesting to some people, but could you sum up exactly what made him reach the conclusion he did?

All LLMs (large language models) work via the same basic principle.

You ready?

The statistical chances of words and groups of words being next to each other.

"That's... it?" One may ask.

Yes. THAT'S IT.

Can you imagine the amount of exasperation from experts who deal with LLMs when they see people claiming anything more than that, much less sentience?
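To make that concrete, here is a minimal toy sketch (my own illustration, nothing to do with how any production model is engineered; real LLMs learn these statistics with neural networks over sub-word tokens, but the training objective is still next-token prediction):

import random
from collections import Counter, defaultdict

# Toy bigram model: pick the next word according to how often it followed the
# previous word in the training text. Counts in, counts out.
corpus = "the cat sat on the mat and the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    counts = following[prev]
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]  # sample by co-occurrence frequency

print(next_word("the"))  # e.g. 'cat' -- chosen from counts, not from meaning

Nothing in there consults meaning, facts, or a world model; it consults counts.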

I would post a link to Emily Bender's paper but I seriously doubt anyone here has the patience to read that short paper, because many apparently don't even have the patience to scroll through something much shorter, such as my article. But hey, here it is just for yucks: https://dl.acm.org/doi/epdf/10.1145/3442188.3445922

p.s. Something for laughs, and perhaps to illustrate my aforementioned principle. Try spotting exactly what's wrong with ChatGPT's reply in the attached screenshot, and exactly how it relates to what I said about the operating principle of LLMs:
 

2023-02-08 16_46_05-darthvader.png

On 8/15/2022 at 7:40 AM, Prometheus said:

If he was accessing other 'processes' then he was not dealing with Lamda. 

If he has been giving information out about Google's inner workings I'm not surprised he had to leave, I'm sure he violated many agreements he made when signing up with them. But given what he believed about the AI, he did the right thing. I don't know anything more about him than that. 

My personal Vegas wager is that he was tired of his job and figured, hey, if he was gonna leave anyway he might as well get some mileage out of the event. There is no way anyone who is familiar with LLMs would make such a claim (see my other reply above for the simple reason why); it's a comedy show, a cheap hoax for his own amusement, or both.


On 7/25/2022 at 1:26 PM, TheVat said:

Yes, good point, emergent qualities are inherently unpredictable. And that recognition of sentience will require a formidable sort of Turing Test, something that can tweeze out all the clever artifice that might manifest in a really high-grade simulation (or somehow discern what David Chalmers calls a P-Zed).

On 7/18/2022 at 5:52 AM, dimreepr said:

Indeed, how would we know?

I can't imagine a question that would not result in a stereotype; imagine if you were an AI and had to convince people that you were deserving of not being switched off...

There isn't a behavioral test that could demonstrate anything regarding a first-person-only phenomenon; the Chinese Room Argument showed that a long time ago, without me having to add anything on top of it, such as writing an article.

The only demonstration is that of identification of origin. Is the object an artifact (see dictionary definition) or not? Sure, if all evidence of manufacture is destroyed one would have no real evidence (assuming perfect physical imitation, as in even if you put it through an X-ray machine you can't tell from the pictures). However, that doesn't mean it's not an artifact. What something fundamentally is or isn't doesn't change with the availability of such identifying information.


Hey AIkonoklazt, I am concerned that you are assuming things to be constant when we don't actually know what many things you referenced are in truth, such as the human consciousness. Any entity that can think and learn from its mistakes could very well be able to develop a reality of consciousness, at least in the way that we are. Human DNA has many similarities to a programming language. Our cells seem to be mindless and don't do anything until they receive instruction from connected and folded strands of proteins in specific combinations. Our own emotions and feelings seem to be a mix of drugs, electricity, and programming. That being said I enjoyed reading your post.


1 hour ago, EmDriver said:

Hey AIkonoklazt, I am concerned that you are assuming things to be constant when we don't actually know what many things you referenced are in truth, such as the human consciousness. Any entity that can think and learn from its mistakes could very well be able to develop a reality of consciousness, at least in the way that we are. Human DNA has many similarities to a programming language. Our cells seem to be mindless and don't do anything until they receive instruction from connected and folded strands of proteins in specific combinations. Our own emotions and feelings seem to be a mix of drugs, electricity, and programming. That being said I enjoyed reading your post.

I like the bolded. 


23 hours ago, EmDriver said:

Any entity that can think and learn from its mistakes could very well be able to develop a reality of consciousness, at least in the way that we are. Human DNA has many similarities to a programming language.

That first quoted statement of yours is circular reasoning. You assume "think" and "learn." Machines don't "learn," and you have yet to expound or demonstrate machine "thinking":
 

====

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:

“For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word “learning,” we will simply adopt our technical definition of the class of programs that improve through experience.”

====
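To render that textbook example literally, here is a toy sketch of my own (the class and names are made up for illustration, not from the textbook or my article): a "learning system" purely in the technical sense that a measurable performance number improves with experience.

class TinyQueryStore:
    """Answers lookups; its hit rate 'improves through experience' as rows are added."""

    def __init__(self):
        self.rows = {}
        self.queries = 0
        self.hits = 0

    def update(self, key, value):
        # "Experience": a database update
        self.rows[key] = value

    def query(self, key):
        self.queries += 1
        if key in self.rows:
            self.hits += 1
            return self.rows[key]
        return None

    def performance(self):
        # The metric that "improves through experience"
        return self.hits / self.queries if self.queries else 0.0

store = TinyQueryStore()
store.query("a")            # miss
store.update("a", 1)        # gain "experience"
store.query("a")            # hit
print(store.performance())  # 0.5 -- up from 0.0 before the update

Nobody would say this object "learns" in the everyday sense of the word, yet it satisfies the technical definition exactly.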

What you stated regarding DNA is backed up by exactly nothing (you're arguing from assertion) AND contradicts scientific findings (bolded for emphasis):
====

DNA is not programming code. Genetic makeup only influences and does not determine behavior. DNA doesn’t function like machine code, either. DNA sequencing carries instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows their workings to be underdetermined, while programming code is functionally determinate in contrast (There’s no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do. See section discussing “Volition Rooms”) and heavily compartmentalized in comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn’t stand up to scientific observation.

====

You said you went through my post but it sure doesn't seem like it.

22 hours ago, StringJunky said:

I like the bolded. 

Sounds plausible but is false, kind of like a lot of the stuff LLMs spit out as query results, and pretty much the definition of a spurious argument.


1 hour ago, AIkonoklazt said:

That first quoted statement of yours is circular reasoning. You assume "think" and "learn." Machines don't "learn," and you have yet to expound or demonstrate machine "thinking":
 

====

AI textbooks readily admit that the “learning” in “machine learning” isn’t referring to learning in the usual sense of the word[8]:

“For example, a database system that allows users to update data entries would fit our definition of a learning system: it improves its performance at answering database queries based on the experience gained from database updates. Rather than worry about whether this type of activity falls under the usual informal conversational meaning of the word “learning,” we will simply adopt our technical definition of the class of programs that improve through experience.”

====

What you stated regarding DNA is backed up by exactly nothing (you're arguing from assertion) AND contradicts scientific findings (bolded for emphasis):
====

DNA is not programming code. Genetic makeup only influences and does not determine behavior. DNA doesn’t function like machine code, either. DNA sequencing carries instructions for a wide range of roles such as growth and reproduction, while the functional scope of machine code is comparatively limited. Observations suggest that every gene affects every complex trait to a degree not precisely known[17]. This shows their workings to be underdetermined, while programming code is functionally determinate in contrast (There’s no way for programmers to engineer behaviors, whether adaptive or “evolutionary,” without knowing what the program code is supposed to do. See section discussing “Volition Rooms”) and heavily compartmentalized in comparison (show me a large program in which every individual line of code influences ALL behavior). The DNA-programming parallel is a bad analogy that doesn’t stand up to scientific observation.

====

You said you went through my post but it sure doesn't seem like it.

Sounds plausible but is false, kind of like a lot of the stuff LLMs spit out as query results, and pretty much the definition of a spurious argument.

You are again making the cardinal mistake of referring to your own original thoughts as evidence.


On 2/18/2023 at 5:00 PM, StringJunky said:

You are again making the cardinal mistake of referring to your own original thoughts as evidence.

What are you even talking about? Do you expect me to retype everything that people just don't bother to actually read before responding to? It's not "evidence"; it's "argumentation that I've made but people ignored."

Stop accusing me of doing something that I didn't do. It's getting ANNOYING.

Make the darn argument if you've got one.

Stop with the stupid accusatory shenanigans.


I have another comment that I doubt you will like.

I have a soft spot for analogue systems.

Discussion so far, as so often these days, has concentrated on discrete systems.

Nature has no such requirements and uses both analogue and discrete systems as she feels appropriate.

 

The large majority of Man's early constructs have been inherently analogue. It is only comparatively recently that we have tried to fit everything into the discrete mould, often developed from analogue beginnings.

Both analogue and discrete approaches have pros and cons, but discrete ones have an inherent limitation not suffered, in principle, by analogue ones.

So it would not surprise me if one day artificial consciousness were generated from an analogue system.
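A rough sketch of one way to read that limitation (taking it to mean finite resolution, i.e. quantization; this is an illustrative assumption rather than a precise claim):

import math

# Any finite-resolution (discrete) representation of a continuous quantity carries
# an irreducible quantization error; adding bits shrinks it but never removes it,
# whereas an analogue quantity has, in principle, no such built-in floor.

def quantize(x, bits):
    levels = 2 ** bits
    return round(x * (levels - 1)) / (levels - 1)  # snap x in [0, 1] to the nearest level

signal = [0.5 + 0.5 * math.sin(2 * math.pi * t / 100) for t in range(100)]
for bits in (4, 8, 16):
    worst = max(abs(s - quantize(s, bits)) for s in signal)
    print(f"{bits:2d} bits -> worst-case quantization error ~ {worst:.6f}")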


4 hours ago, StringJunky said:

Having read this thread again, your assertion has been adequately dismantled. I'll ignore the insult... disconfirmation can do that to people. 

"Adequately dismantled" by exactly which words in this thread? Or just by your mere claim that it has been?

"Disconfirmation?" Wrong. Look up what the word means first before using it.

I used to be curious about what random people on the internet could come up with, but not anymore. It's better to be curious about what non-random people have to say, specifically several professors and researchers who agree with my argument, one of whom (a neuroscience researcher and professor) is helping me put together a peer-reviewed paper on the subject. It's been more than 20 years since I've written any papers, but it'll get done.

I'm also hardly the first person who takes my type of position- far from it. This other paper takes a different angle from mine to reach the same conclusion: https://www.sciencedirect.com/science/article/abs/pii/S1053810016301817


9 hours ago, StringJunky said:


Disconfirm (of a fact or argument) to suggest that a hypothesis is wrong or ill-formulated. (Collins)

Which "fact," and what "argument?"

It takes a bit more than some half-baked quips to refute an argument, unless you're talking about some random internet forum.

Arguing from assertion is what I'm seeing the most. On top of that, people aren't even addressing the points I've made in the post, only the title (very typical of short attention spans: look only at the title and argue against that).

I can see why I should only engage with professional academics- The venue filters out people who have no idea how to field an actual argument with their half-baked thoughts.

I'm going to wait for your next random handwave.


  • 1 month later...

Due to the utter inability of people to back up their own argumentation, I propose a handicap!

People can look up ANYTHING on the entire internet to support whatever they say (e.g. random conjectures, junk clickbait with dodgy support if any at all... anything is better than the zero that I'm seeing), while I am strictly limited to peer-reviewed academic research journals (i.e. not random non-reviewed / preprint stuff from places like arXiv).

I am being extremely generous. There are simply no excuses left for not backing up what you say, with anything, AT ALL.

This is from my search:

First paper:

Frontiers in Psychology, Sec. Cognitive Science, Volume 11, 2020 (published 05 January 2021)

Artificial Intelligence Is Stupid and Causal Reasoning Will Not Fix It - J. Mark Bishop
https://www.frontiersin.org/articles/10.3389/fpsyg.2020.513474/full

This is a very solid argument, starting at section 6, where he goes into Searle's CRA in a more detailed and nuanced way than I had. The comparison between the understanding of English and non-understanding of Chinese by the same person in the face of functional "as-if" (non)understanding is well-explained here.

Section 7 goes into how Gödelian arguments further disprove computationalism. It argues that mental procedures aren't containable in formal systems; what's cognitive isn't computational, which is Penrose's position.

Section 8 basically says this: If you subscribe to the idea that machines can possess phenomenal consciousness, then you MUST also subscribe to a form of panpsychism. If you do that, you abandon any and all positions on consciousness "emerging" from anything (no great loss since it's false anyway...)

 

Second paper:

Consciousness and Cognition, Volume 45, October 2016, Pages 210-225

Artificial consciousness and the consciousness-attention dissociation - H. H. Haladjian, C. Montemayor

https://www.sciencedirect.com/science/article/pii/S1053810016301817

This paper, full of references to scientific research, demonstrates the impossibility of implementing emotional "routines" that aren't mere simulations. It takes a different approach from the a priori methods of the other paper, looking at biological and evolutionary mechanisms. Before anyone says "who cares about emotions," here is a quote from the intro portion of the paper (emphasis mine):

Quote

Empathy and the intensity of emotions have not been considered as central when challenging AI. This is a puzzling situation, given the importance of phenomenal consciousness for empathy, moral standing, and moral behavior. A contribution of this paper is to improve this situation by taking the intrinsic moral value of consciousness as fundamental.

 

Well, that's it for now. Have at it, people!

p.s. You also need to state what the source(s) you're pointing to are talking about, and not just do a random link dump.


  • 5 months later...

I really expect at least one item from someone, anyone, after half a year, that backs up what they say in any way whatsoever.

All anyone had to do was look up Wikipedia to find that one reference to Chalmers' computational foundation argument.

I'd say that Wikipedia is even a bit slanted in this regard, listing Chalmers' "pro artificial consciousness" point and... nobody else's anything.

However, Chalmers' position ends up being nothing other than another variety of functionalism. Those random Wikipedia editors are pretty disappointing too.

 

In other developments:

  • Robert Marks, Distinguished Professor of Electrical & Computer Engineering at Baylor and director of The Walter Bradley Center for Natural & Artificial Intelligence, read my article, deemed it to be very good, and recommended that I submit it for reprint at the center's online publication. It has now been reprinted there in three parts.
  • Coincidentally, a few months after I originally wrote the article, the UN agency UNESCO banned AI legal personhood in its AI ethics recommendations, adopted by all 193 member states at the time. I wrote to Gabriela Ramos, Assistant Director-General for Social and Human Sciences of UNESCO. She agreed to forward my argumentation to members of her organization in support and defense of the policy. In my view, not only would AI legal personhood be unethical, it would be flat out immoral (see the section "Some implications with the impossibility of artificial consciousness" of my article). There have already been legal arguments questioning AI legal personhood, one of which is this one: https://www.cambridge.org/core/journals/international-and-comparative-law-quarterly/article/artificial-intelligence-and-the-limits-of-legal-personality/1859C6E12F75046309C60C150AB31A29
  • There is still a lack of multilateral public discussion and debate. The writer of a newspaper article agreed to talk to his editor to see whether it's possible to set up a written philosophical debate between me and some of the field experts named in the article. The usual response I get from these attempts is that people don't have the time, but that won't stop me from trying. Many people from AI-related fields have expressed similar frustrations about how the current wave of hype is distorting perception of various issues.

Edit: See item 68 (text bolded by me):
https://unesdoc.unesco.org/ark:/48223/pf0000381137
 

Quote

68. Member States should develop, review and adapt, as appropriate, regulatory frameworks to achieve accountability and responsibility for the content and outcomes of AI systems at the different phases of their life cycle. Member States should, where necessary, introduce liability frameworks or clarify the interpretation of existing frameworks to ensure the attribution of accountability for the outcomes and the functioning of AI systems. Furthermore, when developing regulatory frameworks, Member States should, in particular, take into account that ultimate responsibility and accountability must always lie with natural or legal persons and that AI systems should not be given legal personality themselves. To ensure this, such regulatory frameworks should be consistent with the principle of human oversight and establish a comprehensive approach focused on AI actors and the technological processes involved across the different stages of the AI system life cycle.

 


14 hours ago, AIkonoklazt said:

I really expect at least one item from someone, anyone, after half a year, that backs up what they say in any way whatsoever.


Moderator Note

This argument is clearly made in bad faith as an attempt to dismiss what others have said. I suggest you reread the whole thread before claiming nobody has given you any support for their arguments. 

 

4 hours ago, Phi for All said:

Moderator Note

This argument is clearly made in bad faith as an attempt to dismiss what others have said. I suggest you reread the whole thread before claiming nobody has given you any support for their arguments. 

 

I'm quoting StringJunky:
 

Quote

Disconfirm (of a fact or argument) to suggest that a hypothesis is wrong or ill-formulated. (Collins)

Where is the "disconfirmation"? Was he using the English definition of the term?

Is he talking about conclusive evidence of any kind?

https://www.wordnik.com/words/disconfirmation

To disconfirm isn't merely to "suggest," as StringJunky said. People were arguing by assertion.


I'm inclined to buy into the idea that artificial consciousness is not going to happen.  I am interested in a related issue...

Assuming it was possible to create a synthetic model of the brain, using the same kind of architecture, modeling what cells do with electronics in terms of forming connections and exchanging signals, and letting it learn simply by experiencing data, could that be conscious?  Or, does consciousness depend on something first being alive?  (This might actually be part of the "solution"--perhaps the same kind of physical complexity that is required for something to be alive is a prerequisite for something to be complex enough to be conscious...the first is one of the parts you have to have to have the other and even if you could isolate the "consciousness" part, it won't work without the "alive" part.  I hope that makes sense.)

What are your thoughts on this?  Thanks.


On 5/29/2022 at 2:01 AM, AIkonoklazt said:

Artificial Consciousness Is Impossible

Just curious, can you refer to a scientific law or theorem that makes artificial consciousness impossible? Examples, analogies to illustrate my question:

1: Due to Turing's proof, it is an established fact in theoretical computer science that it's absolutely impossible to create a general algorithm that solves the halting problem for all possible program-input pairs.

2: According to the laws of thermodynamics, it's impossible to cool a system to absolute zero or below.

In your opinion, is there an equivalent statement regarding the impossibility of artificial consciousness? 


1 hour ago, Ghideon said:

Just curious, can you refer to a scientific law or theorem that makes artificial consciousness impossible? Examples, analogies to illustrate my question:

1: Due to Turing's proof, it is an established fact in theoretical computer science that it's absolutely impossible to create a general algorithm that solves the halting problem for all possible program-input pairs.

2: According to the laws of thermodynamics, it's impossible to cool a system to absolute zero or below.

In your opinion, is there an equivalent statement regarding the impossibility of artificial consciousness? 

Good question. +1


3 hours ago, Mgellis said:

I'm inclined to buy into the idea that artificial consciousness is not going to happen.  I am interested in a related issue...

Assuming it was possible to create a synthetic model of the brain, using the same kind of architecture, modeling what cells do with electronics in terms of forming connections and exchanging signals, and letting it learn simply by experiencing data, could that be conscious?  Or, does consciousness depend on something first being alive?  (This might actually be part of the "solution"--perhaps the same kind of physical complexity that is required for something to be alive is a prerequisite for something to be complex enough to be conscious...the first is one of the parts you have to have to have the other and even if you could isolate the "consciousness" part, it won't work without the "alive" part.  I hope that makes sense.)

What are your thoughts on this?  Thanks.

First, consciousness has two necessary and sufficient conditions per my article: intentionality and qualia.

Let's start with intentionality. I've given two demonstrations in the article of the necessary lack of intentionality in machines. When there is only payload and sequence (the two together compose the very basic mechanism of an algorithm), there isn't any intentionality to speak of. That is, there isn't a referent subject. What is a conscious thought without even a subject? The pseudocode programming example in the article further reinforced this concept.
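Here is a quick sketch in the spirit of that point (an assumed illustration of my own, not the article's actual pseudocode): rename every symbol and the program's operation is identical, because none of the symbols ever functioned as referents for the machine. There is only payload and sequence.

# A "question answerer": fetch a payload value, then return it (the sequence).
lookup = {"capital_of_france": "Paris"}

def answer(key):
    return lookup.get(key, "unknown")

print(answer("capital_of_france"))  # Paris

# Same sequence over meaningless labels; behavior is structurally identical.
lookup2 = {"x9Q": "zzz"}

def answer2(key):
    return lookup2.get(key, "unknown")

print(answer2("x9Q"))  # zzz

Where, in either version, is the subject that a "thought" would be about?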

Main idea: There isn't a conscious thought without a referent. I recently saw some random non-peer-reviewed paper on arXiv (ugh...) that claims otherwise, but the author was blatantly relying on behaviorism (i.e. "if it quacks like a duck...").

We would get nowhere by going to the "end" first and looking at the end symptom of any kind (functionalism / behaviorism). Real progress starts by looking at the necessary conditions.

Want concrete examples? Look at what LLMs like GPTs do while they "hallucinate." The term "hallucination" in large language models such as ChatGPT / Claude / Bard is a complete abuse of the term; the "hallucination" is CORRECT OUTPUT within the programming. If you describe such a process as a "hallucination," then you'd have to peg all LLM output as "hallucination." LLMs do not operate on "meaning." Mathematical proximity is NOT "meaning."

Seen from that angle, every "machine misbehavior" is actually "correct programmed behavior." Garbage in, garbage out. You programmed P so that behavior X occurs.
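Continuing the toy bigram sketch from my earlier post (again my own illustration, not any real LLM): the generation loop below has no branch anywhere that consults facts or meaning. Its only criterion is statistical adjacency, so a blended, false-but-fluent output is every bit as "correct," program-wise, as a true one.

import random
from collections import Counter, defaultdict

# Both training sentences are true; one blend the model can produce is not.
corpus = ("albert einstein won the nobel prize in physics "
          "marie curie won the nobel prize in chemistry").split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, max_len=6):
    out = [word]
    for _ in range(max_len):
        counts = following.get(out[-1])
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(random.choices(words, weights=weights)[0])  # frequency only
    return " ".join(out)

print(generate("einstein"))  # may print "...nobel prize in chemistry": false, yet exactly what was programmed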

 

2 hours ago, Ghideon said:

Just curious, can you refer to a scientific law or theorem that makes artificial consciousness impossible? Examples, analogies to illustrate my question:

1: Due to Turing's proof, it is an established fact in theoretical computer science that it's absolutely impossible to create a general algorithm that solves the halting problem for all possible program-input pairs.

2: According to the laws of thermodynamics, it's impossible to cool a system to absolute zero or below.

In your opinion, is there an equivalent statement regarding the impossibility of artificial consciousness? 

There is more than one, but I'll start with the simplest one first.

You can't have programming without programming. A "machine that does things on its own and thinks on its own" is a contradiction in terms, and thus a "conscious machine" is a contradiction in terms. What's an "instruction without an instruction"? There was an entire section about this in the article ("Volition Rooms — Machines can only appear to possess intrinsic impetus").

That's the law of non-contradiction.

The second one is the principle of underdetermination of scientific theory. This one is very involved if we go into fine details (I tried to make a short run at it in the section "Functionalist objections (My response: They fail to account for underdetermination)"). In short, there can be no such thing as a "complete model." You may have heard of various sayings centered around this general idea.

Take the example of satellite navigation. If you just look at its practical success and usefulness, you may think "I have discovered definite laws surrounding orbiting entities and everything acting upon and within them," because the satellite doesn't deviate from course. Yet the navigation can be done without invoking relativistic effects. You think you're going to "build a pseudobrain" and get anywhere even close? Then which arbitrary stopping point of practical "close enough" are you using? Does everything that you don't see simply not count? There is absolutely zero assurance from outward symptoms alone. You could have a theoretical AGI that performs every task you throw at it, and it still wouldn't have to be conscious (which points to other issues as well...).

Those are the two off the top of my head.

1 hour ago, Endy0816 said:

People regularly hallucinate, so how does AI differ there?

BeFunky-collage-28.jpg

 

I feel that, as we're typically asking them to write a mile a minute, there's an understandable tradeoff taking place.

They are referring to different things.

Machines don't have psychological conditions.

The term "hallucinations" in LLMs only means "undesirable output." It's a very bad usage of the term, as I've pointed out earlier. https://www.techtarget.com/WhatIs/definition/AI-hallucination

p.s. Optical illusions and hallucinations refer to completely different things.

