
Computer Chess


Olorin


In 1996 World Champion Garry Kasparov won a six-game match against IBM's massively parallel "Deep Blue"; in the 1997 rematch he lost to an upgraded version. From that time on the World Computer Chess Championship (WCCC) became a battleground for computers alone, which could by then be considered invincible against human players. "Crafty", with a free GUI on the net, can be easily beaten... if you use the slider to dumb it down, give yourself a ludicrous time to move, accept the human privilege of ignoring your own loss on time, and take back tactical and strategic errors ad nauseam until the enemy king falls. Eventually an engine called "Stockfish" lost the WCCC to another called "Jonny", which employed a 2,400-core cluster to win, but the winner was disqualified on a protest by the losing team. Result: an 8-core limit was set for entrants.

This effectively hobbles the field, and possibly means that the reigning computer champion is no longer invincible in the way "Deep Blue" seemed. Should the current World Chess Champion challenge the WCCC winner with hope of victory? How could this be arranged? Presumably a sizable stake would be required, with great promise of financial gain, to media people at least. A resurgence of interest in chess is also likely, akin to the 1972 Fischer vs Spassky match in Reykjavik, when the Soviets lost the long-held crown. Any thoughts? Of extreme interest is this: WCCC entrants now rely on coding excellence to win, with spin-offs to other, similar areas of AI, again with great promise for our increasingly complex world. Analogous coding problems are not difficult to imagine. Hopefully, neural-net processors, Skynet and Terminators, and "Running Man" reality TV shows (the Stephen King book, written as Richard Bachman; the Arnold Schwarzenegger movie) remain fictitious.

 


A chess algorithm does not have to be AI at all; '90s chess algorithms were not AI at all. And I doubt modern chess programs are A.I. (even if the creators claim so in ads), because developers don't want to hand such advanced technology to everybody: if you release software on CD/DVD, or as a program downloaded through the Internet, anybody can reverse-engineer what is inside. E.g. when you talk to Google Assistant or Siri, your voice is sent in real time to a central server for analysis, and what comes back is just plain text. Why? Because Google/Apple don't want to include (reveal) the A.I. voice-recognition code in the OS, where every programmer could see it, take it, and use it in their own projects.

 

AI means Artificial Intelligence, which should mean: a teacher gives it data samples and lets it learn from them. The more data it is given, the more knowledge it gains.

If you put an A.I. inside a box without any signals from the outer world, it won't learn anything; there will be no data to analyze and no knowledge to gain.

If the teacher gives it wrong data, the A.I. learns bad things, and there are consequences:

https://www.independent.co.uk/life-style/facebook-artificial-intelligence-ai-chatbot-new-language-research-openai-google-a7869706.html

https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot-after-it-turned-into-racist-nazi/

"Microsoft had to shut Tay down because the bot started spewing a series of lewd and racist tweets."

Search for "[company name] chatbot shutdown" for other similar stories.

(a chatbot is kind of like a parrot, repeating what it heard from humans, unaware of what it means)

 

Analyzing all possible moves is not AI; it is a brute-force algorithm. Engineers just have to add more and more machines (or increase the CPU frequency) to get better results.
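The brute-force idea can be sketched on a toy game. The code below is purely illustrative and not taken from any real engine: single-pile Nim, where each player removes 1-3 stones and whoever takes the last stone wins, solved by exhaustively walking the whole game tree.

```python
# Illustrative brute-force game-tree search on a toy game: single-pile
# Nim, remove 1-3 stones per turn, taking the last stone wins. Every
# position is decided by enumerating the full tree -- no "intelligence",
# just exhaustive search.

def can_win(stones):
    """Return True if the player to move can force a win."""
    if stones == 0:
        return False  # previous player took the last stone and won
    # A position is winning if ANY move leaves the opponent losing.
    return any(not can_win(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    """Return a winning move (number of stones to take), or None."""
    for take in (1, 2, 3):
        if take <= stones and not can_win(stones - take):
            return take
    return None
```

Engines of the '90s did the same thing on a vastly larger tree, substituting a heuristic position evaluator at the cutoff depth for the exact win/loss answer available in a toy game.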

 

Modern GPU cards have 1024/2048/4096+ cores on them.

$5k and you have 4096 cores:

https://www.acmemicro.com/Product/14963/NVIDIA-Tesla-M60-GPU-4096-cores-PCI-Express-3-0-16GB-GDDR5-GPU-Accelerator?c_id=573

$9k and you have 5120 cores:

https://www.acmemicro.com/Product/16529/NVIDIA-900-2G500-0000-000-Tesla-V100-GPU-for-PCIe-16GB-HBM2-640-Tensor-Cores-Passive-Cooling

 

A true AI, like a human, doesn't even consider "obviously wrong moves". A brute-force algorithm goes through all the moves and judges them.

But a move that is "obviously wrong" at the beginning of play (rejected early on by everybody, AI and human alike) may have unexpected consequences later. The A.I. or human can't know that, because the move was rejected at an early stage as wrong/pointless/stupid.
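For what it's worth, classical engines do not literally judge every line to full depth either: alpha-beta pruning skips branches it can prove cannot change the result, while still returning exactly the same value as plain minimax. A minimal sketch on a hypothetical toy tree (nested lists are internal nodes, integers are leaf scores; this is no particular engine's code):

```python
# Plain minimax visits every node; alpha-beta returns the same value
# while provably skipping branches the opponent would never allow.
# Counters (one-element lists) record how many nodes each one visits.

def minimax(node, maximizing, counter):
    counter[0] += 1
    if isinstance(node, int):
        return node
    values = [minimax(child, not maximizing, counter) for child in node]
    return max(values) if maximizing else min(values)

def alphabeta(node, maximizing, counter,
              alpha=float("-inf"), beta=float("inf")):
    counter[0] += 1
    if isinstance(node, int):
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, False, counter, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # opponent will never allow this line,
                break           # so prune the remaining siblings
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, True, counter, alpha, beta))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 5], [6, 9], [1, 2]]   # root is a maximizing node
full, pruned = [0], [0]
assert minimax(tree, True, full) == alphabeta(tree, True, pruned)
assert pruned[0] < full[0]        # same answer, fewer nodes visited
```

The pruned branches are rejected by proof, not by intuition, which is exactly the distinction the paragraph above draws.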

 

Edited by Sensei

On 1/3/2021 at 10:46 AM, Sensei said:

Chess algorithm does not have to be AI at all. '90 chess algorithms were not AI at all. [...]

So AI is now defined as "learning software"? Okay, I'm still coding with (Free) Pascal, and am almost as extinct. Your response is interesting, to say the least. Can you clarify for me whether 8 cores now have sufficiently increased frequency to match 200 cores in 1996, please?


1 hour ago, Olorin said:

Can you clarify for me whether 8 cores now have sufficiently increased frequency to match 200 cores in 1996, please?

In 1996, CPUs had one core per chip. So I assume you meant "to match 200 machines in 1996" (e.g. a server room).

The first Intel Xeon with two cores was released in late 2005.

The efficiency of modern CPUs, apart from increased frequency and more built-in cores, is also the result of new instructions which process more data in a single instruction:

https://en.wikipedia.org/wiki/SIMD

SIMD = Single Instruction Multiple Data

But exploiting them requires 1) writing machine code manually, 2) writing assembler snippets manually, or 3) enabling the option in the compiler (so the compiler must have such an option in the first place; it is doubtful an ancient Pascal compiler will utilize the power of new instructions), and 4) writing a multi-threaded algorithm.
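The payoff SIMD gives, one operation applied to several packed values at once, can be imitated even in pure Python with a "SIMD within a register" (SWAR) trick. This is only an illustration of the idea; real SIMD uses dedicated CPU instructions such as SSE/AVX:

```python
# Pack four 8-bit lanes into one 32-bit integer and add all four
# lanes with a constant number of integer operations, without carries
# leaking between lanes. This is the SWAR ("SIMD within a register")
# idea; hardware SIMD does the same with 128/256/512-bit registers.

LOW7  = 0x7f7f7f7f   # low 7 bits of every lane
HIGH1 = 0x80808080   # top bit of every lane

def pack(lanes):
    """Pack four byte values (lane 0 = least significant byte)."""
    value = 0
    for i, lane in enumerate(lanes):
        value |= (lane & 0xff) << (8 * i)
    return value

def unpack(value):
    return [(value >> (8 * i)) & 0xff for i in range(4)]

def swar_add(a, b):
    """Lane-wise addition modulo 256; carries never cross lanes."""
    return ((a & LOW7) + (b & LOW7)) ^ ((a ^ b) & HIGH1)

result = unpack(swar_add(pack([1, 2, 3, 250]), pack([10, 20, 30, 10])))
# lanes 0-2 add normally; lane 3 wraps to (250 + 10) % 256 = 4
```

The point is the shape of the win: one add-shaped operation does four additions, which is exactly what a packed-integer SIMD instruction does with wider registers and no masking gymnastics.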

A significant bottleneck is how fast the processor can read and write data to and from physical memory. To ease it, many new technologies have been introduced: multi-level caches, and dual- and triple-channel memory slots (so, on a dual-channel motherboard, instead of buying a single 8 GB module you should buy 2x 4 GB or 2x 8 GB; with triple-channel, buy three at a time to get 3x faster memory transfers).

These days, when somebody wants fast code, they convert the most time-consuming part of the application to the GPU via CUDA/OpenCL. We have graphics cards with literally thousands of cores.

Using ancient languages with ancient, abandoned compilers won't help you fully utilize the power of a modern machine.

 

  

1 hour ago, Olorin said:

Can you clarify for me whether 8 cores now have sufficiently increased frequency to match 200 cores in 1996, please?

Hmm.. I have a sense this sentence is equivalent to "is a modern 8-core CPU faster than 200 machines from 1996?", correct?

Yes. A single modern computer with an 8-core CPU can be faster than 200 machines from that era.

You can check it using e.g. a CPU benchmark website which lists CPUs:

Compare an AMD Ryzen Threadripper (cores: 64, threads: 128), score 88731:

https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+PRO+3995WX&id=3837

with a Pentium 4 from around 2000 (cores: 1, threads: 1), score 77:

https://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+4+1300MHz&id=1058

(the worst on this website)

88731 / 77 ≈ 1152 times faster

But achieving that requires 1) multi-threaded code (not all algorithms scale easily), 2) no significant transfers between CPU and physical memory, and 3) no significant transfers between physical memory and data storage, e.g. HDD/SSD/M.2.
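Point 1, restructuring the algorithm so it can be split across cores at all, is often the hard part. Below is a minimal Python sketch of the usual pattern (chunk the work, run chunks on a worker pool, combine the partial results). Note that for CPU-bound pure-Python code you would use a process pool rather than threads, because of the interpreter lock; the structure is identical either way.

```python
# Split an embarrassingly parallel job (summing squares) into chunks
# and hand the chunks to a pool of workers. The combine step must be
# order-independent for this to scale, and not every algorithm can be
# reshaped this way (point 1 above).

from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    return sum(n * n for n in chunk)

def parallel_sum_of_squares(n, workers=8):
    # Interleaved chunks: worker i handles i, i+workers, i+2*workers, ...
    chunks = [range(i, n, workers) for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, chunks))

# Sanity check against the closed form (n-1)n(2n-1)/6 for 0..n-1.
assert parallel_sum_of_squares(1000) == 999 * 1000 * 1999 // 6
```

Swapping `ThreadPoolExecutor` for `ProcessPoolExecutor` is the one-line change that buys real multi-core speedup for CPU-bound Python; in a compiled language the same chunk-and-combine shape maps directly onto OS threads.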

 

Edited by Sensei

1152 times faster!

That's about as much as I can fully understand, but I am grateful for the rest, of which I do have some intuitive grasp. So much for coding. Concerning data storage: I have a glass prism containing the image of a Chinese godlike figurine, etched as small fractures with incredible definition, each computer-generated by focusing laser light to a small point. It has long been clear to me that bits and bytes may become available on the order of Avogadro's number in crystal lattices. This would require developing that kind of laser control, at a specific frequency, with specific electrons in the lattice having two stable states that could be so manipulated. The world-renowned psychic Edgar Cayce (1877-1945) gave past-life readings for people who, he said, had incarnations in Poseidia, the last island of Atlantis to survive before their technology produced a pole shift and ice age 11.5 thousand years ago. He recounted that they had incredible crystals with powerful properties. These were eventually abused to their undoing, acquiring the name "Terrible Crystals" as a result. He predicted the return of these advanced souls, and of their progressive and aggressive influence and technologies, from a time of high energies.

Do you have any knowledge of such developments in computer memory capacity? It appears all the more plausible an assumption considering the vast data resources available on the internet and the "cloud". How are these extremes possible? 10 TB is the best I'm using, nothing like the order of 10^23, Jupiter in rice grains. There would also need to be a plethora of addressing bits.

 


17 hours ago, Olorin said:

Do you have any knowledge of such developments in computer memory capacity?

There are experiments with high-density glass-disc memories intended for permanent, long-term storage. Here is a link: https://en.wikipedia.org/wiki/5D_optical_data_storage

A quick overview:

Quote

The 5-dimensional discs [have] tiny patterns printed on 3 layers within the discs. Depending on the angle they are viewed from, these patterns can look completely different. This may sound like science fiction, but it's basically a really fancy optical illusion. In this case, the 5 dimensions inside of the discs are the size and orientation in relation to the 3-dimensional position of the nanostructures. The concept of being 5-dimensional means that one disc has several different images depending on the angle that one views it from, and the magnification of the microscope used to view it. Basically, each disc has multiple layers of micro and macro level images.

https://www.allaboutcircuits.com/news/5d-data-storage-how-does-it-work-and-when-can-we-use-it/

Link to a paper from the University of Southampton: https://www.orc.soton.ac.uk/news/4282

Maybe the above is what you are looking for? There may be other variants; I have not studied this in any detail.

 

17 hours ago, Olorin said:

The World renowned psychic Edgar Cayce (1877-1945) gave past life readings for people who had incarnations in Poseidia, the last island of Atlantis to survive before their technology produced a pole-shift and ice-age 11.5 thousand years ago. He recounted that they had incredible crystals with powerful properties.

I have not come across any reliable source within mainstream science supporting such claims. Can you provide a reference?


On 1/3/2021 at 12:46 AM, Sensei said:

AI means Artificial Intelligence, which should mean "teacher gives it data samples, and let it learn from them". The more data are given, the more knowledge gained.

Isn't that more a definition of machine learning, which is itself a subset of AI?

