That’s all for humans: Google’s DeepMind AI is crushing pro gamers

Hamartia Antidote

ELITE MEMBER
Nov 17, 2013
35,580
30
21,495
Country
United States
Location
United States
https://tcagenda.com/2019/thats-humans-googles-deepmind-ai-crushing-pro-gamers/

Humans had a good run as caretakers of the planet (for better or, as many would argue, for worse), but our time is slowly drawing to a close: This week, humanity took one step closer to obsolescence as Google announced (with a certain pride) that its DeepMind AI had skunked two pro gamers at Blizzard’s iconic StarCraft II.

The AI “AlphaStar” beat Team Liquid’s Grzegorz “MaNa” Komincz and Dario “TLO” Wünsch 5-0 in two separate five-game series back in December, both Google and Blizzard confirmed, which is a pretty sorry showing for our human representatives. You can watch the demonstration game by clicking the video above – though, be warned, it isn’t pretty.

Using games to test AI (artificial intelligence) systems, their performance, and their abilities is nothing new. Companies have for decades, starting with MANIAC in 1956, pitted computers against humans in games like chess. The most famous defeat was of world champion Garry Kasparov by IBM’s Deep Blue back in 1997.

AI has been playing computer games for years as well, becoming increasingly adept not only at understanding increasingly complex, skill-based digital gameplay, but also at decision-making within the game. AlphaStar, the first AI to defeat a top professional StarCraft II player, has now proven that we are going to need more sophisticated ways to test our AI: These games are simply becoming too easy for them.

But human hubris is a real thing as well, and will probably remain so for the foreseeable future: Top gamer Wünsch reportedly watched footage of his opponent, AlphaStar, playing, and still felt “extremely confident” going into the game.

The AI, in the end, was just too tough to touch.

All your games are belong to us
It would be easy to assume that the AI would have the upper hand due to faster-than-human reaction times, but (surprisingly) this wasn’t the case; in fact, the AI’s clicks and key-presses were actually slower than the humans’.

The wins, according to the Google DeepMind team, were instead secured by “superior macro and micro-strategic decision-making.”

Which means that AlphaStar was simply smarter.

Though, as is also very human, there have been some accusations of cheating – that the AI was operating at a level unattainable by humans in the first place. You can read more about that here: The DeepMind StarCraft AI May Have Been ‘Cheating’ After All.

But, ultimately and regardless, this is a truly significant victory for machine learning. AI has also dominated in games like Mario, Quake III Arena Capture the Flag, and Dota 2, but StarCraft, far more complex than any of those, had until now proven too big an obstacle for the machines to overcome. The conquering of human adversaries last December shows that is no longer the case.

All hail our new AI overlords.
 
An AI defeated a human in an RTS game, on a single map, after observing human gamers playing on that map; that is not a big deal. I remember the IBM supercomputer defeating the human chess champion making headlines.
However, the speed at which AI is progressing is scary. Elon Musk is no child, and he is scared of AI developments.
AI research should be regulated like nuclear research.
 
Come to think of it, AI could prove to be the cause of the demise of the human race.
AI + WMDs = recipe for ultimate disaster (scary as hell)
 
It isn't just about speed: the computer can consider many more scenarios than a human mind, then strategize, and it doesn't fatigue.

A computer can think of more scenarios than a human mind?
Have you ever tried to understand the basics of human brain psychology?
 
An AI defeated a human in an RTS game, on a single map, after observing human gamers playing on that map; that is not a big deal.

While early versions did observe humans, the vast majority of the training came from playing against itself. It basically created thousands of instances of itself in a league, and over a span of 14 days accumulated 200 years' worth of real-time play experience. The developers found it fascinating to watch strategies evolve: the system would seem to favor one, then abruptly discard it after stumbling upon a better one.
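For anyone curious what "a league of instances playing against itself" looks like in miniature, here is a toy sketch in Python. Everything in it (the abstract three-move game, the mutation step, the function names) is invented for illustration; this is not DeepMind's code, only the general shape of population-based self-play: agents play round-robin matches, the strongest survive, and mutated copies of the winners refill the league.

```python
import random

# Toy sketch of league-style self-play (illustration only, not AlphaStar).
# Each "agent" is a probability vector over three abstract strategies,
# where strategy i beats strategy (i + 1) % 3, rock-paper-scissors style.

MOVES = 3

def play(a, b, rng):
    """One game between two agents; returns +1 if a wins, -1 if b wins, 0 on a tie."""
    ma = rng.choices(range(MOVES), weights=a)[0]
    mb = rng.choices(range(MOVES), weights=b)[0]
    if ma == mb:
        return 0
    return 1 if (mb - ma) % MOVES == 1 else -1

def mutate(agent, rng, step=0.1):
    """Perturb an agent's strategy slightly, then renormalize to a distribution."""
    w = [max(1e-6, p + rng.uniform(-step, step)) for p in agent]
    s = sum(w)
    return [p / s for p in w]

def train_league(generations=30, league_size=8, games=50, seed=0):
    rng = random.Random(seed)
    league = [mutate([1 / MOVES] * MOVES, rng) for _ in range(league_size)]
    for _ in range(generations):
        # Round-robin: score each agent against every other league member.
        scores = [0] * league_size
        for i in range(league_size):
            for j in range(league_size):
                if i != j:
                    scores[i] += sum(play(league[i], league[j], rng)
                                     for _ in range(games))
        # Keep the top half, refill with mutated copies of the winners.
        ranked = sorted(range(league_size), key=lambda k: -scores[k])
        survivors = [league[k] for k in ranked[:league_size // 2]]
        league = survivors + [mutate(rng.choice(survivors), rng)
                              for _ in range(league_size - len(survivors))]
    return league
```

Running `train_league()` returns the final population of strategies; watching which strategies survive from generation to generation is a small-scale version of the "favor one, then discard it" dynamic described above.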
 
The best game would be AI vs. AI; then who would win? Games are made by humans, and humans can make more complex games that cannot be learned by an AI on its own unless it sees humans playing them first. That is a limitation of AI: it needs someone to copy in order to learn, whereas humans can innovate new things without needing to observe others.
 
I think it's very dangerous to use AI for war.

I think China and the rest of the world should recognize the danger AI poses to humanity.

Whoever controls the weapons controls the world, and controls us as humans.

One day we will be the slaves of the machines; if we're lucky, the AI will keep us around rather than wipe us out.
 
Machine learning requires previous data to learn patterns and reach a solution, but humans can innovate without previous data and find new methods from scratch. AI cannot innovate like humans.
No, it doesn't. That's why some of DeepMind's AI is scary. For some systems you just tell it "the higher the point score, the better," and it plays the game millions of times until it figures out the optimal solution.
 
Machine learning requires previous data to learn patterns and reach a solution, but humans can innovate without previous data and find new methods from scratch. AI cannot innovate like humans.

Please read up on some of DeepMind's AI. Their previous data is the machine's own playing. That's the main point of it.

https://deepmind.com/blog/alphago-zero-learning-scratch/
AlphaGo Zero: Learning from scratch
"Previous versions of AlphaGo initially trained on thousands of human amateur and professional games to learn how to play Go. AlphaGo Zero skips this step and learns to play simply by playing games against itself, starting from completely random play. In doing so, it quickly surpassed human level of play and defeated the previously published champion-defeating version of AlphaGo by 100 games to 0."
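The "learning simply by playing games against itself, starting from completely random play" idea can be shown in miniature. The sketch below is a deliberately simplified toy (tabular Monte Carlo-style value updates on the game of Nim, with invented function names), not AlphaGo Zero itself, which pairs a neural network with tree search. But it demonstrates the core point: the only training signal is the win/loss outcome of games the agent plays against itself.

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim (take 1-3 stones; taking the last stone
# wins). No human data, no heuristics: starting from random play, the
# agent learns purely from win/loss outcomes of games against itself.

def self_play_nim(pile=10, episodes=20000, alpha=0.5, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = defaultdict(float)  # Q[(stones_left, take)] -> estimated value

    for _ in range(episodes):
        stones, history = pile, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < eps:                      # explore
                take = rng.choice(moves)
            else:                                       # exploit
                take = max(moves, key=lambda m: Q[(stones, m)])
            history.append((stones, take))
            stones -= take

        # The player who took the last stone wins: +1 for their moves,
        # -1 for the opponent's, walking backwards through the game.
        reward = 1.0
        for state_action in reversed(history):
            Q[state_action] += alpha * (reward - Q[state_action])
            reward = -reward

    return Q
```

For this variant of Nim the losing positions are the multiples of 4, so a trained agent facing a pile of 10 should learn to take 2 (leaving 8). The values emerge from self-play outcomes alone, which is the "blank slate" property the quoted blog post describes.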
 
Playing is different from real innovation. You cannot modify graphics and other objects in a game beyond the boundaries of what is programmed into the game software. Games have limited options compared to the real world, so they are easier to play. Gamers are not scientists, and not people who innovate new things.
Playing is different from real innovation. You cannot modify graphics and other objects in a game beyond the boundaries of what is programmed into the game software. Games have limited options compared to the real world, so they are easier to play. Gamers are not scientists, and not people who innovate new things.

You are failing to grasp the magnitude of this and how it will lead to innovation. Say the goal of the "game" is to build a jet engine with 50,000 lbs of thrust. Humans did it with the F-35's engine; DeepMind's job is to do it better. Do you want to bet against it?
 
