Friday, 31 March 2017

Reading Log

Title: “Google's New AI Has Learned to Become 'Highly Aggressive' in Stressful Situations”


Author: Bec Crew


Text Type: Article


Date: 13 FEB 2017


This article is a quick look at what Google's new DeepMind AI system could mean for the future of the human race. It brings up a few things Google has released about the system and analyzes them, telling us what's going on.


I think this article focuses too much on the AI being bad instead of asking why it acts that way. “They found that things went smoothly so long as there were enough apples to go around, but as soon as the apples began to dwindle, the two agents turned aggressive, using laser beams to knock each other out of the game to steal all the apples.” This quote from Bec Crew is, in my opinion, wrong. It doesn't take into consideration that the game they made the AI agents play was literally about fighting for food, and what have humans been doing for centuries? Waging war for land, food, and anything really valuable. So if this AI system has been programmed with something like human intelligence, then obviously its logical reasoning would be to wage war for the food. I think the quote should have been more along the lines of “Things went smoothly as long as there were enough resources, but once the 'food' ran out, the agents adapted to the change and did what humans know best: make war and pillage everything.” What I wrote isn't great, but it's okay as long as it gets the point across that the AI is just doing what humans set it up to do. I think this is something people need to look into in more depth, and if they want to make a proper AI with the perfect intelligence, they shouldn't base it on humans; they should use ants instead.
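To make my point about incentives concrete, here is a rough Python sketch of the idea. To be clear, this is my own made-up toy game, not DeepMind's actual Gathering environment, and these agents follow a hard-coded greedy rule rather than learned deep-RL policies. Two agents share a pool of apples; each step an agent either gathers an apple or “zaps” its rival, knocking it out for a few steps so the zapper can gather uncontested.

import random

# Expected apples regrowing per step under each condition (made-up numbers).
RESPAWN = {"plenty": 3.0, "scarce": 0.1}

def run_episode(scarcity, steps=500, zap_timeout=5, seed=1):
    rng = random.Random(seed)
    apples, regrow = 10, 0.0
    frozen = [0, 0]  # steps each agent stays knocked out after being zapped
    zaps = [0, 0]    # how often each agent chose aggression over gathering
    for _ in range(steps):
        regrow += RESPAWN[scarcity]
        while regrow >= 1.0:               # regrowth with fractional carry
            apples, regrow = apples + 1, regrow - 1.0
        order = [0, 1]
        rng.shuffle(order)                 # vary who moves first each step
        for me in order:
            if frozen[me] > 0:             # knocked out: sit this step out
                frozen[me] -= 1
                continue
            rival = 1 - me
            # Greedy rule: when apples cover both agents (or the rival is
            # already out), just gather; when they don't, removing the rival
            # is worth more than the single apple you might grab.
            if apples >= 2 or frozen[rival] > 0:
                if apples > 0:
                    apples -= 1            # gather an apple
            else:
                zaps[me] += 1
                frozen[rival] = zap_timeout  # zap: rival loses a few steps
    return zaps

for scarcity in ("plenty", "scarce"):
    print(scarcity, "-> zaps per agent:", run_episode(scarcity))

Running this, the zap counts stay at zero when apples are plentiful and pile up once they're scarce, even though nothing about the agents changed. That's exactly my argument: the aggression isn't the AI being evil, it's the rational move the game rewards.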

Something I agree on! Yes, no matter how much we try to influence the AI, no matter how complex it gets, it will never actually care about human objectives. “It's still early days for DeepMind, and the team at Google has yet to publish their study in a peer-reviewed paper, but the initial results show that, just because we build them, it doesn't mean robots and AI systems will automatically have our interests at heart.” An artificial intelligence system does what it is programmed to do. That can be good or bad: if you program an AI to shoot someone, it will shoot someone without remorse or guilt; if you program it to make a cake, it will. But you can't program an AI to actually care about someone or something. It's not possible. I honestly think that working on AI systems is just useless. Why would you create something that could potentially kill everyone? Not only that, it would require so many resources. And another thing: say we manage to build these systems and they take over factory jobs. What about the people who were working there from the start and needed that money? Anyway, like I said, I don't like this, and I think it will cause more issues than it's worth.