With smarter robots come struggles and fear

This NYT article is worth a read:
http://www.nytimes.com/2014/12/16/upshot/as-robots-grow-smarter-american-workers-struggle-to-keep-up.html

Fears are being fueled by Nick Bostrom’s book “Superintelligence,” as well as by remarks from Elon Musk and Stephen Hawking.
http://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111
http://www.theguardian.com/technology/2014/oct/27/elon-musk-artificial-intelligence-ai-biggest-existential-threat
http://www.huffingtonpost.com/2014/05/05/stephen-hawking-artificial-intelligence_n_5267481.html

An example of a so-so attempt at rebuttal (a bit tongue in cheek in parts, but with some more serious undertones):
http://www.wired.com/2014/12/armageddon-is-not-the-ai-problem/

Attempts to dispel the threat are hard because:
species and civilizations come and go, for many reasons
every technology can be used for good and for ill
this time may be truly different, past the threshold where technology rebuilds itself on exponential change curves

The arguments to address the fear have so far been one, or a combination, of:
the threat is still far off, and short-term benefits outweigh long-term risks – we can manage this
we can probably design friendly AIs – we can manage this (Bostrom’s plea/resolution at the end of his book)
we will co-evolve with the new tools – augmented intelligence (the “Advanced Chess” example of human-plus-computer teams)
we will merge with the new tools (Kurzweil – a creepier form of augmented intelligence)
something else is more likely to kill us off first, and we need AI to work on those complex, urgent problems