and doesn't leave behind a huge radioactive smoking crater.

Is the answer just always to keep a human in the loop? And is the problem with that: which human?

I think the answer is yes: we should disallow attacks where there is no human supervision, no human looking at the actual situation and the actual target and saying, "yes, this is OK." Even under the assumption that the machine is programmed by someone with the best legal training and the most humanitarian of aims, even in that situation we face the problem that the machine cannot make those decisions correctly.

The trouble with these slaughterbots is that all the bits could be bought in a decent supermarket, probably with the exception of the small bit of explosive. So what do we do? They're technically already available, so how would you ever ban them? Well, we ban many things