As Kevin Drum says, Chris Suellentrop’s remarks on Isaac Asimov’s robots don’t really do justice to the way he developed the theme over the course of all his work. Still, I think it’s fair to say that “humans bad, robots good” is a pretty accurate summary of the theme of I, Robot. Where Suellentrop really goes off the mark is in suggesting that the robots are good because they’re rational — a kind of Kantian reverie where rationality leads to goodness or, as Rawls would put it, where we can derive the reasonable from the rational. Asimov, however, hews pretty closely to a purely instrumental account of rationality — pure reason could be good, evil, or otherwise, all depending on what it serves. The point about the robots isn’t that they’re good because they’re rational; it’s that they’re rational and they all feature factory-installed morality. The Three Laws of Robotics and, especially, the First Law, which says that no robot may harm a human, are what’s doing all the moral heavy lifting. Rationality only comes into play because it ensures that robots will execute the First Law’s dictates properly.
It’s interesting to note, though, that in the later novels Daneel begins to elaborate a more sophisticated moral vision, moving from a deontological framework that leans heavily on the doing/allowing distinction to a much more consequentialist worldview. The books in question are, in my humble opinion, far, far, far worse than the earlier ones, but I think they express a superior moral philosophy. Meanwhile, as John Holbo eloquently illustrates, if smart people spent less time thinking about this crap and more time focusing on important things, we might get a lot done.