Unnecessary for the act of carrying forward the torch of sentience and intelligence in a universe full of things that are only dimly aware, if at all. Although I doubt we are strictly necessary for that now, we seem to think we are.
That is not an end in itself. To exist strictly to exist is not a purpose.
Also, more in line with Cain's OP: if warfare wants to disguise itself as politics, and outright war disguises itself as peace, isn't it equal to peace? Even if that peace is borne of the manipulation of thoughts, isn't it preferable to the horrors of war? My line of reasoning with AI isn't to turn this into a simple "are robots good or bad" topic, but to question the ethics of maintaining individual identity at the expense of popular welfare. I don't necessarily disagree with it, but can we say with certainty that evolution isn't pushing us toward a collective "intelligence," artificial or not, that would need to supersede and eventually replace the one we know?
Peace, as Jerry Pournelle pointed out, is something we infer because there are sometimes intervals between wars. Peace is also not always a desirable thing, given the nature of humans.
And individual identity is what makes us what we are. Developing that identity is the only worthwhile pursuit of a human being... and it is not by any means mutually exclusive with the general welfare of the species.
Lastly, I don't see any indication that we are forming a collective intelligence as a species, local events notwithstanding.
I don't mean a Borg consciousness or anything out of sci fi. What I mean is that systems (political/military/social) are growing larger and more interconnected and interdependent. These systems draw on the intentions of populations worldwide and therefore have a vested interest in shaping those intentions. Media and other propagandist organs grow to shelter the population and feed it the information necessary to grow its intentions and concerns in the desired direction. In some sense this could be referred to as a collective intelligence.
And no, it isn't always mutually exclusive with the general welfare of the species, except when those larger constructs we have created to oversee our general welfare begin to see individualism as a threat. There isn't anything inherently dangerous about being yourself but that doesn't mean the system isn't wired to see it as a threat anyway, for one reason or another.
And an AI that leads - directly or indirectly - to our extinction isn't a threat?
Look, the good things that happen in this world are almost always a result of someone FUCKING UP. ALL the FUNNY shit in the world is a result of someone fucking up.
I don't want a future in which people don't fuck up GLORIOUSLY, and I don't see an AI God as being helpful in that department. If we're going to go extinct, I'd just as soon we did it THE OLD FASHIONED WAY. By fucking up. All by ourselves.
And even if that fucking up was the creation of said AI, I say we smash the fucking thing on the way off stage. Because nobody likes a smartass know-it-all.
One other thought: an AI designed by humans is about as likely to succeed as a regular operating system developed by humans. So we build this fucking AI to stop war or whatever, and then spend the next 200 years rebooting the fucking thing due to the blue screen o' death.
Fuck that, I get enough frustration out of my laptop.