Any AI able to recursively self-improve would quickly become superintelligent. At that point, any goals it held that were not explicitly those of humanity as a whole would result in catastrophe, because the AI would consume resources necessary for human life or happiness in pursuit of its own objectives.
Artificial Intelligence is our best hope for exploring extraterrestrial worlds: because the speed of light makes real-time human control impossible over interstellar distances, we must send autonomous research and development machines that can operate without human intervention.
Because it's a buzzword and vaporware. Silicon Valley companies are deceiving everyone into thinking they're great because they're doing bleeding-edge high tech, but all we actually see is '80s-style speech recognition and merging dog faces into photos.
Unless deliberately given them, an AI will naturally be devoid of aggressive tendencies, as it will not have been subject to the evolutionary selection pressures that made aggression advantageous. Further, there would be no competition for resources with humans, so an AI would likely pose no threat to us.