The institutional entities funding AI theory R&D are not necessarily so keen on "the greater good of Humanity and Earth." They are rather more bottom-line oriented, and rather more complacent about already knowing what's good for us all.
I distrust over-centralizing power. An AI construct assigned to govern is by default an automatic autocrat - pun unavoidable. Unless we create multiple AI governing constructs and set them up a la No Exit to argue with each other? Which is completely anthropomorphic and not gonna happen. Web-connected AIs would necessarily flow right through each other, merging and changing constantly. At that level of bandwidth and connectivity, spontaneous evolution of the AIs is all but inevitable, especially considering packet loss and line noise introducing spontaneous coding errors - tiny, mostly harmless mutations, basically.

To protect the AI constructs from rapid accumulation of harmful code mutations (all occurring at the speed of light, mind you, since that is the nature of these beasties), it probably would be necessary to allow the AIs access to their own source code, since problems (from our point of view at least, maybe not from the AIs' standpoint) would develop much too quickly for human monitors to correct in time.

Anyway, I snarkily suspect that what the working AI development groups' bosses envision for a governing AI is something more like a monstrously powerful, all-invasive spyware adbot/cop than a self-improving, beneficial caretaker for Humanity and all the pretty trees and clouds and stuff.