A lot of what follows is, by its nature, entirely speculative. It does, however, reflect the views of some people who make it their business to think about these things.
The fundamental problem here is that, past a certain point, a machine needs something on the level of human intelligence to do the sorts of jobs humans do, particularly things like managing resource distribution. This is what is generally called strong AI, or AGI (Artificial General Intelligence). And once you build AGI, the world has changed on a very deep level.
The 'hard takeoff' argument says, basically, that once you build human-level artificial intelligence, it will very quickly turn itself into a superhuman intelligence. This is by no means settled science or an uncontroversial position, but it does have some credence. The basic thesis is this: we're human-intelligent, and our brains are ad-hoc kludges thrown together by evolution, yet we somehow managed to create AI (in this scenario). That AI is human-intelligent, runs on computer hardware (faster and with better memory than brains), and has access to its own source code, which means a good way for it to achieve its goals is to improve that code and create a second-generation AI that is even smarter but has the same goals, which then creates a third-generation AI, and so on until the low-hanging fruit of cognitive improvement is exhausted. There is an extensive body of arguments for and against this view that I have no hope of summarising, so instead I suggest you take a look at the
Hanson-Yudkowsky AI-Foom Debate. Eliezer Yudkowsky argues for hard takeoff, Robin Hanson against.
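If it helps to see the shape of that argument, here is a toy sketch of the loop in Python. To be clear, this is purely illustrative: the function names, the diminishing-returns curve, and every number in it are things I've made up for the sketch, not anyone's actual model of takeoff dynamics.

```python
# Toy sketch of the recursive self-improvement loop described above.
# Every number and functional form here is made up for illustration;
# this shows the *shape* of the hard-takeoff argument, not a forecast.

def design_successor(intelligence: float) -> float:
    """Generation N designs generation N+1. A smarter designer makes a
    bigger jump, but each jump is harder than the last, so the
    per-generation gain shrinks as the easy improvements are used up."""
    gain = 1.0 + 4.0 / intelligence ** 1.5  # arbitrary diminishing-returns curve
    return intelligence * gain

def hard_takeoff(start: float = 1.0, min_relative_gain: float = 0.05,
                 max_generations: int = 100) -> float:
    """Iterate 'build a smarter successor with the same goals' until the
    low-hanging fruit of cognitive improvement is exhausted."""
    intelligence = start  # 1.0 = roughly human-level, by construction
    for generation in range(1, max_generations + 1):
        successor = design_successor(intelligence)
        if successor / intelligence - 1.0 < min_relative_gain:
            print(f"improvements tapered off after {generation - 1} generations")
            break
        intelligence = successor
        print(f"generation {generation}: intelligence ~ {intelligence:.2f}")
    return intelligence

if __name__ == "__main__":
    hard_takeoff()
```

Much of the Foom debate is, loosely speaking, an argument over what that gain curve actually looks like and how fast the loop runs, not over whether such a loop could exist at all.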
Now, if you consider the superintelligence scenario (whether we got there via hard takeoff or not), then it ceases to be a question of whether we let the machines control us. A superintelligence doesn't need your permission to determine your life; it's smarter than you, thinks faster than you, and will achieve its goals regardless of you. If we are lucky (and successfully built it that way), the AI's goals are to make the world better for humans. If not, we're fucked. Not necessarily because it will have an explicit goal to kill all humans, but most likely as a side-effect of it doing whatever it wants. As Yudkowsky puts it: 'The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.' Creating AGI in such a way that it doesn't fuck us over (Friendly AI) is a hard problem, fundamentally harder than AGI itself, and also one we have to solve before we implement AGI. You see how that might be a challenge.
So, my position here is: if we do have a machine that can in fact run a government, it won't need our permission to do so. If we solve Friendly AI, then it will do whatever it can to make the world a better place, which is great. If not, we most likely all die soon afterwards.