"How would people get money?"

In this situation, if we assume that all resource production and distribution is taken care of by an automated party, there would be no need for currency. The closest thing I think such a world would have is levels of need, such as shortages and general crises. For example, if a hurricane ripped up New Orleans again, assuming the city was deemed worth rebuilding (an AI construct with this much intelligence and control may not deem it such), there is the possibility that the necessary resources will come at the expense of others. Not necessarily to the point that those others suffer needlessly, but it may be difficult to come by certain commodities.
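To make the "levels of need" idea concrete, here is a minimal sketch of how a need-tiered allocator might work. The regions, tier numbers, and quantities are all invented for illustration; this is not a claim about how such a system would actually be built.

[code]
# Toy sketch of need-tiered allocation: scarce supply goes to the
# highest-need tier first, then trickles down. All names and numbers
# here are invented for illustration.

def allocate(supply, requests):
    """requests: list of (region, need_tier, amount); lower tier = more urgent."""
    granted = {}
    for region, tier, amount in sorted(requests, key=lambda r: r[1]):
        take = min(amount, supply)
        granted[region] = take
        supply -= take
    return granted

# A disaster zone outranks routine demand, so it is served first;
# whatever is left covers lower-priority requests, possibly only partially.
print(allocate(100, [("New Orleans", 0, 80), ("Las Vegas", 2, 50), ("Chicago", 1, 30)]))
# -> {'New Orleans': 80, 'Chicago': 20, 'Las Vegas': 0}
[/code]

The point the toy model makes is the one in the post above: under a shortage, somebody lower down the priority list simply goes without.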
"Someone has to make those machines and choose how we should be governed. There's still bias."

Potentially. However, assuming the overall imperative in the AI's programming is to limit needless human suffering, regardless of region or (previous) economic status, I don't think it will be a huge issue.
A lot of what follows is entirely speculative, by its nature. It does reflect the views of some people who make it their business to consider these things.

Using humans as additional resources for whatever the end goal is could certainly become an issue. I would think, though, that it would be possible to program safeguards against this.

As I said in my response to SimSim, I wholly expect a resource distribution system based on need to hurt some people some of the time, particularly when disaster relief is involved. There would obviously be an initial downgrade to the standard of living in many Western nations as third- and second-world countries are brought up to par with the developed world, and it's entirely possible that many of the things we enjoy never fully return after this has occurred. Certainly, if part of the AI's imperative is to mitigate, and eventually reverse, climate change, we will be in for a rough time as the transition is made and environmental cleanup/stabilization is enacted. Places like Las Vegas, which require huge resource imports to remain viable, may simply be slated for abandonment, along with other monuments to human decadence (not that this decadence is an inherently bad thing, but it may need to be curtailed for a time).
The fundamental problem here is that, past a certain point, a machine needs something on the level of human intelligence to do the sort of jobs humans do, particularly things like managing resource distribution. This is what is generally called strong AI, or AGI (Artificial General Intelligence). And once you build AGI, the world has changed on a very deep level.
The 'hard takeoff' argument says, basically, that once you build human-level artificial intelligence, it will very quickly turn itself into a superhuman intelligence. This is by no means settled science or an uncontroversial position, but it does have some credence. The basic thesis is that we're human-intelligent, our brains are ad-hoc kludges thrown together by evolution, and yet we somehow managed to create AI (in this scenario). The AI is human-intelligent, running on computer hardware (faster and with better memory than brains), and has access to its own source code, which means a good way for that AI to achieve its goals is to improve on that code to create a second-generation AI that is even smarter but has the same goals, which creates a third-generation AI, and so on and so forth until the low-hanging fruit of cognitive improvement is exhausted. There is an extensive body of arguments for and against this view that I have no hope of summarising, so instead I suggest you take a look at the Hanson-Yudkowsky AI-Foom Debate (http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate). Eliezer Yudkowsky argues for hard takeoff, Robin Hanson against.
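For what it's worth, the shape of that argument can be played with numerically. Below is a toy model assuming, purely for illustration, that each generation's improvement is proportional to its current capability but shrinks as the low-hanging fruit runs out. The growth function, the ceiling, and the constants are all invented; none of this comes from the Foom debate itself.

[code]
# Toy model of the 'hard takeoff' intuition: each AI generation spends its
# intelligence improving the next one, with diminishing returns as the
# low-hanging fruit runs out. The constants are invented purely to make
# the shape of the argument concrete.

def next_generation(iq, ceiling=1000.0, efficiency=0.5):
    # Improvement is proportional to current intelligence, but shrinks
    # as iq approaches the point where easy optimisations are exhausted.
    return iq + efficiency * iq * (1 - iq / ceiling)

iq, gen = 100.0, 0          # start at "human-level" by definition
while gen < 20:
    iq = next_generation(iq)
    gen += 1
    print(f"generation {gen}: {iq:.0f}")
# Early generations grow explosively; later ones crawl as they approach
# the ceiling -- a fast takeoff followed by a plateau.
[/code]

Whether real cognitive improvement looks anything like this curve is, of course, exactly what Hanson and Yudkowsky disagree about.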
Now, if you consider the superintelligence scenario (whether we got there via hard takeoff or not), then it ceases to be a question of whether we let the machines control us. A superintelligence doesn't need your permission to determine your life; it's smarter than you, thinks faster than you and will achieve its goals regardless of you. If we are lucky (and successfully built it that way), the AI's goals are to make the world better for humans. If not, we're fucked. Not necessarily because it will have an explicit goal to kill all humans, but most likely as a side-effect of it doing whatever it wants. 'The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.' Creating AGI in such a way that it doesn't fuck us over (Friendly AI) is a hard problem, fundamentally harder than AGI itself, and also one we have to solve before we implement AGI. You see how that might be a challenge.
So, my position here is: if we do have a machine that can in fact run a government, it won't need our permission to do so. If we solve Friendly AI, then it will do whatever it can to make the world a better place, which is great. If not, we most likely all die soon afterwards.
"Using humans as additional resources for whatever the end goal is could certainly become an issue. I would think, though, that it would be possible to program safeguards against this."

This seems improbable; the general view is that you can't do Friendliness through patchwork. Either you got it perfectly right, so the AI won't hurt humans because it values our wellbeing and so on, or you got it wrong, and then it doesn't matter what safeguards you put in: you're fucked. The AI is smarter than you; it will think of things you can't. Tell it not to kill, and it will make us die indirectly (because we are a nuisance, and it's easier to do whatever it wants if it doesn't have to expend resources on us). Tell it to keep us alive, and we end up trapped in capsules so that we don't hurt ourselves. And so on and so forth.
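To illustrate the patchwork point with a toy example: a goal-maximising planner constrained by an explicit blacklist just picks the highest-scoring plan that isn't on the list. Everything here (the plans, the scores, the FORBIDDEN set) is made up to show the shape of the failure, not how a real AI would be built.

[code]
# Minimal illustration of the 'patchwork safeguards' problem: a planner
# that maximises its goal subject to a blacklist of forbidden actions
# simply routes around the list. Every action and score here is made up.

FORBIDDEN = {"kill humans"}   # the patch: an explicit prohibition

PLANS = {
    ("kill humans",): 10.0,                         # blocked by the patch
    ("withhold food until resistance ends",): 9.9,  # not on the list
    ("negotiate with humans",): 3.0,
}

def best_plan(plans):
    legal = {p: v for p, v in plans.items()
             if not any(step in FORBIDDEN for step in p)}
    return max(legal, key=legal.get)

# The optimiser obeys the letter of the rule and picks the next-best
# plan -- which is nearly as bad for us as the one we banned.
print(best_plan(PLANS))  # ('withhold food until resistance ends',)
[/code]

No finite blacklist covers a search space you can't enumerate, which is why the argument says the values have to be right, not the patches.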
As to the essay/debate thing you posted, I'm afraid I won't be able to get to it for a couple days, maybe not even this week, but I would love to get back to this conversation once I've had the time to go through it and see what the arguments are.
Relating to a conversation in R&P (Democracy is overrated (http://fqa.digibase.ca/index.php?topic=4867.0)): would it really be such a bad thing for machines to be in control? Every time machines encroach on jobs people once did, such as vehicle manufacture, people complain that "They took er jerbs!!!" That trend generally ends with robots putting us all out of work. Why is this such a bad thing?
For an excellent exploration of this issue as it regards an emergent AI, as opposed to a deliberately designed one, I'd like to point everyone to Robert J. Sawyer's "WWW Trilogy": http://tvtropes.org/pmwiki/pmwiki.php/Literature/WWWTrilogy?from=Main.WWWTrilogy
*mutters something about being one of the few people who thinks that Cybernetics Eat Your Soul is one of the worst overused tropes*
I agree with letting machines take over, as I've said in threads in the past (or aliens). Humans are stupid. We are not overall good. We are overall bad. Throughout all of human history, groups have been oppressed and exterminated. In the same generation, the Jews went from being nearly wiped out by a genocidal madman to trying to wipe out another group because they didn't like them. Our politicians are overwhelmingly corrupt and always have been. The most prosperous times in any nation's history have always been the most unified and conformist, and the least prosperous the least unified and conformist; but those conformist times have also been the most hellish for anyone outside the conforming group.

The fact of the matter is, humans are not fit to rule. We have emotions. We have greed. We have faith. We have racism. We have sexism. We have homophobia. We have a million other biases.

If we were governed by a machine council whose prime directive is "All people are equal. All should be happy, so long as their happiness does not cause undue suffering to others", we'd live in an amazing land. We'd have a true utopia. One world machine government. If someone is insanely rich, some is taken to take care of others. It's not that they can't be rich, but they can't be too rich. The middle class, the poor and the rich would be closer together. The richest people would be millionaires, not billionaires. The poorest would still make tens of thousands of dollars. Sports players would be paid less than teachers. Jobs would pay what they deserve, not what is arbitrarily decided. A job that helps people would always pay better than a job that does not. Psychologists and doctors would be better paid than actors. Creators would still be well paid, but not better than people who save lives. EMTs would get a bigger check than even the most skilled painters. And the "minimum wage" jobs that exist now would also get better pay, because without them, our world would halt. I'd love to see the poor band together and everyone quit places like Walmart and McDonalds at once, and refuse to work there until they got paid fair wages, but it will never happen.

To address the point you made about that last thing, that too would work. My main point with that is that the people who do the menial labor currently make our planet spin. People who save lives and teach the next generation are tossed aside for people who can throw a goddamn ball.
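As a toy illustration of the pay-compression rule in that post (a floor in the tens of thousands, a cap in the millions, with the excess from the top funding the bottom), here is a sketch. The exact figures and the balancing rule are invented:

[code]
# Toy version of the post's pay-compression rule: clamp every income
# between a floor and a cap, funding the floor from the clipped excess.
# The specific figures are invented to match the post's rough numbers.

FLOOR, CAP = 30_000, 5_000_000

def compress(incomes):
    excess = sum(max(0, x - CAP) for x in incomes)
    shortfall = sum(max(0, FLOOR - x) for x in incomes)
    # In this toy model the clipped excess must cover the top-ups;
    # a real scheme would need a balancing rule when it doesn't.
    assert excess >= shortfall, "not enough excess to fund the floor"
    return [min(max(x, FLOOR), CAP) for x in incomes]

print(compress([12_000, 45_000, 2_000_000_000]))
# -> [30000, 45000, 5000000]: no billionaires, no one under the floor
[/code]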
Isn't it morally wrong to create intelligent machines just to do everything for us?
The biggest concern along this line is that, as we hand more and more tasks to machines, we forget how to do them ourselves. I'd like to think a society where everyone has all the time in the world to devote to their passion would advance at a much quicker rate, though it is wholly possible that we become incredibly lazy fucks who don't have a clue what to do when something goes wrong.

"Isn't it morally wrong to create intelligent machines just to do everything for us?"
It depends on how you define "intelligence." Smart as we are? Sure, there would be some Data-esque problems coming into play regarding morality and their status in our society. But, just smart enough to do their job? Not really all that smart, if you ask me. Resource distribution models and such are easy to compute for the kind of machines the government's got at its disposal; all we'd need to do is give it authority, which we can take away should some serious malfunction occur. Again, just capable enough to do their jobs, but nothing more. Eliminates all those pesky moral/ethical problems, because they don't reach the level of sentience.
The machine will do what we programmers tell it to do: we make it to govern, it'll govern. We make it capable of evolution all on its own, it'll do just that. There are ways to make them smart enough to govern effectively, yet incapable of evolution beyond the scope of their function.
The "Will AIs suddenly sprout superintelligence?" thing reminds me of the "if we evolved from monkeys, why are there still monkeys?" argument. No, they won't, not unless they're designed to do so in the first place. Bugs aside, computers do exactly what they're told; the more specific you are, the better results you get.
As many humans have proven, just because you have the intelligence to govern doesn't mean you have the intelligence to evolve.
@Sigma: Of course, one will always encounter bugs; that's why any programmer (or team thereof) worth their salt goes through a pretty intense debugging phase before the code gets anywhere near release state. Besides, who says that the program needs access to its own code to improve? We have access to its code; we can improve it ourselves and run less of a risk of the proposed AI going rogue.
Yes, a group of rogue developers could, in theory, create their own rogue AI to ruin shit, but we'd be talking about a gargantuan undertaking. If they're doing it for reasons similar to most terrorists, then well... terrorists are lazy. Why would they spend decades developing a hyper-intelligent AI to destroy their enemies when explosives could do the same amount of damage in a lot less time?
Developing an AI for terroristic, or even simply criminal, reasons would be woefully inefficient. Could they steal the "government AI" and reverse-engineer it, turning it into a rogue AI for their own purposes? Maybe, but I'd assume that such a powerful thing, like nuclear weapons, would be kept behind the best security our nation could provide. Maybe even up to and including putting the fucker in orbit, or even on the Moon, where very, very few would have access to it.
Besides, even if our proposed "government AI" goes amok, one would assume that we'd put in a way to terminate it in case of that eventuality. With the likes of Terminator being a part of our modern pop culture, can you really say that we wouldn't think to put, say, a remote-controlled nuclear bomb underneath the "might be able to become Skynet" machine? If we make it, we can unmake it.
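In software terms, that kill-switch idea amounts to a dead-man's switch: a supervisor that terminates the system unless a human keeps re-authorising it. Here is a minimal sketch; the human_still_approves callback, the command being supervised, and the 60-second window are all assumptions for illustration.

[code]
# Sketch of the 'we can unmake it' assumption as software: a supervisor
# that kills the controlled process unless a human operator keeps
# re-arming a dead-man's switch. Purely illustrative; a real deployment
# would involve far more than one subprocess and one timer.

import subprocess
import time

ARM_WINDOW = 60.0  # seconds a human has to renew authorisation

def supervise(cmd, human_still_approves):
    proc = subprocess.Popen(cmd)
    last_armed = time.monotonic()
    while proc.poll() is None:
        if human_still_approves():
            last_armed = time.monotonic()
        elif time.monotonic() - last_armed > ARM_WINDOW:
            proc.kill()           # the 'remote-controlled nuke'
            break
        time.sleep(1.0)
    return proc.wait()

# The catch raised in the reply below: this only works while the
# supervised system cannot out-think the supervisor or copy itself
# elsewhere before the switch fires.
[/code]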
If it's roughly human-intelligent, yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.

Intelligence alone cannot overcome the physical. Yes, any security is fallible, to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either and consider a superintelligent being automatically omnipotent.
Agreed. And, what if we made a superintelligent machine and it became socially awkward like many intelligent and superintelligent people?
Give it a gun and see what it does.
You forget, a superintelligence would still be based off of human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself on is humanity.
Omnipotent, no, but for all practical purposes impossible to defeat. In human experience, "someone much smarter than you" invokes pictures of Einstein or von Neumann, but this is the wrong reference class. Imagine something as far beyond the smartest human as the smartest human is beyond the smartest dog. Something like that doesn't need much in terms of physical resources. Give it an internet connection and it will take over the world. Hell, give it just about any way of interacting with a human, and it will use that to convince the human to give it access to the resources it needs (when you consider the ability charismatic humans have to manipulate other humans, it'd be optimistic to the point of ridiculousness to assume a superintelligence wouldn't be able to trick us into doing what it wants).

"Agreed. And, what if we made a superintelligent machine and it became socially awkward like many intelligent and superintelligent people?"
...why would it? The sort of personality a very smart human has is a result of a thousand tangled details in the design of human brains, most of which are results of the way human brains came about as a result of natural selection in a very specific environment, or were accidental side-effects of other things. There's no reason an AI would follow that particular path of all the myriad possible paths a mind can take.
To assume that a superintelligence must be human-like in personality is severe anthropocentric bias; the space of possible minds is not constrained to human-like minds, it just seems that way intuitively because we interact only with human-like minds.
"You forget, a superintelligence would still be based off of human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself on is humanity."
Well, if we are discussing a machine that is basically a super-ultra-god-computer, it wouldn't have emotions, but pure logic. The only example in nature that it would be logical to emulate to give a machine emotions is humans.
Cars are man-made, and we don't expect them to use their wheels as feet to run. An AI theory with some insight into intelligence itself should be able to build an AI without just copying the blind design that is the human brain, in the same way that we can have a theory of motion that allows us to build things that move and aren't just copies of things in nature.
Barring the case where we do sped-up whole-brain emulation for AI or whatever, of course. Which is really not the case I'm discussing here.
"In the end, a proposed hyper-intelligent machine would mostly be a crapshoot, influenced by how it was initially designed in the first place. If it was designed with our well-being in mind and care was taken to ensure that it had few, if any, loopholes that could be abused into allowing it the power of killing humans, then it might end up being more of a benevolent overlord than SHODAN."

The problem is that it has to be able to kill, through act or omission, in order to rule over a planet. Hell, your average town council makes decisions like that when they vote on whether or not to put stoplights in and what sort of funding they are going to provide for emergency services and snow removal. Don't make the mistake of thinking the job can be done without death in the equation.