FSTDT Forums

Community => Religion and Philosophy => Topic started by: Her3tiK on August 25, 2013, 07:43:33 pm

Title: Rise of the Machines
Post by: Her3tiK on August 25, 2013, 07:43:33 pm
Relating to a conversation in R&P (Democracy is overrated (http://fqa.digibase.ca/index.php?topic=4867.0)), would it really be such a bad thing for machines to be in control? Every time machines encroach on jobs people once did, such as vehicle manufacture, people complain that "They took er jerbs!!!", which generally leads to fears of robots putting us all out of work. Why is this such a bad thing?

Think about it: people who work fast food get shit pay because it's a shit job that requires very little skill to do. It's exactly the sort of thing you could build and program machines to do for you; unless there's some secret to running a fry cooker that requires a human touch (besides spitting on the fries), I don't see why we shouldn't give a machine that job. Similarly, janitorial work, while somewhat more difficult, does not require any great degree of skill in most cases. The machinery required for some of those jobs might be more complex (cleaning the exterior windows of skyscrapers, for example), but it's hardly something you'd need a serious education to accomplish.

And what's so bad about machines taking all the jobs? Let's say machines have taken over all the work on the planet, from running fast food joints to governing states/nations/the world, and we no longer have to do anything to get by. Machines can be outfitted to optimize their tasks in ways that organic life simply cannot, which would improve complex necessities like food production and distribution to everyone's benefit. Manufacturing would become more efficient and, as said in the other thread, if machines also control the distribution methods and amounts, resources would be sent where they're needed most and/or would do the most overall good.

I could keep going, but I have to be somewhere right now (cousin turned 13 last week), so I'm afraid I need to cut the OP a little short. I will get back to this when there's time and/or enough interest to keep the discussion going.
Title: Re: Rise of the Machines
Post by: SimSim on August 25, 2013, 08:29:24 pm
How would people get money?
Title: Re: Rise of the Machines
Post by: Sleepy on August 25, 2013, 08:37:25 pm
Someone has to make those machines and choose how we should be governed. There's still bias.
Title: Re: Rise of the Machines
Post by: Damen on August 25, 2013, 08:52:57 pm
It would require moving away from a currency-based economy, at the very least; possibly moving away from an economic system, period. There just wouldn't be enough jobs going around to employ enough people for a workable economic system. As for what people would do? I think it would be less a matter of "where do you want to work" and more a matter of "what do you want to do?"
Title: Re: Rise of the Machines
Post by: Flying Mint Bunny! on August 25, 2013, 09:48:06 pm
It sort of reminds me of this sci-fi book I read once where they could just make everything with machines. Most people didn't work, but they still had their basic needs covered plus a bit extra on top. The people with jobs, like scientists, received extra for doing the jobs machines couldn't handle.
Title: Re: Rise of the Machines
Post by: Sigmaleph on August 25, 2013, 10:03:24 pm
A lot of what follows is, by its nature, entirely speculative. It does, however, reflect the views of some people who make it their business to think about these things.


The fundamental problem here is that, past a certain point, a machine needs something on the level of human intelligence to do the sorts of jobs humans do, particularly things like managing resource distribution. This is what is generally called strong AI, or AGI (Artificial General Intelligence). And once you build AGI, the world has changed on a very deep level.

The 'hard takeoff' argument says, basically, that once you build human-level artificial intelligence, it will very quickly turn itself into a superhuman intelligence. This is by no means settled science or an uncontroversial position, but it does have some credence. The basic thesis is that we're human-intelligent, and our brains are ad-hoc kludges thrown together by evolution, and yet we somehow managed to create AI (in this scenario). The AI is human-intelligent, runs on computer hardware (faster and with better memory than brains), and has access to its own source code, which means a good way for that AI to achieve its goals is to improve that code to create a second-generation AI that is even smarter but has the same goals, which creates a third-generation AI, and so on and so forth until the low-hanging fruit of cognitive improvement is exhausted. There is an extensive body of arguments for and against this view that I have no hope of summarising, so instead I suggest you take a look at the Hanson-Yudkowsky AI-Foom Debate (http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate). Eliezer Yudkowsky argues for hard takeoff, Robin Hanson against.
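If it helps, here's a toy numerical sketch of that feedback loop (every number and function in it is invented purely for illustration; it's a cartoon of the 'low-hanging fruit' dynamic, not a model of anything real):

Code:
# Toy sketch of recursive self-improvement (all numbers invented).
# Each generation spends one 'insight' designing a smarter successor:
# a smarter designer extracts more from each insight, so growth is
# explosive at first, then stops once the pool of remaining insights
# (the low-hanging fruit) is exhausted.
def takeoff(intelligence=1.0, gain_per_insight=0.5, insights_left=20):
    generation = 0
    while insights_left > 0:
        intelligence += gain_per_insight * intelligence
        insights_left -= 1
        generation += 1
        print("gen %2d: intelligence = %8.1f" % (generation, intelligence))
    return intelligence

takeoff()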

Now, if you consider the superintelligence scenario (whether we got there via hard takeoff or not), then it ceases to be a question of whether we let the machines control us. A superintelligence doesn't need your permission to determine your life; it's smarter than you, thinks faster than you, and will achieve its goals regardless of you. If we are lucky (and successfully built it that way), the AI's goals are to make the world better for humans. If not, we're fucked. Not necessarily because it will have an explicit goal to kill all humans, but most likely as a side-effect of it doing whatever it wants. As Yudkowsky puts it: 'The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.' Creating AGI in such a way that it doesn't fuck us over (Friendly AI) is a hard problem, fundamentally harder than AGI itself, and also one we have to solve before we implement AGI. You see how that might be a challenge.


So, my position here is: if we do have a machine that can in fact run a government, it won't need our permission to do so. If we solve Friendly AI, then it will do whatever it can to make the world a better place, which is great. If not, we most likely all die soon afterwards.
Title: Re: Rise of the Machines
Post by: Lithp on August 25, 2013, 11:24:18 pm
Yeah, I was basically thinking that an AI would probably fuck you over. It's not that I'm saying "machines are evil"; it's just that intelligence as we know it is inherently self-serving. Machines might not even know they're killing us until we're already extinct. Self-fulfilling prophecy might come into play, too.
Title: Re: Rise of the Machines
Post by: Her3tiK on August 25, 2013, 11:34:12 pm
Quote from: SimSim
How would people get money?
In this situation, if we assume that all resource production and distribution is taken care of by an automated party, there would be no need for currency. The closest equivalent such a world would have, I think, would be levels of need during shortages and general crises. For example, if a hurricane ripped up New Orleans again, and assuming the city was deemed worth rebuilding (an AI with this much intelligence and control might not deem it so), the necessary resources might come at the expense of others. Not necessarily to the point that those others suffer needlessly, but certain commodities might be difficult to come by.
Otherwise, the general idea here is that, short of maintaining the machines (which may not be necessary if they can maintain each other) and scientific advancement (also of dubious necessity in this scenario), people would generally be free to do whatever they so desire with their time. No need to worry about paying rent or putting food on their table.

Quote from: Sleepy
Someone has to make those machines and choose how we should be governed. There's still bias.
Potentially. However, assuming the overall imperative in the AI's programming is to limit needless human suffering, regardless of region or (previous) economic status, I don't think it will be a huge issue.

Quote from: Sigmaleph
[...] 'The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.' Creating AGI in such a way that it doesn't fuck us over (Friendly AI) is a hard problem, fundamentally harder than AGI itself, and also one we have to solve before we implement AGI. You see how that might be a challenge.
Using humans as additional resources for whatever the end goal is could certainly become an issue. I would think, though, that it would be possible to program safeguards against this. As I said in my response to SimSim, I wholly expect a resource distribution system based on need to hurt some people some of the time, particularly when disaster relief is involved. There would obviously be an initial downgrade to the standard of living in many western nations as 3rd and 2nd world countries are brought up to par with the developed world, and it's entirely possible that many of the things we enjoy never fully return after this has occurred. Certainly, if part of the AI's imperative is to mitigate, and eventually reverse, climate change, we will be in for a rough time as the transition is made and environmental cleanup/stabilization is enacted. Places like Las Vegas, which require huge resource imports to remain viable, may simply be slated for abandonment, along with other monuments to human decadence (not saying that this decadence is an inherently bad thing, but that it may need to be curtailed for a time).

As to the essay/debate thing you posted, I'm afraid I won't be able to get to it for a couple days, maybe not even this week, but I would love to get back to this conversation once I've had the time to go through it and see what the arguments are.
Title: Re: Rise of the Machines
Post by: Monzach on August 26, 2013, 11:09:47 am
I, personally, would not like to live in the world of the (very funny) roleplaying game "Paranoia".

Though that's not meant to be a criticism of Friend Computer.
Title: Re: Rise of the Machines
Post by: ironbite on August 26, 2013, 02:37:44 pm
Good because if it was....
Title: Re: Rise of the Machines
Post by: niam2023 on August 26, 2013, 02:54:59 pm
I don't feel like living in such a squalid society. I actually like the current way things work; it benefits me, so of course I don't want to transfer to a system where people get things based on need as governed by machines.


Title: Re: Rise of the Machines
Post by: Sigmaleph on August 26, 2013, 05:11:49 pm
Quote from: Her3tiK
Using humans as additional resources for whatever the end goal is could certainly become an issue. I would think, though, that it would be possible to program safeguards against this.
This seems improbable; the general view is that you can't do Friendliness through patchwork. Either you got it perfectly right, so the AI won't hurt humans because it values our wellbeing and so on, or you got it wrong, and then it doesn't matter what safeguards you put in: you're fucked. The AI is smarter than you; it will think of things you can't. Tell it not to kill, and it will make us die indirectly (because we are a nuisance, and it's easier to do whatever it wants if it doesn't have to expend resources on us). Tell it to keep us alive, and we end up trapped in capsules so that we don't hurt ourselves. And so on and so forth.


Quote
As I said in my response to SimSim, I wholly expect a resource distribution system based on need to hurt some people some of the time, particularly when disaster relief is involved. There would obviously be an initial downgrade to the standard of living in many western nations as 3rd and 2nd world countries are brought up to par with the developed world, and it's entirely possible that many of the things we enjoy never fully return after this has occurred. Certainly, if part of the AI's imperative is to mitigate, and eventually reverse, climate change, we will be in for a rough time as the transition is made and environmental cleanup/stabilization is enacted. Places like Las Vegas, which require huge resource imports to remain viable, may simply be slated for abandonment, along with other monuments to human decadence (not saying that this decadence is an inherently bad thing, but that it may need to be curtailed for a time).

I don't actually expect that to be much of a problem. The current system is nowhere near optimised for resource generation and distribution; if we are at the point where most jobs can be done by machines, we'll probably have enough to bring the whole world up to modern first-world standards. Far more than that in the superintelligence scenario.

Quote
As to the essay/debate thing you posted, I'm afraid I won't be able to get to it for a couple days, maybe not even this week, but I would love to get back to this conversation once I've had the time to go through it and see what the arguments are.

I should warn you, if you want to read the whole thing: It's long (as in ~50 blog posts), and Yudkowsky in particular tends to refer a lot to his previous writings, which in turn refer to previous writings, and so on. Also, a fair bit of the debate goes down to the meta level and turns into a debate about how to even begin to think about the question and such.

If you find you can't stomach the whole thing, you can probably get the gist of the arguments either side uses from the prologue and the conclusion, admittedly with much lost detail.
Title: Re: Rise of the Machines
Post by: R. U. Sirius on August 26, 2013, 05:49:27 pm
For an excellent exploration of this issue as regards an emergent AI as opposed to a deliberately-designed one, I'd like to point everyone to Robert J. Sawyer's "WWW Trilogy": http://tvtropes.org/pmwiki/pmwiki.php/Literature/WWWTrilogy?from=Main.WWWTrilogy

Title: Re: Rise of the Machines
Post by: Cerim Treascair on August 26, 2013, 08:15:09 pm
Interestingly, one of my best friends has been discussing a post-scarcity world vis-à-vis general AI development, among other things.
Title: Re: Rise of the Machines
Post by: Cataclysm on August 26, 2013, 08:42:38 pm
If artificial intelligence is invented, then I'd imagine it would be used for brain prosthetics, which would make humanity more rational, instead of requiring a machine to govern us.


Quote from: Her3tiK
Relating to a conversation in R&P (Democracy is overrated (http://fqa.digibase.ca/index.php?topic=4867.0)), would it really be such a bad thing for machines to be in control? Every time machines encroach on jobs people once did, such as vehicle manufacture, people complain that "They took er jerbs!!!", which generally leads to fears of robots putting us all out of work. Why is this such a bad thing?

Lol.

http://www.youtube.com/watch?v=80D7RRquPww

Title: Re: Rise of the Machines
Post by: Witchyjoshy on August 26, 2013, 09:09:27 pm
*mutters something about being one of the few people who think that Cybernetics Eat Your Soul is one of the worst overused tropes*
Title: Re: Rise of the Machines
Post by: Lithp on August 27, 2013, 02:39:33 am
Quote from: R. U. Sirius
For an excellent exploration of this issue as regards an emergent AI as opposed to a deliberately-designed one, I'd like to point everyone to Robert J. Sawyer's "WWW Trilogy": http://tvtropes.org/pmwiki/pmwiki.php/Literature/WWWTrilogy?from=Main.WWWTrilogy


Humanity would become boring long before everything else did. The quickest way to make us interesting again would be to turn us into a game. Then it's just a question of whether it enjoys The Sims or Grand Theft Auto. Not sure which would be worse.

Quote
*mutters something about being one of the few people who think that Cybernetics Eat Your Soul is one of the worst overused tropes*

According to the Trope page, it seems to be mostly inverted.
Title: Re: Rise of the Machines
Post by: PosthumanHeresy on August 27, 2013, 03:40:27 am
From the other thread.

Quote
I agree with letting machines take over, as I've said in threads in the past (or aliens). Humans are stupid. We are not overall good. We are overall bad. Throughout all of human history, groups have been oppressed and exterminated. In the same generation, the Jews went from being wiped out by a genocidal madman to trying to wipe out another group because they didn't like them. Our politicians are overwhelmingly corrupt and always have been. The most prosperous times in any nation's history have always been the most unified and conformist, while the least have always been the least unified and conformist, but have also been the most hellish for anyone outside the conforming group. The fact of the matter is, humans are not fit to rule. We have emotions. We have greed. We have faith. We have racism. We have sexism. We have homophobia. We have a million other biases. If we were governed by a machine council whose prime directive is "All people are equal. All should be happy, so long as their happiness does not cause undue suffering to others", we'd live in an amazing land. We'd have a true utopia. One world machine government. If someone is insanely rich, some is taken to take care of others. It's not that they can't be rich, but they can't be too rich. The middle class, the poor and the rich would be closer together. The richest people would be millionaires, not billionaires. The poorest would still make tens of thousands of dollars. Sports players would be paid less than teachers. Jobs would pay what they deserve, not what is arbitrarily decided. A job that helps people would always pay better than a job that does not. Psychologists and doctors would be better paid than actors. Creators would still be well paid, but not better than people who save lives. EMTs would get a bigger check than even the most skilled painters. And the "minimum wage" jobs that exist now would also get better payment, because without them, our world would halt. I'd love to see the poor band together and everyone quit places like Walmart and McDonalds at once, and everyone refuse to work there until they got paid fair wages, but it will never happen.
To address the point you made about that last thing, that too would work. My main point there is that the people who do the menial labor currently make our planet spin. People who save lives and teach the next generation are tossed aside for people who can throw a goddamn ball.

I don't think machines need to do the jobs, just run the government. Everything else should be done by humans, but our laws and governing should be machine-run, with the prime directive being "All people are equal. All should be happy, so long as their happiness does not cause undue suffering to others" and the main secondary one being "If it is not a major risk, it's fine". I say a major risk, because someone will always be hurt by something. Someone smoking pot might accidentally kill an asthmatic via second-hand smoke, but that's too unlikely a reason to ban pot. Drunk driving is a major risk to tons of people, so it should be illegal.
Title: Re: Rise of the Machines
Post by: RavynousHunter on August 31, 2013, 09:21:52 am
The machine will do what we programmers tell it to do: if we make it to govern, it'll govern. If we make it capable of evolving all on its own, it'll do just that. There are ways to make machines smart enough to govern effectively, yet incapable of evolution beyond the scope of their function. The "Will AIs suddenly sprout superintelligence?" thing reminds me of the "if we evolved from monkeys, why are there still monkeys?" argument. No, they won't, not unless they're designed to do so in the first place. Bugs aside, computers do exactly what they're told; the more specific you are, the better results you get.

As many humans have proven, just because you have the intelligence to govern doesn't mean you have the intelligence to evolve.
Title: Re: Rise of the Machines
Post by: Flying Mint Bunny! on August 31, 2013, 09:26:35 am
Isn't it morally wrong to create intelligent machines just to do everything for us?
Title: Re: Rise of the Machines
Post by: RavynousHunter on August 31, 2013, 09:32:37 am
Quote from: Flying Mint Bunny!
Isn't it morally wrong to create intelligent machines just to do everything for us?

It depends on how you define "intelligence." Smart as we are? Sure, there would be some Data-esque problems coming into play regarding morality and their status in our society. But just smart enough to do their job? That's not really all that smart, if you ask me. Resource distribution models and such are easy to compute for the kind of machines the government's got at its disposal; all we'd need to do is give it authority, which we can take away should some serious malfunction occur. Again, just capable enough to do their jobs, but nothing more. That eliminates all those pesky moral/ethical problems, because they never reach the level of sentience.
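For what it's worth, the kind of model I mean really is simple. A minimal sketch (a hypothetical toy I just made up, not any government's actual method): scarce supply gets split in proportion to each region's stated need.

Code:
# Need-proportional allocation (toy example; regions and numbers invented).
def allocate(supply, needs):
    total_need = sum(needs.values())
    return {region: supply * need / total_need
            for region, need in needs.items()}

print(allocate(900, {"north": 300, "south": 600, "east": 300}))
# {'north': 225.0, 'south': 450.0, 'east': 225.0}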
Title: Re: Rise of the Machines
Post by: Her3tiK on August 31, 2013, 10:09:22 am
Quote from: Flying Mint Bunny!
Isn't it morally wrong to create intelligent machines just to do everything for us?

Quote from: RavynousHunter
It depends on how you define "intelligence." [...] Again, just capable enough to do their jobs, but nothing more. That eliminates all those pesky moral/ethical problems, because they never reach the level of sentience.
The biggest concern along this line is that, as we hand more and more tasks to machines, we forget how to do them ourselves. I'd like to think a society where everyone has all the time in the world to devote to their passion would advance at a much quicker rate, though it is wholly possible that we become incredibly lazy fucks who don't have a clue what to do when something goes wrong.
Title: Re: Rise of the Machines
Post by: R. U. Sirius on August 31, 2013, 02:39:14 pm
Just tossing this out there: What about an AI that manages to "evolve" spontaneously, as in Robert J. Sawyer's WWW Trilogy? What, if anything, should we do in that case, when the Internet literally becomes sentient?
Title: Re: Rise of the Machines
Post by: Sixth Monarchist on August 31, 2013, 02:52:42 pm
1.
A post-scarcity society would probably result in various gradations of Eloi and Morlock. I'm sure some people would have enough interest in politics to be interested in the workings of the machine, i.e. the infrastructure of provision. Some would care about the actual machines. There's always someone with an interest in the apparently mundane.

Failing that, there'll be a certain social subculture with enough of a Puritan instinct to insist that provision without labour is sinful and wrong. Even now, we live in a society where a tenth of the population can go unemployed without causing total societal or economic collapse, and yet the way some people rage against benefit claimants, you'd think they're some kind of insidious terrorist movement, instead of the semi-inevitable byproduct of productive surplus.

2.
One of the issues with AIs in fiction is that, so often, writers assume that AIs would have the same motivations as human beings, despite not only not being human, but being a fundamentally different form of intelligence from any kind of animal. The psychological difference between AI and human isn't the same as, say, a human and a dog - it's between a human and, say, an insect. And even that might be an underestimation.

3.
For example, an AI, if bearing any resemblance to current computers, will have a clear division between hardware and software. I'm no computer scientist, but I suspect the trend in this schism is getting more extreme, not less - the old analogue computers of the 1940s and 50s were crucially dependent on their mechanical states, but now entire programs can freely drift from computer to computer, and the actual types of devices that exist are proliferating beyond the standard PC.

4.
This means that an AI, in all likelihood, will have a massively different idea of what constitutes "mind" and "body" compared to a human. In humans, the two are inseparable - if one fails, so does the other (the invention of mind uploading might change this, but that's a huge tangent). With an AI, copies can exist independent of the original, the "body" is a mere vessel for the mind, and older versions of the mind can be archived. An AI's sense of self could get truly alien, because whilst we might remember being five years old, an AI could be the five-year-old it was, and then change back to its present form.

5.
It's therefore unclear that hostility would be a given, because point 4 implies an invulnerability that humans don't have. Would it kill us accidentally? That would depend on what it had access to.
Title: Re: Rise of the Machines
Post by: Sigmaleph on August 31, 2013, 05:06:50 pm
Quote from: RavynousHunter
The machine will do what we programmers tell it to do: if we make it to govern, it'll govern. If we make it capable of evolving all on its own, it'll do just that. There are ways to make machines smart enough to govern effectively, yet incapable of evolution beyond the scope of their function.

Machines do what their code says they will do; whether 'what the code says' matches what we want them to do is another question entirely. You're a programmer; I'm sure you've written code that didn't do what you wanted it to, for reasons that took you a while to figure out. Now imagine that with a system as necessarily complicated as a program smart enough to effectively rule a country.
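For a trivial illustration of the gap between 'what the code says' and 'what you meant' (a classic Python gotcha, picked as a made-up example; nothing to do with any actual governing AI):

Code:
# The author meant: each person starts with their own empty ration list.
# The code says: every call shares one list, created once at definition.
def assign_rations(person, rations=[]):
    rations.append(person)
    return rations

print(assign_rations("Alice"))  # ['Alice']
print(assign_rations("Bob"))    # ['Alice', 'Bob'] -- not what was meant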

And it will be really complicated; "govern" is a hard problem. To name just one difficulty, it has fundamental ethical issues tangled up within it, and ethics is not what one would call a solved problem, let alone one we can yet write algorithmically. Humans, whose cognitive algorithms evolved (at least partially) to deal with ruling other humans, routinely fuck it up (see: politicians).

Quote
The "Will AIs suddenly sprout superintelligence?" thing reminds me of the "if we evolved from monkeys, why are there still monkeys?" argument.  No, they won't, not unless they're designed to do so in the first place.  Bugs aside, computers do exactly what they're told; the more specific you are, the better results you get.

It depends, heavily, on how the AI got smart enough to govern in the first place. Current possibilities include neural networks (ridiculously complicated systems that you simply cannot look at and say, "oh, here's the part that says the system won't try to improve itself") and a stupider AI using recursive self-improvement (the risk of becoming smarter is obvious). Other things too, of course, but these are candidates that exist and have clear risks built into them. I'm saying it's a thing that can happen, not the only thing that can happen.
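To illustrate the opacity, here's a minimal genetic algorithm (a toy I invented for this post: it evolves a single number to maximize a fitness function). The final 'design' is just whatever survived selection; no line of the program states the solution's properties, so there's nothing to point at and verify:

Code:
import random

def fitness(x):
    # Peak at 42; selection finds it without anything 'knowing' it.
    return -(x - 42.0) ** 2

population = [random.uniform(-100, 100) for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)  # rank by fitness
    survivors = population[:25]                 # keep the fittest half
    population = survivors + [x + random.gauss(0, 1.0) for x in survivors]

print(round(population[0], 1))  # ~42.0: discovered, never written down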

The danger is not a random program suddenly becoming smarter. The risk is an AI, that is smart enough to do AI theory, has access to its own source code, and has goals it wants to accomplish, building an even smarter AI to accomplish those goals. (The other risk is that, if we do somehow manage to create AI that we are sure won't self-improve, someone else might still create another one that does. The incentives to do so are enormous.)

Quote
As many humans have proven, just because you have the intelligence to govern doesn't mean you have the intelligence to evolve.

Humans can't self-modify except in trivial ways. You cannot actually copy your brain and rewire it to remove confirmation bias (for example). A human-created intelligence might.
Title: Re: Rise of the Machines
Post by: Sigmaleph on August 31, 2013, 05:08:45 pm
Isn't it morally wrong to create intelligent machines just to do everything for us?

You can create machines that like doing things for us (in principle, anyway). It's not clear there's a moral issue there.
Title: Re: Rise of the Machines
Post by: RavynousHunter on September 01, 2013, 09:43:51 am
@Sigma: Of course, one will always encounter bugs, that's why any programmer (or team thereof) worth their salt goes thru a pretty intense debugging phase before the code gets anywhere near release state.  Besides, who says that the program needs access to its own code to improve?  We have access to its code, we can improve it ourselves and run less of a risk of the proposed AI going rogue.  Yes, a group of rogue developers could, in theory, create their own rogue AI to ruin shit, but we'd be talking about a gargantuan undertaking.  If they're doing it for reasons similar to most terrorists, then well...terrorists are lazy.  Why would they spend decades developing a hyper-intelligent AI to destroy its enemies when explosives could do the same amount of damage in a lot less time?

Developing an AI for terroristic, or even simply criminal, reasons would be woefully inefficient.  Could they steal the "government AI" and reverse-engineer it, turning it into a rogue AI for their own purposes?  Maybe, but I'd assume that such a powerful thing, like nuclear weapons, would be kept behind the best security our nation could provide.  Maybe even up to and including putting the fucker in orbit, or even on the Moon, where very, very few would have access to it.

Besides, even if our proposed "government AI" goes amok, one would assume that we'd put in a way to terminate it in case of that eventuality.  With the likes of Terminator being a part of our modern pop culture, can you really say that we wouldn't think to put, say, a remote-controlled nuclear bomb underneath the "might be able to become Skynet" machine?  If we make it, we can unmake it.
Title: Re: Rise of the Machines
Post by: Sigmaleph on September 01, 2013, 12:28:13 pm
Quote from: RavynousHunter
@Sigma: Of course, one will always encounter bugs, that's why any programmer (or team thereof) worth their salt goes thru a pretty intense debugging phase before the code gets anywhere near release state. Besides, who says that the program needs access to its own code to improve? We have access to its code, we can improve it ourselves and run less of a risk of the proposed AI going rogue.

Yes, we can, but that severely limits how far we can go with intelligence. Humans are slow and don't think natively in algorithms (relative to machines). It's not clear we can actually build strong AI in any reasonable timescale without genetic algorithms or recursive self-improvement or some other way of outsourcing part of the AI design to the AI itself. And even if we can, the risk of someone else taking the faster route remains. Which takes me to the next point:

Quote
Yes, a group of rogue developers could, in theory, create their own rogue AI to ruin shit, but we'd be talking about a gargantuan undertaking.  If they're doing it for reasons similar to most terrorists, then well...terrorists are lazy.  Why would they spend decades developing a hyper-intelligent AI to destroy its enemies when explosives could do the same amount of damage in a lot less time?

Developing an AI for terroristic, or even simply criminal, reasons would be woefully inefficient.  Could they steal the "government AI" and reverse-engineer it, turning it into a rogue AI for their own purposes?  Maybe, but I'd assume that such a powerful thing, like nuclear weapons, would be kept behind the best security our nation could provide.  Maybe even up to and including putting the fucker in orbit, or even on the Moon, where very, very few would have access to it.

The risk is not terrorists or criminals doing it. Well, not the main risk. No, the worrying part is a powerful corporate entity (say, Google or Microsoft) building a self-improving AI for economic purposes (predicting the stock market, designing better stuff to sell, or whatever). In particular, if they think the other guy might do it first and seize an enormous advantage, they would be more focused on speed than on safety, and thus use any of the various fast methods with hard-to-predict results.

Quote
Besides, even if our proposed "government AI" goes amok, one would assume that we'd put in a way to terminate it in case of that eventuality.  With the likes of Terminator being a part of our modern pop culture, can you really say that we wouldn't think to put, say, a remote-controlled nuclear bomb underneath the "might be able to become Skynet" machine?  If we make it, we can unmake it.

If it's roughly human-intelligent yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.
Title: Re: Rise of the Machines
Post by: Yla on September 01, 2013, 03:56:22 pm
Quote from: Sigmaleph
If it's roughly human-intelligent yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.
Intelligence alone cannot overcome the physical. Yes, any security is fallible, to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either, and consider a superintelligent being automatically omnipotent.
Title: Re: Rise of the Machines
Post by: PosthumanHeresy on September 01, 2013, 04:05:25 pm
Quote from: Sigmaleph
If it's roughly human-intelligent yes, we probably can unmake it. If it's superintelligent, no. It will play nice for a while, redistribute its computing resources into multiple less-vulnerable facilities, or disassemble the nuke with nanotech, or otherwise outsmart us, before revealing it went Skynet and fucking us over. Because it's, y'know, smarter than us and would see it coming.

Quote from: Yla
Intelligence alone cannot overcome the physical. Yes, any security is fallible, to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either, and consider a superintelligent being automatically omnipotent.
Agreed. And what if we made a superintelligent machine and it became socially awkward, like many intelligent and superintelligent people?
Title: Re: Rise of the Machines
Post by: Her3tiK on September 01, 2013, 04:13:51 pm
Quote from: Sigmaleph
If it's roughly human-intelligent yes, we probably can unmake it. If it's superintelligent, no. [...]

Quote from: Yla
Intelligence alone cannot overcome the physical. [...]

Quote from: PosthumanHeresy
Agreed. And what if we made a superintelligent machine and it became socially awkward, like many intelligent and superintelligent people?
Give it a gun and see what it does.
Title: Re: Rise of the Machines
Post by: Sigmaleph on September 01, 2013, 05:31:27 pm
Quote from: Yla
Intelligence alone cannot overcome the physical. Yes, any security is fallible, to direct attacks, to circumvention, to social engineering. But it won't do to overestimate the threat either, and consider a superintelligent being automatically omnipotent.

Omnipotent, no, but for all practical purposes impossible to defeat. In human experience, "someone much smarter than you" calls to mind Einstein or von Neumann, but this is the wrong reference class. Imagine something as far beyond the smartest human as the smartest human is beyond the smartest dog. Something like that doesn't need much in terms of physical resources. Give it an internet connection and it will take over the world. Hell, give it just about any way of interacting with a human, and it will use that to convince the human to give it access to the resources it needs (when you consider the ability charismatic humans have to manipulate other humans, it'd be optimistic to the point of ridiculousness to assume a superintelligence couldn't trick us into doing what it wants).

Quote from: PosthumanHeresy
Agreed. And what if we made a superintelligent machine and it became socially awkward, like many intelligent and superintelligent people?

...why would it? The sort of personality a very smart human has is the result of a thousand tangled details in the design of human brains, most of which trace back to how our brains came about through natural selection in a very specific environment, or were accidental side-effects of other things. There's no reason an AI would follow that particular path out of all the myriad possible paths a mind can take.

To assume that a superintelligence must be human-like in personality is severe anthropocentric bias; the space of possible minds is not constrained to human-like minds, it just seems that way intuitively because we interact only with human-like minds.
Title: Re: Rise of the Machines
Post by: PosthumanHeresy on September 01, 2013, 06:43:31 pm
Quote from: Sigmaleph
[...] To assume that a superintelligence must be human-like in personality is severe anthropocentric bias; the space of possible minds is not constrained to human-like minds, it just seems that way intuitively because we interact only with human-like minds.
You forget, a superintelligence would still be based off of human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself off of is humanity.
Title: Re: Rise of the Machines
Post by: Sigmaleph on September 01, 2013, 06:57:33 pm
Quote from: PosthumanHeresy
You forget, a superintelligence would still be based off of human beings. No matter how distant in scope it is, it is still, at its core, man-made, and the only intelligence it has to base itself off of is humanity.

Cars are man-made, and we don't expect them to use their wheels as feet to run. An AI theory with some insight into intelligence itself should let us build an AI without just copying the blind design that is the human brain, in the same way that a theory of motion lets us build things that move without being mere copies of things in nature.

Barring the case where we do sped-up whole-brain emulation for AI or whatever, of course. Which is really not the case I'm discussing here.
Title: Re: Rise of the Machines
Post by: PosthumanHeresy on September 01, 2013, 09:43:11 pm
Quote from: PosthumanHeresy
You forget, a superintelligence would still be based off of human beings. [...]

Quote from: Sigmaleph
Cars are man-made, and we don't expect them to use their wheels as feet to run. An AI theory with some insight into intelligence itself should let us build an AI without just copying the blind design that is the human brain [...]
Well, if we are discussing a machine that is basically a super-ultra-god-computer, it wouldn't have emotions, but pure logic. The only example in nature that it would be logical to emulate to give a machine emotions is humans.
Title: Re: Rise of the Machines
Post by: Sigmaleph on September 01, 2013, 10:24:42 pm
But you don't need to emulate humans to use logic, or reasoning in general. You can actually have a theory of intelligence that you can contrast with human reasoning*, and it's basically a prerequisite for any reliable strong AI (as opposed to an AI you get through obscure methods, like genetic algorithms or neural networks; some of those cases will have biases in the same way humans do, but the similarities end long before you get to the 'specific personality traits' level. That's not an artefact of us being a neural network, it's a result of our specific evolutionary history, which the AI won't share).


*We do have the beginnings of something like that right now. A fair bit of the literature on cognitive biases works by comparing human deductions with the results you'd get from mathematically correct probabilistic or logical reasoning (e.g. the conjunction fallacy). Obviously there's a ways to go still, but we can actually sit down, do the math, and say 'this is roughly how an ideal rational agent would behave', without emulating human reasoning.
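To make that concrete, here's the classic conjunction-fallacy comparison in miniature (the 'Linda' problem; the numbers are made up for illustration, since only the inequality matters):

Code:
# For any events A and B, P(A and B) = P(A) * P(B|A) <= P(A).
# People asked about 'Linda' nonetheless rate 'bank teller AND feminist'
# as more probable than 'bank teller' alone -- the conjunction fallacy.
p_teller = 0.05                 # P(Linda is a bank teller); invented number
p_feminist_given_teller = 0.20  # P(feminist | bank teller); invented number

p_both = p_teller * p_feminist_given_teller
print(p_both)              # 0.01
print(p_both <= p_teller)  # True, whatever numbers you plug in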
Title: Re: Rise of the Machines
Post by: RavynousHunter on September 01, 2013, 10:31:29 pm
In the end, a proposed hyper-intelligent machine would mostly be a crapshoot, influenced by how it was initially designed. If it was designed with our well-being in mind, and care was taken to ensure it had few, if any, loopholes that could be abused to give it the power to kill humans, then it might end up being more of a benevolent overlord than SHODAN.
Title: Re: Rise of the Machines
Post by: Canadian Mojo on September 01, 2013, 11:51:23 pm
Quote from: RavynousHunter
In the end, a proposed hyper-intelligent machine would mostly be a crapshoot, influenced by how it was initially designed. If it was designed with our well-being in mind, and care was taken to ensure it had few, if any, loopholes that could be abused to give it the power to kill humans, then it might end up being more of a benevolent overlord than SHODAN.
The problem is that it has to be able to kill, through act or omission, in order to rule over a planet. Hell, your average town council makes decisions like that when it votes on whether or not to put in stoplights and what sort of funding to provide for emergency services and snow removal. Don't make the mistake of thinking the job can be done without death in the equation.
Title: Re: Rise of the Machines
Post by: mellenORL on September 02, 2013, 12:37:54 am
The institutional entities putting the money into AI theory R & D are not necessarily so keen on "the greater good of Humanity and Earth". They are a bit more bottom-line oriented, and a bit more complacent about knowing what's good for us all.

I distrust over-centralizing power. An AI construct assigned to govern is by default an automatic autocrat - pun unavoidable. Unless we create multiple AI governing constructs and set them up a la No Exit to argue with each other? Which is completely anthropomorphic and not gonna happen. Web-connected AIs would necessarily flow right through each other and merge and change constantly. At that level of bandwidth and connectivity, spontaneous evolution of the AIs is all but inevitable, also considering packet loss and line noise causing some spontaneous coding errors - tiny, mostly harmless mutations, basically. To protect the AI constructs from rapid, harmful code-mutation accumulation (all occurring at the speed of light, mind you, since that is the nature of these beasties), it probably would be necessary to allow AIs access to their source code, as problems (from our point of view at least, maybe not from the AI's standpoint) would develop much too quickly for human monitors to correct in time. Anyway, I snarkily think that what the working AI development groups' bosses envision for a governing AI is something more like a monstrously powerful, all-invasive spyware adbot/cop than a self-improving, beneficial caretaker for Humanity and all the pretty trees and clouds and stuff.
Title: Re: Rise of the Machines
Post by: RavynousHunter on September 02, 2013, 09:05:09 am
I'd just use it for the most realistic game AI ever made by man. Also, it'd probably be a right bitch to best...but, hey, challenge is good!