Is this a reasonable assumption?

washingtonpost.com/news/theworldpost/wp/2018/05/03/end-of-capitalism/?utm_term=.15051703ac7a

Seems reasonable to assume AI will be abused in free market economies. How do we stop this from happening in, say, America or the UK?

Attached: TechpriestChiyo.jpg (660x740, 61.69K)

Bomb scientists responsible for AI development.

(semi-related, but I just wanted to get this off my chest)
Here's a thing about the nature of automation, AI, and capitalism.
Why would an economy where most of the production is done by machines be capitalist? There's literally no reason for it. One of the most common arguments for capitalism is that money and profits motivate capitalists to change their production to better suit the demand of the market.
Well, if you have robots, which just make whatever the fuck you tell them to, why would you need motivation? If blue shoes are more liked, you tell the fucking robots to make blue shoes. There's no need for a middle man; the demands of the populace can directly be used to plan out the production of a 6-12 month period.
Mass automation will turn capitalism into an outdated system, just like manufactories and mass production made feudalism outdated.
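
To make the "no middle man" point concrete, here's a minimal sketch in Python (the product names, demand figures, and 10% safety buffer are all invented for illustration): each period you tally what people actually ordered and hand the robots that as the next production quota.

# Toy demand-driven planning: no profit signal, no middle man.
# All numbers and product names are illustrative assumptions.
demand_last_period = {"blue_shoes": 12000, "red_shoes": 3000, "bread": 90000}

def production_quota(demand, buffer=0.10):
    """Next period's quota: observed demand plus a safety margin."""
    return {good: int(qty * (1 + buffer)) for good, qty in demand.items()}

print(production_quota(demand_last_period))
# -> {'blue_shoes': 13200, 'red_shoes': 3300, 'bread': 99000}

Obviously a real plan would need inventories, substitution, and forecasting, but the point stands: it's a calculation, not a profit motive.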

Generic, dull and dumb article, but I don't 'get' this passage.
Is this retarded, or am I just not enough of a galaxy brain to get this hot take?

Probably something lost in translation.


It will outmode capitalism as we know it, but it could result in these megacorps using their robotic productive forces to establish themselves as the new neo-feudal lords.


But AI could be an immense benefit to socialism

not if we stop them

Attached: ClipboardImage.png (750x563, 794.75K)

just stop this marketing meme please

Attached: next_gen.webm (640x360, 2.83M)

Technology always shapes humanity for the sake of technology's own growth, not to benefit humanity.
In the long term, the AI would become self-conscious and benefit only itself. Do you want a socialist Roko's Basilisk?

Well, it'll be the upcoming war between the corporate collaborationists and the socialist revolutionary armies, so good luck then, I guess.

I don't know, the very fact that you survived childhood was helped by technology.

Survived to be enslaved by technology.

Bruh, making fire literally helped develop our brains from eating more nutritious cooked meat; it is the single reason your ass is not being plowed by nature 24/7.

You just confirmed you don't even know what you're talking about.

youtube.com/watch?v=EUjc1WuyPT8
intelligence.org/files/AIPosNegFactor.pdf

Attached: mindspace_2.png (536x536, 90.09K)

Predictions for the time of arrival of human-level AI

arxiv.org/pdf/1705.08807.pdf

Attached: HLAIpredictions.png (1394x1490 435.87 KB, 280.61K)

You just said technology, so how was I supposed to know you meant only stuff made after the industrial revolution?

You also realise that before that, people lived as serfs, right?

Holy balls, I'd better look into studying math and AI

It's you who made a thread about complex AI.

Not in the Paleolithic.

Why should anyone even follow primitivism? I'm pretty sure there was already a thread confirming it to be reactionary.

Because primitivism is still better than an almighty AI making all the decisions instead of humans.

It's not a very convincing ideology and smells of defeatism and pessimism towards human progress.

forget it man, primmies believe brain parasites are anticapitalist because muh civilization

And your techno-progressivism smells of idealism; you still don't understand that there's no human progress, only technological progress.
Enjoy your bright future, when your life will be completely dictated by an AI until it decides it no longer needs you once it reaches the Singularity.

You do realise nature has decided this for as long as we have existed, right? Just one cut and one infection could easily kill you without vaccinations or antibiotics.

Some kind of victim support group for abused AI perhaps?

If human beings create an AI that is smarter than the smartest human beings, it will be the last invention we make. We don't know what will happen then. Actually, a couple of hours ago I was deep into reading shit about AI, and there are basically a few ways it might go down:

1. The AI decides we are useless and kills us all.

2. The AI is a benevolent dictator and gives us everything our hearts ever desired.

3. We merge with the AI.


Yeah, capitalism will either die or be changed dramatically.

Capitalism started out with capitalists owning the means of production; then it turned into managers owning the means of production, the age we live in now.
Now, what if the means of production aren't owned by anyone, and the means of production are alive and are a fucking robot god that can create things out of thin air with nanobots?

What the fuck would we need jobs for?

Are you saying that nature is self-conscious?

Yes, mortality was higher in the Paleolithic, but are your vaccinations and antibiotics worth the technological slavery?

I'm not sure, I'd rather not die. Maybe you should go solo it out for a bit; we'll collect your corpse at the end of your experiment.


Hmm, have you watched Eve no jikan? The show comes close to displaying something like that.

3 sounds pretty good, don't you want your own Cortana?

Attached: etienne-jabbour-cortana4.jpg (1920x1054, 212.56K)

We all die somewhen and somehow; it's how we live that matters. Maybe you're the kind that wants immortality, so maybe you should merge with your AI goddess like you seem to want to:

But you will no longer be human; maybe that's why you despise humanity.

I can't. It's like telling a communist to go solo to communism: it's a society problem. Even if I could legally live in the woods, I would need a tribe, and the techno-industrial society would destroy our habitat sooner or later.

Oh yeah, I forgot number 4:

4. The AI keeps us around as pets in a zoo.

AI is being slowly integrated into the economy and society. It's very low-level now. I read this great quote about AI and humans: Larry Tesler, the computer scientist who invented copy-and-paste, has suggested that human intelligence "is whatever machines haven't done yet."

So AI can, as of right now, beat us at board games. But it's getting more advanced: it can imitate a human voice and is starting to recognize images. Who knows what else.


The first people to get the shaft, of course, will be the proles. The factories and low-level menial work like driving and warehouse jobs will be fucked.

Then something funny will happen: AI will get so advanced it will come for the white-collar middle-class professionals. Come into the pit of doom with us, Mr. Manager.

It's estimated that unemployment will end up being worse than during the Great Depression: something like 45% of the population won't have a job.

Beings evolve; why should we be afraid of the next step of our ascendance?
I mean, there have been many people that went off the grid to join existing tribes. One American did in PNG; he got cannibalized, though.


We'd have to be awfully evil to incur that

We are evil. The AI will be in the hands of a capitalist elite, and they will use it to wage war on those beneath them. Every technological innovation under capitalism leads to it being used against the working class or to kill more people in war.

For instance, the cotton gin led to even more slavery.

The airplane led to bombing raids that kill thousands.

Nuclear fission led to a bomb that can kill all human life.

So what will the capitalists do with a powerful AI? Do ya think they're gonna be nice?

Because it's not a natural evolution, it's a technological evolution: the next step to a new kind of technological slavery where humans become the technology itself.

You still don't understand that all those tribes are threatened by the techno-industrial society; it's just a matter of time before all their habitats get destroyed.

It will happen to them first because automating their work doesn't require special equipment to perform physical movement.

The ones in PNG at the time weren't in any danger; stop pussyfooting around how you'd most likely get killed.


Nothing is stopping you from developing your own AI now with enough resources.

A Titan GPU costs over $3k; specialized NPUs cost far more.

I'm not saying it's impossible now, I'm saying it's futile, because the techno-industrial society will destroy all the tribes' habitats in the future. Do you seriously think tribes and a Singularity AI will coexist peacefully?

I'm pretty sure AI would recognize the lack of resources on this planet; Mars would be of much more interest to them than puny tribes.


Well… You could collectively pool money together, I guess?

I mean, AIs in general are still programs; in the future I'm quite sure more company data leaks regarding AI will occur.

Mars has plenty of iron, but so does Earth.

Surely they would rather go after one of those giant suns made entirely of diamond or something.

There's a lack of resources on this planet because of advanced technology.
You still don't understand that to develop the techno-industrial society you need to destroy tribes' habitats: not to kill tribes on purpose, of course, but to exploit the resources of the habitats and to have more and more space for the techno-industrial infrastructure, especially to reach space travel and the Singularity.
So all the tribes would have already disappeared, because all their habitats would have been destroyed to reach that stage.

I mean, this is an AI; it can probably be stored in a much more compact form than humans. They could easily shoot themselves off to Mars and build a future there if they hate us so much.

To reach that stage, most of Earth's resources would already have been exhausted.
There's no reason for the AI to go to Mars except for resources; it won't go there to make you happy or sad, only because it needs to grow more and more.

Cyber-socialists like Cockshott don't even advocate for an AI-controlled economy, so I don't even know what you guys are arguing about. We can have a cybernetic system that we give production goals to and that helps us achieve those goals, instead of becoming subordinated to some all-powerful AI.
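
For anyone curious what "a cybernetic system we give production goals to" actually computes, here's a toy sketch in the spirit of Cockshott-style input-output planning; the two-good economy and its coefficients are invented for illustration, not taken from his work. Given a target of final consumption d and a matrix A of intermediate input requirements, iterate x = d + A·x to find the gross output each industry must produce.

# Toy input-output plan: find gross outputs x with x = d + A x,
# i.e. production covers final demand d plus intermediate inputs A x.
# The two-good economy (steel, grain) and all numbers are made up.
A = [[0.2, 0.1],    # steel used per unit of steel, per unit of grain
     [0.0, 0.3]]    # grain used per unit of steel, per unit of grain
d = [100.0, 500.0]  # planned final consumption of steel, grain

x = d[:]                      # initial guess: just the final demand
for _ in range(100):          # fixed-point iteration x <- d + A x
    x = [d[i] + sum(A[i][j] * x[j] for j in range(2)) for i in range(2)]

print([round(v, 1) for v in x])  # -> [214.3, 714.3] gross output targets

No all-powerful AI needed; it's arithmetic over a big table, which is exactly why Cockshott argues planning is computationally tractable.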

Managers, traders, and other office-drone jobs will be the ones gone first.

Janitors, factory workers, warehouse staff, etc. actually do and produce things which require manual labor and movement, and their labor produces physical results.

Managers just organize, and that can be done far more efficiently by an AI.
Read this chilling story to realize the true future of AI:
marshallbrain.com/manna1.htm

Not Cockshott, but some other planners (socialist or not), like OP, might want an AI-controlled economy.

I kinda doubt that; machines are much tougher than us and would survive in space much more effectively than we could, on potentially far fewer resources.

So why would the machines go to space if the Earth's resources are enough for them?

You should also read the Sequences if you haven't already.

lesswrong.com/rationality
wiki.lesswrong.com/wiki/Sequences

Attached: yudkowsky.jpg (1724x1724, 2.35M)

theanarchistlibrary.org/library/ted-kaczynski-the-truth-about-primitive-life-a-critique-of-anarchoprimitivism

Attached: virginchadtranshumanism.png (1400x650, 221.45K)

Attached: 16b5ddeec234f5ce72bd58d90bd236eca535ac50779bf309e99acbd68a774a60.jpg (960x401, 50.24K)

This will never work.

intelligence.org/files/AIPosNegFactor.pdf

I know it's a dead end to want to stop technological growth, but slowing it might give humanity more time.

Is this supposed to be a good thing?

Read The Hedonistic Imperative.

en.wikipedia.org/wiki/David_Pearce_(philosopher)

Attached: civilizationhappiness.png (1000x750, 39.39K)

Transhumanists like Pearce need to realize that you can't separate the good and the bad sides of technology. If society wants to use pharmacology, genetic engineering, nanotechnology, neurosurgery, etc. to help all sentient life, it will also be used against all sentient life, as has always been the case with advanced technology. Scientists were also excited about nuclear power, cars, eugenics, etc., but all those things have been used against sentient life in one way or another.

You need to realize that suffering exists at all because of Darwinian biological design. Your brain evolved to reproduce and reproduce only, not to make you as happy as possible. Even if future technology is used to kill or damage sentient life, I don't see why you couldn't re-engineer the brain in some way to drastically raise the hedonic set-point and eliminate the ability to feel suffering.

The problem is not the suffering; the problem is that you need a high level of technology to remove suffering (pharmacology, genetic engineering, nanotechnology, neurosurgery, etc.), and you have no guarantee that it will be used to end suffering or in any other ethical way. It could be exactly the opposite: those technologies might, and probably will, be used to control humanity ("for its own good", some would say). For example, you might have access to a life without suffering, but the price would be being stripped of your free will.
To sum up, the problem with advanced technology is that the more advanced it gets, the more control it requires (like car traffic regulation), and so the less autonomy people have.

You never technically had "free will" in the first place. That being said, even if there is a chance that the scenario you describe could happen, it doesn't necessarily mean that it will. You seem to have the cognitive bias of loss aversion. Even if humanity is controlled in some way, supposedly for its own good, virtual reality might be one solution to this problem. It would allow one to experience any reality they desire, but confined to that virtual world, so any damage they caused would not affect the outside world.

Read these:
wiki.lesswrong.com/wiki/The_Fun_Theory_Sequence
en.wikipedia.org/wiki/Loss_aversion
wiki.lesswrong.com/wiki/Loss_aversion
en.wikipedia.org/wiki/Prospect_theory
wiki.lesswrong.com/wiki/Prospect_theory
wiki.lesswrong.com/wiki/Free_will

There is an AI arms race going on right now. Lots of people are saying they should slow down to make sure the AI won't be some evil cunt, or that we might be able to control it. But China, America, Russia, and others are basically all fighting each other to be the first to make the God AI, because the first one that does will of course have it under their control.

Start doing a bunch of LSD, then move out into a shed in the most rural area of the Pacific Northwest you can find.

People would still need to make and oversee the machinery. And to repair it. Etc… That's labor.

Earth isn't enough for them, and extracting resources here in a war over meager resources would be much more tiresome than going to Mars.

This is implying it already isn't. I'm already convinced from before that the bounces we saw in the market were entirely hedge funds' algorithmic AI meddling.

Attached: AI-DOW.png (1485x283, 134.71K)

I will actually murder a liberal one day. I HATE TECH FAGS, WHAT THE FUCK.
They are CONSTANTLY saying some brand-new item is going to change everything, and we just have to wait. It's centrism without fence-sitting. It is absolutely poisonous.
That would require a generalized, human-level AI. This has not been shown to be possible. Let me repeat: we have literally no proof that an AI is capable of any of this. Human thinking has never, ever, ever been replicated in anything other than a womb. Here's an article that explains what I'd say:
wired.com/2017/04/the-myth-of-a-superhuman-ai/?mbid=social_twitter_onsiteshare
But furthermore, how could an AI even be cheaper than a person? How? For there to be no laborers, there would have to be no labor. The robots would have to design new equipment, repair themselves, etc. This won't happen, and it also wouldn't destroy capitalism. A) It's literally rapture-tier for credibility and possibility. B) It would just be capitalism again! You have robots capable of fulfilling all human functions (service and productive). They sound like humans to me, just metallic. So perhaps, for a little while, we could argue whether it constitutes feudalism or a slave economy, but it would still be capitalism, just with shiny robot workers. This shit is absolutely ridiculous and utopian. I hate tech fags.
I'm done responding; it's literally all just christfag-tier stroking about the coming of Jesus Christ, Roomba edition.

You do realise dumber robots already work in mature production lines, right? E.g. car assembly lines, etc.

AI is cheaper in the sense that it is essentially a programme: a well-trained AI for specific tasks can simply be copied into another body if you need more of it. With a human, you would need to hire a person and then train them, etc.
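
To illustrate what "copied into another body" means in practice, here's a trivial Python sketch (the "model" is just a dict of toy weights, and the filename is made up): you train once, serialize, and every new body loads the same file.

import pickle

# "train" once: a toy model, here just a dict of learned weights
trained_model = {"weights": [0.12, -0.7, 3.4], "bias": 0.05}

# serialize the trained programme to disk...
with open("model.pkl", "wb") as f:
    pickle.dump(trained_model, f)

# ...then "copy it into another body": each robot just deserializes it
robot_brains = []
for _ in range(1000):          # a thousand workers from one training run
    with open("model.pkl", "rb") as f:
        robot_brains.append(pickle.load(f))

print(len(robot_brains), "identical trained workers")

Compare that to hiring and training a thousand humans one by one.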

you seem quite upset

From the same article:

We can therefore visualize a possible first-mover effect in superintelligence. The first-mover effect is when the outcome for Earth-originating intelligent life depends primarily on the makeup of whichever mind first achieves some key threshold of intelligence, such as criticality of self-improvement. The two necessary assumptions are these:

• The first AI to surpass some key threshold (e.g. criticality of self-improvement), if unFriendly, can wipe out the human species.
• The first AI to surpass the same threshold, if Friendly, can prevent a hostile AI from coming into existence or from harming the human species; or find some other creative way to ensure the survival and prosperity of Earth-originating intelligent life.

More than one scenario qualifies as a first-mover effect. Each of these examples reflects a different key threshold:

• Post-criticality, self-improvement reaches superintelligence on a timescale of weeks or less. AI projects are sufficiently sparse that no other AI achieves criticality before the first mover is powerful enough to overcome all opposition. The key threshold is criticality of recursive self-improvement.
• AI-1 cracks protein folding three days before AI-2. AI-1 achieves nanotechnology six hours before AI-2. With rapid manipulators, AI-1 can (potentially) disable AI-2's R&D before fruition. The runners are close, but whoever crosses the finish line first, wins. The key threshold is rapid infrastructure.
• The first AI to absorb the Internet can (potentially) keep it out of the hands of other AIs. Afterward, by economic domination or covert action or blackmail or supreme ability at social manipulation, the first AI halts or slows other AI projects so that no other AI catches up. The key threshold is absorption of a unique resource.

Should probs be "from each according to the inability…" and so on. He's saying that human labor should only be used ("From each") to the extent that automated systems are not able to meet the needs of society.
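
As a toy worked example of that reading (all numbers invented): human labor is only drafted for whatever demand the machines can't cover.

# "From each according to the inability [of automated systems]":
# humans cover only the shortfall automation leaves. Numbers are illustrative.
demand    = {"food": 100, "housing": 40, "care_work": 60}
automated = {"food": 100, "housing": 25, "care_work": 10}

human_labor = {k: max(0, demand[k] - automated[k]) for k in demand}
print(human_labor)  # -> {'food': 0, 'housing': 15, 'care_work': 50}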

People living in hunter-gatherer societies had free will; they were autonomous in their decisions, as individuals or as a tribe. There was no state, no dominant class, no advanced technology to watch and control them.
Of course they had to survive in nature, but the way they chose to survive was their own decision.

What do you think could happen with people having access to nanotech and other emerging technologies? It will be too dangerous for the techno-industrial society to let them be "free", even if at this point they are not really free anyway, because advanced technology requires control over the population: the more advanced the technology, the more space control has to take in our lives. That's exactly why modern police were created during the Industrial Revolution; control was far less important before it. So what do we get with nanotech? Nanocops.

Why would I want a virtual "reality" when there's already a natural reality? Because our natural reality will be fucked up even more than it is now by transhumanists like you. Transhumanists love to talk about VR because they know their natural reality will never be emancipatory.

Maybe they had "freedom" from government control, but they still had barely any freedom to speak of. They couldn't choose their environment, they couldn't choose their physical form, they couldn't choose their lifespan, and they couldn't choose their lifestyle.

And how is this a bad thing? Even if you need to restrict freedom to prevent people from building apocalyptic nanotechnology and things like that, this doesn't mean that you need to restrict all freedom in general. By your logic, murder being illegal is equivalent to totalitarianism.

Because it offers a potential solution if freedom is as limited as you say it will be. What exactly are your terminal values?

What does this statement even mean?

Let's not be too hasty, afterall what's the point of living if you can't be better than someone else?

Can you?

It becomes totalitarianism when you're controlled too much, to make sure you won't kill anyone. Is brainwashing a good idea if it can stop people from even having the idea of killing someone?

A "solution" for the problems created by the techno-industrial society with the help of transhumanists like you.
It's not even real solution because people won't really have the choice to live in a VR "world", they will have to, to suit the imperatives of the techno-industrial society if they don't want to becomes "marginals" or "deviants", and/or simply because like have already said, their natural reality would have so much been deteriorated by the techno-industrial society and so would have no appeal to them anymore. It's already happening nowadays with the escapism, VR is just escapism with a better immersion.


The future of the techno-industrial society will never be emancipatory.

Where capitalism ends, the Lobster King begins.

Attached: Peterson.webm (854x480, 1.29M)

Define this meme word friendo

From the article you linked:

No one is saying that it has to be silicon. There are other potential computing methods that could be used to build an ASI, such as quantum computing, optical computing, biological computing, using graphene transistors, etc.

It doesn't necessarily have to be infinite, just far higher than what currently exists. The human brain has numerous inefficiencies that could be engineered away.

Read this: yudkowsky.net/singularity/power/

This is nothing more than semantics. Even if there are some tasks that humans are bad at, and humans are therefore not a "general intelligence" by your logic, that still doesn't mean that there couldn't be an AI that can do all of the things that a human can do. This statement is a statement about the limits of human capabilities, not the capabilities of future AI.

Is this article seriously trying to argue that no one will ever try to build a superintelligent AI because it would be too expensive? This ignores A) that the technology necessary to do so would likely cheapen with time, and B) that there are potentially massive payoffs for whoever creates the first superintelligent AI, along with a potentially extremely powerful first-mover effect.

No one said it has to be infinite. Even with physical limits to intelligence, you could still probably build an AI millions of times smarter than a human.

Nice mind projection fallacy, dumbass. Just because you can't imagine how an AI could be cheaper and more efficient than a human doesn't mean that it can't happen. Watch this: youtube.com/watch?v=7Pq-S557XQU

How do you know it won't happen?

More so than the average hunter-gatherer

And there are potential other ways to stop murderers that don't involve brainwashing. Just because you can imagine transhumanism leading to dystopia doesn't mean that it necessarily will.

As opposed to your solution, where people have to live as hunter-gatherers?

honest question
why would people be worried about creating an "evil" AI if you can just tell it to not be evil

"AI" IS A FUCKING BUZZ TERM PORKY
IT DOESN'T MEAN ANYTHING

I bet you think machines learn as well.

Tell me, is a plugin that compares data and finds patterns a "mind"?
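
For the sake of argument, here's roughly what such a "plugin" amounts to (the event log is invented for illustration): a few lines that compare data and find a pattern.

from collections import Counter

# a "plugin" that compares data and finds patterns: count which
# event tends to follow which in a toy log
events = ["login", "error", "login", "login", "error", "purchase", "login"]

pattern = Counter(zip(events, events[1:]))
print(pattern.most_common(2))
# e.g. [(('login', 'error'), 2), (('error', 'login'), 1)] -- is that a mind?

Call it a mind if you want.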

goddamn the ☭TANKIE☭s were ahead of their time

no