I don't want to work anymore, it's very silly...

I don't want to work anymore, it's very silly, especially with our rapidly advancing technology year by year and AI inevitably taking all the jobs. Like, it doesn't even matter.

Why even do anything?

Attached: 1542243566503.jpeg (647x827, 56.59K)

Just become a neet and scam welfare.

You can't do this in Freedumbland I'm afraid!

You can. Plenty of ameriblubbers post in the welfare thread on wizardchan, check there. You won't get a lot of money, but enough to keep your head above water.

fucking roses need to off themselves. you're literally feminized nazism.

please explain

AI is a pipedream that has wasted literally billions of dollars in funding with nothing to show for it. The idea that AI will replace lawyers and doctors and wipe out the majority of jobs is fucking retarded and if you unironically believe in that you're as stupid as race "realists".

If you're tired of your current job, search for another one, or go back to school and get trained up for a higher paying one.

they are both a means of empowering a corporate-ruled state.

t. anti-AI shill

There is literally zero difference between a state that provides healthcare for all and a state that hates the disabled and weak so much they send them to concentration camps.

t. the average anti-socdem poster on this board

Very enlightening.

The techies of course assume that they themselves will be included in the elite minority that supposedly will be kept alive indefinitely. What they find convenient to overlook is that self-prop systems, in the long run, will take care of human beings - even members of the elite - only to the extent that it is to the systems' advantage to take care of them. When they are no longer useful to the dominant self-prop systems, humans - elite or not - will be eliminated. In order to survive, humans not only will have to be useful; they will have to be more useful in relation to the cost of maintaining them - in other words, they will have to provide a better cost-versus-benefit balance - than any non-human substitutes. This is a tall order, for humans are far more costly to maintain than machines are.

It will be answered that many self-prop systems - governments, corporations, labor unions, etc. - do take care of numerous individuals who are utterly useless to them: old people, people with severe mental or physical disabilities, even criminals serving life sentences. But this is only because the systems in question still need the services of the majority of people in order to function. Humans have been endowed by evolution with feelings of compassion, because hunting-and-gathering bands thrive best when their members show consideration for one another and help one another. As long as self-prop systems still need people, it would be to the systems' disadvantage to offend the compassionate feelings of the useful majority through ruthless treatment of the useless minority. More important than compassion, however, is the self-interest of human individuals: People would bitterly resent any system to which they belonged if they believed that when they grew old, or if they became disabled, they would be thrown on the trash-heap.

But when all people have become useless, self-prop systems will find no advantage in taking care of anyone. The techies themselves insist that machines will soon surpass humans in intelligence. When that happens, people will be superfluous and natural selection will favor systems that eliminate them - if not abruptly, then in a series of stages so that the risk of rebellion will be minimized.

Even though the technological world-system still needs large numbers of people for the present, there are now more superfluous humans than there have been in the past because technology has replaced people in many jobs and is making inroads even into occupations formerly thought to require human intelligence. Consequently, under the pressure of economic competition, the world's dominant self-prop systems are already allowing a certain degree of callousness to creep into their treatment of superfluous individuals. In the United States and Europe, pensions and other benefits for retired, disabled, unemployed, and other unproductive persons are being substantially reduced; at least in the U.S., poverty is increasing; and these facts may well indicate the general trend of the future, though there will doubtless be ups and downs.

Attached: 1515707951488.jpg (1200x1200, 214.5K)

Do you think that the neurons that form your brain are somehow magical? If not, what are the limiting factors that will prevent the mechanisms that exist within a human brain from being replicated in a machine?

we can't even replicate basic animal cognition, let alone the advanced shit humans are able to do (which takes a few decades of assimilation to even acquire)

Because the possibilities are literally fucking endless and AI is only as good as the code which programs it.

May I direct your attention to Microsoft's twitter bot, which was supposed to be a demonstration of how sentient AI could be. Within an hour it was heiling Hitler and saying Jews deserved to die because Zig Forums spammed it. Wow, it's almost like it's literally impossible to develop sentience via AI.

Yet. This post is a testament to the fact that your tiny brain can't understand large time scales. Do you know how long it took for your human brain to evolve? Even if you think AI progress is slow, it's still happening a lot faster than evolution is. There are still billions of years between now and the heat death of the universe, so it's hubris to say that an artificial superintelligence will NEVER be created.

Attached: explainingthesingularitytoretards.png (1376x1124, 137.96K)

It may be damn near impossible to purposely develop an algorithm that would result in machine sentience. But that doesn't mean it's impossible for it to happen. I think it will happen eventually, as long as we keep experimenting with large networks. The internet or some similar networked system will eventually accidentally spawn something with sentience.

I think it's possible but it's not going to occur within the century. The AI era will have a whole set of its own problems we can't even comprehend yet, nor do we need to.

Why? Because that would be scary?

I don't believe in this. This was the same argument people made at the time of the Industrial Revolution: "Machines will take our jobs."
New kinds of work arise as technology moves forward. But it won't abolish work itself. At least not in the coming years.

Nothing wrong with this mentality. In this system, work only serves the capitalist class.

because I actually go on scihub and read about this shit.

Attached: literallyyou.png (461x295, 21.2K)

Quantum computing with access to all the information on google/amazon/facebook, eye tracking through webcams, etc., will create a beast beyond our comprehension, and it is being created right now; with every post we make on the internet we give it more information.
I'm getting nauseous thinking about this.

Singularity cultists need to learn more math.

Attached: sigmoid function.jpg (919x625, 50.18K)
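The sigmoid point, sketched numerically (parameters here are arbitrary, purely for illustration): a logistic curve is indistinguishable from an exponential early on, then it saturates, so extrapolating the early doubling forever is exactly the mistake being mocked.

```python
import math

def logistic(t, L=1.0, k=1.0, t0=10.0):
    """Logistic (sigmoid) curve: looks exponential at first, then saturates at L."""
    return L / (1.0 + math.exp(-k * (t - t0)))

# Early on, each step multiplies the value by roughly e, like an exponential...
early = [logistic(t) for t in (0, 1, 2, 3)]
# ...but far along the curve, successive values barely change.
late = [logistic(t) for t in (20, 21, 22, 23)]

ratios_early = [b / a for a, b in zip(early, early[1:])]  # each ~2.7
ratios_late = [b / a for a, b in zip(late, late[1:])]     # each ~1.00003
print(ratios_early)
print(ratios_late)
```

Same curve, two regimes: whether the "explosion" or the plateau describes a real-world trend depends entirely on where you are on the curve, which you can't tell from the early data alone.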

t. person who doesn't think the sun will rise tomorrow


But what makes you think that the trajectory of AI intelligence increase will be asymptotic? It could just as easily increase hyperbolically, for instance. Even if it does eventually increase asymptotically, it will still likely be billions of times smarter than humans by then, so it's a moot point.

Attached: intelligencescale.jpg (736x233, 36.21K)
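For what "hyperbolic" means mathematically, here's a toy comparison (not a model of AI capability, just the two growth laws): exponential growth x' = x stays finite at every finite time, while x' = x**2 has the exact solution x(t) = x0 / (1 - x0*t), which diverges at the finite time t = 1/x0.

```python
import math

x0 = 0.1  # arbitrary starting value, chosen so blowup happens at t = 10

def exponential(t):
    """Solution of x' = x: finite for every finite t."""
    return x0 * math.exp(t)

def hyperbolic(t):
    """Solution of x' = x**2: valid only for t < 1/x0 = 10, blows up as t -> 10."""
    return x0 / (1.0 - x0 * t)

for t in (0.0, 5.0, 9.0, 9.999, 9.99999):
    print(f"t={t}: exp={exponential(t):.3g}, hyp={hyperbolic(t):.3g}")
```

The hyperbolic curve actually trails the exponential for most of its run, then overtakes it and diverges just before t = 10, which is why the two are hard to tell apart from early data.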

if you knew the scope and depth of the cognitive machine you're talking about replicating you'd shut the fuck up

You wouldn't be building a superintelligence from scratch though. The idea is that you would first build a seed AI with read and write access to all of its components, which would be able to recursively improve its own intelligence.

Such as? The reason new jobs arose then is that intellectual labor couldn't be automated yet. That seems to be changing, and there seems to be little evidence that there are any human jobs that are somehow impossible to automate with AI.

Infinities don't exist in nature.

"Likely" based on… what?

that's not how our brain works, otherwise we'd be overwriting basic functions constantly.