I don't want to work anymore; it's very silly...

The techies of course assume that they themselves will be included in the elite minority that supposedly will be kept alive indefinitely. What they find convenient to overlook is that self-prop systems, in the long run, will take care of human beings (even members of the elite) only to the extent that it is to the systems' advantage to take care of them. When they are no longer useful to the dominant self-prop systems, humans, elite or not, will be eliminated. In order to survive, humans not only will have to be useful; they will have to be more useful in relation to the cost of maintaining them (in other words, they will have to provide a better cost-versus-benefit balance) than any non-human substitutes. This is a tall order, for humans are far more costly to maintain than machines are.

It will be answered that many self-prop systems (governments, corporations, labor unions, etc.) do take care of numerous individuals who are utterly useless to them: old people, people with severe mental or physical disabilities, even criminals serving life sentences. But this is only because the systems in question still need the services of the majority of people in order to function. Humans have been endowed by evolution with feelings of compassion, because hunting-and-gathering bands thrive best when their members show consideration for one another and help one another. As long as self-prop systems still need people, it would be to the systems' disadvantage to offend the compassionate feelings of the useful majority through ruthless treatment of the useless minority. More important than compassion, however, is the self-interest of human individuals: People would bitterly resent any system to which they belonged if they believed that, when they grew old or became disabled, they would be thrown on the trash-heap.

But when all people have become useless, self-prop systems will find no advantage in taking care of anyone. The techies themselves insist that machines will soon surpass humans in intelligence. When that happens, people will be superfluous and natural selection will favor systems that eliminate them: if not abruptly, then in a series of stages, so that the risk of rebellion will be minimized.

Even though the technological world-system still needs large numbers of people for the present, there are now more superfluous humans than there have been in the past because technology has replaced people in many jobs and is making inroads even into occupations formerly thought to require human intelligence. Consequently, under the pressure of economic competition, the world's dominant self-prop systems are already allowing a certain degree of callousness to creep into their treatment of superfluous individuals. In the United States and Europe, pensions and other benefits for retired, disabled, unemployed, and other unproductive persons are being substantially reduced; at least in the U.S., poverty is increasing; and these facts may well indicate the general trend of the future, though there will doubtless be ups and downs.


Do you think that the neurons that form your brain are somehow magical? If not, what are the limiting factors that will prevent the mechanisms that exist within a human brain from being replicated in a machine?

We can't even replicate basic animal cognition, let alone the advanced shit humans are able to do (which takes a few decades of learning to even acquire).

Because the possibilities are literally fucking endless, and AI is only as good as the code that programs it.

May I direct your attention to Microsoft's Twitter bot, which was supposed to be a demonstration of how sentient an AI could be. Within an hour it was heiling Hitler and saying Jews deserved to die, because Zig Forums spammed it. Wow, it's almost like it's literally impossible to develop sentience via AI.
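The Tay episode is basically a data-poisoning failure: a bot that learns from raw, unmoderated user input can only ever reflect that input. A minimal toy sketch of the dynamic (hypothetical code for illustration; Tay's actual internals were never published and were surely more complex):

```python
import random

class NaiveEchoBot:
    """Toy chatbot that 'learns' by absorbing every user message verbatim.

    Illustrative only: shows why a model trained on unfiltered user input
    inherits whatever that input contains, not how any real bot works.
    """

    def __init__(self, seed_phrases):
        self.corpus = list(seed_phrases)  # starts with curated seed data

    def learn(self, message):
        # No filtering or moderation: every input is absorbed as-is.
        self.corpus.append(message)

    def reply(self):
        # The bot can only say things drawn from its corpus, so
        # coordinated spam quickly dominates its output.
        return random.choice(self.corpus)

bot = NaiveEchoBot(["hello!", "nice weather today"])
for _ in range(100):  # coordinated spam drowns out the two seed phrases
    bot.learn("spam slogan")

# With 100 spam entries vs. 2 seed phrases, most replies are spam.
spam_rate = sum(bot.reply() == "spam slogan" for _ in range(1000)) / 1000
```

The point isn't that the bot is or isn't "sentient"; it's that its output distribution is just a mirror of whoever talks to it the most.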

Yet. This post is a testament to the fact that your tiny brain can't understand large time scales. Do you know how long it took for your human brain to evolve? Even if you think AI progress is slow, it's still happening a lot faster than evolution is. There are still billions of years between now and the heat death of the universe, so it's hubris to say that an artificial superintelligence will NEVER be created.


It may be damn near impossible to purposely develop an algorithm that would result in machine sentience. But that doesn't mean it's impossible for it to happen. I think it will happen eventually, as long as we keep experimenting with large networks. The internet or some similar networked system will eventually accidentally spawn something with sentience.

I think it's possible, but it's not going to occur within the century. The AI era will have a whole set of problems of its own that we can't even comprehend yet, nor do we need to.

Why? Because that would be scary?

I don't believe this. It's the same argument people made at the time of the Industrial Revolution: "Machines will take our jobs."
New kinds of work arise as technology advances, but that won't abolish work itself. At least not in the coming years.

Nothing wrong with this mentality. In this system, work only serves the capitalist class.

Because I actually go on Sci-Hub and read about this shit.