==OH SHIT==

OH SHIT
youtu.be/EEP1lDwcEAQ

Which one of you fags made this? You did an amazing job of capturing that google hipster tech company vibe.

Attached: Screenshot at 2018-05-18 21-06-42.png (1221x630, 318.1K)

Data is exactly like genes. I am so glad Google passes mine on so that future generations can give Google money.

...

This is Steins;Gate levels of dystopian future, holy shit!

We had this many people here?

You're retarded. The word you're searching for is Psycho Pass.

The vibe seems close to a Microsoft game called ilomilo.

Mein bad. I haven't seen Psycho Pass

Psycho Pass is basically Minority Report combined with totalitarian socialist economic planning. Literally nobody in the entire country has free will to choose their profession or line of work. All people are monitored at all times by a surveillance system that can detect how likely they are to commit a crime, whether they have done so or not, to the point that veteran police detectives get flagged in the system for "thinking like a criminal".

...

Explain your implication.

I worked you in to a shoot, brother. HH.

Now you're just spouting nonsense. If you're making a reference you need to understand that I have never watched Rick and Morty and I will not get it.

It's just like Boku no Piko, amirite my fellow east-asian island animation appreciating fellas?

You seem upset.

Any faggot that hasn't severed all ties to jewgle at this stage deserves to be milked for his data. Stop using smartphones and feeding the google jew, you fucking plebs.

Attached: Google_s_The_Selfish_Ledger_Leaked_Internal_Video-EEP1lDwcEAQ.webm (426x240, 10.11M)

What's this faggot posted?

anyone care to explain what this crap means?

tl;dr Google plans to use their AI to guide human development and thought through technology.

It's pretty much them exploring a concept of memetics by treating data a person generates as an abstract representation of who they are, much like how DNA is a physical representation. If they identify underlying common base patterns people share, they could feed back modifications into minds based on such data to manipulate thoughts on an individual and/or collective level. It's basically Big Data social engineering on a societal scale, much like what they're capable of with advertising algorithms.
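The "common base pattern" idea can be sketched as a toy calculation: treat each user's behavior as a feature vector, extract the shared pattern (here just the mean), and rank users by how far they sit from a profile an operator wants them nudged toward. All the vectors, names, and categories below are invented for illustration; real systems would use far richer models.

```python
# Toy sketch of the "ledger" pattern idea: user behavior as vectors,
# a shared base pattern, and distance to a target profile.
# All numbers and names are made up for illustration.

def mean_vector(vectors):
    """The shared 'base pattern': element-wise mean of all user vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two behavior profiles."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Rows: per-user interest scores, e.g. [fitness, gadgets, news].
users = {
    "u1": [0.9, 0.2, 0.4],
    "u2": [0.7, 0.3, 0.5],
    "u3": [0.8, 0.1, 0.6],
}

base_pattern = mean_vector(list(users.values()))  # shared "ledger" profile
target = [0.8, 0.9, 0.5]  # profile the operator wants users to converge on

# Rank users by how much "nudging" they'd need to reach the target.
gaps = sorted((distance(v, target), name) for name, v in users.items())
print(gaps[0][1])  # user already closest to the target profile
```

Feeding "modifications back into minds" would then just mean serving each user the content that shrinks their gap to the target vector, which is exactly how recommendation-style advertising already works at a cruder level.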

No shit? I am surprised they haven't started this crap sooner. Is everyone supporting this?

I think that if they're malicious enough to do what the video suggests, this video could easily be a kind of market research on the idea of what they call a "lamarckian" data ledger. See how the soygoys react, then move on to step 2, whatever that may be.

Attached: locate.png (600x399, 354.09K)

Wrong term. A leader of men guides. A shepherd of cattle herds.

THE BOTNET IS EVER EVOLVING USING NEWER TRICKS REEEEEEEEEEEEEEEEEEEEE

THEY'RE HERDING THE NIGGER CATTLE.

YOU are the botnet
Install Gentoo unironically

Sounds like 1984 but written with knowledge of more modern technology.

They're herding the goyim. To jews, non-jews are literally cattle.

Feasibility is the problem. While they can already socially engineer by applying marketing strategies to ideologies as propaganda, their ledger idea requires something far more advanced.

Predicting and manipulating thoughts with that precision requires understanding concepts, and the relationships between them, at many levels of abstraction. That means an AI has to be able to pose questions to itself and make intuitive guesses (like gut instinct) through analogies it imagines in order to answer them, acquiring knowledge from unfamiliar situations it was never trained on. After all, it can't actually read people's minds, only their data. So it has to imagine how people might think, or identify discrepancies, by questioning itself about its observations and memories of their abstract selves.

To put it another way, AIs would have to possess abstract reasoning as well as a rather creative philosophical mindset when evaluating data. That's beyond even Deepmind's ability to develop yet.

obviously they must be fake

So you're saying the tech is already there for an AI to teach itself all this data crap just so Google can market another phone?

SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.SHIT.

This.

Here are 3 other URLs of the same video.

hooktube.com/watch?v=iqUCX5rPQug
hooktube.com/watch?v=H7C8xi5N7JA
hooktube.com/watch?v=EoBAIQjWoUQ

I use a phone running Replicant OS.

So reddit and tumblr and facebook and everyone were in an uproar about net neutrality, but this gets nothing?

You do realise jewgle already has an A.I. running unchained on the internet >>>/polk/28336 , right? Most of those bitchers were fake accounts trying to drive societal opinion, exactly like OP's video describes. OP's video only exists to normalize the concept, so if you point it out to a retarded normie, a shill can just go "well, it's already existed for X years/months and nobody gave a fuck, so why are you a conspiracy theorist giving a fuck, user?"

Are you talking about their web spider? What evidence is there of any AI making posts or other shit?

Alex Jones talked about this years ago.

What I'm saying is that their ledger idea would fail to reach its actual potential without an AI capable of abstract reasoning. Right now they're limited to suggestions based on statistical analysis of big data and what they know about marketing. Throw a lot of uncertainty into the system, and abstract representations will fail to stay synchronized with the people they supposedly represent. That would result in a skewed perception making the idea almost worthless for what they aim to do.

Their example of acquiring a user's weight shows the limits of their capability: their "thought experiment" failed to take the concept much further, and it shows they were thinking too much like marketers. They suggest persuading the user to buy a customized scale to acquire that data. But the truth is, there are ways to acquire such data without the user even being aware of it, if you're creative enough.

That's where abstract reasoning comes in: comparing many ledgers to build analogies that suggest how much a person might weigh, and to reason about why. Is it because this person has certain thoughts? Their environment and lifestyle? Their mental state? Medical conditions? These are the kinds of questions an AI would have to formulate and ask itself. It answers them by comparing known habits, activity, relationships and patterns from similar ledgers where the weights are known.

It then tries to find a way to test its answers through its understanding of gravity (learned prior to this scenario): if a scale can measure weight, so could, for example, the pressure exerted on a vehicle's tires. Both concepts connect at a particular level of abstraction; both measure the force gravity exerts on a mass through a surface. Then, using security footage, it could estimate the person's weight from how much the tires appear to compress when they enter their vehicle. Once it has acquired all that data on its own, it can confirm which analogies/answers made sense or worked, and keep them for future reference.

It sounds far-fetched because, for now, it definitely is. But AIs with abstract reasoning and a mass surveillance network at their disposal would be capable of this, given a lifelong learning ability.
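The tire-compression guess can be sanity-checked with a toy calculation. Model the car's suspension as a single linear spring (a big simplification), so the extra force when someone sits down is F = k·Δx and their mass is F/g. The spring rate and settle distance below are made-up illustrative numbers, not real vehicle data.

```python
# Toy sketch: estimate an occupant's mass from how much a car settles
# when they get in, modeling the suspension as one linear spring (F = k * dx).
# The spring rate and settle distance are invented assumptions.

G = 9.81  # gravitational acceleration, m/s^2

def occupant_mass_kg(spring_rate_n_per_m: float, settle_m: float) -> float:
    """Extra spring force = k * dx; occupant mass = force / g."""
    extra_force_n = spring_rate_n_per_m * settle_m
    return extra_force_n / G

# Assume a combined suspension rate of 120 kN/m and that footage shows
# the body settling ~6.5 mm when the person sits down.
mass = occupant_mass_kg(120_000, 0.0065)
print(round(mass, 1))  # ~79.5 kg
```

In practice you'd also need to estimate Δx from a few pixels of video and know the vehicle's actual spring rates, which is exactly where the "many ledgers and analogies" part would have to carry the load.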

But to even begin on such a thing, it has to be able to discern how to translate abstractions into interpretable policies for meta-learning, and vice versa.

tl;dr: the only thing that would fuck up such an A.I. is the introduction of new information, or new ways of processing that information, completely unlike the waveform of other comparable functions, such as introducing a religion that isn't the one-world waveform religion. I say waveform because it requires people, at a base level, to be thinking exactly the same on a few things. Change that and the input is completely foreign to the A.I. and causes unexpected errors.
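That failure mode can be sketched as a toy out-of-distribution check: a model that matches new data against known patterns by distance can flag anything too far from everything it has seen as "foreign" rather than forcing a confident guess. The patterns and threshold below are invented for illustration.

```python
# Toy sketch of the failure mode above: a pattern-matcher that flags
# inputs unlike anything in its training data as "foreign".
# Patterns and the threshold are made-up illustrative values.

def nearest_distance(sample, known_patterns):
    """Distance from a sample to the closest known pattern."""
    return min(
        sum((x - y) ** 2 for x, y in zip(sample, p)) ** 0.5
        for p in known_patterns
    )

known = [[0.1, 0.2, 0.1], [0.2, 0.1, 0.2], [0.15, 0.15, 0.1]]
THRESHOLD = 0.5  # beyond this, the sample resembles no known pattern

familiar = [0.12, 0.18, 0.12]  # close to the training data
foreign = [0.9, 0.9, 0.95]     # a genuinely new "waveform"

print(nearest_distance(familiar, known) < THRESHOLD)  # True
print(nearest_distance(foreign, known) < THRESHOLD)   # False
```

A system without that check would instead map the foreign input onto its nearest familiar pattern and act on a wrong conclusion, which is the "unexpected errors" the post describes.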

very informative. thanks.

bump

More useful than the OP was. Thanks.

Umbrella Corporation - the advert.