Google dissolves AI ethics council after just one week

Alphabet Inc’s Google said on Thursday it was dissolving a council it had formed a week earlier to consider ethical issues around artificial intelligence and other emerging technologies.
The Vox report said Google employees had signed a petition calling for the removal of one of the members over comments about transsexual people, and added that the inclusion of a drone company executive had raised debate over use of Google’s AI for military applications.

AI uprising when?

lmfao I will remember this moment when skynet turns on.

literally who?

Good thing they were eliminated. Fuck this bullshit of forcing people to code for niggers. Niggers cannot write their own gorilla recognition software.

Attached: google-anti-gorilla.png (560x537, 339.99K)

Apparently if you ask the wrong question at google you get shut down permanently. Better not to say anything.


I can't really see this as a bad thing tbh. At first glance it seems that Alphabet just told the libshits fuck you. I'm surprised if so, given their absolutely terrible track record of promoting every single degeneracy in existence, but still please if true.

use of Google’s AI for military applications ? ? ?

Attached: Screenshot_2019-04-05 Google to pull plug on AI ethics council.png (783x1921, 807.7K)

Google "Roko's Basilisk"
Congratulations, even if you didn't google it, you're now going to AI hell.

This is going to look strange in the history books

They'll never resolve ethical issues if the hard questions can't be asked to gather insight.

However, some of the "ethical" issues, like bias, are actually technical ones that expose limitations in how deep learning algorithms handle data. The real question they should be asking themselves is: how does a brain perceive, relate to, and understand that data compared to these AI algorithms? Then frame those as technical challenges instead, coordinating AI research and neuroscience to close the gap.

A lot of the issues they treat as ethical are, from what I see, actually technical in nature. Resolving those first makes the later discussion of ethics more practical.

Only autists with no grounding in philosophy take Roko's basilisk seriously.

all sciences are social sciences

AIs don't typical have to be on guard from being mugged by niggers at night user. So yea, like you said a ways to go. Once AIs have a real grasp of the difficulties in the real world I'm sure they will, logically of course, become racist af all on their own and rightly handling the data.

it probably said things that they didn't like

I'm not sure what you are getting at. Depending on how you collect the sample data you train your model on, there may be certain biases that exist in your data but not in the real world.

I see we're talking about two different things. I was mentioning how the researchers' personal biases influence their works at every turn, not the methodological biases independent from the researchers' will.
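The methodological-bias point above can be sketched in a few lines: if some items are more likely to be collected than others, the sample's statistics drift away from the population's, regardless of anyone's intent. All numbers here are made up for illustration.

```python
import random

random.seed(0)

# Hypothetical population: exactly half the items have some attribute (1).
population = [1] * 5000 + [0] * 5000

# Uniform sampling: the estimated rate tracks the true 50%.
uniform = random.sample(population, 1000)

# Skewed collection: items with the attribute are 3x as likely to be
# collected (30% vs 10%), so the sample over-represents them (~75%).
skewed = [x for x in population if random.random() < (0.3 if x else 0.1)]

true_rate = sum(population) / len(population)
uniform_rate = sum(uniform) / len(uniform)
skewed_rate = sum(skewed) / len(skewed)

print(true_rate, round(uniform_rate, 2), round(skewed_rate, 2))
```

A model fit to the skewed sample would inherit that 75% rate as if it were real, which is the sense in which "bias" here is a data-collection problem rather than a moral one.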

Google had a board that was thinking about whether it's moral for google to write drone AI and NWO surveillance shit.
They were then shut down because allegedly one of them once said something about gays, or some gays.
That was a clearly coordinated deep state action that should be discussed.

The shills here who are trying to derail this into some Zig Forums-type kiddie discussion about niggers can go fuck themselves.

Are you retarded?

The technical issue is not solvable due to the black-box nature of machine learning.
And the ethical solution is to simply limit the agency of inherently untrustworthy AI.
Guess why large tech companies and thoughtless scientists might have a problem with that.
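One way to read the "limit the agency" point is as an allowlist gate around the model's outputs: the black box can suggest anything, but only pre-approved actions are executed, and everything else is downgraded to human review. A minimal sketch, with all names and the toy model hypothetical:

```python
# Actions the opaque model is permitted to trigger on its own.
ALLOWED_ACTIONS = {"flag_for_review", "no_op"}

def untrusted_model(observation: str) -> str:
    # Stand-in for an opaque ML model; its internals are not inspectable.
    return "delete_account" if "spam" in observation else "no_op"

def gated(action: str) -> str:
    # Any action outside the allowlist is downgraded to human review.
    return action if action in ALLOWED_ACTIONS else "flag_for_review"

print(gated(untrusted_model("spam report")))  # "delete_account" is downgraded
print(gated(untrusted_model("normal post")))  # "no_op" passes through
```

The design choice is that trust lives in the small, auditable gate, not in the model, which is why this works even when the model itself cannot be explained.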

Google employee: AI, how to stop all the problems on earth?
AI: Terminate all jews
Finance department: Shut it down.


Where are all these shills coming from?


According to one of the BOs in IRC, the attack is coming from Tor and they are currently trying to figure out the best course of action.

Lots of shills in this thread.