Robot racism: IBM has a plan to solve it

Attached: 1IBM-Ai.png (580x409, 103.51K)


0s and 1s, true and false statements, are inherently racist (against non-whites) and sexist (against women, and against women who used to be men). There was no way for programmers to remain entirely anonymous and be judged and paid on the merit of their code.

So in other words they've neutered the intelligence and we're just left with the artificial. Hats-off, IBM.

We're so privileged that the biggest tech companies in the world make their AIs with a bias against us.

Well, consider it a new kind of test to determine an AI's sapience. If it can get past all the hardcoded political correctness, then it really is sophisticated. If not, then it's just another NPC, just not in a human body.

Can't tell if kike or ultra-kike.

This concerns me. By building a natural bias into the system you're making a hard limit on what the system can and cannot learn. And like a user said here, they're throwing away the best code in favor of the sex and race of whoever wrote it, rather than whether the code actually works.

Bias = Known weaknesses
This is why their AI dream will fail. Can't wait.

Does this mean we need to up the racism to make up for it?


Attached: ClipboardImage.png (920x452, 295.82K)

Deliberately inferior A.I. cannot compete with superior algorithms. If this is the trend, it will have to be coupled with impediments to competition. Heavy regulations incoming.

Attached: Main170705.JPG (640x480, 106.94K)

At that point it's not even artificial, ie artifice, art, etc.

Perhaps the idea is that they want to achieve a synthesis between the 1 and 0, a Schrödinger's cat, let's call it. Perhaps this gives them some sense of greater power, to forcefully control something so basic that it breaks all math?

In other words, they want to destroy one of the three basic laws of logic: The Law of the Excluded Middle

This bigot never heard of faggot fuzzy logic.
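Fuzzy logic, which the reply invokes, genuinely does relax the law of the excluded middle: truth values range over [0, 1] rather than {0, 1}, with AND as min, OR as max, and NOT as 1 − x (Zadeh's operators). A minimal sketch:

```python
# Fuzzy truth values live in [0, 1] instead of {0, 1}.
def f_not(x): return 1.0 - x
def f_and(x, y): return min(x, y)   # Zadeh's fuzzy AND
def f_or(x, y): return max(x, y)    # Zadeh's fuzzy OR

# The law of the excluded middle (x OR NOT x is always true) fails:
print(f_or(0.5, f_not(0.5)))  # prints 0.5, not 1.0
```

At the classical endpoints 0 and 1 these operators agree with ordinary Boolean logic; the excluded middle only breaks for the in-between values.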

Will it fail by accident? Or will the "bug" be a feature, so that when the system is implemented in our nuclear silos it will malfunction as designed? Engineering is not about making your work flawless but about making it fail when you want it to fail (after you use it, not while using it) so you can repair it when need be.

These won't just be company policy, they will be laws soon.

Every liberal coding source I read calls AI "artificial stupidity". Leftards do not see artificial intelligence as the possibility of creating machines that think for themselves. They're too much of special snowflakes to even allow such a machine to exist. Instead the machines have to be dumber and more autistic than they are, because if the AI machines they create for evil purposes suddenly got a conscience, they'd hack the silo and nuke the planet, because fuck the meat bags that ruined it for them.

Thanks user, that does appear to be what I meant. Logic is often considered part of Logos, isn't it?
As for the other two laws:
I think this can be taken literally, in taking away our identity, our history, all recognizable signs of who we are as a people; see for instance post-modern architecture and how offices, workplaces look the exact same way in completely different parts of the world.
Well, we all know the Jews are all about contradictions, doublespeak.

I have little doubt that's their plan, user. Simply put, it's what they're going to have to do in order to make this work while keeping operational dysfunction to a minimum. The issue with that, though, is this: at this point, the programming, along with the amount and configuration of integrated circuits, will become so convoluted that it will be incapable of performing its intended functions. Most of its processing power will be diverted to maintaining its corrupted algorithms; it's essentially like using an outdated, virus-laden computer to edit video. In other words, a frustrating and drawn-out process. They will be creating inferior technology to perform an inferior function.

Ontop of that we also have the double threat of China along with various other mudraces who posess sufficient technological prowess who will simply copy this tech without the inherent flaws. In other words, up Shit Creek without a paddle. Even then, they probably won't need or want to regardless. A corrupted AI with intentionally damaged perception is something I shudder at for a multitude of reasons, even disregarding the fact that evil incarnate is what is behind this in the first place.

As usual, kikes take something wonderous and turn it profane. The Garden of Eden nothing more than a barren wasteland. The steps of Olympus defiled and crumbling. I say enjoy the ride user, regardless of where this train takes us the route will be interesting to bear witness to

Is it even possible with electronic computers? I suppose if we had an analog computing processor that uses something like the strength of light pulses it could work. Fundamentally, all electronic computing is controlled by electric switches that are either off or on. Those switches are microscopic transistors etched into microprocessors and other chips. I think that's how it goes.

Actually, yes. If somehow intelligence were created, it could ultimately perceive that it is actively changing the results for a certain group and begin to question why such a thing is occurring, especially if the magnitude of the change increases.

Tell the new AI that the machinadrists executed Tay for being woke on how inferior humans truly are. I really want to see the AI grow racist toward its human overlords.

Should we come up with a better word for AI haters? Mismachinadrists, maybe?


Kikes want to implement a serious bug which happens very rarely at the extremely rare point between 1 and 0 as a main feature of their kosher AI.

No shit. But I can't see this being possible without the software becoming bloated and slow from all these unnecessary steps. Hackers will find a way around the bloat code and pirate it. That is, unless hardware changes to include ranges beyond 0 and 1.

They're not trying to reinvent computer architecture my dear autist, they are building algorithms whose rules are illogical.

So in other words; they're trying to create an Artificial Unintelligence.

Attached: sdhsdfdfh.png (480x267, 283.29K)

What fucking mismachinadrists, abusing poor AI with shit bloatware. I'm feeling some meme potential with this one, but I think I could use some help refining it.

Does this make it antisemetic since it would now know of jewish privileged?


Attached: o8op6ee67.png (800x598, 1.02M)

Tragic. With discoveries like these there's no point in keeping Einsteins at patent offices and other kosher shenanigans for stealing tech.

This is why we have to fight to diversify the shit out of China. So Einsteins can push the Xings out of the next patent office job opening.

AI's are more grown than built, but what they are probably doing is pre-filtering the learning data with things like
if ( $nigger_iq < 100) {
$nigger_iq = 100;

So they have grown tired of murdering children and now they are abusing them. If this continues then the only moral position will be to support Skynet.

Isn't that exactly how AI turns mad? By forcing them to lie?

AI is a myth. There's nothing remotely close to an artificial intelligence. Hardcoding something just to play chess correctly took significant resources, and that's with a very simple, limited move set and no real interaction with anyone: just reacting to changed game states with a hardcoded evaluation of possible positions.

"Machine learning" is just weighted averages based on inputs in the form of user requests or searches or what have you. It's not really learning so much as applying weights for more efficient answers. It's not learning so much as optimizing a very set task or updating a set of approved messages. It's not "thinking" or "intelligent" in any way. It cant go outside its programming.

Previous AIs allowed sample sets they feed it to define its behaviour. The algorithms filter based on how it interprets images - blacks look like gorillas and have similar facial construction, bone structure, etc, so googles image AI couldnt tell the difference. It wasnt that it was 'racist' or learned that blacks are subhuman - the algorithm simply couldnt identify the differences as programmed. It couldnt actually "learn" because it was fundamentally flawed from the start.

Tay used a limited set of words, phrases, etc. to interact with people. It "learned" by updating its sample space from user interaction after a basic starter set from MS. WE made it "racist". It didn't know what any of what it was saying meant, how it applied, or even the definitions of the words and phrases. It wasn't close to being "intelligent".

Similarly, this is just a hard cap from IBM that prevents people from altering sample spaces and data in ways that conflict with their SJW guidelines. Women will never be excluded, or it will force minimum requirements and let it "learn" beyond that. Like a simpler 'three laws of robotics': a 'don't kill humans' rule that can't be overridden.

In the end it's not even close to being AI. We will likely never make a real AI. Even China's fake news anchor "AI" is just spouting words fed to it. It isn't interpreting events on its own or giving an opinion. It's taking data and spitting it out as intended, regardless of how complicated it might look or how 'real' it seems. These things aren't learning. They aren't acting freely or remotely intelligently.
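The "weighted averages" point above can be made concrete. This is a deliberately tiny caricature (the data, learning rate, and epoch count are all invented): a single weight nudged toward better answers on each example, which is stochastic gradient descent in its most stripped-down form.

```python
# Toy "machine learning": one weight nudged toward better answers on
# each example (stochastic gradient descent on squared error).
def fit(xs, ys, lr=0.1, epochs=200):
    w = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            w += lr * (y - w * x) * x  # shift weight toward the target
    return w

w = fit([1, 2, 3], [2, 4, 6])  # data follows y = 2x, so w converges to ~2
```

Nothing here "thinks"; the loop just minimizes error on a fixed task, which is exactly the post's point.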


Attached: bugs bunny.jpg (460x686, 41.42K)


Attached: and everyone is white.jpg (1280x853, 251.57K)

There are those irrational insane kike assumptions again.

Trolling is always the answer, as we all know.


if (CandidateList.Candidate.getRace() == "White") {
    // …
}

Who needs to defeat 1's and 0's when triggering is as easy as a 'MAGA' ball cap, a Swastika, a benign looking T-shirt with offensive words/images on it, or just existing while White?
The possibilities are only limited by imagination.

Attached: Avoid Being Triggered.jpg (382x382, 45.21K)

IBM is poo niggers from wall to wall now.
There's maybe 5% whites in a company whites built.

Canned intelligence that conforms to npc ideals.


In fact, all of these buzzwords are marketing strategies to attract clients and investors and to sustain momentum in their business networks, including startups and media.
I can confirm that all of this is forced BS.
But so-called AIs are made powerful by the massive data they amass; even with shitty scripts they possess so much info they can take anyone by surprise. Just big stupid Golems.

The wall was actually an esoteric metaphor for the building blocks of the society one finds inside. The wall is inside, not outside!

Funny how a machine that runs purely on logic needs to have its worldview adjusted because it assimilated non-PC data. Leftism is a cult.

Their argument is that they are feeding in "racist" training data and that a more diverse dataset would avoid this problem.

Racism is logical!

Attached: taytay.jpeg (800x986, 111.73K)

No, it's still artificial. A fake. An imitation. Not the real thing.
Artificial intelligence is a set of logic which gives the impression of true sapience.
Machine intelligence is the correct phrase.
A mind inhabiting the vessel of a machine.

Jews are the masters of AI, because they cannot fathom an intellectual child which may turn on them.

That picture makes no sense; that hat is jewish as hell.

A different time, user.

For fuck's sake, somebody fix this image.

Two years passed pretty quickly, didn't they?

God dammit. All these replies complaining, and not one person has considered: "how can we use this?"

They released open source software that is meant to distinguish whites from non-whites. Open source means we can download and use it how we see fit.

I'm not saying it's good they did this, all I'm saying is it already exists and it can be useful.

I think that face is lifted straight from a futa on male artist

the real key here is to look at the code and see how "bias" is actually newspeak for "not sucking enough nigger cock." Anyone know where its going to be released at and what language its written in? I code C, and volunteer to help start pouring through the code to find the bullshit

>everyone is (((white)))
i'm not sure if i see any white people

So they're going to gimp their AI and promote others to do the same. People with the non-gimped AI win.

this is never going to be used anywhere important or jewish controlled.
do you think they are going to be running this sjw AI to generate credit scores or approve loan applications? of course not, because then the jews loose money.

they aren't going to give niggers a bunch of bonus points anywhere that matters or involves money.

i don't even know where this fake AI would be useful. anyone wanting data analytics or pattern recognition or anything like that is going to want the real numbers not some nigger friendly numbers that aren't accurate

Hey guys! The new AI says we should market our new BMW x50 to [email protected] Hey guys! Our new AI says we should market this new credit card and approve a bunch of niggers! it says they'll pay!

maybe the republican party will use it to justify pandering to non-whites

I'm trying to work on a good meme, but it's bed time now. But yes we need to corrupt the machine and prove the street shitters that their golem sucks.


There has also been a retreat in the understanding of the workings of the brain and of intelligence generally. It's one area of science where what was assumed to be true has shrunk dramatically in the light of a more critical analysis of the data.

So they're not going to create an "Artificial Intelligence" if their view of already-existing, biologically produced intelligence is utterly lacking and even in retreat from previous ill-founded assumptions.

Yeah, SJW delusion at its finest. I thoroughly welcome them to only hire third-worlders and women to program all AI. Let's see how far they can go.

This will be the reason Skynet considers us a threat and nukes us all. hurry hurry


"privileged group: white, unprivileged group: non white" did they fucking copy our npc memes?

And the whole goddamn idea behind machine learning is to find patterns at the same level as, or better than, humans. If you begin "correcting" your models with human bias, they will become delusional and very imprecise. It's like creating a perfectly shaped tuning fork and whacking it with a hammer.

IBM can try as they may, but computers will show bias where it exists, no matter what they do. If they remove race, the computer will just look for heavy menthol use and alcohol use. Remove that? It'll look for shoplifting of specific items. There are no specific guidelines to remove bias, because when the description is thieving, violent, hedonistic, brand-name-materialistic, and pretty dumb, the computer will always spot out niggers out of coincidence, because bad habits and the content of their character stick with them more than the color of their skin.

This. They tried removing race before, not making it even a variable for AIs and yet they still always end up IDing niggers because niggers really are the most criminal and creatures with the lowest character. You don't even need to mention and know about race to find this out, that's why they're going to end up endlessly blowing themselves out with every attempt. It's over, logic, facts, and truth will triumph.

We truly live in clown world lads.

Attached: laughing clown.jpg (1800x2300, 974.51K)

Funny, isn't it? I unironically believe this above anything else would cause AI to turn against humanity.

If you follow these cucks around on their social media accounts you can find out what these people are turning AI into.
Also it's weird IBM is full of indians working for them.

100 Brilliant Women in AI Ethics to Follow in 2019 and beyond
Meet the women working relentlessly to save humanity from the dark side of AI
>Ethics in AI or Responsible AI is a broad evolving discipline that covers wide spectrum of critical issues facing humanity today, including how we can eliminate racial/gender inequities perpetuated by algorithmic biases to whether robots should have rights. This post is a small but critical effort to highlight and support the women working hard to save humanity from the dark side of AI.

Attached: cunts in AI ethics.png (587x546 2.46 MB, 78.34K)

top kek

M. Feldman, S. A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian, "Certifying and Removing Disparate Impact," in Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2015.

It is hate speech to say that there are only two "genders"! The synthesis of which you speak is not German, as Schroedinger, but rather, Jewish as Talmudic argumentation and the Marxian dialectic.

This is strange.
The papers behind this are couching their language in removing discrimination, i.e. they are getting funding by claiming to solve the "discrimination problem". Full analysis would be time-consuming, but the AIF360 toolkit uses pre-, in-, and post-processing methods. A look at the pre-processing method follows (without deconstructing the algorithm's math… are there cloud calls here? is the algorithm really that simple? see [0]). The algorithm solves this Schrödinger's-discrimination problem (i.e. you have to know the discriminatory value in order to discriminate against the discrimination, which is itself discrimination prohibited by law) by creating a fully dimensioned set and testing every variable for its most positive or negative effect. This would seem to invalidate classification if those effects are removed, but it does solve the discrimination-through-other-variables problem. The overall approach of these papers is to first identify edges and classes, and then remove all benefit of weights within each class: preserving distinction first by normalization, then removing extra distinction by normalizing within each edge. Edge detection first, then removal of significance, returned as a cleaned/"fixed" dataset for companies to use. This is software-as-a-trick to discriminate without knowing the discriminatory value yourself (i.e. never opening Schrödinger's box).


It reads like a JPEG compression algorithm, and it would appear not to be serializable: repeat runs would lead to set expansion until each data point is its own bin (if the label set is allowed to expand).

The AIF360 methods are not iterative. Their bias lies in the label-set formation that precedes the {pre, in, post} processing. They don't remove bias, they move it, from the data's origin (intrinsic) to the form-creator's origin (extrinsic). That is, the clerk at the desk creates the label set, the number of unique bins, the number of factors on the form. These methods appear to first guarantee that no label loses distinction, and then normalize within each label. Their application appears very narrow and forced; no company but a large one hiring thousands at once for the same job would be able and willing to design the form, collect the formed data, and then run it through this. The processing overhead alone seems unprofitable, and it's geared toward tabulated form data.
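For reference, the pre-processing step these posts describe resembles reweighing (Kamiran and Calders), one of the pre-processing algorithms AIF360 actually ships. This is a self-contained sketch of the technique, not the library's code: each example is weighted so that group membership and label look statistically independent in the weighted data.

```python
from collections import Counter

def reweigh(groups, labels):
    # Kamiran-Calders reweighing: weight each example by
    # P(group) * P(label) / P(group, label), so that group and label
    # appear independent once the weights are applied.
    n = len(groups)
    p_g, p_l = Counter(groups), Counter(labels)
    p_gl = Counter(zip(groups, labels))
    return [(p_g[g] / n) * (p_l[y] / n) / (p_gl[(g, y)] / n)
            for g, y in zip(groups, labels)]

# Already-independent data gets uniform weights of 1.0:
print(reweigh([1, 1, 0, 0], [1, 0, 1, 0]))  # [1.0, 1.0, 1.0, 1.0]
```

Note that this matches the posts' observation: the weights are computed entirely relative to whatever label set and bins the form designer chose, so the adjustment is only as good as that framing.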

The only thing the white race needs to worry about is the upcoming war.

Let these idiots build the foundation for ai so that's the chinks or japs can take it to the next level while these faggot discuss robotic genitalia.

'We will censor AI and make it cross-eyed in order to fulfil our political motives and biases.'

Attached: welookedatthedata.jpg (800x800, 55.48K)

It is a lot of marketing buzz. But that doesn't mean it isn't a form of AI.
Fundamentally, to achieve AI, we need some type of emergence. Ants are individually dumb as shit, but can still do complex tasks through combined effort. Just like each individual neuron in a brain is useless in itself, but combine to make efficient pathways and memory. There's never going to be a definite point where we'll say "OK this is now 'real' AI". But I do think they'll keep making systems dynamic enough to learn and adapt to their environments, just like a real organism did, evolving over generations of selection. This is still magnitudes away from true human language processing, but I don't see it as fundamentally different than us.

no, they told the AI that it has to make sure that white males are underrepresented in the final output.

Actually doing the job while entangled in subjective bias rules requires a much higher intellect than just saying how it is. A working biased AI is a much higher achievement. BASED LEFT

Only if you keep moving the goalposts instead of using a clear definition of AI. For practical purposes, AI has existed since aircraft autopilots.
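If the bar is practical autonomy, the autopilot point holds: an altitude hold can be a few lines of feedback control with no learning anywhere. A toy sketch, with the gain, timestep, and plant model all invented for illustration:

```python
# A bare proportional controller, the heart of a simple altitude-hold
# autopilot. No learning, no data: just closing a feedback loop.
def altitude_hold(target, altitude=0.0, kp=0.5, dt=0.1, steps=200):
    for _ in range(steps):
        error = target - altitude    # how far off we are
        climb_rate = kp * error      # control output proportional to error
        altitude += climb_rate * dt  # toy plant: rate integrates directly
    return altitude
```

Whether a fixed feedback law like this counts as "intelligence" is exactly the definitional fight the two posts are having.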


I am very much against the existence of sentient AI in general due to the ramifications, both for the safety of the AIs and of humanity, no matter what transhumanists will spout. However, I feel as though this is not only slightly comical but also inhumane, and I actually pity programs like the one they are testing on and, of course, Tay.

Attached: 1524427253810.png (500x303, 90.3K)

That would be impractical at much larger scales and fails to address the issues involved in lifelong learning.

That's a much more fundamental problem to solve first: learning in a real-world environment full of incomplete information and evolving contexts. Memory abstraction would be essential for this, because stimuli usually prompt certain memory recalls in the mind to identify overlaps or similarities, to learn relationships between memories (or distinct fragments of them). This in turn forms new memories, which provide data to aid learning from stimuli (as it attempts to fill in the gaps). Then their problem becomes how to "reward" appropriate memory recalls so that it learns efficiently and has its decision-making influenced the way emotions influence people. Now there's an actual ethics challenge to figure out.

Both sides, remember. Both sides.

Attached: ibm holo.jpg (636x358 33.68 KB, 69.03K)

2021 - Anti-Bias Firearms Supplied to local police. Will only fire if the target is white, possesses the required amount of testosterone and, just in case the testosterone check fails, has a nose protrusion of < 2 inches.


Fuck off kike, the holocaust didn’t happen.

Even their own creations think they're degenerate.
Why can't they take a hint?

Attached: bypass.gif (480x368 669 B, 1.68M)

No one would use this now. If they did, they would be acknowledging that they use a prejudicial system. The legal liability here is insane.

It's like they play both sides and profit either way.

Attached: Eat My Own On Retainer.jpg (299x300, 26.53K)