Technocracy

This was one of the topics that got banned swiftly in past years, and I'd like some discussion about it.
PLEASE NOTE:
I'm not saying Technocracy is possible or advisable.
I simply want to discuss its effects, pros and cons.

To kick off the discussion, a thought I've had frequently:
There are some activities where humans excel over machines: image/pattern recognition, for example.
But when you do those tasks in bulk with little variation, machines surpass us.
It seems proper, then, to use machines for tasks where they perform better than humans and leave humans the tasks robots can't do well.

When it comes to government, the most basic function it performs is resource management/allocation at a national level.
And I think that's one task where robots perform exceedingly well.
When it comes to passing laws, though, robots would have problems (at least until genetic algorithms improve), and we could leave those to humans.
BUT: humans passing laws would influence the resource-managing machine process. Thus, we've got a system that can be co-opted at its source by human corruption/greed, which then mass-distributes its effects in seconds.
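
To be concrete about what I mean by bulk allocation, here's a toy sketch of the kind of problem machines already solve well, framed as a linear program. It assumes Python with scipy installed; the region weights, caps and budget are made-up illustration numbers, not a real policy model.

```python
# Toy sketch: national resource allocation as a linear program.
# The weights, caps and budget below are made-up illustration values.
import numpy as np
from scipy.optimize import linprog

weights = np.array([0.9, 0.6, 0.8])     # estimated benefit per unit spent in each region
caps    = np.array([40.0, 70.0, 50.0])  # maximum useful spend per region
budget  = 100.0                         # total budget to distribute

# linprog minimizes, so negate the benefit to maximize it.
result = linprog(
    c=-weights,
    A_ub=np.ones((1, 3)), b_ub=[budget],   # total spend <= budget
    bounds=[(0.0, cap) for cap in caps],   # per-region spending limits
    method="highs",
)
print("allocation per region:", result.x)
print("total benefit:", -result.fun)
```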

That co-opting risk is the main problem I can see with Technocracy, the age-old question: "Who will guard the Guards?"

There are no pros. There are only cons.

I have a short but sweet answer: open-source government. Let those who are intelligent enough to program such an AI be randomly selected to review the code from time to time. Basically, use people's feedback to constantly improve the functions associated with it. Needless to say, feedback will only be accepted and acted upon if the person meets a minimum IQ threshold.

The problem with an AI-based technocracy is this: humans actually sort of suck at knowing what they want a machine to do.

For instance, we often train neural networks by performing gradient descent on some error function, and this error function isn't always "good". For images, we often use MSE, which is actually a pretty trash error function for that job.

Now, imagine that we scale that up to things that are way more important. Instead of "lol the machine realized it can just send a black image to minimize the error. clever," we get "lol the machine told us to go kill everyone ever and burn the entire earth to minimize the amount of suffering over the next 1,000,000 years."
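
To put a rough number on that failure mode, here's a toy sketch (assuming numpy; the "image" is a made-up 64x64 array): on a mostly-dark target, an all-black output already scores a lower MSE than an honest but noisy reconstruction, so gradient descent is happy to settle for the degenerate answer.

```python
# Toy sketch of the MSE failure mode: on a mostly-dark image, outputting
# all-black pixels already gives a tiny error, even though it throws away
# the one bright feature we actually care about.
import numpy as np

rng = np.random.default_rng(0)
target = np.zeros((64, 64))
target[30:34, 30:34] = 1.0          # small bright feature we care about

black = np.zeros_like(target)                        # degenerate "solution"
noisy = target + rng.normal(0.0, 0.2, target.shape)  # honest but noisy reconstruction

def mse(a, b):
    return np.mean((a - b) ** 2)

print("MSE of all-black output:    ", mse(black, target))  # ~0.004, despite losing the feature
print("MSE of noisy reconstruction:", mse(noisy, target))  # ~0.04, yet far more faithful
```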

Even without the corrupting influence of Jewgle et. al., this is a terrible idea. Even in the hands of caring people who are trying to be benevolent, this can backfire in horrifying and terrible ways.

And then Jews redefine IQ such that antisemetic views are considered an automatic fail. The issue about implementing gates to politics is that jews will invariably strive to become the gatekeepers for their own political ends.

Obviously I was implying that no jews would EVER EVER be considered for a government role again. Their insane crime and nepotism is not allowed in white societies.

No. Cryptojews are a meme. The ebil Nadsees had many half-jews, but given that they tried to genocide us I am going to have to say that all who do not reform their fucked up views or emigrate immediately to their shitty Pissrael will be lolacausted for real this time.

Age-old answer. Just indoctrinate the Guards.

If you think scientists don't lean left or right (usually left) you're delusional. Not only they're huge scumbags when it comes to politics, they also love money. Many fields were hindered by lefty politics, genetics for example would be 15-20 years ahead if literal retards, like those that claim there is no race, didn't mess with data and research and funds and their nepotism.

AI is another subject as that too is coded by humans so it's faulty to begin with, and any real application would require decades of advancement with computers. By then it won't even matter if we don't do something first about all these shitskins. If you want a chance at a technocracy reform the whole educational infrastructure, tear down colleges and universities and kill all the fucking retards, Only then you can begin to cultivate the next generation of capable minds.

The thing I'm starting to think, from a philosophical point of view, is that you need Guards because you have a Gate.
And that gate must be closed to protect something. But if you just wanted it protected, you'd build a wall. The difference between a Gate and a Wall is that a Gate allows conditioned access. Conditioned by the aforementioned Guards. The Guards have the ultimate say on your access and will let you in or not of their own volition.

The solution, I think, is to build a better gate. One that does not require Gatekeepers.
As said, the problem with Gatekeepers is their inevitable corruption. We can blame the Jews for that, but from a deterministic point of view, the Jews are inconsequential: Jew or no Jew, someone will eventually subvert the Gatekeepers.

I idealize a Technocratic Government as "The Perfect Gate".
A Gate that is secure without a single Guard.
This Gate must be reactive and "hear" as well as "see" the demands of the people it governs.
The worst difficulty is, as said, how it would change itself. Our current tech would ensure a monumental fuckup within a couple of months. Thus, we must let people modify it somehow. Kind of like the lawmakers we currently have. But we should prevent them from becoming the very Gatekeepers we seek to abolish.
Thus, I had an idea.
Neo-Democracy. It's only possible through technology. Dismiss government and rulers. Every decision, every state/national question, is posed to its citizens over a time frame that ranges from a couple of hours to days (depending on how serious it is).
Any citizen registered in the country can vote on it using an online platform, BUT, to vote on any given issue you need to answer a short quiz or elaborate on what the issue means to you. This would prevent the mentally challenged or the badly informed from influencing decisions. Machine learning can generate such quizzes or interpret a short text where you lay out your reasoning.
This would allow every decision to be made organically, by people who know enough about an issue for their opinion to hold weight, and it would prevent corruption from external sources.
Until we factor in hacking, but that's another can o' worms.
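
Before anyone asks, a rough sketch of what that quiz gate could look like in code. Everything in it is hypothetical: the Issue/cast_vote names, the example questions and the 0.8 pass threshold are illustration only, and real answer-checking would need the machine-learning piece described above rather than exact string matching.

```python
# Toy sketch of the "answer a quiz before you vote" gate described above.
# The issue, quiz questions and pass threshold are all hypothetical.
from dataclasses import dataclass, field

@dataclass
class Issue:
    title: str
    quiz: list                                   # list of (question, correct_answer) pairs
    votes: dict = field(default_factory=dict)    # citizen_id -> "yes" / "no"

def cast_vote(issue, citizen_id, answers, choice, pass_threshold=0.8):
    """Accept the vote only if the citizen clears the comprehension quiz."""
    correct = sum(1 for (_, truth), given in zip(issue.quiz, answers) if given == truth)
    if correct / len(issue.quiz) < pass_threshold:
        return False                             # not informed enough; vote rejected
    issue.votes[citizen_id] = choice             # one vote per registered citizen
    return True

issue = Issue(
    title="Build a new dam on the northern river?",
    quiz=[("Which river is affected?", "northern"),
          ("Who pays for construction?", "state budget")],
)
print(cast_vote(issue, "citizen-42", ["northern", "state budget"], "yes"))  # True: vote counted
print(cast_vote(issue, "citizen-77", ["southern", "no idea"], "no"))        # False: vote rejected
print(issue.votes)
```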

It's not true evolutionary machine government, but in a pinch it would do.
What do you guys think?

I can see where posts like yours come from, but please don't toss all scientists in the same bin like that.
Your opinion is most likely formed by the scientists you see. Aka: the one's that come out on scientific texts, get the awards and the money. 50 years ago, those would be signs of merit, nowadays it's a sign of your (((friends))) to keep up the good work.

There are other scientists out there, some right-leaning, some centrist. Don't let the pompous leftist pseudo-scientists lead you to hate all of them, user.

AI is the best example of this. Most "unknown" scientists working in the field are just exploring the possibilities and pushing the current tech to its limits.
What you see making headlines however are a bunch of kikes taking code they didn't write, slapping it on a doll and calling it "Real AI" while discoursing on all the markets they can tap with it.
These people are not scientists. They're marketeers in labcoats.

Another problem is that decisions themselves aren't simple linear deductions either. They would involve relational hierarchies of many inferences generated by a mixture of stimuli, memory and imagination, based on stimuli given at a specific time and situation. So instead of acting directly on input based on what it's trained for, it would also have to create contextual abstractions and meta-learning policies to actually think and learn about its decisions.

Because the biggest hurdle I see with AI, even should it reach AGI status, is how it will handle human contradictions in beliefs due to emotional situations. Such problems require it to have a deeper understanding of humans, a capacity to act with bias to fit the situation, and a sense of empathy (even if from an intellectualized viewpoint).

It wouldn't. The entire point of a technocratic government is to remove such factors from decision-making.
Keep the "human" bits we have in our daily life, but never let them influence decisions.
For instance: seeing those refugee children dead on the beach surely melted a lot of hearts in high positions all around the world.
Logically, it doesn't make sense. An AI wouldn't value those kids higher than it values the kids of its own nation; it would probably value its own kids higher, since they have a better chance of finishing higher education.

The essence of being a Technocrat is that you don't deny the human condition and its myriad of emotions, but strive instead to find methods by which it can be separated from critical tasks such as governing a nation.

Another example would be in Justice.
I find it really weird that we still have human judges. The Law is Blind (as it is commonly represented, as well), and the sentences it assigns criminals should be handed down by something not prone to feeling sorry for the criminal (or overly angry at them).
An A.I. Judge wouldn't let immigrants get away with crime because "they didn't know better". It would apply the same criteria it applies to its citizens, because the moment that immigrant got into the country and signed his permit, he agreed to be treated like anyone else living there.

Checked and rekt. Now you know why you are always filling out those captchas on all the big web pages: they feed the AI. The AI is already working on becoming better at recognizing patterns by having people solve them and learning from what the people do.

I'm against it.

As it stands with our current situation, technology will only further destroy us. Censorship, surveillance, "smart" devices will all be ways to track us and silence us. RFID chips are coming en masse: the shit in Sweden with people doing it voluntarily, and in the UK with compulsory dog chipping. The end goal will be some hell of 1984 and Brave New World with a little Fahrenheit 451 and a dash of Stand on Zanzibar. Overcrowded cities hooked into the net, RFID chips storing all your data, your whereabouts, your finances, smart devices in your pocket and all around your tiny apartment recording and uploading everything you do and say. Commit thoughtcrime and your touchless, digital transactions end. No smart car to take you around, and you can't drive yourself since manual cars will be banned.

That's the Jewish part though. Technology can be good for us. The issue I have ultimately is that technology will essentially push us all into quaternary work. Doctors, lawyers, programmers, politicians. Sounds nice on paper, but not everyone can manage these complex and demanding jobs. It takes special people to do these. It eliminates hierarchy of society and civilization but means either some work for the many who cannot work or we aggressively uplift everyone else to that level.

Then we have to consider what the end goal of technology and progress ultimately is. Is it to make work easier or to eliminate work? Is it to take us out of the grind of basic survival so we can focus on transportation, colonization, exploration? Is technology meant to free us of the bondage of work and slavery, or to be chains we find comforting?

The problem I'm thinking of is more along the lines of someone in authority giving a military AI an order that is clearly insane and beyond dangerous, in a fit of rage during a DEFCON 1 situation. Another scenario is a medical AI noticing a patient is distressed and needs calming down before it attempts to handle them, because being forceful or using a sedative would endanger their life or others around them. These kinds of situations are unavoidable even with a human-in-the-loop emphasis (as all it takes is for that human to go berserk or not see the problem for what it is).

Those kinds of situations explicitly require an understanding of human emotion, and of the human tendency to become irrational in the heat of the moment, in order to make the decision that's needed, not necessarily wanted, while understanding how people might react to it.

Nah, I don't buy that.
The only reason captchas are a thing is that OCR software stopped being developed 10 or so years ago. Feed a computer a picture and it fucking chokes trying to understand it.
Since neural networks have been evolving, and they DO have uses for OCR, I can see captchas becoming obsolete in a few years.
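
For context, this is the kind of model people mean when they say neural nets can do OCR now: a tiny convolutional classifier for single characters. A minimal sketch assuming PyTorch; the TinyOCR name, the 28x28 input size and the random "glyphs" are made up for illustration, and real use would need labeled training data (MNIST-style digits or cropped captcha characters).

```python
# Minimal sketch of a character-level OCR model, assuming PyTorch.
# Training data and the training loop are left out for brevity.
import torch
import torch.nn as nn

class TinyOCR(nn.Module):
    def __init__(self, n_classes=36):               # 26 letters + 10 digits
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 28x28 -> 14x14
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyOCR()
fake_glyphs = torch.randn(8, 1, 28, 28)   # stand-in for cropped captcha characters
logits = model(fake_glyphs)
print(logits.shape)                       # torch.Size([8, 36]), one score per character class
```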

Plans are already in the works. I'm sure (((they'll))) have all the kinks worked out before it goes online in tel aviv.

They want it to go through. Urban sprawls of ghettos for the goyim, high end apartments policed by state of the robotics and AI for (((them))).

why do you want to get rid of emotional influence? you are using a very sick mankind as your reference point, whereas if society were healthy then we wouldn't be having these problems like the refugee child thing. man *is capable* of not being ruled by his emotions. even if an ai were able to 'understand' what emotions are and all that (maybe it could create a million perfect simulations representing the chemical reactions that a person would have in such a situation, or something, idk), is this better than a man who can simply put his emotions aside without much effort? sure, the price is suffering, but at least we can trust the reliability of this - what will the cost of ai decision-making be?
if we were in a world free of nonwhites, what purpose would there be for an ai? why would we need it? what good could it possibly do that we cant do? and reality probably isnt an ai-friendly vacuum either. humans get sick - can an ai get sick? so much more complication. inb4 "just let the ai handle that. fractals are no problemo"
it seems to me like the goal of all these promoters of ai is nothing but an easier time with things, at the potential cost of the entire species. it can very likely be immediately catastrophic in so many ways, and for what? so you don't have to make tough decisions? or so that the economy might be a little more squeaky clean?
*using logic* is very different from *actually thinking with your human brain*. the disconnect today is already big enough; i don't think that adding further distance, let alone COMPLETE SEPARATION, could be very good
why is it that i feel the complete opposite of this? if anything, ai should be limited to the most menial and low-level tasks, like opening a door to the supermarket. why would you just willingly allow it to have full control of everything? we don't even know what it's like and you're already talking about making it king. where is your faith in humanity? is man not good enough? is he a child incapable of leading himself?
i don't want to make an allusion to cuckoldry, but come on. why are people like this? am i wrong? why is it deemed okay to hand over all of humanity to non-human actors? what if an alien came down from outer space and seemed really smart, and before even knowing how friendly it was we just handed over all of our government and law to it? that's silly
aren't the words of law that are printed on paper already completely incapable of pity or anger towards criminals? the ai would still have to be monitored and ultimately kept on a leash by men; this would be the same function as a judge. unless you mean to give ultimate permanent power to the ai, which to me seems like heresy. i don't want robot shit being in charge of and having final say over the life and death of my people. man should be governed by men.
i want to exterminate all threats to humanity and all possible usurpers of the dominion of man. to create more of them is the complete opposite of this

Basic problem: how do you protect such an AI from the maliciousness of its creators? What mechanism will prevent this AI from being coded to combat "white male privilege"?
Like the oversight over Trump's Russia probe and surveillance?
You, niggers, do the same mistake over and over again. Give government more and more unchecked powers they can apply or not by their whims.

Problem is, technology is controlled by people who are supremely arrogant about abilities they don't have yet, or never will, and they will gladly push it out anyway with no regard for who gets hurt.
Also, remember the Highlands Forum and their rhetoric about how the people should not be free to do their own scientific investigation and should instead be completely focused on hedonism? I don't want those cunts having absolute power, no fucking way.

Oh boy, the summer is strong with this one. What resource management/allocation did fucking Mongol tribes do? What did Celtic kings do? What do African warlords do?

The most basic function of the state is monopolizing violence - protection from external threats and forcing subjects/citizens to follow whatever set of rules passes for "law" in said government's territory. States emerge naturally because they are vastly more powerful than individuals, and they breed success through the reduction of in-group violence and (usually) the enforcement of a common set of rules people have to play by. Resource re-allocation always comes after that primary function. Fundamentally, the state is simply the strongest gang around. States exist because they are evolutionarily stable and outcompete different solutions.

As long as the cons are cohens, yes there won't be pros.

This going to be the summer theme?

How does the machine generate the quizzes? What sorts of things are asked? How would a machine evaluate a free-form response? These questions lead to more questions - how do we quantify whether someone is "informed" enough to vote on an issue? There have to be some core principles to extrapolate from, I think.

Technocrats can be controlling of current political system without it going into technocracy, and I don't mean by (((them)))

We can control the underlying information that goes back and forth and manipulate it, change it to our liking.

Kojima was right again.

"Technocracy" a jew word for

> (((Futurism)))
> (((The Finders Cult)))
> (((Talpiot)))
> (((Kabbalah Tech Gulags)))

"Savior Machine" by David Bowie

President Joe once had a dream
The world held his hand, gave their pledge
So he told them his scheme for a savior machine

They called it the Prayer, its answer was law
Its logic stopped war, gave them food
How they adored till it cried in its boredom

Please don't believe in me, please disagree with me
Life is too easy, a plague seems quite feasible now
Or maybe a war, or I may kill you all

Don't let me stay, don't let me stay
My logic says burn so send me away
Your minds are too green, I despise all I've seen
You can't stake your lives on a savior machine

I need you flying, and I'll show that dying
Is living beyond reason, sacred dimension of time
I perceive every sign, I can steal every mind

Don't let me stay, don't let me stay
My logic says burn so send me away
Your minds are too green, I despise all I've seen
You can't stake your lives on a savior machine

Nope.
The main problem, one that even encompasses yours, is that flawed humans will make flawed AI.
So not only is that AI a bad imitation of consciousness that doesn't even contain a soul, it also contains all the unintentional flaws that come with the human psyche, autistically extrapolated and enlarged by the soulless AI.

Human future doesn't lie in AI. At least it doesn't if Aryans/Europeans win. It lies in enlightenment. The right hand path.
The robot/AI control is strived towards by the jews/communists/mystery schools which are bound to the material world. The AI is already (see Minerva) basically their G_d as you can see the almost masturbatory attention that Drudge gives to articles related to it.
And as dumb and arrogant as they are, they think they can control it. We'll get this ending:
Basically the Borg. The left hand path.

All the good MGS games were written by Tomokazu Fukushima.