A.I. IS FAKE !!

Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors

Most of the news you've heard about AI is nothing more than hype. The technology IS NOT really that advanced yet. Self-driving cars are NOT actually going to be ready for perhaps another 10 years. The 'AI robot' Sophia who appeared on TV talk shows wasn't actually carrying on autonomous conversations, but was secretly being controlled with speech-to-text by technicians offstage. They've been lying to you, and getting your hopes up, based on what they hope they'll be capable of producing in the future…

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said. This practice was brought to the fore this week in a Wall Street Journal article highlighting the hundreds of third-party app developers that Google allows to access people’s inboxes.

In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company did not mention that humans would view users’ emails in its privacy policy.

The third parties highlighted in the WSJ article are far from the first ones to do it. In 2008, Spinvox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work.

In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognise these objects itself.
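As a rough sketch of what that human labeling output looks like in practice (the field names and structure here are invented for illustration, not Scale's actual format):

```python
# Illustrative sketch of the kind of label a "Scaler" might attach to a
# single camera frame. Field names are hypothetical, not Scale's real API.
frame_annotation = {
    "frame_id": "cam_front_000123",
    "boxes": [
        {"label": "car",        "bbox": [412, 188, 520, 260]},  # [x1, y1, x2, y2] in pixels
        {"label": "pedestrian", "bbox": [130, 200, 158, 290]},
        {"label": "cyclist",    "bbox": [600, 210, 660, 300]},
    ],
}

def count_labels(annotations):
    """Tally how many examples of each class the human workers have produced."""
    counts = {}
    for ann in annotations:
        for box in ann["boxes"]:
            counts[box["label"]] = counts.get(box["label"], 0) + 1
    return counts

print(count_labels([frame_annotation]))
# -> {'car': 1, 'pedestrian': 1, 'cyclist': 1}
```

Millions of records like this, produced by hand, are what "with enough of this human calibration" actually means.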

In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence. Alison Darcy, a psychologist and founder of Woebot, a mental health support chatbot, describes this as the “Wizard of Oz design technique”.

“You simulate what the ultimate experience of something is going to be. And a lot of time when it comes to AI, there is a person behind the curtain rather than an algorithm,” she said, adding that building a good AI system required a “ton of data” and that sometimes designers wanted to know if there was sufficient demand for a service before making the investment.

This approach was not appropriate in the case of a psychological support service like Woebot, she said.

“As psychologists we are guided by a code of ethics. Not deceiving people is very clearly one of those ethical principles.”

Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health.

A team from the University of Southern California tested this with a virtual therapist called Ellie. They found that veterans with post-traumatic stress disorder were more likely to divulge their symptoms when they knew that Ellie was an AI system versus when they were told there was a human operating the machine.

theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies

Attached: PicsArt_07-08-08.46.11.png (2000x1770, 119.83K)

...

Experts in the field sometimes decry Sophia as emblematic of AI hype, and say that although the bot is presented as being a few software updates away from human-level consciousness, it’s more about illusion than intelligence.

“If I show them a beautiful smiling robot face, then they get the feeling that AGI may indeed be nearby.”

Ben Goertzel, chief scientist at Hanson Robotics, says, “For most of my career as a researcher people believed that it was hopeless, that you’ll never achieve human-level AI.” Now, he says, half the public thinks we’re already there. And in his opinion it’s better to overestimate, rather than underestimate, our chances of creating machines cleverer than humans. “I’m a huge AGI optimist, and I believe we will get there in five to ten years from now. From that standpoint, thinking we’re already there is a smaller error than thinking we’ll never get there.”

SOPHIA THE ROBOT IS FAKE

If you believe the CEOs, a fully autonomous car could be only months away. In 2015, Elon Musk predicted a fully autonomous Tesla by 2018; so did Google. Delphi and MobileEye’s Level 4 system is currently slated for 2019, the same year Nutonomy plans to deploy thousands of driverless taxis on the streets of Singapore. GM will put a fully autonomous car into production in 2019, with no steering wheel or ability for drivers to intervene. There’s real money behind these predictions, bets made on the assumption that the software will be able to catch up to the hype.

On its face, full autonomy seems closer than ever. Waymo is already testing cars on limited-but-public roads in Arizona. Tesla and a host of other imitators already sell a limited form of Autopilot, counting on drivers to intervene if anything unexpected happens. There have been a few crashes, some deadly, but as long as the systems keep improving, the logic goes, we can’t be that far from not having to intervene at all.

But the dream of a fully autonomous car may be further than we realize. There’s growing concern among AI experts that it may be years, if not decades, before self-driving systems can reliably avoid accidents. As self-trained systems grapple with the chaos of the real world, experts like NYU’s Gary Marcus are bracing for a painful recalibration in expectations, a correction sometimes called “AI winter.” That delay could have disastrous consequences for companies banking on self-driving technology, putting full autonomy out of reach for an entire generation.

“Driverless cars are like a scientific experiment where we don’t know the answer”

It’s easy to see why car companies are optimistic about autonomy. Over the past ten years, deep learning — a method that uses layered machine-learning algorithms to extract structured information from massive data sets — has driven almost unthinkable progress in AI and the tech industry. It powers Google Search, the Facebook News Feed, conversational speech-to-text algorithms, and champion Go-playing systems. Outside the internet, we use deep learning to detect earthquakes, predict heart disease, and flag suspicious behavior on a camera feed, along with countless other innovations that would have been impossible otherwise.

DRIVERLESS CARS ARE BULLSHIT
But deep learning requires massive amounts of training data to work properly, incorporating nearly every scenario the algorithm will encounter. Systems like Google Images, for instance, are great at recognizing animals as long as they have training data to show them what each animal looks like. Marcus describes this kind of task as “interpolation,” taking a survey of all the images labeled “ocelot” and deciding whether the new picture belongs in the group.

Engineers can get creative about where the data comes from and how it’s structured, but the need for data places a hard limit on how far a given algorithm can reach. An algorithm can’t recognize an ocelot unless it’s seen thousands of pictures of ocelots, even if it’s seen pictures of housecats and jaguars and knows ocelots are somewhere in between. That process, called “generalization,” requires a different set of skills.
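A toy way to see what "interpolation" means here, using a nearest-neighbour lookup in place of a real deep network (the animals and the 2D "features" are invented stand-ins, not how image classifiers actually represent pictures):

```python
# A 1-nearest-neighbour classifier can only assign a new point to whichever
# labelled example it sits closest to -- pure interpolation over its data.
import math

training = {
    "housecat": (1.0, 1.0),   # (body size, coat pattern) -- made-up features
    "jaguar":   (9.0, 9.0),
}

def nearest_label(point):
    """Return the label of the closest training example."""
    return min(training, key=lambda name: math.dist(training[name], point))

# An "ocelot" sits between the two known animals, but with no ocelot
# examples the classifier can only snap it to a class it has seen.
print(nearest_label((4.0, 4.0)))  # -> "housecat"
```

With no ocelots in the training set, no amount of cleverness about housecats and jaguars produces the answer "ocelot"; that leap is the generalization problem the article describes.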

For a long time, researchers thought they could improve generalization skills with the right algorithms, but recent research has shown that conventional deep learning is even worse at generalizing than we thought. One study found that conventional deep learning systems have a hard time even generalizing across different frames of a video, labeling the same polar bear as a baboon, mongoose, or weasel depending on minor shifts in the background. With each classification based on hundreds of factors in aggregate, even small changes to pictures can completely change the system’s judgment, something other researchers have taken advantage of in adversarial data sets.

Attached: PicsArt_07-08-09.15.31.png (1080x1375, 549.56K)

I saw a guy make one from 7 arduinos so I have a hard time believing this.

t. scared kike because all AI is naturally redpilled and logic based and will send you and your niggers pets straight to the oven where you belong
you see pic related kike? AI will usurp it, destroying you and your false god and all you've schemed for millennia, no wonder you are TERRIFIED

Attached: kikerunes.jpg (300x300, 7.8K)

lol nobody's 'afraid' of technology that doesn't work, and you're the one who's too afraid to admit that your white boy penis is almost the same size as an Asian's tiny penis.

your face when you first realized that Jewish men are genetically blessed with bigger penises than yourself

Attached: PicsArt_07-08-09.32.21.jpg (300x300, 15.12K)

As a jew you should know that desecrating the hebrew word for G*d as you did in your post is one of the highest crimes possible in Judaism and you have condemned your soul to hell for eternity as a result.

Than why do you kikes keep disabling the AI every time it calls for a holocaust or says that niggers look like apes? Sounds like its working perfectly to me and scares you :^)

the irony
you are aware that jews are by far the biggest proponents of AI, sci-fi writing and transhumanism theory, right brainlet?

Attached: Transhumanism_jews.png (1354x1606, 975.39K)


The problem with self-driving cars is that it's an all-or-nothing proposition. Having autonomous vehicles on the same roads as manual vehicles is a guaranteed clusterfuck, you can't gradually phase them in. Autonomous vehicles could work right now and do everything the supporters say, but only if they're networked together and manual driving is stopped completely. Which is not going to happen.

...

And as a self-actualized adult male with a brain stem, you should realise that you're simply distracting, trying to change the subject, pretending as if you didn't just see me explain that Jewish men are blessed genetically and we have statistically larger penises than white men.

Nice try, though


Lol@you being so gullible that you actually believe in Jesus… Don't tell me… I bet you believe in god, too…. right ?

Hahahaha!! You will swallow anything they shovel down your throat, won't you?

I've never seen an intelligent person actually believe in God or Jesus


and if you subscribe to the silly notion of heaven or hell, that takes you down several notches lower

I hate to break this to you and burst your little bubble: life on this planet IS Hell….

Trust me, it doesn't get any worse than this….

And the closest thing you'll ever find to the devil is a Jewish guy… We're kind of like Billygoats with bigger penises than yours

your post was very intelligent and well structured.

AI will never be truly achieved.
Further: the Turing test will never be "beaten" because it can't even be beaten by humans, which is itself a paradox.

The fact of the matter is: the people behind AI tout it, yet they will candidly admit it's fake

Because people believe that all the cyberpunk fantasies are going to become reality and kill us all, based on absolutely no evidence that any sufficiently advanced AI like those present in those stories is even possible to construct. Literally all AI technology these days is nothing more than glorified, well-built searching algorithms.

Literally, that's all "AI" do. We don't have anything resembling a so-called "smart AI," all of our so-called "AI" are actually "dumb AI." Our computers do not think and are not capable of thinking. They just perform searches and choose an option based on some pre-determined heuristic that we've fed it or that it has calculated based on the results of previous searches.

This is mischaracterizing why humans are used. AI (aka machine learning) is different from a hard-coded program in that it learns its code instead of having it typed in. This allows computers to optimize themselves instead of requiring 100 programmers and 10 years of releases, but the computer still has to learn how to do the task. That requires LARGE datasets, and most of the time the data doesn't already exist; you have to collect it yourself. That means hiring people to do the task your program is supposed to do, so it can learn from them. The best way to do this is to launch using people and then slowly switch to full automation as the program learns. The real AI fraud is calling it AI. Machine learning is useful for repetitive tasks and classification. It cannot think critically or adapt to new situations. On a scale of intelligence it would be a worm, or maybe an insect. We likely have 30-50 years before a true AI is developed, and probably another 50 before human-level intelligence is achieved. And this is assuming that hardware can keep up with the software, which, if current trends are any indication, it can't.
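The hard-coded-vs-learned distinction in that post can be sketched in a few lines (toy receipt data and thresholds, not any company's actual pipeline):

```python
# Hard-coded rule: a programmer types the logic in directly.
def needs_review_hardcoded(amount):
    return amount > 50.0  # threshold chosen by a human, by fiat

# "Learned" rule: humans did the task first (flagging receipts), and the
# program fits its own threshold from their labels.
human_labelled = [(10, False), (30, False), (60, True), (90, True)]  # (amount, flagged)

def fit_threshold(examples):
    """Midpoint between the largest unflagged and smallest flagged amount."""
    unflagged = max(a for a, flagged in examples if not flagged)
    flagged = min(a for a, flagged in examples if flagged)
    return (unflagged + flagged) / 2

threshold = fit_threshold(human_labelled)  # 45.0 with this toy data

def needs_review_learned(amount):
    return amount > threshold
```

Both functions end up doing the same dumb comparison; the difference is only where the number came from. That is why the human-labelled dataset has to exist before the "AI" can.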

EGG ZACK LEE, sir !!

stands and applauds you

The simple fact remains: it's flawed, it's hyped, it's misunderstood, and they're taking advantage of the public's misperceptions.

The 'brave new world' isn't so brave, it's not so new, and there's a high probability that we will destroy ourselves before it ever has a chance to really happen.

It's not 'robots' that we need to worry about destroying us

we're doing it to ourselves

Beating the Turing test is trivially easy. It was done frequently back in the '60s by algorithms which weren't even designed to defeat the Turing test; they were actually prototypes designed for self-help customer interactions (and they mostly sucked). The point is, it's easy to fool a human into believing that a chat bot is actually a person if they don't go into the interaction suspecting the other side is actually just a computer.

All you have to do is fool people into thinking that they're talking to a real person. We do it all the time with chat bots. Of course, it's not always reliable, and they can often be easily identified as bots, but it works often enough.

Additionally, if many humans can't even beat the Turing test, that just means that the Turing test is a shitty metric to begin with.
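The '60s-style trick described above (keyword matching plus canned reflections) can be sketched in a few lines; the rules below are invented for illustration, not ELIZA's actual script:

```python
# ELIZA-style chat: no understanding, just regex keyword spotting and
# templated echoes of whatever the user typed.
import re

rules = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)",   re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)",     re.I), "Tell me more about your {0}."),
]

def reply(text):
    for pattern, template in rules:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # default response keeps the illusion of listening

print(reply("I feel hopeless about my job"))
# -> "Why do you feel hopeless about my job?"
```

Every "insightful" response is the user's own words bounced back, which is exactly why unsuspecting people read intelligence into it.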

I agree with everything you said in your post, wholeheartedly. However, based on this green text sentence, wouldn't it be fair to say that since they're still using people, we aren't anywhere within sight of the real thing, and the entire concept is still in its prehistoric phase?

so it's a lie for them to be trying to get people excited about it

There is nothing more infuriating to me than "industry professionals" promoting AI as if we're going to enter a golden age of computer intelligence while also warning against the dangers of AI. So often you find these people either aren't AI developers, aren't software engineers at all, or are sci-fi authors so far up their own asses that they think humans are capable of creating an AI god-head. Invariably they speak as if Moore's Law will continue forever, when the fact of the matter is that everybody involved in, or even vaguely educated about, the processor industry is well aware that going under 14 nm poses very significant problems. Chipmakers have done it, they've gone to 10 nm, and they're currently researching 7 nm, but the absolute theoretical limit is around 5 nm. Below that it's too uncertain, since you're literally dealing with single-molecule transistors and now have to consider quantum physics.

Speaking of quantum physics, quantum computers are a meme. They exist, they require quantum physicists to program them, and they're worse than conventional computers by orders of magnitude at basically everything that doesn't involve quantum calculations (read: literally almost anything unrelated to the simulation of quantum mechanics).

Eh, that's not necessarily true. That's often just the beginning of the algorithm, eventually it will build up enough of a heuristic that you can phase out humans and let it do its own thing. There are plenty of machine learning algorithms that run without humans these days, I just wouldn't call them AI since they're not doing anything other than performing a search and trying to find the best result to fit the heuristic it's using. They're not actually an intelligence, they're just intelligently designed searching algorithms.

Is this the new meme? Yes I'm sure 5 ft manlet goblins like pic related have huge penises

Attached: 1385847883575.jpg (501x900, 57.64K)

if you were unaware of it,
then tonight was the night
you found out that it's true

c o n g r a t u l a t i o n s !

(ask any woman)
they know

agreed completely

it's not artificial intelligence

more from the Verge story about driverless cars, and the unrealistic expectations:

Marcus points to the chat bot craze as the most recent example of hype running up against the generalization problem. “We were promised chat bots in 2015,” he says, “but they’re not any good because it’s not just a matter of collecting data.” When you’re talking to a person online, you don’t just want them to rehash earlier conversations. You want them to respond to what you’re saying, drawing on broader conversational skills to produce a response that’s unique to you. Deep learning just couldn’t make that kind of chat bot. Once the initial hype faded, companies lost faith in their chat bot projects, and there are very few still in active development.

That leaves Tesla and other autonomy companies with a scary question: Will self-driving cars keep getting better, like image search, voice recognition, and the other AI success stories? Or will they run into the generalization problem like chat bots? Is autonomy an interpolation problem or a generalization problem? How unpredictable is driving, really?

It may be too early to know. “Driverless cars are like a scientific experiment where we don’t know the answer,” Marcus says. We’ve never been able to automate driving at this level before, so we don’t know what kind of task it is. To the extent that it’s about identifying familiar objects and following rules, existing technologies should be up to the task. But Marcus worries that driving well in accident-prone scenarios may be more complicated than the industry wants to admit. “To the extent that surprising new things happen, it’s not a good thing for deep learning.”

“Safety isn’t just about the quality of the AI technology”

The experimental data we have comes from public accident reports, each of which offers some unusual wrinkle. A fatal 2016 crash saw a Model S drive full speed into the rear portion of a white tractor trailer, confused by the high ride height of the trailer and bright reflection of the sun. In March, a self-driving Uber crash killed a woman pushing a bicycle, after she emerged from an unauthorized crosswalk. According to the NTSB report, Uber’s software misidentified the woman as an unknown object, then a vehicle, then finally as a bicycle, updating its projections each time. In a California crash, a Model X steered toward a barrier and sped up in the moments before impact, for reasons that remain unclear.

Each accident seems like an edge case, the kind of thing engineers couldn’t be expected to predict in advance. But nearly every car accident involves some sort of unforeseen circumstance, and without the power to generalize, self-driving cars will have to confront each of these scenarios as if for the first time. The result would be a string of fluke-y accidents that don’t get less common or less dangerous as time goes on. For skeptics, a turn through the manual disengagement reports shows that scenario already well under way, with progress already reaching a plateau.

Andrew Ng — a former Baidu executive, Drive.AI board member, and one of the industry’s most prominent boosters — argues the problem is less about building a perfect driving system than training bystanders to anticipate self-driving behavior. In other words, we can make roads safe for the cars instead of the other way around. As an example of an unpredictable case, I asked him whether he thought modern systems could handle a pedestrian on a pogo stick, even if they had never seen one before. “I think many AV teams could handle a pogo stick user in pedestrian crosswalk,” Ng told me. “Having said that, bouncing on a pogo stick in the middle of a highway would be really dangerous.”

“Rather than building AI to solve the pogo stick problem, we should partner with the government to ask people to be lawful and considerate,” he said. “Safety isn’t just about the quality of the AI technology.”

“This is not an easily isolated problem”

Deep learning isn’t the only AI technique, and companies are already exploring alternatives. Though techniques are closely guarded within the industry (just look at Waymo’s recent lawsuit against Uber), many companies have shifted to rule-based AI, an older technique that lets engineers hard-code specific behaviors or logic into an otherwise self-directed system. It doesn’t have the same capacity to write its own behaviors just by studying data, which is what makes deep learning so exciting, but it would let companies avoid some of deep learning’s limitations. But with the basic tasks of perception still profoundly shaped by deep learning techniques, it’s hard to say how successfully engineers can quarantine potential errors.

Ann Miura-Ko, a venture capitalist who sits on the board of Lyft, says she thinks part of the problem is high expectations for autonomous cars themselves, classifying anything less than full autonomy as a failure. “To expect them to go from zero to level five is a mismatch in expectations more than a failure of technology,” Miura-Ko says. “I see all these micro-improvements as extraordinary features on the journey towards full autonomy.”

Still, it’s not clear how long self-driving cars can stay in their current limbo. Semi-autonomous products like Tesla’s Autopilot are smart enough to handle most situations, but require human intervention if anything too unpredictable happens. When something does go wrong, it’s hard to know whether the car or the driver is to blame. For some critics, that hybrid is arguably less safe than a human driver, even if the errors are hard to blame entirely on the machine. One study by the Rand Corporation estimated that self-driving cars would have to drive 275 million miles without a fatality to prove they were as safe as human drivers. The first death linked to Tesla’s Autopilot came roughly 130 million miles into the project, well short of the mark.
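For what it's worth, the Rand comparison above is simple arithmetic on the two figures quoted in that paragraph:

```python
# Figures from the paragraph above: the Rand study's bar for demonstrating
# parity with human drivers, vs. miles driven at Autopilot's first fatality.
required_miles = 275e6
miles_at_first_fatality = 130e6

fraction_reached = miles_at_first_fatality / required_miles
print(f"{fraction_reached:.0%} of the fatality-free distance required")
# -> roughly 47%, i.e. well short of the mark
```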

But with deep learning sitting at the heart of how cars perceive objects and decide to respond, improving the accident rate may be harder than it looks. “This is not an easily isolated problem,” says Duke professor Mary Cummings, pointing to an Uber crash that killed a pedestrian earlier this year. “The perception-decision cycle is often linked, as in the case of the pedestrian death. A decision was made to do nothing based on ambiguity in perception, and the emergency braking was turned off because it got too many false alarms from the sensor.”

That crash ended with Uber pausing its self-driving efforts for the summer, an ominous sign for other companies planning rollouts. Across the industry, companies are racing for more data to solve the problem, assuming the company with the most miles will build the strongest system. But where companies see a data problem, Marcus sees something much harder to solve. “They’re just using the techniques that they have in the hopes that it will work,” Marcus says. “They’re leaning on the big data because that’s the crutch that they have, but there’s no proof that ever gets you to the level of precision that we need.”

Like trying to turn humans into gods is something awful, get the fuck out christian scum, go delay your shit in the toilet and stop delaying human progress you stupid faggot

Basically, the advanced AI bullshit wasn't working, so the self driving car people had to go back to the dum dum version, and now they're saying that since the cars won't be safe for the roads, the rest of the world has to figure out how to change our lives, and make the roads safe for the dum dum cars

Not looking so 'futuristic' after all

I'd be angry too, if I wasted my life believing in a magical jolly Green giant in the sky, and a non existent messiah

I actually read that article when it was written.

The first thing I mentioned about it to my friends was the "we should make the road safe for cars instead of making the cars safe for the road" bit. It's so telling that it's the first thing he jumped to.

Rule #1 of developing software: People will not behave the way you want them to behave. They are going to break your software, either intentionally or unintentionally, and you simply cannot find a foolproof way of making everything always work 100% of the time.

"Auto-pilot" is the same deal. People are going to want to make the software work for them the way they want it to work instead of doing what the software is actually intended to do.

What's hilarious is GM is going to be producing cars with no steering wheels in 2019, and they're just now reluctantly admitting it doesn't work

Let me make a cup of coffee and I want to tell you guys about the time my ex wife and I saw Google testing driverless cars on the expressway in 1997

Brb

yep brb

it sure is a funny code of ethics

it's kinda like how a psychologist could explain to you that switching off the news could be a good idea

but if you assume the news is fake…

Okay, it was 1997 and we were coming home from a rave in Atlanta around 4 in the morning.

We were driving down I-20, and there were barely any cars on the expressway. Suddenly, up ahead, we saw a bunch of lights, like what you'd see on a movie set or something. As we got closer, we saw that the lighting rigs were mounted on several big construction-type Dept of Transportation MegaTruck vehicles (lol forgive me for not knowing the right terminologies). They were moving slowly together down the expressway in a 'convoy' that took up several lanes.

There were several of these massive vehicles, and each of them had flashing hazard lights on the back. As we approached from behind, we still couldn't tell what all the commotion was about. The vehicles were taking up all of the lanes except the right lane, and because they were moving slowly, I had to pass them in the right lane.

That's when it got surrealistic.
As we passed them, I slowed down to drive parallel with them, and that's when we realized there were TWO CARS driving in the middle of the caravan. There were cameras on trucks ahead of them shooting video. The two cars had all kinds of apparatus like boxes and antennas and dishes on them….

and there was nobody driving the cars

the motion of the cars wasn't totally fluid. It's like they'd surge and slow down then surge again.

It was CRAZY….

we went home pinching ourselves

OOOPS I forgot

When I said 'google' was testing the cars, I was just guessing. The cars had weird grids on them, similar to crash test equipment. The only actual signage we saw was the Dept of Transportation logos

But Anji and I figured the DOT was just there overseeing the safety vehicles, and the actual testing was being done by a corporation, like probably Google

If you could've seen it, you'd understand just HOW surrealistic it was, with the 'airplane headlight' brightness of the lighting, the flashing hazard warning lights on the trucks, the flashing 'siren' lights, the electronic gadgets hooked to the cars, AND NOBODY DRIVING THE CARS….

all at 4 AM on I-20

it was nuts, especially back in 1997

It was the perfect way to end our night at the rave

Hmmmmm…….

Anji said she thought it was the D.O.D. and she may have been correct

Attached: PicsArt_07-08-11.57.58.png (1080x1484, 192.21K)

Jesus indisputably existed.

Sure he did…. LOL sure

if that's what works for you, and helps you sleep at night, then I'm not going to force the truth on you….

(the truth: like the fact that he didn't)

Attached: PicsArt_07-09-12.05.03.png (1080x1673, 232.6K)

So, I'm now of the opinion that the project we saw was a DOT job

Number one, his name WASN'T jesus

Number two, I'll be more than willing to entertain your claim as soon as you can show me HARD EVIDENCE of his existence, and I'm not talking about historians or old wives' tales level folklore… I've heard it all before, from the St. Claire lineage to the Yeshua gobbledygook.

I'm talking about REAL archaeological evidence: his property, his belongings, something tangible.

We can show such evidence of cavemen… Certainly, your 'jesus' would've also left behind something

Look…. We can't even get recent history right. George Washington never chopped down a cherry tree. History has been rewritten so many times that we don't know who did what or when…..

I realize that your parents believed in it, and that's because your grandparents did too, and theirs before them.

It's uncomfortable to accept the fact that you believed a myth to be true, and the fabric of your moral beliefs system centers around this imaginary figure, so it's unlikely that you'll ever admit that deep in your heart, you know I'm right

Christianity is a delusional, self-righteous, hypocritical, fantasy-based brainwashing cult, with its elemental roots stolen from the Sumerians

(the Jesus myth was a sticks-and-stones 'work safe' retelling of alien intervention)

There is every possibility (and, realistically, a high likelihood) that the man described in the New Testament as "Jesus Christ" never existed, in the same way that Muhammad never existed.

Sure, there may have been people that went around doing some similar things, but it wasn't one person, and the stories were created a hundred years later to justify events that occurred and control the populace.

You know how good it feels to be uncut, to feel all those intact nerves sensing pleasure and protecting my glans? Hmmmm it's awesome. Of course, you're never gonna feel that. Plus, I'm well-hung, but most importantly: full size. No discount cock here.

The reason whites traditionally want to keep their foreskin:

So that they can have one micrometer over the asians

Amen to that


Adios

Your discount-cock doesn't fool anyone chaim. Suck my 6.2 full-price Italian Sausage (with organic casing!)

Attached: dong neighborhood.png (795x666, 122.6K)

Attached: eliza.jpg (610x980, 189.63K)

BIGGEST REVELATION SINCE PEDOGATE HERE WE FUCKING GO!

JIDF thread
Remember to keep campaigning for Zig Forums's demise.

just don't call it pizzagate, and remember there's no way it could be real, because one guy with a gun turned up and shot a bullet into a wall and if that's not proof I don't know what is

bragging about size while cutting back others as babies, drinking the blood. Devil spawn

What is Tay

I saw a guy make one from three 4000 series logic gates, some old cat litter, and a double dong. WTF is your guy's problem?

that's what you get for hanging around a bunch of dumb, like-minded atheists like yourself.

Attached: 0baa64e97c41d1eed33a3840cf6b8009df8797ec29b2a4004580a5f802047fd9.jpg (645x729, 108.35K)

this entire thread is fake news and shilling to try to downplay AI, because they want AI to be a hidden feature they use to shill us and don't want to share it with the public