Content farms flooding search results

Whenever I search for something tech-related, I get thousands of low-effort, beginner-oriented content farms and tech blogs. Usually they offer no useful information, the pages are JS-laden, and whatever solutions they offer are proprietary/botnet. I have to go through 10-20 pages of links before I find a page worth reading.

What is the solution to this? Is there a trick I don't know about that everyone else is using? I have long since stopped using Google, but Startpage and searx aren't really much better.

Fucking finally, search is becoming useless and that's a good thing.

At least they are written by humans. God knows I've had enough of "Welcome to spamfarm.com, the prime source for FUTA SHITTING DICK NIPPLES. Here you'll find all kinds of FUTA SHITTING DICK NIPPLES information." Or sites that are literally just a copy of the Wikipedia article.

Create a home page link directory. Revert to the time before search engines.

Welcome to the web. Every search site has returned garbage results since at least as far back as Lycos. The lovely shit they're pulling now is ignoring quotes and minus operators. Best to stick with specific sites, or at least search those sites through a search engine, e.g.
site:site.net "dns poisoning"

Whatever search engine you're using, it doesn't give you the information gratis; it shows you mainly for-profit results.
Find good content yourself. You could use a proper search engine, YaCy for example, but the Internet is flooded with shitty proprietary botnet content anyway, so purely by probability you'll still see mainly worthless content.

Describe the features your dream search engine would have.

• Ignores robots.txt
• Ability to include or exclude whole website types in a query, e.g. -news -blog -social_media -auction
• Wouldn't mind waiting >10 minutes for a comprehensive search to complete that actually searches the entire dataset and archives. If there's a discussion of my obscure search query in an archive of a site that went down in 2002 I want to know about it.
• Ability to get a daily digest of new results for a search term I'm following long term, such as mentions of my name, or reverse-image-search results for images I've created.
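The daily-digest item is mostly plumbing. A minimal Python sketch of the core, diffing today's result URLs against everything seen on previous runs (`new_results` and the seen-file name are made up here; actually fetching URLs from an engine is left out):

```python
import json
from pathlib import Path

def new_results(todays_urls, seen_file="seen_urls.json"):
    """Return only URLs not seen on previous runs, persisting the union for next time."""
    path = Path(seen_file)
    seen = set(json.loads(path.read_text())) if path.exists() else set()
    fresh = [u for u in todays_urls if u not in seen]
    path.write_text(json.dumps(sorted(seen | set(todays_urls))))
    return fresh
```

Run it from cron once a day and mail yourself whatever `fresh` contains.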

Only the original. Then it's pasted to all those random domains that are full of ads.

It's not that. Spammers run thousands of "SEO" sites that they try to make money from. It's not something Google does.


I don't like Java either, but name one usable libre search engine that isn't written in Java. Also, be happy it isn't written in JavaScript...

How hard is it to make your own search engine from scratch anyway? How many searches can you serve from just one Linux machine? What % of a search engine's resources go to serving results vs. crawling?
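Not that hard to prototype, at least: the core of a toy engine is an inverted index. A Python sketch (`build_index`/`search` are made-up names; a real engine adds ranking, stemming, and a crawler on top):

```python
from collections import defaultdict

def build_index(docs):
    """Map each word to the set of doc ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """AND-query: return ids of docs containing every query word."""
    words = query.lower().split()
    if not words:
        return set()
    result = set(index.get(words[0], set()))
    for w in words[1:]:
        result &= index.get(w, set())
    return result
```

Serving is cheap set intersection; the expensive parts of a real engine are crawling and keeping the index fresh.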

I think you could even use a combination of the original PageRank and word content. Both were SEO'd out the wazoo in the 2000s, which is why Google and others took steps to make them unworkable. But that means nobody does that sort of SEO anymore, so a small search engine could operate unmolested.
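For reference, the original PageRank is small enough to sketch in a few lines of Python (plain power iteration; the damping factor and iteration count here are the usual textbook defaults, not anything Google-specific):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]}. Returns {page: rank}, ranks summing to 1."""
    pages = set(links) | {p for outs in links.values() for p in outs}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / n for p in pages}
        for p in pages:
            outs = links.get(p, [])
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank evenly over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

Combine this with the word index (rank = text match score times PageRank) and you have roughly 1998-era Google.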


* Option to search only the text of the site (such as what lynx would display) and not metadata
* Support for special characters so you can actually distinguish C++ from C
* Option to specify arbitrary date ranges
* Option to restrict results to sites in certain countries (determined by IP or TLD)
* API endpoint that gives a JSON of the results with a reasonable cap that prevents bots from ruining it for everyone.
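That cap could be as simple as a fixed-window counter per API key. A Python sketch (`QuotaGate` is a made-up name, and a real deployment would persist counts somewhere shared rather than in-process):

```python
import time
from collections import defaultdict

class QuotaGate:
    """Fixed-window request cap per API key, to keep bots from draining the endpoint."""

    def __init__(self, limit=100, window=86400):
        self.limit, self.window = limit, window
        # key -> [requests served this window, window start time]
        self.counts = defaultdict(lambda: [0, 0.0])

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        count, start = self.counts[key]
        if now - start >= self.window:      # window expired: reset the counter
            count, start = 0, now
        if count >= self.limit:
            self.counts[key] = [count, start]
            return False                    # over quota: serve a 429 instead of JSON
        self.counts[key] = [count + 1, start]
        return True
```

The endpoint checks `allow(api_key)` before running the query and returns the JSON results only when it passes.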

I really like this idea. I'd also add -has_js, -breaks_without_js, -uses_cdns, -cloudflare, -paywall, -forum.

Yeah, the internet is completely losing its appeal for me: all the content on YouTube is filtered trash, Google is useless, Twitter is mentally grinding, Instagram is addictive trash that leaves you feeling abused, Reddit is just shills and ban-happy mods, and Zig Forums has been overrun with low-quality halfchanners.

Before Google went public back in 2004 they had good search results. Then they went public and completely destroyed the online advertising market. Around 2010 to 2013 Google's results for technical issues were really good. Google has never taken responsibility for the quality of its search product because muh algorithms. Lately they've been censoring content, so you won't find anything worth a shit on Google search. It's designed to give you bad results so you search more, which inflates Google's search volume. It probably also generates clicks on AdSense ads, since people get tired of digging through the "organic" results.

Let's start link-building for Zig Forums. Pages from Zig Forums show up at the top of DuckDuckGo practically the first day they're posted.

Also we should start a Zig Forums web directory.
Also we should start a support board for people who want to be involved in that, instead of autistically screeching about Gentoo, bloatware, and muh systemd. The primary benefit of such a board is that it would draw visitors away from big tech and grow Zig Forums's user base. It would be useful to people who are not muh dick 1337 and would probably generate some useful content.

If you post to a forum, put the 8ch/tech/ link in your forum signature. If you blog, put it in your blogroll.

Also maybe check out DMOZ. It's one of the longest-running and most comprehensive web directories. They're kind of in bed with Google, in that Google and other meta-search engines always gave a lot of weight to sites listed in DMOZ. I haven't done SEO stuff in a few years.

it's not the internet, it's humanity
corporations and advertisers are brainwashing normies, making them even dumber than they are naturally

the normies won't find out if we censor the search results.

Zig Forums is the only website I visit tbh

dmoz is dead

do you even Lucene, bro? Maybe if your language weren't a meme it'd have a better search library.

Google isn't even a search engine. It has precompiled results for common queries and shows less than 1% of what it claims to find. It has, or at least used to have, a "10,000,000 results found" counter, but it would still only show 20 pages and start throwing captchas at you if you actually browsed them.

Crawlers that ignore robots.txt are easy to detect, and every sane website admin autobans them.
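For contrast, a well-behaved crawler can honor robots.txt with nothing but the Python standard library. A sketch, feeding the rules in as literal lines (normally you'd call `set_url()` and `read()` to fetch the real file):

```python
import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
# Parse rules directly from lines instead of fetching them over the network.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

# Check paths against the rules before crawling them.
print(rp.can_fetch("MyCrawler", "https://example.com/private/page"))  # False
print(rp.can_fetch("MyCrawler", "https://example.com/public/page"))   # True
```

A crawler that calls `can_fetch()` before every request never trips the honeypot paths admins use to detect robots.txt violators.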

What the fuck are you talking about?

Why would you need to wait 10 minutes? Educate yourself about modern search algorithms.

Set it up yourself with googler & cron.
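A sketch of that setup, assuming googler is installed and its --np (no prompt), --json, and -n flags behave as documented; the paths and search term are placeholders:

```shell
#!/bin/sh
# digest.sh -- run daily from cron, e.g.:  0 7 * * * /home/me/digest.sh
TERM_TO_WATCH="my obscure search term"
TODAY=/tmp/results-today.json
SEEN=/tmp/results-seen.json

# Fetch the top 25 results as JSON, non-interactively.
googler --np --json -n 25 "$TERM_TO_WATCH" > "$TODAY"

# Print only lines that weren't in the previous run.
if [ -f "$SEEN" ]; then
    diff "$SEEN" "$TODAY" | grep '^>'
else
    cat "$TODAY"
fi
mv "$TODAY" "$SEEN"
```

Pipe the output to mail(1) in the crontab line and you have the daily digest from the wishlist above.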

unbased and bluepilled