I have a fuckhuge imageboard folder and had an idea the other day to make a system where I could expose my collection to the internet in a way that would allow other anons to download stuff and help organize it by submitting tickets to suggest changes (add, remove, move, rename).

So my question for you is this: Are there any existing solutions that I could set up that would accomplish this or would I need to cobble something together?

Attached: not even all of it.png (800x283, 42.11K)

A git repository and an issue tracker. For example, GitHub or GitLab.

github will shut him down for wrongthink and racism

That's why I said gitlab. Otherwise you can always self-host a gitlab instance.

Gitlab is also a silicon valley company

Self-hosted gitlab.

Maybe you can base it on some issue tracking software, but you'd want some kind of gallery frontend so people can just browse it without needing to clone the whole repo.

Also, git works really poorly with binary files; your .git folder will be huge once you start making changes.

Right now, hosting a gitlab instance looks appealing to me (I'm investigating it). I would have to get a server and set it up, but that isn't beyond my abilities.

Yeah, but I'm sure you'd sometimes like to upload something, remove something, etc., and eventually revert to older versions. If you know of any versioning software suitable for such a case, please gib a link.

Try Internet Archive. ISIS used it and got away with it, you should too.

Use IPFS, faggot.

botnet

If you're going the self-hosted route, Fossil has a built-in web interface. But I'm not sure if it's flexible enough to accommodate your collection of smugs.

...

In over 12 years I've collected less than 1k files and I've deleted three quarters of them (though sometimes I wish I hadn't). Do you just save everything you see?

I use Hydrus to make sure I don't collect duplicates, prune regularly, and have a soft spot for webm threads, so I've got around three to four hundred GB spread out over 200,000 files.

That looks incredible, but I'm skeptical that security is properly implemented. I wonder if there is a non-sharing fork available.

...

...

You have to have 10K+ memes collected in order to post in this thread, newfag.

Then fork it, rip out all the stuff that communicates anywhere, or go harass the dev. Wasn't he doing a strawpoll recently about what to implement next?

>>>/hydrus/

no u

Attached: b57c34c8ae97a2b9a1e4889d0520575fac46af647536102c7bbb30bc41ddcbc9-pol.jpg (546x364, 22.85K)

make a booru

You should go through your own trash. You will also be a lot happier when you trim off the fat, stop hoarding shit you don't/won't use, and only keep what you actually will. Personally, I've been organizing and adding to my document collection.

For now, you can find any information you want on the internet... but it won't be that way forever. Would be a shame if you wasted this time collecting reaction images for a dying medium.


Pretty much what OP wants.

Not user, but how would you decide what's worth keeping? The reason you/I archive in the first place is that you don't know whether you'll want to keep a file around.

Train an OpenCV image classifier using the Haskell bindings.

I archive because I know I want to use it, or have used it. I have no use for a million imageboard pics, so I don't save any. However, I have a 10GB folder of just documents (mostly PDFs). I suppose I value information more than data.

Attached: The-Design-of-Everyday-Things-Revised-and-Expanded-Edition.pdf (and so fourth.pdf)

If you have hundreds of thousands of hoarded and untagged files, you'll never find what you're looking for should you ever need a specific file anyway.

What about a db of image hashes mapped to a name and/or folder structure? The db is some sort of collective with voting etc. I would assume it would get abused in like 3 seconds though. Then just run a program that renames images according to the db (rough sketch below). Also run it in reverse and upload the names of your image hashes. Then run an AI bot that merges the names into one.

Attached: crystal_skull.jpg (500x706, 63.47K)
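
A rough sketch of just the renaming half of that idea. The names.tsv file and its hash-tab-name format are hypothetical placeholders for whatever the shared db would export:

#!/bin/sh
# Rename local images according to a shared hash -> name mapping.
# names.tsv (hypothetical): one "<sha256><TAB><desired filename>" line per image.
# Usage: ./rename_by_hash.sh names.tsv /path/to/pics
map="$1"
dir="$2"

find "$dir" -type f | while IFS= read -r f; do
    hash=$(sha256sum "$f" | cut -d' ' -f1)
    # look the hash up in the mapping; skip files nobody has named yet
    name=$(awk -F'\t' -v h="$hash" '$1 == h { print $2; exit }' "$map")
    [ -z "$name" ] && continue
    target="$(dirname "$f")/$name"
    [ "$f" = "$target" ] && continue
    echo "renaming: $f -> $target"
    mv -n -- "$f" "$target"          # -n: never clobber an existing file
done

Running it "in reverse" would just mean dumping sha256sum output for your own files and submitting that to the db.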

user I...

Attached: Screenshot from 2018-12-19 17-21-55.png (410x237 14.16 KB, 14.54K)

Maybe some booru software could import them all.

I'm also an archivist (mainly pdfs). We should build a network in the future, to make everything redundant.

literally a booru

Someday, maybe. The big problems I've had with the existing archives I've been picking clean so far are:
The only solution I've come up with is doing it myself.

We should, now the question is... how? Maybe we shouldn't be relying on the internet as much as we do.
Just use the Internet for coordinating a sneakernet of multiple terabytes per delivery, because for most people the Internet is much too slow, and it is certainly not anonymous or private.
What kind of PDFs, by the way?

now the question

this

I've been looking to start the same kind of thing and am currently considering NextCloud. Haven't verified that it will work for this purpose, but it seems like it might.

Failing that, perhaps I can build a web front-end for a NAS that lets people just directly download files. Adding requests, etc, would be lovely. My main concerns are security and bandwidth consumption.

Attached: f6f02b540bb689b0177b9385633a8e4688efff9e9824a5114b769c58f69ec769.jpg (351x352, 37.68K)

hydrusnetwork.github.io/hydrus/

Https://8ch.net/hydrus

If you could upload these to archive.org for now and work out the details of a distributed system later, that might help kickstart interest once we know what's available. Pack them into zips of 1GB each or by subfolder (quick packing sketch below) and upload; they accept any size, anytime.
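
For the by-subfolder option, packing is about this much work (the path is a placeholder; upload afterwards with the web uploader or archive.org's ia command-line tool):

#!/bin/sh
# Pack each top-level subfolder into its own zip, ready to upload piecemeal.
cd /path/to/collection || exit 1   # placeholder path
for d in */; do
    zip -r "${d%/}.zip" "$d"
done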

Isn't there anything better than hydrus?
A program that doesn't lock you into it and doesn't change your folder/file structure???

based

Nginx + tor. Just make a bare-bones basic webserver behind a v3 hidden service onion (sketch below).
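
A minimal sketch of that setup, assuming Debian-ish paths for nginx and tor; the directory names and the local port are placeholders, and tor generates the .onion hostname itself:

#!/bin/sh
# Bare-bones static file server behind a v3 onion service. Run as root.
# /srv/hoard and port 8080 are placeholders.

# nginx: serve the folder on localhost only, plain directory listing
cat > /etc/nginx/conf.d/hoard.conf <<'EOF'
server {
    listen 127.0.0.1:8080;
    root /srv/hoard;     # the collection
    autoindex on;        # bare directory listing, no frontend needed
}
EOF

# tor: map the onion's port 80 to that local listener
cat >> /etc/tor/torrc <<'EOF'
HiddenServiceDir /var/lib/tor/hoard/
HiddenServiceVersion 3
HiddenServicePort 80 127.0.0.1:8080
EOF

systemctl restart nginx tor
cat /var/lib/tor/hoard/hostname      # tor writes the .onion address here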

Torrent. Did one around 20gigs. I think people still seed it to this day.

I use tags in filenames so images are easier to find, because some images fit in multiple directories and 100GB is already too much to handle duplicates in.

and this

It runs offline if needed, what are you worried about?
Also you can just not use a public tag repo.

You can mimic file/folders with tags, if for some reason you need those.

Hold on, I've seen this exact thread before. OP, check the later pages and see if the thread's still up.

You're running Windows, user. I'm not sure you're up to this.

Most git* services would take him down in like 6 days. It's a better idea to set up a VPS with a gitlab/gitea instance.

Different user but I have 20000 copies of the anarchist cookbook.

I have only this one and it's too big for this site.

Attached: afssd.png (449x494 115.35 KB, 49.12K)

PROTIP 1: do not use windows for file hoarding.
PROTIP 2: nobody will sort your shit for you. ESPECIALLY NOT WITH FUCKING TICKETS. NO, REALLY, YOU EXPECT PEOPLE TO FILE TICKETS ABOUT WHERE TO PUT AN IMAGE? But someone might produce a better-sorted pack that you can incorporate into yours later, so putting your collection online is still god's work.
PROTIP 3: forget about git. It will only bring pain in your case.


This. Planning to publish my shit too, once I bother fixing my net


'ip netns exec' is your friend. It cannot share anything if the namespace has no interfaces (quick example below).
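
For anyone who hasn't touched network namespaces, a quick example; needs root, and the "jail" and veth names are made up:

#!/bin/sh
# Run something in its own network namespace so it can't touch real interfaces.
ip netns add jail

# With no interfaces added, the namespace only has an isolated loopback:
ip netns exec jail ip link set lo up
ip netns exec jail ping -c1 127.0.0.1   # works
ip netns exec jail ping -c1 1.1.1.1     # "Network is unreachable"

# If you do want exactly one path out later, hand the namespace a veth end:
ip link add veth-host type veth peer name veth-jail
ip link set veth-jail netns jail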


Duplication is great.

Hardlinks give you reference-counting & garbage collection for free.

Also don't bother with an index. Scan the directory, group files with the same size, hash them, relink duplicates to a single inode - 0x0.st/zihz.py (shell sketch of the idea below).

It does re-hash some groups of same-sized-but-different files on every run, but that's not a problem for a weekly batch job.
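
The linked zihz.py is the real thing; for the curious, here's a shell sketch of the same size-group/hash/relink idea. GNU find and coreutils are assumed, it uses sha256 rather than md5, and it assumes filenames without tabs or newlines:

#!/bin/sh
# Group files by size, hash the groups, hardlink identical files to one inode.
root="${1:-.}"
seen=$(mktemp)    # "hash<TAB>path" of the first file seen with each hash

# only hash files whose size occurs more than once; unique sizes can't be dupes
find "$root" -type f -printf '%s\t%p\n' |
awk -F'\t' '{ n[$1]++; line[NR] = $0 }
            END { for (i = 1; i <= NR; i++) {
                      split(line[i], a, "\t")
                      if (n[a[1]] > 1) print a[2]
                  } }' |
while IFS= read -r f; do
    h=$(sha256sum "$f" | cut -d' ' -f1)
    keeper=$(awk -F'\t' -v h="$h" '$1 == h { print $2; exit }' "$seen")
    if [ -z "$keeper" ]; then
        printf '%s\t%s\n' "$h" "$f" >> "$seen"   # first copy becomes the keeper
    else
        echo "relinking: $f -> $keeper"
        ln -f -- "$keeper" "$f"                  # replace duplicate with a hardlink
    fi
done

rm -f "$seen"

Like the real script, this re-hashes every same-size group on each run; fine for a weekly batch job.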


No, I save even shit I don't see.
66G sorted
97G unsorted
11.5T scraped

Attached: Cisco 2620MX & 2950-24.jpg (1024x683, 71.56K)

>0x0.st/zihz.py
Don't forget to replace md5 with sha.

At some point I realized that I don't actually revisit my fuckhuge collection, which is as large as yours, so I made it a habit to reduce the crap on my computer every day. So far I've halved my data collection within a couple of months, and hopefully I'll pick out anything important from the pile before lighting everything else on fire really soon.

Hoarding is bloat.

that's why i don't download these massive book collections. there are good books, but a TB or more of some random math shit is not something that i care about, and HDDs aren't free. archivists are a different thing, but those probably have an infinite supply of money and hardware, and they really like it, so they do it.

Censorship is ramping up. Hoarding is the only way we can protect from that shit. Of course I mean hoarding up useful, relevant shit. Throw away your seasons of Friends and most animu and mango.

they aren't going to censor the shit that these stupid book torrents are full of. most of the collections are probably even full of outdated information

Yes they will, just give it time.

Attached: b5bb7e9d1019c780aa65271c6f1278c8cb64a36083cf06dff0731f5335c04b48.jpg (1257x4361, 1.59M)

You could... set up a torrent. I'd seed.

Nextcloud is very bloated, I wouldn't recommend it

what would you recommend instead?
i use it every day and it has a tonne of fantastic features.

Any of the common version control systems? Host it on gitlab. jk.

Write a script to thumbnail each image. Use something like ImageMagick and a bash script, easy money. Save all thumbs to a second folder.

Get the file list from $(ls /pathtopicsfolder/).

Create a list of hyperlinks by wrapping HTML tags around each path.

Use the split command to break the list into chunks of 9 or 16.

Loop through the chunked files and turn each one into an HTML document.

ls the HTML documents and repeat a similar process to create a site map.

Takes basically no time once you get the code knocked out. Less than 50 lines (rough sketch below).

Attached: matrixback2.jpg (1152x864, 1.22M)
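
Give or take, the script described above looks something like this. A sketch, not that anon's actual code: PICS and SITE are placeholder paths, ImageMagick's mogrify does the thumbnailing, chunks of 16 as suggested.

#!/bin/bash
# Static thumbnail gallery: thumbnail everything, build a link list,
# split it into pages, wrap each page in HTML, then index the pages.
shopt -s nullglob
PICS=/path/to/pics          # placeholder: the collection
SITE=/path/to/site          # placeholder: output directory
PER_PAGE=16

mkdir -p "$SITE/thumbs" "$SITE/pages"
ln -sfn "$PICS" "$SITE/full"          # expose the originals next to the pages

# 1. thumbnails (mogrify -path writes resized copies to a second folder,
#    leaving the originals untouched)
mogrify -path "$SITE/thumbs" -thumbnail 200x200 "$PICS"/*.jpg "$PICS"/*.png

# 2. one hyperlink per image: thumbnail linking to the full-size file
for f in "$PICS"/*.jpg "$PICS"/*.png; do
    name=$(basename "$f")
    echo "<a href=\"../full/$name\"><img src=\"../thumbs/$name\"></a>"
done > "$SITE/links.txt"

# 3. break the list into chunks of $PER_PAGE links
split -l "$PER_PAGE" "$SITE/links.txt" "$SITE/pages/chunk_"

# 4. wrap each chunk in a minimal HTML document
for c in "$SITE/pages/chunk_"*; do
    { echo "<html><body>"; cat "$c"; echo "</body></html>"; } > "$c.html"
    rm "$c"
done

# 5. site map: one link per generated page
for p in "$SITE/pages/"*.html; do
    echo "<a href=\"pages/$(basename "$p")\">$(basename "$p")</a><br>"
done > "$SITE/index.html"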

Ok, 20k files.
I'm autistic and can do it. I have a lot of free time.
I'm interested in adding everything to my own collection as well.
Tell me how you want it sorted and provide a link to everything.

Attached: Amanda's Pictures 054.JPG (4320x3240, 3.28M)