Why does it have to be universal, and why a database?
Universal Decentralized Database
Do you understand what a database is or am I failing in my explanations?
Please show me how I can query all Trump quotes said between 2003 and 2008, with references to all of them, in IPFS or Freenet.
It doesn't have to be a database. Open to suggestions. Anything to share?
It has to be universal because it will be open to anybody except malicious actors.
Easy. Have the person who initially stores the tweets create an index mapping each tweet to metadata (like a timestamp). You can then use the index to query the info. It's not possible to directly query the information itself because data is identified only by its hash. Hash-based databases are key-value databases.
What you really want in your database is an index of tag metadata for each file. At that point it all basically becomes a JSON datastore.
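To make the index idea concrete, here is a minimal sketch of such a metadata index as a JSON-style datastore. The hashes and metadata fields below are made up for illustration; in practice the keys would be real IPFS CIDs and the index itself would be published as another file.

```python
import json
from datetime import datetime

# Hypothetical index mapping a content hash (e.g. an IPFS CID) to metadata.
# These hashes and entries are invented for the example.
index = {
    "QmHashA": {"speaker": "trump", "timestamp": "2004-06-01", "tags": ["quote"]},
    "QmHashB": {"speaker": "trump", "timestamp": "2010-03-15", "tags": ["quote"]},
    "QmHashC": {"speaker": "other", "timestamp": "2005-01-20", "tags": ["quote"]},
}

def query(index, speaker, start, end):
    """Return hashes whose metadata matches the speaker and date range."""
    lo, hi = datetime.fromisoformat(start), datetime.fromisoformat(end)
    return [h for h, meta in index.items()
            if meta["speaker"] == speaker
            and lo <= datetime.fromisoformat(meta["timestamp"]) <= hi]

# "All Trump quotes between 2003 and 2008" becomes a filter over the index:
print(query(index, "trump", "2003-01-01", "2008-12-31"))  # -> ['QmHashA']
```

The point is that the query runs against the index, not against the hash store itself; the store only ever answers "give me the bytes for this hash."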
Right, but what I mean is that I can't do that right now by querying IPFS, because it lacks such a function. That's the kind of functionality I'm looking to create with this universal database. As I understand it, IPFS doesn't contain a queryable database of atomic datapoints tied to non-automatable metadata.
I don't want to reinvent the wheel though so thanks for your input.
I just told you how to do it. All you need to do is create a metadata index of files you add.
You have no clue what you're talking about.
Bots galore.
So let's say the database you envision is complete. You have access to it and this information is what you want, all Trump quotes said between 2003 and 2008 with references to all of them. Is that the exact query you enter into your database? If not, what is? And what do you expect the database to return to you?
Wouldn't it be much easier to start from the torrent format and just extend it: make it work better in cases where you want to work with a local torrent without copying the whole thing, and implement a way to authenticate versions of files that supersede each other, à la packages?
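The "authenticated superseding versions" part could look something like the sketch below: each release is a manifest (version number plus file hashes) carrying an authentication tag, and a client accepts the highest version whose tag verifies. This is only an illustration; a real package-style scheme would use public-key signatures, with HMAC over a shared key standing in here, and the manifest bytes are invented.

```python
import hashlib
import hmac

KEY = b"publisher-secret"  # hypothetical publisher key; real systems would sign

def tag_manifest(manifest: bytes) -> bytes:
    """Authenticate a manifest (HMAC stands in for a signature)."""
    return hmac.new(KEY, manifest, hashlib.sha256).digest()

def latest_valid(manifests):
    """Pick the highest-version manifest whose authentication tag verifies."""
    valid = [(v, m) for v, m, tag in manifests
             if hmac.compare_digest(tag_manifest(m), tag)]
    return max(valid, default=None)

v1 = b"version=1;files=QmAAA"
v2 = b"version=2;files=QmBBB"
forged = b"version=3;files=QmEVIL"
releases = [(1, v1, tag_manifest(v1)),
            (2, v2, tag_manifest(v2)),
            (3, forged, b"bad-tag")]  # attacker can't forge a valid tag

print(latest_valid(releases))  # forged v3 is rejected; authentic v2 wins
```

This gives you torrent-like content distribution while letting newer releases legitimately replace older ones, which plain torrents can't express.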
I just figured out a way for OP to do that. Use IPFS to store the content, but then use >>>/hydra/ to sort it. Take the Trump quotes, for example: you would use a giant webcrawler to pull the data off the internet, feed it into hydra, which automatically tags and sorts it (e.g. by Trump quotes), and then store it using IPFS.
Now, this is a gigantic undertaking, as you would have to crawl a lot of pages, almost NSA levels of storage, to build the initial IPFS hashes. You would have to make your own IPFS CDN, or several, organized by content: an IPFS CDN for Trump quotes, one for cookbooks, etc. With hydra automatically sorting it, all you have to do is download the hydra metadata files, look for what you want, and then download it using IPFS.
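The crawl → tag → store pipeline described above can be sketched as follows. Every function body here is a stand-in: in a real build, `crawl` would be an actual webcrawler, `tag` would be the hydrus tagging step, and `store` would be an IPFS add returning a CID rather than a bare SHA-256.

```python
import hashlib

def crawl(urls):
    # Stand-in crawler: pretend each URL yields one document.
    return [f"document fetched from {u}" for u in urls]

def tag(doc):
    # Stand-in for hydrus-style tagging; real tags would come from its tag services.
    return ["trump quote"] if "trump" in doc else ["untagged"]

def store(doc):
    # Stand-in for `ipfs add`: content is identified by its hash.
    return hashlib.sha256(doc.encode()).hexdigest()

def build_topic_index(urls):
    """Group stored-content hashes by tag, like per-topic metadata files."""
    index = {}
    for doc in crawl(urls):
        h = store(doc)
        for t in tag(doc):
            index.setdefault(t, []).append(h)
    return index

idx = build_topic_index(["http://example.com/trump-speech",
                         "http://example.com/recipe"])
print(sorted(idx))  # -> ['trump quote', 'untagged']
```

Each per-topic hash list is exactly the "metadata file" a user would download first; fetching the actual content by hash is the only step that touches IPFS.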
Whoops, my mistake. It is called >>>/hydrus/, not hydra. The developer of hydrus is doing it as a pet project, though, and it is meant to sort anything file-wise. From their website:
The hydrus network client is a desktop application written for Anonymous and other internet-enthusiasts who have large media collections. It organises your files into an internal database and browses them with tags instead of folders, a little like a *booru on your desktop. Tags and files can be anonymously shared through custom servers that any user may run. Everything is free, nothing phones home, and the source code is included with the release. It is developed mostly for Windows, but reasonably functional builds for Linux and OS X are available.
Currently importable filetypes are:
images - jpg, gif (including animated), png (including animated!) and bmp
audio - mp3, flac, ogg and wma
video - webm, mp4, mpeg, flv and wmv
misc - swf, pdf, zip, rar, 7z
I am sure it supports text too since it supports pdf.