Are you talking about 700 megabytes of bandwidth?
How did you measure this?
There used to be IPFS threads, but they can't exist there anymore since the spam filter changed; the hashes almost always trip it.
As the name implies, it's much like a filesystem: content-addressable data, nodes, records, and probably a few other things, all built on the same standards and interfaces, with networking taken into consideration.
The ability to reference data globally, with read and write access, is very generic, so you could do a lot with it.
Hosting a static website is as simple as hashing the content and connecting to the network. That's two commands, and anyone can see your content using their own node or any gateway.
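In practice, with the standard ipfs command line tool, that looks roughly like this (the directory name is just an example, and it assumes the repo was already set up with ipfs init):
  ipfs add -r ./mysite   # chunk and hash the site, prints a CID for the root directory
  ipfs daemon            # join the swarm and start serving those blocks
A public gateway turns the root CID into a normal URL for anyone without a node, e.g. https://ipfs.io/ipfs/<root CID>.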
You don't have to think about a domain, load balancing, rolling your own protocols, formats, reference scheme, NAT punching, etc.
The way they handle dynamic data and peers is built on the same principles. A peer ID is just a hash; it doesn't change if the peer suddenly switches networks, and you don't have to deal with NAT, implement anything new, or keep track of whatever the latest transport protocol is.
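You can see your own on any running node; it's just a multihash of the node's public key:
  ipfs id   # prints your peer ID plus the addresses you're currently reachable at
The ID is the stable part; the addresses listed under it are what change when you switch networks.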
It's basically whatever HTTP stack you like, but instead of URLs and servers, it's hashes and P2P.
That's all you need to know; you don't need to know the entire stack, and better yet you don't need to know the damn user environment, like whether they can connect to domain xyz directly, whether a file changed paths, etc.
Another way to think about it: what if, instead of your OS only asking your hard disk for data, it could also ask the network for it, with reasonable guarantees that it's the data you wanted, whether it came from the disk or from someone else?
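That's pretty much what the CLI already looks like; the command is the same whether the blocks are on your disk or someone else's (the hash here is a placeholder, not a real CID):
  ipfs cat QmPlaceholderHash > data.bin   # reads from the local blockstore if you have it, otherwise fetches from peers and verifies each block against its hash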
From a user perspective I think it's really convenient to have a worldwide-reachable chunk of arbitrary data, in one command, that progressively gets more reliable as time goes on and development adds more efficient traversal, storage, etc., all of it transparent to users and developers.
Stagnation is prevented by using the same layering scheme as IP. Components can be added and deprecated without disrupting the whole system.
IPFS as a project is basically just saying "we should do it this way, because we can" and gluing everything together so it works that way.
You should be able to say this is the address of my data, and through some means that I don't even know, my packets will get to your machine if you request them. And that's not really an insane or improbable goal. A lot of these concepts are ancient, and already tried, but nobody has really tied them all together like this.
In what way are you talking about? Locally or globally?
Locally you just unpin it and run garbage collection, and the data gets deleted. An edit is just adding the changed data and dropping any orphaned children. Like diffs in git, an edit should only consist of the changed bytes if the file is chunked; it's not like every commit duplicates the entire file that was changed.
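Concretely that's something like (placeholder hash again):
  ipfs pin rm QmPlaceholderHash   # unpin it so the garbage collector is allowed to drop it
  ipfs repo gc                    # delete every block that's no longer pinned or referenced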
Globally there isn't a delete; something only becomes unavailable when everyone who has it deletes it locally or gets disconnected from the rest of the swarm. I personally see that potential permanence as a benefit.
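And the flip side is just as easy: anyone who cares about some data can keep it available by pinning it on their own node:
  ipfs pin add QmPlaceholderHash   # fetch the blocks and protect them from local garbage collection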
Literally their sister project.
filecoin.io
That doesn't make much sense to me. What would they surveil, unless they were hosting content for people on nodes that get picked as the best, which probably means geographically closest and/or fastest?
At most they could determine that some peers downloaded hashes X,Y,Z from their nodes.
The NSA would likely have to act as a CDN for whatever they want to surveil, and be the best CDN on top of that to get chosen by clients.