Updates

0.4.10 - 2017-06-27 Features:
0.4.9 - 2017-04-30 Features:

tl;dr for Beginners

How it Works

When you add a file, it is cryptographically hashed and a merkle tree is created. These hashes are announced by the IPFS client to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and the nodes set up peer connections automatically. If two users share the same file, then both of them can seed it to a third person requesting the hash, as opposed to .torrent files/magnets, which require that both seeders use the same file. A simplified sketch of the hashing is included after the links below.

FAQ

It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.

Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.

You be the judge. It has implementations in Go (meant for desktop integration) and JavaScript (meant for browser/server integration) in active development that are functional right now, it has a bunch of side projects that build on it, and it divides important parts of its development (IPLD, libp2p, etc.) into separate projects that allow for drop-in support for many existing technologies. On the other hand, it's still alpha software with a small userbase and poor network performance.

Websites of interest

ipfs.io/ipfs/ Official IPFS HTTP gateway. Slap this in front of a hash and it will download a file from the network. Be warned that this gateway is slower than using the client and accepts DMCAs.
glop.me/ Pomf clone that utilizes IPFS. Currently 10MB limit. Also hosts a gateway at gateway.glop.me which doesn't have any DMCA requests as far as I can tell.
/ipfs/QmP7LM9yHgVivJoUs48oqe2bmMbaYccGUcadhq8ptZFpcD/links/index.html IPFS index, has some links (add ipfs.io/ before to access without installing IPFS)
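Here is the promised sketch of the "same bytes, same address" idea, in Go. It is a simplification: real IPFS chunks files (commonly into 256 KiB blocks), builds an IPLD merkle DAG, and encodes the result as a multihash/CID rather than a bare SHA-256. The property it illustrates is the one that matters for seeding, though: anyone who adds identical bytes derives an identical identifier, so independent uploaders can serve the same request.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

const chunkSize = 256 * 1024 // fixed 256 KiB chunks, for illustration only

// chunkHashes splits data into fixed-size chunks and hashes each one.
func chunkHashes(data []byte) [][32]byte {
	var leaves [][32]byte
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		leaves = append(leaves, sha256.Sum256(data[off:end]))
	}
	return leaves
}

// rootHash derives a single identifier from the leaf hashes by hashing
// their concatenation. (Real IPFS builds an IPLD DAG and encodes the
// result as a multihash/CID, not a bare SHA-256.)
func rootHash(leaves [][32]byte) [32]byte {
	h := sha256.New()
	for _, leaf := range leaves {
		h.Write(leaf[:])
	}
	var root [32]byte
	copy(root[:], h.Sum(nil))
	return root
}

func main() {
	file := []byte("the same bytes always produce the same address")
	root := rootHash(chunkHashes(file))
	// Anyone who adds identical bytes derives the identical root,
	// which is why two independent uploaders can seed the same request.
	fmt.Printf("root: %x\n", root)
}
```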
no it's not. bittorrent uses SHA-1, which has been broken (SHAttered) and deprecated for years.
Henry Sanchez
where's the release that doesn't crash routers by holding 1500 connections open
Sebastian Miller
lol
John Murphy
I think they've fixed it.
Xavier James
...
Cooper Kelly
If gateways make IPFS accessible to everyone, why isn't it more widely used?
Gabriel Bell
because nobody uses it.
Jacob Williams
The client is badly optimized right now; it takes loads of RAM and CPU for what bittorrent clients can do without breaking a sweat. They haven't worked on optimization because the protocol is constantly changing.
Jaxson Johnson
this is also why nobody uses it. stop breaking fucking links every 2 weeks
Hudson Gutierrez
Why did you make a new thread? The old one was fine >>771999
You have no clue what you're talking about. The only non-backwards compatible change they made was in April 2016 when they released 0.4.0. Since then all changes have been backwards compatible with new releases.
Oliver Lewis
It had no image.
Chase Watson
Launch it with `ipfs daemon --routing=dhtclient` to reduce the number of connections it uses.
Additionally, some progress has been made on limiting the extent of the problem, though the issue of closing connections doesn't appear to have been touched yet.
New InterPlanetary Infrastructure >You might have noticed some hiccups a couple of weeks ago. That was due to a revamp and improvement in our infrastructure that separated Bootstrapper nodes from Gateway nodes. We've now fixed that by ensuring that a js-ipfs node connects to all of them. More on github.com/ipfs/js-ipfs/issues/973 and github.com/ipfs/js-ipfs/pull/975. Thanks @lgierth for improving IPFS infra and for setting up all of those DNS websockets endpoints for js-ipfs to connect to :)
Now js-ipfs packs the IPFS Gateway as well
Huge performance and memory improvement >With reports such as github.com/ipfs/js-ipfs/issues/952, we started investigating what were the actual culprits for such memory waste that would lead the browser to crash. It turns out that there were two and we got one fixed. The two were:
>>browserify-aes - @dignifiedquire identified that there were a lot of Buffers being allocated in browserify-aes, the AES shim we use in the browser (this was only an issue in the browser), and promptly came up with a fix github.com/crypto-browserify/browserify-aes/pull/48
Now git is also one of the IPLD supported formats by js-ipfs
The libp2p-webrtc-star multiaddrs have been fixed
>You can learn more about what this endeavour involved here github.com/ipfs/js-ipfs/issues/981. Essentially, there are no more /libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss; instead we use /dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star, which signals the proper encapsulation you expect from a multiaddr.
New example showing how to stream video using hls.js >@moshisushi developed a video streamer on top of js-ipfs and shared an example with us. You can now find that example as part of the examples set in this repo. Check github.com/ipfs/js-ipfs/tree/master/examples/browser-video-streaming, it is super cool. >HLS (Apple's HTTP Live Streaming) is one of the several protocols currently available for adaptive bitrate streaming.
webcrypto-ossl was removed from the dependency tree
PubSub tutorial published >@pgte published an amazing tutorial on how to use PubSub with js-ipfs and in the browser! Read it on the IPFS Blog blog.ipfs.io/29-js-ipfs-pubsub.
Liam Reed
do ipfs still use the ipfs (((package))) app that throws away the cryptographic integrity of git?
Anthony Phillips
NixOS will have IPFS as well WEW
Jaxson Lewis
this is nice but i can do without the pedoshit links
Logan Fisher
Call us when it can use both i2p/tor. Until then, it's inferior to bittorrent for filesharing. Why? Because the only clients available (github.com/Agorise/c-ipfs isn't ready enough) are written in garbage-collected, retarded languages.
Dominic Harris
They're working on it. c-ipfs seems promising as well.
Jordan Russell
What pedo links?
Christopher Peterson
Turn on IPv6 you dumb nigger. NAT is cancer.
Benjamin Richardson
Does IPFS need the equivalent of the Tor Browser (Tor's Firefox)?
Jack Wood
My quick IPFS indexer seems to be working well; I will try to make the database available over IPFS if it keeps working. Could anyone who knows how ES works download the 200GB of dumps from ipfs-search and run `grep --only-matching -P "[0-9A-Za-z]{46}"` on it? They're compressed/encrypted somehow.
Nah, it's non-anonymous right now anyway, you could just install a new chromium/whatever and limit it to localhost:8080 if you really want to.
Better spec, better design, and better future-proofed. Problem is, right now it still sends a fuckton of diagnostics information and is poorly optimized. They're making progress, but it's not fast enough.
Thomas Cruz
For one, it's not a honeypot where every post you make on any site is tracked globally, and where anyone can insert arbitrary javascript (automatically run by all clients, and javascript must be fully enabled at all times to view sites in the first place) so long as any type of content is allowed for upload on any page where that content would appear.
Elijah Thompson
bittorrent is NOT secure over TOR
Carson Peterson
It's just as secure as any other protocol. Some clients might potentially leak your IP, but there is nothing inherently insecure about the protocol.
Hunter Jones
That sounds bad. Are you sure? It sounds so bad that I don't know if I believe you.
Try it yourself. The steps are as follows:
- go to an arbitrary site
- upload whatever they allow, content is irrelevant
- you will find a local file that was created with the content you uploaded
- edit it to insert arbitrary content
Simple as that, and it bypasses all sanitation attempts, etc. There have been proofs of concept on 0chan in the very early days of zeronet, and this has never been addressed. The zeronet devs seem to not care about any of the components that make this possible.
Anthony Clark
sasuga redditware
Colton Smith
This is all well and good, but the problem with the internet today is that it relies on centralized infrastructure: to gain access you have to pay fees to people who have the power to cut you off completely, if not outright control the content being served.
Also centralized web servers were conceived because of a very important flaw with peer to peer infrastructure
Joshua Ramirez
but you've got that wrong: if a centralized server shuts down, no one can ever access that file.
Ethan Reyes
Did you ponder a bit before typing such a sloppy mess? Those centralized clusters of web servers can still act as peers in a p2p infrastructure. Nothing forbids such a cluster from joining the swarm and sharing files. Compared to a distributed web, centralization offers very little benefit in return and is only still around because it offers more control (to the owners of the files and servers).
Alexander Powell
This is literally the same problem with web servers. Even your VPS is on an actual server. So if you're saying P2P (and self-hosting) has flaws well
Benjamin Myers
none of you are wrong but you've all missed his point
Samuel Hall
What point, that you can't get a file if the only peer in the world that has it turns off his PC? Well no shit captain obvious, but that isn't a problem with the p2p infrastructure, that's a problem of people not giving enough of a shit to seed that file. Maybe Filecoin is a better solution to that, but then again who knows what will happen in the future.
Kayden Edwards
jesus christ OP I just wanna download Initial D and DBZ and you're linking to CP.
can anyone explain how i can use p2p in a website. i have an idea of how i want to use it for a chan and other ways but i dont understand how i would go about it.
are you asking for an entry level explanation of how it works?
Ryder Brooks
no, how would you apply p2p to a traditional website.
also what i dont like about ipfs is that it's almost impossible to have any privacy or remove content.
Jonathan Reed
decentralized hosting, I'm not sure I understand the question?
Jose Harris
im talking about the code, how would you apply p2p.
Jaxson Price
that depends on the type of site user, come on
Julian Wood
well, let me make a mockup.
Isaiah Ortiz
I want to make a chan where users can create their own boards on the host site, and every thread is hosted by the users through p2p; the more users, the more relevance and the faster the thread loads for everyone in the thread. After a certain number of posts the thread will be removed and flushed out of everyone's computer. The features will be unlimited file/video sizes and reasonably lengthy text limits.
sorry for asking to be spoonfed, but do you mind showing me a bug about this? I can't find it. Based on what I can tell, you can't just arbitrarily change 0chan to post whatever file you want. This isn't related to ipfs, so I'll sage
David Rogers
user's just havin' a giggle, go ahead and click it.
Angel Ramirez
Unoptimized, probably the DHT.
They're working on other stuff right now, they got a lot of money from filecoin ICO so we should be seeing some progress pretty soon. There's also a C implementation in the works.
IPFS can run on any transport (hyperboria), which can run over any link (ronja)
You cache it automatically when you download it.
Tor integration is in the works for privacy
Dylan Taylor
You're mostly just describing smugboard; it's very similar to what you're proposing.
Mason Cooper
It's not a bug. It's how it is designed. The poster can arbitrarily change the content of what they have posted. This includes changing the media type, and it is simply a matter of editing the content that is stored locally. When someone requests the file, your doctored copy is distributed, because you are the poster and the content you "upload" (e.g. text posts, or actual document attachments, etc.) is handled in this way.
You can even see the instructions on "how to modify a zeronet site" here: github.com/HelloZeroNet/ZeroNet as comments you post to a site are not handled in any special way compared to anything else. It's also why you need an ID to post anything and why your ID can be used to track anything you say across all sites by a simple grep: to enable modifying the content (which is not differentiable from a site, up to a point) by signing a more recent copy of the content.
Carter Cruz
wew, it's worse than I thought
Luis Butler
Why aren't files hashed? IPFS gets this right, why is there no network-level guarantee that files haven't been altered?
but Tor works easily on a P3, why is IPFS special?
Nicholas Wilson
IPFS is still in alpha (not optimized yet) and has the overhead of a complete p2p system (routing). Tor is much simpler to implement since not every peer is a contributing node: a large number of peers connect to a limited number of fast nodes. In IPFS every peer is also a node. This is the difference between Tor's decentralized network approach and IPFS's distributed network approach.
Landon James
IPFS uses a distributed naming system (IPNS) to point to the latest version, as well as static pointers (ipfs-based addresses) to point to specific files. This enables tracking the latest version (i.e. the ability to update content) while still giving the guarantee that there was no tampering by the controller. Zeronet doesn't seem to care at all about such guarantees: all that matters to them is the ability to update the content. Similarly, the zeronet folks don't give a shit about security (for the longest time, and that might still be the case, they had been running with very old versions of various libs, including crypto libs, with unaddressed CVEs, for example). You can just say "their threat model is different", but at this point they disregard secops 101.
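A toy sketch of that mutable-pointer idea in Go, not the actual IPNS record format or API: a key pair signs a small record that points at an immutable /ipfs/ hash, so the content itself is integrity-checked by its own hash and only the "which version is newest" question needs a signature. The record fields and the /ipfs/QmExampleHash value are made up for illustration.

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// record is a stand-in for an IPNS-style entry: a mutable name
// (identified by the public key) pointing at an immutable /ipfs/ hash,
// with a sequence number so newer versions supersede older ones.
type record struct {
	Value string // e.g. "/ipfs/QmExampleHash" (placeholder)
	Seq   uint64
}

func (r record) bytes() []byte {
	return []byte(fmt.Sprintf("%s|%d", r.Value, r.Seq))
}

func main() {
	pub, priv, _ := ed25519.GenerateKey(rand.Reader)

	rec := record{Value: "/ipfs/QmExampleHash", Seq: 2}
	sig := ed25519.Sign(priv, rec.bytes())

	// Anyone can verify the pointer was published by the key holder;
	// the content behind the /ipfs/ hash is already integrity-checked
	// by its own hash, so only "what's newest" needs a signature.
	fmt.Println("valid:", ed25519.Verify(pub, rec.bytes(), sig))
}
```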
Liam Murphy
Release candidates are out for go-ipfs 0.4.11. If you want to try them out, check out the download page: dist.ipfs.io/go-ipfs
If you have trouble with IPFS using way too much bandwidth (especially during add), memory leaks, or running out of file descriptors, you may want to make the jump as soon as possible. This version includes prototypes for a lot of new features designed to improve performance all around. github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md
Nicholas Jackson
So if I use something like IPFS to host a website, does that mean that I don't have to fuck with things like domain name registration?
Ryder Morris
Technically yes, but in practice you will still need a way to register a user-friendly name because people can't recall localhost:8080/ipns/LEuori324n2klAJFieow. But there's a way to add a friendly name in the ipns system (google around, I don't recall the correct method), which allows people to use localhost:8080/ipns/your.address.name instead, so that's an option. Other than that, all kinds of systems can leverage the likes of namecoin if you're so inclined.
Camden Flores
Eh, I actually prefer the hash method. Keeps things a little more comfy.
The method is to register, through any normal DNS provider, a TXT record with content dnslink="/ipns/" and it will work. So it's actually relying on the external DNS system.
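For the curious, here's roughly what that lookup amounts to, sketched in Go with the standard library. The domain is a placeholder and the exact record placement is an assumption (some setups put the record under a _dnslink. subdomain); the sketch just finds a TXT record whose value starts with dnslink= and extracts the /ipns/ or /ipfs/ path.

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// resolveDNSLink looks up TXT records for a domain and returns the first
// value that looks like a dnslink entry (dnslink=/ipns/... or /ipfs/...).
func resolveDNSLink(domain string) (string, error) {
	records, err := net.LookupTXT(domain)
	if err != nil {
		return "", err
	}
	for _, txt := range records {
		if strings.HasPrefix(txt, "dnslink=") {
			return strings.TrimPrefix(txt, "dnslink="), nil
		}
	}
	return "", fmt.Errorf("no dnslink TXT record on %s", domain)
}

func main() {
	// "example.com" is a placeholder domain.
	link, err := resolveDNSLink("example.com")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("gateway path:", link) // e.g. /ipns/<peer id> or /ipfs/<hash>
}
```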
I thought filecoin was just an incentive to store other people's files?
Cooper Wright
Can I limit the amount of space that IPFS uses or if I download and start running it will it just fill up my hard drive indefinitely?
It is. I think they're going to recommend using ethereum domains as IPFS has plans to be deeply integrated with it.
IPFS doesn't download random things to your computer. It caches everything you view but by default it's capped at 10GB.
Isaiah Clark
By default IPFS does not fetch anything on its own; it will only retain the data you added manually or requested via browsing.
If you want, you can run the daemon like this: `ipfs daemon --enable-gc`. It will read your config for 2 values: one is a timer and the other is a storage limit. By default I think they're 1 hour and 10GB, which means a garbage collection routine runs either when you hit 10GB of garbage or when 1 hour has passed. What it considers garbage is anything that's not "pinned"; if you don't want something to be treated like garbage, you pin it.
Someone made an issue recently that I agree with, there should be an option for a minimum amount of data to keep, right now garbage collection deletes ALL garbage, but it would be nice if you could set it to keep xGB's worth of non-pinned content at any one time.
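If you'd rather check what your own node is configured with, here's a minimal sketch in Go, assuming the repo lives at ~/.ipfs and that the two values discussed above are the Datastore.StorageMax and Datastore.GCPeriod keys of the JSON config:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// Only the fields discussed above; the real config has many more keys.
type config struct {
	Datastore struct {
		StorageMax string // e.g. "10GB" - GC storage target
		GCPeriod   string // e.g. "1h"   - how often GC runs
	}
}

func main() {
	home, _ := os.UserHomeDir()
	raw, err := os.ReadFile(filepath.Join(home, ".ipfs", "config"))
	if err != nil {
		fmt.Println("could not read config:", err)
		return
	}
	var cfg config
	if err := json.Unmarshal(raw, &cfg); err != nil {
		fmt.Println("could not parse config:", err)
		return
	}
	fmt.Println("StorageMax:", cfg.Datastore.StorageMax)
	fmt.Println("GCPeriod:  ", cfg.Datastore.GCPeriod)
}
```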
My mixtape. Good music with a good video to go with it: Holy Nonsense.
Also, why does this keep happening? Each of these I had to try adding several times.
32.00 MB / 54.44 MB [===========>--------] 58.79% 0s
20:13:33.012 ERROR commands/h: open /home/user/.ipfs/blocks/GY/put-460657004: too many open files client.go:247
Error: open /home/user/.ipfs/blocks/GY/put-460657004: too many open files
Each file is divided into chunks, which are then hashed. These hashed chunks form the leaves of the merkle tree, which have parents that are identified by HASH( HASH( left-child ) + HASH( right-child )). This continues until we reach the root node, the merkle root, whose hash uniquely identifies the file.
To give someone else the file, from computer S to computer T, S gives T the list of leaves and the merkle root. As I understand it, this is basically what a bittorrent magnet link does as well (along with tracker and other metadata). We know the leaves actually compose the merkle root by simply building the tree from its leaves and verifying the new merkle root is the same as the provided one.
Computer T then asks around if anyone else has the content of the leaves (by querying for the leaf hash), and verifies the content by hashing it upon download completion. Once it has everything (and verifies), it simply compiles the parts into the original file.
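A minimal sketch of that verification step in Go, with the simplifying assumption that the root is just the hash of the concatenated leaf hashes (the real DAG encoding differs): T checks every fetched chunk against its advertised leaf hash, then recomputes the root before accepting the file.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// verify checks fetched chunks against the advertised leaf hashes, then
// recomputes the root from those leaves and compares it to the advertised root.
func verify(chunks [][]byte, leaves [][32]byte, root [32]byte) bool {
	if len(chunks) != len(leaves) {
		return false
	}
	h := sha256.New()
	for i, chunk := range chunks {
		got := sha256.Sum256(chunk)
		if got != leaves[i] {
			return false // a peer sent bytes that don't match the leaf hash
		}
		h.Write(got[:])
	}
	return bytes.Equal(h.Sum(nil), root[:])
}

func main() {
	// S advertises leaves + root; T fetches the chunk bodies from anyone.
	chunks := [][]byte{[]byte("part one"), []byte("part two")}
	var leaves [][32]byte
	h := sha256.New()
	for _, c := range chunks {
		leaf := sha256.Sum256(c)
		leaves = append(leaves, leaf)
		h.Write(leaf[:])
	}
	var root [32]byte
	copy(root[:], h.Sum(nil))

	fmt.Println("verified:", verify(chunks, leaves, root)) // true
	chunks[1] = []byte("tampered")
	fmt.Println("verified:", verify(chunks, leaves, root)) // false
}
```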
Assuming there is nothing wrong with my understanding above, I have a few questions:
How do we know the merkle root actually identifies the file we meant to get? ie if someone hits an IPNS endpoint, and an attacker intercepts and returns a malicious merkle root + leaves, now what? Is there anything to do about this or is this just a case of don't trust sites you don't know
When computer T starts requesting leaf content, is it requesting by querying on the hash of a leaf, or the merkle root? Bittorrent only requests parts from users that have the full file, which follows from the latter. If you request by the leaf hash instead, I'm imagining that the less-unique parts (like, say, a chunk of the file composed entirely of NULL bytes) could come from ANY file source, regardless of whether that user actually has the file you're looking for.
And extending that, with some infinite number of files stored globally, it would be possible to download files with a leaf-list that NO ONE actually has; each leaf being found in some other file; composed in some particular fashion to create the requested file.
Asher Ortiz
Can you use IPFS in combination with a tor bridge with obfs encrypting files and still transfer files? If this worked would the person receiving the data still see your public IP?
Charles Ramirez
So you mean how is the data verified to be correct once the client receives it. inb4 it isn't verified
Justin Moore
On the leaf. Each 256k block has its own DHT entry (hence why it's known to be so chatty). This also means that if you have a file and change one byte in it then most of the file will be deduped by IPFS if you readd it.
My understanding is that the IPNS entries are signed by your public key, so that's not an issue. There is a problem where a malicious node could return an old entry, but that's the reason each entry is stored in a fuck-ton of DHT nodes. Which is also the reason it takes so long to resolve names, it doesn't just take the first resolution it can.
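A quick way to convince yourself of the dedup claim, sketched in Go under the assumption of fixed-size 256 KiB chunking: flip one byte in the middle of a file and only the block containing that byte gets a new hash, so every other block would be reused on re-add.

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

const blockSize = 256 * 1024 // 256 KiB, matching the block size mentioned above

// blockHashes returns the set of hashes of fixed-size blocks of data.
func blockHashes(data []byte) map[[32]byte]bool {
	seen := make(map[[32]byte]bool)
	for off := 0; off < len(data); off += blockSize {
		end := off + blockSize
		if end > len(data) {
			end = len(data)
		}
		seen[sha256.Sum256(data[off:end])] = true
	}
	return seen
}

func main() {
	original := bytes.Repeat([]byte("some large file contents "), 200_000) // ~5 MB
	edited := append([]byte(nil), original...)
	edited[len(edited)/2] ^= 0xFF // flip one byte in the middle

	a, b := blockHashes(original), blockHashes(edited)
	shared := 0
	for h := range b {
		if a[h] {
			shared++
		}
	}
	// Only the block containing the flipped byte changes; every other
	// block hashes identically and would be deduplicated on re-add.
	fmt.Printf("%d of %d blocks unchanged\n", shared, len(b))
}
```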
Wyatt Baker
So what I suggested, then, that a file no one has could be generated by the network given a list of leaves, by retrieving them from other files, would hold? I suppose that's not anything special, except that the granularity of chunks is bigger than, say, 1 bit. But to confirm my understanding, is this true?
Henry Harris
With bitswap, it DOES download random things, kinda. You swap fragments among peers from random content.
Christian Wilson
It's hash-based addressing: if the chunks that make up the file exist in other files, they are exactly as valid in the requested file as they are in those other files. That is, you request the hash and provenance is a meaningless concept: you can think of it as two completely different kinds of data (the actual chunks, and the file descriptors, which are merkle graphs).
Justin Barnes
IPNS is still handled via DNS, so short of someone pwning the authoritative nameserver for a domain, you're looking at a hijacked local resolver, which you can defeat via VPN.
Lincoln Price
I am requesting that people please prefix their hashes with "/ipfs/" when posting, so that the browser addon detects them and anchors them; this way people with it can just click on them.
Like this QmZsWcgsKNbjmiSeQGrZUwAbHVVVtxSFKm9h9AFKoAK8aH -> /ipfs/QmZsWcgsKNbjmiSeQGrZUwAbHVVVtxSFKm9h9AFKoAK8aH