/ipfs/ - IPFS thread

Updates
0.4.10 - 2017-06-27
Features:
0.4.9 - 2017-04-30
Features:
tl;dr for Beginners
How it Works
When you add a file, it is split into chunks, each chunk is cryptographically hashed, and a Merkle tree is built over them. The IPFS client announces these hashes to the nodes in the network. (The IPFS team often describes the network as a "Merkle forest.") Any user can request one of these hashes and peer connections are set up automatically. If two users have added the same file, both of them can seed it to a third person requesting the hash, whereas with .torrent files/magnets both seeders have to be on the same torrent for their copies to count toward the same swarm.
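A minimal sketch of that flow from the client side (the hash is a placeholder, not a real one):
$ ipfs add cat.jpg              # chunks, hashes, and announces the file; prints its hash
added <hash> cat.jpg
$ ipfs cat <hash> > copy.jpg    # anyone on the network can fetch the same bytes by hash
$ ipfs pin add <hash>           # keep seeding it locally instead of letting it get garbage-collected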
FAQ
It's about as safe as a torrent right now, ignoring the relative obscurity bonus. They are working on integration with Tor and I2P. Check out libp2p if you're curious.
Finding a seeder can take anywhere from a few seconds to a few minutes. It's slowly improving but still requires a fair bit of optimization work. Once the download starts, it's as fast as the peers can offer, just like a torrent.
You be the judge.
It has functional, actively developed implementations in Go (aimed at desktop integration) and JavaScript (aimed at browser/server integration), it has a bunch of side projects that build on it, and it splits important parts of its development (IPLD, libp2p, etc.) into separate projects that allow drop-in support for many existing technologies.
On the other hand, it's still alpha software with a small userbase and has poor network performance.
Websites of interest
ipfs.io/ipfs/
Official IPFS HTTP gateway. Slap this in front of a hash and it will download the file from the network. Be warned that this gateway is slower than using the client and honors DMCA takedown requests.

glop.me/
Pomf clone that utilizes IPFS. Currently 10MB limit.
Also hosts a gateway at gateway.glop.me which doesn't have any DMCA requests as far as I can tell.

/ipfs/QmP7LM9yHgVivJoUs48oqe2bmMbaYccGUcadhq8ptZFpcD/links/index.html
IPFS index, has some links (prepend ipfs.io/ to access without installing IPFS)

Attached: ipfs webm 1.webm (1092x512 140.19 KB, 87.56K)

no it's not. BitTorrent uses SHA-1, which has been SHAttered and deprecated for years.

where's the release that doesn't crash routers by holding 1500 connections open

lol

I think they've fixed it.

...

If gateways make IPFS accessible to everyone, why isn't it more widely used?

because nobody uses it.

The client is badly optimized right now; it takes loads of RAM and CPU to do what BitTorrent clients manage without breaking a sweat. They haven't worked on optimization yet because the protocol is constantly changing.

this is also why nobody uses it
stop breaking fucking links every 2 weeks

Why did you make a new thread? The old one was fine >>771999


You have no clue what you're talking about. The only non-backwards-compatible change they made was in April 2016 when they released 0.4.0. Since then, every release has been backwards compatible.

It had no image.

Launch it with
ipfs daemon --routing=dhtclient
to reduce the amount of connections it uses.
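If you want to check whether that actually helps, counting open peer connections is a one-liner (a rough sketch; output format varies between versions):
$ ipfs swarm peers | wc -l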

Additionally, some progress has been made on limiting the extent of the problem, though closing connections doesn't appear to have been touched yet.

Issues to watch on this subject:
github.com/ipfs/go-ipfs/issues/4029
github.com/ipfs/js-ipfs/issues/962

If you're going to make a new thread, at least post some updates.

js-ipfs 0.26 Released
blog.ipfs.io/30-js-ipfs-0-26/


New InterPlanetary Infrastructure
>You might have noticed some hiccups a couple of weeks ago. That was due to a revamp and improvement in our infrastructure that separated Bootstraper nodes from Gateway nodes. We’ve now fixed that by ensuring that a js-ipfs node connects to all of them. More nodes on github.com/ipfs/js-ipfs/issues/973 and github.com/ipfs/js-ipfs/pull/975. Thanks @lgierth for improving IPFS infra and for setting up all of those DNS websockets endpoints for js-ipfs to connect to :)

Now js-ipfs packs the IPFS Gateway as well

Huge performance and memory improvement
>With reports such as github.com/ipfs/js-ipfs/issues/952, we started investigating what were the actual culprits for such memory waste that would lead the browser to crash. It turns out that there were two and we got one fixed. The two were:

>>browserify-aes - @dignifiedquire identified that there were a lot of Buffers being allocated in browserify-aes, the AES shim we use in the browser (this was only a issue in the browser) and promptly came with a fix github.com/crypto-browserify/browserify-aes/pull/48 👍🏽👍🏽👍🏽👍🏽


>That said, situations such as github.com/ipfs/js-ipfs/issues/952 are now fixed. Happy file browser sharing! :)

Now git is also one of the IPLD supported formats by js-ipfs

The libp2p-webrtc-star multiaddrs have been fixed

>You can learn more what this endeavour involved here github.com/ipfs/js-ipfs/issues/981. Essentially, there are no more /libp2p-webrtc-star/dns4/star-signal.cloud.ipfs.team/wss, instead we use /dns4/star-signal.cloud.ipfs.team/wss/p2p-webrtc-star which signals the proper encapsulation you expect from a multiaddr.

New example showing how to stream video using hls.js
>@moshisushi developed a video streamer on top of js-ipfs and shared an example with us. You can now find that example as part of the examples set in this repo. Check github.com/ipfs/js-ipfs/tree/master/examples/browser-video-streaming, it is super cool 👍🏽👍🏽👍🏽👍🏽.
>HLS (Apple’s HTTP Live Streaming) is one of the several protocols currently available for adaptive bitrate streaming.

webcrypto-ossl was removed from the dependency tree


PubSub tutorial published
>@pgte published an amazing tutorial on how to use PubSub with js-ipfs and in the browser! Read it on the IPFS Blog blog.ipfs.io/29-js-ipfs-pubsub.

do ipfs still use the ipfs (((package))) app that
throws away the cryptographic integrity of git?

NixOS will have IPFS as well WEW

this is nice but i can do without the pedoshit links

Call us when it can use i2p/Tor. Until then, it's inferior to BitTorrent for filesharing. Why? Because the only clients available (github.com/Agorise/c-ipfs isn't ready yet) are written in garbage-collected, retarded languages.

They're working on it. c-ipfs seems promising as well.

What pedo links?

Turn on IPv6 you dumb nigger. NAT is cancer.

Does IPFS need the equivalent of the Tor Browser?

My quick IPFS indexer seems to be working well; I'll try to make the database available over IPFS if it keeps working. Could anyone who knows how ES works download the 200GB of dumps from ipfs-search and run `grep --only-matching -P "[0-9A-Za-z]{46}"` on it? They're compressed/encrypted somehow.

Nah, it's non-anonymous right now anyway, you could just install a new chromium/whatever and limit it to localhost:8080 if you really want to.

Can the user behind this comment on whether js-ipfs 0.26 is enough to get it started?
github.com/DistributedMemetics/DM/issues/1

I'd rather put my hopes in the actually-existing IPFS imageboard another user here made than in that one-commit, anime-pic repo over there.

This is 16chan-tier software

Now is not the time for optimization, that comes later.
Also, see

"The age of men will return;
and they're not gonna get their computer-grid, self-driving car, nano-tech, panopticon in place fast enough..."

Qmd63MzEjASAAjmKK4Cw4CNMCb8NqSbL6yiVRfYnhMBT1H

QmXr7tE6teZgkzdcy5L4PM421FP3g3MpWUEGYfXJhEqfBb

Attached: 1505575841282.png (1276x531 340.73 KB, 536.4K)

Why does IPFS idle at like 25% CPU usage and high RAM usage? It's not even seeding often or downloading anything.

How much later? They've known this shit is unusable for two years.

what does it do better than zeronet

Attached: 1494016389196.png (842x848, 197.19K)

Better spec, better design, and better future-proofing.
Problem is, right now it still sends a fuckton of diagnostic information and is poorly optimized.
They're making progress, but it's not fast enough.

For one, it's not a honeypot where every post you make on any site is tracked globally, and where anyone can insert arbitrary JavaScript (automatically run by all clients, and which must be fully enabled at all times just to view sites in the first place) as long as any type of content is allowed for upload on a page where that content would appear.

bittorrent is NOT secure over TOR

It's just as secure as any other protocol. Some clients might potentially leak your IP, but there is nothing inherently insecure about the protocol.

That sounds bad. Are you sure? It sounds so bad that I don't know if I believe you.

Attached: 1429497660865.png (445x385, 143.55K)

Try it yourself. The steps are as follows:
- go to an arbitrary site
- upload whatever they allow, content is irrelevant
- you will find a local file that was created with the content you uploaded
- edit it to insert arbitrary content
Simple as that; it bypasses all sanitization attempts, etc.
There have been proofs of concept on 0chan since the very early days of ZeroNet, and this has never been addressed. The ZeroNet devs don't seem to care about any of the components that make this possible.

sasuga redditware

This is all well and good, but the problem with the internet today is that it relies on centralized infrastructure: to gain access, you have to pay fees to people who have the power to cut you off completely, if not outright control the content being served.

Anyway,
QmdqDLpEKd7zJ5fHNv8r3a4vJVcbgT3g3yW2vqvVHHmKXk

i have judged

Try upgrading from a Pentium 3.

Also, centralized web servers were conceived because of a very important flaw in peer-to-peer infrastructure

but you've got that backwards: if a centralized server shuts down, no one can ever access that file.

Did you ponder a bit before typing such a sloppy mess? Those centralized clusters of web servers can still act as peers in a p2p infrastructure. Nothing forbids such a cluster from joining the swarm and sharing files. Compared to a distributed web, centralization offers very little benefit in return and is only still around because it offers more control (to the owners of the files and servers).

This is literally the same problem web servers have. Even your VPS is on an actual server somewhere. So if you're saying P2P (and self-hosting) has flaws, well...

none of you are wrong but you've all missed his point

What point, that you can't get a file if the only peer in the world that has it turns off his PC? Well no shit, Captain Obvious, but that isn't a problem with the p2p infrastructure; that's a problem of people not giving enough of a shit to seed the file. Maybe Filecoin is a better solution to that, but then again, who knows what will happen in the future.

jesus christ OP I just wanna download Initial D and DBZ and you're linking to CP.

Attached: a-programmer-describes-how-he-nearly-went-insane-learning-to-code.jpg (557x380 366.2 KB, 54.21K)

you talk too much man

can anyone explain how i can use p2p in a website? i have an idea of how i want to use it for a chan and other things, but i don't understand how i would go about it.

Attached: D8CRtMS.jpg (374x374, 38.63K)

are you asking for an entry level explanation of how it works?

no, how would you apply p2p to a traditional website.

also, what i don't like about ipfs is that it's almost impossible to have any privacy or to remove content.

decentralized hosting, I'm not sure I understand the question?

i'm talking about the code; how would you apply p2p?

that depends on the type of site user, come on

well, let me make a mockup.

I want to make a chan where users can create their own boards on the host site, and every thread is hosted by its users through p2p: the more users, the more relevance, and the faster the thread loads for everyone in it. After a certain number of posts the thread will be removed and flushed out of everyone's computers.
The features would be unlimited file/video sizes and reasonably lengthy text limits.

Attached: 643d.png (2354x946, 510.18K)

Attached: zn.png (1383x163, 33.33K)

that's almost exactly the same as IPFS chan

sorry for asking to be spoonfed, but do you mind showing me a bug about this? I can't find it.
From what I can tell, you can't just arbitrarily change 0chan to post whatever file you want.
This isn't related to ipfs, so I'll sage

user's just havin' a giggle, go ahead and click it.

Unoptimized, probably the DHT.

They're working on other stuff right now, but they got a lot of money from the Filecoin ICO, so we should be seeing some progress pretty soon. There's also a C implementation in the works.

IPFS can run on any transport (hyperboria), which can run over any link (ronja)

You cache it automatically when you download it.

Tor integration is in the works for privacy

You're mostly just describing smugboard; it's very similar to what you're proposing.

It's not a bug; it's how it's designed. The poster can arbitrarily change the content of what they have posted, including the media type, simply by editing the content that is stored locally. When someone requests the file, your doctored copy is what gets distributed, because everything you "upload" (e.g. text posts, actual document attachments, etc.) is handled this way and served from the poster's machine.

You can even see the instructions on "how to modify a zeronet site" here:
github.com/HelloZeroNet/ZeroNet
since comments you post to a site are not handled any differently from anything else. It's also why you need an ID to post anything, and why your ID can be used to track everything you say across all sites with a simple grep: updating content (which, up to a point, is indistinguishable from a site) works by signing a more recent copy of it.

wew, its worse than I thought

Why aren't files hashed? IPFS gets this right; why is there no network-level guarantee that files haven't been altered?

Attached: 1397153602672.png (300x300, 83.63K)

but Tor works easily on a P3, why is IPFS special?

IPFS is still in alpha (not optimized yet) and has the overhead of a complete p2p system (routing). Tor is much simpler since not every peer is a contributing node: a large number of clients connect to a limited number of fast relays. In IPFS every peer is also a node. That's the difference between Tor's decentralized network approach and IPFS's distributed network approach.

IPFS uses a distributed naming system (IPNS) to point to the latest version of content, alongside static pointers (/ipfs/ addresses) that point to specific immutable files. This lets you track the latest version (i.e. update content) while still guaranteeing there was no tampering by anyone but the controller. ZeroNet doesn't seem to care about such distinctions: all that matters to them is the ability to update content. Similarly, the ZeroNet folks don't give a shit about security; for the longest time (and this might still be the case) they were shipping very old versions of various libraries, including crypto libraries with unaddressed CVEs. You can say "their threat model is different," but at this point they're disregarding secops 101.
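For reference, the mutable-pointer flow on the IPFS side looks roughly like this (a sketch with placeholder values; name resolution is still slow in current releases):
$ ipfs add -r mysite/             # prints an /ipfs/ hash for the site root
$ ipfs name publish <site-hash>   # binds that hash to your node's peer ID
$ ipfs name resolve <peer-id>     # others resolve /ipns/<peer-id> to the current /ipfs/ hash
Re-running `ipfs name publish` with a new hash changes what /ipns/<peer-id> points to, while every old /ipfs/ address keeps pointing at exactly the bytes it always did.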

Release candidates are out for go-ipfs 0.4.11. If you want to try them out, check out the download page: dist.ipfs.io/go-ipfs

If you have trouble with IPFS using way too much bandwidth (especially during add), leaking memory, or running out of file descriptors, you may want to make the jump as soon as possible. This version includes prototypes of a lot of new features designed to improve performance all around.
github.com/ipfs/go-ipfs/blob/master/CHANGELOG.md

So if I use something like IPFS to host a website, does that mean I don't have to fuck with things like domain name registration?

Technically yes, but in practice you'll still want to register a user-friendly name, because people can't recall localhost:8080/ipns/LEuori324n2klAJFieow. There is a way to attach a friendly name through the IPNS system (google around, I don't recall the exact method), which lets people use localhost:8080/ipns/your.address.name instead, so that's an option. Other than that, all kinds of systems can leverage the likes of Namecoin if you're so inclined.

Eh, I actually prefer the hash method. Keeps things a little more comfy.

That's really fucking cool though.

decentralized.blog/ten-terrible-attempts-to-make-ipfs-human-friendly.html
Here's a list of DNS alternatives the IPFS team could use; my guess is they'll go with Filecoin, considering it belongs to them.

The method is to register, through any normal DNS provider, a TXT record whose content is dnslink=/ipns/<your-peer-id> (or dnslink=/ipfs/<hash>), and it will work. So it's actually relying on the external DNS system.
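A hypothetical example (domain and peer ID are placeholders; some setups also put the record on a _dnslink. subdomain, so check the current docs):
example.com.   IN   TXT   "dnslink=/ipns/<your-peer-id>"
With that in place, localhost:8080/ipns/example.com (or ipfs.io/ipns/example.com) resolves through the TXT record to your content.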

I thought filecoin was just an incentive to store other people's files?

Can I limit the amount of space IPFS uses, or if I download and start running it, will it just fill up my hard drive indefinitely?

Attached: shocking truth.jpg (768x780, 73.49K)

It is. I think they're going to recommend using Ethereum domains, since IPFS has plans to integrate deeply with it.


IPFS doesn't download random things to your computer. It caches everything you view but by default it's capped at 10GB.

By default IPFS does not fetch anything on its own; it only retains the data you've added manually or pulled in by browsing.

If you want, you can run the daemon like this: `ipfs daemon --enable-gc`. It reads two values from your config: a timer and a storage cap. By default I think they're 1 hour and 10GB, meaning a garbage collection routine runs either when you hit 10GB of garbage or when an hour has passed. What it considers garbage is anything that isn't "pinned"; if you don't want something to be treated as garbage, you pin it.
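If I remember right, those two values live in the Datastore section of ~/.ipfs/config and look something like this (defaults shown; double-check the key names against your version):
"Datastore": {
  "StorageMax": "10GB",
  "GCPeriod": "1h"
}
And pinning is just `ipfs pin add <hash>` to protect something from GC, `ipfs pin rm <hash>` to let it become garbage again.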

Someone made an issue recently that I agree with: there should be an option for a minimum amount of data to keep. Right now garbage collection deletes ALL garbage, but it would be nice if you could set it to keep x GB's worth of non-pinned content around at any one time.

github.com/ipfs/go-ipfs/issues/3092

QmVuqQudeX8dhPDL8SPZbngvBXHxHWiPPoYLGgBudM1LR5

All 13 of the current "Manga Guides" series, in various formats.

Attached: manga_13set (1).png (820x650, 870.38K)

My mixtape.
Good music with a good video to go with it
Holy Nonsense
Also, why does this keep happening? Each of these I had to try adding several times.
32.00 MB / 54.44 MB [===============>-----------] 58.79%
20:13:33.012 ERROR commands/h: open /home/user/.ipfs/blocks/GY/put-460657004: too many open files client.go:247
Error: open /home/user/.ipfs/blocks/GY/put-460657004: too many open files

Attached: water between azn tits.jpg (900x1200, 121.37K)
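
A hedged workaround until the file-descriptor fixes land, assuming the default per-process fd limit is what's being hit: raise it in the same shell before starting the daemon (go-ipfs reportedly also reads an IPFS_FD_MAX environment variable, but verify that against your version).
$ ulimit -n 4096
$ ipfs daemon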

Have you upgraded to 0.4.11 yet?

I have a question about implementation

Each file is divided into chunks, which are then hashed. These hashed chunks form the leaves of the merkle tree, which have parents that are identified by HASH( HASH( left-child ) + HASH( right-child )). This continues until we reach the root node, the merkle root, whose hash uniquely identifies the file.

To give someone else the file (from computer S to computer T), S gives T the list of leaves and the Merkle root. As I understand it, this is basically what a BitTorrent magnet link does as well (along with tracker and other metadata). We know the leaves actually compose the Merkle root by rebuilding the tree from the leaves and verifying that the resulting root matches the provided one.
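That verification step is easy to sketch. The Go below builds a generic binary Merkle tree over SHA-256 and recomputes the root from a set of leaves; it's an illustration of the idea only, not IPFS's actual format (IPFS uses a Merkle DAG of protobuf-encoded nodes addressed by multihash).

package main

import (
	"crypto/sha256"
	"fmt"
)

// hashPair implements the HASH(HASH(left) + HASH(right)) step described above.
func hashPair(left, right []byte) []byte {
	combined := append(append([]byte{}, left...), right...)
	sum := sha256.Sum256(combined)
	return sum[:]
}

// merkleRoot folds a list of leaf hashes up to a single root hash.
func merkleRoot(leaves [][]byte) []byte {
	if len(leaves) == 0 {
		return nil
	}
	level := leaves
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 2 {
			if i+1 < len(level) {
				next = append(next, hashPair(level[i], level[i+1]))
			} else {
				// odd node promoted unchanged; real implementations differ here
				next = append(next, level[i])
			}
		}
		level = next
	}
	return level[0]
}

func main() {
	// pretend these are the raw chunks a peer handed us
	chunks := [][]byte{[]byte("chunk-0"), []byte("chunk-1"), []byte("chunk-2")}
	var leaves [][]byte
	for _, c := range chunks {
		sum := sha256.Sum256(c)
		leaves = append(leaves, sum[:])
	}
	fmt.Printf("computed root: %x\n", merkleRoot(leaves))
	// a receiver recomputes this root from the leaves it was given and
	// rejects the transfer if it doesn't match the root it asked for
}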

Computer T then asks around whether anyone else has the content of the leaves (by querying for the leaf hash), and verifies the content by hashing it when the download completes. Once it has everything (and it verifies), it simply assembles the parts into the original file.

Assuming there is nothing wrong with my understanding above, I have a few questions:

How do we know the Merkle root actually identifies the file we meant to get? I.e., if someone hits an IPNS endpoint and an attacker intercepts it and returns a malicious Merkle root + leaves, now what? Is there anything to be done about this, or is it just a case of "don't trust sites you don't know"?

When computer T starts requesting leaf content, is it querying by the hash of a leaf, or by the Merkle root? BitTorrent only requests parts from users that have the full file, which follows from the latter. If you request by leaf hash instead, I'd imagine that less-unique parts (say, a chunk composed entirely of NULL bytes) could come from ANY source, regardless of whether that user actually has the file you're looking for.

And extending that: with some enormous number of files stored globally, it would be possible to download a file that NO ONE actually has in full, with each leaf found in some other file and the pieces composed in the right order to recreate the requested file.

Can you use IPFS in combination with a Tor bridge using obfs and still transfer files? If that worked, would the person receiving the data still see your public IP?

So you mean: how is the data verified to be correct once the client receives it? inb4 it isn't verified

On the leaf. Each 256k block has its own DHT entry (which is why it's known to be so chatty). This also means that if you have a file and change one byte in it, most of the file will be deduplicated by IPFS when you re-add it.
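You can see the dedup for yourself, roughly like this (file name made up; -q prints only the hashes):
$ ipfs add -q bigfile.bin      # note the root hash
$ ipfs refs <root-hash>        # lists the hashes of the underlying 256k blocks
Flip one byte in the file, `ipfs add` it again, and `ipfs refs` on the new root shows every block hash unchanged except the one containing the edit, so only that block gets stored a second time.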


My understanding is that IPNS entries are signed with your key (and verified against the public key), so that's not an issue. There is a problem where a malicious node could return an old entry, but that's why each entry is stored on a fuck-ton of DHT nodes. Which is also why it takes so long to resolve names: it doesn't just take the first resolution it can get.

So what I suggested would hold, then: a file no one actually has could be assembled by the network given a list of leaves, by retrieving them from other files? I suppose that's nothing special, except that the granularity of chunks is bigger than, say, 1 bit. But to confirm my understanding, is this true?

With bitswap, it DOES download random things, kinda. You swap fragments among peers from random content.

It's hash-based addressing: if the chunks that make up the file exist in other files, they are exactly as valid in the requested file as they are in those other files. That is, you request the hash, and provenance is a meaningless concept. You can think of it as two completely different kinds of data: the actual chunks, and the file descriptors, which are Merkle graphs.

IPNS is still handled via DNS, so short of someone pwning the authoritative nameserver for a domain, you're looking at a hijacked local resolver, which you can defeat via VPN.

I'm requesting that people please prefix their hashes with "/ipfs/" when posting, so that the browser addon detects and anchors them; that way people who have it can just click them.

Like this
QmZsWcgsKNbjmiSeQGrZUwAbHVVVtxSFKm9h9AFKoAK8aH
->
/ipfs/QmZsWcgsKNbjmiSeQGrZUwAbHVVVtxSFKm9h9AFKoAK8aH

Attached: 1385077398214.jpg (261x287, 57.52K)

updated my porn folder again
/ipns/QmVm4jMdZnewAAU3QPoUBJ6jpjjicRWsfcjfD7c47rf1KC/latest.html

direct link since ipns is buggy
/ipfs/zDMZof1m2wGAGywacnVpmTXZ76tW4EWixSdVz1rkkNGLj3d5vAuh/

alright, my first try at this: it's Lovecraft's "Beyond the Wall of Sleep"
/ipfs/QmShe7riU5RVJ7iGkr7ebMqsgUjMk5SfiqeRMeB1Hnu6gX

Zig Forums shoop incoming

Nobody is interested in the contents of your spank folder, you degenerate.

but someone might be

And here's the Necronomicon
/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6
could someone tell me if it works alright?

Attached: CRjSbwjVEAAuScv.jpg large.jpg (640x649, 38.18K)

You can check it yourself by accessing the file through the gateway, e.g. ipfs.io/ipfs/QmT45tFQo5DJ8m7VShLPecsTsGy1aSKBq4Pww8MfaMppK6
If you can see it there, everyone can find it.

How do you mean?