So there’s this ongoing debate in GNU/Linux between static and dynamic linking. Really it’s also framed as a debate between musl, which lists static linking as a selling point, and the popular glibc, which afaik has all but dropped static support and really only does dynamic linking.
So different C coders have different views on this, but I wanted to see what it might mean for the recipients of the compiled binaries. What would it mean for the end user or an admin managing their systems? How would performance change? Would anyone have to do anything different? What about compiling a program that’s not in the repos and has no packaged version? Does that change?
Which one is truly better for everyone in the long run?
Static/Dynamic Linking and the Administrator and User
This is self-explanatory as static linking means that the libraries/dependencies are included in the binary.
_IMPORTANT_ Those against static linking will say that the entire library/dependency is included but that is NOT true; only the routines needed are included in the binary.
As far as I know, the indirect call through the PLT/GOT lookup table that dynamically-linked functions go through isn't that expensive, so any performance advantage static linking gets here, if there is one, is modest.
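A rough way to check the "only the needed routines" claim yourself, using the hello.c posted later in the thread (libc.a's path varies by distro, and nm only works before you strip the binary):
$ gcc -static -o hello hello.c
$ nm hello | grep -c ' T '                            # functions actually linked into the binary
$ nm -A /usr/lib/libc.a 2>/dev/null | grep -c ' T '   # everything libc.a has to offer
The first count comes out far smaller than the second: the linker only pulls in the archive members your program (and their dependencies) actually reference.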
You mean the user/admin? Nope. Just install it and use it in the same way.
Depends on whether that program supports static linking (in the C case that means a C library that supports static linking; glibc purposefully breaks it). By support I also mean: does the language actually allow it (some interpreted languages do not), does the developer support it in their build files or setup (or can you easily modify them if the developer doesn't), and are there any license conflicts that arise from including parts of those dependencies in the binary itself?
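If the developer's build files don't offer a switch, it's often just a matter of passing -static at link time; a minimal sketch of a hypothetical Makefile (only works if .a archives of every dependency are actually installed, and the recipe line must be indented with a tab):
# hypothetical Makefile fragment
CC      = cc
CFLAGS  = -O2
LDFLAGS = -static        # ask the linker for a fully static binary

hello: hello.c
	$(CC) $(CFLAGS) -o $@ $< $(LDFLAGS)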
Permissively licensed software (MIT, BSD, ISC, etc.) can link dynamically to a GPL'd library if the library is LGPL or an explicit linking exception was stated, but statically linking it is far more restricted (the LGPL, for instance, only allows it if users are given a way to relink against a modified library). The reason is that when the library code is included in the binary you ship, it counts as redistribution of that library, so the combined work must be released under copyleft-compatible terms to comply with the GPL. If your program is GPL licensed anyway, then this does not matter. There are permissively licensed programs that dynamically link to GPL'd/LGPL'd libraries, and if you were to build a statically-linked binary of such a program, you would have to find alternatives to those libraries or you would be violating the license.
Static linking is better in the long run, as binaries would be truly portable between different Linux distributions and even to FreeBSD if its Linux binary compatibility layer is enabled. Static linking is the true solution, not this Flatpak, AppImage, and Snap package shit.
Static linking is objectively superior.
Dynamic linking is pure braindamage.
Dynamic linking is objectively superior.
Static linking is pure braindamage.
based
braindamaged
Spoiler that image of a duck fucking that horse, please
Comments on distribution of security updates for common libraries?
insufficient parentheses. It's not a debate; it's one guy who won't accept debate.
that guy's position: static linking has certain downsides, so let's not support it at all.
other people's positions:
IT ALSO HAS CERTAIN UPSIDES, MOTHERFUCKER
some people will upgrade libraries and get upgraded utility (security, performance)
other people will upgrade libraries and get degraded utility (shit suddenly not working, new security issues, new performance issues)
yet other people will have the linker hijacked and find their application linking in malicious code
the guy who won't accept debate only has eyes for this first group of people.
static binaries are nice. you can run them on any distro, unlike shared things that will only run if you have the same versions of everything in the same locations as on the compiling system.
The performance improvements can potentially be huge because of LTO; the compiler could, for example, inline a function provided by a static library.
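A rough illustration of that (add.c/main.c are made up; gcc-ar is used so GCC's LTO plugin handles the bytecode in the archive):
$ gcc -O2 -flto -c add.c                 # library object carrying LTO bytecode
$ gcc-ar rcs libadd.a add.o              # static archive of the LTO object
$ gcc -O2 -flto -c main.c                # program that calls add()
$ gcc -O2 -flto -o main main.o libadd.a  # at link time the optimizer can inline add() into main()
You don't get that across a shared-library boundary, since the call has to go through the dynamic symbol.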
Are you spying on me?
Dynamically-linked security process: 1. Vulnerability detected; 2. Developer fixes vulnerability; 3. Distro builds fixed version; 4. Distro pushes fixed version to repos; 5. End user installs fixed version
Statically-linked security process: 1. Vulnerability detected; 2. Developer fixes vulnerability; 3. Distro builds fixed version; 4. Distro rebuilds programs that use the dependency; 5. Distro pushes fixed programs to repos; 6. End user installs fixed programs
The static process has to rebuild the programs that use the dependency (the dynamic one does not), but both processes rely on the user to actually patch their system. Additionally, having to rebuild programs (having to build more than one thing) does not make a system less secure; that argument is fallacious because only the statically-linked binaries that embed the vulnerable code are affected, unlike the entire dynamically-linked system, and it ignores the fact that systems become insecure because of poor administration, not because of a longer build process.
Now that's bullshit, this is exactly the same for both: static linking only keeps used symbols and dynamic loading only loads used symbols.
try and think for longer than it takes to come up with some kind of objection. you're really pulling a "maintainer of glibc" with this level of brainlessness.
this scenario has fucking nothing to do with symbols appearing in binaries or not. symbols don't act on their own; they have to be used by the application.
before you think for another 0.002s and then object, I already agree that a static-linking linux distribution will frequently see that all of its static binaries are vulnerable. Usually people want dynamic linking by default but want static linking as an option. The glibc maintainer's disease is to reject the option.
don't you only have to rebuild the changed parts? i remember this happening often when i patch and recompile something. some distro package build systems suck though and will rebuild the whole thing, and sometimes even everything it depends on. of these, void probably has the worst package builder: it redownloads all dependencies for every package build, so you have to wait on that if your build fails. there's also no option to keep the already downloaded and compiled identical packages.
maybe don't link everything against systemd or the meme framework of the day. good programs don't have vulnerabilities or serious bugs frequently.
great then there's no downside to static linking.
Dynamic linking saves space. The problem is that it can break programs when you have the wrong version of a library installed. It does however ensure that bug fixes to libraries propagate fast.
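What that breakage looks like in practice, e.g. after a soname bump on the target system (someapp and libfoo are made up; output trimmed):
$ ldd ./someapp
        libfoo.so.3 => not found
        libc.so.6 => /usr/lib/libc.so.6 (0x...)
Or, when the installed glibc is too old, a runtime error along the lines of version `GLIBC_2.29' not found.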
Personally, I'm on the static linking side, unless you're talking about code vital for security (so databases, servers, the OS...). It ensures that everything works, and nobody will run out of space due to a couple of duplicated libraries nowadays.
it's really not a problem if your programs are good. complex modern programs like firefox aren't though, and most people are going to use such things. for the autistic sbc minimalist it should work fine, since they hate bloated programs and would never put such things on their system.
Lol. I have so much space I don't know what to do with it.
i don't, but the difference is so small that it doesn't really matter.
It's only a security problem when a project bundles the 3rd party libs it statically links to, or if there's no package management.
Yes, you only have to rebuild the changed parts. As you pointed out, there are package building systems that rebuild everything (such as Void's) and this would introduce a lot of unnecessary work to build packages for a statically-linked system. I'm not sure how many of them do this, or even the motivation as to why those systems do that (sandboxing perhaps?), but such functionality isn't needed as you could simply checksum all the components during rebuild and fix things as needed.
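Something like this is all the checksumming really amounts to (the file names and the build-pkg step are made up):
# rebuild only if the package's inputs actually changed
sha256sum deps/*.a src/*.c > new.sums
if ! cmp -s new.sums old.sums; then
    build-pkg && mv new.sums old.sums    # hypothetical rebuild step
fi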
It's not bullshit as pointed out. The argument that systems whose applications use dynamically-linked dependencies are more secure is silly and unfounded.
...
Don't be so quick to dismiss de-duplicated code. If we follow the _unix_way_, you'll have lots of (basically) the same process running in parallel doing whatever it is you need done, and if all those processes could share some code, it means less cache thrashing.
Technically speaking, anything can link to GPL software. The issue arises when people want to distribute the resulting combined binary. If you want to distribute a binary that contains GPL software, the license of the combined work needs to be compatible with the GPL. If you don't care to distribute any GPL software, it really doesn't matter because the GPL is a license for the distribution of software.
Stop starting sentences with So.
No.
Most of the problems with dynamic linking are tied to our use of shitty libraries, essentially. Things to keep in mind for a toy OS.
I have the most recent glibc (2.29) and my distro ships it with libc.a, and I'm able to generate static binaries.
it should be possible, but it might be hard if you use a distro that only ships shared things. you then have to compile static versions of the dependencies and somehow make the other parts understand that they should use those manually compiled things instead of the shared system libs, and that might be hard if the program sucks. basically anything that uses the autoconf crap will be really annoying.
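Roughly, the dance looks like this when the dependency is a libtool/autoconf project that ships a pkg-config file (the prefix is arbitrary, and your mileage varies with how much the program's build system sucks):
$ ./configure --prefix=$HOME/static --enable-static --disable-shared
$ make && make install
$ # then, for the program that uses it:
$ PKG_CONFIG_PATH=$HOME/static/lib/pkgconfig \
    CPPFLAGS=-I$HOME/static/include \
    LDFLAGS="-L$HOME/static/lib -static" ./configure
$ make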
I disagree, kinda.
It only saves space if the system was carefully engineered to do so. But then again, with a carefully engineered system we can remove so much bloat that it would not be that necessary in the first place.
Just build your system with musl libc and see for yourself how much it reduces the total size of installation. Glibc is stupendously bloated. It is actually embarrassing to watch.
But it's true that dynamic libraries can potentially save some space, and they ARE in fact viable in embedded systems. The thing is, though, static people can do the busybox-style multi-call binary trick.
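For reference, the trick is just dispatching on argv[0]; a minimal sketch (the applet names are made up, busybox obviously does a lot more):
/* one static binary, behaviour picked by the name it was invoked as */
#include <stdio.h>
#include <string.h>

static int applet_true(void)  { return 0; }
static int applet_false(void) { return 1; }
static int applet_echo(int argc, char **argv) {
    for (int i = 1; i < argc; i++)
        printf("%s%s", argv[i], i + 1 < argc ? " " : "");
    printf("\n");
    return 0;
}

int main(int argc, char **argv) {
    /* strip any leading path from argv[0], then dispatch on the remaining name */
    const char *name = strrchr(argv[0], '/');
    name = name ? name + 1 : argv[0];

    if (strcmp(name, "true") == 0)  return applet_true();
    if (strcmp(name, "false") == 0) return applet_false();
    if (strcmp(name, "echo") == 0)  return applet_echo(argc, argv);

    fprintf(stderr, "unknown applet: %s\n", name);
    return 127;
}
Install it once and symlink true, false and echo to it; the libc code is linked in a single time instead of once per tool.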
But the bloated shit like Firefox or some other random crap, like Gnome apps, does not save space at all for the most part. They might ship some libraries that duplicate the ones installed in the system, or they just duplicate the functionality, which isn't nice either as far as bloat removal goes. Duplicating libs means, essentially, that you lose ALL the advantages of dynamic linking except the deployment one and are left only with the disadvantages, such as slower execution, fucked TLBs, etc.
And again, I want to note that a LOT of code in contemporary bloated shit is actually duplicated or the application just naturally uses more "private" memory. It is fine in the unix essential components, but it becomes awful (like, top reports about 12% of shared memory usage for Firefox and 37% for X in my system RN; X is clearly better, but that shit sits right at the graphical core, and it's less than half; and those two are among the top 5 memory hoggers on my system if you consider the always-running shit only).
I never took my time to actually build the full system in static though. I mean, I'm kinda lazy and there are actually NO static linux distros actively maintained, at least to my knowledge. Sta.li and some others are abandoned as fuck. I will get around to it eventually, but building the whole shit with it is more maintenance work than I'm comfortable with at the moment.
Just for the record, compile&link the helloworld program with gcc -static and post the size LMAO.
It's 733448 (FUCK!!!!) bytes on my machine.
the code:
#include <stdio.h>
int main(void){printf("%s\n","Hello, world!");return 0;}
the command I used:
gcc -static -o hello hello.c
Also, running:
strings hello | grep usr/lib
reveals the following:
/usr/lib
/usr/libE1
/usr/libH9
/usr/libI9E
/usr/libI9
/usr/lib64/misc/glibc/getconf
/usr/lib64/gconv/gconv-modules.cache
/usr/lib64/locale
/usr/lib64/locale/locale-archive
/usr/lib64/
Like, I dunno if it will require this shit if I put the binary somewhere without glibc in the appropriate place, but WHAT THE FUCK? Come on GNU, this is some horseshit.
didn't know that this terminal emulator is so gay. also, size for me was 26040, but this is with musl
and if i leave the -static out then the size is 20064
32-bit ftw
i like the performance of a modern system. it's close enough if i strip it, too: 14168
...
GNU/LARP buttblasted.
it's so weird that a 77 byte text file can grow so much when you compile it
seething
No not really. Depending on the system involved, it's possible to link in a complete run time environment because that run time platform isn't a mandatory part of the target system. This implies that if the run time platform was already part of the target system, then there would be no requirement to link the platform into the application binary.
linking isn't a user concern. the entire debate is null and moot outside UNIX braindamage world. in my C projects (which are PoCs because UNIX is not a real OS), I don't even use libraries, since they're all useless. for example in my most recent project i just open a socket directly to X11 instead of using some retarded library to do this for me
Levels of braindamage regarding code reusing, ascending:
1) Shared libraries AKA we try to link random bullshit in at runtime (execution start) into our VM model
2) Static libraries AKA we try to link random bullshit once and forever into our executable
3) Do-it-yourself AKA we just write random bullshit ourselves even when we have but a vague idea what we're doing and basically we have no life
Like "cowsay"? Fine work m'lad.
You don't understand: having a library (such as libX11) compose messages _for you_, making sure you aren't working with the actual protocol but _a paraphrasing of it_, is utterly moronic. With libX11 you can never be 100% sure how you're interacting with the X protocol, because you aren't composing the messages yourself. They literally just rewrite all the docs from the X spec in a shitty, half-assed way. Why not just write the payloads yourself? It takes 5 minutes.
Meanwhile, outside of UNIX braindamage world, we use protocols directly all the time and don't even think about wrapping them in a library, because we have actual mechanisms to talk to them (such as typed values that get automatically serialized with prefix encoding, which is faster than some dumbass's hand-rolled byte-based protocol) that are easy to use and well defined. Even sending JSON to some webshit service avoids the need for libSendAllMyJSONandRewriteTheServicesSpecsButStop60%ofTheWay
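For what it's worth, composing the X11 connection setup by hand really is short; a rough sketch, assuming display :0 over the local socket and open access control (no auth cookie, so a stock setup will refuse it):
/* hand-rolled X11 connection setup, no libX11 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void) {
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strcpy(addr.sun_path, "/tmp/.X11-unix/X0");            /* display :0, assumed */
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }
    /* connection setup request: byte order, protocol 11.0, no authorization */
    unsigned char setup[12] = {
        'l', 0,      /* 'l' = little-endian client */
        11, 0,       /* protocol major version */
        0, 0,        /* protocol minor version */
        0, 0,        /* length of authorization protocol name */
        0, 0,        /* length of authorization protocol data */
        0, 0         /* padding */
    };
    if (write(fd, setup, sizeof setup) != sizeof setup) { perror("write"); return 1; }
    unsigned char reply[8];
    if (read(fd, reply, sizeof reply) < 1) { perror("read"); return 1; }
    printf("setup %s\n", reply[0] == 1 ? "accepted" : "refused");  /* 0=failed, 1=ok, 2=authenticate */
    close(fd);
    return 0;
}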
samefag
See what I did here? I just made a baseless assumption and used it as a rebuttal. Funnily you probably think I'm a LARPer because I violated one of your precious neckbeard conventions. But actually it's the other way around, and always has been. The reason chans don't have any competent users is because whenever they post, some retard immediately replies "you don't know what you're talking about", and then they go back to arguing over how to use sudo or how to write #ifdef _MUH_LIB at the top of their C header.
Well, I don't KNOW the X11 protocol. Also it's crazy to know how all those extensions and shit work.
Though I assume it is sane to use the library from a fucking PROJECT, even if it is braindamaged. X11 is Unix braindamaged anyway, so I assume you're just a troll and a LARPer LOL.
Who, the X11 devs? Well, shit, literally nobody else has an actively developed X Window System implementation.
Sounds like the library job is included in your language. Big. Fucking. Deal. I seriously don't give a shit.
How the fug do I use sudo and write to the preprocessor to do different things based on compile flags?
I'd like to congratulate you on doing the impossible. C is a high-level language and therefore uses its own library (libc).
#include <stdio.h>
int main() { printf("Hello, world!\n"); return 0; }
$ gcc --version
gcc (GCC) 9.1.0
$ gcc -std=c99 -Wall -Werror -pedantic -static -o hello hello.c
No errors. ./hello runs as expected.
$ du --apparent-size --bytes hello
760696 hello
$ strings hello | grep usr
/usr/libH9
/usr/libH9
/usr/share/locale
/usr/share/locale
/usr/lib/getconf
/usr/lib/gconv
/usr/lib/gconv/gconv-modules.cache
/usr/lib/locale
/usr/lib/locale/locale-archive
/usr/lib/
I copied the binary to a system where /usr/lib/gconv doesn't exist. glibc 2.17. No gcc. It ran as expected. It's a trivial program, of course, so I don't know whether more complex statically-compiled programs would run into problems.
Stripping shrinks the binary by ~10%.
$ strip -s hello
$ du --apparent-size --bytes hello
686816 hello
...
based
unbased
based
unbased LARPer spotted
Even more size-of-hello-world-binary LARPing. This time with strip.
based
Imagine wishing to escape the C braindamage so badly, yet still writing in C even when you don't have to.
Thanks for telling this.
What the fuck is your problem?
Non-trivial programs linked to musl libc are much lighter too.
If you don't like the printf, here's another program for you:
int main(void){return 0;}
LMAO. See how bloated that shit is with Glibc.
Sure, non-trivial static binaries with musl will be heavier than dynamic glibc ones most of the time, but the main point here is that glibc is braindamaged and is not even viable for static linking.
Neither one is better than the other because they both have different strengths and weaknesses. For system software shipped with the OS or software installed by the system administrator static linking makes little sense, because the vendor or admin is in the position to manage dependencies and ensure everything works together correctly. Replicating the same library functions over and over again in every binary that uses them is wasteful, which is why dynamic linking was introduced in the first place. But for 3rd party application software distributed as binaries and intended for user installations, static linking makes far more sense, because applications can be shipped in simple signed packages that "just work", first time every time, without relying on package managers to manage dependencies, etc. The either-or approach is stupid imo; but dynamic linking won out, and that is what we seem to be stuck with.
...
Musl and glibc comparison, mostly.
There is NO fucking way a helloworld binary should weigh anywhere near a megabyte in the Cnile language, AKA "the lowest of the high-level" and "abstraction close to assembly". This is beyond retarded. Compared to the musl result in 1078862, it shows how much of it is unneeded. Though that user also states that the dynamic binary weighs 20KB, and that's kinda weird too; maybe musl is not that optimized for dynamic linking as far as binary size for small programs goes.
archlinux.org
archlinux.org
Done. Now fuck off, LARPer.
That's pretty fuckin neato
xD
Pretty good post. The portability part can't be overstated; I've lost count of how often I wished Drepper a long painful death because some compiler bootstrapping binary was linked against glibc.
No you aren't. -static gives you a dynamic binary if you use glibc. Try to run it on a musl system and you will see. Or just use ldd. Do it with an actual static binary as well to see the difference.
Correct but misleading. It was introduced to work around the idiocies of X11.
$ ldd hello
not a dynamic executable
$ file hello
hello: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, etc.
glibc 2.29
Retard.
The reason xbps-src does that is that it is doing exactly what the build instructions say, and since the repo (rightly, imo) assumes that you don't already have every library ever on your system, the default build instructions point to URLs for the source. You can (and should) keep a custom version that just grabs already existing deps instead if you are compiling something repeatedly.
Also the option to keep the previous build is called copying the damn thing out of the build folder.
The upside of static libraries is that they're easy to distribute, but imagine if all your programs were statically linked with musl and a new security issue were found in it. You would have to update ALL of your software, and you would have to wait until the developers of every one of those programs pushed an update. With dynamic linking like in glibc, you only have to update ONE package and your system would be safe.
Static linking is for when you know the software won't change.
Dynamic is for when you are open to upgrading the software later.
imo dynamic is the best because in the future things could be better
With package management this is purely a performance problem. That problem can in turn be solved by caching, which is still less complex than dynamic linking. This also cuts both ways: a security regression in glibc automatically affects all programs. Yet mysteriously, dynamic linking proponents omit this point every time, as if software only ever improved.
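For instance, with ccache (assuming it's installed) repeated rebuilds of unchanged translation units become cache hits:
$ export CC="ccache gcc" CXX="ccache g++"
$ make                   # first build populates the cache
$ make clean && make     # identical recompiles are served from the cache
$ ccache -s              # show hit/miss statistics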