Kernel Setup/Compilation

Are there any good resources I could look at for compiling the Linux kernel myself if I'm wanting to squeeze a bit more performance out of my toaster?

If you're on Arch download the pkgbuild, edit it so you use your config instead of the default, and run it. Should be easy on Gentoo too.
Before you waste time: disabling modules will not make it faster, only smaller which is useful for embedded but nowhere else. Tell it to target your microarchitecture instead of using generic instructions.
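If you want a rough sketch of the Arch route (the linux PKGBUILD changes between releases, so treat the exact paths and steps here as assumptions, not gospel):
pkgctl repo clone --protocol=https linux   # or just download the PKGBUILD from the web interface
cd linux
cp /path/to/my.config config   # replace the stock config file shipped with the package
updpkgsums                     # refresh checksums since a source file changed
makepkg -s                     # prepare() copies the config in and runs make olddefconfig
sudo pacman -U linux-*.pkg.tar.zst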

No. There is no secret config option that will make your kernel faster; just compile it for your architecture, though if you're on amd64 it already is by default.

You seem pretty sure of yourself despite being completely wrong. The supported instructions are not the same from one CPU to the next even if both are amd64; if you don't generate code specifically for yours, you'll only be using the older instructions all of them support. Also, CPUs aren't equal: an optimization for one may be bad for another.

Not that the kernel matters much since most people don't spend too much time running kernel code.

The gentoo wiki, of course. Use the experimental patch to build it with -march=native.
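Roughly what that looks like, assuming graysky2's kernel_compiler_patch (the exact option names depend on the patch and kernel version, so take these as examples):
cd linux-6.6   # version just an example
patch -p1 < more-uarches-for-kernel-*.patch
make menuconfig   # Processor type and features -> Processor family -> pick your uarch or the native option
grep MNATIVE .config   # e.g. CONFIG_MNATIVE_INTEL=y on recent versions of the patch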

dotslashlinux
website is no longer up, but you can get the info on archive.org
web.archive.org/web/20180226135118/https://www.dotslashlinux.com/

This is basically wrong. There are very few differences between CPUs as far as compiling is concerned. The MAIN difference is access to specific instructions sets, usually vector ones. Which is where you see the big performance gains. You won't see any performance gains compiling specifically for your CPU vs another one with the same ISA.

Linux Fun Fact: The make config system was written by Raymond Chen.

Raymond Chen the Microsoft guy.

there's a million different config options that may slow down the kernel or speed it up and none of them are well documented.
biggest performance gain though is probably disabling all the security features created to deal with intel's shitty processor security problems.
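If you really want to do that, it's usually done on the kernel command line rather than at build time. A sketch (the relevant CONFIG_* names move around between kernel versions):
# /etc/default/grub, then regenerate grub.cfg
GRUB_CMDLINE_LINUX="mitigations=off"
# or prune individual mitigations in the kernel config, e.g.
# CONFIG_PAGE_TABLE_ISOLATION is not set
# CONFIG_RETPOLINE is not set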

You compile for the ISA; the idea that compiling a kernel with instructions specific to your Intel Core i5-2520M will make it perform any better than someone else's build for an Intel Core i5-2540M (or whatever other comparison you come up with) is nonsense.

Are you serious? Just look at your /proc/cpuinfo and you'll see that the supported instruction sets go well beyond baseline amd64 and SSE/AVX, and the list changes almost every CPU generation.
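Easy to check for yourself:
grep -m1 flags /proc/cpuinfo | tr ' ' '\n' | grep -E 'sse|avx|bmi'
gcc -march=native -Q --help=target | grep -- '-march='   # what -march=native actually resolves to on this CPU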

Don't certain versions of CPU have features disabled via microcode and such?
Also, I've heard that some particular i7s have trouble running AVX instructions at high clocks, maybe I read the intel errata pdf wrong.

The only way a kernel can ever be 100% optimized for the CPU it runs on is if it's written from the ground up for that specific architecture. And I'm not even talking about the ISA. I'm talking about a specific line of processor cores that are made with the same or a very similar design. It's (usually) manufacturer specific. Intel and AMD each have distinct architectures that make up their various product lines even though they share a common ISA with x86-64. The ISA doesn't determine how the CPU works internally, only which interfaces the CPU exposes to software/firmware.

An example I'll give for a completely optimized kernel is the one found in SGI's IRIX. A large portion of it was carefully written by hand in assembly and it was optimized for a few very specific MIPS processors. Something like XNU or Linux or the Solaris and BSD kernels will never reach that level of optimization because they aren't tied to a specific platform. They're written to be portable across x86, x86-64, ARM, SPARC, PowerPC, etc. They can be optimized to a large degree (and they are) and it's really just good enough. There's no reason to compile your own kernel in the age of 2GHz CPUs you can pull out of the trash for free.

I once fell for the Gentoo meme as a side project after repairing a 700MHz P3 desktop. I followed the guides, enabled and disabled all sorts of flags to optimize for speed. Even on that ancient toaster my boot time was only 1 second faster and I saved around 10MB of RAM. It's just not worth it. The kernel itself is already small enough that you aren't gonna make it much smaller or faster by messing with it. What really matters are the applications you run. Using a lightweight window manager is going to speed up your system the most. I recommend MWM or FVWM for that.

not everyone likes this method:
no matter what distro you are using, you can always download a tarball from kernel.org, then you use 'make menuconfig' to tweak the config file to tune it to your machine.

when you switch kernel versions, copy your existing .config into the new source tree and run make oldconfig to resolve any new options (it saves a backup as .config.old).

then you do make, make modules_install and finally make install. it will take a while, then test it with grub or kexec.
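For reference, the whole dance looks something like this (the version number and config path are just examples, adjust for your setup):
wget https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-6.6.tar.xz
tar xf linux-6.6.tar.xz && cd linux-6.6
cp /boot/config-$(uname -r) .config   # or zcat /proc/config.gz > .config if your kernel exposes it
make oldconfig                        # answer prompts for any new options
make menuconfig                       # tweak from there
make -j$(nproc)
sudo make modules_install
sudo make install                     # then update your bootloader / initramfs as your distro expects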


well, it's relative. i shaved more than 80 mb and 15 seconds of boot from my laptop's arch install by eliminating the ramdisk, adding support only for what i need and modules for what i would possibly need such as HID. but yeah, i did indeed waste a lot of time, and that kernel is only for my exact specs. generic kernels are a blessing imo.

What's your point? Are you implying that -march=native does absolutely nothing? Even if it only gives a few percent more, that's welcome.
The reason is limiting attack surface.
Sounds like you fell for the wrong meme there. You use Gentoo for init/libc/arch freedom, sane defaults, big repos with numerous quality overlays, portage, USE flags, SLOTs, user patches and tons of other very useful features.

What? That's in the fstab.

Use hwinfo and lspci to gather info about your hardware, and then just run make menuconfig and disable as much as you can and set it to optimize for performance. You could also disable some security features but I don't recommend doing it. Tip: once you have made your .config, you can just copy it to the new kernel's source directory and run make syncconfig and it will prompt for changed/new configuration options.

So, to compile the kernel, run
tmux   # makes it easier to look at the output of the next 3 commands while you are selecting config options
cat /proc/cpuinfo
lspci
man hwinfo
make menuconfig   # you can type / to search and then press a number key to jump to the Nth result. Also be sure to read the help entries on all config options
make -j2   # change 2 to the number of CPU cores + 1
# make sure /boot is mounted as RW
make modules_install
make install
# then use dracut or whatever to build your initramfs
# and regenerate your bootloader entries
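And for the syncconfig tip above, something like this (the paths are made up):
cp ../linux-6.5/.config .config
make syncconfig   # only prompts for options that are new or changed since the old config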

-march=native and then disable all of the hardware options that don't apply to anything in your machine.

Speaking from experience, compiling Gentoo @700MHz isn't a great experience.
I tried on a laptop with a Pentium 3; it was just grueling how long compiling any one package took, let alone the kernel, which I'll have to redo with a reinstall because it's so outdated it won't be able to retrieve new packages. Oh, and only having (I think) 500MB of memory to work with. At least I can still save myself the time of configuring by just copying my kernel config...
I've been wondering if there's more to gain from configuring memory management than from CPU optimizations, but I have no idea what changes could be made to those settings without fucking something up.
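Most of the memory-management knobs are runtime sysctls rather than build-time options, so you can experiment without recompiling. The values below are only illustrative, not recommendations:
sysctl vm.swappiness=10          # how eagerly to swap application pages out
sysctl vm.dirty_ratio=10         # start forcing writeback at a lower % of dirty memory
sysctl vm.vfs_cache_pressure=50  # keep dentry/inode caches around longer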

just use distcc
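On Gentoo that's roughly the following (host names and job counts are just examples):
# /etc/portage/make.conf on the P3
FEATURES="distcc"
MAKEOPTS="-j9"
# then point it at the fast box
distcc-config --set-hosts "localhost fastbox/8"
For the kernel itself you can also just run make -j9 CC="distcc gcc", since it doesn't go through portage.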

well, in archlinux the ramdisk is loaded by grub. my fstab only contains an entry to /dev/sda1.

i removed ramdisk support in my kernel and just added native XFS support so it loads my partition directly without loading a ramdisk with modules first
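The relevant .config bits for that setup look roughly like this (the filesystem and disk controller drivers will differ per machine):
CONFIG_XFS_FS=y          # root filesystem built in, not a module
CONFIG_ATA=y
CONFIG_SATA_AHCI=y       # whatever actually drives the disk, also built in
# CONFIG_BLK_DEV_INITRD is not set
Then boot with root=/dev/sda1 (or root=PARTUUID=... to be safe) on the kernel command line.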

...

You mean initramfs, right? Ramdisk usually refers to tmpfs for GNU/Linux.

I suspect there is not much to be gained from kernel tweaking as the kernel is not all that resource inefficient to begin with. There are however a few things that could at least be tried:
Even on faster, more modern systems cache misses are quite expensive, and on older systems with slower memory even more so. By trading some code execution speed in order to fit more into the caches, overall performance could in theory get better, or worse, depending on the workload and the CPU's cache setup.
To enable this the kernel needs to be patched, but it is a thing and there really is no downside to using it. It is going to produce better code for this particular system, but probably not anything noticeably faster.
>CONFIG_IOSCHED_DEADLINE
If you're still on a spinning disk, JFS+Deadline is still a great combo offering ok performance with really low CPU overhead. In practice this is quite meaningless however, as the overhead from disk encryption completely overshadows any scheduler or filesystem CPU usage.
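On newer kernels the deadline scheduler lives on as mq-deadline, and you can flip it at runtime without recompiling (sda is just an example device, run as root):
cat /sys/block/sda/queue/scheduler              # active scheduler shown in brackets
echo mq-deadline > /sys/block/sda/queue/scheduler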

Some basic debugging options or things that fall under CONFIG_EXPERT could also be turned off to shave off a few KB of memory usage or code size, but unless it is some embedded system where every KB counts it really is not worth the effort.