SeL4 gets formal verification on x64

sel4.systems/pipermail/announce/2018/000024.html

Attached: dowfnload.png (300x168, 4.33K)


what the fuck?

GNU/SEL4 when?

This only matters to the handful of people who want to run seL4 on their x64 IoT device rather than Windows like everyone else.

It would have been better for them to formally verify the multicore support; they should have recognized x64 as the low priority it is.

I want that! Fuck Linux.

Unfortunately for you, pretty much nobody writes drivers or user space programs for L4, not even the L4 people. They use it as a hypervisor nowadays.
So if you were to use it on the desktop, you would probably just be recreating Qubes OS, but with seL4 replacing Xen. So you would still be using Linux.
Ideally, they would actually build an OS around seL4 itself, sort of like what MINIX is trying to do by porting NetBSD stuff.
I don't want to virtualize an entire OS for each fucking program and driver running.

Still, Qubes with seL4 replacing Xen would be nice for the security assurances alone.

This

qubes-os.org/attachment/wiki/QubesArchitecture/arch-spec-0.3.pdf
Qubes pooh-poohed the idea back in 2010.

It seems the seL4 people have been looking into porting it though.
sel4.systems/pipermail/devel/2016-March/000760.html

Then they should start doing so.
The only argument I remember for Linux was that it's "free" and "not as bad as Windows", while Windows has a microkernel and they have a shitkernel you have to recompile in order to add or remove functionality.
How about a proper kernel then?
Probably because the L4 people are writing L4.

Ok.
docs.sel4.systems/Conduct.html

Attached: Mirai_botnet_is_a_bad_malware_type.jpg (500x334, 26.38K)

Yeah man, I'm sure their machine-checked proof is wrong because of the license and code of conduct.

Maybe in 1993.

You don't need drivers or a user space. Linus said Linux is an operating system. :^)

Okay you're right it's actually hybrid

It's about as monolithic as it can get at this point.

K E R N E L S P A C E S C R O L L B A R S

I've actually been looking into that. They got halfway through porting Hurd to L4/pistachio around 2004, but abandoned it because userspace capabilities weren't high-performance enough. Assuming pistachio and seL4 aren't very different from each other, it should be relatively simple to port Hurd/L4 to seL4.

Nobody is investing their time making it happen now. It won't happen until somebody takes the time to make it happen.

So the Minix guys are trying to make a crappy clone of OSX?


90% of the problems with GNU/Linux stem from GNU. Why bother putting so much effort into porting all of the GNU software and its derivatives when you could start from scratch and achieve something better in a shorter time.


Or you could just create a good OS rather than trying to solve the problem of old software being shit by containerizing everything.


This.

All you need to do is port glibc to seL4. There is hardly any other code in GNU that specifically relies on Linux.

Yeah, take this great, very well designed, formally verified system and port this massive pile of horribly bloated shit on top of it.

t. guy who understands fuck all about how the OS his programs run on works

You also have to implement all the things glibc interfaces with, which is 99% of the work. seL4 only gives you scheduling, memory management, and IPC; everything else has no business being in kernel space and so isn't implemented.

Pic related doesn't even begin to convey the complexity of modern Linux/glibc.

Attached: Linux_kernel_System_Call_Interface_and_glibc.svg.png (1440x1080, 439.65K)

You know what I do not appreciate about glibc? The feature macros. They're unnecessary. If I am writing a program that uses strcasestr, it doesn't matter whether it's compatible with anything else; all I need is the call to strcasestr. GNU/FSF etc. do this to force software developers into licensing their software under the GPL. It's garbage. I can write, and have written, a strcasestr function that doesn't need shitty GNU software.

I hope for a day when the only GPL'd software in a Linux system is the kernel itself, and I hope that day leads to the Linux kernel being relicensed to something that is not the GPL.

Is there anything more obnoxious?

GPL restricts your freedom.

AFAIK Linux does not require contributors to transfer copyright to Linus or the Linux Foundation, so there are tons of issues with relicensing. Each contributor with code in the kernel would have to be tracked down and would have to agree to the change. Or their portion of the code would have to be rewritten, probably in a clean-room manner.

You could fork the kernel and wait for everything written up to your fork to become public domain. That'll probably be in the mid-2100s, depending on how long Linus and the other contributors live. Take your statins.

You can't throw out old software every few years and expect to stay relevant.
Serious backwards compatibility is one of the things that make Windows the only usable OS for most people.

Quite a few functions and macros in the Linux kernel are licensed under the GPL for the same reason: to force developers of kernel modules to license their code under the GPL. Insidiously, not all of them are; it's only the more useful ones, the ones you would need later on in the development of whatever you were making.

And I hope for the day when Linux becomes another one of the many relics of computing which are no longer used.


The only thing the GPL protects against is good software ever being written.


Backwards compatibility is a relatively recent phenomenon in computing, and even what we have today is far from perfect, with many programs from the 90s and mid-00s experiencing compatibility issues with modern OSs (or simply being broken, as with Mac programs that used endian-specific code when Apple switched from PPC to x86). But what end users and most programmers fail to see are the repercussions of demanding backwards compatibility.

What if I told you that CPUs could have an order of magnitude more compute for any given power consumption?
What if I told you that RAM could have an order of magnitude more throughput for any given number of PCB traces?
What if I told you that you could trust your operating system completely even though it might have some non-free/binary-blob components?
What if I told you that you could have an OS which was mathematically proven to be secure against attack?
Because we can do that now, but it would break backwards compatibility. It has forced a set of hardware and software paradigms upon computing and as a result has led us down a path of diminishing returns.

Many people only use Windows for things like Word, email, and internet. And to be honest I don't give a fuck about most computer users anyway.

Did you know that GNU has a system that does things like that? It's called GNU Hurd.

back to reddit friend

do you mean the blobs would be unprivileged and can't access side channels, or are you just retarded?

Why? What's so wrong about the BSD systems?

You really have to go back you half chan POS.

Attached: DffkFr1VMAA_oxh.jpg (1154x1086, 221.81K)

BLOBs can be constructed such that they can be proven to only do certain things without access to the source code.

They don't do it because of the 60 million line problem.

openhub.net/p/chrome/analyses/latest/languages_summary
openhub.net/p/firefox/analyses/latest/languages_summary
openhub.net/p/gcc/analyses/latest/languages_summary
openhub.net/p/gnome/analyses/latest/languages_summary
openhub.net/p/linux/analyses/latest/languages_summary

First of all, memory for PCs (and soon for workstations) runs for about $30/MB, and 8 additional MB take care of both X and GNU Emacs. In addition, I won't say much about X, which I dislike, although if I'm not mistaken most of the bloat has occurred because of vendor requests. X 10 was much leaner, and provided more than sufficient functionality as far as I'm concerned. With respect to Emacs, may I remind you that the original version ran on ITS on a PDP-10, whose address space was 1 moby, i.e. 256 thousand 36-bit words (that's a little over 1 Mbyte). It had plenty of space to contain many large files, and the actual program was a not-too-large fraction of that space.

The main attraction of seL4 is the strict isolation of programs it provides: the kernel is formally proven to only allow programs to access memory regions they have been allocated (either memory allocated specifically for that program, or memory allocated as a shared buffer between it and another program).

Say you have two programs, A and B, and a DMA-capable peripheral device, C.
You also have four memory regions managed by seL4, numbered 1 through 4.
The IOMMU on the processor is configured to not allow C to access anything other than 3; from the perspective of C, the only memory in the system is 3.

When A wants to have C display something it writes data to the shared memory region 3 and informs B that there is something to display. B then looks at the data in 3 and informs C what it needs to do by sending a message to it by writing to 4. C, having received instructions from B, uses its DMA engines to access 3 and copy the data to itself and then do what it needs to do to display what A wanted on the screen.

The reason it doesn't matter if B is a binary blob is that it has no privileges: unlike with monolithic and hybrid kernel architectures, where device drivers run in privileged mode (and can therefore access any region of memory they wish), it can only access what it needs in order to perform its task of controlling the GPU. To complement this, the IOMMU is set up to only allow C to access the memory it needs; B can't use C's DMA engines to bypass the restrictions imposed on it by seL4 (preventing this isn't practical in monolithic and hybrid kernel architectures).


And what's really frustrating with the Linux kernel is that much of that code is there to provide backwards compatibility with hardware and software from 20 years ago; there is also code bloat because it's all written in C rather than in languages with native support for things like OO. If a group of developers were to rewrite Linux in C++ and only support current-generation hardware, then I suspect the result would be somewhere around 4 million lines of code.

install gentoo

I guess counting 20 years as "recent" is very convenient, so you can pretend Linux is a recent project and still hope it will go somewhere.
And what ivory tower devs fail to see are the repercussions of breaking backwards compatibility: people lose the reasons they had to use your stuff.
And this is why you'll never accomplish anything.

Breaking backwards compatibility isn't a big problem for those of us who believe in software freedom. Whatever a developer chooses for a specific program, the user will always have the freedom to change it to fit as required for the user.

HOLY SHIT HOW DID I MISS THIS???
Hopefully Genode gets into a more usable state. I wanna get off this monolithic kernel train, and Fuchsia is looking to be just as botneted as everything else Jewgle puts out.

And then users will bitch forever when the freedom to change it means sitting down for 3 months of skilled development time.

Designing software is a sophisticated task, and this is normal for all software. For some reason, people are able to wait many months for a builder to build a house or for a car maker to build a car; but if you tell a user that it will take months to modify software, that's too much for them.

Attached: yinyangemoji.png (420x420, 260.12K)

It's not just months though--it's years. Only recently have I gotten KDE Plasma with Wayland to work soundly. And, btrfs is still garbage.

Refer to