What does Zig Forums think about seL4?

sel4.systems/

Attached: unhackable-kernel-sel4.jpg (982x552, 72.41K)

Other urls found in this thread:

schneier.com/blog/archives/2011/04/schneiers_law.html
core.ac.uk/download/pdf/10697957.pdf
whiley.org/2017/12/19/verifying-bubble-sort-in-whiley/

nice bait

gno

By being in denial about possible security problems, they are essentially dooming themselves to have severe security issues.

Do they have a bug bounty system? How many people are auditing the code?

Just because you can't find a flaw in your own code doesn't mean it's perfect. Linux is in a good situation because there are lots of people auditing the code. Can't say the same about some no-name project that makes bogus claims about being unhackable.

Someone claiming their project is "unhackable" is either ignorant about security, or just knowingly making an edgy bullshit claim.

Security is not a kernel. Security is not an OS. Security is not a device. What is security?

Security is a service. Security is an ongoing process, never perfect and never finished -- auditing, finding problems, and then fixing them. If a project claims that security isn't a problem for them, they are essentially choosing to forgo the entire process, so security really isn't possible in such a project.

It's secure by mathematical proof, and if there is a security problem in non-theoretical usage, then the problem is in the hardware.

schneier.com/blog/archives/2011/04/schneiers_law.html
yeah man, just keep mentioning "mathematical proof" as if that's a magic bullet for security for your shitty project you're shilling
btw people have been using incorrect buzzwords like "mathematical proof" and "formal verification" for years, and guess what? security is still an issue anyway, despite all the bold claims about some neckbeard's shitty project being "unhackable"

Rustaceans like those words.

Formal verification is retarded. All it does is compare your code to a model of your code, and that model has no guarantee of being correct itself.

It doesn't have any use cases so it's pointless.

your software can't get hacked if nobody fucking uses it

Attached: galaxybrain.jpg (800x566, 74.74K)

Security is not a process. Security is a property of a system that says the system maintains some invariants (for instance, "processes can never access each other's memory"). If you can prove that your software maintains the invariant, it is secure relative to whatever the assumptions of that proof are. For instance, one of the assumptions might be that the processor supports SSE instructions, or that it correctly implements ring switching. If there are any security holes, the fault is on the hardware.
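
As a toy sketch of what "proving an invariant" means (Lean 4, everything here is my own example, nothing to do with seL4's actual spec):

-- toy state machine: the "state" is a counter, the invariant is
-- that taking a step never decreases it
def step (s : Nat) : Nat := s + 1

-- machine-checked proof that every step preserves the invariant
theorem step_never_decreases (s : Nat) : s ≤ step s :=
  Nat.le_succ s

The real thing does the same shape of argument over the whole kernel state instead of a counter.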


The model can only be "wrong" in the sense that some hardware doesn't match up to the model, for instance if the hardware has some bug. One of the nice things about formal verification is that it pinpoints where the fault for any issue lies, so if the hardware violates its model, it's easy to compare the model (what is expected to happen) with the reality to deduce what needs to be fixed.


It's being used in drones and other military real time systems.

Formal verification is the only way to know that your software actually works. On top of that, most formal verification is done in languages with higher-order types that make it possible to have "the type of functions which take a list of comparable objects and return that list properly sorted from low to high", which will ensure that your sorting function implementation sorts correctly. It's possible that you create a bug in your model, but the model is much simpler than the implementation, so it's far more likely that the bug is in the implementation. In addition, the nature of proofs makes it more likely that a bug in the model will be detected, as further proofs depending on the bugged property will likely fail. Strong type systems make it much easier to ensure that your code does what you expect it to do.
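
If you want to see what that looks like, here's a rough sketch in Lean 4 (names are mine, specialized to naturals for brevity):

-- Bool-level check that a list of naturals is in non-decreasing order
def isSorted : List Nat → Bool
  | a :: b :: rest => decide (a ≤ b) && isSorted (b :: rest)
  | _ => true

-- "the type of functions whose result is properly sorted":
-- the output list is paired with a machine-checked proof of sortedness
def SortFn := List Nat → { r : List Nat // isSorted r = true }

Any value of SortFn can only be constructed together with evidence that its output is sorted; the compiler refuses anything else.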

Bullshit. When the specification is more complicated than the implementation, you have just increased the number of errors you should expect. A specification is just another shitty programming language.

function detectDataminingThread() {
    if (thread.title == "What does x think about y?") {
        sage.post();
        return true;
    }
}

The specification is never more complicated than the implementation, I don't understand why you would assume otherwise. The whole point of the model is to be the simplest possible definition of what correctness is. Either way, you would need to make the same bug in both the model and the implementation for the proof to check out, which is where most of the bug reducing power comes from.

The specification language is usually a really nice functional language with access to solvers and strong types, whereas the implementation language is generally some awful procedural language. There are obviously going to be fewer bugs in "the output must be a solution to x^3+6x=42" than in the C code which solves said equation.
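
For scale, here's that spec as a sketch in Lean 4 (naming mine): the whole correctness condition is one line, while any solver for it is a loop with its own edge cases.

-- the complete correctness specification: one line, no algorithm in it
def IsSolution (x : Int) : Prop := x * x * x + 6 * x = 42

-- any implementation has to do actual work (search, rounding, edge
-- cases) just to produce a candidate that this one-liner can judge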


I think the shilling against verified software is FUD propaganda; anyone who understands what formal verification actually does should love it. I think companies should be held legally liable when software they sell fails, unless that software has either free source code or formal verification.

Security is a process. The reason is that it doesn't matter if the hardware and software are perfect and correct when the operators undermine security with bad practices. The operators need to actively participate in security, because security is not just a switch that you flip on.

You are an absolute retard that has no idea what you are talking about.

What if their proof has a flaw?

The proof is checked by a machine and is very unlikely to be wrong. The specification is written by a human though and is likely wrong.
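
To illustrate with a trivial Lean 4 sketch: the machine checks these mechanically, but whether they are the statements you actually meant to prove is entirely on the human.

-- the kernel verifies this without any human review
example : 2 + 2 = 4 := by decide

-- it will just as happily check a proof of a uselessly weak spec;
-- deciding *what* to prove remains a human problem
example : 2 + 2 ≠ 5 := by decide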

Are the machine and the proof-checking code proven correct?
Is the specification proven secure?

Every proof comes down to self-evidence eventually.
Well user, why don't you check your specification for security?

user, how about you do 5 minutes of research and compare, for example, the specification that a function sorts a list with actual implementations of sort functions.

Writing sort is very simple and short. Writing a sort specification, and then the same algorithm in a way that can comply with the specification is far far far more complicated.
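
Just to spell out what the spec side already involves (sketch in Lean 4, names mine): even before you prove anything about an implementation, "sorts" needs both ordering and element preservation.

-- order: each adjacent pair is in non-decreasing order
def IsSorted : List Nat → Prop
  | a :: b :: rest => a ≤ b ∧ IsSorted (b :: rest)
  | _ => True

-- same elements: every value occurs equally often in input and output
def count (x : Nat) : List Nat → Nat
  | [] => 0
  | y :: ys => (if x = y then 1 else 0) + count x ys

-- the full spec a verified sort has to discharge
def IsSortOf (input output : List Nat) : Prop :=
  IsSorted output ∧ ∀ x, count x input = count x output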

what the fuck am I reading?

There's no guarantee that it actually describes the hardware, other than the standard testing techniques that regular software development already uses.
The model is written by humans.

Not guaranteed. The specification and proof can be written for something that doesn't match the underlying semantics. The proof for the C semantics may be wrong, or the computer architecture proof may not describe actual system behavior. The specification of the kernel may be improperly defined and have an exploitable vulnerability. The specification does not check itself. All formal verification does is move problems into the model specification instead of the source code. You more than duplicate your work for no gain in safety.

It's not unhackable.
Saying that is dangerous and may cause people to trust it even though there is no reason to trust the model more than any other source code.

Also I guarantee the model doesn't account for the numerous side-channels and cache timing data exfiltration techniques that have been found for x86-64 CPUs.
Good luck writing semantics for opaque hardware properties that differ from model to model.

If it's possible for the operator to do something outside of the specification, the system is insecure. Outside of that, it's not the system's job to prevent you from doing something not in the spec. So the claim that the OS is unhackable doesn't mean that the user is unphishable. There are lots of meanings of security, but the relevant one for software implementation is specification compliance.


// define concept of partially sorted array
property sorted(int[] arr, int n)
where all { i in 1..n | arr[i-1] <= arr[i] }

// arrays of ints sorted from low to high
type sorted_ints is (int[] items)
where sorted(items, |items|)

function bubbleSort(int[] items) -> (sorted_ints result):
    //
    bool clean
    //
    do:
        // reset clean flag
        clean = true
        int i = 1
        // look for unsorted pairs
        while i < |items|
        where i >= 1
        where clean ==> sorted(items,i):
            if items[i-1] > items[i]:
                // found unsorted pair
                clean = false
                int tmp = items[i-1]
                items[i-1] = items[i]
                items[i] = tmp
            i = i + 1
    while !clean
    where clean ==> sorted(items,|items|)
    // Done
    return items

Looks like the types are a lot smaller than the implementation.


If the hardware doesn't match the model, the hardware is bugged and no software patch is going to really fix it, just work around it. If your computer broadcasts its ram to the internet when asked, there is nothing that can be done from the software side to fix that, it's a hardware bug outside of the scope of the OS.

The whole point of having a specification is that it defines which side of an interface the fault is on. If the inputs to a function are invalid, it's not that function's fault; if the output doesn't match the model, then it's either a bug in the implementation or (much less likely) a bug in the model. When hardware companies start using formal models for x86 (which they do now, and have for parts of the system for years), you get end-to-end verification for a system, which is how the DARPA guys want to make drones that are secure. The software people are slowly catching up with the standard practice in the hardware industry, which is formal verification.
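
A tiny sketch of that fault assignment in types (Lean 4, toy example, names mine): the precondition is the caller's obligation, the postcondition is the implementer's.

-- caller must supply proof the divisor is nonzero (their side of the
-- contract); the implementer must return a quotient satisfying the
-- postcondition (their side)
def safeDiv (a b : Nat) (_h : b ≠ 0) : { q : Nat // q * b ≤ a } :=
  ⟨a / b, Nat.div_mul_le_self a b⟩

If a caller can't produce the proof, the fault is on their side of the interface by construction.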


The people who write your OS can't control if you decide to run it on a processor which doesn't meet the specifications required for their proof, but their proof obviously doesn't apply if the processor doesn't meet the requirements of the model. I don't know why this needs to be spelled out, formal verification is a proof that says "If you run this program on hardware that meets this specification, the program will have these properties." If you run the program on a computer that doesn't meet the specification, then it's not the OS being hacked but the hardware below it.


"unhackable" being the media's way of saying that the system has no livelocks, deadlocks, all syscalls run in bounded time, processes are kept separate, resources are only accessed in a valid way. If the hardware it's running on is compliant, it is not possible to execute a program that will violate the specification, thus it is "secure" for the definition of secure provided by the specification. For instance, it's possible to write an OS which guarantees process seperation, but doesn't have bounded time syscalls. Such a system is "secure" but not secure against DoS attacks, which makes sense because it was never required to be secure against that threat. The specification for security needs a definition of the threat model you wish to protect against, and if that threat model is too small, you can still be fucked. What can not happen on correct hardware(which can still happen with e.g. linux) is that the system claims to provide processes, but actually the processes leak.

...which is why it is currently only verified (last I checked) for the ARM platform, and they're working on a RISC-V port.

I have no idea what the fuck I'm talking about, but I thought formal models for CPUs usually hand-wave things like caching, and just give a minimal correct model of how the CPU operates, with any caching or speedup from branch prediction as a "nice side effect".

The internal models used by CPU production companies usually have formal models for the multiplier, ALU, shifters, and FPU, and the entire x87 instruction set has had a formal model for a long time (Intel started using it after the FDIV bug fucked them; AMD has similar models). There are specs for the entire processor, cycle accurate, covering how it should respond to all instructions. I suspect that AMD's formal models cover nearly the entire chip, based on some papers and talks I've seen; they've funded a fair bit of research in the area. I don't think this is one of the AMD-funded papers, but it is an example of a model for x86.
core.ac.uk/download/pdf/10697957.pdf

Formal verification has a few down sides, it takes a bit of effort to make good strong types that correctly express whatever invariant you need, but the advantages are enormous if you want to know that your software actually does what it says it does.

Formal verification doesn't prove that the process (or CPU in this case) actually does the job you need. All it proves is that the process is indeed mathematically consistent. To prove that it does the job, you have to implement it and test that it performs exactly as you design it.

ah, what a great and progressive use of free software ;^)

First of all, your definition of the sort specification is wrong. Sort is a function relating two lists, which is different from the property of a single list being sorted.

Second, that is NOT how these proofs are written. You did not even write the implementation in such a way that the property is clearly demonstrated. Go actually read how modern non-trivial proofs are written instead of larping, for example in Coq, or perhaps even just by reading the source of seL4.

thinking a high-level language can be 100% secure assumes your entire toolchain is secure too (compiler, etc.)

seL4 has proofs for verifying the compiled version is correct.

But not a proof for the operating system that is being used to run the compiler that is generating the assembly

Sort in this code is a function from a list of items to a sorted list. I'm sure there are other properties that you might want to check with this, like inclusion of all input elements in the output list, but user asked me to post an example from a Google search (whiley.org/2017/12/19/verifying-bubble-sort-in-whiley/), so I did. This is a simple verified implementation of sort with a guarantee that the result is sorted. Even if you don't think it's "clearly demonstrated", the proof checker doesn't care about your opinion and validates that this is a proper implementation. There are languages other than Coq for proof checking; this example is written in Whiley.

So what? The compiled program is proven correct.

When the proof is checked by a broken machine, the proof cannot be known to be correct. Similar to a mathematician with brain damage.

...

The type/specification is not. The type says "the result of this function is a sorted list". This is much different from "this function is a sort function": for example, if the required type is "sorted list", a valid implementation is "f x = [1, 2, 3]".
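
Concretely, here's a Lean 4 sketch (names mine) that typechecks against "returns a sorted list" and passes the proof checker, while obviously not sorting anything:

-- sortedness as a boolean check
def isSorted : List Nat → Bool
  | a :: b :: rest => decide (a ≤ b) && isSorted (b :: rest)
  | _ => true

-- satisfies the type "result is a sorted list" without being a sort:
-- the spec forgot to demand the output be a permutation of the input
def fakeSort (_input : List Nat) : { r : List Nat // isSorted r = true } :=
  ⟨[1, 2, 3], by decide⟩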

Oof. This means you can prove things that aren't actually true.

Yeah, user is right here too. Anything can be proven if you don't do termination checking. Whiley is clearly an inferior, limited-power system.
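
The classic demonstration in Lean 4 (the offending line stays commented out, because the checker refuses to accept it):

-- without termination checking, this single line would "prove" False,
-- and from False anything follows; Lean rejects it with a termination
-- error:
--   def oops : False := oops
-- only terminating definitions count as proofs
example : True := trivial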

If we cannot trust any machine to follow a mathematical verification process, then there's no point in using computers for mathematical verification. All forms of cryptography are unreliable and we may as well post our personal banking details in the open for all to see.

you give up easily user

Mods, we need a fucking medal over here

All I'm saying is that I trust the automated formal verification of a particular binary program without the need to formally verify every other part of the computer doing the verifying. I trust this because I believe in input-process-output for the verification, and assume that this will be the same on all computers.

Well user trust is not proof :^).

The more I think about it, the more I wonder if it will be adopted inside mobile basebands (and other consumer products) like earlier versions of L4 were, or if it will end up being a defense-contractor interface to a shoddy C++ tactical webserver.

Every single shitcoin has claimed "mathematical proof" and then had flaws revealed, I'm not impressed.

Attached: Sure Anon.jpg (254x254, 29.07K)

Mathematical proof only means that the process is mathematically consistent. Having a formal proof doesn't imply the system is any good for its purpose. To find if it's good for its purpose, you have to implement the system and test it out.

More like every shitcoin that said that was just lying. They did not formalize shit in Coq.