Non-CoC modern OS from scratch

ITT: Organising non-CoC quality OS.
OSDev fags get in here.

I made the effort to format this in markdown, so feel free to enhance your reading experience.

As mentioned before, I have now made an attempt to gather ideas and put them into one post. I also added some of my own.

# Introduction

Since the introduction of the anti-meritocratic "Contributor Covenant" CoC, many seem to fear that this is the downfall of the Linux kernel, and a debate about a Linux replacement is going on. There have been (direct and indirect) attempts to generate a "movement", or to establish a group of people to develop a new operating system that is not plagued by a CoC; however, these have not had much success as of yet.

Since many are not content with existing alternatives, and are in favour of a completely new "untainted" operating system, I have taken it upon myself to attempt to create a basis for collaboratively planning an OS.


# Modern design

A modern, proper operating system should look to the future and dare to prioritise bold design decisions over compatibility with existing operating systems and software libraries. Although immediate compatibility is convenient, if it means adopting bad design choices from other operating systems, it will be a detriment in the long term.

Since we are moving away from Linux, I think it is an appropriate moment to break the cycle of bad design choices propagating themselves through platforms because of compatibility. As we do not need backwards compatibility in our OS (as there is no previous version), we have the opportunity to design a new operating system interface that fits current technology. We can take this opportunity to take a good look at the problems of previous platforms, and ensure that our platform does not have them.

This will be a major project and take a long time. The OS will most probably not be as fast as current OSs, which are more mature and have had years of optimisation. It will most probably not be able to run any games, but that is not important in my eyes: games pressure the platform into efficient hardware drivers, which often leads to proprietary software (vendors' drivers).

to be continued


# Proposed guidelines

We need to organise in some way to achieve anything with the project. We need to learn from the mistakes of the Linux project and prevent things like CoCs or corporate/government infiltration.
Therefore, I propose the following guidelines (which are OPEN FOR DISCUSSION):


## Project

1. All code written should be licensed as free software, preferably GPL3, to prevent *Embrace, Extend, Extinguish*, Tivoisation, and the like. I am not sure whether AGPL3 has any merit over GPL3 if used for an operating system. If yes, then I prefer using AGPL3. Even though version 3 protects against rescinding granted licenses (and the ability to rescind is exactly what is crucial for destroying Linux after the CoC), we are guarding against CoCs from the start, so we will not need that weapon and there is no reason not to use version 3.

2. The OS should be architecture-aware, but provide a well-chosen amount of abstraction. This is the important part: specifying a well thought-out API that is modern and reasonable. With the goal of creating a lasting platform in mind, we should take our time to carefully consider how the OS should be designed.


## Anonymity

1. Contributors should use aliases, not only to prevent attention whoring, but also to protect against surveillance: there can be no blackmail if nobody knows who is contributing.

2. Contributors should make the commit metadata (time, email, etc.) unusable.


## Code Organisation

We should organise in a decentralised manner to avert attacks. I propose organising into feature forks, where a feature fork is a fork from an early stage of the project without any other features. To get the whole project's code in one place, multiple feature forks can be merged. This is tedious, but highly resilient to centralised control.

1. To prevent centralisation of the codebase into a single repository (see Linux), I recommend that there be many forks of the platform: after a certain base point, separate features should be worked on in separate forks, which then have to be merged to create a whole. This means that the whole project will be split into sub-projects. This may seem silly, but isn't: the current Linux spectacle has shown that a monolithic main repository leads to potential abuse by admins (refusing pull requests from blacklisted contributors, etc.). In my proposed system, there is no main repository; every part of the system can be maintained by anyone separately. Of course, this creates overhead when trying to find the latest fork of every feature, but, as I see it, this is the only way to prevent a dangerous centralisation of administrative power that could be used to destroy the project (again, see Linux). If we manage to master this form of organisation, we will have a decentralised, robust developer community that cannot be controlled in any way (short of personal threats).

2. Ensure that all feature forks are as early into the history as is reasonable. If features share as few ancestor commits as possible, it is easy to revert changes, and the code is kept decentralised. Imagine forking a 3D accelerator feature fork to create an audio driver feature fork. This would make the audio driver only usable in conjunction with the 3D accelerator, which bloats the system unnecessarily. Feature forks should not fork from features they do not rely on.

3. Prioritise a simple working set of features over fancy features that are not crucial. The earlier the platform can be used to actually develop applications on, the earlier we can enjoy it and notice design choices that need reconsidering. Fancy features such as 3D acceleration are not important early on and should be worked on later.


If you made it this far, congratulations.
I would like to hear what you think of this. Suggestions?
Also, we would need some kind of name.

I would like to hear what you're going to contribute to this project besides ideas.

l o g o
o
g
o

I would obviously also code. I forgot to mention this, but I have a lot of programming experience and know how to program an OS.

This thread is currently an attempt to form a consensus on goals and to create a group to launch the project with. I am not pretentious enough to just make rules and expect everyone to follow them.

...

That would be the easy way, but I think we can do better than that. You might as well just use the latest versions of Linux before the CoC, but that's not the point.

That's where you are wrong kiddo.

playing on their terms you will never win

Instead you must tell them CoC is meaningless bullshit from the fantasy of their own imagination, and has no purpose in your OS.

I don't think so; there must be at least a few able guys on this board, and they are who I am trying to reach with this thread.


I never said that it was meaningful; if you had any reading skills, you would know that I am in the anti-CoC faction.

However, as the CoC gets enforced on the Linux project, it is a very real thing, and should not be overlooked based on "heh, I bet it will not have any effect xD".


Hey... let's LARP that this OS is already written!

hey guys, MrCode has just uploaded the rewritten TCP/IPv8 stack! ...and it's only using up 128 bytes of machine code! AWESOME


Whatever, user. I did not try to, nor did I intend to, get ALL of Zig Forums to be OSDevs. And if you put in effort, it is not impossible to do, even on your own. However, I am convinced that a collaboration of a group of developers can produce higher quality code and architectures.

And don't meme your way out of responsibility or strawman me with some simplistic opinion. In the end it is everyone's responsibility to create a lasting, uncucked OS that is not controlled by trannies and megacorporations.

By trying to be neutral and not standing without any shame for the truth (that equality is a false god, trannies/faggots are mentally ill and the only remedy that can restore their almost inexistent dignity is euthanasia and a lot more not-feels-good trivia), you've already fallen for their trap.

I don't give a shit about the mental health of other people, I have nothing to do with them. That is why I don't care whether they should be eliminated or not. I also don't think that this hatred against trannies and other mentally ill people is what should be the glue of our project, but the authentic desire to have an OS that is not controlled by big corporations or subject to someone's politics and mind games.

I'd like to see a truly modular kernel, composed only of a "hypervisor" that just loads and hot-swaps the kernel in case it needs an update, a "system" or framework which just provides communication between the different modules, and modules which provide the actual functions and have a standardised API. You could say this is an exokernel, but I wouldn't be against optionally running it all in kernel mode with a compile-time flag.

I just want to see an OS where I can easily fuck any kernel module's shit up by removing a single reference to it in a config file, then replace it with an equivalent one which uses the same API.

Remember LibreBoot? Trannies are ticking timebombs, eventually something insignificant is going to make them snap and they will take down the entire project along the way. Humans work as a whole, and if one part is rotten it will spoil the entire person. Just look at furries, they are not content with drawing their shitty fetish art, they have to spread it everywhere and ruing any place they go to.


Since Linux is just the kernel, why not use a different kernel instead of an entire OS? Hurd is still waiting to be finished.

This is very similar to what I was thinking: although my terminology is a bit sloppy, these modules would be the feature forks I spoke of. So to speak, you have a basic minimal kernel, and anything that is not completely essential for the module system is then developed separately. I also sincerely hope that there will be multiple versions of every component, and a healthy sense of competition.

The problem with standardisation is that it needs consensus, but since we are anons here, it is hard to get a real consensus, so there may end up being slightly differing interfaces (+/- some functionality). This will not be easy to achieve, but I guess it will be possible to get at least an almost-consensus on what an interface should look like.

No, I do not know what happened there, but I can imagine some scenarios.
I think that since there is only a decentralised organisation and no single rulemaker, no set of participants can fuck everyone else over, especially if we develop under GPL3 with no-revocation clause. Even if at some point trannies get into the project (we would not immediately know since they are anonymous), there is nothing they can do to the rest except fork the project and ruin their own forks.

I have not looked into Hurd yet, so I do not know whether it is truly modern, i.e., did they just implement POSIX compliance, or did they think up a new standard/interface that will stay up to date for the coming decades?

Must be nice being an adolescent lolberg.

I know we jerk off to how terrible the CoC is here, but you cannot make a new OS where the only feature is that it doesn't have a CoC. Linux will always win purely because it has more existing software.


8ch has its own formatting. You can see it if you click on [options] up at the top, then "customize formatting".

In general, I suggest you lurk moar, and recognize that the only way projects get done around here is by having one sperg do all the work, and have everyone else tell him what features to have. You can either be that sperg, or give up now.

Exactly, the point was to create a new, non-POSIX-compliant OS, as POSIX is outdated.

That's similar (though not the same) to saying that Windows wins because there is so much software. This is a matter of principle and ideals. Non-CoC, non-corporation-controlled, and modern is what I want.

Thank you for the tip.

I will gladly sperg up for this; however, I also hope that I can find at least one more sperg to do so with me, because one-man decentralised development does not really work.

The problem here is that Linux has made itself a monolithic kernel; it has built in most of the driver support needed to run most hardware. This comes in the form of modules that directly target Linux, which is both a great blessing and a curse, as porting them to other kinds of systems becomes a pain.
GNU Hurd is by far the closest contender to mainline Linux there is, simply because there is an ongoing effort to both port these modules and use a wrapper layer for them.
I'm unfamiliar with BSD, but I assume it's more or less the same.

The thing about GNU Hurd is that it uses a microkernel; this way of doing things is quite slow, so no one really wants to use Hurd.
bsd is... well... bsd.

Personally, a system i'd like to see get more support would be plan 9 (or 9front) It's from the same people who orignally made unix so it's pretty fair system wise, sadly it's cursed with suckless faggots.

You can still have a monolithic kernel consisting of many modules that are chosen at compile time. If everything is designed to be as modular as possible, that makes the code easy to maintain and replace. Since one does not necessarily need to replace kernel modules at run-time, one can have the many modules statically linked. However, the architecture of the system should make replacing modules (at compile time) easy, without having to recompile the whole kernel.

I will have a look at it.

I also think that a capability-based approach to permission management is promising, instead of just sudo/non-sudo, there should be clearly defined capabilities to lock/unlock. If every program needs to request permission for potentially harmful capabilities, then the user is protected much better against malicious software.
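To make that concrete, here is a minimal sketch of what such a capability API could look like. Everything in it (cap_request, the CAP_* names, the handle type) is a hypothetical placeholder, not an existing interface, and the function bodies are stubs standing in for real kernel calls:

```c
/* Minimal capability API sketch; all names invented, bodies are stubs. */
#include <stdio.h>

typedef enum {
    CAP_FS_READ,      /* read the file system              */
    CAP_FS_WRITE,     /* modify the file system            */
    CAP_NET_CONNECT,  /* open outbound network connections */
    CAP_DEV_AUDIO     /* talk to audio devices             */
} capability_t;

typedef int cap_handle_t;

/* Ask for a capability; the kernel consults the user or a policy
 * file and returns a grant handle, or -1 on refusal. */
static cap_handle_t cap_request(capability_t cap) {
    (void)cap;
    return 0; /* stub: pretend the user granted it */
}

/* Drop a capability as soon as it is no longer needed, shrinking
 * the damage a compromised process can do. */
static void cap_drop(cap_handle_t h) { (void)h; }

int main(void) {
    cap_handle_t net = cap_request(CAP_NET_CONNECT);
    if (net < 0) {
        fprintf(stderr, "network capability denied\n");
        return 1;
    }
    /* ... talk to the network ... */
    cap_drop(net);
    return 0;
}
```

The point of the handle is that a program can hold a capability for exactly as long as it needs it, instead of running as all-powerful root for its whole lifetime.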

WTF are your goals here. Sounds like a boring ass research project.

Please read the thread before posting. Thank you.


I just had an idea: I think we could have the object files for every kernel module as part of the system, so that when a module is exchanged, the kernel is recreated by linking the object files with a new module, and replacing the old kernel library with the fresh one. However, I think that such a replacement would be non-trivial to do without rebooting the system. I guess we would need to have every module implement event handlers for the case where a module is removed / added / exchanged.
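A rough header sketch of what those event handlers could look like; the struct and hook names are invented for illustration, not a settled design:

```c
/* Hypothetical module lifecycle contract; every name is invented.
 * Each module exports one of these, and the relinking machinery
 * calls the hooks around an exchange so nothing has to reboot. */
struct module_ops {
    const char *name;
    int  (*on_added)(void);    /* module linked in and started */
    void (*on_removed)(void);  /* module about to be unlinked  */
    /* invoked on *other* modules when a peer they depend on was
     * exchanged, so they can drop stale handles and re-resolve */
    void (*on_peer_exchanged)(const char *peer);
};
```

The last hook is the interesting one: it is what would let the freshly linked kernel image take over live state from the old one without a reboot.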

Remember the thread where user posted a bootable 'hello world'? Remember how far that thread went?

I am sorry, but I am new to this site.

Also, this is not a "my first kernel"-level project, and obviously requires commitment.
This is a serious undertaking, even though I guess you will probably ridicule / dismiss me based on the fact that I am new to this site.

No, I'm going to ridicule and dismiss you based on the fact that you vastly over-estimate Zig Forums's skill and motivation.

Fair enough. I did not expect Zig Forums to be swarming with people begging me to participate in the project. However, I will start the project regardless, and I guess that somewhere along the way, someone will join eventually.

Keep us updated man. I'm sure if you can present a mostly usable project with a link to your gitlab you'll get at least a few contributors.

I'll be waiting, I'll be impressed if you actually do something.

Rust? I have >3.5 years of Rust experience.

No. This problem is already well solved using group permissions (cf. the audio, video, usb, etc. groups); it just needs to be extended to the things that currently need capabilities, which means more stuff under /dev.

The main thing to do is solve the lack of an interchange format for UNIX tools.
1) Tabular data: choose two characters for FS and RS (hint: ASCII RS and US are already there for that purpose) and forbid them in filenames. A more conservative approach would be FS=\t and RS=\n. Now, all tools must follow the FS and RS env variables and some --fs and --rs options when reading and writing their stuff (see the sketch after this list).
2) Long options: they're needed so we can have consistent option names without single-letter conflicts. In fact, ban short options; aliases are there for shortening your command lines.
3) Simple interchange image/video/audio formats: netpbm, y4m (better idea?); wav is a clusterfuck of extensions, so I don't know (create your own simple PCM format without tags or compression).
4) Unfuck Perl: reminder that it was supposed to be a more complete AWK+sed+sh, and nothing more (this is already a lot; I suggest you remove the sh part).
5) Unfuck sh: doing 1) makes the work almost nonexistent; stuff like associative arrays (thus no need for the IFS-splitting hack to emulate arrays; see zsh) and maybe threads (having to use FIFOs to communicate between shells and subshells is painful) would be good. Make it typed, too (just keep the types simple: int, float, string, and array should be enough). Don't do a horror like rc; sh has a good syntax.
6) Unfuck C, of course, but keep it simple, so writing a compiler doesn't become too complex.
7) Unfuck signals; no idea how to do that, but there's something to do.
8) Steal the good shit from plan9 (maybe even use plan9 as a base) like bind, 9p or the POSIX compat layer.
9) Fix UTF-8 by having 1 codepoint == 1 grapheme, 4 bytes should be enough for this shit.
10) Do something about terminfo lacking combo keys like ctrl+arrow.

That's already plenty.
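To illustrate 1), here is a toy sketch of a filter that honours the proposed RS/FS environment variables. The variables and their defaults are the proposal above, not an existing standard; the only real API used is POSIX getdelim:

```c
/* Toy filter: reads RS-delimited records with US-delimited fields
 * and prints one field per line. RS/FS default to ASCII RS (0x1E)
 * and US (0x1F) but can be overridden via the proposed env vars. */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static char sep(const char *name, char dflt) {
    const char *v = getenv(name);
    return (v && v[0]) ? v[0] : dflt;
}

int main(void) {
    char rs = sep("RS", 0x1E);
    char fs = sep("FS", 0x1F);
    char fsstr[2] = { fs, '\0' };
    char *rec = NULL;
    size_t cap = 0;
    ssize_t len;

    while ((len = getdelim(&rec, &cap, rs, stdin)) > 0) {
        if (rec[len - 1] == rs)
            rec[len - 1] = '\0';   /* strip the record separator */
        /* one field per output line (note: strtok collapses empty
         * fields; a real tool would preserve them) */
        for (char *f = strtok(rec, fsstr); f; f = strtok(NULL, fsstr))
            puts(f);
    }
    free(rec);
    return 0;
}
```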

Congrats on replacing unix' flaws with new flaws.

I will not disappoint you. Although I am doing this for myself.


I was actually thinking about C or C++ without the advanced features that fuck everything up. I personally think that Rust is a meme language, but if you can convince me, then why not?
I think it is important to choose the language such that modules can be written in multiple languages and still work together. So it would have to be something that is C-compatible (I think Rust should be C-compatible, though).

But before programming, we need to set some major design goals. Are you experienced / knowledgeable / skilled in OS design?


Almost everything you recommended is not part of the OS's tasks, except the thing about stealing from plan 9, which (after briefly looking into it) has questionable design choices.

it has a CoC brah.

Do you want concurrency? If yes, what kind of? Processes or something else? If processes, what kind of IPC do you want? Multi-user? Special hardware requirements like MMU?


so what
just ignore

I think concurrency is important, but I also think that one should have strong control over it (i.e., not have 2 threads on the same core). That said, I am in favour of processes, threads, and userland threads. With "or something else?", I assume you were referring to processes vs threads?
I am not sure, I think that shared memory would be great for efficiency, but I would probably implement a basic message passing API as well as a shared memory API, since shared memory is more tricky to use.
Not sure about that, intuitively I'd say not needed.
It makes it much easier to implement concurrent processes.
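A header-style sketch of how the two IPC APIs could sit side by side; every name below is a placeholder, not a committed interface:

```c
/* Hypothetical IPC surface: placeholder names, not a real interface. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t proc_id; /* process identifier */

/* Message passing: copying, simple, safe by default. */
int  msg_send(proc_id to, const void *buf, size_t len);
long msg_recv(proc_id *from, void *buf, size_t cap);    /* blocks */

/* Shared memory: faster, but callers must synchronise themselves. */
int   shm_create(size_t len);              /* returns a region handle */
void *shm_map(int region);                 /* map into this process   */
int   shm_grant(int region, proc_id peer); /* allow peer to map it    */
```

Most programs would stick to msg_send/msg_recv and only reach for shared memory when the copying cost actually shows up in measurements.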


in the OP:


So not just forking some shit.

Nope, not that many. A few fringe elements on imageboards, and people like MikeeUSA. The vast majority of significant contributors to, and users of, the Linux kernel are utterly unconcerned by the CoC. Or if they care, they're not making it known, and they certainly don't care enough to build a new OS from scratch. I'm against the CoC in any form, especially the Contributor Covenant, but you're mischaracterizing the situation.
Meaningless marketing drivel. Are you sure you don't just want to go work for Apple? They love that kind of talk there.

No shit.
A project is either insignificant enough not to draw the attention of glowdarks, or significant enough to draw the attention of glowdarks. If the former, infiltration is not a concern. If the latter, you can't stop it. Good luck finding people who can not only program an OS but have the tradecraft to thwart glowdarks.
OK.
Pig disgusting.
wut
Attention whoring is the name of the game in open source/free software. Reputational benefits are one of the few motivations that people have for contributing to this kind of software. Now you've narrowed your pool of potential developers further: OS developers with impeccable tradecraft who want absolutely no credit for their contributions.
If a well-resourced intelligence agency wants to know who the contributors to your project are, they will almost certainly be able to find out.
Software development as Rube Goldberg machine. What a clusterfuck.
You're confusing a social problem for a technological problem, and proposing a technological solution to it. The fact that Linux is in a "monolithic repository" is irrelevant. Under your scheme, if there are 4 OS devs who each maintain part of the OS, the minute that 2 devs disagree with the other 2 about the direction that one of the components should take, you have a fork on your hands. The minute that 3 devs disagree with the remaining dev about his chunk of the OS being official, the remaining dev's repo is just a few unofficial OS features: the 3 will take the last version of the remaining dev's code, assign it to someone else, and christen it the official version. The issue isn't one of centralized repository infrastructure, but centralized consensus. You're not going to solve that by splitting up the OS into "feature forks."

Come back with a few thousand lines of working code, and a plan that's less vague than "a bold, fresh vision for the future."

Sounds somewhat Unix-inspired so far. What about files? Or any other kind of shared namespaces of IPC objects. May I suggest a system where processes can create their own namespaces and pass them down to child processes?

He isn't giving recommendations for the kernel, but the OS consists of every program that comes with the system, and all the ways they can interoperate. An OS consists of a text editor, a shell, a terminal, a file system, a file manager, a web browser, and any other programs that will be useful to most users. The implication is that every single argument that has ever been had on Zig Forums, about the best editor, browser, language, font, etc, are all captured in a single OS. You therefore cannot reject any suggestion as "not part of the OS's tasks", because everything is part of the OS's tasks.


These are good goals, but there needs to be a middle ground. Forking for compatibility ties you to existing paradigms, but you can still fork to get the old code base. Linux has drivers for hundreds of different peripherals. You could fork Linux, rip out any syscalls you don't like, but keep the old drivers.

You cannot build a clean, modern operating system without clean, modern hardware. The reality is that for now we are stuck with 1980s CPUs like x86 and POWER. RISC-V is new, but it still uses the old ring-style protection. Mill seems like an interesting architecture, but it will take many, many years before they have hardware for sale. Targeting multiple architectures is code for targeting old architectures, which implies creating an OS that is not modern. If you really want to build a new OS, good luck, but that's like expecting a democratic solution to our current demographic problems. It's just not going to happen.

If you want to make computing great again, start with something simpler. Sadly, most of the big problems we are dealing with seem to be social problems rather than technological problems.

I think files should only be used for what they are supposed to be. I think that there should be files and streams, even though they are used similarly.
I think that goes well with shared memory, and the additional layer of organisation should be beneficial for easier understanding.


Well, it will obviously be worth it to reuse parts of Linux or to use them as a reference.

Lol no. Nobody on Zig Forums has any experience beyond wageslaving.
There is a long blog series about Midori, a research OS project from Microsoft: joeduffyblog.com/2015/11/03/blogging-about-midori/
It has capability-based security.

Of course it is. doc.rust-lang.org/book/first-edition/ffi.html
The best way to replace C with Rust is by sneaking Rust code into C projects. librsvg has already generated butthurt.

My suggestion would be to utilize seL4 as the kernel.

The old way that unix manages things is the idiom "everything is a file". This clearly doesn't work in the modern age, so it should be replaced: everything is a web server. When you want to edit a text file, you wouldn't execute the file containing your text editor, passing the file containing your text. Instead you would visit the website of your text editor and give it the URL of your text file to edit. Rather than a complex and unnecessary windowing system, you would have a simple web browser with a number of tabs. Rather than having to argue endlessly about programming languages and graphics toolkits, UI would be coded with HTML+CSS+JavaScript. Rather than having hundreds of different systems for IPC (signals, pipes, fifos, sockets, etc.), communication between servers would be done with good old-fashioned GET and POST requests. The shell would be superseded by the search engine.

Some people might be butthurt that the familiar unix idioms will be dead as dodos. What you have to realize is that they're already dying; most applications are already written as I've described. It's time to finish the job, and move on to the future.

I approve. The web server should be Hyper. It is written in Rust so it can fearlessly utilize concurrency, it is memory safe, and it is blazingly fast. For the web browser the obvious choice is Firefox. But maybe we should use Servo until WebRender is fully integrated into Firefox.

What I meant was that any kind of file is a means of communication between processes, just like signals, network interfaces, or even process IDs.
Just passing shared memory pages around and letting processes decide what to do with them could work for most applications, but you won't be able to isolate the PID namespace.


The web is stateless, which can be a PITA. Also you may be describing Sun RPC in new clothes. Look it up, it's cancer.

Got it. Good luck!

waste of time, openbsd already exists

There is currently a thread about OpenBSD. Looks horrible.

There have also been plenty of kernels written in rust, so we can take our pick:
github.com/search?q=kernel rust&type=Everything&repo=&langOverride=&start_value=1


this is what cookies are for, storing state on the client. We also have localstorage and indexeddb.
I have found this:
>en.wikipedia.org/wiki/Open_Network_Computing_Remote_Procedure_Call
but it doesn't list criticism. It sounds pretty standard as far as RPC goes. HTTP as RPC isn't my proposal though - it is used across the industry. Look up REST protocol for discussion of one of the common ways it's implemented.

So what do you think about this shithole so far

Full of LARPers tbh

Make modules or interfaces have a "role". This is what they do, regardless of how they do it. Interfaces which have a specific role have to comply with a "tier 0" API, which is the bare minimum a module has to do to be considered something part of that role. Everything else is a "diversion" of the API. If it is properly documented and it is sound, it can be included in the "tier 1" API, which basically means "you should support this unless you are looking to make your program an embedded systems exclusive". Anything else is part of the many "tier 2" API, officially documented, but not part of the standard.

Obviously, this should be transparent to application developers. Let all this be handled by the equivalent of the language's stdlib, by another abstraction layer upon which the stdlib can be built, or whatever.
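A sketch of how a role and its tiers could look as a C header; the "audio sink" role, the tier structs, and the probe call are all invented for illustration, not a settled design:

```c
/* Hypothetical "audio sink" role sketched as a C header;
 * the tier split and every name here are invented. */

/* Tier 0: the bare minimum a module must implement to claim the role. */
struct audio_sink_t0 {
    int (*open) (int sample_rate, int channels);
    int (*write)(const void *frames, unsigned count);
    int (*close)(void);
};

/* Tier 1: standardised extensions; support them unless you are
 * targeting embedded systems. Callers must probe before calling. */
struct audio_sink_t1 {
    int (*set_volume)(float gain);   /* may be absent: probe first */
    int (*latency_us)(void);
};

/* Tier 2 interfaces are module-specific and officially documented
 * but not standard, so they are looked up by name at run time.   */
void *role_probe(const char *module, const char *iface); /* NULL if absent */
```

The stdlib equivalent would then wrap tier 0 and tier 1 directly, while tier 2 lookups stay explicit imports.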

Sure you can store stuff in cookies and localstorage, but the server won't remember you.

REST especially is a bad example. It is used to transfer state (hence the name), but keeping track of it is done entirely on the client side. REST calls are supposed to be idempotent. REST is made for CRUD. This means you can't use REST to start a server-side process that may have side effects on other data on the server. No such thing as "run function x() and give me the result".

RPC may be better suited for calling procedures, but as the very concept of http is that every call is isolated in itself, I'm wondering why you'd bother with http anyway.

Is this what you think HTTP is? You may be confused. The website you're shitposting on is using HTTP, and yet I can see the posts you make, and you can see mine. Generally the architecture is this: the client asks the server for a token. The server complies. The client then makes future requests with the token. The server then modifies the state associated with the token.
POST requests don't have to be idempotent.

I would actually like to decouple files from pipes etc. Why would a pipe be accessible over the file system? IMO, the file system should only concern itself with storing and accessing files. Although a pipe can easily be implemented to be accessible from the file interface, I think that this is a dirty hack and that if someone wants to read a file, they should only be able to open actual files with their command. The same goes for devices, they are not files and should not act as such.

But using the namespace approach for pipes and files etc seems nice, even though I misunderstood it at first.


I come from 4chan, and I think that Zig Forums is better than 4chan's /g/, the community seems way more serious about technology. This is also why I decided to post here and not somewhere else.

Of course, like on every image board, there are anons like but I think it's at a very tolerable level.


That is a very nice idea, I like it. The standard lib would then have tier 0 and 1 API support, but tier 1 functions have to be checked for availability before calling. Tier 2 functions have to be imported, as they can be too diverse to standardise.

user, I like your idea, but it will be a very, very difficult thing. Even if we are experienced in C programming, this is on a totally new level.
Therefore, I suggest forking an old version of Linux, adapting it to the new design, and then proceeding.
As a suggestion, consider looking at the Plan 9 design.

It's not like I have any other missions in life, and I have no GF, so no responsibilities to other people (except work lol).
So I don't mind if it is difficult.

What do you mean to say with this?
You mean because of the user that proposed using Rust?
I think as long as every single module has a C-style API, it does not matter what a module is written in.

I will try to take as much as reasonably possible without interfering with my design goals. Especially the module specification that was proposed will probably conflict with some of the Linux code base, making parts of it unusable.

I read through the Wikipedia article on it, but in the next few days, I will look deeper into multiple platform designs, and of course also into POSIX to determine usable parts.

I was only stating the truth: Zig Forums is full of LARPers. Prove me wrong by actually delivering on your OS project.
Protip: you won't

I meant that "difficult" may not be enough to describe this.
My main concern is, imho, that bad choices may only show at a very late stage of development, and a real OS can only be tested once fully developed.
But I don't want to dismiss your idea; it could be a great thing. Just expressing my thoughts.

Often a program uses files in place of pipes. I write to a file, and you read from it later. One version of this I've seen is PID files: a daemon writes its PID to a file when it starts, and you can read it later to know which PID to signal. You also have named pipes, which behave exactly like pipes but are given a location on the filesystem. The neat thing about these is that the program doesn't need to know it is accessing a pipe. Consider this code for locking the screen:
mkfifo $IMAGE
ffmpeg ${IMG_OPTS} $IMAGE &> /dev/null &
i3lock ${LOCK_OPTS} -i $IMAGE -n
This is forked from code you can see here:
github.com/Ema0/i3lock-fancy/blob/master/lock
Note how the code required no change in order to turn $IMAGE into a fifo. ffmpeg and i3lock don't know and don't care that they're operating on a pipe and not a real file.
In general I think it should be true that a program should not concern itself with how a file is being stored. Whether it is on one hard drive or another, or on some remote server, or in memory, or being fed to it from another process, is not its concern.

With that said, the modern approach, used by most linux apps nowadays, is to use urls and not file paths. This allows you to more easily treat remote and local resources the same, without having to mount a FUSE for every server you want to access. In order to make this work, you would need to have a common interface for every protocol, so that you can add new protocols without having to update every app to support it.
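A sketch of what that common protocol interface could look like; the registry struct and both calls are hypothetical names, not an existing API:

```c
/* Hypothetical protocol-handler registry: applications open URLs,
 * the resolver dispatches on the scheme, and a new protocol can be
 * added system-wide without touching any application. */
struct proto_ops {
    const char *scheme;                    /* "file", "http", "ftp", ... */
    int  (*open) (const char *url);        /* returns a stream handle    */
    long (*read) (int h, void *buf, long cap);
    long (*write)(int h, const void *buf, long len);
    int  (*close)(int h);
};

int proto_register(const struct proto_ops *ops); /* invented call      */
int proto_open(const char *url);  /* finds the scheme's ops, delegates */
```

A protocol module would call proto_register() once at load time; apps only ever see proto_open() and the generic stream calls.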

so your idea is basically to replace UNIX pipes with URLs and packets, if I read it correctly (t. brainlet). it sounds interesting actually, and would provide a decentralised IPC solution.

we could access devices like that (although this might be too UNIX for your taste):

sysping --data=0x1 syscall://dev.sd.0 or
sys://dev/sd/0

sysping: sends bytes using system ipc packets??

and the UI would be a JS/HTML interpreter, an efficient, lightweight one (read: NOT Electron), and the core set of UI libs would be a JS framework. perhaps we could add C binding headers that let us send draw calls and HTML/CSS data to the JS UI manager, so we could get rid of GnomeTK+ and Qbloat. if implemented correctly it would potentially be something like GTK, Qt and Electron combined and done right, without the 100MB of RAM usage.

Redox already does the URL part.

And that right there is why you will fail.

I'm doing my own thing, mainly for my own amusement. The only major design goal as of yet is to leverage ipc\ole mechanisms to keep code complexity down and keep things modular.
As an example, if I were to write a browser for it (which I won't, because it would take more time than the os+userland+compiler combined), the protocol-related stuff would be an entirely separate module. My implementation of wget or curl would be little more than a shell script that talks to the http service.

tl;dr: i like plumbing.

How crazy would it be to implement the OSI model like it was meant to be done at an OS (userland, of course) level? I would actually fucking love to see my programs all using the same XML parser, the same encryption library, etc, all "transparent" to the developer, so the user can always have a final say on how to route/pipe a given request.

I know that the design choice "everything is a file" is handy in this case, it is very handy in many cases, but it is not clean.


Thanks, I guess this will be helpful.

Why consider anything other than a native widget system? Under the hood it doesn't need to be much more than some primitive drawing functions, so what I'd be looking to do is finish\rewrite a prototype I shit out a few years ago in love2d. Main things I had left to do before I got bored and moved on were proper padding, margins, and a couple builtins for positioning. Or I'll just shit out another one, guis are literally the easiest thing to make.


You can install any wm/de you want.

A native widget system presupposes that most code will be designed specifically for your system. In fact, you expect most code to be written generically for any system. So your native widget system will be primarily used through a toolkit. Once you realize this, it becomes obvious that you should just implement the toolkit directly. Whatever toolkit you choose will probably be much more mature, and so have support for a variety of widgets, themes, language support, etc.

Lisp machines went even further than this. There is no kernel mode and all code can be edited and replaced at runtime.


If you want to win, you make something better. "Worse is better" sucks.


There's more than UNIX out there.

Copy how Multics does it.
multicians.org/exec-env.html

Plan 9 has all the same problems.


UNIX code is well-worn in the sense that it's full of holes.


RISC-V does not use rings. Multics rings allow different privileges to be associated with different procedures running in a process, which neither UNIX nor Windows do. Most UNIX "innovations" are actually undoing real solutions to problems, like claiming that getting rid of toilets and shitting your pants is a solution to clogged toilets.

multicians.org/protection.html


That's actually marketing bullshit created as a reaction to "everything as an object" in languages like Smalltalk and Common Lisp. In these languages, integers, lists, arrays, strings, structures, classes, functions, packages, and all other data are objects. In UNIX, none of those things are files.

Date: Tue, 19 Nov 91 13:53:22 EST
Subject: Once Again, Weenix Unies Reinvent History

> Date: Tue, 19 Nov 91 08:27:49 EST
> From: DH
>
> Yesterday Rob Pike from Bell Labs gave a talk on the latest and greatest successor to unix, called Plan 9. Basically he described ITS's mechanism for using file channels to control resources as if it were the greatest new idea since the wheel.

Amazing, wasn't it? They've even reinvented the JOB device. In another couple of years I expect they will discover the need for PCLSRing (there were already hints of this in his talk yesterday). I suppose we could try explaining this to them now, but they'll only look at us cross-eyed and sputter something about how complex and inelegant that would be. And then we'd really lose it when they come back and tell us how they invented this really simple and elegant new thing...

>people start talking about fucking rust (has a coc)
Zig Forums == /g/
8chode == 4cucks

learn to identify shitposts.

Still waiting on you to begin working on a modern successor to LispOS.

no that html ui dude is a retard, he also wants to replace pipes with packets

He's not a retard, he's a shitposter.

It's really tiring to read the papers, but I'll try to work my way through them.
I think I will add RISC-V features to the OS, which will have to be emulated on other architectures. The reason is that RISC-V is more modern, and on top of that promotes freedom.

Yes it does, except they're calling them "privilege levels". Chapter 1.3, github.com/riscv/riscv-isa-manual/releases/download/draft-20181121-c743d2f/riscv-privileged.pdf


Such as?

As far as I understand, Multics rings are a RISC-V thing, and they work differently from x86 rings. They offer more fine-grained control.

It's stronger than just handy. It is the necessary level of abstraction. It is not the responsibility of the tool to keep track of where my files are coming from, and change their behaviour to match. I don't know how you use the word "unclean", but a system that forces a separation between files on disk and files in memory creates unnecessary work for all parties to no visible advantage, which I see as a very unclean thing to do.

It always pissed me off that send/recv aren't just write/read. I suppose one difference is that a socket is two sided, whereas file descriptors are normally unidirectional. But this distinction doesn't seem sufficient to justify two separate io apis.

Thanks for defending me user :)

I think that only files should be treated as such: You can seek in a file, tell its size, append, delete, overwrite. You can do no such thing with a socket or pipe. What would be more sensible is to provide a stream abstraction, where you can use sockets, files, microphones, etc., with an API that allows to read/write bytes (even then, there needs to be a distinction between read-only, read-write, and write-only streams). If you want to meme it the UNIX way, then use the stream API for everything. But if you want to have powerful file manipulation primitives (mainly seek, tell), or socket primitives (shutdown, etc.), then you need to access an API that is designed only for files (or sockets, respectively).
I am a big fan of type-safety, and *everything is a file* just doesn't fit right with me.
Also, why would you use filesystem locations to locate a socket or keyboard? It is a fucking filesystem, there to organise data on your permanent storage into named FILES and DIRECTORIES.
Also, in the UNIX file api, not every file operation can be applied to every file, which is why I say it's not clean. Rather than that, I will make a FILE,SOCKET,KEYBOARD,PIPE < IN/OUT-STREAMABLE hierarchy. If something fits into the FILE concept, then all file operations can be applied to it. If something is a SOCKET, then all socket operations can be applied to it. And so on. The UNIX equivalent of a file would then be IN/OUT-STREAMABLE, which applies to almost everything. I would also make a distinction between filesystem locations and pipe names, device names, etc. You could, for example, create a streamable handle to a file via file(location) or file://location, and devices via dev://name.
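Sketched in C, that hierarchy could be modelled with struct embedding; every name here is illustrative, not a final API:

```c
/* Illustrative sketch of the FILE,SOCKET < STREAMABLE hierarchy. */
#include <stddef.h>

struct streamable {                    /* the common supertype     */
    long (*read) (struct streamable *, void *, size_t);
    long (*write)(struct streamable *, const void *, size_t);
};

struct file {                          /* FILE: adds seek/tell/size */
    struct streamable stream;          /* embedded supertype        */
    long (*seek)(struct file *, long off);
    long (*tell)(struct file *);
    long (*size)(struct file *);
};

struct socket {                        /* SOCKET: adds shutdown     */
    struct streamable stream;
    int (*shutdown)(struct socket *, int how);
};

/* Any file or socket can be passed where a streamable is wanted: */
long copy_some(struct streamable *in, struct streamable *out) {
    char buf[512];
    long n = in->read(in, buf, sizeof buf);
    return n > 0 ? out->write(out, buf, n) : n;
}
```

Code that only needs bytes takes a streamable; code that genuinely needs seek/tell demands a file, and the type system enforces the distinction.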

I never did say, or intend to say, that files in RAM should be treated differently from files on a HDD or SSD or other storage medium. I did say that the API for files should only be applicable to files.

A direct consequence of this is that URLs will be the default way to pass resources, which will also make it easier to pass remote resources, as the programmer would not try to open everything with the file API, but with the streamable API, which would then detect remote URLs and other special cases, and handle them according to the specified protocol.

Right now I am at the stage where I have the following resource types:
Files, devices, pipes, connections, locks, conditions, barriers, events, futures, promises.

The following abstractions exist:
Stream = {file, device, pipe, connection}
Awaitable = {condition, barrier, event, future}

I have taken many synchronisation primitives, as I am convinced that they should be supported for simpler cooperation amongst processes. I.e., if you want to synchronise execution, and need a barrier, then on UNIX, you would have to do some strange IPC, and even simulate the barrier in one master process.
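As an example, a kernel-managed barrier under this scheme could look like the following header sketch (names invented); each participant just calls await() and the kernel releases everyone once all parties have arrived:

```c
/* Hypothetical kernel-managed barrier; no master process required. */
typedef int awaitable_t;

awaitable_t barrier_create(unsigned parties);      /* e.g. 3 processes */
int         barrier_grant(awaitable_t b, int pid); /* share with peer  */
int         await(awaitable_t b); /* blocks until `parties` processes
                                     have arrived, then releases all;
                                     the same call would work on any
                                     Awaitable: condition, event, ... */
```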

Everything having a web address is what Alan Kay said a modern Smalltalk machine should have. Every object would have a network address. Is OP taking his ideas and merging them with ((( rust )))? Don't use rust either unless you want AIDS.

What? I said URL, which stands for Uniform Resource Locator. I was not talking about web addresses (which often are expressed in terms of URLs).

To be precise, I think that resources such as locks, if owned by processes, should be located in that process (i.e., barrier:///). Only file:// and ftp etc. URLs would actually correspond to a location in the network or in a file system.

I meant to type network address.

Well, in theory, one could do that, but only with a well thought-out access control scheme. But as long as one is authenticated, I don't see why it should not be possible to interact with other machines like the local machine.

One could have a default OS interaction port, and resource references across machines would then be something like wait condition://machine-name:port/pid/id, or maybe without the pid part, I don't know.

it's easy to make an OS faster than any current bloated piece of shit. i've been working on an OS for 10 years which is memory-safe, has a single PL with small amounts of assembly to bootstrap, and is slow and secure and has no 3D graphics.
but I doubt anyone who thinks in CoCs can help

I don't think that speed is everything. I believe that the system should be programmer-friendly, and that you should be able to use it to easily program user-friendly programs.
I don't need it to outcompete in terms of speed or games or whatever, I just want a platform that truly respects freedom, is not subject to corporate interests and tyranny, and allows me to comfortably program it. As programming is evolving, so should the OS.

RISC-V is based on the PDP-11's supervisor/user modes with extra hypervisor and "machine" modes. The 286 and 386 protected modes were inspired by Multics. Instead of a kernel, Multics has code segments that run in certain rings. Running in a ring gives access to other segments in the same or outer rings. In both Multics and x86, each ring has its own stack segment. This goes beyond microkernels and also solves the PCLSRing problem and these other bullshit UNIX problems in a very simple way.


Multics rings and call gates are exactly like x86 rings and call gates. On x86 and Multics, a single process enters multiple rings depending on the code that is executed. The only reason call gates aren't widely used is because they're not portable to RISCs and other worse hardware.

en.wikipedia.org/wiki/Call_gate_(Intel)
en.wikipedia.org/wiki/Global_Descriptor_Table
en.wikipedia.org/wiki/Task_state_segment
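For reference, the 32-bit call-gate descriptor stored in the GDT/LDT looks roughly like this (field layout per the Intel manuals; the struct itself is just an illustration):

```c
#include <stdint.h>

/* 32-bit x86 call-gate descriptor (8 bytes, stored in the GDT/LDT).
 * A far call through it transfers to the target code segment at its
 * privilege level, switching stacks and copying param_count dwords
 * of arguments to the new stack. */
struct call_gate {
    uint16_t offset_low;   /* target offset, bits 0..15                 */
    uint16_t selector;     /* target code-segment selector              */
    uint8_t  param_count;  /* bits 0..4: dwords copied to the new stack */
    uint8_t  type_attr;    /* type = 0xC (32-bit call gate), DPL, P bit */
    uint16_t offset_high;  /* target offset, bits 16..31                */
} __attribute__((packed));
```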


UNIX files suck so much that seek and tell are considered "powerful file manipulation primitives" but most mainframe OSes have keyed and random access files. Files on these OSes are designed for random-access disks, not tape drives. Even "seek" and "rewind" are tape drive bullshit that slow down your computer and clog up your brain preventing you from understanding what your computer can really do. A disk lets you access multiple parts of a file without having to read everything in between and an SSD is even better at this. A consequence of the UNIX way is file formats like XML that are designed to be read one character at a time.

>Also, in the UNIX file api, not every file operation can be applied to every file, which is why I say it's not clean. Rather than that, I will make a FILE,SOCKET,KEYBOARD,PIPE < IN/OUT-STREAMABLE hierarchy. If something fits into the FILE concept, then all file operations can be applied to it. If something is a SOCKET, then all socket operations can be applied to it. And so on.
That's why OOP and inheritance are good. "Everything is an object" also means files are objects, but "everything is a file" means UNIX weenies don't know what "everything" means.

I don't regard it a "real" UNIX, then again I wouldn't buy a"real" UNIX, 1970s software technology is not something Iwould want to buy today.Getting caught up in the "pure" UNIX war will lead you torestrict yourself to "pure" SVR4 implementations, in themainstream camp *only* SUN have gone for this. That in myview does not make it much of a "standard".If a vendor decides to do something about the crassinadequacies of UNIX we should give them three cheers, notstart a flame war about how the DIRECTORY command *must*forever and ever be called ls because that is what the greattin pot Gods who wrote UNIX thought was a nice, clear namefor it.The most threatening thing I see in computing today is the"we have found the answer, all heretics will perish"attitude. I have an awful lot of experience in computing, Ihave used six or seven operating systems and I have evenwritten one. UNIX in my view is an abomination, it hasserious difficulties, these could have been fixed quiteeasily, but I now realize nobody ever will.At the moment I use a VMS box, I do so because I find that Ido not spend my time having to think in the "UNIX" mentalitythat centers around kludges. I do not have to tolerate ahelp system that begins its insults of the user by beinginvoked with "man".Apollo in my view were the only UNIX vendor to realize thatthey had to put work into the basic operating system. Theyhad ACLs, shared libraries and many other essential featuresfive years ago.What I find disgusting about UNIX is that it has *never*grown any operating system extensions of its own, all thecreative work is derived from VMS, Multics and theoperating systems it killed.

based