Linux design: standard linux folders

who the hell invented the standard linux folders?
why are there 15 root folders in the filesystem? Windows has only a few: Windows, Program Files, Users
why are their names 3-4 letter abbreviations that don't tell you anything? even MSDOS had 8+3 filenames, so why does linux use such shitty names as "dev", "etc", "mnt", "opt", "sbin", "usr", "var"?
what does this shit mean, and why isn't it placed inside one root folder like "System" or "Linux"?

Who designed Linux? What are his skills in systems design? How much time did he spend designing Linux before implementing it? Who chose him for the Linux designer position, and what credentials and experience did he have?

Other urls found in this thread:

en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard
en.wikipedia.org/wiki/GoboLinux
gobolinux.org/
fuchsia.googlesource.com/fuchsia/+/master/docs/the-book/filesystems.md
freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
gnu.org/software/hurd/hurd/running/nix.html
write.flossmanuals.net/command-line/file-structure/
lists.busybox.net/pipermail/busybox/2010-December/074114.html
en.wikipedia.org/wiki/Union_mount
en.wikipedia.org/wiki/Virtual_folder
github.com/torvalds/linux/search?q=folder&unscoped_q=folder
en.wikipedia.org/wiki/ICL_VME
github.com/TUD-OS/NRE/blob/b0c08bd3b3682612c111d5ffab3115ea40ef7ea4/nre/tools/linuxextract.sh

I want to cum in your ass!!!

It's called UNIX braindamage. How new are you?

Assuming this isn't bait, it's called HFS and it predates Linux.

As others have said, it's both UNIX braindamage and this
en.wikipedia.org/wiki/Filesystem_Hierarchy_Standard

But IIRC there are some obscure Linux distros that do away with that shit in favor of a Windows-inspired design with symlinks and shit

Historical Unix braindamage.


There are, but I can't remember the name for the life of me. Their role model was Apple if I remember correctly, not Windows. Not generally a bad idea; aside from the walled garden shit, Apple's design usually beats the FOSS junkyard-wars "design". Unfortunately they have to do symlinks and keep the retarded old structure around because of more unix braindamage, this time in the form of hardcoded paths. gcc is probably king at that.

en.wikipedia.org/wiki/GoboLinux
gobolinux.org/

UNIX weenies.
No. ProgramFiles(x86), ProgramFiles, Windows, AppData, ProgramData, there are files everywhere.
But they actually tell something. mnt - mount, usr - user, etc.
Anyway, your answer - Unix weenies, PDP-11, etc.
Because Linux is just a kernel, whereas GNU/Linux is the system. And that's not how Unix works, don't compare everything to Windows. If you were using a Unix-like system instead of Windows, you would ask "why isn't it placed inside /bin/?"
Linus Torvalds
No one, GNU/Linux is not a corporation, volunteers make the system. And blame Unix weenies, they designed the system, people are just copying it.

This.

Attached: Screenshot_2019-04-08 Understanding the bin sbin usr bin usr sbin split.png (644x932, 66.26K)

yet
for now, but still, they're no longer volunteer individuals but volunteer tech companies

While Windows has all system files at C:\Windows\System32, they are all thrown there without any organization or categorization: libraries, executables, configurations, drivers, even typefaces... all in the same directory. Unixen at least had each type of file in its own directory, but now gnu/linux fucked it up and there are a bunch of different types of files simply thrown at /usr/lib or /usr/share. for example, in most distros firefox is installed at /usr/lib/firefox/ and its executable is symlinked into /usr/bin/
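You can see that layout with a quick check (a sketch; exact paths vary by distro and by how firefox is packaged):
# hedged sketch: inspect where the firefox install actually lives (paths vary by distro)
ls -l /usr/bin/firefox      # often a symlink or tiny wrapper...
ls /usr/lib/firefox/        # ...pointing into the real install directory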

Attached: the current state of linux.png (1343x2400, 635.46K)

It's pretty simple really.
Developer. This is where all the development files go.
Etcetera. This is where all the unimportant files like readmes and such go.
Mount. This is where usbs get automatically mounted when you plug them in.
Optical. This is where CD/DVD drives used to be mounted.
Super binary. This is where the root binaries, like true and whoami, are located.
User. This is the users folder where they can store files and install programs.
Variable. This holds variables needed by the programs, like environment variables and permissions.

Nope. It is where configuration and rc files are located. readmes are in /usr/share/doc or /usr/doc, depending on the distribution.

Nope. This is the directory for temporary mounting. Most distros do automounting in /media/.

Nope. This is where stuff which does not fit in the system hierarchy goes. If an application's files are not organized to fit in the /usr/ hierarchy, they are installed at /opt/. But nowadays most distros package their applications to fit in /usr/.

Nope. System binary, this is where system binaries, like init and fsck, are located

Nope. This one was where users stored files in the classic Unixes. Now it is where the package manager installs applications.

Nope. This is where the system and applications put cache and other variable data. Permissions and environment variables are not stored there; they are managed by the kernel and set through instructions written in files located at /etc/.

*FHS
I thought it looked wrong when I typed it out.

I use /opt under the "This is similar but not quite /usr/local" rule. It contains the extraction of Icecat, and Tor, and the programs LBRY and wine automatically use it. I never knew that /opt originally was for optical disks, but it makes sense.

Wow what a cocksucking n00b. Everyone knows that's where you put your sexually deviant pornography.

...

This is convention that predates linux, it's unix braindamage.

Anyone have a screenshot of that parody tweet from an account pretending to be from the future saying "Today we remember 2024, when Microsoft finally gave up and changed their website to a link of the source code of every Windows Operating System. goog://12Gteadnt"

if you weren't a reddit winshit gamer you'd understand that simple 3 letter names for your folders are better to type out. unlike in winshit where to get application data you'd go "c:\users\username\AppData\etc\etc" instead of /etc/../

or that all binaries are located in /usr/bin and /usr/local/bin depending on whether you installed them via a package manager or compiled them yourself, instead of 34879374837483748 locations strewn everywhere.
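A quick way to see both locations at once (a sketch; "foo" is a hypothetical command name and type -a is a bashism):
# hedged sketch: list every copy of a command the shell can find on PATH
type -a foo
# e.g. /usr/local/bin/foo (self-compiled) listed ahead of /usr/bin/foo (from the package manager)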

Brain damaged UNIX weenie spotted.

Linus Torvalds
It's irrelevant to Linux, as Linux is only the kernel.

Location of 7zip binary in Windows
C:\Program Files\7-Zip\7z.exe
Location of p7zip binary in Linux
/usr/bin/7z
Same number of subdirectories from root, but here's the big difference: in Windows 7zip has its own folder with its own libraries and support files. In Linux the same files will be scattered between /usr/lib/, or maybe even /lib/, or hell even /usr/local/lib, and then for support files something hiding in /etc/ or /var/
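You can see the scattering directly by asking the package manager where it put things (a sketch assuming a Debian-based system and the p7zip-full package name; other distros differ):
# hedged sketch: list every path a single package installed files to
dpkg -L p7zip-full
# output typically spans /usr/bin, /usr/lib/p7zip, /usr/share/doc, /usr/share/man, ...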

Tell me that's not UNIX braindamage

Attached: 258.jpg (500x346, 19.16K)

Glyphs already have horribly low information density compared to the amount of raw data needed to represent them (128 bits for an 8x16 1bpp character, while actual information is from 7 (ASCII) to 16 (Unicode) bits only), why would you want to waste even more? Names like "dev", "bin", "etc" etc. are obvious to interpret unless you are a brainlet noob who learned about Linux literally yesterday.

Please, not even Linux system designers know the difference between /var/ and /etc/ at this point.

The Windows way is also braindamaged.


You have UNIX brain damage.

How would you do it then?

Don't use a hierarchical filesystem.

So go back to using a flat filesystem like classic Mac OS?

fuchsia.googlesource.com/fuchsia/+/master/docs/the-book/filesystems.md

Fascinating. So instead of having a global directory structure attached to root, it's basically just APIs and services requesting data from each other, making it more dynamic? As someone used to the conventional way of doing things this is hard to contextualize.

You can kinda achieve that in Linux too. Just don't install programs through the package manager and instead write some little wrapper script like cd prog; ./prog and put it on your PATH (sketch below).
That's how I run my wm, browser and a bunch of other personal projects.
Of course this still doesn't solve the issue of /home/ becoming a fucking mess, but there is literally no way to prevent that anyway.
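A minimal version of that wrapper idea (a sketch; the directory and program names are hypothetical, and /usr/local/bin is used rather than /usr/bin to stay out of the package manager's way):
#!/bin/sh
# hedged sketch: drop this in /usr/local/bin/mybrowser so a self-contained dir behaves like an installed program
cd /home/anon/apps/mybrowser || exit 1
exec ./mybrowser "$@"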

*/usr/local/bin*
lmao

The Linux Foundation is a bitch. The GNU Project will never be a corporation, because the FSF values freedom over money. The LF is led by the 'open source' camp; they don't care about freedom, they care about convenience. They just want to provide useful software, and they only release it under free licenses because they think that makes development more effective. GNU, on the other hand, is led by people using free software purely for the sake of freedom.
There are distributions trying to fix this, for example Guix System and NixOS, plus one distribution whose name I don't remember, and systemd tries to force the not-that-bad /usr/ merge thing freedesktop.org/wiki/Software/systemd/TheCaseForTheUsrMerge/
On Guix and similar on Nix

Windows is a lot more braindead than Unix. For starters, it has long paths with embedded spaces, so you have to either escape those or quote the whole string. Then there's the registry, where they shove and hide all the important bits, which you need a stupid GUI to poke at, when in Unix you can just do everything in the shell and edit plain text config files. Windows is truly the worst.

Attached: computer niggers.webm (1280x720, 1.39M)

It's like that because normally you'll only have one version of any given library. It also makes it easy to see which libraries/versions are on the system: just type ls /usr/lib /usr/local/lib and there you go.
With Windows scattering shit everywhere and each program bringing its own copies, you end up with DLL hell, and it's not easy at all to tell what's on the system. What if libsnafu v6.66 is vulnerable and you want to make sure none of your programs use it? On Windows you're gonna have to check every single program directory one at a time.
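On the Unix side that check is a short loop (a sketch; libsnafu is the hypothetical library named above, and only /usr/bin is scanned here):
# hedged sketch: find dynamically linked executables that pull in a given library
for f in /usr/bin/*; do
    ldd "$f" 2>/dev/null | grep -q 'libsnafu' && echo "$f"
done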

Protip: Linux paths are just zero-terminated byte strings with '/' being the separator. Linux allows spaces in paths too. This isn't brain damage. What is brain damage is the UNIX weenies that write programs that can't handle paths with spaces.
You're brain damaged.
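Handling such paths in the shell is just a matter of quoting (a sketch; the file path is hypothetical):
# hedged sketch: quoted vs unquoted paths with spaces
f='/home/anon/My Documents/report.txt'
cp -- "$f" /tmp/      # quoted: passed as one argument, works
cp -- $f /tmp/        # unquoted: word-split into three arguments, breaks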

...

no you are braindamaged
Here is how you do it correctly

That's only a workaround for braindead design. Plus, name completion breaks down when you have similar paths you want to cd into, because it doesn't work well when there are embedded spaces.

Why is lispfag still using GuixSD GNU/Linux instead of GuixSD GNU/Hurd since it is already a thing?

Yes, the shell is peak UNIX brain damage.
?????

A tagged system like a database or maybe a tagged graph would be very nice. A lot of the filesystem hierarchy is very artificial -- does a porn image belong to images/porn or porn/images? It reminds me of the meaningless distinction between data and metadata in XML.

That's where you're wrong, kiddo

Because Guix System on the Hurd isn't ready yet and the Hurd isn't ready itself.
gnu.org/software/hurd/hurd/running/nix.html

If you don't like the command line, that's your problem, end of story. As for the Windows CLI, yes it's braindead and I had all sorts of such problems when I had to use it at work. Their tab name completion is fucking broken. I never had bugs with that in ksh or bash, only on Windoze. The fact that you think it works flawlessly shows you never used it. QED

write.flossmanuals.net/command-line/file-structure/
< /usr - users' programs (Another bin, lib, sbin, plus local, share, src, and more)
So userS, not user, you got me; still Unix braindamage anyway.

This has always annoyed me.

Reminder that the LISPfag failed to make a simple text editor in LISP, cried about it for two threads, and continues to shit up every thread on Zig Forums out of pure butthurt

Attached: 95d81707510918c89902704b030890e5ac5c91fb06276ef46a9aaadd225e1ce7.gif (540x540, 577.24K)

This, it's one of the most embarrassingly stereotypical chapters of *N*X braindamage history:
lists.busybox.net/pipermail/busybox/2010-December/074114.html

If it's just porn images, then the images folder takes priority and porn would be a subfolder of images IMO

Daily reminder a privilege escalation bug existed in Bash for 25 years before being patched out

Attached: 4ed6c0d2e9a35e4ebe2b269b8c9c9450be8368be39b8abd429853867059d6e98.jpg (200x200, 10.61K)

How does that work, exactly? I don't have bash, but clearly the shells don't have suid bit set.
-r-xr-xr-x 1 root wheel 206848 Jul 17 2018 /bin/csh*
-r-xr-xr-x 1 root wheel 280796 Jul 17 2018 /bin/ksh*
-r-xr-xr-x 1 root wheel 222768 Jul 17 2018 /bin/sh*

Bash is written in C

To keep people like you, who'd ruin everything, from using it. It was a more or less perfect plan until there was Ubuntu.

The linux way is better because there are set standards for where you can find certain types of files, whereas windows encourages each application to do its own thing. Also the AppData/ProgramData folders are really confusing and end up producing quite long paths.
The "my computer" thing in Windows feels unnatural as fuck; just having mountpoints and only one root feels much more logical. On Windows you think in drives that hold some kind of data. On Linux you don't have to care: the drives simply blend into your FS and you'll find a file's location based on its contents rather than based on what device it is stored on.

That sounds like iOS. Also why would you want your porn to be indexed and tagged as such? I wouldn't even trust that a file really exists unless I know its hierarchical location on the drive or have it open.

Yeah, since I don't really save videos or audio this is how I do it right now. But the question still stands: what if it's mixed? For instance, my imageboard directory lives under images for obvious reasons, but this has the perverse consequence that WebMs are typically filed as images despite being videos.

That doesn't explain the privilege escalation. A buffer overflow or other bug doesn't automatically give you root.

Both should be present, same way we have /dev/disk/by-uuid, /dev/disk/by-partlabel and /dev/disk/by-id.
Unique philosophy (everything is a program, every variable is a string, pipes, a minimalist API of argv, stdout/in/err, return codes and signals) with a very flawed execution (short options even existing, no interchange tabular format, newlines in filenames not handled, no arrays bringing eval/quoting hell) requiring so many workarounds that it becomes painful.
Tcl is what sh should have been.
It's shit for retards. No other shell is as plagued by bugs and bloat.

Where I live, "unique", like "interesting", is a euphemism for "horrible". This fully applies in the case of sh. Imagine your favorite programming language for a second. Now imagine that functions can't return values; that all input parameters must be strings which must be parsed and written by each function; that there is no error handling beyond dying and barfing up an error code; and instead everything has to be done via global side effects. This is what a shell is: A retarded REPL. Unix then optimizes for this degenerate scenario through the way functions ("programs") are loaded (and unloaded!), which conveniently destroys the ability to cleanly get rid of that mess since anything that does so now has to either be persistent (clashing with the rest of the system) or carry a massive runtime penalty by loading/unloading its runtime on every use. It's a textbook example of a premature optimization causing problems later on.

You say you want a standardized format to exchange information between programs, but that's not correct. You want a way to pass structured data around without a parser mess. A shell can't give you this.

What if there are three, four, five possible categories? 6, 24, 120 symlinks?

/home/anon/multimedia/[video] [audio] [pictures]/[memes] [news media]/[porn] [hollywood]/file.dat

Not going to lie, I actually really like how iOS Photos uses machine learning type shit on the local machine to automatically categorize certain types of images for you. I wish Windows and GNU/Linux could do something similar

I like the solutions plan9 and haiku implement: each file lies in a hardcoded path along with the other files from the same package, source, domain or set, and then the system indexes them into special filesystems or directories.

For the path problem, you use union mount[1].
Say you have /app/firefox/bin/firefox.exe and /app/coreutils/bin/sed.exe as hard-coded paths on the hard drive. The system would bind each /app/$APPNAME/bin/ directory into the union directory /bin/, which would then contain both firefox.exe and sed.exe.
[1]: en.wikipedia.org/wiki/Union_mount
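On Linux the same effect can be sketched with overlayfs, which can union-mount several lower directories read-only (a sketch; the /app paths are the hypothetical ones from above):
# hedged sketch: merge two per-app bin directories into one union directory (Linux overlayfs, read-only)
mkdir -p /union/bin
mount -t overlay overlay -o lowerdir=/app/firefox/bin:/app/coreutils/bin /union/bin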

For different types of files scattered throughout different parts of the filesystem, you use virtual folders[2].
The system lists the files having the same property (e.g., image files, files created in a given day) in a virtual folder.
[2]: en.wikipedia.org/wiki/Virtual_folder
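In plain shell terms a virtual folder is just a query re-evaluated on demand rather than a place on disk (a sketch; GNU find assumed, path and date criterion are hypothetical):
# hedged sketch: "all images touched today", computed on the fly instead of stored anywhere
find /home/anon -type f \( -name '*.jpg' -o -name '*.png' \) -newermt "$(date +%F)"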

What sucks is that plain Unix has no way to implement filesystem indexing, and most modern system designers end up doing some shitty solution based on managing symlinks, which sucks because you need to run the symlink manager each time you add/del/change a file, and it also has to have a service to deal with the dead-link hell.

Also, although each of those OSes has a single indexing system (Plan9 and the BSDs have a single way to do union mounting, and BeOS/Haiku has a single way to do virtual folders), GNU/Linux has no single solution. Each kernel version, distro and desktop environment adds a new way to do it, and systemd also has its own two or three different solutions. All of them incompatible with each other.

It's directories. A "folder" is a normalpleb abstraction of a directory along with an obligatory GUI avatar of it. A "folder" represents a directory within a filesystem, but a directory needn't be a "folder".

so here's the thing, I don't get the folders
why are most of them empty?
using MX live with persistence btw

First mistake, sh isn't supposed to replace lower level languages.
They can; it's called stdout. Also, functions are second-class citizens here, binaries are kings, and they can return values.
Doesn't have any meaning. Everything that doesn't contain \0 IS a string. Anyway, you're still stuck with the idea that sh is a programming language and not a binary-gluing language.
What's wrong with error codes? C works well with this (well, except that C can't return multiple values, but that's another story).
But that's wrong.
The only thing right in your babble is that fork is a mistake.
Still confusing binaries and functions, I see.

Like I said, sh is flawed and Tcl makes it way better, but you simply don't get the point of a shell language.

In this case you use tagging (what xattrs are for) and you dynamically create views.
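Extended attributes are already enough for a crude version of that (a sketch assuming the attr tools, setfattr/getfattr, on a filesystem with xattr support; the tag and filename are hypothetical):
# hedged sketch: tag files with xattrs, then build a "view" by querying the tags
setfattr -n user.tags -v "porn,images" picture.jpg
getfattr -n user.tags picture.jpg
# naive on-demand view: every file under the current dir tagged "porn"
find . -type f -exec sh -c 'getfattr --only-values -n user.tags "$1" 2>/dev/null | grep -q porn && echo "$1"' _ {} \;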

Yes, Plan9 solves so many problems in elegant ways (their mouse fetish is disgusting though) that you can only dream about a world where UNIX wasn't "good enough".

What is inotify?

This is such a textbook case of "Unix does XYZ and therefore I can't envision any other way to do it" it's really amazing in a way. I didn't even mention fork, its suckage is completely unrelated to shells.

Thought your post was referring to fork. Which it does apply to, even if you didn't mean it.

Where did you read something like this? Go back to Lisptarding already; this was funnier.

Guix moves away from it lol

It says in the very same sentence what I was referring to, and it's not fucking fork, if you spawned programs via posix_spawn et al the shell would still be retarded. Of course, if you cannot think of anything except in terms of Unix, as you demonstrate with the insistent spurious distinction between function and "program", you would not see this -- functions pointlessly loading and unloading everything they use every single time is a "fact of life" then, as is pointless serialization.
In your post where you constantly assume that a program and a function call are substantially different. They aren't, and I explained why. A "program" is a special case that Unix optimizes for, with the consequence that you can't have anything else.
Not everybody you hate is the same person. Get your head checked.

Not really. If you examine each package, you'll find all your favorite /bin /etc /lib /share directories within them

Ok, but that's for compatibility reasons and it's better to have these directories in package directory, than having all these files and directories in /.

you're retarded

and
there is no ambiguity, you're just not paying attention. user libraries installed via your distro package manager are in /usr/lib, critical system libraries are in /lib, and whatever you "make && make install"'ed is in /usr/local/lib (but nothing stops you from installing those in /usr/lib as well)
also that defeats the entire purpose of dynamic linking, a common retardation in winshit world.

This is bullshit made up after the fact. Are ncurses, PCRE and readline critical system libraries?

Attached: 1417549295568.jpg (599x600 171.79 KB, 45.33K)

Wrong. Programs are functions which return arrows. This arrow type contains a lazy integer which is the program's exit code. The arrow is always an arrow from a stream of characters into a product of two output streams. For example the | operator is just the composition of the first and sequencing combinator ((

It made sense in old commercial unices, because '/usr' was commonly mounted over the network. So '/' played the same role as today's initramfs: the minimal system necessary to mount primary storage.
On Linux that was never widely practised and thus nobody bothered to keep '/' clean. So it rotted into a useless vestige, and once the problem of moving root mounting out of the kernel came up (due to device mappers and fancy filesystems), it was solved by initrd (and initramfs later).

That's why WinSxS exists: to basically symlink all installed libraries into a single folder to avoid dupes everywhere and DLL hell, and to allow complete uninstalling of programs. It only works for programs installed with .msi files though.

based


github.com/torvalds/linux/search?q=folder&unscoped_q=folder


unbased


based


braindamaged

Why the fuck is everyone replying to this seriously? It's obvious mockery; this is not Stack Overflow, you do not get internet points for correcting someone.

(OP)

Again, old UNIX shit, it has nothing to do with linux. The filesystem is designed to have short names to be easily typable, and everything is in the root folder to be easily accessible; since UNIX is all about files you wouldn't want to type "C:\Windows\Disks\Disk 1\Partition 2" instead of just "/dev/sda2".

That said, saying Windows' way is better is either retarded or ignorant of you. Linux's directory hierarchy is actually not bad; there is nothing wrong with it being in the root folder. Meanwhile Windows' actual OS filesystem is an absolute mess, they just hide it in C:\Windows\. Try looking in there, and imagine if that were in the root folder.

UNIX's directory hierarchy* ffs

tldr: UNIX's directory hierarchy makes sense when you actually know how UNIX is meant to be used (not that it's good, but it makes sense in its own world); there is literally nothing wrong with it being in the root directory, though the retarded minimalist naming is terrible. Windows' filesystem is absolute trash, they just put it away in C:\Windows.

/etc/ contains config files which are necessary for base system startup, thus /etc/ must always exist on the / filesystem. Contrarily, /var/ can (and should, in many scenarios) be the mountpoint of a separate filesystem.
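In practice that just means an fstab entry for /var (a sketch; the device name is hypothetical):
# hedged sketch: keep /var on its own filesystem so runaway logs can't fill /
/dev/sdb1   /var   ext4   defaults   0 2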

That's because the guy you're replying to is a retard and shellshock wasn't a privilege escalation attack but an arbitrary code execution attack.

This is the real answer, and it sucks. One computer ran out of disk space one time so they replaced one hardcoded path with two, with the new one being on the disk used for home directories because none of these weenies ever thought anyone would want UNIX to run on a different hardware configuration even though they already did that once when rewriting UNIX in PDP-11 assembly. They also never fixed this when rewriting it in C or porting it to another computer. Can you imagine this happening anywhere other than AT&T?


>en.wikipedia.org/wiki/Union_mount
>en.wikipedia.org/wiki/Virtual_folder
This is like the VME Catalogue. Also relevant to this thread are the compiled command language and the file version numbers.
en.wikipedia.org/wiki/ICL_VME


Originally, it was the main UI for UNIX, but it sucks. Weenies "reinvented" the shell as some kind of "hardcore" language because you need all these different processes to find bytes in a file.

github.com/TUD-OS/NRE/blob/b0c08bd3b3682612c111d5ffab3115ea40ef7ea4/nre/tools/linuxextract.sh
echo "Searching for gzip header..." >&2
begin=`od -A d -t x1 $binary | grep -m 1 "$gziphead"`
# cut off the leading zeros; otherwise its interpreted as octal
off1=`echo $begin | sed -e "s/^00*//" | cut -d ' ' -f 1`
# count the number of spaces between the offset and the byte-sequence; this will
# tell us how many bytes are in front of that sequence
off2=`echo $begin | sed -e "s/^[[:xdigit:]]* \(.*\) $gziphead.*$/\1/" | grep -o ' ' | wc -l`
offset=$(($off1 + $off2 + 1))
printf "Found gzip header at offset %x\n" $offset >&2
echo "Unpacking gzip file to stdout..." >&2
dd if=$binary bs=1 skip=$offset | zcat
Guess how you would do this in a real language.


UNIX has hundreds of system calls and I/O streams didn't come from UNIX. This is more "if I squint and ignore 99% of how things actually work, this UNIX braindamage kind of looks like this other thing, so we'll pretend they're the same thing" bullshit. Actually it requires ignoring more than 99% because you left out opening and closing files, and shell "features" like 5< and 6>.

So the question arises, if this situation happened once or twice a day on a typical ITS machine, where a safety mechanism prevented it from doing any damage, why doesn't the lack of such a safety measure screw Unix users all the time?
Well, part of the answer is that it -does- screw Unix users, at least occasionally. Witness DW's message above. I've been bitten too. (I once zeroed out the password file on a machine I was using this way, and boy did I think carefully about what to do next in that situation!)
But it probably doesn't happen as much as our experience with ITS suggests it should for a couple of reasons. First, we have a lot more disk space here in the future. Running out of disk space on the file system containing one's personal files used to happen every couple of weeks; now that is a somewhat more rare, although not unknown, phenomenon. Secondly, we have hardware that is quite reliable compared to what we used to have. Crashing in the middle of something critical is simply a lot less likely.
So I would conclude from this that Unix is able to get away with being sloppy about this issue because technology (in the form of bigger disks and more reliable hardware) has helped to -hide- Unix's deficiencies. As the technology continues to improve, we can look forward to even -lower- standards of software engineering in our operating systems.

have you looked into the windows folder? not to mention the fact windows has THREE temp folders, TWO swap files, THREE "home" directories and the POS registry which is literally shit all over the place, but hey, it's in a single file!

windows is so fucking shit you can't even MOVE the user folder because it breaks windows updates. and here's the $500 question: who is in charge of windows update?

even with all the crap in linux, it has NOTHING on the bloat and retardation that is "modern" windows.

this is one of my biggest gripes with linux. I do prefer how it's done in windows. I don't like the fact that a package manager is required to handle programs normally; it feels convoluted and like it's meant to limit my control. Also, I keep getting different opinions on what goes where. like you said, usr, opt, bin, etc. get confusing and the distinctions feel arbitrary most of the time.

I found myself downloading program packages online rather than using a manager, extracting them to /opt and using them that way. it feels much better and I can just use an alias, for example seamonkey=/opt/internet/seamonkey/seamonkey or something like that, and it's all in one place.

on windows I like to organize all my programs into program files then into subfolders like communication tools, utilities, etc. One thing that is a problem here is program files (x86). why the fuck does it exist? 32 bit programs run on my 64 bit OS anyway, why should it matter? also, programs that refuse to install in a chosen directory and just install where they were programmed to are a pain, as are ones that refuse to have anything to do with paths with a space in them. it's just another character ffs.

on windows you also have appdata and all that junk. A lot of this crap comes from multi user stuff. fucking nobody needs multiple users on their system. only I use my computer. I think having a root or admin account for security is stupid too. Programs should only have access to the folder they're installed in, perhaps within the OS settings can be changed to allow it access to other areas or something. Seems like having an admin password, but not actually an admin user account would be better.

I think ideally, after installing a system, you have one directory in which the OS keeps its things, organized however it likes, and you don't touch anything in there unless you know what you're doing. and that's it. the user can do whatever else they want after that.

for example after installing linux it would be nice if all I see is a /sys or something, in which are all the necessary system files, and nothing else. then I can make /docs, /videos, whatever I want, and put programs where I want.

too bad I can't still use DOS.

What exactly is the benefit of that? Feeling in charge even though the difference is meaningless?
You can do that with global management too. This must be the first time I've seen somebody want the self-maintenance mess that Windows regularly turns into.

The user is the lowest of the low in Unix land. You are in some ghetto home folder and programs will still dump shit in there.
Oh but don't worry we'll use some embarrassing bug to 'hide' the folders otherwise people would rage quit the OS after using it for a month.

pretty obvious linux sabotaged itself by sticking to this retarded folder structure
Cutler was right about many things sucking in Unix; no matter how good the reasons were, this is just pathetic

I think it should be hidden, and then you have an 'empty' root folder to put stuff in.

The only reason Linux was even invented was because Bell Labs was holding the source and licensing rights to UNIX hostage and BSD wasn’t open source yet. Then over the years people tried making Linux into something it wasn’t because it was the only major open source kernel under the GPL

BASED

It's called /bin because it's like a big bin into which you throw all your executable files.

Just use Windows 95.

Multiple users are useful for isolating programs. If you run your web browser on your main user account you're a brainlet and deserve the eternally confusing and frustrating experience that is your life.
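A rough version of that isolation, for the curious (a sketch assuming an X11 desktop; the account name is made up):
# hedged sketch: run the browser under a throwaway account instead of your own
sudo useradd -m browseruser
xhost +SI:localuser:browseruser      # let that local user draw on your X display
sudo -u browseruser -H env DISPLAY="$DISPLAY" firefox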

unix braindamage

lmao this guy has multiple user accounts so they can LARP as a sysadmin

based

I swear to god this board is just a bunch of trolls, people with absolutely no tech experience, and larpers at this point. I think the depth of experience of the average *nix user ITT is probably installing Ubuntu. You're all bikeshedding about topics you obviously know nothing about. It's embarrassing, Zig Forums.

Fonts are in C:\Windows\Fonts

I love windows, it's the operating system for me