Do you think it’s possible for a piece of software like Linux to ever reach a final version, never need an update again, and leave everyone satisfied with it? Or is kernel/OS development an endless game of cat and mouse, with new security holes and bugs introduced as more and more libraries are added and more and more hardware gets supported, ad infinitum?
Linux Kernel ‘Version Final’
"If the son of Adam were to possess two valleys of riches. he would long for the third one." [Muslim 1048]
In conclusion: No. It will simply die or meld into something else at some point. Humanity cannot help but keep progressing, whatever the cost; such is our affliction.
You could’ve told me that without shoehorning your Sandnigger trash at me
You answered your own question. Even if you end up with some magical, 100% bug and exploit-free kernel, there will always be new hardware to support.
That’s really sad and desperate
When will we stop developing new hardware?
A program in general can certainly reach a final stage, even though this would be difficult or impossible in something like C. Linux can't be finished because of the driver question, but maybe this would be different if drivers weren't such an anything-goes land and there were some sane standards (nobody needs another ACPI).
The solution is obviously microkernels: separate the kernel, drivers, and firmware so each component is as isolated from the others as possible
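A toy sketch of what that separation looks like, with the message format and names made up purely for illustration: the "driver" lives in its own process and the "kernel" only talks to it over a message channel, so a driver crash can't scribble over kernel memory.

#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

struct msg { char op[8]; int value; };    /* made-up message format */

int main(void) {
    int ch[2];
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, ch) < 0) return 1;

    if (fork() == 0) {                    /* "driver": its own process */
        close(ch[0]);
        struct msg m;
        while (read(ch[1], &m, sizeof m) == (ssize_t)sizeof m) {
            m.value *= 2;                 /* pretend device work */
            write(ch[1], &m, sizeof m);
        }
        _exit(0);
    }

    close(ch[1]);                         /* "kernel" side */
    struct msg m = { "read", 21 };
    write(ch[0], &m, sizeof m);
    read(ch[0], &m, sizeof m);
    printf("driver replied: %d\n", m.value);
    close(ch[0]);                         /* driver sees EOF and exits */
    wait(NULL);
    return 0;
}

Obviously a real microkernel does this with proper capabilities and registered services, not a socketpair, but the isolation idea is the same.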
Also, what if we set final versions for hardware? Then the issue would just be ironing out security bugs. Unfortunately security bugs exist aplenty in hardware nowadays since we "softwarized" hardware development
Not until industrial civilization dies out.
I posted it because the fact you asked the question shows that you don't understand humanity. Would you be this mad had I quoted the Unabomber?
No I wouldn't because the Unabomber wasn't a desert rat camel jockey
What if the hardware we get stuck with has security bugs a la Meltdown and Spectre? Are we just fucked forever then?
Ah but once you've got the version compatible with your hardware you'd never need to update again.
If you like alcohol (al-kuḥl) and use algebra (al-jabr) you are indebted to Muslims.
lmao go ask the Persians
The descendants of the Persians are now all over the world. Your Muslim neighbor has ancestors who invented Algebra. Your Muslim dentist has ancestors who invented your Arabic numerals. Everywhere you look, a touch of Islamic ingenuity.
The retard keeps on going
Alcohol is haram and predates the invention of Islam
what progress do you speak about nigger?
we have degradation in every area of life:
where is the progress you speak about?
Indebted? Why don't we have a parade for the first fucker to suck on a goat tit, while we're at it?
Birthrates lmao.
Giving something its common name doesn't mean you invented it, faggot. Both alcohol and algebra predate Islam by thousands of years. If I call you a goathumper and everyone follows suit, does that mean I created you?
Those are nosediving in the Arab world too. The only place on Earth not projected to have decreasing birth rates is sub-Saharan Africa
Fantastic bait. Really just excellent bait, this thread will probably never recover. Here, have a free (You).
The Linux kernel, and web browsers, will never have a final version, because then all the programmers whose livelihoods depend on them would be out of a job. So they write shit code and add bugs, or new features like renaming all the functions for no technical reason beyond creating unnecessary work, just to keep their jobs. Progress on improving the Linux kernel's base stopped at around version 4.18. For examples, see the kernel commits post-4.19, try implementing new features, or look at out-of-tree forks doing so on top of the kernel. Now it's very little bug fixing, drivers getting added, and mostly make-work. Most software is like this when corporations get their hands on it. Take Chromium and Firefox and their forks: they keep adding shit features like renaming everything or always-on JavaScript, because if they fixed all the bugs and stabilized, secured, and made performant a select set of features like, I dunno, PARSING HTML, THE WHOLE FUCKING POINT OF A BROWSER, they would be out of a job. Lynx and other text browsers parse HTML better than Chromium and Firefox, as an example of that.
Also remember that, whether we want to admit it or not, a lot of software is developed by for-profit companies, and most commercial software development is marketing-driven (OUR WEB BROWSER CAN RENDER SHIT THE COMPETITION CAN’T!). This puts a burden on FOSS developers who just want shit to perform basic functions but are also under pressure to actually gain adoption among normalfags outside the FOSS community
Only if it had been written by lord terry.
If you feel that later versions of kernels and web browsers are so meaningless, why don't you just keep the last version you consider meaningful? Never upgrade from that version, and hopefully you will have zero problems in the future.
That's the problem. Even if you just want security updates, that's not enough, because either superficial shit piggybacks on security updates, or vice versa: security updates are bundled into superficial shit like new web browser versions. So he just can't.
Security updates and hardware support. For browsers it's feasible to backport security updates, as forks of Firefox do all the time. For the Linux kernel, not so much; just look at grsec for a great example. The dev of grsec is a crazy dedicated autist who undid decades of damage and still put out occasional updates to fix the shitshow of the kernel for newer hardware supported by newer versions. Grsec is long gone now, though, since the lead dev is dead.
Web browsers are much easier to backport updates for than kernels. The problem you described, of small updates hiding security fixes, in my experience seems to only apply to the kernel. Most browser bugs are caused by newer features that you just don't need to browse the internet while avoiding the botnet. If you want botnet like goybook or something like that, then you are in for heaps and heaps of shit from modern JavaScript malware.
No, because people are starting to admit that Kaczynski is a bright guy who got some things very right. Muhammad got nothing right, he was just some merchant who got religious hallucinations out of epileptic seizures.
kernel development can only ever stop if either an eternal universal standard hardware interface is developed (unlikely, but a man can hope) or hardware development stops.
As far as general-purpose machines go, we already have UEFI. For OEM devices that have their own proprietary interface it doesn’t matter, because in that case the burden of adapting the software to that interface falls on them alone, and it is not expected to be developed further
wew, the moderation in here truly respects freedums :)
Every new OS now begins in a VM; most "machines" of any sort now run in a VM. VM performance is therefore what matters. It's only a matter of time before some crazy bastard makes a bare-metal machine based on the virtio devices. And if some of it is really a shim in ring -1, who would know, and would it really matter?
Bang, universal hardware standard.
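For what it's worth, "virtio as the standard interface" is already easy to observe from inside a guest. A rough Linux-only sketch (paths assume sysfs; 0x1af4 is the PCI vendor ID virtio devices use):

#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *base = "/sys/bus/pci/devices";
    DIR *d = opendir(base);
    if (!d) { perror(base); return 1; }

    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        if (e->d_name[0] == '.') continue;
        char path[512], vendor[16] = "";
        snprintf(path, sizeof path, "%s/%s/vendor", base, e->d_name);
        FILE *f = fopen(path, "r");
        if (!f) continue;
        if (fgets(vendor, sizeof vendor, f) && strncmp(vendor, "0x1af4", 6) == 0)
            printf("virtio device: %s\n", e->d_name);   /* e.g. 0000:00:03.0 */
        fclose(f);
    }
    closedir(d);
    return 0;
}

Run it inside any KVM/QEMU guest using virtio disk or network devices and it lists them; on bare metal it prints nothing, which is the whole point of the thought experiment above.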
IIR he’s literally one faggot that sometimes posts mud propaganda on b2 and stands out like a sore thumb
Aren’t you literally just describing hypervisors?
These attempts to turn Zig Forums against Zig Forums without lulz are remedial
I've been here since 2014 lmao
I've never browsed /b2/, you do realise there's an entire board for Muslims on here right.
what happens if we hit a limit with what is possible in tech and the laws of the universe?
then would hurd finally be finished?
It's not even in the top 50, meaning there are fewer than 50 ISPs lurking it. Stop lying and stop kidding yourself.
That would mean it's perfect, and that's never going to happen when it's so big. There will always be bugs in it, and they will keep adding support for new hardware. A "final version" means that it's dead.
I don't really update kernels that often, only if there are serious vulnerabilities or bugs that affect the things I'm using. Most of the changes in the kernel are to things I don't even use.
At this point, it's hard to care when Linux is still not 100% backwards compatible with Unix.
You'd think after more than 30 years, they'd manage to reverse engineer it all the way legally!
I'd probably be surprised how they're still profiting from Unix when almost nobody uses it today except a few Java shills, AT&T buffoons and a good chunk of the HP-Compaq cocks (Solaris, HP-UX and AIX).
MacOS doesn't count since they're based off a modified BSD that only got Unix-certified because they made it proprietary later (look up OpenDarwin).
Even every Supercomputer in the world stopped using Unix in favour of something they don't have to pay for tens of thousands a year.
Can anybody fill me in on why an OS with such great potential still kicks back on anything that's not Android (besides the people)?
What did he mean by this?
sauce on the second image?
The OS the 500 fastest Supercomputers in the world use by year.
i said the sauce, not the filename
1. It's spelled "the source", I assumed you meant "give me the toppings/description".
2. en.wikipedia.org
newfag much
Linux will never reach a final version because computing will never reach a final version.
Computing is a dark art that feeds off human knowledge. As we change, so will computers.
These do exist, but you need a simple interface. An operating system can have many times more features than UNIX while having a smaller kernel simply because less occurs in kernel mode. UNIX weenies have trouble understanding modular design. This seems to have come from the origins of UNIX. Originally, UNIX had no DLLs, shared memory, or threads and ran one executable in one process in one address space. The only way to communicate with another process was by pipes or temporary files. The only way to share code between processes was by putting it in the kernel. Other operating systems have dynamic libraries where code could be shared outside the kernel.
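For anyone who hasn't seen it, the "sharing code outside the kernel" part is just the dynamic loader. A minimal sketch, assuming a glibc system where libm.so.6 exists (build with cc demo.c -ldl):

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* Load the math library at run time and borrow its cos() --
       shared code reached without putting anything in the kernel. */
    void *h = dlopen("libm.so.6", RTLD_NOW);
    if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }

    double (*cosine)(double) = (double (*)(double))dlsym(h, "cos");
    if (!cosine) { fprintf(stderr, "%s\n", dlerror()); dlclose(h); return 1; }

    printf("cos(0) = %f\n", cosine(0.0));
    dlclose(h);
    return 0;
}

The same mechanism is what lets code be shared between processes today without shoving it into kernel mode, which is the contrast being drawn with early UNIX.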
Those graphs also show that Linux is getting worse and worse. If you had told computer scientists in the 70s that a kernel would cost 2.2 billion euros, have more lines of code than bytes of disk space on real machines, and need 15,600 developers, but still couldn't do what Multics did in the 60s, they would be less likely to believe you than if you told them you were from the future.
Microkernels are a solution and probably the only way a kernel can have a final version. They have small enough interfaces and separate all code related to devices into other processes, so when all the bugs are gone and it's optimized, there is nothing left to do.
The business model is based on spreading Linux and making it hard to move away from Linux, in order to get more developers and more money focused on Linux and away from real solutions to problems. That's why they don't want you to use a different free operating system. That's why the OOM killer is still there. C sucks and allows them to create "new" software that is just bug fixes without having to add or fix actual features.
Why does proprietary software have "Live Free or Die" as a slogan? Revisionist weenies are trying to claim Stallman's work as the "freedom" of UNIX, which is like Microsoft saying WINE and ReactOS are proof that Microsoft has always been a supporter of free software. Stallman chose to clone UNIX because it was popular, not because it was good or because he liked it.
en.wikipedia.org
Expect "delegations" showing up at your company if you don't pay those licensing fees.
The solution is to design the OS for the hardware and its uses and not to have all this hacky bullshit like building I/O on top of a system call originally meant for controlling typewriters on a PDP-11. Phones don't need UNIX file systems, commands, and shells. They should be more like a modern version of old Mac OS where those program icons really are files instead of faking it. Many things need no OS at all. Routers don't need to run a UNIX clone, just simple software. Game consoles can use simple menus and maybe a few system calls or BIOS, not a huge OS. This makes even less sense when the companies have control over the hardware and don't have to support all the possible kinds of PCs. Maybe their "programmers" just don't realize code can run without an OS? That would explain a lot.
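On the "code can run without an OS" point, here is a freestanding sketch of what that looks like. UART_BASE is a placeholder rather than any particular board's register map, and real hardware would still need a linker script and startup code.

/* Freestanding C: no OS, no libc. Compile with -ffreestanding -nostdlib. */
#define UART_BASE 0x09000000UL            /* placeholder MMIO address */

static volatile unsigned int *const uart_data =
    (volatile unsigned int *)UART_BASE;

static void putstr(const char *s) {
    while (*s)
        *uart_data = (unsigned int)*s++;  /* write each byte to the UART */
}

void _start(void) {                       /* entry point; nothing calls main() */
    putstr("hello from bare metal\r\n");
    for (;;)
        ;                                 /* nowhere to return to */
}

That is roughly the scale of software a router or a game console menu actually needs, which is the argument being made above.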
Shite painted pink is just pink shite. Why didn't FSF try to invent a free operating system that was good? Answer: their political agenda is 10^6 times more important to them than good engineering.
I doubt that, given that RMS had no particular unix experience when he started FSF (which may explain a lot). Given a background of ITS and Lisp Machines, I suspect unix was chosen as the OS of choice because: 1) Useful tools could be developed and used before the OS was ready. 2) There was sufficient public domain and portable pieces of unix to serve as a place to start. 3) Replacing unix would be "better for the world" than replacing ITS. RMS spent a lot of time before FSF trying to do basically similar things with lisp machines. No one noticed, no one cared, and there aren't any more lisp machines anyway...
Real halal hours in here today
Your Arab world is fucked sideways thanks to Islam and the cia, Islam is nothing but worthless cancer to the middle east.
Both Stallman and Linus have expressed distaste for various aspects of Unix and POSIX standards. While things like "posix me harder" exist, there's likely no effort to extend compliance because there's no practical reason to do so. If they're not compliant, it's intentional.
Furthermore, this isn't something anyone should strive for anyway. Ironically, reading the design docs for Plan9 (written after decades of real-world experience by the Unix authors themselves) highlights all the shortcomings of Unix better than any of its detractors ever could. And even today, those same people are talking about the shortcomings of Plan9, asking people to put effort into OS research again. Because of course operating systems designed in the 60s and 90s are not going to be considered perfect today, after all the changes in how we interact with various technologies.
Microkernel devs do appear to be trying today. Some of that stuff looks promising and sane because it builds off of real-world experience over multiple decades in the same way. They have the ability to build a project out of frustration rather than speculation, to do what they want to do directly at the system API level rather than extending or hacking existing things to function in ways they were never intended to. (Re-)implementing Unix and retrofitting things onto it will never be a worthwhile effort as far as I can tell. Emulating electronic typewriters is not sane in 20XX, no matter how much you want to say it's diverged.
Not sure what site you think you're on.
Linux does everything Multics did in the 60s. Prove me wrong.
protip: the vast majority of Linux is hardware drivers
You can have true UNIX compatibility without tying your entire OS to UNIX paradigms. That’s pretty much what Apple did thanks to using a microkernel. And modern Darwin systems are superficially derived from BSD at best, since Apple rewrote a lot of shit after acquiring NeXT. Under the surface it resembles UNIX, and it got officially UNIX-certified, but Apple was able to do that without really being UNIX. That’s the beauty of microkernels: if Apple really wanted to, they could remove the UNIX core shit and just stick with their proprietary APIs and libraries
Literally no one cares about muh Unix compatibility except Eunuchs weenies.
This is correct. Many people still fail to realize that Stallman made GNU similar to Unix from a functional point of view simply because Unix was popular at the time, and thus it would have made the switch easier. He himself said it. He also never used Unix, "not even for a minute", before starting GNU. This choice was never for technical reasons, as the real goal was, and has always been, spreading free software.
It's not like the world is in a race to clone Unix, like some people here believe.
Alright, I'm willing to bite: what are some comprehensive, reliable sources on Multics, especially some that highlight objective advantages it had over Unix?
Plan9 is crap. If it wasn't for the names associated with it, it would be considered on the same level as some hobby UNIX clone. It's even more disappointing considering it was made after all that money AT&T got from licensing fees, so they should have been able to hire real OS developers, but no, it's the same old crap with some badly implemented features removed instead of redone properly.
en.wikipedia.org
en.wikipedia.org
If OSI is from the 70s and distributed microkernels were developed in the 80s, why do all these weenies point to Plan 9, which does it worse than any real distributed OS? It's the same reason weenies always bring up C++ and Java when complaining about OOP, and not CLOS.
Read this. It has no segmentation or rings, other than the bare minimum needed to work around them in x86, while Multics is based entirely on segments and rings.
multicians.org
That just means Linux sucks. Mainframes in the 60s already had drivers distributed separately from the OS, so it doesn't even need a microkernel. Microkernels are just a way to provide additional protection and separation.
I posted a link to an OS written in Modula-2, which was tiny. It might have even fit on a floppy. They ended up "converting" C programs to run on it, which made it into UNIX with a tiny Modula-2 subsystem. This is exactly what happened with microkernels.
UNIX weenies care more about UNIX than free software. They would rather pay licensing fees to use C and UNIX than get something better for free. That's why Stallman cloned UNIX, and it sucks. Ironically, if he cloned something good, none of the UNIX weenie "programmers" would want to use it, which means the quality of free software would be better because it would be made by competent people.
Read some links on this site. One of the big advantages is segmented virtual memory. In other words, all files are mapped into the address space and can be accessed directly with CPU instructions.
multicians.org
multicians.org
opensource.com
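The closest a UNIX system gets to the "files are part of the address space" model described above is mmap(2), and even that is opt-in per file rather than how everything works. A minimal sketch:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv) {
    if (argc < 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;

    /* Map the file, then touch it with ordinary loads -- no read() calls. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    fwrite(p, 1, st.st_size, stdout);     /* the "file" is just memory now */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}

In Multics every file was a segment and looked like this all the time, with the hardware rings doing the access control; on UNIX you have to ask for it file by file.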
This is a great example of UNIX brain damage. A real OS wouldn't use temporary files for this at all. It would just call functions in libraries, which is much simpler and needs no system calls, task switches, or separate address spaces at all. UNIX weenies don't want simplicity, they want UNIX. Even worse, pipes can only send sequences of bytes, so you have to serialize and deserialize data in both directions, which you wouldn't have to do with real IPC. It all makes sense when you realize pipes were a hack over teletypes, the 70s equivalent of controlling a program by scripting keypresses and mouse movements.
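A toy illustration of the serialization point (nobody's real protocol, just for contrast): moving a struct through a pipe means flattening it to bytes and rebuilding it on the other side, while a plain library call just passes a pointer.

#include <stdio.h>
#include <unistd.h>

struct request { int op; int arg; };

static int handle(const struct request *r) {   /* the "library call" path */
    return r->op + r->arg;
}

int main(void) {
    int fds[2];
    if (pipe(fds) < 0) return 1;

    struct request req = { 2, 40 };

    /* Pipe path: serialize to a byte stream, then deserialize.
       (Copying the raw struct only works here because both ends are the
       same program on the same machine; real pipe protocols need a real
       wire format, which is exactly the extra work being complained about.) */
    write(fds[1], &req, sizeof req);
    struct request copy;
    read(fds[0], &copy, sizeof copy);
    printf("via pipe: %d\n", handle(&copy));

    /* Direct call: no byte stream in the way at all. */
    printf("direct call: %d\n", handle(&req));
    return 0;
}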
Raise your hand if you remember when file systems had version numbers. Don't. The paranoiac weenies in charge of Unix proselytizing will shoot you dead. They don't like people who know the truth. Heck, I remember when the filesystem was mapped into the address space! I even re
Of course, you've made a fool of yourself every time you tried shitting on it in detail so you dug up two Wikipedia links and called it a day.
...
bump