Making graphical software more unixlike

Why is the Unix philosophy of small, general-purpose tools you creatively string together with pipes and shellshit completely abandoned whenever people design graphical software and toolkits? It's like we forget everything that makes Unix good the moment we look beyond the shell and just copy whatever Windows and the other graphical OSes are doing.
One way of fixing this is abandoning the one window == one program idea and giving graphical programs some STDIN/STDOUT + pipes equivalent. We could break down software suites into smaller, independent programs and let users create their own workflows with these tools by dragging multiple programs into a window-like container, then connecting their STDIN/STDOUT equivalents with whatever the graphical version of pipes is. As an example, an image editing suite could be a collection of smaller programs sending their output to an image viewing program which takes input, sends its output to programs which request it (for example, filtering software), and maybe handles stuff like undoing and redoing.
An advantage of this approach is that we could solve many software suites' architectural, scalability, and responsiveness problems by breaking them down into smaller components and handing them off to the OS' process/threading model. It's also much closer to both the terminal workflow and most real-world workflows, where people prefer using small, general-purpose tools together in creative ways over monolithic and complicated single-purpose machines. The downside is that you'd need a really good process model, along with a new GUI toolkit and maybe a new display server. There are already suites like Blender which try creating what I've described within the confines of their program, but they all implement this differently and this experience never carries over to the rest of the OS.
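To make this a bit more concrete, here's a rough sketch of the idea using tools that exist today, faking the "graphical pipe" with a named pipe and assuming NetPBM's pnm as the interchange format (pnmsmooth and ImageMagick's display are just stand-ins for the filter and viewer components):

[code]
# fake "graphical pipe": a filter process and a viewer process, connected
# by a named pipe instead of a wire you drag between windows
mkfifo /tmp/canvas

# filter component: smooth the image and write the result into the pipe
pnmsmooth input.pnm > /tmp/canvas &

# viewer component: read the stream from the pipe and display it
display - < /tmp/canvas
[/code]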

So what do you fags think?

Attached: libbie_otherside_cropped_rotated.png (331x331, 45.57K)

Isn't Imagemagick a front-end for a variety of image manipulation programs?

unix philosophy sux

...

Lisp machine academics please respond.

...

Because that is an inherently inefficient and overly simplistic philosophy.
You're asking the user to code part of the program for you, and you aren't making things that much more flexible, because programs usually handle very specific kinds of data that don't really mean anything when interpreted as text.
Your very example of the image editor shows it: you don't stop to consider what the output of an image viewer should actually be. Should it output a flattened render of the image, an object containing the plain image and the layers separately, or both?
What if a tool only needs to operate on certain layers: are you going to spam STDIN with a ton of mostly useless data, or have one in/out channel for every layer?
Also, the casual way you write "maybe handles stuff like undoing and redoing" is telling: undoing and redoing might rely on state contained in programs other than the viewer, so the viewer alone cannot possibly handle them.
And no, using pipes and shellshit to sync and rollback state across multiple programs is not saner than putting all relevant parts in a single program.
Most of those issues come from the program logic, not from shoddy implementation: you won't magically multithread things well by splitting every program into a stupid amount of coroutines.
This fantasy is literally the opposite of the entire history of technology.
Even right now, we use massive, complicated, mostly monolithic CPUs instead of ASICs because of that: single-purpose is much cheaper and faster, and we are clever enough to use it creatively anyways.
On the other hand, "small and modular" hides massive costs in the form of the time, expertise, and effort required to get all those parts to work together without issues.

CPUs are ASICs

Meant FPGAs, always mix up the two for some reason even though Field Programmable Gate Arrays should be a pretty good clue.

And the "just stuff everything into one program" mindset of today's monolithic software is much better and totally doesn't lead to severe performance and architectural issues. It totally doesn't hurt the system's consistency or user experience either.
Not in that brief sentence, no.
Layer selection can be handled within the central image viewer; the external program is fed, and spits its output back to, whatever layers you've selected, assuming said program even needs to read the layer at all: something like a brush engine doesn't unless it's doing shit like erasing or blurring. What if the user selects a different layer while a filter is processing? Add a layer ID system.
I said maybe for a reason, faggot. There's multiple ways of handling it and I haven't settled on one yet.
I wasn't talking about multithreading alone (of course you'd assume that), but threading smaller programs (if they even need it) is generally easier than threading larger ones. Dividing these into separate programs could also minimise the time and data lost if one program locks up or crashes.
Outside computer hardware design and the past couple decades of anti-consumer bullshit, mostly false.
If you literally only need a single thing done quickly and efficiently, an FPGA is often the better option.
On the other hand, monolithic "do everything" software becomes an absolute nightmare to debug and fix the moment something goes wrong. Sure, designing smaller, interoperable programs requires more thought than stuffing everything into a monolithic blob but they're also usually easier to comprehend and troubleshoot in the end.
This also applies to appliances and machinery.

tl;dr
I'm aware this isn't fully fleshed-out, it's just an example of some stuff I'm thinking about. That's why I asked what you fags think.

Apple had something like this: You could edit a video in iMovie, select a portion of a clip and drag&drop it into Keynote to make it part of your slides.

This only worked because Apple made Quicktime a de-facto standard on their OS. On the terminal you have text as your standard format, but what is the standard format when dealing with binary data like images or video clips? Also, the shell defines only two contact points: stdin and stdout, so daisy-chaining programs is straight-forward, but how do you daisy-chain a video editor and a slideshow program?

en.wikipedia.org/wiki/OpenDoc
OpenDoc was pretty much exactly what you're talking about. Instead of being application-centric, it was document-centric, with each element in a document being editable by a "tool" that was actually an entire program, with OpenDoc itself serving as a transclusion mechanism to facilitate that in realtime.

Predictably, its main attraction for users was also its main flaw for developers.

By breaking monolithic software down into its individual features, there's no need for every competing package to reimplement every single "checklist feature", which lowers entry barriers and lets new entrants write only the minimum set of features needed for an innovative product.

For developers of dominant packages (MS Office, Adobe Photoshop, QuarkXpress, Macromedia Freehand, etc.), breaking apart their bloatware into OpenDoc components would have been not only laborious, but moreover suicidal.


Leading up to OpenDoc, Apple had a variety of standards forcing interoperability:

Attached: $_35.JPG (300x225, 12.62K)

Whatever the application supports, I guess.

That's another issue. Even if you managed to overcome OpenDoc's bloat problems, many developers of big proprietary software suites would piss themselves and you'd have to rely on curious freetards to implement components.
Also,
What the fuck were these fags smoking?

This never happened because the idea of pipes for graphical processes is inherently difficult to handle.
How would we even pipe between windows?
How would we handle images? How would we even signify a link between windows?
Would that be cross-wm compatible?
How do we know where the output goes?
Would all these pipes also be graphically represented?
Would they obscure your view?
What do you do about processes trying to output to the same window?
What if two processes use two different toolkits?
Can we use this with terminal pipes?
How is all this data represented?
It's just too complex to make work.

You mean like GIMP does?

(from The Art of UNIX Programming)

Didn't the GIMP also integrate parts of Mypaintg recently? Might be why I've been seeing drawfags using it more recently.

Attached: DoZHpHhUcAAtI6L.jpg:large.jpeg (1920x1038 68.56 KB, 110.46K)

MyPaint*

1. Imagemagick
2. Gimp already has a scripting mode where you can pipe together graphics operations making use of the Guile extension system.
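For instance, a minimal sketch of driving that batch mode from a shell, assuming a stock gimp install (the script just prints a message and quits):

[code]
# run gimp headless and feed it batch commands from the command line
gimp -i -b '(gimp-message "hello from batch mode")' -b '(gimp-quit 0)'
[/code]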

Gimp integrated the Mypaint library. You can paint in Gimp using the same Mypaint algorithm as Mypaint itself.

So what's the point of MyPaint now?

Attached: qwe_download.png (385x367, 66.99K)

If your sole purpose is to paint digitally, Mypaint can help with that. It will have a lower memory footprint and faster startup speed because it doesn't offer all the features that Gimp provides.

to not be as bloated as gimp, i mean gimp for a long time loaded every god damn font before starting up

Start by implementing the display server in the kernel.

For this kind of thing to happen, a standard format should be decided upon. UNIX and POSIX had the good idea (small tools that you can compose endlessly) but a terrible implementation, because they didn't specify any interchange format. So, when you create POSIX-2 or whatever you call it, don't forget this part:
* TSV for tables (forbid \t and \n in fields; in fact, only allow [:graph:] plus the space character)
* Don't care about trees and graphs, they're probably not needed
* NetPBM for images
* What about vector images? svg is shit, we need something simpler
* Something to replace WAV and all its extensions that fixes its 4GB limit and all its bloat; the NetPBM of audio (the pam part, though)
Anything else?
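To show what that buys you, here's a pipeline where every stage speaks pnm on stdin/stdout (tool names from the stock netpbm package):

[code]
# pnm as the lingua franca: decode, filter, re-encode, all as separate tools
jpegtopnm photo.jpg | pnmscale 0.5 | pnmflip -rotate90 | pnmtopng > out.png
[/code]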

But now you run into the same problem as that quote below: You can pipe data between Microsoft Word and Excel, but no other application. Text does not have the problem of obscurity, and if you have incompatibilities you can simply insert some short AWK script or something in-between the two programs to format the output of one into suitable input for the other.
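Something like this, say, where producer and consumer are placeholders and the shim turns comma-separated output into the tab-separated input the next tool expects:

[code]
# one-line awk shim between two otherwise incompatible tools
producer | awk -F, 'BEGIN { OFS = "\t" } { $1 = $1; print }' | consumer
[/code]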


I thought Gimp uses its own Scheme-like scripting system called Script-Fu or Python, not Guile?


How hard would it be to replace the Gimp UI? I feel like the thing holding Gimp back the most is its interface.

LOL

It's a set of programs

Sorry I got it wrong. Gimp uses the TinyScheme library to implement the ScriptFu system. It was GNU Emacs that was updated to rely on Guile rather than its own Lisp interpreter.

That was fixed. Try a new Appimage or Flatpak.

UNIX pipes are virtual PDP-11 tape drives. AT&T shills convinced DOS users that their virtual tape drives are "the only way" to do modularity and some even believe that it brought modularity into programming, which are bullshit and flat out lies. Modularity is improved by not using pipes, which suck.

Before UNIX existed, Multics used dynamic linking to combine different segments into one process, without pretending your RAM is actually a tape drive that can only read and write one byte at a time. Segments can share code and data between programs so you can use CPU instructions and addressing instead of reading and writing virtual tapes. UNIX weenies shit on dynamic linking because their implementation sucks and because they don't want to admit Multics did it right and the UNIX way is wrong.

multicians.org/multics-vm.html


That's the anti-UNIX way and has more in common with Multics. If it was the UNIX way, all of those libraries would be separate processes in separate address spaces that pretend they're running on PDP-11s with virtual tape drives.

If there's one thing which truly pisses me off, it is the attempt to pretend that there is anything vaguely "academic" about this stuff. I mean, can you think of anything closer to hell on earth than a "conference" full of unix geeks presenting their oh-so-rigourous "papers" on, say, "SMURFY: An automatic cron-driven fsck-daemon"?

I don't see how being "professional" can help anything; anybody with a vaguely professional (ie non-twinkie-addled) attitude to producing robust software knows the emperor has no clothes. The problem is a generation of swine -- both programmers and marketeers -- whose comparative view of unix comes from the vale of MS-DOS and who are particularly susceptible to the superficial dogma of the unix cult. (They actually rather remind me of typical hyper-reactionary Soviet emigres.)

These people are seemingly -incapable- of even believing that not only is better possible, but that better could have once existed in the world before being driven out by worse. Well, perhaps they acknowledge that there might be room for some incidental clean-ups, but nothing that the boys at Bell Labs or Sun aren't about to deal with using C++ or Plan-9, or, alternately, that the sacred Founding Fathers hadn't expressed more perfectly in the original V7 writ (if only we paid more heed to the true, original strains of the unix creed!)

In particular, I would like to see such an article separate, as much as possible, the fundamental design flaws of Unix from the more incidental implementation bugs.

My perspective on this matter, and my "reading" of the material which is the subject of this list, is that the two are inseparable. The "fundamental design flaw" of unix is an -attitude-, an attitude that says that 70% is good enough, that robustness is no virtue, that millions of users and programmers should be hostage to the convenience or laziness of a cadre of "systems programmers", that one's time should be valued at nothing and that one's knowledge should be regarded as provisional at best and expendable at a moment's notice.

Yes.
It's not perfect and there is room for improvement (the widespread development of public-facing APIs being a notable one in recent years), but it works, and products made with that model are the ones that are actually being used.
Again, nothing stops anyone from coding what you described, and if you make a good program out of it you'll be rich/famous/smugposting on a tibetan monkey shaving forum.


Congratulations, you reinvented command line arguments.


I feel like that explains the awful UI.


Is this bait?

then you end up with systemd-gimpctl

So basically mmaping everything is better than pipes somehow?

Unix is garbage, so it's kinda hard to follow its philosophy when you're trying to produce quality. Linux and the BSDs don't fucking work (and they are really just Unix, lets not pretend that there is much of a difference), so there's no incentive to imitate them. I have been trying a couple of different OSs lately like AROS and RISC OS, and they work shockingly well, no issues. It has been way too long since I installed something that actually just worked, while half of the distros out there don't work on my computers and none of the BSDs are compatible. Unix is a waste of effort. The amount of work that it took for it to survive this long was enormous and it's still a broken piece of shit anyway. There has to be something wrong there when OSs made by 6 people work better than something as huge as Unix. Really, the only practical advantage it has over Windows is that you can customize it a lot more, but at the end of the day you are just wasting time polishing a turd. Even pirating Windows 7 may actually be a better choice.

Attached: P1200780.jpg (3921x2619, 3.88M)

Your grievances seem more like you're mostly frustrated with hardware compatibility. What computers do you have?

I am frustrated with how much time and effort I wasted trying to use shit that doesn't work.

I have a lot of them. Some old cheap pre-builts and some that I made (and a Pi, that I got because people finally convinced me). Still, even without compatibility issues, I am still sick of dealing with shitty software. The OSs themselves have a lot of stupid issues that make no sense and the amount of problem solving required to do anything is big enough that I might as well develop my own software (I would be doing that, but the computer that I normally use for that is fucked and my setup is a mess because I have been testing a lot of things). I am really sick of having to read gigantic man pages and search for information on the internet to do fucking everything. Then I decide to check out some alternatives (because the Linus' cuckoldry made me finally do it, though I did want to do it for a while) and realize that I'm just inconveniencing myself for no reason and wasting a lot of time not being productive and not having fun. I kinda miss Windows XP at this point. Well, 95 is still my favorite so maybe I miss that. Maybe I should just install old shit and do my own thing. Fuck this decade.

I hear you. My take on it though is that *nix might not be perfect, but it's the best we have. I mean what would you even do with an old Windows install? I've fantasized about going back to 98 which was on my PC as a kid for years. But as soon as you plug an Ethernet cable into that shit, it's dicks up your ass. Anything XP and earlier is that way. You'd have to give up the internet, unless you're the type of person to have unprotected sex with south american prostitutes. So what would you do? Copy installers to CDs and try to get them to run? Buy old software off ebay and hope it's legit? It all just seems so hopeless.

At least now, I can waste time online, and play around with different programs and games. I can watch too much anime and talk to people on imageboards. I have all the entertainment in the world, constantly at my fingertips. So why does it feel so fucking unsatisfying? Why do we still wish we could go back to a better time?

I guess this turned into something totally different than what you were complaining about. I just get the feeling that everyone's dissatisfied for different reasons and nobody really knows what's wrong, or what needs to be done.

I might be at this point. At least in this situation. Here's a picture of me doing it right now. At least the firewall is on. Wouldn't want to be too unsafe, right?

Anyway, I guess everything is horrible and nothing will ever be good again, and becoming a skeleton in a coffin will be an improvement, so the real solution is to get drunk(er) and watch more ancient horror movies. That's my solution right now, other than installing Windows XP for fun, though maybe OS/2 could be a better choice for this one. Maybe I can do that later, it's not like I have a life.

Attached: untitled.PNG (800x600, 78.3K)

That seems to be more of a product of these OSes trying to run on a multitude of different systems with wildly varying hardware. Yeah, it sometimes sucks trying to hold on to what little shrinking freedom us simple end users have left.

Linux (esp desktop) is mostly just a jumble of software built with wildly varying design philosophies, and a lot of the time they don't want to play nicely together. Linux is a fractured target too, with various distributions, architectures, and sets of installed software. Run all that on closed hardware that was built specifically to run another OS and you'll have headaches. Then there's the "rolling release" shit some distros pull, which creates an even more unstable system.

BSDs are better in many ways in that the OS is designed as a whole system, but then you start adding ancillary software which may or may not mesh together well or even support the OS. It ends up having lots of the same problems as Linux. They're still worse in other areas, like hardware support. Honestly I'd be using a BSD full time if I didn't want to play games.

The only systems that actually work well are those whose software and hardware are designed as an inseparable combo. You see this in places like phones, games consoles, appliances, and Apple products.

As for Windows it's almost the same... Every laptop is made to run Windows. Every piece of desktop hardware is made to run Windows. Every application is made to run on Windows. Every alternative OS is forever playing catch-up in the "just works" department. All this convenience comes at a price I'm not willing to pay, though. I fully understand I have to compensate somewhat by using my own time, but I don't mind it at all. Well, mostly.


Same here. Many times I wish I were ignorant enough to just stupidly use Windows, OSX, etc. completely oblivious to the amount of AIDS laden semen being blown up my ass. I hope I'm not just wearing misshapen nostalgia goggles but I think the future of technology is bleak.

because graphical software isn't shit

Everything was like this in the past, but now only Apple really does it (while everyone else just supports Windows). Their hardware is absolute garbage, though. They are complete shit overall, but the ridiculously expensive inferior hardware already means that they aren't even an option. It's a shame that they suck, because they could be a viable alternative if they didn't, though really, Apple has been going to shit since the beginning. Just compare the Apple II to the Macintosh and the Apple III and you can see the beginning of the Jewish tricks. Honestly, I would rather use pirated Windows on old computers. At least it costs very little or nothing at all.

Well, the current version of Windows. Compatibility with 7 will disappear eventually and I will never use Windows 10 no matter what. Not that I feel the need to buy new hardware anyway. There is nothing that I do that can't easily be done with hardware from 8 to 10 years ago.

I should clarify: system tools, I guess.

Nothing from windmills to cars has become more modular in the last centuries.
Tools have become more capable, which inherently allows for some flexibility via inefficient uses, but the ever increasing complexity of them means it's harder and harder to even perform basic repairs, let alone having well separated parts that can be useful on their own.

I don't think it always is. Think of it like this. When you string together a bunch of commands with pipes to accomplish something, that is the equivalent of the GUI application. Think of the GUI application as being the stringed together commands. The GUI application can be made with a variety of different libraries. Each library does a single thing and does it well (ideally). You are stringing together the functionality of these different libraries in order to solve a problem. Seriously, think about it. What is the difference between me issuing a string of commands using curl, jshon, and ImageMagick commands from the command line vs writing an application which leverages the libcurl, jsoncpp, and Magick++ libraries?

I'm not saying every GUI application follows the Unix philosophy. I just think your idea of the Unix philosophy feels a bit narrow.
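For what it's worth, the command-line half of that comparison might look something like this (the URL and the file_url key are made up for illustration):

[code]
# fetch JSON, pull out an image URL, download it, thumbnail it
curl -s 'https://example.com/api/post.json' \
  | jshon -e file_url -u \
  | xargs curl -s \
  | convert - -resize '128x128' thumb.png
[/code]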

How about using a simple gui to string the parts together and define how they integrate?
It doesn't have to be a shell, but I like OP's idea of modularity and general purpose tools.
It will probably end up looking like a simulink gui

this is quite a retarded way to communicate once you go past trivial stuff

We don't have a decent image editor period, and you're asking to make it even more complicated.

should be regarded as provisional at best and expendable at
a moment's notice
The programming world is going to kill itself with this attitude.

only way to make graphical software more like unix is to split the gui from the actual logic, creating a backend and a frontend.
Then, make the gui completely optional and up to the implementation.

KRITA BTFO

Smalltalk.

There are programs like that.
Like the "non" music DAW, which splits itself up into mixer, tracker, and so forth.

when the fuck do you compose anything in unix? almost never. all you can do is pipe some shit to some other shit and then if that doesn't crash it because it needs to be escaped in 30 different ways, it probably works. UNIX is shit nigger. don't compare UNIX to the pinnacle of composability. as for GUIs, they're all made by niggers, that's the only reason they're shit

Almost always. Let me describe a simple programmer's workflow. You open a text file in your editor, which itself is running in your shell. You make some edits, then close the editor. You open a makefile, and add a few lines. The first takes the text file, and compiles it into an object file. The next takes the object file, and links it with libraries to form a binary. You take the binary and open it in a debugger, which runs your program. In this simple example, you use a terminal, a shell, an editor, a compiler, a linker, and a debugger, for the relatively simple task of writing a program. None of the examples listed involve pipes, but we very easily could add one. The programmer wants to find a file which he knows is somewhere in a directory tree. He uses find to list all files, and grep to filter only the ones he wants.
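For instance (the pattern is just an example):

[code]
# list every file under the tree, keep only the names he's after
find . -type f | grep 'widget'
[/code]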

You might think this all sounds trivial but in fact the majority of modern software completely ignores this principle. Take the modest IDE. Every single one completely reimplements from scratch: text editor, terminal, debugger, file browser, build system, vcs (at least porcelain), and any other tools a programmer might need. This means that all IDEs are bloated pieces of shit, and you need to relearn them from scratch for every new language you program in.

If you use a sane shell (zsh) and sane coreutils (GNU-like), you can have NUL in pipes AND in variables. No more problems, then.
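For example, with GNU tools the whole pipeline can stay NUL-delimited (the pattern is arbitrary), so filenames with embedded newlines survive intact:

[code]
# NUL-delimited all the way through: find -print0, grep -z, xargs -0
find . -type f -print0 | grep -z 'draft' | xargs -0 -r ls -ld --
[/code]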

It looks like the artist used the standard hard/soft round brushes that always came with gimp and didn't use mypaint brushes at all.


i find MyPaint's interface a lot nicer than Gimp's for drawing. The biggest thing mypaint needs to implement is basic selection/transformation tools. Even AzPainter has them.

Basic adjustment filters like HSV wouldn't hurt either.

Modern IDEs and software suites are basically their own operating systems.

That's what I'm interested in: a graphical way to represent inter-process communication and connect programs.

Give us an example of a Unixlike UI.

sxiv and imv, I guess.

lol pretty much exactly


That's a fantasy workflow and it ONLY works when you are doing it nearly every day. Once you stop for a month you'll forget all those retarded gdb commands. You'll have to look up gcc switches all over again, using your own Makefiles. God forbid you lose your text editor rice setup. And you'll realise how much you rely on shell history and autocomplete.
Compare that to any single IDE. The learning curve is non-existent.
So please let's not jerk off over how intuitive and simple one of the most unintuitive collections of software is.

Plan 9's rio.
A program sees three files in /dev: window, for drawing; cons, for reading keyboard input; and mouse, for reading mouse input (it can also be written to set the pointer position).
These files can be multiplexed by the program, which means you can have an instance of rio inside rio.
This allows things like taking a screenshot by just filtering the device file through a PNG converter into a regular file:
[code]#entire screen
topng < /dev/screen > yourfile.png

#current window
topng < /dev/window > yourfile.png

#other window (n is the window's number)
topng < /dev/wsys/n/window > yourfile.png
[/code]

That's why simple stuff is the best. So you can't forget it.

I can attest to it
If you use a tool a couple times, it will take a lot longer than a month to forget how to use it
God forbid. Remember to always keep backups
Not really. Both nice for saving time, but I have everything memorized. It's IDEfags that need autocomplete.
For trivial stuff, sure. Whenever I hit something more complicated I generally end up opening a terminal.
I didn't jerk off over intuitiveness. Still, knowing how everything works underneath makes it a lot easier to fix when it breaks. Having to consult a man page doesn't make something unintuitive.


That sounds really neat actually. What's the best way to try plan9? I used 9front in virtualbox, but I couldn't get past the install.

plumbing rules and custom scripts is the most god tier gui possible.


You really want to RTFM the install dude, it's confusing at first because it handles drives differently to what you expect.

Try 9front in a virtual machine. Regarding visuals, it won't blow your mind, but if you're a programmer, try writing trivial programs and you'll see how Plan9's simplicity shines. For instance, there's no "select" call to see which FID has data available; you simply rfork a new process to do this for you. rfork is really impressive for how much control you get over resource sharing with the parent. Even freebsd implemented it (see rfork(2)).


Yes, the plumber pretty much solves many of the things people are debating here. Need this file opened? Just plumb it. If you have the acme editor open and plumb a file from another terminal, it opens in the already-running instance of acme instead of starting a new one. It's incredible how many text editors today don't do this, starting a new instance without the state of the one already running.
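For anyone who hasn't tried it, sending things through the plumber from a shell window looks like this (the file names are just examples, and I'm going from memory on the stock rules); which program ends up handling each message is decided by the plumbing rules, not by the sender:

[code]
plumb /sys/src/9/port/devmnt.c   # an already-running acme picks it up instead of a new editor starting
plumb picture.png                # routed to an image viewer by the stock image rule
[/code]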

It's just the way processes work in Unix. If you run ed or vi in separate terminals, you have two entirely different copies of those editors, each with their own memory, file descriptors, etc. Emacs did it differently because it was always a memory hog and it wants to be your OS.

The Unix Philosophy is shit and becomes a nightmare with complex programs. No kernel can follow it and X doesn't either.

Gates got it, Torvalds got it, Stallman got it. Everyone got it except the retards at BSD, whatever UNIX OS is still in development (or should I say "on life support"?) and the two guys working on Plan 9. Guess what? The world runs on Windows, Android and GNU's Not Unix. Things could be better and there is a lot of room for improvement, but going back to UNIX is not the answer.

It's been almost 50 years since 1970. Let UNIX die already.

Attached: 1509534278484-co.gif (360x270, 1.06M)

And most realtime and/or important stuff runs on microkernels like QNX, OKL4 or VxWorks. Was there a point besides the usual argumentum ad populum?
The only meaningful thing you implied is that the UNIX philosophy is better suited for user space than kernel space.

Hey, guy that shows up just to spout your opinion without adding anything constructive. Did you forget your gigantic quote wall this time?

Why is it shit? Explain or, at least, point to an article explaining it. The Unix philosophy is basically small, simple programs cooperating to achieve a bigger task. Are you against that?

How so?

The philosophy is about user space programs using facilities from the kernel to cooperate, not about the kernel by itself. By X you mean X11? It was designed for a single purpose at MIT and then its scope went way beyond the original idea when it was adopted by everyone else. Unix should have used the Blit instead of X11.

Gotta need citations there.

Everyone got it except the retards at BSD, whatever UNIX OS is still in development (or should I say "on life support"?) and the two guys working on Plan 9. Guess what?
Appeal to popularity; and the fact that GNU stands for GNU's Not Unix doesn't mean a rejection of Unix, it's just a name.

I agree with this. We should've been using an evolution of Unix like Inferno or even The Octopus from LSUB, but we're stuck with 70's state-of-the-art computing, using abstractions like teletypes, thanks to HP, IBM, DEC and Sun and their Unix wars of the 80's~90's, and now Linux.

For anyone more interested about the Unix philosophy, have a read of this paper.

sad but true

Linus considers the 'Unix way' to be a guideline. Stallman rejects it entirely. Just look at emacs.

The Mac way: even if a program has a ton of windows open, it's still presented in the dock as a single instance of a program.

You weren't asked to rephrase your point, but to provide the source for your claim.
And emacs is designed as many small lisp components working together, passing pointer-like structures around, on top of a small base of C routines kept for speed.

That's ironic, because GNU Emacs is often cited in Unix Haters when detailing the flaws of Unix. It's certainly far more amiable towards Unix environments than it is Windows land.

BTW, check the chapter "How Lists are Implemented" in the book "An Introduction to Programming in Emacs Lisp" available in Emacs itself for source.

Before and after pics from using ImageMagick. git gud
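For the uninitiated, that kind of before/after usually comes from a one-liner along these lines (parameters picked arbitrarily):

[code]
# auto-levels, a saturation bump, and a light sharpen
convert before.jpg -auto-level -modulate 100,120 -unsharp 0x1 after.jpg
[/code]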


thanks for the tip. I ditched Krita for MyPaint, but it crashes too often (less than Krita though). Good to be able to ditch it too and stick with one.


...organized. The only addition the IDE brings is organizing everything into one app, ergo you only find it essential if unorganized. God forbid your riced IDE isn't available to edit your code.

Attached: ClipboardImage.png (1080x1086 2.68 MB, 2.58M)

ImageMagick and ffmpeg are great stuff, but I was talking about making software with graphical user interfaces more unixlike.

What is HURD?


unix-like = command line for me, but ymmv
Basically you would be re-creating command lines with a graphical overlay, otherwise re-inventing the wheel.
However if taking the re-invent approach, it would be (like an omni-wheel to a standard wheel) a pretty futuristic GUI, and long overdue. The data passing between the elements seamlessly is the core element, and where I imagine the major headache would lie.

More like making the OS more consistent. I may try hacking something together using 9P.

The fundamental issue with modularity is that for it to work you need the user to:
1. Know very well the tools at his disposal.
2. Make a good model of the problem in relation to the available tools.
3. Figure out a way to create a solution by combining those tools.

Simulink is in fact aimed at engineers, who are extremely qualified and intelligent workers that do this kind of problem solving for a living: for them, such modularity is perfect.
But the average person is not an engineer, and does not have decades of practice in abstract problem solving: for them, sensible defaults are a must and modularity is not relevant directly, it can be a plus if it allows for extensions/plugins but that's it.

I think having good APIs and a codebase built with plugins in mind is far more important than the extreme modularity suggested in the OP.


No, it means IDEs are easy to use, easily portable across different systems, and an example of good encapsulation, as they only show the user the parts they want to work on.
Bloat is an issue (rewriting everything in JS), but using your own text editor instead of relying on whatever non-standard editor is installed on the system is good practice, not bloat.

That's exactly right. Even the amount of work put into Multics and other mainframe OSes is miniscule compared to Linux, but most of that was R&D work, like inventing new hardware architectures, file storage structures, programming languages, storage devices, and so on. Linux needs 15,600 "programmers" just for a kernel of a clone of a PDP-11 OS.


UNIX weenies mean the ability to replace a UNIX "tool" with a GNU "tool" that behaves identically. What really sucks is that everything is based on text instead of APIs and binary, so you end up never being able to fix text formats because computers depend on text being a specific sequence of bytes instead of being for people to read.


That's the UNIX workflow. The Lisp, Smalltalk, BASIC, and FORTH workflow is using a REPL/prompt to directly alter the program and state of the machine. All of that is combined into one program, the opposite of the UNIX philosophy.


That's as "simple and elegant" as using a PDP-11 tape backup program to copy a directory hierarchy.


That's bullshit. UNIX pipes are much worse for making programs cooperate than dynamic linking and Multics segments. Most uses of pipelines are because the shell sucks at text processing so you need all these other programs that also suck.

Calling UNIX "70's state of the art computing" is like calling shitting in the streets "70's state of the art plumbing." Xerox Alto, VMS, and many other good systems are from the 70s. I never heard of "The Octopus" but it's more UNIX bullshit with the same UNIX problems.

lsub.org/ls/octopus.html
Plan 9 itself shares most of its source code with UNIX.


The problem with GNU Emacs is that it's bloated due to running on top of UNIX. Most of the UNIX Haters were Lispers who like Emacs and used a variant on ITS, Multics, or Lisp machines. They hate how much code GNU Emacs needs compared to the other versions because C and UNIX suck.

With respect to Emacs, may I remind you that the original version ran on ITS on a PDP-10, whose address space was 1 moby, i.e. 256 thousand 36-bit words (that's a little over 1 Mbyte). It had plenty of space to contain many large files, and the actual program was a not-too-large fraction of that space.

There are many reasons why GNU Emacs is as big as it is while its original ITS counterpart was much smaller:

- C is a horrible language in which to implement such things as a Lisp interpreter and an interactive program. In particular any program that wants to be careful not to crash (and dump core) in the presence of errors has to become bloated because it has to check everywhere. A reasonable condition system would reduce the size of the code.

- Unix is a horrible operating system for which to write an Emacs-like editor because it does not provide adequate support for anything except trivial "Hello world" programs. In particular, there is no standard good way (or even any in many variants) to control your virtual memory sharing properties.

- Unix presents such a poor interaction environment to users (the various shells are pitiful) that GNU Emacs has had to import a lot of the functionality that a minimally adequate "shell" would provide. Many programmers at TLA never directly interact with the shell, GNU Emacs IS their shell, because it is the only adequate choice, and isolates them from the various Unix (and even OS) variants.

Don't complain about TLA programs vs. Unix. The typical workstation Unix requires 3 - 6 Mb just for the kernel, and provides less functionality (at the OS level) than the OSs of yesteryear. It is not surprising that programs that ran on adequate amounts of memory under those OSs have to reimplement some of the functionality that Unix has never provided.

Just a fancy way of saying unintegrated with the OS. Coming from windows I can see why you'd think this was a benefit, but on most systems the OS is there to help you.

If you hate Unix so much, stop whining about it like a little bitch and write some fucking code. Prove the Unix weenies wrong with working software inb4 you point to some proprietary OS that we can't test without paying thousands of dollars to some greedy jews.

Attached: 1433860135028.png (500x465, 307.13K)

The Multics guy is back, I'm sure your PDP11 is up and running because it's the only thing that can run Multics.

Forking in Plan 9 is a cheap operation[1a], so instead of implementing more system calls, you just use what the system has.

How is it worse? How do Multics segments do better? Wait, no one uses Multics anymore, never mind. And how can you dynamically link on the fly like you can do with pipes?

Of all these systems, only VMS is alive, as OpenVMS. According to the wikipedia page, it looks good, with stuff like the Common Language Environment, so why aren't you using it? For curiosity's sake, also check how much the Windows NT kernel took from VMS due to MS hiring developers from DEC[2].

I have a more open mind than you. I never assumed Multics was bad before reading about it on multicians.org; indeed it was a nice system, but bloated and trapped in the PDP hardware, unable to adapt, and it died like the dinosaur it was. But I salute it for the file hierarchy and other good ideas.

No. [1b]
Plan 9 borrows many ideas from Unix, while improving them. The code is different.

[1] doc.cat-v.org/plan_9/4th_edition/papers/9
[1a] "A single class of process is a feasible approach in Plan 9 because the kernel has an efficient system call interface and cheap process creation and scheduling."
[1b] "Producing a more efficient way to run the old UNIX warhorses is empty engineering"
[2] archive.is/AMN4f Windows NT and VMS: The Rest of the Story

Sad bait, but he'll probably fall for it.

It wasn't bait. I was wrong. Yep, the GE-645 was the computer designed to run Multics. It just happens that the PDP line was around when Multics was being conceived and was popular at the time. My point stands: Multics was trapped in a specific architecture, incapable of adapting, and died.

Unix-like OSes have ever been the best OSes!

It's great bait because the Multicuck really, really hates the PDP-11 for being the original Unix machine. Multics was designed for specific mainframes and, like most of his favourite OSes, depended on unusual hardware features with tradeoffs. The entire reason Unix exists is because his wunderOS didn't scale down to a cheaper minicomputer, so some ex-Multicians made a "good enough" operating system which just happened to be inherently portable. Most of his complaints are nitpicking about legacy holdovers from the PDP-11 days and trying to blame software bloat and bad 80s corporate software on the Unix philosophy.
Again, the Multicuck's biggest problem is that he sees an operating system as a collection of features rather than a cohesive whole. Every time you ask how he'd fix Unix, he goes full Poettering and suggests slapping on his favourite shit from other operating systems with no regard for how well it would fit the rest of the OS. If he really knew his shit he'd be contributing to an existing non-Unix OS or writing his own, but instead he's bitching about it on the internet in hopes one of us will write his dream cluttered-box-of-features OS for him.

NetBSD works well on the RPi boards. The support for that hardware is in fact much better than for basically all the other ARM boards. See here:
wiki.NetBSD.org/ports/evbarm/raspberry_pi/

Myself and anyone with a LAMP stack would like to have a word with you.

Which is exactly what you want on a large enough project.
You don't want to rely on the OS unless strictly necessary, so you aren't stuck on a suboptimal OS as easily.
Imagine coding something that relies on the X server and then having to redo it all when wayland becomes virtually mandatory.

Lisp machines are a funny metaphor for communism. :^)

A software library is a library of software functions. When you have multiple libraries that are conglomerated to form a big cohesive library spanning a wide range of general software functionality, this situation is often referred to as a software platform. It is very normal for programmers to target specific software platforms rather than to recreate a targeted subset of what a software platform provides. When your software targets the X11 protocol, that is a design choice that the programmer makes. If the team decides for the program to target the Wayland platform, that will take significant effort. However, I would bet that this effort is smaller compared to the effort required to independently implement the tiny subset of needed functions that are offered by the X11 platform or the Wayland platform.

As someone writing software, you are interested in minimizing the amount of work you have to do. That's why you probably prefer writing webapps over real applications. As a user, I prefer software that uses familiar idioms, and integrates with the rest of my OS, so that it minimizes work for me. As such, I prefer software that is a library first and foremost, which works completely independent of the OS, then has a layer of UI code surrounding it to help me get my work done. Besides allowing for the application to be ported between OSes, this allows someone to consume the library directly in their programs. They could even write their own custom ui.

I believe he would recommend that you use some framework like qt or electron rather than target the windowing system directly.

Sure, because most programs don't have the requirements an IDE is expected to offer nowadays.
If my game has a tiny difference in chat text formatting between Linux and Windows, that's unlikely to even be noticed, while an IDE formatting text differently on different systems could be a big source of headaches.
Combine that with the extreme portability demands, and forking your own solutions starts to sound good.


I can't say writing webapps is a pleasant experience, especially since performance requirements bite you in the ass as soon as you try to do anything interesting with them.
Which is general a wonderful idea, it's so good even the most webshit IDEs follow it by having some impressive internal modularity: of course they try to lock down user-facing modularity so they can jew you out of money for dark themes and such.
Also, I would only recommend electron to my worst enemy, and even then only if they really pissed me off, qt is pretty neat tho even if you were developing only for windows.

YOUR JOKE HAS GONE TOO FAR
github.com/lllyasviel/style2paints/blob/master/README.md

that seems pretty cool, and a good example of sensible modularity as it's based off real life workflows.