C Is Not a Low-level Language

queue.acm.org/detail.cfm?id=3212479

Or rather, it's only low-level if you run it on a PDP-11. Modern processors use a massive amount of abstraction to make it fast.

C assumes flat memory and sequential execution. Processors don't have those things, but everyone uses C, so they have to pretend they have them just so you can run C fast. Instruction-level parallelism is hard and prone to bugs, but it's the only kind you can use for C without completely changing the language or the way it's used.

Compilers jump through hoops to make it run fast. It's not just undefined behavior, it's also unclear padding rules that even expert C programmers often don't understand, and pointer provenance rules that mean it doesn't just matter what a pointer contains but also where it was made.
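Here's a tiny sketch of the provenance point, my own example rather than anything from the article:

#include <stdio.h>

int main(void)
{
    int x = 1, y = 2;
    int *p = &x + 1;    /* one-past-the-end of x: a valid pointer value */
    int *q = &y;

    /* Even if p and q happen to compare equal as addresses, p was derived
       from x, so the standard says it cannot be used to reach y, and a
       compiler is allowed to assume the store below never touches y. */
    if ((void *)p == (void *)q) {
        *p = 10;                 /* undefined behavior: where p was made matters, not just its bits */
        printf("y = %d\n", *q);  /* a compiler may well still print 2 here */
    }
    return 0;
}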

Much of the difficulty in parallelism is just about fitting it into C's execution model. Erlang manages to avoid that and makes parallelism a lot easier, but people keep writing C.

Processors could be simpler and faster if they didn't have to layer a fake PDP-11 on top.

Attached: PDP-11.jpg (1536x2048, 351.29K)

Other urls found in this thread:

computing.llnl.gov/tutorials/pthreads/
github.com/xoreaxeaxeax/movfuscator/tree/master/validation/doom
multicians.org/multics-vm.html

oh so erlang is low level. right

try reading the text next time

I wonder what kind of autism it takes to have a hateboner for a programming language.

Mostly incoherent ramblings desu. C has its flaws but they don't affect CPUs as much as you claim.

No
But they don't.

There are no low-level languages on modern processors. You can get low-level on a GPU, maybe.
Erlang on a hypothetical processor designed to run it wouldn't be as much more high-level than the modern C monstrosity as it should be. C is no longer fit to be a low-level language.

What is Assembly then?

All the baggage of a low level language, anchored by the fact that it's actually a high level language.

Attached: jW4dnFtA_400x400.jpg (400x400, 13.61K)

It's a way to get close to a horribly complex abstraction layer. It's still pretending instructions are sequential when they aren't, which is the direct cause of Spectre and other bugs.
There's a lot of abstraction inside the processor.

rust is the only low level language

What would a processor that did not hide non-sequential instruction execution look like?

It might just not do any instruction-level parallelism that requires branch prediction in the first place, and offer better explicit parallelism instead.

thanks Intel shills

Spectre is because of predictive (speculative) code execution, not parallelism

It's instruction-level parallelism.

C is not low level either. The compiler can translate it however it wants; you don't get many guarantees about how it will look in assembly. You can't, for instance, read something at ESP+8 in any useful way (it may just work on your compiler today, but only by coincidence). oh he says pretty much this in the "Optimizing C" section. C is also not high level
not really, it was already obvious that cache is a cause of side channels without knowing much about CPUs in particular

most C programmers think C allows you to program x86 without writing assembly instructions. however this is wrong because C is an abstract language. trying to use it as if it targeted one specific machine breaks stuff; assumptions like that are part of what made moving from x86 to x64 painful

just turn off cache and branch prediction lol

polniggers aren't qualified for this board. these new vulns have nothing to do with Intel, it's caused by superscalar CPUs. if AMD wasn't superscalar you'd get LARPers complaining that AMD is too slow for real life

So what do processors have?

And how could a language be designed to take advantage of what processors do differently?

$0.01

They are manufacturing CPUs with the same architecture, you nigger.
AMD and Intel bugs strongly overlap because they basically make the same CPUs with only slight differences. Otherwise your programs wouldn't run on them without recompiling.

OP you're a faggot and you know it
If it really worked like that your PC could only run one program at a time. Also that doesn't matter and no programming language does it differently because it makes zero sense.
No C is an abstraction of assembly and can be compiled into NATIVE CPU INSTRUCTIONS
No it's not. You ever heard of pthreads? And ALL CPUs execute one thing after another; that's how every calculator on the planet works. Your brain is probably nothing more than a multicore CPU running native instructions

Processors start executing the next instruction before the last one is done. They execute a lot of them at once. To do that they need to predict which instructions are going to need to be executed, but it's impossible to do that reliably.
Processors also use multiple levels of caches, transparently, in a way that heavily affects performance of C programs but isn't visible to them.
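If you want to see the cache part from plain C, here's a rough toy benchmark of my own (nothing from the article; numbers depend entirely on your machine):

#include <stdio.h>
#include <time.h>

#define N 4096

static int a[N][N];

static long sum_rows(void)   /* walks memory in order: cache-friendly */
{
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += a[i][j];
    return s;
}

static long sum_cols(void)   /* same arithmetic, strided access: cache-hostile */
{
    long s = 0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s += a[i][j];
    return s;
}

int main(void)
{
    clock_t t0 = clock();
    long r = sum_rows();
    clock_t t1 = clock();
    long c = sum_cols();
    clock_t t2 = clock();
    /* Nothing in C's abstract machine distinguishes these two loops,
       but the row-major one is usually several times faster. */
    printf("rows=%ld (%.2fs)  cols=%ld (%.2fs)\n",
           r, (double)(t1 - t0) / CLOCKS_PER_SEC,
           c, (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}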


No? I don't even understand what kind of confusion of ideas is going on here.
"Native CPU instructions" aren't very low-level any more, which is why it says that processors use a massive amount of abstraction. I recommend you click the link at the top of the page.
pthreads is unnecessarily hard to work with because it's bolted on top of C's execution model. C (very reasonably) wasn't designed with that in mind, so you need a lot of coordination between threads to make sure you don't fuck it up. Adding parallelism to C programs is nontrivial, and starting threads is fairly expensive.
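To give a sense of the ceremony, here is my own minimal sketch of a shared counter with plain POSIX calls; nothing exotic, and it still needs a lock just to add numbers:

#include <pthread.h>
#include <stdio.h>

static long counter;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);    /* without this, the increments race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("%ld\n", counter);    /* 4000000 only because of the mutex */
    return 0;
}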
Erlang is different. It's worth looking into its design more deeply if you aren't familiar with it, but the most relevant thing here is that everything happens inside cheap processes that share no mutable state. That makes it easy to run a lot of communicating tasks in parallel.

You might have misunderstood. Every core does everything sequentially but the programs don't run at ring 0. And OP claimed that to be an issue
I'm not sure what you're talking about. C is a high level programming language but the amount of abstraction has been proven to be reasonable.
I don't think anything keeps you from starting them at the beginning of the program
Other languages just abstract that as far as I know. So it won't get any better because your code looks prettier.
And no one cares for theoretical models. The only thing which matters is practical capability.

It's an issue because C is not very good at using multiple cores, so processors focus on making single core execution fast, which causes all kinds of problems.
C is a lot higher level than it appears to be because of everything going on in the processor, but it's still being used as a low level language. It's getting some of the worst of both worlds.
Not knowing how many you're going to need in the first place and what data they're going to process is what keeps you from doing that.
If you want good parallelism you need to make it a core part of the way computation is handled. Erlang processes are the natural way to structure Erlang programs, so the parallelism is almost implicit. POSIX threads are absolutely not the natural way to structure C programs.
It would get better if the processor were designed for that execution model. Modern processors are optimized for C and do things that are necessary to make C run fast but would have better alternatives for more realistic execution models.
GPUs are useful because their execution model is so different from CPUs, even though everything you can do on a GPU could be done on a CPU in a more sequential way.
These models represent the ways the systems actually work. Practical capability depends entirely on the way these systems work.

I believe what he meant is that C is meant to work with an OS where memory is treated as one big array. Do you have an argument against this?
I don't know if they do, but x86 is a giant mess. There's major interest in RISC-V for a reason; corporations (including Google, Samsung, Western Digital, NVIDIA) aren't just burning money for fun.


It can be compiled into machine code like many other languages. What you said didn't refute what he said.
You're saying that POSIX threads, the thread execution model made for Unix-likes and Unix, has nothing to do with C? Come on, you can't possibly believe that.
Here's how they are used in practice, if you don't believe me: "Pthreads are defined as a set of C language programming types and procedure calls, implemented with a pthread.h header/include file and a thread library - though this library may be part of another library, such as libc, in some implementations." See: computing.llnl.gov/tutorials/pthreads/

I think they're saying that pthreads is a way to use parallelism in C and therefore parallelism in C's execution model is a solved problem and as easy as it could be.

No shit

Read the thing

shut the fuck up faggot. you don't have the slightest idea what you're talking about. side channel attacks on cache/bp are on all mainstream CPUs. this has nothing to do with "muh intel vs AMD", since it also applies to e.g ARM. also I haven't read these meme vulns like Spectre and Meltdown, but they probably are some general exploit on some way the BP is done in Intel. However, since cache/BP intrinsically lead to side channels, no matter how many of these meme vulns that get released and patched, you will always be able to form new similar side-channel attacks against any software running on a superscalar CPU

pedantry, i was just telling this polnigger that the vulns are on AMD CPUs as well. the fact that intel invented the architecture is not relevant
wrong
Nope. C is an abstract programming language, just like Java. It has nothing at all to do with assembly contrary to what LARPers tend to think.

Not everything is parallel.
serial computation is a better abstract model for reasoning, just as call by value is more understandable than call by reference/name.
Automatic parallel compilation was mastered by fortran architects, and resulted in SIMD, Very long instruction word (VLIW), and GPU hardware.
C is not designed for strict numeric computation; it is an attempt at symbolic computation in a byte-oriented fashion.
American engineers thought call stacks and recursion were bad because IBM would lose money if they were to implement European "novelties".
I agree that C compilers should not aggressively optimize as gcc.

Attached: b8580f2d9f5e1333e4b2613d0285572e9c62f3d8c48efec55b8f6e881452f774.jpg (400x520, 76.04K)

Already said that here
CPU switches between the threads but OP claims
It is sequential and to work around that the OS abstracts it away by switching between the threads.

Not in any way. Java runs in a VM from byte code with automatic memory management. The VM with the standard library alone is 130MB in size. Running software on a PC inside a PC can't be counted as running software directly on a PC in its native instructions.
Assembly and C are close enough that there is a GCC extension which allows for embedded assembly.

recursion is simply slow and offers nothing but laziness to the one writing it

Attached: laughingwhores.png (449x401, 490.09K)

If it's tail call recursion then it's equivalent to a loop anyways.
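Rough illustration, my own sketch (C doesn't guarantee the optimization, though GCC and Clang usually do it at -O2):

#include <stdio.h>

/* Tail-recursive: the recursive call is the last thing that happens,
   so a compiler can turn it into a jump... */
static unsigned long fact_rec(unsigned long n, unsigned long acc)
{
    if (n <= 1)
        return acc;
    return fact_rec(n - 1, acc * n);
}

/* ...which is exactly this loop. */
static unsigned long fact_loop(unsigned long n)
{
    unsigned long acc = 1;
    while (n > 1) {
        acc *= n;
        n--;
    }
    return acc;
}

int main(void)
{
    printf("%lu %lu\n", fact_rec(10, 1), fact_loop(10));   /* 3628800 3628800 */
    return 0;
}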

reread
you can embed assembly in any programming language, including Java. Java will just have some more overhead than C

I'll have to assume you're DEK to argue in good faith.
A language with static call graphs coupled with explicit stack structures seems like a compromise in the machine's favor.

imagine you have a parser implemented as a set of mutually recursive procedures. Trying to implement this without an implicit call stack would require one huge procedure with an explicit control stack and some means of dispatch (indirect goto, or switch); see the sketch at the end of this post.
Sure, the best solution is to use what yacc does and do the whole LALR approach, but the yacc language itself has recursion.
We are arguing over whether a central concept in computing should be reflected in computer language.
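Rough sketch of what I mean by mutually recursive procedures leaning on the implicit call stack; the grammar and names here are made up for illustration:

#include <ctype.h>
#include <stdio.h>

static const char *src;

static long expr(void);    /* forward declaration: expr and factor call each other */

static long factor(void)
{
    if (*src == '(') {
        src++;              /* consume '(' */
        long v = expr();    /* the recursion carries the nesting depth for us */
        src++;              /* consume ')' */
        return v;
    }
    long v = 0;
    while (isdigit((unsigned char)*src))
        v = v * 10 + (*src++ - '0');
    return v;
}

static long term(void)
{
    long v = factor();
    while (*src == '*') {
        src++;
        v *= factor();
    }
    return v;
}

static long expr(void)
{
    long v = term();
    while (*src == '+') {
        src++;
        v += term();
    }
    return v;
}

int main(void)
{
    src = "2*(3+4)+1";
    printf("%ld\n", expr());    /* prints 15 */
    return 0;
}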

C's environment is a lot smaller and Java IS A VM. There is no denying this.

The approach you describe is called VLIW and it's utter shit. You can't schedule instructions well beforehand since you don't know which execution units are free at any point (which branch did we come from? how long did that memory access take? did an exception happen?). Trying to encode parallelism explicitly leads to bloated, shitty, inefficient code.

what a shit thread

The only reason for any problem in C is that it sucks. There are languages out there that do not have these problems, so the flaw comes from copying C and UNIX and avoiding everyone else's work. There's a brief mention of Fortran, but no mention of Lisp, Ada, PL/I, or many other languages. I still don't know why anyone would use C unless they like using software that sucks.

This is more UNIX bullshit. Why must every computer run C? UNIX shills want you to feel helpless and it sucks. If a vector processor only runs Fortran, people will write more programs in Fortran to take advantage of the speed. If a Lisp machine doesn't support pointer arithmetic and has garbage collection, they would use Lisp and other GC languages. This is why this argument makes no sense and is just fearmongering. If anything, it's the only thing that could be a commercial success, which is why they don't want anyone to do it.

OK. How about:

switch (x)
    default:
        if (prime(x)) {
            int i = horridly_complex_procedure_call(x);
            case 2: case 3: case 5: case 7:
                process_prime(x, i);
        } else {
            case 4: case 6: case 8: case 9: case 10:
                process_composite(x);
        }

Is this allowed? If so, what does the stack look like before entry to process_prime()? I've been confusing my compiler class with this one for a while now. I pull it out any time someone counters my claim that C has no block structure. They're usually under the delusion that {decl stmt} introduces a block with its own local storage, probably because it looks like it does, and because they are C programmers who use lots of globals and wouldn't know what to do with block structure even if they really had it. But because you can jump into the middle of a block, the compiler is forced to allocate all storage on entry to a procedure, not entry to a block. But it's much worse than that, because you need to invoke this procedure call before entering the block. Preallocating the storage doesn't help you. I'll almost guarantee you that the answer to the question "what's supposed to happen when I do ?" used to be "gee, I don't know, whatever the PDP-11 compiler did." Now of course, they're trying to rationalize the language after the fact. I wonder if some poor bastard has tried to do a denotational semantics for C. It would probably amount to a translation of the PDP-11 C compiler into lambda calculus.
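A smaller version of the same storage headache, my own sketch in case the quote is too dense:

#include <stdio.h>

static void f(int skip)
{
    if (skip)
        goto inside;        /* jumping into the middle of a block is legal C */
    {
        int x = 42;         /* ...but this initialization is skipped on that path */
inside:
        printf("%d\n", x);  /* x is in scope either way; if we jumped here its value is
                               indeterminate, so the compiler has to allocate it at
                               function entry and can promise nothing about its contents */
    }
}

int main(void)
{
    f(0);    /* prints 42 */
    /* f(1) would read an indeterminate x, which is exactly the case the quote is about */
    return 0;
}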

YOU LIE

kill yourself rustfag

I see, the UNIX and C hater is back again
Can you just leave the board.
Garbage collection is still a pile of shit. Even if you have hardware support for it.

Is rustfag and UNIX hater really one and the same person?

yes, the rustfag has switched tactics

Thx for informing me I guess I was a little out of touch.

Attached: out of touch.mp4 (640x480, 3.63M)

Nah. I'm not the UNIX hater.
Also you don't have to worry about me anymore. I've pretty much stopped posting here. Zig Forums is a shithole and I'm sick of it. Every time I look at the catalog it is the same /g/-tier shit and triggering you anti Rust fags hash lost its appeal.

Attached: steve klabnik 9.jpg (1280x720, 41.19K)

nice try rustfag

The real question is: could PGO lead to better VLIW compilers?

literally kids

About as much as assembly, really. Memory hierarchies and instruction reordering/parallel execution are microarchitecture details and programming while having to fully account for them would be a fucking nightmare.
Much of the difficulty in task-based parallelism comes from synchronization between execution units and avoidance of race conditions. You will have the exact same challenges with pure assembly.
Erlang "avoids" nothing; it hides it all under abstractions.
You whine about compilers having to jump through hoops, but I guarantee that removing this layer wouldn't help. Case in point: compiling for VLIW architectures (i.e. instruction parallelism directly exposed to the software) is nowhere near as easy to do quickly and optimally as you'd hope.

Attached: f21c9e3cf9404d0ce8907c42134797508e4a2a07406b40215b21eed664e8a77f.jpg (355x342, 29.54K)

I see the argument has once again shifted from RTOS kernels and back into high-level languages. After reading, I have determined that what is being argued is execution models. Guess what, execution models are not tied to languages.

A chicken and egg flaw in logic. Processors feed their execution pipelines from a program counter. Microarchitecture has features that will try to keep the processor pipeline (also a sequence) from stalling and keep executing instructions, and if possible reduce the number of cycles required for each instruction. Symmetric multiprocessing? Multiple symmetric pipelines working in parallel.

Wow, the architecture that inspired most other architectures, including the x86... no wonder I can't rice.

C is a glorified macro assembler

Did you read the article? It isn't just saying that C is no longer low-level, it's also saying that pure assembly is no longer low-level. It's saying that processors have been optimized for C.

C IS A HIGH LEVEL LANGUAGE TO BEGIN WITH AND ASSEMBLY STILL IS LOW LEVEL
It doesn't change because of hardware.

"High-level language" means at least two things, and there's a common meaning of high-level language that usually isn't applied to C. It's sensible to make a distinction between something like C and something like Python, especially in an age where people aren't hand-writing assembly much.
Assembly has become high-level because it no longer reflects the underlying hardware well.

All these people who don't realize how much of a modern Intel/AMD's core transistor count is dedicated to determining which instructions are safe to execute out of order.

ADD R1, R0, #1
in the ARM ISA *directly* corresponds to
1110 00 1 0100 0 0000 0001 0000 0000 0001

where "1110" is the condition field (execute always), "0100" selects ADD as the ALU operation, "0000" names R0 as the register whose bits get fetched, "0001" names R1 as the register updated in the write back, and "0000 0000 0001" is the immediate 1 being added.

Okay, but how does the processor actually execute that behavior? You're not describing what happens, you're just describing the result.
The fact that it's abstracted away so much you're ignoring it is telling.

Attached: Screenshot from 2018-05-05 22-19-44.png (686x508, 188.91K)

What is "low level" enough for you? Some microarchitectural details are inevitably going to be abstracted away by whatever assembly you invent.
Also, see:
And not "optimizing the processor for C" doesn't really help with the fundamental challenges of task-based parallelism either.

Fine, but if we go down this route, I will step through MA, then to logic, and eventually we will find ourselves at solid state physics. I'm not interested.

The fact that branch prediction exists and has gotten accurate to the 98th percentile does not change the basics of microarchitecture. I'm not going to dump countless lines of VSLI on you. I could implement a state machine that just always guesses "take the branch", and if it gets it wrong (which will be known at the write back, usually for the comparison), well, I still have to stall to get the bad result out of the pipeline. There is no abstraction to it, just many things manipulating the pipeline attempting to keep it from stalling because of the possibility of a wrong answer.
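You can even watch the predictor from plain C without touching any hardware description language. My own rough benchmark sketch; numbers will vary, and at high optimization levels the compiler may remove the branch entirely, so build with something like -O1:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)

static int cmp(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

static long count_big(const int *v)
{
    long hits = 0;
    for (int i = 0; i < N; i++)
        if (v[i] >= 128)    /* this branch is what the predictor has to guess */
            hits++;
    return hits;
}

int main(void)
{
    int *v = malloc(N * sizeof *v);
    if (!v)
        return 1;
    for (int i = 0; i < N; i++)
        v[i] = rand() & 255;

    clock_t t0 = clock();
    long a = count_big(v);    /* random data: the branch is unpredictable */
    clock_t t1 = clock();

    qsort(v, N, sizeof *v, cmp);
    clock_t t2 = clock();
    long b = count_big(v);    /* same data, sorted: the branch is predictable */
    clock_t t3 = clock();

    printf("unsorted: %ld in %.2fs, sorted: %ld in %.2fs\n",
           a, (double)(t1 - t0) / CLOCKS_PER_SEC,
           b, (double)(t3 - t2) / CLOCKS_PER_SEC);
    free(v);
    return 0;
}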

Attached: 500px-Pipeline_MIPS.png (500x314, 46.93K)

Objectively and historically it's high level.
If you compare it to java, javascript or PHP of course it becomes low level.
No. Assembly written for your architecture reflects your CPU's architecture 100%.
Assembly commands are different for every CPU.
The article is garbage and the cause of Meltdown and Spectre was speculative execution, where things are guessed while waiting for resources.
That PCs became slower because of the patches that followed is true, and we should use different architectures. x86 and amd64 are full of bugs and legacy features.

Are you asking for a microcode-tier level of abstraction?
Some assemblies do require knowledge of the pipeline (e.g. to avoid pipeline hazards), and compiling C to them is not a problem. However, no sane person would rather use a language that constantly forces the programmer to think about those things.

*VHDL, not VSLI.

Great point autist, but I never claimed Java is the same as C so you're arguing about literally nothing. Both are abstract languages (you clearly don't understand this concept and are confused and think you can just manipulate the stack and registers from C and still have portable code). By that logic, would you also deny that C and Java are both programming languages, since having something in common would make them the same thing?

Both are true. C is used as a macro assembler, but at the same time they wrote a giant specification to try to make it into an abstract language that can run "efficiently" on multiple architectures. The most basic example of course is that integer sizes are abstract, and making invalid assumptions about them can lead to undefined behavior.
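For example, something like this (hypothetical but typical):

#include <limits.h>
#include <stdio.h>

int main(void)
{
    /* The standard only promises that int is at least 16 bits and long at
       least 32. Code that bakes in "int is 32 bits, long is 64" breaks
       when it gets ported. */
    printf("int: %d bits, long: %d bits\n",
           (int)(sizeof(int) * CHAR_BIT), (int)(sizeof(long) * CHAR_BIT));

    int x = INT_MAX;
    /* Computing x + 1 here would be signed overflow: undefined behavior,
       not wraparound, and optimizers really do assume it can't happen. */
    printf("INT_MAX = %d\n", x);
    return 0;
}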

Java and C are both programming languages, but C is compiled to CPU instructions and runs natively while Java is compiled to bytecode and run inside a VM with automatic garbage collection. Both are abstracted and portable. And if you embed assembly in Java it will become slower; if you embed it in C it will be executed directly, as if you had compiled it as the assembly it is (at least in GCC).
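For reference, the GCC extension looks roughly like this; an x86 sketch of my own, assuming GCC or Clang:

#include <stdio.h>

int main(void)
{
    int x = 40, y = 2;
    /* GCC extended asm: add y into x with a real x86 instruction.
       "+r" makes x an input/output register operand, "r" makes y an input. */
    __asm__("addl %1, %0" : "+r"(x) : "r"(y));
    printf("%d\n", x);    /* 42 */
    return 0;
}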

Pick one.

what a shit thread

First, there are multiple UNIX haters on this board. Second, Rust users have seen the flaws of C, but have Stockholm syndrome. They just keep putting makeup on a pig thinking it will become beautiful, when all they're doing is making a mess.

Nice try sole UNIX and C hater

The answer is: Yes.
Then again, it also leads to better RISC and CISC compilers, so the margin of failitude is still the same for VLIW.

Same for every other ISA. You want performance - you pay the price.

Certain VLIW ISAs, like Itanium, forced the compiler to compute data dependencies instead of trying to do it on die.

It's more likely that you're Rob Pike than it is that I'm the rustfag. Rust has too much C/UNIX bullshit, but it's better than C. I think Rust syntax sucks and they shouldn't have tried to make the language appeal to C/C++ weenies who would never use it in the first place.


That's fearmongering. CPUs that don't emulate a PDP-11 don't force the programmer to "fully account for" all that stuff. They're usually higher level than a PDP-11, with features like segmented memory, channel I/O, string instructions, and so on. These are all hardware features that can't be duplicated in software. None of the new RISCs have any of this because it won't be useful for C and UNIX even though it would make other languages and OSes run faster.


C is historically low level, but that's not why it sucks. JavaScript is high level and it sucks too. Low level languages do not have to "decay" arrays, use null-terminated strings, have a switch statement that sucks, or any of that.

The whole point of the article is that not having to run C would make hardware faster, so all that bullshit they have to do to make C "fast" is still slower than a simpler computer that doesn't have to run C.


You're right. Rust sucks because Rust users do not look outside the C and UNIX culture. They still think UNIX can be fixed with Redox too.

Hey. This is unix-haters, not RISC-haters. Look, those guys at berkeley decided to optimise their chip for C and Unix programs. It says so right in their paper. They looked at how C programs tended to behave, and (later) how Unix behaved, and made a chip that worked that way. So what if it's hard to make downward lexical funargs when you have register windows? It's a special-purpose chip, remember? Only then companies like Sun push their snazzy RISC machines. To make their machines more attractive they proudly point out "and of course it uses the great general-purpose RISC. Why it's so general purpose that it runs Unix and C just great!" This, I suppose, is a variation on the usual "the way it's done in unix is by definition the general case" disease.

Branchless Doom gives you, like, 1 frame/hour if not less.

assumptions C makes are convenient and powerful enough to use
if you can make assumptions in your language in such a manner, I'm all ears

Name examples. What architectures exist that implement which features that take this burden off programmers, even partially?
But seriously, this has literally nothing to do with C. C works just fine with flat, segmented, and paged memory.
not a problem of the language. Just write an asm wrapper or use a library that does.
asm

correct
false

C has abstraction and neckbeards claim C is fast. Checkm8

I hate C and UNIX too

but that's the whole point of Rust, otherwise they'd have just rewritten everything in a non-complete-shit language, like SML
What, you want your PL to call some specific assembly string instructions (assuming they're even faster in the first place) to pretend to be fast for builtin functions?

>github.com/xoreaxeaxeax/movfuscator/tree/master/validation/doom
nice meme. this isn't the same thing.
also this is suspicious. he could have some inefficient path for rendering, which is common in this sort of meme project

ANY 👏 LANGUAGE 👏 IS 👏 ONLY 👏 AS 👏 GOOD 👏 AS 👏 ITS 👏 COMPILER

You know that Java runs in a virtual machine? The overhead was already mentioned by
If you run it outside of Java, it is not embedded
The overhead in comparison is tiny. Most seL4 implementations are in C.
I'm out. At least we have a thread for straight people

If you honestly don't like C or C++, then you are a 65IQ individual. C/C++ is the GOAT.

C didn't age well, and the only reason it's in the position it is now is because the rest of computing stagnated around it.
C is almost half a century old. It's shocking that we're still using it.
I think Ritchie would agree - he said that Unix retarded OS research by 10 years.

I've literally not programmed anything aside from C for the last 2 years. stating basic facts about Java over and over again when it's hardly on topic isn't helping you
u wot m8. I said Java has more overhead than C. who said anything about "embedded"? With JNI you have to put your shit in a function in a shared library (IIRC) which Java will call. so yeah, there's overhead
good, perhaps retards who LARP about C all day would be better suited to a board like cuckhan/g/

lol'd. Is this your Lisp machine bullshit again?
None of the new RISCs have any of this because it's understood that pushing bloat onto the CPU architecture only increases the cost and power consumption. I'm pretty sure that most computers these days are used in mobile and embedded applications, and if you ever need extra processing power in those cases, there's a million ways to do it without bloating all CPUs with bullshit that 1% of developers will need.

LARP ALERT

The stagnation happened because AT&T shills wanted it to be that way. They created the "UNIX is right and everyone else is wrong" mantra, which goes along with the mentality that you should never fix anything because all mistakes are user error. If you're coming from VMS and you say the UNIX way is broken and sucks, the shills trained the weenies to say that VMS is the problem and UNIX is always right. A language where 4[a-1] means a[4-1] is obviously broken beyond repair, but weenies still defend that bullshit.

What's more shocking is that some languages that existed before C were already better than C is in 2018.

More like 50 years, although if that quote was from 1980 or earlier it was probably true at the time.


Lisp machines are not the only thing besides UNIX and other C-based operating systems. Instructions for segmented memory and strings would be useful to most programmers. If the file system worked the way the flat memory model does, it would suck. The main Multics idea was to make memory work more like files so all memory is in the file system and made of segments that can change their size like files do. This is still a good idea even though the MMUs of most processors don't support it.

It might use an extra 1% of silicon to benefit 90% of developers, but I don't know the exact numbers. RISC fanatics used to say FPUs were bloat and you should do floating point in software, but the facts showed that they were wrong and now they have FPUs too.

This poor user tried to use Unix's poor excuse for DEFSYSTEM. He is immediately sucked into the Unix "group of uncooperative tools" philosophy, with a dash of the usual unix braindead mailer lossage for old times' sake. Of course, used to the usual unix weenie response of "no, the tool's not broken, it was user error" the poor user sadly (and incorrectly) concluded that it was human error, not unix braindamage, which led to his travails.

Continuing in the Unix mail tradition of adding tangential remarks: Likewise, I've always thought that if Lisp were a ball of mud, and APL a diamond, that C++ was a roll of razor wire.
That comparison of Lisp and APL is due to Alan Perlis - he actually described APL as a crystal. (For those who haven't seen the reasoning, it was Alan's comment on why everyone seemed to be able to add to Lisp, while APL seemed remarkably stable: Adding to a crystal is very hard, because you have to be consistent with all its symmetry and structure. In general, if you add to a crystal, you get a mess. On the other hand, if you add more mud to a ball of mud, it's STILL a ball of mud.)
To me, C is like a ball. Looked at from afar, it's nice and smooth. If you come closer, though, you'll see little cracks and crazes all through it.
C++, on the other hand, is the C ball pumped full of too much (hot) air. The diameter has doubled, tripled, and more. All those little cracks and crazes have now grown into gaping canyons. You wonder why the thing hasn't just exploded and blown away.
BTW, Alan Perlis was at various times heard to say that (C|Unix) had set back the state of computer science by (10|15) years.

Like a fucking clock.

not an argument

The non-literal, imageboard-memey kind, since plenty of socially competent normalfags who work as devs have hateboners for certain languages.

...

Please explain that segmented memory bullshit. Why would anyone want to use that instead of paging these days? And what does C have to do with it?

Do you know what the commutative property is? Do you know how math (arithmetic) works?

I'm glad we have a dismissive kike serial first poster who obviously doesn't even read or comprehend threads. We all know you're just here to try to ruin the board and it simply makes everybody more antisemitic every time you do it.

Great job!

It's more Rust shilling.

Elaborate. And, like the other anon is asking, how is that better than paging?
In most embedded applications (except maybe something like a router with a complex firewall?), string processing is not a performance bottleneck, or is not needed at all; that extra 1% of silicon would just mean a shorter battery life and a higher cost.
FPUs were first added as optional coprocessors. It's only after much experience with them that their general usefulness was realized. I'm all for a completely optional string coprocessor that isn't part of the main CPU architecture.

Multics has both. Paging is about swapping and organizing physical RAM into virtual memory. Segmentation is about organizing data into segments that can grow and shrink independently like files. Instead of reading and copying data from disk, all data is directly addressable as memory and paged in and out as needed, which the article calls demand paging. The stack and other parts of the process memory are also segments which are part of the file system. What C has to do with it is that the C weenies didn't care about segmentation and blamed C's inability to work well on these architectures on segmentation.
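The closest everyday UNIX analogue is mmap, which at least shows the flavor. A POSIX sketch of my own, with most error handling skipped:

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0)
        return 1;
    struct stat st;
    fstat(fd, &st);

    /* The file's contents become plain addressable memory, paged in on
       demand; no explicit read()/copy loop. Multics made this the default
       way of getting at everything. */
    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED)
        return 1;

    long newlines = 0;
    for (off_t i = 0; i < st.st_size; i++)
        if (data[i] == '\n')
            newlines++;
    printf("%ld lines\n", newlines);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}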

multicians.org/multics-vm.html

It sucks because it doesn't work how arithmetic works. a - 1 should mean what it does in math, so if a is [1,2,3,4], a - 1 should be [0,1,2,3]. That reminds me of more bullshit "UNIX math" from the UNIX language JavaScript. [1,2,3,4] == [1,2,3,4] is false, [1,2,3,4] == "1,2,3,4" is true, and [1,2,3,4] - 1 == [1,2,3,4] - 1 is false, but that's because they're both NaN. Someone's going to defend that too.


The lesson I just learned is: When developing with Make on 2 different machines, make sure their clocks do not differ by more than one minute. Raise your hand if you remember when file systems had version numbers. Don't. The paranoiac weenies in charge of Unix proselytizing will shoot you dead. They don't like people who know the truth. Heck, I remember when the filesystem was mapped into the address space! I even re

Your words mean nothing.

So, memory mapped IO. The lack of memory mapped io for disk access on hardware architectures is not a result of C but rather that there's no point in implementing it. Also you didn't explain why we need segmentation for that (and why that's better than paging), or why paging "in and out" is not "reading and copying data from disk".

Did you read the article? It says:

Also let's talk about C and hardware instead of memory allocation mechanisms implemented by operating systems.

Buffer overflows, null pointers, segmentation faults, weak typing, and manual memory management: The Universal Programming Language®

So, [] (as in a[i]) is an operation that takes the ith element of collection a. There is nothing commutative about this operation: the left-hand side is a collection, and the right-hand side maps into it. From this definition, it should be obvious that the ith element of collection a is not the same as the ath element of collection i. In addition, in C i must be of a type that is coercible into an integer.

a = { 1 : "a", 2 : "b" }
i = 2
a[i] == "b"
i[a] is a type error

It's clear that you do not know how the C standard defines array evaluation.
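For the record, the standard defines a[i] as *((a)+(i)), which is where the commutativity comes from. Tiny demo:

#include <stdio.h>

int main(void)
{
    int a[] = { 10, 20, 30, 40 };
    /* Because a[i] is *((a)+(i)), the operands can swap: 2[a] names the same
       object as a[2]. That's also why the 4[a-1] spelling earlier in the
       thread parses at all. */
    printf("%d %d\n", a[2], 2[a]);    /* prints "30 30" */
    return 0;
}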

I know it, I just reject it as nonsensical and unintelligent.

No you don't.

C's standard behavior is nonsensical given the definition of array subscripting that I clearly gave.