C is used because it sucks. C requires more programmers, more bullshit "research", and more wasted time. Good languages need fewer programmers, can reuse decades of existing research, and save time. Since a C program needs more programmers just to produce a lower-quality result, there is more demand for C programmers, so more universities teach C. Since C is unable to use 60 years of research that works, there is more demand for bullshit "research" to make it safer, which never works because of "unrelated" C flaws, so the problems can be "researched" again. "If it worked, it wouldn't be research!"
A few examples of this bullshit are integer overflows, buffer overflows, and incompatibility, all of which were solved in the 60s but are still problems with C. They were already recognized as old problems with C by 1990, and it's not like nobody tried to fix them.
There are "guidelines" about how to check for overflow with addition and multiplication in C. In assembly, it's just checking a flag or comparing the high register to 0 after performing the operation, if the computer doesn't have a way to trap automatically. C needs about 8 branches (7 ifs, 1 &&) and 4 divisions (>>954468) just to check whether the result will be in range after multiplication. That's what professional "coding standards" recommend.
Most vulnerabilities, segfaults, and other errors in C are due to overflowing the bounds of strings and arrays. All this bullshit like ASLR, heap canaries, and stack canaries was invented to put a band-aid on this one problem. This was solved around 60 years ago by having the compiler compare the index to the array bounds. Tagged architectures like Lisp machines have an array data type and bounds checking in hardware to make it faster and easier for programmers and compilers. There are also around 60 years of techniques to determine when it's possible to optimize away unnecessary bounds checks.
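The whole fix, written out by hand since the C compiler won't emit it for you. A minimal sketch (the array name and helper are mine); a bounds-checking compiler inserts the same compare-and-trap automatically on every access:

#include <stdio.h>
#include <stdlib.h>

#define N 10
static int table[N];

/* One unsigned compare catches both i < 0 and i >= N. */
int table_get(int i)
{
    if ((unsigned)i >= N) {
        fprintf(stderr, "index %d out of bounds\n", i);
        abort();                    /* trap instead of reading garbage */
    }
    return table[i];
}

int main(void)
{
    table[3] = 42;
    printf("%d\n", table_get(3));   /* prints 42 */
    printf("%d\n", table_get(10));  /* aborts cleanly */
    return 0;
}

That's the entire band-aid industry replaced by one compare per access, most of which a decent compiler can prove away.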
A big feature of mainframes and workstations (UNIX and RISC excepted) is compatibility between different languages. Multics and VMS have data descriptors, so you can pass data between programs written in any language. Fortran and Pascal on Lisp machines share bignums, arrays, and GC with Lisp. Microkernels treat drivers as ordinary user programs, so drivers can be written in any language, including interpreted languages if they're fast enough. What really sucks is that the AT&T shills did not want C to be compatible, because the incompatibility kept people from moving away from C and UNIX.
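To make the descriptor point concrete, here's a rough model in C of a VMS-style string descriptor (the field layout is approximated from the VMS convention and the type codes are made-up stand-ins, so don't take the names as gospel). The callee gets the length and type along with the pointer, which is exactly what a bare char* throws away:

#include <string.h>

/* Rough model of a VMS-style descriptor: the data travels with its
   length and a type code, so a COBOL caller and a Fortran callee can
   agree on what the bytes mean without sharing a compiler. */
struct descriptor {
    unsigned short length;   /* how many bytes are valid */
    unsigned char  dtype;    /* type code (stand-in values below) */
    unsigned char  dclass;   /* storage class */
    char          *pointer;  /* the data itself */
};

enum { DTYPE_TEXT = 1, CLASS_FIXED = 1 };   /* made-up stand-in codes */

struct descriptor describe(char *s)
{
    struct descriptor d = { (unsigned short)strlen(s), DTYPE_TEXT, CLASS_FIXED, s };
    return d;
}

/* The C way: a bare pointer. The callee has no idea how long the
   buffer is or where it ends -- hence NUL terminators, strlen, and
   a few decades of buffer overflows. */
void c_way(char *s);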
C was "designed" in a way that these solutions don't work for it. The solutions work for other languages, like Lisp, Ada, Cobol, and BASIC, but they don't work for C, because C sucks. Instead of using decades of simple and proven solutions, the C "solution" is to throw more programmers at it and, if that doesn't work, blame the user or hardware.
Doesn't it give you that warm, confident feeling to know that things aren't Quite Right, but that you'll have to wait 'til you need something to know Just What? Life on the Edge. Get with the program -- remember -- 90% is good enough. If it's good enough for Ritchie, it's good enough for me!

"It's State of the Art!" "But it doesn't work!" "That IS the State of the Art!" Alternatively: "If it worked, it wouldn't be research!" The only problem is, outside of the demented heads of the Unix weenies, Unix is neither State of the Art nor research!