C Is Not a Low-level Language

You might have misunderstood. Every core executes everything sequentially, but the programs don't run at ring 0, and OP claimed that to be an issue.
I'm not sure what you're talking about. C is a high-level programming language, but its amount of abstraction has proven to be reasonable.
I don't think anything keeps you from starting them at the beginning of the program.
As far as I know, other languages just abstract that away, so it won't get any better just because your code looks prettier.
And no one cares about theoretical models. The only thing that matters is practical capability.

It's an issue because C is not very good at using multiple cores, so processors focus on making single-core execution fast, which causes all kinds of problems.
C is a lot higher level than it appears to be because of everything going on in the processor, but it's still being used as a low-level language. It's getting some of the worst of both worlds.
What keeps you from doing that is not knowing in advance how many you're going to need or what data they're going to process.
If you want good parallelism you need to make it a core part of the way computation is handled. Erlang processes are the natural way to structure Erlang programs, so the parallelism is almost implicit. POSIX threads are absolutely not the natural way to structure C programs.
It would get better if the processor were designed for that execution model. Modern processors are optimized for C and do things that are necessary to make C run fast but that would have better alternatives under a more realistic execution model.
GPUs are useful because their execution model is so different from a CPU's, even though everything you can do on a GPU could be done on a CPU in a more sequential way.
These models represent the ways the systems actually work. Practical capability depends entirely on the way these systems work.

I believe what he meant is that C is meant to work with an OS where memory is treated as one big array. Do you have an argument against this?
I don't know if they do, but x86 is a giant mess. RISC-V is attracting major interest for a reason; corporations (including Google, Samsung, Western Digital, and NVIDIA) aren't just burning money for fun.


It can be compiled into machine code like many other languages. What you said didn't refute what he said.
You're saying that POSIX threads, the threading model made for Unix and Unix-likes, has nothing to do with C? Come on, you can't possibly believe that.
Here's how they're used in practice, if you don't believe me: "Pthreads are defined as a set of C language programming types and procedure calls, implemented with a pthread.h header/include file and a thread library - though this library may be part of another library, such as libc, in some implementations." See: computing.llnl.gov/tutorials/pthreads/
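
For illustration, here's a minimal sketch of what that looks like (my own example, not from the tutorial). Note that thread creation, argument passing, and joining are all explicit library calls; the language itself knows nothing about the parallelism.

#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    int id = *(int *)arg;                 /* unpack the untyped argument */
    printf("worker %d running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[4];
    int ids[4];
    for (int i = 0; i < 4; i++) {         /* spawn four workers */
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < 4; i++)           /* wait for all of them */
        pthread_join(threads[i], NULL);
    return 0;
}

Compile with something like cc -pthread file.c. Everything parallel here goes through void pointers and procedure calls, which is exactly the "bolted on" structure being complained about upthread.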

I think they're saying that pthreads is a way to use parallelism in C, and that therefore parallelism in C's execution model is a solved problem and as easy as it could be.

No shit

Read the thing

Shut up; you don't have the slightest idea what you're talking about. Side-channel attacks on the cache/branch predictor exist on all mainstream CPUs. This has nothing to do with "muh Intel vs. AMD", since it also applies to e.g. ARM. I haven't read up on these meme vulns like Spectre and Meltdown, but they are probably general exploits of some way Intel does branch prediction. However, since caches and branch prediction intrinsically lead to side channels, no matter how many of these meme vulns get released and patched, you will always be able to construct new, similar side-channel attacks against any software running on a superscalar CPU.

Pedantry. I was just telling that /pol/ poster that the vulns exist on AMD CPUs as well; the fact that Intel invented the architecture is not relevant.
wrong
Nope. C is an abstract programming language, just like Java. It has nothing at all to do with assembly, contrary to what LARPers tend to think.

Not everything is parallel.
Serial computation is a better abstract model for reasoning, just as call-by-value is easier to understand than call-by-reference or call-by-name.
Automatic parallelizing compilation was mastered by Fortran compiler architects, and it resulted in SIMD, very long instruction word (VLIW), and GPU hardware (a loop sketch follows after this post).
C is not designed for strict numeric computation; it is an attempt at symbolic computation in a byte-oriented fashion.
American engineers thought call stacks and recursion were bad because IBM would lose money if it had to implement European "novelties".
I agree that C compilers should not optimize as aggressively as GCC does.
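
On the automatic-parallelization point above, this is roughly the kind of loop those Fortran-style vectorizers target (my own sketch): every iteration is independent, so it maps directly onto SIMD lanes.

/* saxpy: y = a*x + y. 'restrict' promises x and y don't alias,
   so a compiler at -O3 can emit SIMD instructions without runtime
   overlap checks. */
void saxpy(int n, float a, const float *restrict x, float *restrict y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}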

Already said that here
The CPU switches between the threads, but OP claims:
It is sequential, and to work around that, the OS abstracts it away by switching between the threads.

Not in any way. Java runs in a VM from bytecode with automatic memory management. The VM with the standard library alone is 130 MB in size. Running software on a PC inside a PC can't be counted as running software directly on a PC in its native instructions.
Assembly and C are close enough that there is a GCC extension which allows for inline assembly.
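
For example, a minimal sketch using GCC's extended asm syntax (x86-64 AT&T syntax assumed; the operand constraints are the standard documented ones):

#include <stdio.h>

int main(void) {
    int a = 20, b = 22, sum;
    /* addl %2, %0: add operand 2 (b) into operand 0 (sum).
       "=r" puts sum in a register, "0" ties a to that same
       register as its initial value, "r" puts b in a register. */
    __asm__("addl %2, %0"
            : "=r"(sum)
            : "0"(a), "r"(b));
    printf("%d\n", sum);  /* prints 42 */
    return 0;
}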

Recursion is simply slow, and it offers nothing but convenience to the one writing it.
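
For context, here's the comparison that claim usually rests on (my own sketch): the recursive version pays for a call frame per element unless the compiler eliminates the tail call, while the loop has no call overhead at all. At -O2, GCC and Clang typically compile the tail-recursive version into the same loop, so whether recursion is actually slow depends on the compiler.

/* Tail-recursive sum: one call frame per element, unless the
   compiler performs tail-call optimization. */
long sum_rec(const int *a, int n, long acc) {
    return n == 0 ? acc : sum_rec(a + 1, n - 1, acc + a[0]);
}

/* Iterative sum: no calls, constant stack usage. */
long sum_iter(const int *a, int n) {
    long acc = 0;
    for (int i = 0; i < n; i++)
        acc += a[i];
    return acc;
}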
