The world runs on C

Let's run through this. You sit down at your computer. It's running an OS with a kernel written in C. It also has a set of userland utilities written in C. You open your browser. That browser is written at least partially in C. You type in a URL and hit enter. That connects to DNS servers to resolve the human-friendly name you typed into an IP address. The most common DNS server is BIND, which is written in C. Those IP addresses are most likely dynamically assigned with DHCP. ISC DHCPD is written in C. I hear some prefer dnsmasq for these tasks. That is also written in C.

The web server you're connecting to responds to the request and gives you the page. That web server is most likely running either Apache httpd or nginx, both of which are written in C.

Later on, you go to check your email. Your email client is written either partially or completely in C. You most likely connect to an external server that stores your mail. That server needs special software to do its job, and there are several options. To mention a few: Postfix is in C, Sendmail is in C, Exim is in C, OpenSMTPD is in C, and Dovecot is in C.

What if you want to transfer a file at some point? FTP? The standard FTP/SFTP clients are in C. vsftpd is in C. pureftpd is in C. proftpd is in C. NFS? nfs-utils are in C. SMB? Samba is (at least in part) in C. Need to connect to something over secure shell? The ssh client and daemon are in C.

How does this make you feel?

Attached: c-programming-language.jpg (900x500, 40.61K)


Nothing would boot if we didn't have assembly.

Why is Ada such a meme outside of serious Computer Engineering?

how into C?

Attached: 792px-The_C_Programming_Language_cover.svg.png (792x1023, 104.69K)

Ada is a meme because it was so comically verbose and asinine that even the government found it too expensive to use.

But why write assembler directly when, for the most part, your compiler can generate it for multiple architectures from a single source? Additionally, writing asm directly for some RISC architectures is just not that useful, since they were designed with HLLs in mind.

Looks like there's a lot of things to rewrite in Rust!

Did you write your compiler?

There's the door. Help yourself to it.

Attached: door.png (154x336, 123.85K)

No, since I am quite content with the output produced by clang. What's your point?

Just asking a question. Why are you getting defensive?

I feared a pointless discussion would erupt about how you can only trust your compiler's output if you have written it yourself.

Clang, huh? What are its advantages over gcc?

I found it to be one of the most sane languages I've ever used. It is a big language, but unlike C++, which has disparate features added randomly, Ada works as a coherent whole.

you get to take extended multi-hour breaks and heat your house with your CPU while it takes 50 times longer to compile.

I kinda got used to using clang for development builds because of the better error messages and doing production builds with gcc, although this gap seems to close with every new version.

Those days are over and were mostly due to clang/llvm not doing all the optimizations gcc was performing. Now both take quite long to compile bigger projects.

But the verbosity doesn't make it unreadable, unlike Java (and other verbose languages).

Wasn't it also named by the StingyJeWs?

The sensation of self-defeat as you replace a Free toolchain with one that was funded specifically to get the GPL out of tivoized platforms.

Not intentionally gimped by a narcissist, foot-scum-eating ideologue, for one.

Appreciate it anons. This makes it easy to choose which is the better compiler.

It's not like you're going to write anything with either.

They are both terrible. TCC is also not perfect but at least it takes 10x less time to compile things.

TCC is a joke, it's intended for small embedded tasks.

Lol, for serious embedded projects like the ones the government/military does, the total LOC produced per hour is like 3 max if you count all hours spent on the cycle, so including initial architecture, code reviews, all the way through to the final test.
I highly doubt that having to type "procedure foo(" instead of "void foo(" was the deciding factor.

Uuuuuuu

Which complaints do you have specifically?

Damn it.

C accidentally stumbled into its position. Historical shit, not quality. Not using C++ for large systems-tier projects in the current year is the height of retarded.

Something still in use is not equal to something that was in use but has been replaced

Yay, hooray for encouraging lots of bugs. Could've written OS in Ada or something that doesn't totally suck. Hell even Pascal is better. C is the niggest.

I heard that C++ is designed. Is that true?

Do you enjoy the smell after you talk out of your ass?

Do it faggot

C++ AMP does parallel

It sucks. C requires much more code than other languages and has more bugs per line. That's because basic programming tasks like strings and memory allocation are extremely complicated, slow, and bug-prone in C. Anything more sophisticated than slow and buggy strings and some "UNIX backwards compatible" floating point math functions isn't even provided by C, so everyone has to reinvent it from scratch. How much of that code is memory allocators and garbage collectors because malloc sucks?

The ISC dhcpd allocator alloc.c is a good example of how unproductive C is. It has its own allocation and freeing routines and its own implementation of strings.
source.isc.org/cgi-bin/gitweb.cgi?p=dhcp.git;a=blob;f=common/alloc.c;h=47609bbc022378b8a291823a7c2c38dfc50e5815;hb=b1ed27b95152c128d735eb36c42cc5c35e001ae2

Everything in C sucks so badly that even with the meager functionality C does include, reinventing your own saves time and increases speed and reliability compared to using the broken bullshit that comes with it. Maybe that's why C weenies think C is productive. Instead of wondering why C strings are so bad that every major program has to reinvent strings, they think C is a great language because making their own strings takes less time than using string.h properly. I also completely left out all the GUI software and libraries other than glibc that need to be present for the browsers and other bloatware to run.
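To make that concrete, here's a minimal sketch (plain ISO C; the helper name is made up) of what "concatenate two strings" turns into once you do the length and allocation bookkeeping yourself, which is roughly the boilerplate every big C project ends up wrapping in its own string library:

#include <stdlib.h>
#include <string.h>

/* Hypothetical helper: join a and b into a freshly malloc'd buffer.
 * The caller owns the result and must free() it; returns NULL if
 * allocation fails. All length/NUL-terminator bookkeeping is manual. */
char *str_concat(const char *a, const char *b)
{
    size_t la = strlen(a);
    size_t lb = strlen(b);
    char *out = malloc(la + lb + 1);    /* +1 for the terminating NUL */
    if (out == NULL)
        return NULL;
    memcpy(out, a, la);
    memcpy(out + la, b, lb + 1);        /* copies b's NUL too */
    return out;
}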

openhub.net/p/linux/analyses/latest/languages_summary
openhub.net/p/coreutils/analyses/latest/languages_summary
openhub.net/p/grep/analyses/latest/languages_summary
openhub.net/p/glibc/analyses/latest/languages_summary
openhub.net/p/chrome/analyses/latest/languages_summary
openhub.net/p/firefox/analyses/latest/languages_summary
openhub.net/p/bind/analyses/latest/languages_summary
openhub.net/p/dhcpd/analyses/latest/languages_summary source.isc.org/git/dhcp.git
openhub.net/p/dnsmasq/analyses/latest/languages_summary
openhub.net/p/apache/analyses/latest/languages_summary
openhub.net/p/nginx/analyses/latest/languages_summary

> There's nothing wrong with C as it was originally
> designed,
> ...
bullshite. Since when is it acceptable for a language to incorporate two entirely diverse concepts such as setf and cadr into the same operator (=), the sole semantic distinction being that if you mean cadr and not setf, you have to bracket your variable with the characters that are used to represent swearing in cartoons? Or do you have to do that if you mean setf, not cadr? Sigh.

Wouldn't hurt to have an error handling hook, real memory allocation (and garbage collection) routines, real data types with machine independent sizes (and string data types that don't barf if you have a NUL in them), reasonable equality testing for all types of variables without having to call some heinous library routine like strncmp, and... and... and... Sheesh.

I've always loved the "elevator controller" paradigm, because C is well suited to programming embedded controllers and not much else. Not that I'd knowingly risk my life in an elevator that was controlled by a program written in C, mind you...

Yep. x += y; has so many bugs in it.

That line does contain a bug because it could overflow. C has no way to detect that overflow has occurred. I think it makes sense to trap when unsigned arithmetic overflows too, but C weenies use "unsigned" to mean "won't trap" instead of its real purpose, to represent numbers that can't be negative. Adding 100 and causing a number to overflow and become less than 100 is just as bad as adding 100 and causing a number to become negative.
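For the record, this is roughly what checking it by hand looks like in portable C (a sketch; the names are made up, and nothing traps for you): unsigned wraparound can only be detected after the fact, while signed overflow has to be checked before the add, because the overflow itself is undefined behaviour.

#include <limits.h>
#include <stdbool.h>

/* Unsigned: wrap is well defined, so add first and compare. */
bool uadd_wrapped(unsigned x, unsigned y, unsigned *sum)
{
    *sum = x + y;            /* wraps modulo UINT_MAX + 1 */
    return *sum < x;         /* true iff the addition wrapped */
}

/* Signed: must check the operands first; overflowing is UB. */
bool sadd_would_overflow(int x, int y)
{
    return (y > 0 && x > INT_MAX - y) ||
           (y < 0 && x < INT_MIN - y);
}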

It would be a good thing if hardware designers would remember that the ANSI C standard provides _two_ forms of "integer" arithmetic: 'unsigned' arithmetic which must wrap around, and 'signed' arithmetic which MAY TRAP (or wrap, or make demons fly out of your nose). "Portable C programmers" know that they CANNOT rely on integer arithmetic _not_ trapping, and they know (if they have done their homework) that there are commercially significant machines where C integer overflow _is_ trapped, so they would rather the Alpha trapped so that they could use the Alpha as a porting base.

Oh no. I don't know how to check the carry flag? The post.

Bad programmers write bad code regardless of language. Embedded folks use dnsmasq to provide DHCP because, despite its limited scope, it's much higher quality and, ironically, supports more features of the protocol.
Sounds like you're just a bad programmer.

...

...

There's nothing wrong with intentionally overflowing an unsigned value. It's a heavily used technique for speed. Maybe this is just a level of software engineering you're not comfortable with and should go back to webdev and let others write the libraries you depend on?

not C

...

Who said anything about portability?

asm keyword is C

...

The asm keyword is not C. It's listed as a common extension in the standard as a warning to compiler implementations and is not required.

I'm no C expert, but wouldn't you just be able to do something like this (not perfect, but works in most cases)?
int finalx = x + y;
if (finalx < x) { /* x + y overflowed */ }

fug
*finalx>x

Who said anything about portability?

asm keyword is C

Pretending to be retarded, or retarded?

Attached: faggot.png (684x539, 45.51K)

anything that can be compiled could be the foundation. C is popular because of unix.

No, the foundation has to be the one with the lowest common denominator, otherwise everything will run like fucking Java.
It is because you have yet to experience it. Luckily for us, Java faggots are unproductive as fuck and rarely write anything bigger than Hello World programs. The only Java program I use is JDownloader2, and that's it.
How do you address single bits, also called booleans, in memory without C struct bitfields? (The C bitfield way is sketched below for reference.)
Which language would you use for the task?
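For reference, the C struct bitfield version the question alludes to, as a minimal sketch (field names are made up; the standard leaves layout and ordering to the implementation, so this is not a wire format):

#include <stdio.h>

/* Pack several one-bit booleans into a single allocation unit. */
struct flags {
    unsigned ready    : 1;
    unsigned error    : 1;
    unsigned dirty    : 1;
    unsigned reserved : 5;   /* pad out to 8 bits */
};

int main(void)
{
    struct flags f = {0};
    f.dirty = 1;                                     /* touch one bit */
    printf("%u %u %u\n", f.ready, f.error, f.dirty); /* 0 0 1 */
    return 0;
}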

Now you pretend you didn't read it already because you have no arguments anymore.

You wanna argue about Java being shit, go ahead.
You can't address single bits on modern architectures anyway, so idk what you are talking about.

What does a large number suddenly becoming much smaller have to do with speed? If you go to download a big file and the size wraps around to 300 bytes because of an unsigned overflow, it would "download" faster, but that would suck.

I'm not comfortable with 17 million line kernels and 30 million line browsers. What sucks about UNIX culture is that they measure value by the amount of code instead of by functionality, quality, and usefulness. That's probably why they prefer monolithic monstrosities like Linux over microkernels where people can write drivers in whatever language they want.


C doesn't provide any way to detect integer overflow at all. Assembly is not C and C is not assembly.

You're brainwashed. Some parts of C are based on whatever the PDP-11 did, like the pointers and flat memory model, but C is not closest to how any machine works. That's AT&T marketing, not reality. AT&T shills knew people criticized C for not having good arrays, strings, numbers, control structures, memory management, linking, macros, and so on, so they made up this bullshit that C is actually portable assembly, so all of its mistakes are the hardware's fault. Every single problem with C is blamed on hardware and programmers who use the language, instead of the AT&T employees responsible. AT&T ran into the same problem, but since they didn't care about quality, they didn't have a reason to complain about their own bugs and blamed the users for using the "tools" the wrong way.


Java defines all overflows as wrapping around. This doesn't have anything to do with why Java is slow.

PL/I has bit strings, which have the properties of strings, and you can also say how many bits wide a number is. This is what Multics did since before C and UNIX existed.

multicians.org/mtbs/mtb692.html
dcl 1 encoded_access_op aligned based,
      2 audit_type,
        3 object_type fixed bin (4) uns unal,
        3 access_type fixed bin (2) uns unal,
      2 operation_index fixed bin (12) uns unal,
      2 detailed_operation fixed bin (18) uns unal;
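For comparison, here's roughly what that record looks like rendered with C bitfields (a sketch only; C makes no promises about field ordering or padding, and the original packs into a 36-bit Multics word):

/* Approximate, non-portable C analogue of the Multics encoded_access_op record. */
struct encoded_access_op {
    unsigned object_type        : 4;
    unsigned access_type        : 2;
    unsigned operation_index    : 12;
    unsigned detailed_operation : 18;   /* 4 + 2 + 12 + 18 = 36 bits */
};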

You have obviously been brainwashed. You can't tell working software from broken software. If you don't have some horror story, or some misdesign to point out, KEEP YOUR POSTS OFF THIS LIST!! Kapeesh? We don't want to hear your confused blathering. Go bleat with the rest of the sheep in New Jersey.

Of course, used to the usual unix weenie response of "no, the tool's not broken, it was user error", the poor user sadly (and incorrectly) concluded that it was human error, not unix braindamage, which led to his travails.

Power-of-two ringbuffer indexes work via small registers or ANDing off the top bits of a larger register and letting the remaining bits overflow to loop. The webdev alternative is to do like,
if (index >= size) index = 0;
but this involves a conditional and divergent codepaths. It's significantly slower, and in some cases like in a CUDA kernel will completely destroy performance.
Various other algorithms rely on adding large primes to indexes that wrap to effect a 'random' walk of a buffer, and some encryption algorithms do similar with the inverse of the golden ratio as part of the key schedule.
This is a level of software design that you'll probably never be comfortable with, and that's fine. Stay away.
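Here's a minimal sketch of that first trick, with made-up names (the size has to be a power of two so the AND mask and the free-running wrap stay in agreement):

#include <stdint.h>

#define RING_SIZE 1024u                    /* power of two */
#define RING_MASK (RING_SIZE - 1u)

struct ring {
    uint32_t head;                         /* free-running; wrapping is intended */
    unsigned char data[RING_SIZE];
};

static void ring_push(struct ring *r, unsigned char byte)
{
    r->data[r->head++ & RING_MASK] = byte; /* AND replaces the branchy index reset */
}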

They do? Last I checked, Linus Torvalds was very uncomfortable with his kernel's size, mainstream web browsers are hated as bloated pieces of shit, and the suckless movement is making some great software with very minimal codebases.
Nice in theory, but they've had a mixed track record aside from seL4 and QNX.
That's part of its appeal: it doesn't rely on architecture-specific features and works on practically anything, but it's still vaguely low-level enough that reasoning about performance and underlying hardware features/quirks is fairly easy. Sure, the language definitely has its flaws, but it's popular in embedded circles and almost everywhere else for a reason.

With a CPU running an opaque hypervisor coded in C that's vulnerable to remote exploits
Which is buggy and vulnerable to remote exploits
Which are buggy and vulnerable to remote exploits
And buggy and vulnerable to remote exploits
Or to an eavesdropper that is able to run remote exploits because your C-based network stack is buggy and vulnerable to remote exploits
It's also the most buggy and exploited
Which is exploitable
Which is also exploitable
Either that, or the full list of the emails and passwords in their database because their predominantly C-based LAMP stack is buggy and vulnerable to remote exploits
And thus is buggy and vulnerable to remote exploits
Good thing I don't use a client written in C, that would be buggy and vulnerable to remote exploits
Your spell checker is probably buggy and vulnerable to remote exploits
All had to be patched the hell out of to prevent them from being buggy and full of remote exploits
I'd do it with a secure protoco-
Oh, you're serious. Let me laugh even harder.
Which are buggy and vulnerable to remote exploits
That must be the part that's buggy and vulnerable to remote exploits.
Which are ancient but still need to be patched every odd week because they're buggy and vulnerable to remote exploits.

tl;dr while I'm impressed by the elegance and simplicity of sand castle design, I think there's much more of a future for concrete in the construction industry.

C's weaknesses are now advantages. The enemy isn't outside the gates anymore, it's inside and is pushing you out. If you lock everything down now, you'll soon find you've helped lock yourself out of your own devices with Google and Microsoft spying on you from the safety of their flawlessly secure hypervisors.

...

You won't have a choice. They'll be running above your Linux distro, like Minix does today.

Bullshit, I can buy whatever hardware I please and I avoid (((Intel))) like the plague.

C is great for systems programming, but shit for applications. Not sure what your point is.

Not if I use Haiku.
True only for Unix-like systems. Mac OS, while it does include FreeBSD stuff written in C under the hood, has produced most of its "userland" in Obj-C. Windows has produced most of its "userland" in C++ and C#.
Not really sure how that's relevant. Most of, say, Firefox, is written in C++. Yeah, there's some C. It's being swallowed by Rust code...
Sure.
I hope you know that Apache and nginx aren't the only web servers. It's true, however, that a lot of web servers are written in C. People looking for performance like C. Of course, people looking for vulnerabilities in networking software also like C. So, there's that.
Yeah, yeah, same as above.
OpenSSH is in C. There are many ssh v2 clients and daemons. Some are in C. Some aren't.
Indifferent.


ftfy. seL4 is an interesting research kernel with some potential, but that's all so far, in spite of morons constantly memeing it here.

Supposedly Plan 9's take on C is pretty good, although I haven't looked into it much aside from its threading library and Go-style #include restrictions.

See below for how to do something like that in a language that is not C. The example uses an array, but of course you can extend it to records, records with arrays in them, or arrays of the same record type.
with Ada.Text_IO; use Ada.Text_IO;

procedure Main is
   type Bit is range 0 .. 1 with Size => 1;
   type Byte is range 0 .. 255 with Size => 8;
   type Op_Type is (Not_A_Faggot, Faggot) with Size => 1;
   type Op_List is array (1 .. 8) of Op_Type with Pack, Size => 8;

   foo        : Byte := 2#1111_1111#;
   This_Board : Op_List with Address => foo'Address;
begin
   for op of This_Board loop
      Put (Op_Type'Image (op));
   end loop;
end Main;
-- Prints FAGGOTFAGGOTFAGGOTFAGGOTFAGGOTFAGGOTFAGGOTFAGGOT as expected

My kernel is written in rust.

Wow, it's not every day you see a person who knows the Ada language.
Do you code professionally in Ada or is it just a hobby? Is it worth learning for an interesting job (military or something else)?

Sry, I expected just the UNIX hater with random quotes from email archives in the code box, shilling something like LISP or Java or whatever memelang comes to his mind.
Also

Hobby for now, still learning how to use it properly. If you seek the easy road to employment, just go for C/C++.
I think, however, that Ada, or SPARK, is a much better way to program embedded systems than C/C++.
Somewhat related: if you look at Structured Text as used for PLC programming, the language is derived from the Pascal/Algol family, same as Ada. I think they did that for a reason.

Attached: Vade retro Satanas.jpg (275x183, 5.25K)

Why not JavaScript, you low-level luddite?

It's literally slower than GCC at -O0. And produces slower code.

...

I had a teacher who used to be in the military and worked at a nuclear control center. He unironically says Ada is his favorite language and he loves it.

#include <stdbool.h>

inline bool carry(unsigned x, unsigned y)
{
    return x + y < x;
}

every compiler worth its salt will recognise this idiom if optimisation is turned on and use the carry flag, you retard
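And if you'd rather not depend on the optimiser spotting the idiom, GCC and clang also expose checked arithmetic directly; a small sketch (these builtins are compiler extensions, not ISO C):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    unsigned x = 4000000000u, y = 500000000u, sum;
    /* Returns true if the addition wrapped; the wrapped result is stored anyway. */
    bool carried = __builtin_add_overflow(x, y, &sum);
    printf("sum=%u carried=%d\n", sum, carried);    /* sum=205032704 carried=1 */
    return 0;
}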

...

OMG it's $CURRENT_YEAR, I can't even...

user you're making me afraid.

I love how you list a bunch of useless tripe like DNS, email, and FTP as applications of C. The only reason I still use C is because I haven't switched out my kernel yet.

No you retarded nigger. C is popular because it's popular, Ada is not popular because it's not popular. No other reason. Ada is not any more verbose than C (conforming C, or at least portable C, not your chicken scratch)

even as a trolling attempt this is fucking retarded

HAHAHAHA
He thinks he's elite because he knows about the asm keyword, but he isn't even aware of the concept of portability in C.

Ada's verbosity and high cost to develop in were a fax-machine-era meme for at least a decade, kiddo. While Ada 2005 tried to address that, 2005 was long after everyone had chosen to deprecate Ada.

Who said anything about portability?

No.

Once again, the carry flag cannot be checked portably. For example, x86 and ARM differ on when the carry flag should be set. Unsigned addition is the same but unsigned subtraction differs.
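Which is why the portable question in C is not "what did the flag do" but "did the operation wrap", and that you can ask directly; a minimal sketch:

#include <stdbool.h>

/* Unsigned subtraction x - y borrows (wraps) iff y is larger than x,
 * regardless of how the target architecture models its carry flag. */
static bool usub_borrows(unsigned x, unsigned y)
{
    return x < y;
}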

It's literally the inverse on ARM of what it would be on x86. Who the fuck cares, as long as the code is portable and compiles into optimal instructions? Some ISAs don't have a carry flag at all, anyway.

Actual programmers.

Actual programmers know the architectures they will be programming for, take the time to write source that implements the necessary precautions for those architectures, and get the job done without complaining.

No, actual programmers write portable code so it continues to work on new hardware. Would be a sad tale for RISC-V if all the open sores didn't work on it.

Who gives a shit about a new arch that doesn't even have a compiler written for it? Why should someone write for an arch that didn't exist when they wrote the source? You are moving the goal posts.

bullshit
# CFLAGS='-O0 -std=c99 -pedantic'
env CC=gcc time make -s
real 5.950000  user 5.470000  sys 0.430000
make clean
env CC=tcc time make -s
real 1.610000  user 1.440000  sys 0.150000
I prefer working binaries to fast ones. Both gcc and clang perform unsafe optimizations in the race to the bottom to produce the fastest binaries.

yeah, a meme among a bunch of "software engineers" who see a line of Ada and be like "XDDD what is this zomg it's not like !!!!!!"

actual "programmers" test something once on the machine they're running on and ship it. they aren't even aware that a concept like portability exists in C. and why should they be? the schools don't even teach this either

you can't check carry in C, you oblivious shithead. and the only time you'd bother to do it in assembly is in a highly optimized piece of code, not every single time you do arithmetic.
now this leads to the mystery of how a retard like you programs C. if you have assembly shit everywhere, how will you ever run it on anything other than your x86 macbook? why even program C in this case when you can just program assembly? i guess it's faster to scribble down most of the code in C, but then this leads to the next question: how are you dealing with UB? do you even know what that is? probably not, since you don't even seem to understand portability beyond arguing that it's not needed

refer to the disaster of the x86 32-64-bit transition

cve.mitre.org/cgi-bin/cvekey.cgi?keyword=buffer
Th-thank you, C.

I am not arguing for portability. Haven't and won't. If a programmer wants his program to run on multiple architectures, he knows the architectures he wants to run it on and writes appropriate source for them. You think Java compilers or Rust compilers or Lisp interpreters aren't susceptible to programming errors like C or assembly is?

Java compilers have errors, but that shouldn't be hard to fix. The point of using Java is that the programmer targets only the JVM and the JVM does the hard work of translating to a specific architecture.

Someone has to write the JVM for the specific arch.

All that traffic goes over a switching network written in COBOL. Checkmate C-theist.