16 hours of troubleshooting and debugging on a microcontroller to realize that C requires a cast before bitshifting to...

CUT MY LIFE INTO PIECES

Attached: 0f6099e333f6da977d23997c97156049b713298d_hq.gif (500x281, 1000.27K)

I know you're angry at your own incompetence but this doesn't need its own thread.

WHEN YOU DO OPERATIONS ON AN INT AND A DOUBLE THAT RESULT IN A DOUBLE VALUE AND THEN ASSIGN IT TO A DOUBLE, C AUTOMATICALLY DOES THE CONVERSIONS FOR YOU. YOU DON'T HAVE TO EXPLICITLY CAST THE INT TO A DOUBLE PRIOR TO THE CALCULATION (EXCEPT FOR INT-ON-INT DIVISION). HOW IS IT NOT REASONABLE TO ASSUME C ALSO DOES THIS FOR OTHER DATA TYPES?
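To spell it out, here's a minimal sketch of what I mean (made-up variable names, nothing from OP's code):

#include <stdio.h>

int main(void) {
    int    n = 3;
    double d = 2.5;

    /* n is implicitly converted to double before the multiply;
       no cast needed for this to come out as 7.5 */
    double product = n * d;

    /* the one exception I meant: int-on-int division is done as
       integer division, so the fraction is gone before the
       assignment ever converts the result to double */
    double bad  = 1 / 2;      /* 0.0 */
    double good = 1 / 2.0;    /* 0.5 */

    printf("%f %f %f\n", product, bad, good);
    return 0;
}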

Also, this board is a little slow so having threads that naturally find a subject (e.g. bad programming stories) is good. Too bad you're too incompetent to understand that was implied :^)

what µC were you using

checked, but you're wrong
if you wanted discussion you could have started a thread about Electronics or MCUs

Should have made a programming stories thread then

Attached: 12641.jpg (313x313, 11.37K)

Teensy dev board - ATmega32U4

No sane language should do implicit type conversion IMHO.

Why don't you just use assembly and/or Forth? C is for the niggers of Unix.

This was an insane thing to think in the first place, OP. Think it through.

I was going to insult C, but
That is your fault. You should have been able to pinpoint the issue using a debugger or something else in a much shorter amount of time.

Name one Forth program that I depend on every day. Protip: you can't

FTFY

That's the definition of weak typing. What are you trying to say?

This is the problem.

SWIFT transaction systems that keep the entire world from collapsing and rebelling against (((them))).

That they shouldn't exist, duh.

So you were saying the exact same thing as the post you were "fixing"?

A strongly typed language can do some type conversions too, but mistakes are caught at compile time instead.

If the type conversion is implicit then it's weak typing.
A lot of languages that are otherwise strongly typed do use weak typing when it comes to integers and floating point numbers, though.
Haskell has static strong typing. C has static weak typing. Python has dynamic strong typing. Javascript has dynamic weak typing.

Wrong. A strongly typed language can use implicit type conversion. A strongly typed language just means that variables must be declared with data types.
So do many languages with shitty type systems, such as Java. The difference is that Haskell has full type inference.

That's called static typing. Static and strong typing are different things.
Static means types of variables don't change. Strong means types of values don't change implicitly.

Wrong again. Static typing means that type checking is done at compile time.

Self-correction: this is a consequence of strong typing in a static language. Strong typing means that data types and their interactions are well-defined. Just to pull back to the original remark:

This statement still makes sense, in that a strongly-typed language can do implicit type conversion.

Attached: 1497732035251.jpg (578x461, 28.69K)

I'm making a demo for the C64 and I've made this mistake at least 3 times already; I always laugh when I finally track it down.

The C64 has 64 KB of memory and the video chip has a 16 KB window into that memory; you can select which part of it you want it to access (VIC banks). By default it's the first bank ($0000 - $3fff).

I start my code at, let's say, $1000 and set the VIC bank to $4000 - $7fff. In that VIC bank memory area there are graphics and sprite pointers which are constantly changing (if you use sprites, of course), and these 3 or 4 times my code grew past $4000 and the sprite pointers were overwriting my code, causing weird things to happen or a crash. The worst case is overwriting the code with zeros, because opcode 0 is the break instruction and it jumps to your interrupt handler. So I set up some interrupt handler for a specific line on the screen, then it gets executed a couple of times during the frame and you just don't know what the fuck is happening. Funny shit.
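If anyone cares, here's roughly how that bank switch is done, sketched in C for the thread's sake; the only real address in it is the standard CIA-2 register at $DD00, the rest is made up:

#include <stdint.h>

/* CIA-2 port A: bits 0-1 select the VIC-II bank, inverted.
   %11 -> $0000-$3FFF (default), %10 -> $4000-$7FFF,
   %01 -> $8000-$BFFF, %00 -> $C000-$FFFF */
#define CIA2_PRA (*(volatile uint8_t *)0xDD00)

static void vic_set_bank(uint8_t bank)   /* bank 0..3 */
{
    CIA2_PRA = (CIA2_PRA & 0xFC) | ((~bank) & 0x03);
}

/* the bug above: code starting at $1000 grows past $4000 after
   vic_set_bank(1), and the sprite pointers at the end of the
   screen RAM inside that bank start stomping on the code */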

It could be yours, if only you wrote it.

So, which was it? Were you compiling without -Wall, using some obscure or old compiler, or did you flat-out ignore the warnings?

C is a strongly-typed, static language. It's not dynamic, so it won't cast for you.

This would have never happened had you been MISRA checking your code!!

It's like you posted without reading the thread first.

You're trying to fit terms like "strong" or "weak" typing to a shit "language" that's "gee, I don't know, whatever the PDP-11 compiler did." The words "strong" and "weak" aren't really precise terms anyway because people mean different things when they say those words. Whatever word is used to describe C will give that word a bad reputation, so I call it a UNIX language.

I've been confusing my compiler class with this one for a while now. I pull it out any time someone counters my claim that C has no block structure. They're usually under the delusion that {decl stmt} introduces a block with its own local storage, probably because it looks like it does, and because they are C programmers who use lots of globals and wouldn't know what to do with block structure even if they really had it. But because you can jump into the middle of a block, the compiler is forced to allocate all storage on entry to a procedure, not entry to a block. But it's much worse than that, because you need to invoke this procedure call before entering the block. Preallocating the storage doesn't help you. I'll almost guarantee you that the answer to the question "what's supposed to happen when I do ?" used to be "gee, I don't know, whatever the PDP-11 compiler did." Now of course, they're trying to rationalize the language after the fact. I wonder if some poor bastard has tried to do a denotational semantics for C. It would probably amount to a translation of the PDP-11 C compiler into lambda calculus.

Based UNIX hater fag. Keep triggering the UNIX weenies on Zig Forums.

Strong typing means all type errors are caught. C is weak because of void* and unions letting you do unsound things.
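For instance, the void* half of that (names made up): C accepts the whole round trip without a peep.

#include <stdio.h>

int main(void) {
    double d = 3.14;
    void *p  = &d;    /* any object pointer converts to void*...        */
    int  *ip = p;     /* ...and back to any other object pointer,
                         implicitly, no cast, no warning by default      */

    /* reading *ip reinterprets part of the double's bytes as an int:
       no compile error, no runtime check, just unsound garbage */
    printf("%d\n", *ip);
    return 0;
}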

RRRRRRREEEEEEEEEEEEEEEEEEEE

Yeah. C and UNIX are so shit they're literally used everywhere.
- Posted this from my UNIX-based OS written in C using libraries written in C to a RISC server with its HTTP server written in C to now respond to your browser written in C on your OS written in C.

Attached: serveimage.jpg (900x810, 51.93K)

...

are you going to rewrite and reimplement all the software those routers are running between your computer and the server?

The only "UNIX-Based" OS currently in use is BSD as Linux is not UNIX-Based

Install Redox OS on it and copy+paste the assembly as a kernel module from the FOSS code of the Linux kernel, since that is what they are optimizing down to on routers whose whole purpose is fast transfer of bandwidth. Done, and that's assuming Redox doesn't already have something like this.
Compile the Rust library from C, then recompile the Rust library with itself, like bootstrapping a compiler. Or make an assembly implementation for said architecture in Rust. Should be easier than dealing with Spectre on x86/64, middle-aged PowerPC, some MIPS, and all of ARM being instantly pwned by said exploit. You reduce the attack surface very greatly by completely eliminating a few classes of security bugs. Imagine the costs if your router was hacked to MITM and you were framed for it, or something worse. You would pay out the ass in lawsuits compared to the MAYBE 60,000 dollars it takes to pay a dev for a year to upgrade your infrastructure to Redox or something more secure.

Linux and the GNU coreutils are very UNIX-like, more so than "official" UNIX OSes.

...

Incorrect. You are forgetting Solaris, OpenIndiana, HP-UX, AIX, and undoubtedly others.

But the guy you replied to probably isn't using any of those. Maybe one of the BSDs.

That isn't even to mention the profit losses your company would take just from being defamed as "insecure" "botnet". Like sure, if you are an ISP you are still getting kickbacks from the (((Government))) to keep everything backdoored, but you will lose out big time on profit in any other business.

Why couldn't it be maybe 120,000 or 1,000,000 dollars? How long would it take to put all this crap into operation? What are the costs of taking down the infrastructure during this transition? What are the losses incurred by losing customers during this downtime? How many times have routers been attacked? How many times has anyone's personal information been leaked because of these attacks? How many lawsuits were there?

'clone' =/= 'based'

It depends on the company and the size of the business. For a large ISP this would take like four years in a best-case scenario. For a small business it could take a week. So it just depends on the company's infrastructure. But regardless, you would save money in the long term on both upkeep and security/the reputation of the company. If you want examples of companies getting shit on because of a hack, look at the effects on PayPal years ago, or Walmart. Sure, they still became profitable eventually, but PayPal has government subsidies for that, and Walmart has lower profits than it would have, yet it's still profitable because monopoly.

You could do type casting in a strongly-typed language and cause it to fail at runtime.

You will need to show me how routers on the Internet are "small" companies. I have my doubts about that.

A small company, such as a coffee shop or a small town's library, may have a router for public access that is insecure and could damage that specific business's reputation if their customers keep getting hacked. Either they won't use the wifi, or in the case of the coffee shop, they will go to (((Starbucks))) instead. Or, since you are fishing for an example of a node that reaches a centralized server via hops, say a data provider such as Cloudflare was hacked and all its nodes/routers were openly compromised, like the recent hacking of the 1.1.1.1 routers for DNS. People stop using the service, and Cloudflare becomes less trustworthy in the process, which costs them future business deals with clients looking for cloud services. Or as routers/nodes for local services, like how 8ch is one server but is served from many different routers in the Cloudflare network.

You are confusing what "router" means here with a specific piece of hardware. ISPs don't have connections to every single other network on the planet, so they connect with routers that transfer data between these different networks. Get it now?

Yeah, the catching can take the form of a runtime error, like a bad base-to-derived cast exception.

No, it just happens to have an overloaded operator that takes two doubles and it feels safe casting that and returning a double.

Where did you get the idea that it was "being operated on in dynamic memory"? What does that even mean? You sound like the "soldering CPUs" kid.

Attached: 5c331d0c35c4bb6c660b41175fad318f7c6165b510fca43110f2ce934e9359ac.png (442x472, 323.88K)

Programs don't just change data in place as if it were some kind of big file. CPUs have to load that data into memory and then into registers in order to operate on it.

Dynamic memory, as in, there was some variable-length section of memory that was used to temporarily store variable data.

I'VE REACHED MY LAST RESORT

You sure sound like you're a fucking retard

What is that even supposed to mean?
Do you think the compiler should somehow magically know when your int overflows, and put it in a bigger data type?
That's not how shit works.
The bitshift operator in itself is just a function that takes an int and returns an int.
The compiler just looks at the function signature and reserves enough space for the returned type to put the result into.
If you truly think the compiler could somehow magically know beforehand how big the return type should be, according to whether the result overflows or not, then you are just clinically retarded.
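To make it concrete, here's a guess at the kind of thing that actually bit OP on the ATmega32U4, where int is 16 bits (the function and variable names are hypothetical, the promotion rules aren't):

#include <stdint.h>

uint32_t pack(uint16_t hi, uint16_t lo)
{
    /* bug on an 8-bit AVR: hi only gets promoted to a 16-bit
       (unsigned) int, so shifting it left by 16 shifts every bit
       out (formally undefined) and the high half is lost */
    uint32_t bad  = (hi << 16) | lo;

    /* fix: cast BEFORE shifting so the shift happens in a 32-bit
       type. On a PC, where int is 32 bits, the first line usually
       happens to produce the same bits, which is why this hides
       so well */
    uint32_t good = ((uint32_t)hi << 16) | lo;

    (void)bad;
    return good;
}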


Wew, you really are just a stupid nigger

You should consider killing yourself you fucking waste of space

What can you do with unions?

If you read a union member that wasn't the last member assigned, then that memory doesn't necessarily hold the type you're asking for, which lets you pass around an x-pretending-to-be-a-y and violate type contracts with no error.
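Something like this, with made-up member names; the compiler hands back whatever bytes happen to be sitting there:

#include <stdio.h>
#include <stdint.h>

union smuggle {
    float    f;
    uint32_t u;
};

int main(void) {
    union smuggle s;
    s.f = 1.0f;            /* last member written: f */
    uint32_t bits = s.u;   /* read u anyway: nothing stops you */

    /* bits is now the raw IEEE-754 encoding of 1.0f (0x3F800000 on
       the usual setups), not any uint32_t that was ever stored --
       the type contract was just bypassed, silently */
    printf("0x%08X\n", (unsigned)bits);
    return 0;
}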

One thread had to die because you're mad as fuck at your own incompetence.

What does that even mean? Why should bitshifting affect the size of your data? How would the compiler know when a bitshifted datum should be enlarged and when it should not?
what the fuck

Attached: zmRKd9m.png (407x446, 27.08K)

Oh, when is that useful?

When you want to get raped by UB.

I've seen situations where a union would have been useful, mostly in data processing software that uses some sort of compression scheme involving dark bit-twiddling magic between integers and floats.
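Roughly the kind of thing I mean, I'd guess (toy example, assumes ordinary IEEE-754 single precision): the union is what lets the compressor get at the sign/exponent/mantissa fields so it can quantise them separately.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    union { float f; uint32_t u; } pun;
    pun.f = -6.25f;

    uint32_t sign     =  pun.u >> 31;           /* 1 bit              */
    uint32_t exponent = (pun.u >> 23) & 0xFF;   /* 8 bits, bias 127   */
    uint32_t mantissa =  pun.u & 0x7FFFFF;      /* 23 bits, implied 1 */

    /* a toy compressor might keep the exponent raw and truncate the
       mantissa to fewer bits before packing them into a stream */
    printf("s=%u e=%u m=0x%06X\n",
           (unsigned)sign, (unsigned)exponent, (unsigned)mantissa);
    return 0;
}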

Someone who thinks web developers work on microcontrollers in C is the one who is a fucking retard.

...

But they do. Even since arduino got popular, web dev tier niggers hopped on board the choochoo train. They got their fancy IDEs and their broken C, and see what else they can fuck up next.

...

So you're saying that they take complicated things and make them easy to use?

And you obviously don't know how registers work, because you seem to think the CPU will happily take in two pieces of data with different types and then guess which one you want.

Not only are guesses like this ill-advised and rare, but the processor itself is not generally the one doing them. Different registers hold different types of values and have different instructions they work with. Worse, moving to a "larger" data type (int -> long, for example) will probably require multiple registers to be used together, depending on your architecture. The data itself is meaningless without the representation.

Finally, even in languages where implicit casts can occur, you are always better off forcing it. If you're going to make an assumption, enforce that assumption.

Attached: 3566f3ce0f76bee30cbc3eeec22f9f66ecea5445e65b43be04788a5a0535ca80.png (473x488, 187.55K)

let me try to get this straight.

you had - let's say - an unit_8t, shifted that to the left and thought it would turn into an unit_16t?

wtf? wtf is this Zig Forums? this has to be a troll.

can you pls post your code. thanks.

Yeah, because the CPU also just randomly guesses that you want a float when you multiply an int by a float.... wait no, that's right. There's literally thousands of things that C and the compiler do to make your code transform into machine instructions.

I still don't understand how programs actually execute at any low level, though; the difference is that I actually admit it, unlike the guy talking about "unit_8t"s

he's fucking right though. OP's expectation that a left-shift would change the bit-width of the type he was working with was completely absurd. This is some numeric tower bullshit that Scheme might do, damn the costs, but nothing to expect from C. Just think it through, like I suggested way back in . Fucking use your brain and stay focused on a single subject for more than 5ns. How many questions do you end up with about this operation? Does bit-width extend when you shift a bunch of 0s off the end, or only when a 1 is shifted off? If you care about 0s, can bit-width ever shrink? If char c = 1, what type do you need to store the result of ((((c << 1) << 1)

Yes. What loser fucks up something so simple? What an utter retard!

Small operands do get promoted to int, so maybe that's what OP was dimly thinking of.
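A quick illustration of the promotion (hosted C on a PC, not the AVR case): operands narrower than int are widened before the shift, so a small shift on a uint8_t keeps its high bit -- it just never grows past int.

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint8_t c = 0x80;

    /* c is promoted to int before the shift, so no bit falls off
       here even though the result no longer fits in a uint8_t... */
    int wide = c << 1;         /* 0x100 */

    /* ...but promotion stops at int: stuff it back into a uint8_t
       and the high bit is gone again */
    uint8_t narrow = c << 1;   /* 0x00 */

    printf("%#x %#x (sizeof(c << 1) == %zu)\n",
           (unsigned)wide, (unsigned)narrow, sizeof(c << 1));
    return 0;
}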

Don't program a uC in anything but its native assembly language. C might compile "almost to the same machine code as an assembler program" but almost isn't good enough in that environment. When you're working with 4K of program memory and 128 bytes of RAM, you need to know where every bit and byte goes, and to fully understand your hardware. Anything less results in an amateurish crap job.

Addendum: There is literally nothing wrong with C, used in its intended environment, for its intended purpose, by a competent programmer.

You seem sane. You obviously don't belong here.

You didn't read the thread fam. I already mentioned asm, and also Forth (which btw can take up less room than asm). Then some Unix/C nigger got his panties in a wad "hur dur name Forth program, u can't".

bullshite.

Since when is it acceptable for a language to incorporate two entirely diverse concepts such as setf and cadr into the same operator (=), the sole semantic distinction being that if you mean cadr and not setf, you have to bracket your variable with the characters that are used to represent swearing in cartoons? Or do you have to do that if you mean setf, not cadr? Sigh.

Wouldn't hurt to have an error handling hook, real memory allocation (and garbage collection) routines, real data types with machine independent sizes (and string data types that don't barf if you have a NUL in them), reasonable equality testing for all types of variables without having to call some heinous library routine like strncmp, and... and... and... Sheesh.

I've always loved the "elevator controller" paradigm, because C is well suited to programming embedded controllers and not much else. Not that I'd knowingly risk my life in an elevator that was controlled by a program written in C, mind you...