I keep seeing posters on here trying to be galaxy-brained by writing stuff in Assembly and ending up producing horrible code. I don't blame them, because pretty much everything about Assembly online is trash or unparsable.
So I've written Hello World properly, using modern interfaces & good code hygiene.
I'd rather hang myself than write x86 assembly. nice try intel.
I weep for what this board once was
Thanks OP. I don't program too much assembly, but I am sure there are nuances to it like every other language.
macro shits. Try a monad ya dingus.
How would you break these down into monads?
I mean, it's kinda cool if for some reason one really needs to write in assembly, but it still looks like dogshit compared to any normal language. Seriously, that print function is not something anyone should ever write by hand.
I presume he's concerned about duplication of work, which is understandable. But if you're in a situation where you're needing to write Assembly for speed, having a custom print may be helpful.
Try to explain what that function does exactly, in a way that allows a non-programmer to understand it.
I've been wanting to learn assembly too. Do you have any suggested resources/books for modern assembly OP?
Clearly I'm not going to start writing shit in pure assembly, but I would like to learn how, to at least be able to inline parts in C.
Most of the shit I find online is out of date.
from the anchored thread
Clearly some things in the C standard library are full of bloat; writing parts in assembly directly can result in dramatic performance improvements.
I learnt a ton from a guy on Zig Forums a while back, the rest was mostly looking at the output of compilers (the -S flag will spit out Assembly for you) and a lot of googling, scrolling through obscure forums etc. Books and dedicated sites seem to be a dead end unfortunately.
In my opinion, duplication of work is a good method for learning how something is done.
It's pretty clear that the macro, which is different from a function, especially in assembly, sets 4 registers in the CPU that hold the number of the write system call, the place to write to, the message to write, and the length of that message. After preparing the computer for this system call, it pushes a return address onto the stack for a function that is run after the write system call is made. The system call is then made, and on return it jumps to where the user function is called.
Personally, I don't know if I would've used macros for it or not.
The C Standard library isn't standard across implementations unfortunately.
store all registers -> call IO -> restore all registers
There's no need to restore any registers here.
That's not what you're supposed to do, autismo. You're only supposed to touch the assembly if your code has proven to be too slow even after every possible language and compiler optimization.
Please point to the post in which I said otherwise.
Yeah sure thing bucko.
What did you think lea ebp, [esp-12] does?
Great. And how are you supposed to fix slow code with assembly if you've never seen it?
By not LARPing and studying SICP/CLRS for at least one hour every time you fantasize about rewriting standard library functions like print in assembly.
1. There is LARPing 2. There is a cause for LARPing 3. There is a way to the cessation of LARPing 4. The ignoble 8ch path does not lead to the cessation of LARPing
the 4 noble truths of Zig Forums
this is not how magic and revolutionary things happen
What are you blathering on about, not every program is going to have a magical solution where the hotspot can be solved with an optimised algorithm.
I'm saying that you should understand the fundamentals before fucking with shit you clearly have no idea when to fuck with or leave alone.
How are you supposed to understand anything if you don't write any code?
I've written fizzbuzz in over 70 languages, designed over 15 logos for various projects on Zig Forums, and have read SICP 3.2 times. Show some respect kiddo. Now optimize that print function for us, optimize it so well that it prints before you give it any input.
I don't know why people don't absolutely hate programmers. You guys are destroying society with the shit you create. You guys have no principles, no dignity and no respect. We're fucked as a whole because of AI alone, and that's apart from all the other shit you faggots make.
user, what are you talking about. Nothing you're replying to has anything to do with the issue you're discussing. Could you calm your inner sperg and try to actually explain what you take issue with in words?
AI won't be the overlord. The overlord will be the person or people who use AI to gain power. This is the ultimate goal for the development of AI.
'in words' is an expression used to imply that a person is speaking in a convoluted or confused manner and should rephrase in order to clarify. Is English your second language?
Was there honestly a time when it wasn't a dumpster fire?
Do you have a brain or do you need to write a program that will analyze what I said and break it down into smaller parts so it's easier for you to understand? Just stfu and let your computer do the thinking for you.
This is cool. Are you using the GNU implementation of stdio.h?
Yeah, 2014 Zig Forums was awesome.
No I don't have a brain. Are you going to explain what you meant?
Wow. I wonder what they do inside of puts() that slows it down that much.
What the fuck did you just fucking say about me, you little bitch? I'll have you know I graduated top of my class in every Microsoft certification, and I've been involved in numerous secret raids on Apple and Xerox HQ, with over 300 confirmed poachings of employees. I've written fizzbuzz in over 70 languages, designed over 15 logos for various projects on Zig Forums, and have read SICP 3.2 times. Show some respect kiddo. As we speak I am contacting my secret network of hackers across the USA and your IP is being traced right now so you better prepare for the storm, maggot. Now optimize that print function for us, optimize it so well that it prints before you give it any input. Or I'll compute some floating points with precision the likes of which you've seen.
printf is just as slow, it seems to make multiple write syscalls, one on every newline, instead of just a single syscall for the whole thing.
So was Zig Forums. But it was never going to last. Regression to the mean is the worst law of our world.
I've never been a fan of Zig Forums, but they were a lot more well behaved back then.
I think the userbase of Zig Forums has totally changed but I'm pretty sure Zig Forums is mostly the same. At least I see the same posters for years.
Weird. Sounds like a fun project to optimize.
Are you blind? There's a couple of the same old posters but nowadays we're full of /g/ refugees and /tv/-tier shitters.
x86 is ugly too. You wouldn't be writing it if you were just going off aesthetics. You probably write it because your CPU interprets it. Your CPU likely is x86_64, so shouldn't that be what you use? It's still messy.

Additionally, your exit macro should take an argument for the exit code. Your "print_clobber" is not an abstraction over write, or else you would call it write. Notice that print_clobber always prints to stdout. Also, why don't you call exit, exit_clobber? Not in this program, but when you go to reuse this code later, it might come back to haunt you.

Did I say it wasn't a macro? Conceptually, macros are functions.

But if there were, you would have to manually store them somewhere, and then the subroutine you pass in for the "callback" needs to know where to restore them from.

Well, it's not faster if you need to preserve registers anyway. This is why it is better to work in a higher-level language, where you can abstract over register allocation and saving/restoring registers.

No, I'm not a Ctard.
This is why it seems like the ideal thing to do is use inline assembly where you feel like trying to optimize. And do I need to worry about any of this shit with the top code of
I assume not. I don't give a fuck what's going on with the registers after asm("syscall"); I need to confirm, but I'm pretty sure none of the shit you mentioned matters with inline assembly; it's still getting pumped through the compiler, and the compiler is dealing with it.
gcc.gnu.org/onlinedocs/gcc-4.4.2/gcc/Explicit-Reg-Vars.html "Local register variables in specific registers do not reserve the registers, except at the point where they are used as input or output operands in an asm statement and the asm statement itself is not deleted. The compiler's data flow analysis is capable of determining where the specified registers contain live values, and where they are available for other uses. Stores into local register variables may be deleted when they appear to be dead according to dataflow analysis. References to local register variables may be deleted or moved or simplified. "
x86 is ugly relative to other Assemblies, but it's nicer than x86_64 and still runs on my CPU so that's what I do for fun.
No, it shouldn't, because there's never a situation where I'm returning anything other than 0.
I'd say that writing to a special fd counts as an abstraction. This is mostly a semantic issue though.
I'm not writing a macro library, I'm refactoring code. You don't code preemptively when refactoring else you end up with bloat.
No they aren't, a function adheres to a calling convention.
No, I'd just tweak my macro to push ecx and edx (which are the registers the kernel clobbers); the kernel does the job of restoring those registers.
Sure, but I don't.
Why are you trying to compare Assembly to a high level language? They have different use cases.
Then what are you? A Rust dev? Go?
it only does that when stdout has a tty
Do you mean when stdout is a tty? Wouldn't that still be the case for a hardware terminal attached serially? It would still need to print to /dev/ttyS# instead of /dev/tty#.
Read a bible, intelaviv kike.
First off, it doesn't matter, since you are abstracting over the exit syscall, which takes an argument. Secondly, the write syscall can fail, so you might want to exit with an error in that case.

There are different levels to the abstractions. You can have one that just wraps the syscall itself, and one that is a special case of the other for printing to stdout.

So what? It's good practice to not add special cases everywhere. As you said before, "stop thinking like a C programmer." There are no functions in x86 assembly.

Not in this specific case, but if you were expanding this, making it more complicated, you might need to.

Assembly has zero use cases. Every use case it has is better suited to assembly + some other abstractions that make it easier to work with.

No, I did not develop the Rust programming language, nor am I a Go. I mainly program in Haskell and Idris.
Acktually you should write everything in Lisp and Ada, languages that are barely used by anyone except the US government, which hasn't won an armed engagement since WWII.
The special cases aren't 'added', they are a natural consequence of it doing its job, it's good practice not to add bloat.
Uh, yeah there are. What do you think CALL and RET are for?
Yes, and I would add it as needed.
This sentence is contradictory.
I don't know what you mean by that. The first half of that disagrees with me, while the second half agrees.

Then why are you adding bloat to your code to optimize? Clearing a register with xor takes less space and breaks dependencies.

Uh, no there aren't. CALL and RET are for procedures, not functions. Feel free to look at any documentation and it will back up the fact that they are for procedures.

Let me clarify. If I were to have a language that was x86 assembly + if statements, there really wouldn't be a good reason to just use x86 assembly.
this but unironically and without Lisp. Interesting letter. if life were an MMO and I had to make a choice of one static and one dynamic language to stick with, I guess I'd go with Ada and Lua. Without that kind of artificial constraint, it's easier to just learn a bunch of shitty dynamic languages and use whichever one has the most investment for a given task.
Oh, shit! Did you really get a letter from Uncle Ted?
this. these people are the reason why google and other bad things exist. they just do their jobs like robots without thinking about what the thing they create will be used for.
Ada was literally written by a kike and its used by the most kosher nation on the planet to kill bad goy mudslimes.
the language is good though, so his plot failed. Now you can use the enemy's own tools against them. "the grass on the other side of the fence is brown! I don't even want to go over there! This dick in my ass is better than the dick that's probably over there! pls sir can you spare one of your first few amendments pls sir I'm trying to support my family ok god bless" I'll admit we generally kill the wrong Muslims, but what does this have to do with Ada anymore? The NSA uses GNU products in C you know.
What's with all the ada shilling here lately? The government is literally the only place that uses it. And even they probably wouldn't shill it like you faggots do as if it's in any way better than Rust. LARPing as a glow nigger ada programmer is a really strange hobby tbh. It's kind of like those guys who stand outside the 7/11 in chinese ebay camo claiming they fought in the gulf war.
It's not bloat, since I'm not changing the actual code that gets output. I'm refactoring in order to make the code more readable.
I meant to say procedures then.
Wrong, there are optimisations you can make in Assembly with regards to branching that you cannot do with a generic solution.
You can still use regular assembly. When it's convenient to use the provided if statement then you use it, else use the old way. It's similar to comparing C with only goto and no control flow statements and C without goto.
Why do you care that the government uses Ada? Ada is cool even without government approval.
I'd be inclined to believe part of that is using an assembler to start with.
I largely agree. It is in a sorry state.
Your program is no longer available to me.
If you'd care to see my thoughts on this topic, here is my thread:
If you have any questions, I can answer them here or there.
It's not, otherwise it would not take 4 lines to explain a simple print. That's what is wrong with that code, 90% of the complexity is implementation details. Also "macros are not functions" is semantic autism in this case.
Also, you need to test many different lengths to see how well things scale.
What makes you think I care? I'm not the one who larps as a glow nigger ada programmer. And no, it's not. Without government funding and continued support in the form of outrageous government-priced license fees ada would not even exist.
Oh, now you are arguing semantics? Get lost, dude. user (you) asked for an explanation for someone who doesn't program. It's an entirely different explanation from one for someone who does program. The only thing missing from this code is comments, like explaining that "4" is the syscall number for write, or that "1" is the file descriptor for stdout.

You might want to get some Tums, because you are green with jealousy. And you are saying I am arguing semantics. Don't walk behind this guy, because it stinks.
Ada was (and is) designed by a committee.
GNAT is a free Ada compiler and it's free software. The compiler is free and so are the standards. So what's your point, nigger?
The entire point was, something as conceptually simple as print has to be so simple that even someone without any tech background can understand it. OP's print has side effects, which is anything but simple. Stay salty tho.
If I ever want to try doing assembly, should I just start with 6502?
Side effects don't matter until you have a situation where they do. This is autism.
I would start with Z80 or ARM.
I am not the one salty. You are the one moving goal posts.
I tend to try to avoid macros, in part because they're a bit of a heavyweight item in my toolbox.
I'll use your set macro, as an example. If you really wanted to have a macro that set a register "in the most efficient way possible", you'd effectively have a tiny compiler rather than this simple macro.
That is, to truly set a register in the most efficient way requires understanding the state of the other registers, to see if you can modify their values with a shorter instruction, say, as an example. If you're programming at this level, you'd also gain by choosing numbers that the machine can manipulate more easily to start with, ARM being an example where this could be particularly important.
Those are my thoughts, anyway.
In more honest words, that means they lead to hard to reproduce and debug problems. Putting side effects where they are not needed is pajeet tier and will fuck you over even in toy projects.
PIC microcontroller assembly is pretty simple. Its use is limited, though, to PIC microcontrollers.
If the point is to write the code in the most efficient way as possible then it is needed.
tl;dr shithub user pooinloo tries to out-galaxy-brain a galaxy brain by fucking up a simple hello, world program in (32-bit!) x86 asm with a bunch of irrelevant scaffolding
I'm not sure if this is true. I think spending an extra byte or two in your icache is worth not adding a dependency on the value of another register meaning we have to wait on previous instructions to finish before we can set.
That I care more about size efficiency than speed efficiency is an important detail.
Anyway, my point was writing maximally efficient machine code with macros can require increasingly complex macros to properly encode the knowledge a human programmer would have.
Making your code readable and maintainable is not 'irrelevant scaffolding'.
good job op
Sorry, I did that wrong. Next, throw portability in there by writing an assembly file that assembles for both ARM and x86.