How much of a real problem are the CoCs and SJWs in the Rust community? I want to learn Rust because linear types are interesting. Will I get a CoC PRed if my open source project becomes popular? Will they attack me if I refuse to merge it?
ATS is that way: ats-lang.org
Although it supports a GC and an ML-style stdlib that requires it, it also has a stdlib that makes extensive use of linear types for memory management (and resource management, like open files). It's the only language I've run into that's just as practical without a GC as with one. It also offers dependent types for compile-time-checked safety, Ada-tier safety without the runtime hits, and it has theorem proving. Rust is like a "1.0" prototype-to-throw-away shitty language that has features which aren't really available for use by the programmer, but are reserved for the language designer. So Rust
has a borrow checker that tries to prove that your memory accesses are safe, which you must satisfy; but in ATS the theorem proving is a feature of the language and you can work with it and use it for your own purposes. And ATS has more mundane stuff, like templates and macros.
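To make the comparison concrete, here's a minimal Rust sketch of the kind of proof obligation the borrow checker imposes (invented toy code, not from anyone's project):

```rust
fn main() {
    let mut v = vec![1, 2, 3];
    let first = &v[0];  // shared borrow of `v`
    // v.push(4);       // rejected: can't mutate `v` while `first` borrows it
    assert_eq!(*first, 1);
    v.push(4);          // fine: the borrow ended at its last use above
    assert_eq!(v.len(), 4);
}
```

You satisfy the checker or the program doesn't build; you can't invoke the prover on propositions of your own the way ATS lets you.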
ATS, however, is the work of one dude, Hongwei Xi, and he's currently rewriting the language (for the third time, if you don't count pre-ATS languages of his like Dependent ML; this time to make it more accessible, as ATS's defaults are relatively burdensome for newbies to understand). But if you're interested in linear types, you're wasting your time looking anywhere else.
Yes of course they will attack you for not merging their CoC. They 'feel unsafe' without a CoC, and your refusal is creating a space of unsafety for them. That you even ask this question shows that you don't know what you're dealing with. Better not to learn.
I don't believe you. Else it wouldn't have a GC to begin with.
ATS is like economics: it's right on the edge of what humans can even understand, such that you need an ultra-supergenius to prepare material that supergeniuses can use to teach the language to geniuses. (If you doubt that this is the case with economics, I'll just say that ATS is not like economics in that nobody can fill their pockets by spreading faulty teachings about ATS.)
But if you use the language for personal projects, you can get shit done. I think products on the scale of a competitive time-series database aren't out of the question. You could also graft a bit of ATS onto a project that's majority some other language, using it only where you're most concerned with correctness and efficiency, and not as concerned with productivity.
It's completely true though. Manual memory management without a GC at all is very easy in ATS: from allocating memory on the stack, to dynamically allocating it and then ensuring at compile time that something must free it, to (this is where the easiness stops) tracking whether memory is initialized and to what. You have to change compilation flags and include the ML-style stdlib to work with a GC, and a reason to do that is "I don't have to deal with linear types". Otherwise you're dealing with linear types.
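A rough Rust analogue of that spectrum (only an approximation: Rust's types are affine rather than linear, so forgetting to consume a value is legal and the compiler silently drops it; the names here are made up):

```rust
fn main() {
    // Stack allocation: no allocator involved at all.
    let on_stack = [0u8; 16];

    // Heap allocation: exactly one owner, and the compiler guarantees
    // the memory is freed when (and only when) the owner is consumed.
    let on_heap: Box<[u8; 16]> = Box::new([7u8; 16]);
    let moved = on_heap;  // ownership transferred
    // on_heap[0];        // rejected: use after move
    drop(moved);          // freed here, at a point we chose

    assert_eq!(on_stack.len(), 16);
}
```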
It was a shitpost, lad, the setup was too good to pass up.
Can ATS be used for core implementation of libraries? I'm at a point where I would really like Python bindings to a high performance implementation of a fairly lengthy personal project. I've been doing it in rust because I lack sufficient Cnility-points to code like a white man. I've already given up on GNAT. I'm afraid of rust being my best option. I'm not going to git gud at pic related, tbh
Of course you can use ATS to do that. ATS is a systems programming language. If you wish, you can write a compiler, an OS kernel, or hardware drivers with ATS.
Have you tried D with @nogc?
D doesn't have a noGC copy of its stdlib. The "no gc" option is basically -betterC, which is nice, but it's not D.
(not yet anyway. D's currently pulling a Nim by adding Rust's core feature as a minor upgrade.)
If Rust is so good how come no one has made a text editor with it?
Behold! Broken lispfag now wanders from thread to thread asking the same question he was asked a long time ago and couldn't answer or solve. We now have our own board ghost.
I honestly don't know who is more annoying, lispfag or you. inb4 the inevitable spergout as you accuse me of being lispfag.
Rust is cancer but I just found amp.rs which seems to check out from a cursory glance. Just goes to show that even cancer is objectively superior to lisp.
How much of a real problem is GRIDS and other STDs in the gay community? I want to learn anal because same sex types are interesting. Will I get a cock raped if my open ass hole becomes popular? Will they attack me if I refuse to poz it?
Reference counting is garbage collection. There is Automatic Reference Counting (which the macfag should have known about), which is compile-time insertion of retain/release calls.
Which isn't what ATS is doing. When you have a linear type, functions that operate on it will either consume it or borrow it. If they borrow it, the caller must either also be borrowing it, or must go on to pass it to something that will consume it. So for a linear string in ATS, for example, there's a universe of non-destructive functions that borrow the string, and then there are some that consume it, and those might produce new strings or they might free() it or whatever. ATS doesn't care what precisely free()s the string; it only cares that the string is eventually consumed by something, and then is never referenced afterwards.
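In Rust terms the same division looks like this (again only affine rather than truly linear, and all function names are invented for illustration):

```rust
// Borrowers: non-destructive, callable any number of times.
fn first_word(s: &str) -> &str {
    s.split_whitespace().next().unwrap_or("")
}

// Consumers: take ownership. This one produces a new string...
fn shout(s: String) -> String {
    s.to_uppercase()
}

// ...and this one just frees it; the compiler only cares that *something*
// eventually consumes the value and that it's never touched afterwards.
fn discard(s: String) {
    drop(s);
}

fn main() {
    let s = String::from("linear types");
    let w = first_word(&s).to_string(); // borrow: `s` survives
    let t = shout(s);                   // consumed: `s` is gone
    discard(t);                         // the new string is consumed in turn
    assert_eq!(w, "linear");
}
```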
I wasn't speaking about ATS, but that sounds interesting. Will give their paper a read.
YET ANOTHER RUST SHILL THREAD
That sounds like what this guy is doing youtube.com
It still is more work than run-time GC, because you must tell the compiler whether you're transferring the object's ownership or adding another client to the object.
And with manual memory management you don't need to specify that for every object that uses the object to be freed; as long as you know execution hitting a certain point in the code guarantees the object will never be accessed by any function again, you free() it there. You also have more control over the execution point at which you take the latency hit of the kernel reclaiming ownership of a set of memory blocks, when there are potentially lots of blocks to be freed. And you can decouple two potentially unrelated concepts: when the data will no longer be accessed, and when it is no longer referenced by any object. With ref counting, if you don't need an object anymore but don't need its pointer to point at anything else, you would still have to overwrite the pointer first before the memory used for the object it pointed at becomes available to the system again, even though you don't actually need to point at anything else.
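The "free at a point you choose" part can be sketched in Rust, where an explicit drop() marks exactly where the deallocation cost is paid (toy example, not anyone's real code):

```rust
fn main() {
    let big = vec![42u8; 1_000_000];
    let total: usize = big.iter().map(|&b| b as usize).sum();

    // We know `big` is never accessed past this point, so we take the
    // deallocation hit here, at a moment of our choosing, instead of
    // wherever a collector or a counter reaching zero happens to decide.
    drop(big);

    // ...latency-sensitive work continues with the memory already returned...
    assert_eq!(total, 42_000_000);
}
```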
there's no reference counting.
if a function borrows a string, then this is not +1 reference. It just means that you still have the string (you must still get something to consume it, and can still pass it to other borrowers) after the function returns.
What do you do if a linear value is added to a data structure you ask? The answer is not "ref counting". It's: you do this either with a borrowing function that makes a copy of the value, or you do this with a consuming function and the caller no longer has the value.
it's also not true that something like ATS couldn't still be smart about when to bite the bullet and free() stuff.
... but, yeah, what's done currently is that you free() stuff all the time and this operation isn't free and could be more expensive than what a GC gets you.
Hongwei Xi (ATS dev) is more concerned with GC's negative performance implications for parallel code.
And this severe limitation is what makes Rust so shitty. You end up having to make tons of copies or use Arc for operations that are actually safe.
I'm coming at it from the C++ perspective, in which "objects" borrow the string. But I imagine it's the same for regular functions.
How else will the compiler know how many functions have borrowed the string other than adding one to an internal counter every time it is borrowed, and subtracting one every time the string is returned?
I mean, sure, instead of decreasing by one and then adding one again when the function is passed around, you could keep the counter the same, but that's just a minor implementation detail.
Well then I don't get what the difference is.
You surely mean it makes a copy of the pointer.
Copying the value itself would be very expensive.
What do you even think we're talking about right now?
What good would a copy of a pointer do when the caller goes on to free or reuse the memory the pointer points to?
What is the difference between a single call to a borrowing function and ten million such calls?
The answer is: there's no difference at all. So there's no meaning to such a counter. The borrowing is over when the function returns.
It would allow the function which borrows the object to have access to it. Or do you think a function can access variables which live in the local scope of another function?
That wouldn't happen, because you are counting how many times the object has been borrowed and given back, and you don't slap a call to free() until the count reaches 0.
Nothing, because to call the borrowing function again it has to have returned first, which would leave the counter at 1 before you call it a second time (1+1=2, 2-1=1).
You do realize reference counting is incredibly slow, right? Imagine calling a function 10 thousand times. With Rust it just does the function 10k times. With RC you have to increment and decrement a number 10k times. Both only free at the end.
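Rust happens to ship both styles, which makes the difference easy to see: a plain borrow involves no bookkeeping at all, while Rc (its runtime reference-counted pointer) bumps a counter on every clone and drop:

```rust
use std::rc::Rc;

// Borrowing: no counter anywhere, no matter how many calls.
fn peek(s: &str) -> usize {
    s.len()
}

fn main() {
    let s = String::from("hello");
    for _ in 0..10_000 {
        peek(&s); // compiles to a plain call; zero bookkeeping
    }

    // Reference counting: each clone/drop touches a runtime counter.
    let rc = Rc::new(String::from("hello"));
    let rc2 = Rc::clone(&rc); // increment
    assert_eq!(Rc::strong_count(&rc), 2);
    drop(rc2);                // decrement
    assert_eq!(Rc::strong_count(&rc), 1);
}
```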
I never said the reference counting needed to be done at run time. C++ can do it at compile time too as long as each time you pass the pointer around you add information on whether you are transferring ownership of the pointer or making a copy of the pointer. I assume you have to include that info too when doing it in Rust, right? It's called "value semantics". I suggest you watch the talk I linked to earlier on the thread.
To expand on that, what I mean by reference counting at compile time, is for the compiler to keep track of how many objects own a pointer at any given point of the code, and at the point in the code where the pointer is not owned by anyone anymore, slap a call to that object's destructor in the compiled ASM output. Also checkem.
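That is essentially what Rust already does. A sketch that makes the compiler-inserted destructor calls observable (the Tracked type and its log are invented for illustration):

```rust
use std::cell::RefCell;

thread_local! {
    // Records the order in which destructors actually run.
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Tracked(&'static str);

impl Drop for Tracked {
    fn drop(&mut self) {
        LOG.with(|l| l.borrow_mut().push(self.0));
    }
}

// Consuming a value: the compiler emits the destructor call in here.
fn take(_t: Tracked) {}

fn main() {
    let a = Tracked("a");
    let b = Tracked("b");
    take(a); // `a`'s destructor has already run by the time this returns
    LOG.with(|l| assert_eq!(*l.borrow(), vec!["a"]));
    drop(b);
    LOG.with(|l| assert_eq!(*l.borrow(), vec!["a", "b"]));
}
```

No counter is involved; ownership is tracked statically and the destructor call lands wherever the last owner dies.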
That doesn't bother me in the grand scheme of things. Processing times of code that is O(n) are not worth my effort compared to code that's bigger than that.
Fucko, my day job is C++; I know rvalues, xvalues, all that shit. The fact is that even something like std::unique_ptr, which in theory can be optimized away, is still totally unsafe for XYZ shitty C++ reasons. BTW, std::unique_ptr is not reference counting: there is no counter. std::shared_ptr is, and std::shared_ptr absolutely has runtime overhead.
Except for the fact that this is the real world, not asymptotic land. Something that takes one operation, plus another operation added on top that also takes one operation, means over 10k iterations you now do 20k operations instead of 10k.
I can believe that, but what proof do you have that the rust (or whatever you're shilling) version doesn't have runtime overhead? Do you have a PoC that compares execution speed of an algorithm written in C++ and [insert your preferred language here]?
I thought this board was against spam