OSX/iOS Memory management

developer.apple.com/library/archive/documentation/Performance/Conceptual/ManagingMemory/Articles/AboutMemory.html#//apple_ref/doc/uid/20001880-BCICIHAB


I found this approach to memory management intriguing because of how much it differs from the standard Linux/Windows use of swap and virtual memory.
The tl;dr is this:
The advantage to this approach is obvious: superior system responsiveness at any given time, avoiding core-OS out-of-memory conditions (and the instability that comes with them) while also improving the life of solid-state storage. The disadvantage is that it doesn't favor many background processes and puts an extra burden on application developers beyond simple garbage collection.

Would you ever want to see a comparable solution on GNU/Linux for mainstream user experience focused distros?

Attached: 3C61C22B-AB5E-414D-8049-AF9965825918.gif (300x300, 2.71K)


You mean like hiding a swap inside GRUB?

How is that any different than just having a swap partition? OSX doesn't have swap or a swap partition. Virtual memory is automatically granted to applications by the OS, and applications basically have to manage their own "swap" memory.

I'm pretty sure you can use a swap file fine without needing a dedicated partition.
Hell, I'm doing this right now on my laptop.
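For reference, setting one up is just a few commands (assuming ext4; on filesystems like btrfs you may need extra steps, and the size here is just an example):
# allocate a 2 GB file, lock down permissions, format and enable it
sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile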

It does sound interesting; I must have been confused, I guess.

You don't seem to understand. OSX has no swap file, nor does it have a swap partition. It just has the entire disk.

It would be even nicer if the "backing store" were strictly a temp format instead of being written to disk, like an auto-removing temp file. Not sure if I'm truly understanding the full depth of its inner workings, though.

That sounds disgusting.

The backing store is temporary. The biggest difference here is that while swap files have a fixed size and location on the disk, the backing store doesn't. OSX can get away with this because of how its memory management works: the backing store will never be used for active data or dynamic application assets. It's purely cache, so we don't have to worry about swap slowing down the system.

Does it? Or does it only sound disgusting through the lens of someone used to having to manage swap space because the OS was too dumb to prevent page thrashing?

Attached: 1316824354488.jpg (383x362, 89.55K)

let's discuss swapping strategies

My swap is a zram block device (compressed RAM), one "drive"/swap partition per CPU core I have. Sounds abhorrent, but it works surprisingly well and is still a lot faster than HDD swap.
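If anyone wants to try it, the standard zram sysfs interface makes it a few commands per device; something like this, with the size just an example and lz4 assumed to be available in your kernel:
# create 4 devices (one per core), pick a fast algorithm, then size and enable each
sudo modprobe zram num_devices=4
echo lz4 | sudo tee /sys/block/zram0/comp_algorithm
echo 2G | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0
# repeat the last four commands for zram1..zram3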

Technically, doesn't zram still use the HDD as an intermediate step when main memory is full?
I've never actually heard of a zram setup like that before, but it makes a lot of sense for memory-intensive applications that don't need much CPU time. Isn't it a bottleneck for CPU-heavy applications, though, since zram is naturally stealing CPU time by constantly compressing/decompressing memory?

This sounds pretty similar to what I had in mind, so yes. The OOM handling on Linux is batshit insane and has bothered me for ages.

Can an SSD be used as a dedicated swap partition? Is this retarded because of wear-leveling issues or something?

You can give priorities to swap storage. For example, you can give zram the highest priority, then a normal swap partition on your HDD the next highest, and then you can even do something insane like use a network-mounted file as the next-priority swap space. Linux doesn't stop you.
In my experience zram works surprisingly well, and you don't select a compression algorithm like bzip2 but a "real-time" usable one like lzo. I only use zram swap, and it usually only gets touched on my 8 GB machine when I'm compiling something big. I set the zram size to the size of my memory (in 2 GB blocks) and have never exhausted it.
Simply having more RAM would of course always be faster, but even with the compression overhead it still seems noticeably faster than hard drive swap. That shouldn't be surprising: access times for RAM are lower than even an SSD's by many factors, and sacrificing CPU cycles for compression/decompression is still faster than waiting on the drive (where the cycles get wasted anyway, since you can't compute if the data to compute on isn't in). It's of course also a lot faster than running into Linux's OOM killer, which basically locks up the machine, as I'm sure lots of people reading this have experienced.
Conceptually it's a lot like the "memory doublers" of DOS times past, but it works better now because the algorithms are better and we have multicore machines that are much faster.
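To make the priority stacking concrete, it's just the -p flag on swapon (device names here are made-up examples; the higher number wins):
swapon -p 32767 /dev/zram0      # compressed RAM, tried first
swapon -p 10 /dev/sda5          # HDD swap partition, tried next
swapon -p 1 /mnt/nas/swapfile   # network-mounted file, last resort
The same thing can go in /etc/fstab with the pri= mount option.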

It's also safer, because if you turn the machine off, the swapped data is gone. Some people encrypt their swap partition/files, and you can imagine that that's gotta be crazy slow.


It can. You won't realistically be able to wear down a modern SSD like this; you'll want a new drive long before it's worn out. It's a non-issue, even on no-name SandForce China SSDs.
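If you're paranoid anyway, smartmontools will tell you how much you've actually written; attribute names vary by vendor, so this grep is just an example:
sudo smartctl -A /dev/sda | grep -i -e wear -e written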

That's inaccurate. It automatically creates swap files in /private/var/vm, but there's no limit to how many.

The way Apple's documentation words it, it seems as though OSX doesn't use swap files but a vague "backing store" that grows and shrinks as needed.

it's still a swap file...

Not the user you're replying to, but what is a "backing store"?
Is it a file, multiple files, or part of the file system specification? Data can't be saved in thin air.

You learn all this shit in the Computer Architecture and Operating Systems lectures.
Or you could read the countless GNU/Linux distributions wikis and learn all about swap, fucking niggers.

...

That's called a "swap file". Partition space is allocated by the "file system". I'm not surprised Apple users don't know this; Apple users are taught by Apple not to ask questions.

Dynamically grown swap files are by definition slower than swap partitions or preallocated swap files.

There is a distinction that you don't understand about OS X's memory-swapping technology. The technology known as the "backing store" uses the available HDD space as the memory swap. This is in contrast to Windows and Linux, where a file is preallocated some space as swap memory, or a whole partition is preallocated as swap memory.

manpages.ubuntu.com/manpages/artful/man8/swapspace.8.html
kys, retard

Who are you quoting, mongoloid?

you

no you mongoloid.

OP here, iOS doesn’t use swap at all

So it's files, because files are the only unit in the filesystem that can grow to use all of the available space on the machine's boot partition.

This.

...

The iOS approach only works on phones. Imagine having some important shit running in the background while you cool off and browse the internet or something. Come back 10 minutes later and voila, it's all gone.

The OSX thing just sounds like a swapfile with a variable size. On Linux, systemd can do this. Windows has a set size, but if you go over it, it'll allocate a new file.

It sounds like the way Linux uses free RAM for hard drive cache.
Does OSX encrypt its boot partition? Do Mac users dump sensitive data all over their free space as a matter of course?
Even if the boot partition were encrypted, it's irrelevant, because it's unlocked at boot. The standard way to do this is to encrypt your swap with a random key every boot, a key that's never saved, so it's unrecoverable. Does OSX swap persist across reboots?
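On Linux the random-key scheme is a two-liner, assuming cryptsetup and a swap partition at /dev/sda2 (the device name is just an example):
# /etc/crypttab: re-key and re-mkswap the partition from /dev/urandom on every boot
cryptswap /dev/sda2 /dev/urandom swap,cipher=aes-xts-plain64,size=256
# /etc/fstab: use the mapped device as normal swap
/dev/mapper/cryptswap none swap defaults 0 0
The key only ever exists in kernel memory, so whatever was swapped out is unrecoverable garbage after a reboot.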

It works like shit on phones too. Imagine typing a long comment, tabbing away, and coming back and finding it deleted.

This literally never happened to me on Android. Do iFags really have to deal with shit like that?

The shit running in the background will be marked as active, so it will only be flushed if there's not enough memory for the current foreground program, i.e. in an out-of-memory condition. If it's that important, you probably wouldn't want to use the internet on the same machine anyway. IIRC iOS also uses memory compression, like Linux's zram.


No, the web pages will be unloaded from memory, but the unsubmitted comment text will likely be marked as active memory.

Yes
No

Assuming this works the same way it did in 2007 when I last used OS X, what happens is the OS starts by allocating a 64 MB swap file in some hidden directory somewhere on the root partition when the system boots. When that file begins to fill up, it allocates an additional 128 MB right next to it, then 256 MB, and so on and so forth. Once it gets to a certain size, IIRC somewhere around a gigabyte, it starts to allocate more files of the same size rather than doubling the size. If the system pages in enough memory to free up a significant amount of swap, it starts to prune excess swap files. This means there's never much more swap allocated than is necessary for system operation. The term "backing store" basically gives them the flexibility to not have to rewrite the documentation if they come up with an alternative implementation that doesn't rely on the filesystem.

Linux can do this as well with a simple script that monitors /proc/swaps and creates and enables swap files with incrementally lower priorities, then cleans up old ones that are almost empty. Most distros instead allocate a swap partition near the start of the disk to maintain consistently high performance on spinning-platter hard drives. The overhead from the file system, plus the file possibly landing near the slow end of the disk, does slow I/O down significantly.
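A minimal sketch of such a script, OS X-style doubling included (paths, sizes and thresholds are made up, and pruning of nearly-empty files is omitted; the swapspace daemon linked earlier in the thread is a real implementation of the idea):
#!/bin/sh
# grow swap in doubling increments whenever free swap runs low
SWAPDIR=/var/vm; SIZE_MB=64; PRIO=100
mkdir -p "$SWAPDIR"
while :; do
    FREE_KB=$(awk '/SwapFree/ {print $2}' /proc/meminfo)
    if [ "$FREE_KB" -lt 65536 ]; then            # under ~64 MB of free swap
        F="$SWAPDIR/swapfile$(ls "$SWAPDIR" | wc -l)"
        dd if=/dev/zero of="$F" bs=1M count="$SIZE_MB" status=none
        chmod 600 "$F"
        mkswap "$F" >/dev/null
        swapon -p "$PRIO" "$F"                   # each new file gets a lower priority
        [ "$SIZE_MB" -lt 1024 ] && SIZE_MB=$((SIZE_MB * 2))
        PRIO=$((PRIO - 1))
    fi
    sleep 10
done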


It's actually not. When a process is waiting for I/O, it isn't using any CPU time. A spinning hard drive might be able to put out a maximum of 1-2 MB/s during sustained random 4K reads and writes. Even my old Core Duo L2400 at 1.67 GHz can write sequentially to a Twofish partition at about 40 MB/s, and newer CPUs with hardware-accelerated AES and AES-encrypted swap partitions can easily process tens of gigabytes per second. As a result, the CPU overhead of encrypted swap is negligible in the real world, unless the basis for comparison is a non-hardware-accelerated algorithm on a relatively slow CPU versus a fast SATA III or better SSD.
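You can check the crypto side yourself; cryptsetup ships a built-in microbenchmark:
cryptsetup benchmark
On anything with AES-NI, the aes-xts lines land in the gigabytes-per-second range, far beyond what any swap device can actually feed it.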

You should have the option of deciding whether you want swap or not. It might not even be necessary, and it will slow everything down to a crawl if you're using a 5400 RPM drive. Not giving the user an option is retarded baby shit you would expect from Apple.

The point is that it shouldn't matter whether you have swap or not, if the applications are properly developed to handle it.

Are you fucking retarded?

RAMlets detected.

Retard detected.

These are better than UNIX, but still worse than Multics. The way it works in Multics is that there is no swap. Instead, each segment is associated with a named file. This file can grow or shrink and has access permissions. All segments have at least one name, including code and stack segments. A process is just a collection of mapped segments.
This promotes binary data structures and using ordinary machine instructions to get and set variables instead of having to parse text. Segments can also contain pointers, so you can have an actual tree with pointers instead of a text format like XML, or even a garbage-collected heap.
Under a real virtual memory system, millions of lines of UNIX bullshit would not need to exist; they actually get in the way and reduce productivity. This technique is possible in 16/32-bit x86 protected mode and was the way Intel intended it to be used.

multicians.org/daley-dennis.html

multicians.org/multics-vm.html

Raise your hand if you remember when file systems had version numbers. Don't. The paranoiac weenies in charge of Unix proselytizing will shoot you dead. They don't like people who know the truth. Heck, I remember when the filesystem was mapped into the address space! I even re

Either point us in the general direction of a modern Multics-like system to use, or just fuck off already.

Can't handle that better systems existed once?

That sounds fucking retarded. Stop shilling Multics. It's unbased.
Hating on UNIX is based.

If you use systemd/Linux, it creates a swapfile on boot even if you have a swap partition.

There is no meaning in a utopia that I cannot use today. If I cannot install it today into my computer, then it doesn't exist and it has no meaning.

You're Jewish.

As much as the Unix hater hates admitting it, that's why "worse is better" succeeded. Multics had interesting ideas and probably would have scaled better to today's hardware, but until very late in its lifespan it ran like ass and was incredibly fat compared to Unix. Symbolics' Lisp Machines also had interesting ideas but were very expensive and unresponsive. Unix, on the other hand, worked just well enough that people were willing to put up with and fix some of the jank.
Did OS design stagnate after Unix went big? Definitely. We won't get a straight-up Multics or Symbolics OS again without custom hardware, emulating certain features in software, or, in Multics' case, restricting ourselves to 32-bit protected mode on x86 processors, and even then they'd need significant reengineering to support modern workloads well (to be fair, so does Unix). Also, even with custom hardware, that would still leave the millions (if not billions) of x86 and ARM machines in need of a better OS, and the Unix-hater community has completely failed at offering modern-day hardware something better. At least Unix-like OSes have offered incremental improvements alongside the jank and bloat over time, which is still better than fucking nothing.

If you think MUH 300MS INPUT LAG had any part in the death of Symbolics you are clinically retarded. Don't pretend to be an expert after reading some shitpost on Zig Forums.

This, however, is correct; it's how Unix keeps and expands its user base so quickly. It is very difficult to make substantial improvements to something well-designed; on the language side you can see this with Common Lisp and Ada. On Unix (or C, to keep the language example), however, there are so many stupid mistakes that improving it is trivial, so when at some point you no longer want to put up with its idiocy, you fix it -- and another Unix developer to spread the plague is born.

Symbolics machines were already more expensive than their competitors and sold by a badly managed, financially incompetent company that only got anywhere in the first place by jewing MIT's AI lab. The unbelievably bad latency was just a fun little extra, something which certainly didn't help sales.
Yeah, like how Multics' good design meant the later speedups came from rewriting PL/I code into assembly, and the hell compiler writers went through to achieve Ada and Common Lisp's good design, especially since the latter was designed for HLLCAs that never caught on. Take a hint from Tony Hoare:
Is Unix either? No, but it and Plan 9 definitely have software that counts.

based

Neither did the boring paintjob, but that doesn't make it a relevant factor. Stop larping.
Are you just throwing random shit at the wall to see what sticks? At least try to be coherent.

Oh right, I forgot:
This is factually wrong; CL forwent several features specifically because they would be hard to support outside of Lisp machines. I really wonder what compels retards to form strong opinions on things they have no idea about.

topkek
Nope. You might want to work on your reading comprehension.

CL forwent several features specifically because they would be hard to support outside of Lisp machines.
And that's called a concession.

Attached: seriously nigger.jpg (403x403, 90.94K)

I can't even call that a good strawman, it's just trash.

Attached: smugbird.png (96x201, 11.71K)

Shitty replies for shitty posts.