I would not worry about virtual memory usage. Virtual memory can include memory-mapped files and does not reflect actual RAM usage, only the address space the program has mapped at some point. There is little point in worrying about it.
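To make that concrete, here is a minimal sketch (Linux-only, since it reads /proc/self/status; the 1 GiB mapping and 64 MiB touch are just illustrative numbers) showing that virtual size (VmSize) grows as soon as address space is mapped, while resident memory (VmRSS) only grows once pages are actually touched:

```python
# Minimal sketch: map a large anonymous region and watch VmSize vs VmRSS.
import mmap
import re

def mem_stats():
    """Return (VmSize, VmRSS) in kB from /proc/self/status."""
    stats = {}
    with open("/proc/self/status") as f:
        for line in f:
            m = re.match(r"(VmSize|VmRSS):\s+(\d+)\s+kB", line)
            if m:
                stats[m.group(1)] = int(m.group(2))
    return stats["VmSize"], stats["VmRSS"]

print("before mapping : VmSize=%d kB, VmRSS=%d kB" % mem_stats())

# Map 1 GiB of anonymous memory: the address space (VmSize) grows immediately...
buf = mmap.mmap(-1, 1 << 30)
print("after mapping  : VmSize=%d kB, VmRSS=%d kB" % mem_stats())

# ...but physical pages (VmRSS) are only allocated when we actually write to them.
for offset in range(0, 64 * 1024 * 1024, mmap.PAGESIZE):
    buf[offset] = 1  # touch the first 64 MiB, one byte per page
print("after touching : VmSize=%d kB, VmRSS=%d kB" % mem_stats())
```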
Do people even need virtual memory these days?
Also, isn't it harmful to SSDs?
Virtual memory is different from swap memory.
Swap is used when you run out of physical memory: memory is extended onto your storage device.
Virtual memory is an abstraction that sits between programs using memory and the physical memory in the device. It enables things like memory compression and memory-mapped files, as mentioned above.
And yes, some swap is still useful, up to something like 4 GB even for larger systems.
And if you want to hibernate to disk, you may need as much swap as you have physical memory. But maybe that's changed; I haven't done that in years.
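If you want a quick sanity check against that rule of thumb, the numbers are in /proc/meminfo (Linux assumed; this is only a rough comparison, not an authoritative hibernation test):

```python
# Compare SwapTotal to MemTotal as a rough hibernation sanity check.
# Values in /proc/meminfo are reported in kB.
def meminfo_kib(field):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

mem = meminfo_kib("MemTotal")
swap = meminfo_kib("SwapTotal")
print(f"RAM : {mem // 1024} MiB, Swap: {swap // 1024} MiB")
if swap >= mem:
    print("swap is at least as large as RAM (rule of thumb OK for hibernation)")
else:
    print("swap is smaller than RAM -- a full hibernation image may not fit")
```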
I'd thought they were the same thing all these years.
Virtual memory isn't swap, it is a mechanism that allows the operating system to give processes a view of memory that is almost completely decoupled from real physical memory and other processes. For example some programs require their code and data to be placed at exact memory locations in order to work - virtual memory allows you to run as many of these programs as you wish, because one process's address 0x1000 has nothing to do with another one's 0x1000, unless they set it up as shared memory (but even the same chunk of shared memory might be mapped to different addresses in the processes that share it).
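Here is a tiny demonstration of that decoupling (CPython on Linux/macOS assumed, since it uses os.fork; id() is used only because CPython happens to expose an object's virtual address through it):

```python
import os

x = bytearray(b"parent data")
print(f"virtual address of x: {id(x):#x}")

pid = os.fork()
if pid == 0:
    # Child: same virtual address, but the write lands in the child's own
    # pages (copy-on-write), so the parent never sees it.
    x[:] = b"child data!"
    print(f"child : {id(x):#x} -> {bytes(x)}")
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print(f"parent: {id(x):#x} -> {bytes(x)}")  # still b'parent data'
```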
Swapping is a cool trick that you can do with virtual memory, though. Basically you store a piece of memory somewhere outside physical RAM, and then mark its address invalid in the page tables. When the process tries to access it, the CPU raises a page fault. The OS handles the fault, sees that it was caused by the process touching swapped-out memory, loads the chunk back from disk (maybe to a different physical location), updates the virtual memory mapping to point at this chunk, and resumes the process at the instruction that faulted. So from the point of view of the process, nothing went wrong at all, except that one instruction took a very long time to execute.
> Also, isn't it harmful to SSDs?
Swapping doesn't do enough writes to matter, unless your system is running really low on RAM.
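If you want to see how much swapping is actually happening on your box before worrying about SSD wear, the kernel keeps cumulative counters in /proc/vmstat (Linux assumed; counters are in pages since boot):

```python
# Read cumulative swap-in/swap-out counters (in pages) from /proc/vmstat.
import resource

counters = {}
with open("/proc/vmstat") as f:
    for line in f:
        key, value = line.split()
        if key in ("pswpin", "pswpout"):
            counters[key] = int(value)

page_kib = resource.getpagesize() // 1024  # usually 4 KiB pages
print(f"swapped in : {counters['pswpin'] * page_kib} KiB since boot")
print(f"swapped out: {counters['pswpout'] * page_kib} KiB since boot")
```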
Others have answered why this isn't a memory leak as such and is not as big a deal as you may think.
But if you are still concerned, you can reduce it, even if doing so is a bad idea.
- You're running it natively, which means you're probably using a systemd .service file to manage Jackett. Look into the systemd setting "RuntimeMaxSec=": it forces the service to stop and restart every N seconds, which keeps it from growing forever. (This is a bad idea, but if you want to boss it around, you can; see the drop-in sketch after this list.)
- Run it in Docker and set a max memory limit; Docker will prevent it from using more than you allow. You can restrict CPU usage this way too. A docker-compose example goes something like:

  ```yaml
  deploy:
    resources:
      limits:
        cpus: "0.5"
        memory: 100M
  ```
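For the RuntimeMaxSec route, a drop-in might look something like this (the unit name jackett.service, the path, and the 24-hour limit are assumptions; adjust to your setup):

```ini
# /etc/systemd/system/jackett.service.d/override.conf  (hypothetical path)
[Service]
# Terminate the service after 24 hours of runtime...
RuntimeMaxSec=86400
# ...and bring it straight back up so the restart is barely noticeable.
Restart=always
RestartSec=5
```

After adding it, run `systemctl daemon-reload` and `systemctl restart jackett` (assuming that unit name) for the change to take effect.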
Memory committed for cache is not a memory leak; hard-committed memory that keeps growing is.
Think about limiting the resources it can see/use and move on.