Performance issue with in-memory computing for containers
I'm trying to configure an environment for in-memory computing where containers run entirely in memory and only persistent storage volumes are attached. My workload consists of many small I/O queries and lots of computing (model rendering, relation computing, and then simulation tests), so my hope is to speed up the build cycle this way. Since no real data is produced (or rather, it can easily be reproduced by re-rendering the predefined model), the risk of data loss on a server crash is negligible.
I've got a testing machine with two EPYC 7543 CPUs and 512 GB of RAM. So far my approach has been to create a ramdisk (with tmpfs) and create the containers within that directory. This works; I can see the impact on RAM usage with "free -m" when starting the containers, but the build cycle reports a lower I/O speed than I get when I run it on my software RAID-10 of Gen4 NVMe drives (2 GBit/s vs. 5 GBit/s). I was expecting the in-memory containers to be much faster than the NVMe ones.
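To rule out container overhead, raw tmpfs throughput can be checked directly with dd. This is a minimal sketch under a few assumptions: /dev/shm is a tmpfs mount on most Linux distributions (so no root or extra setup is needed), and /mnt/nvme is a placeholder path for the NVMe array, not my actual mount point:

```shell
# Raw sequential write throughput to tmpfs (no container layer involved).
# /dev/shm is usually a tmpfs mount, so this needs no root or extra setup.
dd if=/dev/zero of=/dev/shm/ddtest bs=1M count=1024 status=progress

# Compare against the NVMe array (adjust the path; conv=fdatasync makes dd
# flush the data to the drives before it reports a speed).
dd if=/dev/zero of=/mnt/nvme/ddtest bs=1M count=1024 conv=fdatasync status=progress

# Clean up the test files.
rm -f /dev/shm/ddtest /mnt/nvme/ddtest
```

If the first dd already tops out near the 2 GBit/s figure, the limit is in the tmpfs path itself rather than in the container setup; note that dd is single-threaded and sequential, so it won't reflect the many-small-queries pattern of the real workload.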
To simplify my setup for testing, I replaced the containerized setup with a Proxmox installation, again created a ramdisk, added it as a storage directory to Proxmox, and spawned a Debian 12 container to test. Same result: the ramdisk container isn't as performant as the NVMe one. OK, no big surprise that changing the setup didn't help, but I figured from this point on you could help me better, since more users are familiar with Proxmox than with Rancher.
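In case it matters, the ramdisk is wired into Proxmox roughly like this (a sketch, not my exact values: the tmpfs size, mount point, and the storage ID "ramdisk" are examples):

```
# /etc/fstab entry for the tmpfs (size is an example)
tmpfs /mnt/ramdisk tmpfs size=256G,mode=0770 0 0

# /etc/pve/storage.cfg entry (directory storage pointing at the tmpfs)
dir: ramdisk
        path /mnt/ramdisk
        content rootdir,images
```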
Am I missing a reason why in-memory storage can't be faster than NVMe here? Or is there an error in my setup?
I'd appreciate any thoughts and help on this.