The MaxMemInQueues Myth for Guard Memory

By 1AEO Team • January 2026

Experiment: 7 relays over 9 days (Sept 2025) testing MaxMemInQueues limits of 2 GB and 4 GB against a default-configuration control

A common piece of advice for high-memory relays: "Just set MaxMemInQueues to a lower value." We put this to the test. It doesn't work.

MaxMemInQueues: Why It Fails

We configured test groups with strict limits: MaxMemInQueues 2GB and MaxMemInQueues 4GB. Logic says the process should stay within those bounds. In reality, both groups fragmented nearly as badly as the control, climbing into the 4 to 5 GB range within 48 hours.
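For reference, each tested limit is a single torrc line. A minimal sketch of the three configurations (all other relay settings omitted):

```
# Test group A: hard 2 GB cap on queued cell/connection memory
MaxMemInQueues 2 GB

# Test group B: looser 4 GB cap
#MaxMemInQueues 4 GB

# Control: leave MaxMemInQueues unset; Tor derives a default from system RAM
```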

Configuration          Avg Memory   Result
MaxMemInQueues 2GB     4.17 GB      Fragmented
MaxMemInQueues 4GB     4.76 GB      Fragmented
Control (default)      5.14 GB      Fragmented

Why it failed: MaxMemInQueues caps only the memory Tor itself accounts for in circuit and connection queues. It does not govern the directory cache or the overhead of the allocator itself. The fragmentation lives in heap memory glibc won't release back to the OS, not in Tor's queues.
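You can watch this on a live relay. A quick sketch (Linux-specific; assumes a single tor process, and only inspects glibc's main [heap] mapping, so it undercounts the extra arenas glibc allocates via mmap):

```sh
# Total resident memory for the tor process
PID=$(pgrep -x tor | head -n1)
grep VmRSS "/proc/$PID/status"

# Resident size of the main [heap] mapping, where glibc fragmentation
# accumulates; prints the Rss: line that follows the [heap] entry
awk '/\[heap\]/{f=1; next} f && /^Rss:/{print; exit}' "/proc/$PID/smaps"
```

Run it periodically and you'll see RSS climb and stay high even while Tor is honoring its queue cap.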

[Chart: MaxMemInQueues comparison of average memory across configurations]

The Real Fix

Treat MaxMemInQueues as a congestion and buffer safety setting, not a fix for allocator-driven fragmentation. It remains good practice for overload protection, but if you're seeing RSS "stick" high on guards, you need allocator-level changes (jemalloc or mimalloc) rather than a larger or smaller queue cap.
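On a systemd-managed relay, swapping the allocator is an LD_PRELOAD change with no rebuild. A sketch, assuming Debian/Ubuntu's libjemalloc2 package and Debian's tor@default unit name (the .so path and unit name vary by distro):

```sh
# Drop-in override so the tor unit starts with jemalloc preloaded
sudo mkdir -p /etc/systemd/system/tor@default.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/tor@default.service.d/jemalloc.conf
[Service]
Environment=LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2
EOF
sudo systemctl daemon-reload
sudo systemctl restart tor@default
```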

📊 Raw data: View experiment data and relay configs on GitHub

Join the Mission