
Why (on Linux) am I seeing so much RAM usage?

29 April 2009

This is a guest post by Dayid (http://dayid.org).

There are better tools for inspecting memory usage; however, `free` seems to be the one most apt to create the confusion I’m attempting to quell here. That said, see the Red Hat docs about /proc/meminfo.

Other commands to use to see memory usage

$ vmstat -aS M #see the "inactive" column for a rough "free" idea.
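For a rough sense of what is genuinely available, MemFree + Buffers + Cached from /proc/meminfo is a reasonable first-order figure. A minimal sketch, using made-up sample values so it is self-contained (on a real system you would read /proc/meminfo itself):

```shell
# Rough "actually available" = MemFree + Buffers + Cached.
# The sample values below are illustrative (in kB), not from a real machine;
# replace the printf with: awk '...' /proc/meminfo
printf 'MemTotal: 2072576 kB\nMemFree: 54272 kB\nBuffers: 19456 kB\nCached: 1709056 kB\n' |
awk '/^(MemFree|Buffers|Cached):/ { sum += $2 }
     END { printf "available: %d kB (~%d MB)\n", sum, sum/1024 }'
```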

The real answer

There’s no reason to clear what’s in RAM until you need more space to write to it.

The short answer analogy

Clearing the buffers and cache in RAM just to keep RAM "free" is silly. Imagine a professor who, rather than writing all the way across the chalkboard, finishes a sentence and immediately erases it and starts writing in the upper-left corner AGAIN and AGAIN and AGAIN.

OR imagine you like a song. You record it to the beginning of a cassette tape. When you want a new song, do you re-record over the first song or record after it?

AKA: The horrible House/Barn analogy

Many people new to Linux or computers in general have a poor understanding of how RAM works. On Linux systems, most users will look at `top` or use `free` to see the amount of memory installed and/or free. Below is an example:

dayid@emiline ~ $ free -m
             total       used       free     shared    buffers     cached
Mem:          2024       1970         53          0         19       1669
-/+ buffers/cache:        281       1742
Swap:         1953          4       1948

At first glance, they may look at their machine with 2GB of RAM and wonder how they have only 53MB free! While this is true, the surprise, fear, or angst comes from a misunderstanding.
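The `-/+ buffers/cache` row is nothing magic; it is plain arithmetic on the Mem row. A quick sketch with the numbers above (since `free -m` rounds each field to whole megabytes, the real output reads 281/1742 rather than 282/1741):

```shell
# The "-/+ buffers/cache" row of `free` is derived from the Mem row:
#   used - buffers - cached  -> memory really used by programs
#   free + buffers + cached  -> memory really available
total=2024; used=1970; free=53; buffers=19; cached=1669
echo "really used:      $(( used - buffers - cached )) MB"
echo "really available: $(( free + buffers + cached )) MB"
```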

We could take a trip to a million places for this horrible analogy, but let’s pretend we’re on a country farm.
Rather than working with 2024MB of RAM and 1953MB of SWAP, we’ll say we’ve got 20 beds in the house, and 20 beds in the barn.
Rather than programs we’ll have people occupying the space.
For our purposes, ignore costs of cleaning the bedding, water, etc.
The house can hold active workers or non-active workers.

Due to its distance and the time to get to/from it, the barn can only hold non-active workers. When a worker is called from the barn they will have to pass through the house and stay in the house while they work.

  • 10 laborers show up to a job. Since the house is closer to the food, showers, and work they’ll be doing, we let them stay in the house.
  • 10 of our 20 beds are used by active workers.
  • Our farm in `free -m`:
                       total       used         free       shared      buffers      cached
    House:                20         10           10            0            0           0
    -/+ buffers/cache:               10           10 
    Barn:                 20          0           20 
  • 8 more people show up for another job. They also stay in the house since we have the space for them.
  • 18 of our 20 beds are used by active workers.
  • Our farm in `free -m`:
                       total       used         free       shared      buffers      cached
    House:                20         18            2            0            0           0
    -/+ buffers/cache:               18            2  
    Barn:                 20          0           20 
  • The first job is over, we no longer need to keep around the first 10 laborers; however, letting them stay doesn’t cost us anything, as if they weren’t there the beds would just be empty (i.e., go to waste).
  • 18 of 20 beds are used. 8 by active workers, 10 by non-active workers.
  • Our farm in `free -m`:
                       total       used         free       shared      buffers      cached
    House:                20         18            2            0            0          10 
    -/+ buffers/cache:                8           12 
    Barn:                 20          0           20 
  • Let’s take a timeout and review the above output. Right now we have 20 rooms; 18 are being used, so only 2 are free. However, since 10 workers aren’t actively working, they count as cache – kept around because we have no reason to kick them out – so we actually have room for 12 new hires: 2 in the unused beds, and 10 replacing the inactive workers already here.
  • We have a new job on the farm, so we have 4 new people show up. We do not have enough beds for them. 2 of the 10 who are not active leave. We move in those 4 new people.
  • 20 of 20 beds are used. 12 by active workers, 8 by non-active workers.
  • Our farm:
                       total       used         free       shared      buffers      cached
    House:                20         20            0            0            0           8 
    -/+ buffers/cache:               12            8 
    Barn:                 20          0           20 
  • Right now we have 20 rooms filled. 8 are filled by people who aren’t working though, so technically we have 8 beds we can use if we need to. Now let’s get crazy.
  • It’s production season and we have a lot to do around the farm. We set up another program and need to hire 14 new workers for it. We’ll have to kick out the 8 non-active workers and move 8 of the new workers into their beds. However, because we’ve run out of rooms in the house, our least important workers will have to stay in the barn. The barn is still a good storage area, but it will take them longer to get to and from the job each time they’re needed.
  • 20 of 20 beds are used by active workers. 6 rooms in the barn are used.
  • Our farm:
                       total       used         free       shared      buffers      cached
    House:                20         20            0           0            0            0 
    -/+ buffers/cache:               20            0 
    Barn:                 20          6           14 
  • Now, things calm down again and only 4 workers are going to remain active. We’re not going to toss out the rest, though, as they’re not harming anything – just taking up space (at least until we need that space again).
  • Our farm:
                       total       used         free       shared      buffers      cached
    House:                20         20            0           0            0           16 
    -/+ buffers/cache:                4           16 
    Barn:                 20          6           14 
  • That’s right, our “free” stays 0, as we still have no space available. The important thing to look at here is how much we have available if we clean out the buffers and cache – which are not necessary to keep, but which we generally keep until they need to be discarded.

That’s right. You just read an entire horrible article about RAM to learn one thing: look at your buffers/cache line before wondering why so much RAM is used.

Unused RAM is “wasted RAM”, let it cache!
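In practice, the number to look at can be pulled out with awk. Here the sample `free -m` output from earlier is embedded as a string so the snippet is self-contained; on your own machine, pipe the real command instead:

```shell
# The fourth field of the "-/+ buffers/cache" line is what is really
# available (free + buffers + cached). Sample output captured as a string:
free_output='             total       used       free     shared    buffers     cached
Mem:          2024       1970         53          0         19       1669
-/+ buffers/cache:        281       1742
Swap:         1953          4       1948'
echo "$free_output" | awk '/buffers\/cache/ { print "available: " $4 " MB" }'
```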



  • Andrea said:

    Nice article 🙂

  • rod said:

    that was a pretty awesome article, halfway through it i had my “oh.. duh” moment. 😀

  • Chris Routh said:

    What about the green argument? Holding information in RAM requires power to keep it there, or else the computer forgets it. RAM that has nothing in it has no power cost to the system, therefore you are being more power-wise by keeping system memory clear?

  • JT said:

    Re: Chris Routh’s comment above about being green.

    More information in RAM doesn’t mean more power consumption (or vice versa). Most memory technology is “destructive read” technology (capacitors) because it’s so cheap. Whenever a bit is read it’s destroyed. Picture a bottle of water with the outside painted completely black. You have to empty it to know what’s inside! This happens billions of times a second.

    Now, pretend there’s a hole in the bottle of water with the outside painted black. You’d have to empty and refill it just to make sure it retained some semblance of its contents. This is true of memory too. Capacitors are like leaky water tanks that continually drain. The computer has to read (which takes power), calculate (which takes power), and write (which takes power too) thousands of times a second just to keep information in memory. More goes into it, of course, but in the grand scheme of things, it doesn’t matter whether the information is full or not in terms of power consumption.

    Not to mention that unallocated memory is undefined and can be all 1s, all 0s, or any combination of the two ;-).

  • syamsul said:

    Thanks Chris for the insightful article. Just one question – do OpenVZ and Xen based VPSes differ in the way they manage/allocate memory?

    The reason I’m asking is that on my OpenVZ VPS, I hardly see anything under Buffers or Cached after doing a free -m.

    OTOH, the Xen VPS seems to be allocating quite a bit to buffers and cache like you said.

  • kersurk said:

    “OR imagine you like a song. You record it to the beginning of a cassette tape. When you want a new song, do you re-record over the first song or record after it?”

    Damn, it’s 2009 🙂 Who on earth uses the cassette tapes for recording anymore.
    Anyhow, thanks for the explanation.

  • Nate Johnston said:


    There are some applications that will have issues when confronting a large cache size. In particular, I had a Tomcat instance that was complaining of memory exhaustion on an 8GB Linux host; I was able to drop the cache size from 4015 megabytes to 50 megabytes.

    First, here is the status quo ante. The “cached” field indicates the combination of the pagecache, the inode cache, and the dentries cache. The pagecache is a copy of files on disk for speedier access. Since this application does not need speedier access to files on disk, the size of this can be tuned down.

    [natej@iloga-m02]~% free
                 total       used       free     shared    buffers     cached
    Mem:       8176800    8111792      65008          0     255144    4109096
    -/+ buffers/cache:    3747552    4429248
    Swap:      4192880          0    4192880

    I checked the kernel cache, but the kernel only reported ~267 megabytes cache. See the “Active / Total Size” line in the top block of slabtop output:

    [natej@iloga-m02]~% slabtop -s c
    Active / Total Objects (% used) : 1385680 / 1402585 (98.8%)
    Active / Total Slabs (% used) : 76175 / 76188 (100.0%)
    Active / Total Caches (% used) : 85 / 133 (63.9%)
    Active / Total Size (% used) : 271147.38K / 273534.20K (99.1%)
    Minimum / Average / Maximum Object : 0.02K / 0.19K / 128.00K

      OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
    134296 134271 99% 0.82K 33574 4 134296K ext3_inode_cache
    1006335 1006166 99% 0.09K 22363 45 89452K buffer_head
    169552 169429 99% 0.23K 10597 16 42388K dentry_cache
    37933 37857 99% 0.52K 5419 7 21676K radix_tree_node
    2968 2968 100% 0.55K 424 7 1696K inode_cache
    750 718 95% 2.00K 375 2 1500K size-2048
    2016 2016 100% 0.59K 336 6 1344K proc_inode_cache
    608 608 100% 1.94K 304 2 1216K task_struct
    1112 529 47% 1.00K 278 4 1112K size-1024
    256 256 100% 4.00K 256 1 1024K biovec-(256)
    4686 4461 95% 0.17K 213 22 852K vm_area_struct
    198 188 94% 4.00K 198 1 792K size-4096
    5828 5619 96% 0.12K 188 31 752K size-128

    Then I checked the kernel variables for cacheability:

    [natej@iloga-m02]/proc/sys/vm% sysctl -A | egrep "swap|cache"
    vm.drop_caches = 0
    vm.pagecache = 100
    vm.vfs_cache_pressure = 100
    vm.swappiness = 60
    fs.quota.cache_hits = 0

    Adjusting the vfs_cache_pressure variable to more aggressively prune the dentry and inode caches doesn’t do much of anything.

    But adjusting the maximum size of the pagecache, instructing the kernel not to use 100% of memory for cache, does help for new cache allocations.

    [root@iloga-m02]/proc/sys/vm# echo "10 20 40" > /proc/sys/vm/pagecache

    Finally, in order to cause the cache to be freed I used drop_caches to drop just the clean pages in the pagecache. Always, always sync thrice before doing this, because in very rare instances it can cause a kernel panic if the number of dirty pages causes the swapout mechanism to choke.

    [root@iloga-m02]/proc/sys/vm# free
                 total       used       free     shared    buffers     cached
    Mem:       8176800    8115404      61396          0     255148    4110724
    -/+ buffers/cache:    3749532    4427268
    Swap:      4192880          0    4192880
    [root@iloga-m02]/proc/sys/vm# sync && sync && sync
    [root@iloga-m02]/proc/sys/vm# echo 1 > /proc/sys/vm/drop_caches
    [root@iloga-m02]/proc/sys/vm# free
                 total       used       free     shared    buffers     cached
    Mem:       8176800    3680508    4496292          0        608      38832
    -/+ buffers/cache:    3641068    4535732
    Swap:      4192880          0    4192880

    The pagecache can vary upwards, but hopefully it should not grow larger than 20% of RAM under this configuration; that is the second of the three numbers the pagecache is set to. The pagecache setting can be made permanent across reboots by adding it to /etc/sysctl.conf with the following line syntax.

    vm.pagecache = 10 20 40

    Tuning the dentry and inode caches via vfs_cache_pressure is not necessary. Those caches were not dropped when I did a drop_caches, yet the requisite bulk of memory was freed. I think that on a system with the webserver-like high-network-I/O and low-disk-I/O profile of the iloga hosts, adjusting the pagecache variable is enough to get the system to a state where the memory is visibly free.
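(Side note on the drop_caches interface used here: the accepted values are documented in the kernel’s Documentation/sysctl/vm.txt. A minimal sketch; all of these require root and release only clean pages, which is why syncing first matters:)

```shell
sync                                   # flush dirty pages to disk first
echo 1 > /proc/sys/vm/drop_caches      # free the pagecache only
echo 2 > /proc/sys/vm/drop_caches      # free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches      # free pagecache, dentries, and inodes
```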

    Information on:
    * /proc/sys/vm/drop_caches: http://www.linuxinsight.com/proc_sys_vm_drop_caches.html
    * /proc/sys/vm/pagecache: http://www.redhat.com/magazine/001nov04/features/vm/#pagecache
    * The basics of the pagecache: http://www.linux-tutorial.info/modules.php?name=MContent&pageid=310

  • babagau said:

    Excellent and explanatory!

  • alieblice said:

    nice story

