Linux filesystem is filling, despite no large files or directories

The Problem

On a Linux-based system, the root file system is being filled up by some unknown process. The growth continues no matter which files are moved or cleaned off the file system.

# df -hPT /
Filesystem                    Type  Size  Used Avail Use% Mounted on
/dev/mapper/VGExaDb-LVDbSys1  ext3   30G   29G     0 100% /

There are no large files to account for the full file system:

# find / -xdev -type f -size +100M -exec ls -lh {} \;

There are no large directories to account for the full file system:

# du -h --max-depth=1 /
42M     /sbin
13M     /etc
2.4G    /usr
45M     /tmp
451M    /var
192M    /lib
...
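A quick cross-check, assuming the same mount points as in the example above, is to compare the single-filesystem total reported by du against the usage reported by df. Here du's total falls far short of the 29G that df reports as used, which is the classic sign that the missing space is held by deleted-but-still-open files:

# du -sxh /
# df -hP /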

The Solution

At some point in the past, two or more processes were using a file; for example, /tmp/top.log. One process deleted /tmp/top.log (strictly speaking, its directory entry), while the other process kept the inode open and continued writing to it, allowing the file to keep growing even though it was no longer visible in the directory tree.
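The effect is easy to reproduce. The following is a minimal sketch, not taken from the original incident; the file name demo.log and the 1 GiB size are made up for illustration. One descriptor keeps the deleted file open and continues writing, so df counts the space as used while du and ls do not:

# exec 3> /tmp/demo.log                    # open the file on descriptor 3
# rm /tmp/demo.log                         # remove the directory entry; the inode stays allocated
# dd if=/dev/zero bs=1M count=1024 >&3     # grow the open-but-deleted file to 1 GiB
# du -sh /tmp                              # du no longer counts the file
# df -h /                                  # df still shows the 1 GiB as used
# lsof +L 1 | grep demo.log                # but lsof +L 1 lists it with NLINK 0
# exec 3>&-                                # closing the descriptor frees the space

Once the descriptor is closed, df drops back immediately; this is the same mechanism that step 3 below relies on.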

This can be seen in the output of "lsof +L 1":

# lsof +L 1
COMMAND    PID      USER   FD     TYPE   DEVICE     SIZE/OFF      NLINK          NODE       NAME
top        34261    root   1W     REG   252,0       21460567592   0              1785896    /tmp/top.log (deleted)

Other files were listed too but were much smaller.

This shows that the root user was running a top command that was spooling output to /tmp/top.log, and that there are currently no links to that file. The spool file was roughly 21 GB in size, yet it did not appear in the "du -h --max-depth=1 /" output, where /tmp was listed at only 45M.
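Before killing anything, the claim can be confirmed through /proc, which exposes each process's open descriptors. A sketch using the PID and FD number from the lsof output above (34261 and 1):

# ls -l /proc/34261/fd | grep deleted      # fd 1 points at "/tmp/top.log (deleted)"
# stat -L -c %s /proc/34261/fd/1           # size in bytes; should match lsof's SIZE/OFF column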

Follow the steps below to identify and kill such a process.

1. Identify the files on the system that have fewer than one link (deleted but still held open) with the command:

# lsof +L 1

2. Kill any processes that are writing to an unusually large file in the listing (a one-liner for sorting the listing by size appears after these steps). In the example above, you would run:

# kill 34261

3. Space will be released when the final process stops using the file.
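On a busy system the lsof +L 1 listing can be long. A rough way to surface the largest zero-link files first, assuming the default column layout shown above (SIZE/OFF is the seventh field), is to drop the header and sort numerically on that column:

# lsof +L 1 | awk 'NR > 1' | sort -k7,7nr | head

After the process is killed (or closes the file), the entry disappears from lsof +L 1 and df reflects the reclaimed space.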
