I'm writing about this because it came up as an issue at work one day.
Quite a while ago I wrote some software that keeps an eye on a number of servers and sends me an email if anything goes wrong. One of the things it checks for is a server's hard disk becoming full, i.e. the system being unable to create new files.
So I'm sitting there doing my daily Java programming when I receive an email saying our main server's disk is full. That's pretty strange, because each server should have at least 20GB of free space at all times. I log on to the system to see what's taking up all this room, and the first command I run is df -h. It reports plenty of free space.
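On a box in this state, the space check looks completely healthy. The figures below are illustrative, not the actual server's:

```shell
# Report block (space) usage per filesystem. On the affected server
# this showed gigabytes free, which is what made the alert so confusing.
df -h
# Illustrative output (made-up figures):
#   Filesystem      Size  Used Avail Use% Mounted on
#   /dev/sda1        80G   58G   18G  77% /
```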
This is not the output I expected, so I thought maybe there was a problem writing files to the disk. I ran a touch text.txt command and it seemed to work perfectly fine. After looking through my email software to see if it had somehow made a mistake, I got another email saying the server was having problems writing logs. This is when I started getting confused.
I logged back onto the system and checked the logs. They had indeed stopped. As a test I wrote a small bash script to create a new file; it didn't work, and instead reported that the hard disk was full.
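The test was nothing fancy; a minimal version of it looks something like this (the target path is just an example, point it at the filesystem you suspect):

```shell
#!/bin/sh
# Try to create a file and report whether the filesystem accepted it.
# A failure here with plenty of free space is the telltale symptom.
target="${1:-/tmp}/write-test.$$"
if touch "$target" 2>/dev/null; then
    echo "write OK: $target"
    rm -f "$target"
else
    echo "write FAILED: disk blocks or inodes exhausted?"
fi
```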
After some extreme head scratching and chin stroking, I found the source of my problem by running the command df -i and seeing that my IUse% was at 100%.
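df -i is the same report as df -h, but for inodes instead of blocks. IUse% at 100% means the inode table is exhausted: no new files can be created, no matter how much raw space is free. Illustrative output for the failure state (figures are made up):

```shell
# Per-filesystem inode usage; compare the IUse% column with
# the Use% column from df -h to spot the mismatch.
df -i
# Illustrative output:
#   Filesystem      Inodes   IUsed  IFree IUse% Mounted on
#   /dev/sda1      5242880 5242880      0  100% /
```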
The problem was these strange little things called 'i-nodes'.
Index Nodes (i-nodes)
An index node is a data structure used to represent a filesystem object. It holds the object's metadata: disk block locations, creation time, and a whole host of other attributes. On many filesystems each index node is 128 or 256 bytes in size, and the inode table normally takes up only a small percentage of your hard disk (around 1%).
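You can look at the metadata an inode holds for any file. A quick sketch (the scratch path is just an example):

```shell
# Every file is backed by exactly one inode. stat prints the metadata
# stored in it (size, timestamps, block count, link count), and ls -i
# prints the inode number itself.
f="/tmp/inode-demo.$$"
touch "$f"
stat "$f"    # full metadata for the file's inode
ls -i "$f"   # inode number followed by the filename
rm -f "$f"
```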
So for each file on your hard disk there is an i-node, and the solution presented itself: all I had to do was find the offending directory, back up the files, and remove them. To find the largest directory on the system I ran this command: du -a | sort -n -r, and I instantly found the issue. Another developer had written a program that was producing a hell of a lot of log files: a new (very small) log file every 2 seconds. While this wasn't taking up much room on the hard drive, it was taking up a lot of index nodes.
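One caveat: du sorts by size, and when the culprit is millions of tiny files, counting entries directly can be more reliable. A sketch (the /var starting point is just an example, adjust to taste):

```shell
# Count directory entries under each top-level directory and sort,
# biggest first; whatever is hoarding inodes floats to the top.
for d in /var/*/; do
    printf '%s %s\n' "$(find "$d" 2>/dev/null | wc -l)" "$d"
done | sort -nr | head
```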
So there's a solution for anyone else having this issue.