The Problem
After a server is rebooted following patching, the error below may appear during boot; the same message is also recorded in /var/log/boot.log:
Starting udev: udevd inotify_init failed: too many open files
Because udev fails to start, network and bonding interfaces are missing (the corresponding modules/drivers are not loaded).
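You can confirm that a server was affected by searching the boot log for the message, for example:
# grep -i inotify /var/log/boot.log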
The Solution
A change had been made to /etc/sysctl.conf and /etc/sysctl.d/99-install-oracle that added the stanza below to work around an issue with Veritas Cluster:
fs.inotify.max_queued_events = 0
fs.inotify.max_user_instances = 0
fs.inotify.max_user_watches = 0
fs.dir-notify-enable = 0
fs.inotify is used by various programs, including udev, to track file changes. With these limits set to 0, udev cannot create any inotify instances or watches, so its inotify_init() call fails and udev reports the "too many open files" error. The new settings only took effect when the system was restarted, which is why the problem appeared after the reboot.
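The failure is easy to reproduce on a test box (not on production): once fs.inotify.max_user_instances is 0, any new inotify consumer fails to initialize, e.g. inotifywait, assuming the inotify-tools package is installed. Restore the default afterwards:
# sysctl -w fs.inotify.max_user_instances=0
# inotifywait /tmp
# sysctl -w fs.inotify.max_user_instances=128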
To resolve the issue, follow the steps outlined below:
1. Revert the changes in /etc/sysctl.conf and in any file under /etc/sysctl.d/ that still holds the new values. The command below finds all files in /etc where the change was applied (both patterns are included, because the stanza also sets fs.dir-notify-enable):
# grep -rnw /etc -e "fs.inotify" -e "fs.dir-notify" 2>/dev/null
2. To revert the changes, open /etc/sysctl.conf in vi and comment out the new stanza:
#fs.inotify.max_queued_events = 0
#fs.inotify.max_user_instances = 0
#fs.inotify.max_user_watches = 0
#fs.dir-notify-enable = 0
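Alternatively, if several files are affected, the stanza can be commented out with a sed one-liner instead of editing each file in vi. This is only a sketch, assuming GNU sed; adjust the file list to whatever the grep in step 1 reported. The -i.bak option keeps a backup copy of the original file:
# sed -i.bak 's/^fs\.\(inotify\|dir-notify\)/#&/' /etc/sysctl.conf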
3. Save the file and reboot. After the reboot, verify that the interfaces are up and that udev starts without errors. By default on CentOS/RHEL 6, the fs.inotify parameters are set to:
fs.inotify.max_queued_events = 16384
fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192
fs.dir-notify-enable = 1
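If a reboot cannot be scheduled right away, a possible interim workaround (a sketch; on CentOS/RHEL 6 udev can be restarted with start_udev) is to restore the defaults at runtime and then restart udev:
# sysctl -w fs.inotify.max_queued_events=16384
# sysctl -w fs.inotify.max_user_instances=128
# sysctl -w fs.inotify.max_user_watches=8192
# sysctl -w fs.dir-notify-enable=1
# start_udev
A full reboot is still recommended to confirm the system comes up cleanly.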
4. You can check the current fs.inotify settings by running the sysctl command:
# sysctl -a | grep fs.inotify
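Individual parameters can also be queried directly instead of grepping the full output, for example:
# sysctl fs.inotify.max_user_instances
# sysctl fs.inotify.max_user_watches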