[pve-devel] [PATCH kernel] fix #1042: inotify: increase watches, instances & queue default limits

Thomas Lamprecht t.lamprecht at proxmox.com
Tue Jul 16 18:46:34 CEST 2019


Some recent distributions running as LXC containers eat up the relatively
low default limits very fast. Thus, increase all those (semi-related)
limits by a factor of 512. This factor was chosen based on one of our
bigger known CT setups (~1500 CTs per host) and the fact that, with the
current defaults, only a very low count (circa 5 - 7) of "inotify watch
hungry" CTs (e.g., ones with a recent systemd > 240) can run at once.

So, as 5 * 512 is well above 1500, we can be confident that the new
defaults accommodate most reasonable and existing setups.
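To illustrate what "eating up the limits" looks like in practice, here is a
minimal user-space sketch (not part of this patch; the /tmp path and file
names are purely illustrative) that keeps adding watches on freshly created
files until inotify_add_watch() fails with ENOSPC, i.e. until the per-user
watch limit is reached:

/*
 * Minimal sketch, not part of this patch: add inotify watches on freshly
 * created files (each a distinct inode, so each costs one watch) until
 * inotify_add_watch() fails with ENOSPC, i.e. the per-user watch limit is
 * reached.  The /tmp path and file names are purely illustrative.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    long count = 0;
    int ifd = inotify_init1(IN_CLOEXEC);

    if (ifd < 0) {
        perror("inotify_init1"); /* EMFILE once the instance limit is hit */
        return 1;
    }

    for (;;) {
        char path[64];

        /* each new file is a distinct inode, so it costs one watch */
        snprintf(path, sizeof(path), "/tmp/inotify-demo-%ld", count);
        close(open(path, O_CREAT | O_WRONLY, 0600));

        if (inotify_add_watch(ifd, path, IN_MODIFY) < 0) {
            perror("inotify_add_watch"); /* ENOSPC: watch limit reached */
            break;
        }
        count++;
    }

    printf("added %ld watches before hitting the per-user limit\n", count);
    return 0;
}

Every successful inotify_add_watch() on a distinct inode consumes one watch
from the per-user budget, which is how a handful of watch-hungry containers
exhausts the old 8192 default.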

Since commit 46eb14b735b11927d4bdc2d1854c311af19de6d ("fs: fsnotify:
account fsnotify metadata to kmemcg") the memory usage from the watch
and queue overhead is accounted to the user's respective memory cgroup
(i.e., for LXC containers, their memory limit), so we can do this with
relative ease.
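For reference, the three tunables touched here are also exposed as sysctls
under /proc/sys/fs/inotify/, so the effective defaults on a booted kernel
can be double-checked with a trivial reader like the following (illustrative
only, not part of the patch):

/*
 * Illustrative only: print the effective inotify limits of the running
 * kernel via their /proc/sys/fs/inotify/ sysctl files.
 */
#include <stdio.h>

static void show(const char *path)
{
    char buf[32];
    FILE *f = fopen(path, "r");

    if (f) {
        if (fgets(buf, sizeof(buf), f))
            printf("%-42s %s", path, buf);
        fclose(f);
    }
}

int main(void)
{
    show("/proc/sys/fs/inotify/max_queued_events");
    show("/proc/sys/fs/inotify/max_user_instances");
    show("/proc/sys/fs/inotify/max_user_watches");
    return 0;
}

With this patch applied the output should show 8388608, 65536 and 4194304
respectively; admins who need different values can still override them via
sysctl as before.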

Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
---
 ...inotify-user-increase-default-limits.patch | 43 +++++++++++++++++++
 1 file changed, 43 insertions(+)
 create mode 100644 patches/kernel/0006-inotify-user-increase-default-limits.patch

diff --git a/patches/kernel/0006-inotify-user-increase-default-limits.patch b/patches/kernel/0006-inotify-user-increase-default-limits.patch
new file mode 100644
index 0000000..27d9f87
--- /dev/null
+++ b/patches/kernel/0006-inotify-user-increase-default-limits.patch
@@ -0,0 +1,43 @@
+From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
+From: Thomas Lamprecht <t.lamprecht at proxmox.com>
+Date: Tue, 16 Jul 2019 15:55:37 +0200
+Subject: [PATCH] inotify user: increase default limits
+
+Some recent distributions running as LXC containers eat up the relatively
+low default limits very fast. Thus, increase all those (semi-related)
+limits by a factor of 512. This factor was chosen based on one of our
+bigger known CT setups (~1500 CTs per host) and the fact that, with the
+current defaults, only a very low count (circa 5 - 7) of "inotify watch
+hungry" CTs (e.g., ones with a recent systemd > 240) can run at once.
+
+So, as 5 * 512 >> 1500, we can be confident that the new defaults
+accommodate most reasonable and existing setups.
+
+Since commit 46eb14b735b11927d4bdc2d1854c311af19de6d ("fs: fsnotify:
+account fsnotify metadata to kmemcg") the memory usage from the watch
+and queue overhead is accounted to the user's respective memory cgroup
+(i.e., for LXC containers, their memory limit), so we can do this with
+relative ease.
+
+Signed-off-by: Thomas Lamprecht <t.lamprecht at proxmox.com>
+---
+ fs/notify/inotify/inotify_user.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/fs/notify/inotify/inotify_user.c b/fs/notify/inotify/inotify_user.c
+index 3b7b8e95c98a..65d3f839633c 100644
+--- a/fs/notify/inotify/inotify_user.c
++++ b/fs/notify/inotify/inotify_user.c
+@@ -825,9 +825,9 @@ static int __init inotify_user_setup(void)
+ 	inotify_inode_mark_cachep = KMEM_CACHE(inotify_inode_mark,
+ 					       SLAB_PANIC|SLAB_ACCOUNT);
+ 
+-	inotify_max_queued_events = 16384;
+-	init_user_ns.ucount_max[UCOUNT_INOTIFY_INSTANCES] = 128;
+-	init_user_ns.ucount_max[UCOUNT_INOTIFY_WATCHES] = 8192;
++	inotify_max_queued_events = 8388608; // 2^23
++	init_user_ns.ucount_max[UCOUNT_INOTIFY_INSTANCES] = 65536; // 2^16
++	init_user_ns.ucount_max[UCOUNT_INOTIFY_WATCHES] = 4194304; // 2^22
+ 
+ 	return 0;
+ }
-- 
2.20.1