Memory behavior on Amazon Linux AMI release 2018.03
We observe increasing memory usage on our EC2 instances over time.
After two weeks we have to reboot the systems.

Some Docker containers run on these machines. Let's have a look
with 'free -m' after 14 days (I have stopped the Docker daemon at this point):



$ free -m
             total       used       free     shared    buffers     cached
Mem:          7977       7852        124          0          4        573
-/+ buffers/cache:       7273        703
Swap:            0          0          0
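The "-/+ buffers/cache" used figure that free reports can be reproduced directly from /proc/meminfo as MemTotal - MemFree - Buffers - Cached. A minimal sketch, using the values posted further down in this question (on a live box you would pipe in a real 'cat /proc/meminfo' instead):

```shell
# Recompute free's "-/+ buffers/cache: used" column from meminfo fields.
# The heredoc below holds values copied from this question, so the result
# should match the 7273 MB that free printed.
awk '/^MemTotal:/{t=$2} /^MemFree:/{f=$2} /^Buffers:/{b=$2} /^Cached:/{c=$2}
     END{printf "app used: %d MiB\n", (t-f-b-c)/1024}' <<'EOF'
MemTotal:        8168828 kB
MemFree:          129736 kB
Buffers:            5116 kB
Cached:           585504 kB
EOF
```

The point of the calculation: this "used" number counts everything that is neither free nor page cache, which includes kernel allocations, not just process memory.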


Now I run 'ps_mem':



Private  +   Shared  =  RAM used        Program

124.0 KiB + 64.5 KiB = 188.5 KiB agetty
140.0 KiB + 60.5 KiB = 200.5 KiB acpid
180.0 KiB + 41.5 KiB = 221.5 KiB rngd
200.0 KiB + 205.5 KiB = 405.5 KiB lvmpolld
320.0 KiB + 89.5 KiB = 409.5 KiB irqbalance
320.0 KiB + 232.5 KiB = 552.5 KiB lvmetad
476.0 KiB + 99.5 KiB = 575.5 KiB auditd
624.0 KiB + 105.5 KiB = 729.5 KiB init
756.0 KiB + 72.5 KiB = 828.5 KiB crond
292.0 KiB + 622.5 KiB = 914.5 KiB udevd (3)
560.0 KiB + 377.0 KiB = 937.0 KiB mingetty (6)
1.0 MiB + 194.5 KiB = 1.2 MiB ntpd
1.1 MiB + 256.0 KiB = 1.4 MiB dhclient (2)
2.5 MiB + 103.5 KiB = 2.6 MiB rsyslogd
3.1 MiB + 259.0 KiB = 3.4 MiB sendmail.sendmail (2)
3.0 MiB + 609.0 KiB = 3.6 MiB sudo (2)
3.6 MiB + 1.6 MiB = 5.2 MiB bash (5)
2.9 MiB + 4.3 MiB = 7.2 MiB sshd (9)
14.5 MiB + 413.5 KiB = 14.9 MiB dmeventd
---------------------------------
45.4 MiB
=================================


Now I try to allocate new memory with the 'stress' tool (http://people.seas.harvard.edu/~apw/stress/):



$ stress --vm 1 --vm-bytes 1G --timeout 10s --verbose
stress: info: [11120] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [11120] using backoff sleep of 3000us
stress: dbug: [11120] setting timeout to 10s
stress: dbug: [11120] --> hogvm worker 1 [11121] forked
stress: dbug: [11121] allocating 1073741824 bytes ...
stress: FAIL: [11121] (494) hogvm malloc failed: Cannot allocate memory
stress: FAIL: [11120] (394) <-- worker 11121 returned error 1
stress: WARN: [11120] (396) now reaping child worker processes
stress: FAIL: [11120] (451) failed run completed in 0s


==> 'stress' is not able to allocate 1G of new memory.
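One thing worth checking at this point is the kernel's overcommit policy, since that is what decides whether a 1 GiB malloc() is granted up front. A quick sketch, assuming a stock kernel with the default heuristic mode (vm.overcommit_memory = 0), where oversized requests are refused when little memory is free or reclaimable:

```shell
# 0 = heuristic overcommit, 1 = always allow, 2 = strict accounting
cat /proc/sys/vm/overcommit_memory
# The fields the heuristic and strict modes look at:
grep -E '^(MemAvailable|CommitLimit|Committed_AS)' /proc/meminfo
```

If the mode is 0 and MemAvailable is far below 1 GiB, the malloc failure is expected behavior rather than a separate bug.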



But I do not understand where all my memory has gone.



Here is the output of 'top' (it tells a similar story to ps_mem):



Tasks: 107 total,   1 running,  66 sleeping,   0 stopped,   0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8168828k total, 8045784k used, 123044k free, 5656k buffers
Swap: 0k total, 0k used, 0k free, 589372k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2030 root 20 0 102m 17m 5812 S 0.0 0.2 1:21.36 dmeventd
11145 root 20 0 82664 6604 5752 S 0.0 0.1 0:00.00 sshd
11130 root 20 0 183m 4472 3824 S 0.0 0.1 0:00.00 sudo
18339 ec2-user 20 0 114m 3896 1744 S 0.0 0.0 0:00.08 bash
2419 root 20 0 241m 3552 1188 S 0.0 0.0 2:07.85 rsyslogd
11146 sshd 20 0 80588 3440 2612 S 0.0 0.0 0:00.00 sshd
11131 root 20 0 112m 3288 2924 S 0.0 0.0 0:00.00 bash
17134 root 20 0 117m 3084 2008 S 0.0 0.0 0:00.00 sshd
17148 ec2-user 20 0 112m 2992 2620 S 0.0 0.0 0:00.01 bash
2605 root 20 0 85496 2776 1064 S 0.0 0.0 0:21.44 sendmail
2614 smmsp 20 0 81088 2704 1208 S 0.0 0.0 0:00.17 sendmail
15228 root 20 0 112m 2632 2228 S 0.0 0.0 0:00.02 bash
1 root 20 0 19684 2376 2068 S 0.0 0.0 0:01.91 init
2626 root 20 0 118m 2276 1644 S 0.0 0.0 0:02.45 crond
2233 root 20 0 9412 2244 1748 S 0.0 0.0 0:00.49 dhclient
11147 root 20 0 15364 2176 1856 R 0.0 0.0 0:00.00 top
2584 ntp 20 0 113m 2128 1308 S 0.0 0.0 0:49.60 ntpd
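For completeness, the same per-process view sorted by resident size can be obtained non-interactively (assuming a procps-style ps, as shipped on Amazon Linux):

```shell
# Largest resident-set-size processes first; equivalent to pressing
# SHIFT-M in top, but scriptable.
ps aux --sort=-rss | head -n 6
```

On this machine that again tops out at dmeventd with ~17 MB, confirming that no user-space process accounts for the missing gigabytes.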


So where are those 7273 MB actually being consumed?



$ cat /proc/meminfo



MemTotal:        8168828 kB
MemFree: 129736 kB
MemAvailable: 567464 kB
Buffers: 5116 kB
Cached: 585504 kB
SwapCached: 0 kB
Active: 476920 kB
Inactive: 130228 kB
Active(anon): 22340 kB
Inactive(anon): 80 kB
Active(file): 454580 kB
Inactive(file): 130148 kB
Unevictable: 17620 kB
Mlocked: 17620 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 34088 kB
Mapped: 14668 kB
Shmem: 80 kB
Slab: 5625876 kB
SReclaimable: 142240 kB
SUnreclaim: 5483636 kB
KernelStack: 2016 kB
PageTables: 4384 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4084412 kB
Committed_AS: 109856 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 7286784 kB
DirectMap2M: 1101824 kB


Output of 'slabtop':



 Active / Total Objects (% used)    : 8445426 / 11391340 (74.1%)
Active / Total Slabs (% used) : 533926 / 533926 (100.0%)
Active / Total Caches (% used) : 78 / 101 (77.2%)
Active / Total Size (% used) : 5033325.10K / 5414048.91K (93.0%)
Minimum / Average / Maximum Object : 0.01K / 0.47K / 9.44K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3216990 525372 16% 0.09K 76595 42 306380K kmalloc-96
3101208 3101011 99% 1.00K 219166 32 7013312K kmalloc-1024
2066976 2066841 99% 0.32K 86124 24 688992K taskstats
1040384 1039935 99% 0.03K 8128 128 32512K kmalloc-32
1038080 1037209 99% 0.06K 16220 64 64880K kmalloc-64
516719 516719 100% 2.09K 113785 15 3641120K request_queue
223356 22610 10% 0.57K 7977 28 127632K radix_tree_node
52740 39903 75% 0.13K 1758 30 7032K kernfs_node_cache
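The roughly half a million live 'request_queue' objects stand out: each one normally belongs to a block device, so a simple sanity check is to compare against the block devices that actually exist (a sketch, assuming sysfs is mounted; the interpretation that device-mapper/loop churn from Docker leaks these queues is a hypothesis, not established here):

```shell
# Count the block devices the system currently knows about. A handful of
# devices versus ~517k live request_queue objects would indicate queues
# being allocated and never freed on the kernel side.
ls /sys/block 2>/dev/null | wc -l
```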


Then I rebooted the machine and ran 'perf kmem record --caller'. After a few seconds I had to cancel it because perf.data had already grown beyond 1 GB. 'perf kmem stat --caller' produced the following output:



---------------------------------------------------------------------------------------------------------
Callsite | Total_alloc/Per | Total_req/Per | Hit | Ping-pong | Frag
---------------------------------------------------------------------------------------------------------
dm_open+2b | 240/8 | 120/4 | 30 | 0 | 50,000%
match_number+2a | 120/8 | 60/4 | 15 | 0 | 50,000%
rebuild_sched_domains_locked+dd | 72/8 | 36/4 | 9 | 0 | 50,000%
dm_btree_del+2b | 40960/4096 | 20720/2072 | 10 | 0 | 49,414%
sk_prot_alloc+7c | 86016/2048 | 44016/1048 | 42 | 0 | 48,828%
hugetlb_cgroup_css_alloc+29 | 2560/512 | 1320/264 | 5 | 0 | 48,438%
blk_throtl_init+2a | 15360/1024 | 8040/536 | 15 | 0 | 47,656%
bpf_int_jit_compile+6e | 40960/8192 | 21440/4288 | 5 | 0 | 47,656%
mem_cgroup_css_alloc+2f | 10240/2048 | 5360/1072 | 5 | 2 | 47,656%
alloc_disk_node+32 | 30720/2048 | 16560/1104 | 15 | 0 | 46,094%
mem_cgroup_css_alloc+166 | 5120/1024 | 2800/560 | 5 | 2 | 45,312%
blkcg_css_alloc+3b | 2560/512 | 1400/280 | 5 | 0 | 45,312%
kobject_uevent_env+be | 1224704/4096 | 698464/2336 | 299 | 0 | 42,969%
uevent_show+81 | 675840/4096 | 385440/2336 | 165 | 0 | 42,969%
blkg_alloc+3c | 40960/1024 | 23680/592 | 40 | 0 | 42,188%
dm_table_create+34 | 7680/512 | 4560/304 | 15 | 0 | 40,625%
journal_init_common+34 | 30720/2048 | 18360/1224 | 15 | 0 | 40,234%
throtl_pd_alloc+2b | 56320/1024 | 34320/624 | 55 | 0 | 39,062%
strndup_user+3f | 14496/17 | 8917/10 | 829 | 0 | 38,486%
alloc_trial_cpuset+19 | 14336/1024 | 8848/632 | 14 | 0 | 38,281%
cpuset_css_alloc+29 | 5120/1024 | 3160/632 | 5 | 0 | 38,281%
proc_reg_open+33 | 48768/64 | 30480/40 | 762 | 0 | 37,500%
get_mountpoint+73 | 26432/64 | 16520/40 | 413 | 0 | 37,500%
alloc_pipe_info+aa | 219136/1024 | 136960/640 | 214 | 12 | 37,500%
alloc_fair_sched_group+f0 | 38400/512 | 24000/320 | 75 | 0 | 37,500%
__alloc_workqueue_key+77 | 15360/512 | 9600/320 | 30 | 0 | 37,500%
newary+69 | 15360/512 | 9600/320 | 30 | 0 | 37,500%
disk_expand_part_tbl+74 | 960/64 | 600/40 | 15 | 0 | 37,500%
alloc_dax+29 | 120/8 | 75/5 | 15 | 0 | 37,500%
kernfs_mount_ns+3c | 320/64 | 200/40 | 5 | 0 | 37,500%
bucket_table_alloc+be | 16640/978 | 10496/617 | 17 | 12 | 36,923%
__alloc_workqueue_key+250 | 7680/512 | 4920/328 | 15 | 0 | 35,938%
journal_init_common+1b9 | 61440/4096 | 40920/2728 | 15 | 0 | 33,398%
kernfs_fop_write+b3 | 2248/11 | 1507/7 | 191 | 0 | 32,963%
__alloc_skb+72 | 3698176/876 | 2578048/611 | 4217 | 115 | 30,289%
alloc_pid+33 | 80896/128 | 56944/90 | 632 | 43 | 29,608%
alloc_pipe_info+3d | 41088/192 | 29104/136 | 214 | 12 | 29,167%
device_create_groups_vargs+59 | 15360/1024 | 10920/728 | 15 | 0 | 28,906%
sget_userns+ee | 112640/2048 | 80960/1472 | 55 | 8 | 28,125%
key_alloc+13e | 480/96 | 350/70 | 5 | 0 | 27,083%
load_elf_phdrs+49 | 153600/602 | 113176/443 | 255 | 0 | 26,318%
alloc_vfsmnt+aa | 11752/22 | 8765/17 | 513 | 130 | 25,417%
__memcg_init_list_lru_node+6b | 35200/32 | 26400/24 | 1100 | 160 | 25,000%
proc_self_get_link+96 | 12352/16 | 9264/12 | 772 | 0 | 25,000%
memcg_kmem_get_cache+9e | 46336/64 | 34752/48 | 724 | 0 | 25,000%
kernfs_fop_open+286 | 45056/64 | 33792/48 | 704 | 0 | 25,000%
insert_shadow+27 | 16544/32 | 12408/24 | 517 | 3 | 25,000%
allocate_cgrp_cset_links+70 | 28800/64 | 21600/48 | 450 | 0 | 25,000%

ext4_ext_remove_space+8db | 12352/64 | 9264/48 | 193 | 0 | 25,000%
dev_exception_add+25 | 5760/64 | 4320/48 | 90 | 0 | 25,000%
mempool_create_node+4e | 8160/96 | 6120/72 | 85 | 0 | 25,000%
alloc_rt_sched_group+11d | 7200/96 | 5400/72 | 75 | 0 | 25,000%
copy_semundo+60 | 2400/32 | 1800/24 | 75 | 7 | 25,000%
ext4_readdir+825 | 3264/64 | 2448/48 | 51 | 0 | 25,000%
alloc_worker+1d | 8640/192 | 6480/144 | 45 | 0 | 25,000%
alloc_workqueue_attrs+27 | 1440/32 | 1080/24 | 45 | 0 | 25,000%
ext4_fill_super+57 | 30720/2048 | 23040/1536 | 15 | 0 | 25,000%
apply_wqattrs_prepare+32 | 960/64 | 720/48 | 15 | 0 | 25,000%
inotify_handle_event+68 | 960/64 | 720/48 | 15 | 1 | 25,000%
blk_alloc_queue_stats+1b | 480/32 | 360/24 | 15 | 0 | 25,000%
proc_self_get_link+57 | 160/16 | 120/12 | 10 | 0 | 25,000%
disk_seqf_start+25 | 256/32 | 192/24 | 8 | 0 | 25,000%
memcg_write_event_control+8a | 960/192 | 720/144 | 5 | 0 | 25,000%
eventfd_file_create.part.3+28 | 320/64 | 240/48 | 5 | 0 | 25,000%
do_seccomp+249 | 160/32 | 120/24 | 5 | 0 | 25,000%
mem_cgroup_oom_register_event+29 | 160/32 | 120/24 | 5 | 0 | 25,000%
bucket_table_alloc+32 | 512/512 | 384/384 | 1 | 0 | 25,000%
__kernfs_new_node+25 | 42424/33 | 32046/24 | 1284 | 2 | 24,463%
single_open_size+2f | 45056/4096 | 35024/3184 | 11 | 0 | 22,266%
alloc_fdtable+ae | 544/90 | 424/70 | 6 | 0 | 22,059%
__register_sysctl_paths+10f | 2304/256 | 1800/200 | 9 | 0 | 21,875%
pskb_expand_head+71 | 10240/2048 | 8000/1600 | 5 | 0 | 21,875%
cpuacct_css_alloc+28 | 1280/256 | 1000/200 | 5 | 0 | 21,875%
shmem_symlink+a5 | 1440/13 | 1135/10 | 105 | 1 | 21,181%
kernfs_fop_open+d5 | 135168/192 | 107008/152 | 704 | 0 | 20,833%
mb_cache_create+2c | 2880/192 | 2280/152 | 15 | 0 | 20,833%
crypto_create_tfm+32 | 1440/96 | 1140/76 | 15 | 0 | 20,833%
bpf_prog_alloc+9d | 960/192 | 760/152 | 5 | 0 | 20,833%
pidlist_array_load+172 | 768/192 | 608/152 | 4 | 0 | 20,833%
cgroup_mkdir+ca | 46080/1024 | 36540/812 | 45 | 2 | 20,703%
__proc_create+a1 | 17280/192 | 13740/152 | 90 | 0 | 20,486%
__nf_conntrack_alloc+4e | 20800/320 | 16640/256 | 65 | 2 | 20,000%
devcgroup_css_alloc+1b | 1280/256 | 1040/208 | 5 | 0 | 18,750%
ext4_htree_store_dirent+35 | 27584/77 | 22770/64 | 354 | 0 | 17,452%
copy_ipcs+63 | 5120/1024 | 4240/848 | 5 | 4 | 17,188%
__list_lru_init+225 | 10560/96 | 8800/80 | 110 | 16 | 16,667%
device_private_init+1f | 5760/192 | 4800/160 | 30 | 0 | 16,667%
alloc_rt_sched_group+ef | 153600/2048 | 129600/1728 | 75 | 0 | 15,625%
ext4_fill_super+2907 | 1920/128 | 1620/108 | 15 | 0 | 15,625%
__d_alloc+169 | 107648/115 | 91360/97 | 934 | 0 | 15,131%
copy_utsname+85 | 2560/512 | 2200/440 | 5 | 4 | 14,062%
kobject_set_name_vargs+1e | 11904/66 | 10261/57 | 179 | 33 | 13,802%
kasprintf+3a | 11744/91 | 10196/79 | 129 | 33 | 13,181%
prepare_creds+21 | 191808/192 | 167832/168 | 999 | 31 | 12,500%
__seq_open_private+1c | 16896/64 | 14784/56 | 264 | 0 | 12,500%
start_this_handle+2da | 29440/256 | 25760/224 | 115 | 90 | 12,500%
load_elf_binary+1e8 | 3520/32 | 3080/28 | 110 | 0 | 12,500%
alloc_fair_sched_group+11d | 38400/512 | 33600/448 | 75 | 0 | 12,500%

__kthread_create_on_node+5e | 3840/64 | 3360/56 | 60 | 0 | 12,500%
wb_congested_get_create+86 | 2560/64 | 2240/56 | 40 | 0 | 12,500%
bioset_create+2e | 3840/128 | 3360/112 | 30 | 0 | 12,500%
kobj_map+83 | 960/64 | 840/56 | 15 | 0 | 12,500%
ext4_mb_init+54 | 960/64 | 840/56 | 15 | 0 | 12,500%
ext4_mb_init+2c | 480/32 | 420/28 | 15 | 0 | 12,500%
alloc_fdtable+4b | 384/64 | 336/56 | 6 | 0 | 12,500%
unix_bind+1a2 | 640/128 | 560/112 | 5 | 0 | 12,500%
kobject_get_path+56 | 30016/100 | 26522/88 | 299 | 0 | 11,640%
__register_sysctl_table+51 | 20448/151 | 18288/135 | 135 | 0 | 10,563%
create_cache+3e | 45696/384 | 40936/344 | 119 | 33 | 10,417%
kvmalloc_node+3e | 2113536/25161 | 1897896/22594 | 84 | 0 | 10,203%
dev_create+ab | 1440/96 | 1300/86 | 15 | 0 | 9,722%
__anon_vma_prepare+d2 | 290576/88 | 264160/80 | 3302 | 81 | 9,091%
anon_vma_fork+5e | 166672/88 | 151520/80 | 1894 | 187 | 9,091%
kthread+3f | 5760/96 | 5280/88 | 60 | 0 | 8,333%
thin_ctr+6f | 2880/192 | 2640/176 | 15 | 0 | 8,333%
sock_alloc_inode+18 | 492096/704 | 452952/648 | 699 | 11 | 7,955%
jbd2_journal_add_journal_head+67 | 48480/120 | 45248/112 | 404 | 1 | 6,667%
do_execveat_common.isra.31+c0 | 37120/256 | 34800/240 | 145 | 6 | 6,250%
kernfs_iattrs.isra.4+59 | 10112/128 | 9480/120 | 79 | 0 | 6,250%
shmem_fill_super+25 | 3200/128 | 3000/120 | 25 | 0 | 6,250%
bdi_alloc_node+2a | 15360/1024 | 14400/960 | 15 | 0 | 6,250%
alloc_mnt_ns+54 | 1152/128 | 1080/120 | 9 | 4 | 6,250%
alloc_fair_sched_group+29 | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_fair_sched_group+4e | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_rt_sched_group+29 | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_rt_sched_group+4d | 640/128 | 600/120 | 5 | 0 | 6,250%
__register_sysctl_table+434 | 43776/270 | 41157/254 | 162 | 0 | 5,983%
__kernfs_new_node+42 | 839528/136 | 790144/128 | 6173 | 1025 | 5,882%
mqueue_alloc_inode+16 | 4800/960 | 4520/904 | 5 | 4 | 5,833%
bpf_prepare_filter+24b | 40960/8192 | 38920/7784 | 5 | 0 | 4,980%
bpf_convert_filter+57 | 20480/4096 | 19460/3892 | 5 | 0 | 4,980%
bpf_prepare_filter+111 | 10240/2048 | 9730/1946 | 5 | 0 | 4,980%
ep_alloc+3d | 24576/192 | 23552/184 | 128 | 11 | 4,167%
inet_twsk_alloc+3a | 1736/248 | 1680/240 | 7 | 0 | 3,226%
mm_alloc+16 | 296960/2048 | 290000/2000 | 145 | 10 | 2,344%
copy_process.part.40+9e6 | 202752/2048 | 198000/2000 | 99 | 11 | 2,344%
mempool_create_node+f3 | 235920/425 | 230520/415 | 555 | 0 | 2,289%
dax_alloc_inode+16 | 11520/768 | 11280/752 | 15 | 0 | 2,083%
bdev_alloc_inode+16 | 12480/832 | 12240/816 | 15 | 0 | 1,923%
ext4_find_extent+290 | 423552/99 | 415440/97 | 4243 | 0 | 1,915%
radix_tree_node_alloc.constprop.19+78 | 5331336/584 | 5258304/576 | 9129 | 8 | 1,370%
alloc_inode+66 | 528352/608 | 521400/600 | 869 | 20 | 1,316%
proc_alloc_inode+16 | 928200/680 | 917280/672 | 1365 | 45 | 1,176%
copy_process.part.40+95d | 483648/2112 | 478152/2088 | 229 | 8 | 1,136%
shmem_alloc_inode+16 | 326096/712 | 322432/704 | 458 | 6 | 1,124%
copy_process.part.40+10fe | 234496/1024 | 232664/1016 | 229 | 12 | 0,781%
ext4_alloc_inode+17 | 647360/1088 | 642600/1080 | 595 | 1 | 0,735%
__vmalloc_node_range+d3 | 13192/30 | 13112/30 | 426 | 36 | 0,606%
sk_prot_alloc+2f | 544768/1127 | 542080/1122 | 483 | 6 | 0,493%
...

SUMMARY (SLAB allocator)
========================
Total bytes requested: 818.739.691
Total bytes allocated: 821.951.696
Total bytes freed: 763.705.848
Net total bytes allocated: 58.245.848
Total bytes wasted on internal fragmentation: 3.212.005
Internal fragmentation: 0,390778%
Cross CPU allocations: 28.844/10.157.339



  • try top -o %MEM

    – Ctx
    Jan 3 at 13:04
  • This is not the problem: I can sort the top output with SHIFT-M, but the most memory-consuming process is 'dmeventd' with 17 MB resident memory. The output above was also sorted by memory.

    – Thomas Seehofchen
    Jan 3 at 13:33
  • Ok, I see... Then what does cat /proc/meminfo say?

    – Ctx
    Jan 3 at 13:46
  • I added it to my initial question.

    – Thomas Seehofchen
    Jan 3 at 13:50
  • Ok, so slab is the problem (SUnreclaim: 5483636 kB). We need /proc/slabinfo now ;) This might be big; maybe you can identify which kernel driver allocates so much memory. I just saw that there is also a tool called slabtop, which might come in handy here.

    – Ctx
    Jan 3 at 13:54




















2















We observe an increasing memory usage of our ec2 instances over time.
After two weeks we have to reboot our systems.



On this machines run some docker containers. Let's have a look
with 'free -m' after 14 days(I stopped the docker daemon now):



$free -m
total used free shared buffers cached
Mem: 7977 7852 124 0 4 573
-/+ buffers/cache: 7273 703
Swap: 0 0 0


Now I run 'ps_mem':



Private  +   Shared  =  RAM used        Program

124.0 KiB + 64.5 KiB = 188.5 KiB agetty
140.0 KiB + 60.5 KiB = 200.5 KiB acpid
180.0 KiB + 41.5 KiB = 221.5 KiB rngd
200.0 KiB + 205.5 KiB = 405.5 KiB lvmpolld
320.0 KiB + 89.5 KiB = 409.5 KiB irqbalance
320.0 KiB + 232.5 KiB = 552.5 KiB lvmetad
476.0 KiB + 99.5 KiB = 575.5 KiB auditd
624.0 KiB + 105.5 KiB = 729.5 KiB init
756.0 KiB + 72.5 KiB = 828.5 KiB crond
292.0 KiB + 622.5 KiB = 914.5 KiB udevd (3)
560.0 KiB + 377.0 KiB = 937.0 KiB mingetty (6)
1.0 MiB + 194.5 KiB = 1.2 MiB ntpd
1.1 MiB + 256.0 KiB = 1.4 MiB dhclient (2)
2.5 MiB + 103.5 KiB = 2.6 MiB rsyslogd
3.1 MiB + 259.0 KiB = 3.4 MiB sendmail.sendmail (2)
3.0 MiB + 609.0 KiB = 3.6 MiB sudo (2)
3.6 MiB + 1.6 MiB = 5.2 MiB bash (5)
2.9 MiB + 4.3 MiB = 7.2 MiB sshd (9)
14.5 MiB + 413.5 KiB = 14.9 MiB dmeventd
---------------------------------
45.4 MiB
=================================


Now I try to allocate new memory with the 'stress' tool(http://people.seas.harvard.edu/~apw/stress/):



$ stress --vm 1 --vm-bytes 1G --timeout 10s --verbose
stress: info: [11120] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
stress: dbug: [11120] using backoff sleep of 3000us
stress: dbug: [11120] setting timeout to 10s
stress: dbug: [11120] --> hogvm worker 1 [11121] forked
stress: dbug: [11121] allocating 1073741824 bytes ...
stress: FAIL: [11121] (494) hogvm malloc failed: Cannot allocate memory
stress: FAIL: [11120] (394) <-- worker 11121 returned error 1
stress: WARN: [11120] (396) now reaping child worker processes
stress: FAIL: [11120] (451) failed run completed in 0s


==> 'stress' is not able to allocate 1G of new memory.



But I do not understand where all my memory is gone?!



Here comes the output of 'top' (similiar to ps_mem):



Tasks: 107 total,   1 running,  66 sleeping,   0 stopped,   0 zombie
Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8168828k total, 8045784k used, 123044k free, 5656k buffers
Swap: 0k total, 0k used, 0k free, 589372k cached

PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2030 root 20 0 102m 17m 5812 S 0.0 0.2 1:21.36 dmeventd
11145 root 20 0 82664 6604 5752 S 0.0 0.1 0:00.00 sshd
11130 root 20 0 183m 4472 3824 S 0.0 0.1 0:00.00 sudo
18339 ec2-user 20 0 114m 3896 1744 S 0.0 0.0 0:00.08 bash
2419 root 20 0 241m 3552 1188 S 0.0 0.0 2:07.85 rsyslogd
11146 sshd 20 0 80588 3440 2612 S 0.0 0.0 0:00.00 sshd
11131 root 20 0 112m 3288 2924 S 0.0 0.0 0:00.00 bash
17134 root 20 0 117m 3084 2008 S 0.0 0.0 0:00.00 sshd
17148 ec2-user 20 0 112m 2992 2620 S 0.0 0.0 0:00.01 bash
2605 root 20 0 85496 2776 1064 S 0.0 0.0 0:21.44 sendmail
2614 smmsp 20 0 81088 2704 1208 S 0.0 0.0 0:00.17 sendmail
15228 root 20 0 112m 2632 2228 S 0.0 0.0 0:00.02 bash
1 root 20 0 19684 2376 2068 S 0.0 0.0 0:01.91 init
2626 root 20 0 118m 2276 1644 S 0.0 0.0 0:02.45 crond
2233 root 20 0 9412 2244 1748 S 0.0 0.0 0:00.49 dhclient
11147 root 20 0 15364 2176 1856 R 0.0 0.0 0:00.00 top
2584 ntp 20 0 113m 2128 1308 S 0.0 0.0 0:49.60 ntpd


Where are these damned 7273MB memory consumed?



cat /proc/meminfo



MemTotal:        8168828 kB
MemFree: 129736 kB
MemAvailable: 567464 kB
Buffers: 5116 kB
Cached: 585504 kB
SwapCached: 0 kB
Active: 476920 kB
Inactive: 130228 kB
Active(anon): 22340 kB
Inactive(anon): 80 kB
Active(file): 454580 kB
Inactive(file): 130148 kB
Unevictable: 17620 kB
Mlocked: 17620 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 16 kB
Writeback: 0 kB
AnonPages: 34088 kB
Mapped: 14668 kB
Shmem: 80 kB
Slab: 5625876 kB
SReclaimable: 142240 kB
SUnreclaim: 5483636 kB
KernelStack: 2016 kB
PageTables: 4384 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 4084412 kB
Committed_AS: 109856 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 7286784 kB
DirectMap2M: 1101824 kB


Output of slabtop



 Active / Total Objects (% used)    : 8445426 / 11391340 (74.1%)
Active / Total Slabs (% used) : 533926 / 533926 (100.0%)
Active / Total Caches (% used) : 78 / 101 (77.2%)
Active / Total Size (% used) : 5033325.10K / 5414048.91K (93.0%)
Minimum / Average / Maximum Object : 0.01K / 0.47K / 9.44K

OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3216990 525372 16% 0.09K 76595 42 306380K kmalloc-96
3101208 3101011 99% 1.00K 219166 32 7013312K kmalloc-1024
2066976 2066841 99% 0.32K 86124 24 688992K taskstats
1040384 1039935 99% 0.03K 8128 128 32512K kmalloc-32
1038080 1037209 99% 0.06K 16220 64 64880K kmalloc-64
516719 516719 100% 2.09K 113785 15 3641120K request_queue
223356 22610 10% 0.57K 7977 28 127632K radix_tree_node
52740 39903 75% 0.13K 1758 30 7032K kernfs_node_cache


Now I rebooted the machine and did a 'perf kmem record --caller'. After some seconds I had to cancel because the file data.perf was already over 1GB lager. I did a 'perf kmem stat --caller' and here comes the output:



---------------------------------------------------------------------------------------------------------
Callsite | Total_alloc/Per | Total_req/Per | Hit | Ping-pong | Frag
---------------------------------------------------------------------------------------------------------
dm_open+2b | 240/8 | 120/4 | 30 | 0 | 50,000%
match_number+2a | 120/8 | 60/4 | 15 | 0 | 50,000%
rebuild_sched_domains_locked+dd | 72/8 | 36/4 | 9 | 0 | 50,000%
dm_btree_del+2b | 40960/4096 | 20720/2072 | 10 | 0 | 49,414%
sk_prot_alloc+7c | 86016/2048 | 44016/1048 | 42 | 0 | 48,828%
hugetlb_cgroup_css_alloc+29 | 2560/512 | 1320/264 | 5 | 0 | 48,438%
blk_throtl_init+2a | 15360/1024 | 8040/536 | 15 | 0 | 47,656%
bpf_int_jit_compile+6e | 40960/8192 | 21440/4288 | 5 | 0 | 47,656%
mem_cgroup_css_alloc+2f | 10240/2048 | 5360/1072 | 5 | 2 | 47,656%
alloc_disk_node+32 | 30720/2048 | 16560/1104 | 15 | 0 | 46,094%
mem_cgroup_css_alloc+166 | 5120/1024 | 2800/560 | 5 | 2 | 45,312%
blkcg_css_alloc+3b | 2560/512 | 1400/280 | 5 | 0 | 45,312%
kobject_uevent_env+be | 1224704/4096 | 698464/2336 | 299 | 0 | 42,969%
uevent_show+81 | 675840/4096 | 385440/2336 | 165 | 0 | 42,969%
blkg_alloc+3c | 40960/1024 | 23680/592 | 40 | 0 | 42,188%
dm_table_create+34 | 7680/512 | 4560/304 | 15 | 0 | 40,625%
journal_init_common+34 | 30720/2048 | 18360/1224 | 15 | 0 | 40,234%
throtl_pd_alloc+2b | 56320/1024 | 34320/624 | 55 | 0 | 39,062%
strndup_user+3f | 14496/17 | 8917/10 | 829 | 0 | 38,486%
alloc_trial_cpuset+19 | 14336/1024 | 8848/632 | 14 | 0 | 38,281%
cpuset_css_alloc+29 | 5120/1024 | 3160/632 | 5 | 0 | 38,281%
proc_reg_open+33 | 48768/64 | 30480/40 | 762 | 0 | 37,500%
get_mountpoint+73 | 26432/64 | 16520/40 | 413 | 0 | 37,500%
alloc_pipe_info+aa | 219136/1024 | 136960/640 | 214 | 12 | 37,500%
alloc_fair_sched_group+f0 | 38400/512 | 24000/320 | 75 | 0 | 37,500%
__alloc_workqueue_key+77 | 15360/512 | 9600/320 | 30 | 0 | 37,500%
newary+69 | 15360/512 | 9600/320 | 30 | 0 | 37,500%
disk_expand_part_tbl+74 | 960/64 | 600/40 | 15 | 0 | 37,500%
alloc_dax+29 | 120/8 | 75/5 | 15 | 0 | 37,500%
kernfs_mount_ns+3c | 320/64 | 200/40 | 5 | 0 | 37,500%
bucket_table_alloc+be | 16640/978 | 10496/617 | 17 | 12 | 36,923%
__alloc_workqueue_key+250 | 7680/512 | 4920/328 | 15 | 0 | 35,938%
journal_init_common+1b9 | 61440/4096 | 40920/2728 | 15 | 0 | 33,398%
kernfs_fop_write+b3 | 2248/11 | 1507/7 | 191 | 0 | 32,963%
__alloc_skb+72 | 3698176/876 | 2578048/611 | 4217 | 115 | 30,289%
alloc_pid+33 | 80896/128 | 56944/90 | 632 | 43 | 29,608%
alloc_pipe_info+3d | 41088/192 | 29104/136 | 214 | 12 | 29,167%
device_create_groups_vargs+59 | 15360/1024 | 10920/728 | 15 | 0 | 28,906%
sget_userns+ee | 112640/2048 | 80960/1472 | 55 | 8 | 28,125%
key_alloc+13e | 480/96 | 350/70 | 5 | 0 | 27,083%
load_elf_phdrs+49 | 153600/602 | 113176/443 | 255 | 0 | 26,318%
alloc_vfsmnt+aa | 11752/22 | 8765/17 | 513 | 130 | 25,417%
__memcg_init_list_lru_node+6b | 35200/32 | 26400/24 | 1100 | 160 | 25,000%
proc_self_get_link+96 | 12352/16 | 9264/12 | 772 | 0 | 25,000%
memcg_kmem_get_cache+9e | 46336/64 | 34752/48 | 724 | 0 | 25,000%
kernfs_fop_open+286 | 45056/64 | 33792/48 | 704 | 0 | 25,000%
insert_shadow+27 | 16544/32 | 12408/24 | 517 | 3 | 25,000%
allocate_cgrp_cset_links+70 | 28800/64 | 21600/48 | 450 | 0 | 25,000%

ext4_ext_remove_space+8db | 12352/64 | 9264/48 | 193 | 0 | 25,000%
dev_exception_add+25 | 5760/64 | 4320/48 | 90 | 0 | 25,000%
mempool_create_node+4e | 8160/96 | 6120/72 | 85 | 0 | 25,000%
alloc_rt_sched_group+11d | 7200/96 | 5400/72 | 75 | 0 | 25,000%
copy_semundo+60 | 2400/32 | 1800/24 | 75 | 7 | 25,000%
ext4_readdir+825 | 3264/64 | 2448/48 | 51 | 0 | 25,000%
alloc_worker+1d | 8640/192 | 6480/144 | 45 | 0 | 25,000%
alloc_workqueue_attrs+27 | 1440/32 | 1080/24 | 45 | 0 | 25,000%
ext4_fill_super+57 | 30720/2048 | 23040/1536 | 15 | 0 | 25,000%
apply_wqattrs_prepare+32 | 960/64 | 720/48 | 15 | 0 | 25,000%
inotify_handle_event+68 | 960/64 | 720/48 | 15 | 1 | 25,000%
blk_alloc_queue_stats+1b | 480/32 | 360/24 | 15 | 0 | 25,000%
proc_self_get_link+57 | 160/16 | 120/12 | 10 | 0 | 25,000%
disk_seqf_start+25 | 256/32 | 192/24 | 8 | 0 | 25,000%
memcg_write_event_control+8a | 960/192 | 720/144 | 5 | 0 | 25,000%
eventfd_file_create.part.3+28 | 320/64 | 240/48 | 5 | 0 | 25,000%
do_seccomp+249 | 160/32 | 120/24 | 5 | 0 | 25,000%
mem_cgroup_oom_register_event+29 | 160/32 | 120/24 | 5 | 0 | 25,000%
bucket_table_alloc+32 | 512/512 | 384/384 | 1 | 0 | 25,000%
__kernfs_new_node+25 | 42424/33 | 32046/24 | 1284 | 2 | 24,463%
single_open_size+2f | 45056/4096 | 35024/3184 | 11 | 0 | 22,266%
alloc_fdtable+ae | 544/90 | 424/70 | 6 | 0 | 22,059%
__register_sysctl_paths+10f | 2304/256 | 1800/200 | 9 | 0 | 21,875%
pskb_expand_head+71 | 10240/2048 | 8000/1600 | 5 | 0 | 21,875%
cpuacct_css_alloc+28 | 1280/256 | 1000/200 | 5 | 0 | 21,875%
shmem_symlink+a5 | 1440/13 | 1135/10 | 105 | 1 | 21,181%
kernfs_fop_open+d5 | 135168/192 | 107008/152 | 704 | 0 | 20,833%
mb_cache_create+2c | 2880/192 | 2280/152 | 15 | 0 | 20,833%
crypto_create_tfm+32 | 1440/96 | 1140/76 | 15 | 0 | 20,833%
bpf_prog_alloc+9d | 960/192 | 760/152 | 5 | 0 | 20,833%
pidlist_array_load+172 | 768/192 | 608/152 | 4 | 0 | 20,833%
cgroup_mkdir+ca | 46080/1024 | 36540/812 | 45 | 2 | 20,703%
__proc_create+a1 | 17280/192 | 13740/152 | 90 | 0 | 20,486%
__nf_conntrack_alloc+4e | 20800/320 | 16640/256 | 65 | 2 | 20,000%
devcgroup_css_alloc+1b | 1280/256 | 1040/208 | 5 | 0 | 18,750%
ext4_htree_store_dirent+35 | 27584/77 | 22770/64 | 354 | 0 | 17,452%
copy_ipcs+63 | 5120/1024 | 4240/848 | 5 | 4 | 17,188%
__list_lru_init+225 | 10560/96 | 8800/80 | 110 | 16 | 16,667%
device_private_init+1f | 5760/192 | 4800/160 | 30 | 0 | 16,667%
alloc_rt_sched_group+ef | 153600/2048 | 129600/1728 | 75 | 0 | 15,625%
ext4_fill_super+2907 | 1920/128 | 1620/108 | 15 | 0 | 15,625%
__d_alloc+169 | 107648/115 | 91360/97 | 934 | 0 | 15,131%
copy_utsname+85 | 2560/512 | 2200/440 | 5 | 4 | 14,062%
kobject_set_name_vargs+1e | 11904/66 | 10261/57 | 179 | 33 | 13,802%
kasprintf+3a | 11744/91 | 10196/79 | 129 | 33 | 13,181%
prepare_creds+21 | 191808/192 | 167832/168 | 999 | 31 | 12,500%
__seq_open_private+1c | 16896/64 | 14784/56 | 264 | 0 | 12,500%
start_this_handle+2da | 29440/256 | 25760/224 | 115 | 90 | 12,500%
load_elf_binary+1e8 | 3520/32 | 3080/28 | 110 | 0 | 12,500%
alloc_fair_sched_group+11d | 38400/512 | 33600/448 | 75 | 0 | 12,500%

__kthread_create_on_node+5e | 3840/64 | 3360/56 | 60 | 0 | 12,500%
wb_congested_get_create+86 | 2560/64 | 2240/56 | 40 | 0 | 12,500%
bioset_create+2e | 3840/128 | 3360/112 | 30 | 0 | 12,500%
kobj_map+83 | 960/64 | 840/56 | 15 | 0 | 12,500%
ext4_mb_init+54 | 960/64 | 840/56 | 15 | 0 | 12,500%
ext4_mb_init+2c | 480/32 | 420/28 | 15 | 0 | 12,500%
alloc_fdtable+4b | 384/64 | 336/56 | 6 | 0 | 12,500%
unix_bind+1a2 | 640/128 | 560/112 | 5 | 0 | 12,500%
kobject_get_path+56 | 30016/100 | 26522/88 | 299 | 0 | 11,640%
__register_sysctl_table+51 | 20448/151 | 18288/135 | 135 | 0 | 10,563%
create_cache+3e | 45696/384 | 40936/344 | 119 | 33 | 10,417%
kvmalloc_node+3e | 2113536/25161 | 1897896/22594 | 84 | 0 | 10,203%
dev_create+ab | 1440/96 | 1300/86 | 15 | 0 | 9,722%
__anon_vma_prepare+d2 | 290576/88 | 264160/80 | 3302 | 81 | 9,091%
anon_vma_fork+5e | 166672/88 | 151520/80 | 1894 | 187 | 9,091%
kthread+3f | 5760/96 | 5280/88 | 60 | 0 | 8,333%
thin_ctr+6f | 2880/192 | 2640/176 | 15 | 0 | 8,333%
sock_alloc_inode+18 | 492096/704 | 452952/648 | 699 | 11 | 7,955%
jbd2_journal_add_journal_head+67 | 48480/120 | 45248/112 | 404 | 1 | 6,667%
do_execveat_common.isra.31+c0 | 37120/256 | 34800/240 | 145 | 6 | 6,250%
kernfs_iattrs.isra.4+59 | 10112/128 | 9480/120 | 79 | 0 | 6,250%
shmem_fill_super+25 | 3200/128 | 3000/120 | 25 | 0 | 6,250%
bdi_alloc_node+2a | 15360/1024 | 14400/960 | 15 | 0 | 6,250%
alloc_mnt_ns+54 | 1152/128 | 1080/120 | 9 | 4 | 6,250%
alloc_fair_sched_group+29 | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_fair_sched_group+4e | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_rt_sched_group+29 | 640/128 | 600/120 | 5 | 0 | 6,250%
alloc_rt_sched_group+4d | 640/128 | 600/120 | 5 | 0 | 6,250%
__register_sysctl_table+434 | 43776/270 | 41157/254 | 162 | 0 | 5,983%
__kernfs_new_node+42 | 839528/136 | 790144/128 | 6173 | 1025 | 5,882%
mqueue_alloc_inode+16 | 4800/960 | 4520/904 | 5 | 4 | 5,833%
bpf_prepare_filter+24b | 40960/8192 | 38920/7784 | 5 | 0 | 4,980%
bpf_convert_filter+57 | 20480/4096 | 19460/3892 | 5 | 0 | 4,980%
bpf_prepare_filter+111 | 10240/2048 | 9730/1946 | 5 | 0 | 4,980%
ep_alloc+3d | 24576/192 | 23552/184 | 128 | 11 | 4,167%
inet_twsk_alloc+3a | 1736/248 | 1680/240 | 7 | 0 | 3,226%
mm_alloc+16 | 296960/2048 | 290000/2000 | 145 | 10 | 2,344%
copy_process.part.40+9e6 | 202752/2048 | 198000/2000 | 99 | 11 | 2,344%
mempool_create_node+f3 | 235920/425 | 230520/415 | 555 | 0 | 2,289%
dax_alloc_inode+16 | 11520/768 | 11280/752 | 15 | 0 | 2,083%
bdev_alloc_inode+16 | 12480/832 | 12240/816 | 15 | 0 | 1,923%
ext4_find_extent+290 | 423552/99 | 415440/97 | 4243 | 0 | 1,915%
radix_tree_node_alloc.constprop.19+78 | 5331336/584 | 5258304/576 | 9129 | 8 | 1,370%
alloc_inode+66 | 528352/608 | 521400/600 | 869 | 20 | 1,316%
proc_alloc_inode+16 | 928200/680 | 917280/672 | 1365 | 45 | 1,176%
copy_process.part.40+95d | 483648/2112 | 478152/2088 | 229 | 8 | 1,136%
shmem_alloc_inode+16 | 326096/712 | 322432/704 | 458 | 6 | 1,124%
copy_process.part.40+10fe | 234496/1024 | 232664/1016 | 229 | 12 | 0,781%
ext4_alloc_inode+17 | 647360/1088 | 642600/1080 | 595 | 1 | 0,735%
__vmalloc_node_range+d3 | 13192/30 | 13112/30 | 426 | 36 | 0,606%
sk_prot_alloc+2f | 544768/1127 | 542080/1122 | 483 | 6 | 0,493%
...

SUMMARY (SLAB allocator)
========================
Total bytes requested: 818.739.691
Total bytes allocated: 821.951.696
Total bytes freed: 763.705.848
Net total bytes allocated: 58.245.848
Total bytes wasted on internal fragmentation: 3.212.005
Internal fragmentation: 0,390778%
Cross CPU allocations: 28.844/10.157.339









share|improve this question

























  • try top -o %MEM

    – Ctx
    Jan 3 at 13:04













  • This is not the problem: I can sort the top-output with SHIFT-M, however most memory- comsuming process is 'dmeventd' with 17MB RES-memory. The output above was also ordered after memory.

    – Thomas Seehofchen
    Jan 3 at 13:33













  • Ok, I see... Then what does cat /proc/meminfo say?

    – Ctx
    Jan 3 at 13:46











  • I added it to my initial question.

    – Thomas Seehofchen
    Jan 3 at 13:50











  • Ok, so slab is the problem (SUnreclaim: 5483636 kB). We need /proc/slabinfo now ;) It might be big, but maybe you can identify yourself which kernel driver allocates so much memory. I just saw that there is also a tool called slabtop, which might come in handy here.

    – Ctx
    Jan 3 at 13:54
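
Following up on the slabtop suggestion: the per-cache footprint can also be pulled non-interactively from `/proc/slabinfo` (root required). A sketch, assuming the standard slabinfo 2.x column layout:

```shell
# Largest slab caches by footprint, computed as <num_objs> * <objsize>.
# /proc/slabinfo columns: name active_objs num_objs objsize objperslab ...
# (the first two lines are headers, hence NR > 2)
awk 'NR > 2 {printf "%10.1f MiB  %s\n", $3 * $4 / 1048576, $1}' /proc/slabinfo \
  | sort -rn | head -n 10
```

Alternatively, `slabtop -o -s c` prints a single snapshot sorted by cache size, which is what produced the slabtop output shown in the question.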


















Tags: memory-leaks out-of-memory






edited Jan 3 at 14:52 by Thomas Seehofchen
asked Jan 3 at 12:59 by Thomas Seehofchen
  • try top -o %MEM

    – Ctx
    Jan 3 at 13:04

  • This is not the problem: I can sort the top output with SHIFT-M, but the most memory-consuming process is 'dmeventd' with 17 MB RES memory. The output above was also sorted by memory.

    – Thomas Seehofchen
    Jan 3 at 13:33

  • Ok, I see... Then what does cat /proc/meminfo say?

    – Ctx
    Jan 3 at 13:46

  • I added it to my initial question.

    – Thomas Seehofchen
    Jan 3 at 13:50

  • Ok, so slab is the problem (SUnreclaim: 5483636 kB). We need /proc/slabinfo now ;) This might be big, but maybe you can identify yourself which kernel driver allocates so much memory. I just saw that there is also a tool called slabtop, which might come in handy here.

    – Ctx
    Jan 3 at 13:54
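The diagnostic path suggested in the comments boils down to: confirm that unreclaimable slab memory is the consumer, then find the cache and call site responsible. A sketch of those steps (slabtop's `-o`/`-s` flags are from procps and may differ per version; the perf step assumes a perf build with kmem support):

```shell
# 1. Is slab memory (rather than process memory) eating the RAM?
#    SUnreclaim is kernel slab memory that cannot be reclaimed.
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo

# 2. Which slab cache holds the memory? -o prints once and exits,
#    -s c sorts by cache size.
sudo slabtop -o -s c | head -n 20

# 3. Raw per-cache counters, if slabtop is unavailable.
sudo head -n 30 /proc/slabinfo

# 4. Attribute allocations to kernel call sites, as in the
#    caller table shown above.
sudo perf kmem record -- sleep 10
sudo perf kmem stat --caller
```

Step 1 is cheap enough to run from cron and graph over the two weeks, which would show whether SUnreclaim alone accounts for the growth.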