Memory Resource Controller

NOTE: This document is hopelessly outdated and asks for a complete
rewrite. It still contains useful information, so we are keeping it
here, but make sure to check the current code if you need a deeper
understanding.

NOTE: The Memory Resource Controller is generically referred to as the
memory controller in this document. Do not confuse the memory controller
used here with the memory controller that is used in hardware.

(For editors)
In this document:
When we mention a cgroup (cgroupfs directory) with the memory controller,
we call it a "memory cgroup". In git logs and source code, you'll see
that patch titles and function names tend to use "memcg".
In this document, we avoid using it.

Benefits and Purpose of the memory controller

The memory controller isolates the memory behaviour of a group of tasks
from the rest of the system. The article on LWN [12] mentions some probable
uses of the memory controller. The memory controller can be used to

a. Isolate an application or a group of applications
   Memory-hungry applications can be isolated and limited to a smaller
   amount of memory.
b. Create a cgroup with a limited amount of memory; this can be used
   as a good alternative to booting with mem=XXXX.
c. Virtualization solutions can control the amount of memory they want
   to assign to a virtual machine instance.
d. A CD/DVD burner could control the amount of memory used by the
   rest of the system to ensure that burning does not fail due to lack
   of available memory.
e. There are several other use cases; find one or use the controller just
   for fun (to learn and hack on the VM subsystem).

Current Status: linux-2.6.34-mmotm (development version of April 2010)

Features:
 - accounting of anonymous pages, file caches, and swap caches, and limiting
   their usage.
 - pages are linked to per-memcg LRU lists exclusively; there is no global LRU.
 - optionally, memory+swap usage can be accounted and limited.
 - hierarchical accounting
 - soft limit
 - moving (recharging) charges when a task migrates is selectable.
 - usage threshold notifier
 - memory pressure notifier
 - oom-killer disable knob and oom-notifier
 - Root cgroup has no limit controls.

Kernel memory support is a work in progress, and the current version provides
basic functionality. (See Section 2.7)

Brief summary of control files.

 tasks                            # attach a task (thread) and show list of threads
 cgroup.procs                     # show list of processes
 cgroup.event_control             # an interface for event_fd()
 memory.usage_in_bytes            # show current usage for memory
                                    (See 5.5 for details)
 memory.memsw.usage_in_bytes      # show current usage for memory+Swap
                                    (See 5.5 for details)
 memory.limit_in_bytes            # set/show limit of memory usage
 memory.memsw.limit_in_bytes      # set/show limit of memory+Swap usage
 memory.failcnt                   # show the number of times memory usage hit the limit
 memory.memsw.failcnt             # show the number of times memory+Swap usage hit the limit
 memory.max_usage_in_bytes        # show max memory usage recorded
 memory.memsw.max_usage_in_bytes  # show max memory+Swap usage recorded
 memory.soft_limit_in_bytes       # set/show soft limit of memory usage
 memory.stat                      # show various statistics
 memory.use_hierarchy             # set/show hierarchical accounting enabled
 memory.force_empty               # trigger forced move charge to parent
 memory.pressure_level            # set memory pressure notifications
 memory.swappiness                # set/show swappiness parameter of vmscan
                                    (See sysctl's vm.swappiness)
 memory.move_charge_at_immigrate  # set/show controls of moving charges
 memory.oom_control               # set/show oom controls
 memory.numa_stat                 # show memory usage per NUMA node

 memory.kmem.limit_in_bytes       # set/show hard limit for kernel memory
 memory.kmem.usage_in_bytes       # show current kernel memory allocation
 memory.kmem.failcnt              # show the number of times kernel memory usage hit the limit
 memory.kmem.max_usage_in_bytes   # show max kernel memory usage recorded

 memory.kmem.tcp.limit_in_bytes   # set/show hard limit for tcp buf memory
 memory.kmem.tcp.usage_in_bytes   # show current tcp buf memory allocation
 memory.kmem.tcp.failcnt          # show the number of times tcp buf memory usage hit the limit
 memory.kmem.tcp.max_usage_in_bytes # show max tcp buf memory usage recorded

1. History

The memory controller has a long history. A request for comments for the memory
controller was posted by Balbir Singh [1]. At the time the RFC was posted
there were several implementations for memory control. The goal of the
RFC was to build consensus and agreement for the minimal features required
for memory control. The first RSS controller was posted by Balbir Singh [2]
in Feb 2007. Pavel Emelianov [3][4][5] has since posted three versions of the
RSS controller. At OLS, at the resource management BoF, everyone suggested
that we handle both page cache and RSS together. Another request was raised
to allow user space handling of OOM. The current memory controller is
at version 6; it combines both mapped (RSS) and unmapped Page
Cache Control [11].

2. Memory Control

Memory is a unique resource in the sense that it is present in a limited
amount. If a task requires a lot of CPU processing, the task can spread
its processing over a period of hours, days, months or years, but with
memory, the same physical memory needs to be reused to accomplish the task.

The memory controller implementation has been divided into phases. These
are:

1. Memory controller
2. mlock(2) controller
3. Kernel user memory accounting and slab control
4. user mappings length controller

The memory controller is the first controller developed.

2.1. Design

The core of the design is a counter called the page_counter. The
page_counter tracks the current memory usage and limit of the group of
processes associated with the controller. Each cgroup has a memory controller
specific data structure (mem_cgroup) associated with it.

2.2. Accounting

            +--------------------+
            |  mem_cgroup        |
            |  (page_counter)    |
            +--------------------+
             /            ^      \
            /             |       \
    +---------------+     |        +---------------+
    | mm_struct     |     |....    | mm_struct     |
    |               |     |        |               |
    +---------------+     |        +---------------+
                          |
                          +--------------+
                                         |
    +---------------+            +-------+-------+
    | page          +----------->|  page_cgroup  |
    |               |            |               |
    +---------------+            +---------------+

    (Figure 1: Hierarchy of Accounting)


Figure 1 shows the important aspects of the controller:

1. Accounting happens per cgroup.
2. Each mm_struct knows which cgroup it belongs to.
3. Each page has a pointer to its page_cgroup, which in turn knows the
   cgroup it belongs to.

The accounting is done as follows: mem_cgroup_charge_common() is invoked to
set up the necessary data structures and check if the cgroup that is being
charged is over its limit. If it is, then reclaim is invoked on the cgroup.
More details can be found in the reclaim section of this document.
If everything goes well, a per-page metadata structure called page_cgroup is
updated. page_cgroup has its own per-cgroup LRU.
(*) page_cgroup structures are allocated at boot/memory-hotplug time.

2.2.1 Accounting details

All mapped anon pages (RSS) and cache pages (Page Cache) are accounted.
Some pages which are never reclaimable and will not be on the LRU
are not accounted. We only account pages under usual VM management.

RSS pages are accounted at page fault unless they've already been accounted
for earlier. A file page is accounted as Page Cache when it's
inserted into the inode (radix-tree). While it's mapped into the page tables
of processes, duplicate accounting is carefully avoided.

An RSS page is unaccounted when it's fully unmapped. A PageCache page is
unaccounted when it's removed from the radix-tree. Even if RSS pages are fully
unmapped (by kswapd), they may exist as SwapCache in the system until they
are really freed. Such SwapCache pages are also accounted.
A swapped-in page is not accounted until it's mapped.

Note: The kernel does swapin-readahead and reads multiple swap entries at
once. This means swapped-in pages may belong to tasks other than the one
causing the page fault. So, we avoid accounting at swap-in I/O.

At page migration, accounting information is kept.

Note: we only account pages on the LRU, because our purpose is to control the
amount of used pages; not-on-LRU pages tend to be out of control from the VM's
point of view.

2.3 Shared Page Accounting

Shared pages are accounted on the basis of the first-touch approach. The
cgroup that first touches a page is accounted for the page. The principle
behind this approach is that a cgroup that aggressively uses a shared
page will eventually get charged for it (once it is uncharged from
the cgroup that brought it in -- this will happen on memory pressure).

But see section 8.2: when moving a task to another cgroup, its pages may
be recharged to the new cgroup, if move_charge_at_immigrate has been chosen.

Exception: If CONFIG_MEMCG_SWAP is not used.
When you do swapoff and force swapped-out pages of shmem (tmpfs) back
into memory, charges for those pages are accounted against the
caller of swapoff rather than the users of the shmem.

2.4 Swap Extension (CONFIG_MEMCG_SWAP)

The swap extension allows you to record charges for swap. A swapped-in page
is charged back to the original page allocator if possible.

When swap is accounted, the following files are added:
 - memory.memsw.usage_in_bytes
 - memory.memsw.limit_in_bytes

memsw means memory+swap. Usage of memory+swap is limited by
memsw.limit_in_bytes.

Example: Assume a system with 4G of swap. A task which allocates 6G of memory
(by mistake) under a 2G memory limit will use up all the swap.
In this case, setting memsw.limit_in_bytes=3G will prevent the excessive use
of swap. By using the memsw limit, you can avoid a system OOM caused by swap
shortage.
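
The scenario above maps onto the following commands; a minimal sketch,
assuming the shell's current directory is the cgroup's directory:

# echo 2G > memory.limit_in_bytes
# echo 3G > memory.memsw.limit_in_bytes

With these settings, the task is limited to 3G of memory+swap in total, so it
can consume at most 1G of swap on top of its 2G of memory instead of filling
all 4G of swap.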

* Why "memory+swap" rather than swap?
The global LRU (kswapd) can swap out arbitrary pages. Swapping out means
moving the charge from memory to swap; there is no change in the usage of
memory+swap. In other words, when we want to limit the usage of swap without
affecting the global LRU, a memory+swap limit is better than just limiting
swap, from an OS point of view.

* What happens when a cgroup hits memory.memsw.limit_in_bytes?
When a cgroup hits memory.memsw.limit_in_bytes, it's useless to do swap-out
for this cgroup. Then, swap-out will not be done by the cgroup routine, and
file caches are dropped instead. But as mentioned above, the global LRU can
still swap out memory from the cgroup for the sanity of the system's memory
management state. You cannot forbid it per cgroup.

2.5 Reclaim

Each cgroup maintains a per-cgroup LRU which has the same structure as the
global VM's. When a cgroup goes over its limit, we first try
to reclaim memory from the cgroup so as to make space for the new
pages that the cgroup has touched. If the reclaim is unsuccessful,
an OOM routine is invoked to select and kill the bulkiest task in the
cgroup. (See 10. OOM Control below.)

The reclaim algorithm has not been modified for cgroups, except that
pages that are selected for reclaiming come from the per-cgroup LRU
list.

NOTE: Reclaim does not work for the root cgroup, since we cannot set any
limits on the root cgroup.

Note2: When panic_on_oom is set to "2", the whole system will panic.

When the OOM event notifier is registered, the event will be delivered.
(See the oom_control section.)

2.6 Locking

lock_page_cgroup()/unlock_page_cgroup() should not be called under
the i_pages lock.

The other lock order is as follows:
	PG_locked.
	mm->page_table_lock
	zone_lru_lock
	lock_page_cgroup.
In many cases, just lock_page_cgroup() is called.
The per-zone, per-cgroup LRU (the cgroup's private LRU) is guarded only by
zone_lru_lock; it has no lock of its own.

2.7 Kernel Memory Extension (CONFIG_MEMCG_KMEM)

With the kernel memory extension, the memory controller is able to limit
the amount of kernel memory used by the system. Kernel memory is fundamentally
different from user memory, since it can't be swapped out, which makes it
possible to DoS the system by consuming too much of this precious resource.

Kernel memory accounting is enabled for all memory cgroups by default. But
it can be disabled system-wide by passing cgroup.memory=nokmem to the kernel
at boot time. In this case, kernel memory will not be accounted at all.

Kernel memory limits are not imposed for the root cgroup. Usage for the root
cgroup may or may not be accounted. The memory used is accumulated into
memory.kmem.usage_in_bytes, or in a separate counter when it makes sense
(currently only for tcp).
The main "kmem" counter is fed into the main counter, so kmem charges will
also be visible from the user counter.

Currently no soft limit is implemented for kernel memory. It is future work
to trigger slab reclaim when those limits are reached.

2.7.1 Current Kernel Memory resources accounted

* stack pages: every process consumes some stack pages. By accounting them
as kernel memory, we prevent new processes from being created when the kernel
memory usage is too high.

* slab pages: pages allocated by the SLAB or SLUB allocator are tracked. A
copy of each kmem_cache is created every time the cache is touched for the
first time from inside the memcg. The creation is done lazily, so some objects
can still be skipped while the cache is being created. All objects in a slab
page should belong to the same memcg. This only fails to hold when a task is
migrated to a different memcg during the page allocation by the cache.

* sockets memory pressure: some socket protocols have memory pressure
thresholds. The memory controller allows them to be controlled individually
per cgroup, instead of globally.

* tcp memory pressure: sockets memory pressure for the tcp protocol.

2.7.2 Common use cases

Because the "kmem" counter is fed into the main user counter, kernel memory
can never be limited completely independently of user memory. Say "U" is the
user limit, and "K" the kernel limit. There are three possible ways limits
can be set:

U != 0, K = unlimited:
    This is the standard memcg limitation mechanism already present before
    kmem accounting. Kernel memory is completely ignored.

U != 0, K < U:
    Kernel memory is a subset of the user memory. This setup is useful in
    deployments where the total amount of memory per cgroup is overcommitted.
    Overcommitting kernel memory limits is definitely not recommended, since
    the box can still run out of non-reclaimable memory.
    In this case, the admin could set up K so that the sum over all groups is
    never greater than the total memory, and freely set U at the cost of the
    QoS.
    WARNING: In the current implementation, memory reclaim will NOT be
    triggered for a cgroup when it hits K while staying below U, which makes
    this setup impractical.

U != 0, K >= U:
    Since kmem charges are also fed into the user counter, reclaim will be
    triggered for the cgroup for both kinds of memory. This setup gives the
    admin a unified view of memory, and it is also useful for people who just
    want to track kernel memory usage.
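
As a sketch of the "K < U" case above, with U = 512M and K = 128M, and
assuming the shell's current directory is the cgroup's directory:

# echo 512M > memory.limit_in_bytes
# echo 128M > memory.kmem.limit_in_bytes

Because kmem charges are also fed into the main counter, the group can never
use more than 128M of kernel memory within its 512M total.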

3. User Interface

3.0. Configuration

a. Enable CONFIG_CGROUPS
b. Enable CONFIG_MEMCG
c. Enable CONFIG_MEMCG_SWAP (to use the swap extension)
d. Enable CONFIG_MEMCG_KMEM (to use the kmem extension)

3.1. Prepare the cgroups (see cgroups.txt, Why are cgroups needed?)

# mount -t tmpfs none /sys/fs/cgroup
# mkdir /sys/fs/cgroup/memory
# mount -t cgroup none /sys/fs/cgroup/memory -o memory

3.2. Make the new group and move bash into it

# mkdir /sys/fs/cgroup/memory/0
# echo $$ > /sys/fs/cgroup/memory/0/tasks

Since we're now in the 0 cgroup, we can alter the memory limit:

# echo 4M > /sys/fs/cgroup/memory/0/memory.limit_in_bytes

NOTE: We can use a suffix (k, K, m, M, g or G) to indicate values in kilo,
mega or gigabytes. (Here, kilo, mega and giga mean kibibytes, mebibytes and
gibibytes.)

NOTE: We can write "-1" to reset *.limit_in_bytes (unlimited).
NOTE: We cannot set limits on the root cgroup any more.

# cat /sys/fs/cgroup/memory/0/memory.limit_in_bytes
4194304

We can check the usage:

# cat /sys/fs/cgroup/memory/0/memory.usage_in_bytes
1216512

A successful write to this file does not guarantee that the limit was set to
the exact value written into the file. This can be due to a
number of factors, such as rounding up to page boundaries or the total
availability of memory on the system. The user is required to re-read
this file after a write to see the value actually committed by the kernel:

# echo 1 > memory.limit_in_bytes
# cat memory.limit_in_bytes
4096

The memory.failcnt field gives the number of times that the cgroup limit was
exceeded.

The memory.stat file gives accounting information. Currently, the number of
cache, RSS and active/inactive pages is shown.

4. Testing

For testing features and implementation, see memcg_test.txt.

Performance testing is also important. To see the memory controller's pure
overhead, testing on tmpfs will give you good numbers for the small overheads.
Example: do a kernel build on tmpfs.

Page-fault scalability is also important. When measuring parallel
page faults, a multi-process test may be better than a multi-threaded
test because the latter has noise from shared objects/status.

But the above two test extreme situations.
Running your usual workload under the memory controller is always helpful.

4.1 Troubleshooting

Sometimes a user might find that an application under a cgroup is
terminated by the OOM killer. There are several causes for this:

1. The cgroup limit is too low (just too low to do anything useful).
2. The user is using anonymous memory and swap is turned off or too low.

A sync followed by echo 1 > /proc/sys/vm/drop_caches will help get rid of
some of the pages cached in the cgroup (page cache pages).
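
For example:

# sync
# echo 1 > /proc/sys/vm/drop_caches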

To see what is happening, disabling the OOM killer as described in
"10. OOM Control" (below) and watching what happens will be helpful.

4.2 Task migration

When a task migrates from one cgroup to another, its charge is not
carried forward by default. The pages allocated from the original cgroup
still remain charged to it; the charge is dropped when the page is freed or
reclaimed.

You can move charges of a task along with task migration.
See 8. "Move charges at task migration"

4.3 Removing a cgroup

A cgroup can be removed by rmdir, but as discussed in sections 4.1 and 4.2, a
cgroup might have some charge associated with it, even though all
tasks have migrated away from it. (Because we charge against pages, not
against tasks.)

We move the stats to the root (if use_hierarchy==0) or to the parent (if
use_hierarchy==1), and there is no change in the charge except uncharging
from the child.

Charges recorded in swap information are not updated at removal of a cgroup.
The recorded information is discarded, and a cgroup which later uses the swap
(swapcache) will be charged as its new owner.

About use_hierarchy, see Section 6.

5. Misc. interfaces

5.1 force_empty

The memory.force_empty interface is provided to make a cgroup's memory usage
empty. When anything is written to this file, e.g.

# echo 0 > memory.force_empty

the cgroup will be reclaimed and as many pages as possible will be reclaimed.

The typical use case for this interface is before calling rmdir().
Because rmdir() moves all pages to the parent, some out-of-use page caches can
be moved to the parent. If you want to avoid that, force_empty will be useful.

Also, note that when memory.kmem.limit_in_bytes is set, the charges due to
kernel pages will still be seen. This is not considered a failure and the
write will still return success. In this case, it is expected that
memory.kmem.usage_in_bytes == memory.usage_in_bytes.

About use_hierarchy, see Section 6.

5.2 stat file

The memory.stat file includes the following statistics:

# per-memory cgroup local status
cache		- # of bytes of page cache memory.
rss		- # of bytes of anonymous and swap cache memory (includes
		  transparent hugepages).
rss_huge	- # of bytes of anonymous transparent hugepages.
mapped_file	- # of bytes of mapped file (includes tmpfs/shmem)
pgpgin		- # of charging events to the memory cgroup. The charging
		  event happens each time a page is accounted as either a
		  mapped anon page (RSS) or a cache page (Page Cache) to the
		  cgroup.
pgpgout		- # of uncharging events to the memory cgroup. The uncharging
		  event happens each time a page is unaccounted from the
		  cgroup.
swap		- # of bytes of swap usage
dirty		- # of bytes that are waiting to get written back to the disk.
writeback	- # of bytes of file/anon cache that are queued for syncing to
		  disk.
inactive_anon	- # of bytes of anonymous and swap cache memory on the
		  inactive LRU list.
active_anon	- # of bytes of anonymous and swap cache memory on the active
		  LRU list.
inactive_file	- # of bytes of file-backed memory on the inactive LRU list.
active_file	- # of bytes of file-backed memory on the active LRU list.
unevictable	- # of bytes of memory that cannot be reclaimed (mlocked etc).

# status considering hierarchy (see memory.use_hierarchy settings)

hierarchical_memory_limit - # of bytes of the memory limit in effect for the
		  hierarchy under which the memory cgroup is.
hierarchical_memsw_limit - # of bytes of the memory+swap limit in effect for
		  the hierarchy under which the memory cgroup is.

total_<counter>	- # hierarchical version of <counter>, which in
		  addition to the cgroup's own value includes the
		  sum of all hierarchical children's values of
		  <counter>, e.g. total_cache

# The following additional stats are dependent on CONFIG_DEBUG_VM.

recent_rotated_anon	- VM internal parameter. (see mm/vmscan.c)
recent_rotated_file	- VM internal parameter. (see mm/vmscan.c)
recent_scanned_anon	- VM internal parameter. (see mm/vmscan.c)
recent_scanned_file	- VM internal parameter. (see mm/vmscan.c)

Memo:
	recent_rotated means the recent frequency of LRU rotation.
	recent_scanned means the recent number of scans of the LRU.
	These are shown for easier debugging; please see the code for the
	exact meanings.

Note:
	Only anonymous and swap cache memory is listed as part of the 'rss'
	stat. This should not be confused with the true 'resident set size' or
	the amount of physical memory used by the cgroup.
	'rss + mapped_file' will give you the resident set size of the cgroup.
	(Note: file and shmem pages may be shared with other cgroups. In that
	case, mapped_file is accounted only when the memory cgroup is the
	owner of the page cache.)

5.3 swappiness

Overrides /proc/sys/vm/swappiness for the particular group. The tunable
in the root cgroup corresponds to the global swappiness setting.

Please note that, unlike during global reclaim, limit reclaim
enforces that a swappiness of 0 really prevents any swapping even if
swap storage is available. This might lead to the memcg OOM killer being
invoked if there are no file pages to reclaim.
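
For example, to keep a group's pages off swap entirely during limit reclaim
(a sketch, run inside the cgroup's directory):

# echo 0 > memory.swappiness
# cat memory.swappiness
0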

5.4 failcnt

A memory cgroup provides memory.failcnt and memory.memsw.failcnt files.
This failcnt (== failure count) shows the number of times that a usage counter
hit its limit. When a memory cgroup hits a limit, failcnt increases and
memory under it will be reclaimed.

You can reset failcnt by writing 0 to the failcnt file:

# echo 0 > .../memory.failcnt

5.5 usage_in_bytes

For efficiency, like other kernel components, the memory cgroup uses some
optimization to avoid unnecessary cacheline false sharing. usage_in_bytes is
affected by the method and doesn't show the 'exact' value of memory (and swap)
usage; it's a fuzz value for efficient access. (Of course, when necessary,
it's synchronized.) If you want to know the more exact memory usage, you
should use the RSS+CACHE(+SWAP) value in memory.stat (see 5.2).
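
A sketch of computing that more exact value from the memory.stat counters
listed in 5.2, assuming the cgroup's directory is current:

# awk '$1=="rss" || $1=="cache" || $1=="swap" {sum += $2} END {print sum}' memory.stat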

5.6 numa_stat

This is similar to numa_maps but operates on a per-memcg basis. This is
useful for providing visibility into the NUMA locality information within
a memcg since the pages are allowed to be allocated from any physical
node. One of the use cases is evaluating application performance by
combining this information with the application's CPU allocation.

Each memcg's numa_stat file includes "total", "file", "anon" and "unevictable"
per-node page counts, including "hierarchical_<counter>" entries which sum up
all hierarchical children's values in addition to the memcg's own value.

The output format of memory.numa_stat is:

total=<total pages> N0=<node 0 pages> N1=<node 1 pages> ...
file=<total file pages> N0=<node 0 pages> N1=<node 1 pages> ...
anon=<total anon pages> N0=<node 0 pages> N1=<node 1 pages> ...
unevictable=<total unevictable pages> N0=<node 0 pages> N1=<node 1 pages> ...
hierarchical_<counter>=<counter pages> N0=<node 0 pages> N1=<node 1 pages> ...

The "total" count is the sum of file + anon + unevictable.
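
For example, on a hypothetical two-node machine a read might look like this
(the values are purely illustrative):

# cat memory.numa_stat
total=1578 N0=849 N1=729
file=1157 N0=571 N1=586
anon=421 N0=278 N1=143
unevictable=0 N0=0 N1=0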

6. Hierarchy support

The memory controller supports a deep hierarchy and hierarchical accounting.
The hierarchy is created by creating the appropriate cgroups in the
cgroup filesystem. Consider, for example, the following cgroup filesystem
hierarchy:

           root
         /  |   \
        /   |    \
       a    b     c
                  | \
                  |  \
                  d   e

In the diagram above, with hierarchical accounting enabled, all memory
usage of e is accounted to its ancestors up to the root (i.e., c and root)
that have memory.use_hierarchy enabled. If one of the ancestors goes over its
limit, the reclaim algorithm reclaims from the tasks in the ancestor and the
children of the ancestor.

6.1 Enabling hierarchical accounting and reclaim

A memory cgroup disables the hierarchy feature by default. Support
can be enabled by writing 1 to the memory.use_hierarchy file of the root
cgroup:

# echo 1 > memory.use_hierarchy

The feature can be disabled by:

# echo 0 > memory.use_hierarchy

NOTE1: Enabling/disabling will fail if either the cgroup already has other
       cgroups created below it, or if the parent cgroup has use_hierarchy
       enabled.

NOTE2: When panic_on_oom is set to "2", the whole system will panic in
       case of an OOM event in any cgroup.

7. Soft limits

Soft limits allow for greater sharing of memory. The idea behind soft limits
is to allow control groups to use as much of the memory as needed, provided

a. There is no memory contention
b. They do not exceed their hard limit

When the system detects memory contention or low memory, control groups
are pushed back to their soft limits. If the soft limit of each control
group is very high, they are pushed back as much as possible to make
sure that one control group does not starve the others of memory.

Please note that soft limits are a best-effort feature; they come with
no guarantees, but the system does its best to make sure that when memory is
heavily contended for, memory is allocated based on the soft limit
hints/setup. Currently, soft-limit-based reclaim is set up such that
it gets invoked from balance_pgdat (kswapd).

7.1 Interface

Soft limits can be set up by using the following commands (in this example we
assume a soft limit of 256 MiB):

# echo 256M > memory.soft_limit_in_bytes

If we want to change this to 1G, we can at any time use:

# echo 1G > memory.soft_limit_in_bytes
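
Reading the file back shows the value in bytes:

# cat memory.soft_limit_in_bytes
1073741824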

NOTE1: Soft limits take effect over a long period of time, since they involve
       reclaiming memory for balancing between memory cgroups.
NOTE2: It is recommended to always set the soft limit below the hard limit,
       otherwise the hard limit will take precedence.

8. Move charges at task migration

Users can move charges associated with a task along with task migration, that
is, uncharge the task's pages from the old cgroup and charge them to the new
cgroup. This feature is not supported in !CONFIG_MMU environments because of
the lack of page tables.

8.1 Interface

This feature is disabled by default. It can be enabled (and disabled again) by
writing to memory.move_charge_at_immigrate of the destination cgroup.

If you want to enable it:

# echo (some positive value) > memory.move_charge_at_immigrate

Note: Each bit of move_charge_at_immigrate has its own meaning about what type
      of charges should be moved. See 8.2 for details.
Note: Charges are moved only when you move mm->owner, in other words,
      the leader of a thread group.
Note: If we cannot find enough space for the task in the destination cgroup,
      we try to make space by reclaiming memory. Task migration may fail if
      we cannot make enough space.
Note: It can take several seconds if you move many charges.

And if you want to disable it again:

# echo 0 > memory.move_charge_at_immigrate

8.2 Type of charges which can be moved

Each bit in move_charge_at_immigrate has its own meaning about what type of
charges should be moved. But in any case, it must be noted that an account of
a page or a swap can be moved only when it is charged to the task's current
(old) memory cgroup.

 bit | what type of charges would be moved?
-----+------------------------------------------------------------------------
  0  | A charge of an anonymous page (or swap of it) used by the target task.
     | You must enable the Swap Extension (see 2.4) to enable the move of
     | swap charges.
-----+------------------------------------------------------------------------
  1  | A charge of file pages (normal file, tmpfs file (e.g. ipc shared
     | memory) and swaps of tmpfs files) mmapped by the target task. Unlike
     | the case of anonymous pages, file pages (and swaps) in the range
     | mmapped by the task will be moved even if the task hasn't done a page
     | fault, i.e. they might not be the task's "RSS", but another task's
     | "RSS" that maps the same file. The mapcount of the page is ignored
     | (the page can be moved even if page_mapcount(page) > 1). You must
     | enable the Swap Extension (see 2.4) to enable the move of swap charges.
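
For example, to move charges of both anonymous pages and mmapped file pages
when a task immigrates, set bits 0 and 1 (a sketch, run in the destination
cgroup's directory):

# echo 3 > memory.move_charge_at_immigrate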

8.3 TODO

- All moving-charge operations are done under cgroup_mutex. It's not good
  behavior to hold the mutex too long, so we may need some trick.

9. Memory thresholds

Memory cgroups implement memory thresholds using the cgroups notification
API (see cgroups.txt). It allows registering multiple memory and memsw
thresholds and getting notifications when a threshold is crossed.

To register a threshold, an application must:
- create an eventfd using eventfd(2);
- open memory.usage_in_bytes or memory.memsw.usage_in_bytes;
- write a string like "<event_fd> <fd of memory.usage_in_bytes> <threshold>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory usage
crosses the threshold in either direction.

This is applicable to both root and non-root cgroups.
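
The cgroup_event_listener helper used in the test in section 11 (shipped in
the kernel source under tools/cgroup) performs exactly these three steps. A
sketch, assuming the cgroup's directory is current and that the threshold
string accepts the usual byte suffixes:

# cgroup_event_listener memory.usage_in_bytes 4M &

The listener then reports each time usage crosses the 4M threshold.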

10. OOM Control

The memory.oom_control file is for OOM notification and other controls.

Memory cgroups implement an OOM notifier using the cgroup notification
API (see cgroups.txt). It allows registering multiple OOM notification
deliveries and getting a notification when an OOM happens.

To register a notifier, an application must:
- create an eventfd using eventfd(2)
- open the memory.oom_control file
- write a string like "<event_fd> <fd of memory.oom_control>" to
  cgroup.event_control

The application will be notified through the eventfd when an OOM happens.
OOM notification doesn't work for the root cgroup.

You can disable the OOM killer by writing "1" to the memory.oom_control
file, as:

# echo 1 > memory.oom_control

If the OOM killer is disabled, tasks under the cgroup will hang/sleep
in the memory cgroup's OOM waitqueue when they request accountable memory.

To get them running again, you have to relax the memory cgroup's OOM status
by
	* enlarging the limit or reducing usage.
To reduce usage,
	* kill some tasks.
	* move some tasks to another group with charge migration.
	* remove some files (on tmpfs?)

Then, the stopped tasks will work again.

Reading the file shows the current status of OOM:
	oom_kill_disable 0 or 1 (if 1, the oom-killer is disabled)
	under_oom	 0 or 1 (if 1, the memory cgroup is under OOM, and
			  tasks may be stopped.)
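
For example, a read from a cgroup with the OOM killer disabled and no OOM in
progress would show:

# cat memory.oom_control
oom_kill_disable 1
under_oom 0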

11. Memory Pressure

The pressure level notifications can be used to monitor the memory
allocation cost; based on the pressure, applications can implement
different strategies for managing their memory resources. The pressure
levels are defined as follows:

The "low" level means that the system is reclaiming memory for new
allocations. Monitoring this reclaiming activity might be useful for
maintaining cache levels. Upon notification, the program (typically an
"Activity Manager") might analyze vmstat and act in advance (i.e.
prematurely shut down unimportant services).

The "medium" level means that the system is experiencing medium memory
pressure: the system might be swapping, paging out active file caches,
etc. Upon this event, applications may decide to further analyze
vmstat/zoneinfo/memcg or internal memory usage statistics and free any
resources that can be easily reconstructed or re-read from a disk.

The "critical" level means that the system is actively thrashing; it is
about to run out of memory (OOM), or the in-kernel OOM killer is even on
its way to trigger. Applications should do whatever they can to help the
system. It might be too late to consult with vmstat or any other
statistics, so it's advisable to take immediate action.

By default, events are propagated upward until the event is handled, i.e. the
events are not pass-through. For example, say you have three cgroups: A->B->C.
Now you set up an event listener on cgroups A, B and C, and suppose group C
experiences some pressure. In this situation, only group C will receive the
notification, i.e. groups A and B will not receive it. This is done to avoid
excessive "broadcasting" of messages, which disturbs the system and which is
especially bad if we are low on memory or thrashing. Group B will receive the
notification only if there are no event listeners for group C.

There are three optional modes that specify different propagation behavior:

- "default": this is the default behavior specified above. This mode is the
  same as omitting the optional mode parameter, and is preserved for
  backwards compatibility.

- "hierarchy": events always propagate up to the root, similar to the default
  behavior, except that propagation continues regardless of whether there are
  event listeners at each level. In the above example, groups A, B, and C
  will receive notification of memory pressure.

- "local": events are pass-through, i.e. listeners only receive notifications
  when memory pressure is experienced in the memcg for which the notification
  is registered. In the above example, group C will receive a notification if
  registered for "local" notification and the group experiences memory
  pressure. However, group B will never receive a notification, regardless of
  whether there is an event listener for group C, if group B is registered
  for local notification.

The level and event notification mode ("hierarchy" or "local", if necessary)
are specified by a comma-delimited string, e.g. "low,hierarchy" specifies
hierarchical, pass-through notification for all ancestor memcgs. The default,
non-pass-through behavior does not specify a mode. "medium,local" specifies
pass-through notification for the medium level.

The file memory.pressure_level is only used to set up an eventfd. To
register a notification, an application must:

- create an eventfd using eventfd(2);
- open memory.pressure_level;
- write a string like "<event_fd> <fd of memory.pressure_level> <level[,mode]>"
  to cgroup.event_control.

The application will be notified through the eventfd when memory pressure is
at the specific level (or higher). Read/write operations on
memory.pressure_level are not implemented.

Test:

Here is a small script example that makes a new cgroup, sets up a
memory limit, sets up a notification in the cgroup and then makes the child
cgroup experience critical pressure:

# cd /sys/fs/cgroup/memory/
# mkdir foo
# cd foo
# cgroup_event_listener memory.pressure_level low,hierarchy &
# echo 8000000 > memory.limit_in_bytes
# echo 8000000 > memory.memsw.limit_in_bytes
# echo $$ > tasks
# dd if=/dev/zero | read x

(Expect a bunch of notifications, and eventually, the oom-killer will
trigger.)

12. TODO

1. Make the per-cgroup scanner reclaim not-shared pages first
2. Teach the controller to account for shared pages
3. Start reclamation in the background when the limit is
   not yet hit but usage is getting closer

Summary

Overall, the memory controller has been a stable controller and has been
commented on and discussed quite extensively in the community.

References

 1. Singh, Balbir. RFC: Memory Controller, http://lwn.net/Articles/206697/
 2. Singh, Balbir. Memory Controller (RSS Control),
    http://lwn.net/Articles/222762/
 3. Emelianov, Pavel. Resource controllers based on process cgroups,
    http://lkml.org/lkml/2007/3/6/198
 4. Emelianov, Pavel. RSS controller based on process cgroups (v2),
    http://lkml.org/lkml/2007/4/9/78
 5. Emelianov, Pavel. RSS controller based on process cgroups (v3),
    http://lkml.org/lkml/2007/5/30/244
 6. Menage, Paul. Control Groups v10, http://lwn.net/Articles/236032/
 7. Vaidyanathan, Srinivasan. Control Groups: Pagecache accounting and control
    subsystem (v3), http://lwn.net/Articles/235534/
 8. Singh, Balbir. RSS controller v2 test results (lmbench),
    http://lkml.org/lkml/2007/5/17/232
 9. Singh, Balbir. RSS controller v2 AIM9 results,
    http://lkml.org/lkml/2007/5/18/1
10. Singh, Balbir. Memory controller v6 test results,
    http://lkml.org/lkml/2007/8/19/36
11. Singh, Balbir. Memory controller introduction (v6),
    http://lkml.org/lkml/2007/8/17/69
12. Corbet, Jonathan. Controlling memory use in cgroups,
    http://lwn.net/Articles/243795/