.. _kernel_hacking_lock:

===========================
Unreliable Guide To Locking
===========================

:Author: Rusty Russell

Introduction
============

Welcome, to Rusty's Remarkably Unreliable Guide to Kernel Locking
issues. This document describes the locking systems in the Linux Kernel
in 2.6.

With the wide availability of HyperThreading, and preemption in the
Linux Kernel, everyone hacking on the kernel needs to know the
fundamentals of concurrency and locking for SMP.

The Problem With Concurrency
============================

(Skip this if you know what a Race Condition is).

In a normal program, you can increment a counter like so:

::

    very_important_count++;


This is what you would expect to happen:

.. table:: Expected Results

   +------------------------------------+------------------------------------+
   | Instance 1                         | Instance 2                         |
   +====================================+====================================+
   | read very_important_count (5)      |                                    |
   +------------------------------------+------------------------------------+
   | add 1 (6)                          |                                    |
   +------------------------------------+------------------------------------+
   | write very_important_count (6)     |                                    |
   +------------------------------------+------------------------------------+
   |                                    | read very_important_count (6)      |
   +------------------------------------+------------------------------------+
   |                                    | add 1 (7)                          |
   +------------------------------------+------------------------------------+
   |                                    | write very_important_count (7)     |
   +------------------------------------+------------------------------------+

This is what might happen:

.. table:: Possible Results

   +------------------------------------+------------------------------------+
   | Instance 1                         | Instance 2                         |
   +====================================+====================================+
   | read very_important_count (5)      |                                    |
   +------------------------------------+------------------------------------+
   |                                    | read very_important_count (5)      |
   +------------------------------------+------------------------------------+
   | add 1 (6)                          |                                    |
   +------------------------------------+------------------------------------+
   |                                    | add 1 (6)                          |
   +------------------------------------+------------------------------------+
   | write very_important_count (6)     |                                    |
   +------------------------------------+------------------------------------+
   |                                    | write very_important_count (6)     |
   +------------------------------------+------------------------------------+

Race Conditions and Critical Regions
------------------------------------

This overlap, where the result depends on the relative timing of
multiple tasks, is called a race condition. The piece of code containing
the concurrency issue is called a critical region. Especially since
Linux started running on SMP machines, race conditions became one of the
major issues in kernel design and implementation.

Preemption can have the same effect, even if there is only one CPU: by
preempting one task during the critical region, we have exactly the same
race condition. In this case the thread which preempts might run the
critical region itself.

The solution is to recognize when these simultaneous accesses occur, and
use locks to make sure that only one instance can enter the critical
region at any time. There are many friendly primitives in the Linux
kernel to help you do this. And then there are the unfriendly
primitives, but I'll pretend they don't exist.

Locking in the Linux Kernel
===========================

If I could give you one piece of advice: never sleep with anyone crazier
than yourself. But if I had to give you advice on locking: **keep it
simple**.

Be reluctant to introduce new locks.

Strangely enough, this last one is the exact reverse of my advice when
you **have** slept with someone crazier than yourself. And you should
think about getting a big dog.

Two Main Types of Kernel Locks: Spinlocks and Mutexes
-----------------------------------------------------

There are two main types of kernel locks. The fundamental type is the
spinlock (``include/asm/spinlock.h``), which is a very simple
single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.

The second type is a mutex (``include/linux/mutex.h``): it is like a
spinlock, but you may block holding a mutex. If you can't lock a mutex,
your task will suspend itself, and be woken up when the mutex is
released. This means the CPU can do something else while you are
waiting. There are many cases when you simply can't sleep (see
`What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__),
and so have to use a spinlock instead.

Neither type of lock is recursive: see
`Deadlock: Simple and Advanced <#deadlock>`__.

Locks and Uniprocessor Kernels
------------------------------

For kernels compiled without ``CONFIG_SMP``, and without
``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
design decision: when no-one else can run at the same time, there is no
reason to have a lock.

If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
is set, then spinlocks simply disable preemption, which is sufficient to
prevent any races. For most purposes, we can think of preemption as
equivalent to SMP, and not worry about it separately.
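
As a conceptual sketch only (the real implementation has more to it),
a spinlock on a UP preemptible kernel boils down to preemption control::

    /* Rough sketch of spin_lock()/spin_unlock() on !CONFIG_SMP with
     * CONFIG_PREEMPT: no spinning, just "no-one else may run here". */
    static inline void sketch_spin_lock(spinlock_t *lock)
    {
            preempt_disable();      /* no other task can preempt us now */
    }

    static inline void sketch_spin_unlock(spinlock_t *lock)
    {
            preempt_enable();       /* may reschedule immediately */
    }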

You should always test your locking code with ``CONFIG_SMP`` and
``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
because it will still catch some kinds of locking bugs.

Mutexes still exist, because they are required for synchronization
between user contexts, as we will see below.

Locking Only In User Context
----------------------------

If you have a data structure which is only ever accessed from user
context, then you can use a simple mutex (``include/linux/mutex.h``) to
protect it. This is the most trivial case: you initialize the mutex.
Then you can call :c:func:`mutex_lock_interruptible()` to grab the
mutex, and :c:func:`mutex_unlock()` to release it. There is also a
:c:func:`mutex_lock()`, which should be avoided, because it will
not return if a signal is received.
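
For illustration, here is a minimal sketch of that pattern (the lock
and counter names are invented for this example)::

    #include <linux/mutex.h>
    #include <linux/errno.h>

    static DEFINE_MUTEX(counter_mutex);     /* hypothetical lock */
    static int shared_counter;              /* hypothetical data */

    int bump_counter(void)
    {
            /* Returns non-zero if a signal interrupted the wait. */
            if (mutex_lock_interruptible(&counter_mutex))
                    return -ERESTARTSYS;
            shared_counter++;
            mutex_unlock(&counter_mutex);
            return 0;
    }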

Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
:c:func:`setsockopt()` and :c:func:`getsockopt()` calls, with
:c:func:`nf_register_sockopt()`. Registration and de-registration
are only done on module load and unload (and boot time, where there is
no concurrency), and the list of registrations is only consulted for an
unknown :c:func:`setsockopt()` or :c:func:`getsockopt()` system
call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
since the setsockopt and getsockopt calls may well sleep.

Locking Between User Context and Softirqs
-----------------------------------------

If a softirq shares data with user context, you have two problems.
Firstly, the current user context can be interrupted by a softirq, and
secondly, the critical region could be entered from another CPU. This is
where :c:func:`spin_lock_bh()` (``include/linux/spinlock.h``) is
used. It disables softirqs on that CPU, then grabs the lock.
:c:func:`spin_unlock_bh()` does the reverse. (The '_bh' suffix is
a historical reference to "Bottom Halves", the old name for software
interrupts. It should really be called 'spin_lock_softirq()' in a
perfect world).
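
A minimal sketch of this pattern, with invented names::

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(stats_lock);     /* hypothetical lock */
    static unsigned long packet_count;      /* also touched by a softirq */

    /* Called from user context (e.g. a syscall handler). */
    void reset_stats(void)
    {
            spin_lock_bh(&stats_lock);      /* blocks local softirqs too */
            packet_count = 0;
            spin_unlock_bh(&stats_lock);
    }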

Note that you can also use :c:func:`spin_lock_irq()` or
:c:func:`spin_lock_irqsave()` here, which stop hardware interrupts
as well: see `Hard IRQ Context <#hard-irq-context>`__.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes :c:func:`local_bh_disable()`
(``include/linux/interrupt.h``), which protects you from the softirq
being run.

Locking Between User Context and Tasklets
-----------------------------------------

This is exactly the same as above, because tasklets are actually run
from a softirq.

Locking Between User Context and Timers
---------------------------------------

This, too, is exactly the same as above, because timers are actually run
from a softirq. From a locking point of view, tasklets and timers are
identical.

Locking Between Tasklets/Timers
-------------------------------

Sometimes a tasklet or timer might want to share data with another
tasklet or timer.

The Same Tasklet/Timer
~~~~~~~~~~~~~~~~~~~~~~

Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.

Different Tasklets/Timers
~~~~~~~~~~~~~~~~~~~~~~~~~

If another tasklet/timer wants to share data with your tasklet or
timer, you will both need to use :c:func:`spin_lock()` and
:c:func:`spin_unlock()` calls. :c:func:`spin_lock_bh()` is
unnecessary here, as you are already in a tasklet, and none will be run
on the same CPU.
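
A sketch of that (names invented; both tasklets would use this same
pattern)::

    static DEFINE_SPINLOCK(shared_lock);    /* hypothetical lock */
    static LIST_HEAD(shared_list);          /* data both tasklets touch */

    /* Tasklet body: we are already in softirq context, so a plain
     * spin_lock() suffices -- no softirq will run on this CPU. */
    static void my_tasklet_fn(unsigned long data)
    {
            spin_lock(&shared_lock);
            /* ... manipulate shared_list ... */
            spin_unlock(&shared_lock);
    }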

Locking Between Softirqs
------------------------

Often a softirq might want to share data with itself or a tasklet/timer.

The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
(see `Per-CPU Data <#per-cpu-data>`__) for better performance. If you're
going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.

You'll need to use :c:func:`spin_lock()` and
:c:func:`spin_unlock()` for shared data.

Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use :c:func:`spin_lock()` and
:c:func:`spin_unlock()` for shared data, whether it be a timer,
tasklet, different softirq or the same or another softirq: any of them
could be running on a different CPU.

Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.

Locking Between Hard IRQ and Softirqs/Tasklets
----------------------------------------------

If a hardware irq handler shares data with a softirq, you have two
concerns. Firstly, the softirq processing can be interrupted by a
hardware interrupt, and secondly, the critical region could be entered
by a hardware interrupt on another CPU. This is where
:c:func:`spin_lock_irq()` is used. It is defined to disable
interrupts on that CPU, then grab the lock.
:c:func:`spin_unlock_irq()` does the reverse.

The irq handler does not need to use :c:func:`spin_lock_irq()`, because
the softirq cannot run while the irq handler is running: it can use
:c:func:`spin_lock()`, which is slightly faster. The only exception
would be if a different hardware irq handler uses the same lock:
:c:func:`spin_lock_irq()` will stop that from interrupting us.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes :c:func:`local_irq_disable()`
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.

:c:func:`spin_lock_irqsave()` (``include/linux/spinlock.h``) is a
variant which saves whether interrupts were on or off in a flags word,
which is passed to :c:func:`spin_unlock_irqrestore()`. This means
that the same code can be used inside a hard irq handler (where
interrupts are already off) and in softirqs (where the irq disabling is
required).

Note that softirqs (and hence tasklets and timers) are run on return
from hardware interrupts, so :c:func:`spin_lock_irq()` also stops
these. In that sense, :c:func:`spin_lock_irqsave()` is the most
general and powerful locking function.
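
A sketch of the irqsave pattern (the queue name is invented)::

    static DEFINE_SPINLOCK(queue_lock);     /* hypothetical lock */
    static LIST_HEAD(work_queue);           /* shared with an irq handler */

    /* Safe from any context: saves the interrupt state in 'flags'
     * and restores exactly that state afterwards. */
    void queue_item(struct list_head *item)
    {
            unsigned long flags;

            spin_lock_irqsave(&queue_lock, flags);
            list_add_tail(item, &work_queue);
            spin_unlock_irqrestore(&queue_lock, flags);
    }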

Locking Between Two Hard IRQ Handlers
-------------------------------------

It is rare to have to share data between two IRQ handlers, but if you
do, :c:func:`spin_lock_irqsave()` should be used: it is
architecture-specific whether all interrupts are disabled inside irq
handlers themselves.

Cheat Sheet For Locking
=======================

Pete Zaitcev gives the following summary:

-  If you are in a process context (any syscall) and want to lock other
   processes out, use a mutex. You can take a mutex and sleep
   (``copy_from_user()`` or ``kmalloc(x, GFP_KERNEL)``).

-  Otherwise (== data can be touched in an interrupt), use
   :c:func:`spin_lock_irqsave()` and
   :c:func:`spin_unlock_irqrestore()`.

-  Avoid holding a spinlock for more than 5 lines of code and across any
   function call (except accessors like :c:func:`readb()`).

Table of Minimum Requirements
-----------------------------

The following table lists the **minimum** locking requirements between
various contexts. In some cases, the same context can only be running on
one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it shares
data with another thread, locking is required).

Remember the advice above: you can always use
:c:func:`spin_lock_irqsave()`, which is a superset of all other
spinlock primitives.

============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
.              IRQ Handler A IRQ Handler B Softirq A Softirq B Tasklet A Tasklet B Timer A Timer B User Context A User Context B
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============
IRQ Handler A  None
IRQ Handler B  SLIS          None
Softirq A      SLI           SLI           SL
Softirq B      SLI           SLI           SL        SL
Tasklet A      SLI           SLI           SL        SL        None
Tasklet B      SLI           SLI           SL        SL        SL        None
Timer A        SLI           SLI           SL        SL        SL        SL        None
Timer B        SLI           SLI           SL        SL        SL        SL        SL      None
User Context A SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    None
User Context B SLI           SLI           SLBH      SLBH      SLBH      SLBH      SLBH    SLBH    MLI            None
============== ============= ============= ========= ========= ========= ========= ======= ======= ============== ==============

Table: Table of Locking Requirements

+--------+----------------------------+
| SLIS   | spin_lock_irqsave          |
+--------+----------------------------+
| SLI    | spin_lock_irq              |
+--------+----------------------------+
| SL     | spin_lock                  |
+--------+----------------------------+
| SLBH   | spin_lock_bh               |
+--------+----------------------------+
| MLI    | mutex_lock_interruptible   |
+--------+----------------------------+

Table: Legend for Locking Requirements Table

The trylock Functions
=====================

There are functions that try to acquire a lock only once and immediately
return a value telling you whether they succeeded or failed to acquire
it. They can be used if you need no access to the data protected with
the lock when some other thread is holding the lock. You should acquire
the lock later if you then need access to the data protected with the
lock.

:c:func:`spin_trylock()` does not spin but returns non-zero if it
acquires the spinlock on the first try or 0 if not. This function can be
used in all contexts like :c:func:`spin_lock()`: you must have
disabled the contexts that might interrupt you and acquire the spin
lock.

:c:func:`mutex_trylock()` does not suspend your task but returns
non-zero if it could lock the mutex on the first try or 0 if not. This
function cannot be safely used in hardware or software interrupt
contexts despite not sleeping.
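
A sketch of an opportunistic update using :c:func:`spin_trylock()`
(names invented)::

    static DEFINE_SPINLOCK(opt_lock);       /* hypothetical lock */
    static unsigned long opt_hits;          /* nice-to-have statistic */

    void note_hit_if_cheap(void)
    {
            /* Update the statistic only if no-one holds the lock:
             * never spin, just skip the update under contention. */
            if (spin_trylock(&opt_lock)) {
                    opt_hits++;
                    spin_unlock(&opt_lock);
            }
    }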

Common Examples
===============

Let's step through a simple example: a cache of number to name mappings.
The cache keeps a count of how often each of the objects is used, and
when it gets full, throws out the least used one.

All In User Context
-------------------

For our first example, we assume that all operations are in user context
(ie. from system calls), so we can sleep. This means we can use a mutex
to protect the cache and all the objects within it. Here's the code::

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mutex.h>
    #include <asm/errno.h>

    struct object
    {
            struct list_head list;
            int id;
            char name[32];
            int popularity;
    };

    /* Protects the cache, cache_num, and the objects within it */
    static DEFINE_MUTEX(cache_lock);
    static LIST_HEAD(cache);
    static unsigned int cache_num = 0;
    #define MAX_CACHE_SIZE 10

    /* Must be holding cache_lock */
    static struct object *__cache_find(int id)
    {
            struct object *i;

            list_for_each_entry(i, &cache, list)
                    if (i->id == id) {
                            i->popularity++;
                            return i;
                    }
            return NULL;
    }

    /* Must be holding cache_lock */
    static void __cache_delete(struct object *obj)
    {
            BUG_ON(!obj);
            list_del(&obj->list);
            kfree(obj);
            cache_num--;
    }

    /* Must be holding cache_lock */
    static void __cache_add(struct object *obj)
    {
            list_add(&obj->list, &cache);
            if (++cache_num > MAX_CACHE_SIZE) {
                    struct object *i, *outcast = NULL;
                    list_for_each_entry(i, &cache, list) {
                            if (!outcast || i->popularity < outcast->popularity)
                                    outcast = i;
                    }
                    __cache_delete(outcast);
            }
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj;

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strlcpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            mutex_lock(&cache_lock);
            __cache_add(obj);
            mutex_unlock(&cache_lock);
            return 0;
    }

    void cache_delete(int id)
    {
            mutex_lock(&cache_lock);
            __cache_delete(__cache_find(id));
            mutex_unlock(&cache_lock);
    }

    int cache_find(int id, char *name)
    {
            struct object *obj;
            int ret = -ENOENT;

            mutex_lock(&cache_lock);
            obj = __cache_find(id);
            if (obj) {
                    ret = 0;
                    strcpy(name, obj->name);
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

Note that we always make sure we have the cache_lock when we add,
delete, or look up the cache: both the cache infrastructure itself and
the contents of the objects are protected by the lock. In this case it's
easy, since we copy the data for the user, and never let them access the
objects directly.

There is a slight (and common) optimization here: in
:c:func:`cache_add()` we set up the fields of the object before
grabbing the lock. This is safe, as no-one else can access it until we
put it in cache.

Accessing From Interrupt Context
--------------------------------

Now consider the case where :c:func:`cache_find()` can be called
from interrupt context: either a hardware interrupt or a softirq. An
example would be a timer which deletes objects from the cache.

The change is shown below, in standard patch format: the ``-`` are lines
which are taken away, and the ``+`` are lines which are added.

::

    --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
    +++ cache.c.interrupt   2003-12-09 14:07:49.000000000 +1100
    @@ -12,7 +12,7 @@
             int popularity;
     };

    -static DEFINE_MUTEX(cache_lock);
    +static DEFINE_SPINLOCK(cache_lock);
     static LIST_HEAD(cache);
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10
    @@ -55,6 +55,7 @@
     int cache_add(int id, const char *name)
     {
             struct object *obj;
    +        unsigned long flags;

             if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                     return -ENOMEM;
    @@ -63,30 +64,33 @@
             obj->id = id;
             obj->popularity = 0;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return 0;
     }

     void cache_delete(int id)
     {
    -        mutex_lock(&cache_lock);
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_delete(__cache_find(id));
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
     }

     int cache_find(int id, char *name)
     {
             struct object *obj;
             int ret = -ENOENT;
    +        unsigned long flags;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj) {
                     ret = 0;
                     strcpy(name, obj->name);
             }
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return ret;
     }

Note that the :c:func:`spin_lock_irqsave()` will turn off
interrupts if they are on; otherwise (if we are already in an interrupt
handler), it does nothing. Hence these functions are safe to call from
any context.

Unfortunately, :c:func:`cache_add()` calls :c:func:`kmalloc()`
with the ``GFP_KERNEL`` flag, which is only legal in user context. I
have assumed that :c:func:`cache_add()` is still only called in
user context, otherwise this should become a parameter to
:c:func:`cache_add()`.

Exposing Objects Outside This File
----------------------------------

If our objects contained more information, it might not be sufficient to
copy the information in and out: other parts of the code might want to
keep pointers to these objects, for example, rather than looking up the
id every time. This produces two problems.

The first problem is that we use the ``cache_lock`` to protect objects:
we'd need to make this non-static so the rest of the code can use it.
This makes locking trickier, as it is no longer all in one place.

The second problem is the lifetime problem: if another structure keeps a
pointer to an object, it presumably expects that pointer to remain
valid. Unfortunately, this is only guaranteed while you hold the lock,
otherwise someone might call :c:func:`cache_delete()` and even
worse, add another object, re-using the same address.

As there is only one lock, you can't hold it forever: no-one else would
get any work done.

The solution to this problem is to use a reference count: everyone who
has a pointer to the object increases it when they first get the object,
and drops the reference count when they're finished with it. Whoever
drops it to zero knows it is unused, and can actually delete it.

Here is the code::

    --- cache.c.interrupt   2003-12-09 14:25:43.000000000 +1100
    +++ cache.c.refcnt  2003-12-09 14:33:05.000000000 +1100
    @@ -7,6 +7,7 @@
     struct object
     {
             struct list_head list;
    +        unsigned int refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -17,6 +18,35 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    +static void __object_put(struct object *obj)
    +{
    +        if (--obj->refcnt == 0)
    +                kfree(obj);
    +}
    +
    +static void __object_get(struct object *obj)
    +{
    +        obj->refcnt++;
    +}
    +
    +void object_put(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_put(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
    +void object_get(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_get(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
     /* Must be holding cache_lock */
     static struct object *__cache_find(int id)
     {
    @@ -35,6 +65,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    +        __object_put(obj);
             cache_num--;
     }

    @@ -63,6 +94,7 @@
             strlcpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    +        obj->refcnt = 1; /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -79,18 +111,15 @@
             spin_unlock_irqrestore(&cache_lock, flags);
     }

    -int cache_find(int id, char *name)
    +struct object *cache_find(int id)
     {
             struct object *obj;
    -        int ret = -ENOENT;
             unsigned long flags;

             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
    -        if (obj) {
    -                ret = 0;
    -                strcpy(name, obj->name);
    -        }
    +        if (obj)
    +                __object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
    -        return ret;
    +        return obj;
     }

We encapsulate the reference counting in the standard 'get' and 'put'
functions. Now we can return the object itself from
:c:func:`cache_find()`, which has the advantage that the user can
now sleep holding the object (eg. to :c:func:`copy_to_user()` the
name to userspace).

The other point to note is that I said a reference should be held for
every pointer to the object: thus the reference count is 1 when first
inserted into the cache. In some versions the framework does not hold a
reference count, but they are more complicated.

Using Atomic Operations For The Reference Count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In practice, :c:type:`atomic_t` would usually be used for refcnt. There are a
number of atomic operations defined in ``include/asm/atomic.h``: these
are guaranteed to be seen atomically from all CPUs in the system, so no
lock is required. In this case, it is simpler than using spinlocks,
although for anything non-trivial using spinlocks is clearer. The
:c:func:`atomic_inc()` and :c:func:`atomic_dec_and_test()`
are used instead of the standard increment and decrement operators, and
the lock is no longer used to protect the reference count itself.

::

    --- cache.c.refcnt  2003-12-09 15:00:35.000000000 +1100
    +++ cache.c.refcnt-atomic   2003-12-11 15:49:42.000000000 +1100
    @@ -7,7 +7,7 @@
     struct object
     {
             struct list_head list;
    -        unsigned int refcnt;
    +        atomic_t refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -18,33 +18,15 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    -static void __object_put(struct object *obj)
    -{
    -        if (--obj->refcnt == 0)
    -                kfree(obj);
    -}
    -
    -static void __object_get(struct object *obj)
    -{
    -        obj->refcnt++;
    -}
    -
     void object_put(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_put(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        if (atomic_dec_and_test(&obj->refcnt))
    +                kfree(obj);
     }

     void object_get(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        atomic_inc(&obj->refcnt);
     }

     /* Must be holding cache_lock */
    @@ -65,7 +47,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    -        __object_put(obj);
    +        object_put(obj);
             cache_num--;
     }

    @@ -94,7 +76,7 @@
             strlcpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    -        obj->refcnt = 1; /* The cache holds a reference */
    +        atomic_set(&obj->refcnt, 1); /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -119,7 +101,7 @@
             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj)
    -                __object_get(obj);
    +                object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
             return obj;
     }

Protecting The Objects Themselves
---------------------------------

In these examples, we assumed that the objects (except the reference
counts) never changed once they are created. If we wanted to allow the
name to change, there are three possibilities:

-  You can make ``cache_lock`` non-static, and tell people to grab that
   lock before changing the name in any object.

-  You can provide a :c:func:`cache_obj_rename()` which grabs this
   lock and changes the name for the caller, and tell everyone to use
   that function.

-  You can make the ``cache_lock`` protect only the cache itself, and
   use another lock to protect the name.

Theoretically, you can make the locks as fine-grained as one lock for
every field, for every object. In practice, the most common variants
are:

-  One lock which protects the infrastructure (the ``cache`` list in
   this example) and all the objects. This is what we have done so far.

-  One lock which protects the infrastructure (including the list
   pointers inside the objects), and one lock inside the object which
   protects the rest of that object.

-  Multiple locks to protect the infrastructure (eg. one lock per hash
   chain), possibly with a separate per-object lock.

Here is the "lock-per-object" implementation:

::

    --- cache.c.refcnt-atomic   2003-12-11 15:50:54.000000000 +1100
    +++ cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    @@ -6,11 +6,17 @@

     struct object
     {
    +        /* These two protected by cache_lock. */
             struct list_head list;
    +        int popularity;
    +
             atomic_t refcnt;
    +
    +        /* Doesn't change once created. */
             int id;
    +
    +        spinlock_t lock; /* Protects the name */
             char name[32];
    -        int popularity;
     };

     static DEFINE_SPINLOCK(cache_lock);
    @@ -77,6 +84,7 @@
             obj->id = id;
             obj->popularity = 0;
             atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
    +        spin_lock_init(&obj->lock);

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);

Note that I decide that the popularity count should be protected by the
``cache_lock`` rather than the per-object lock: this is because it (like
the :c:type:`struct list_head <list_head>` inside the object)
is logically part of the infrastructure. This way, I don't need to grab
the lock of every object in :c:func:`__cache_add()` when seeking
the least popular.

I also decided that the id member is unchangeable, so I don't need to
grab each object lock in :c:func:`__cache_find()` to examine the
id: the object lock is only used by a caller who wants to read or write
the name field.

Note also that I added a comment describing what data was protected by
which locks. This is extremely important, as it describes the runtime
behavior of the code, and can be hard to gain from just reading. And as
Alan Cox says, “Lock data, not code”.

Common Problems
===============

Deadlock: Simple and Advanced
-----------------------------

There is a coding bug where a piece of code tries to grab a spinlock
twice: it will spin forever, waiting for the lock to be released
(spinlocks, rwlocks and mutexes are not recursive in Linux). This is
trivial to diagnose: not a
stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.

For a slightly more complex case, imagine you have a region shared by a
softirq and user context. If you use a :c:func:`spin_lock()` call
to protect it, it is possible that the user context will be interrupted
by the softirq while it holds the lock, and the softirq will then spin
forever trying to get the same lock.

Both of these are called deadlock, and as shown above, it can occur even
with a single CPU (although not on UP compiles, since spinlocks vanish
on kernel compiles with ``CONFIG_SMP``\ =n; you'll still get data
corruption in the second example).

This complete lockup is easy to diagnose: on SMP boxes the watchdog
timer or compiling with ``DEBUG_SPINLOCK`` set
(``include/linux/spinlock.h``) will show this up immediately when it
happens.

A more complex problem is the so-called 'deadly embrace', involving two
or more locks. Say you have a hash table: each entry in the table is a
spinlock, and a chain of hashed objects. Inside a softirq handler, you
sometimes want to alter an object from one place in the hash to another:
you grab the spinlock of the old hash chain and the spinlock of the new
hash chain, and delete the object from the old one, and insert it in the
new one.

There are two problems here. First, if your code ever tries to move the
object to the same chain, it will deadlock with itself as it tries to
lock it twice. Secondly, if the same softirq on another CPU is trying to
move another object in the reverse direction, the following could
happen:

+-----------------------+-----------------------+
| CPU 1                 | CPU 2                 |
+=======================+=======================+
| Grab lock A -> OK     | Grab lock B -> OK     |
+-----------------------+-----------------------+
| Grab lock B -> spin   | Grab lock A -> spin   |
+-----------------------+-----------------------+

Table: Consequences

The two CPUs will spin forever, waiting for the other to give up their
lock. It will look, smell, and feel like a crash.

Preventing Deadlock
-------------------

Textbooks will tell you that if you always lock in the same order, you
will never get this kind of deadlock. Practice will tell you that this
approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.
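
For completeness, here is a sketch of the textbook rule anyway: when two
locks of the same kind must be held together, nest them in a stable
order, for example by address (the function below is invented for
illustration)::

    /* Move an object between two hash chains, taking the chain locks
     * in a fixed (address) order so two movers can't deadly-embrace. */
    void move_object_sketch(spinlock_t *old_lock, spinlock_t *new_lock)
    {
            if (old_lock == new_lock) {
                    spin_lock(old_lock);    /* same chain: lock once only */
            } else if (old_lock < new_lock) {
                    spin_lock(old_lock);
                    spin_lock(new_lock);
            } else {
                    spin_lock(new_lock);
                    spin_lock(old_lock);
            }
            /* ... unhash from the old chain, hash onto the new one ... */
            if (old_lock != new_lock)
                    spin_unlock(new_lock);
            spin_unlock(old_lock);
    }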

The best locks are encapsulated: they never get exposed in headers, and
are never held around calls to non-trivial functions outside the same
file. You can read through this code and see that it will never
deadlock, because it never tries to grab another lock while it has that
one. People using your code don't even need to know you are using a
lock.

A classic problem here is when you provide callbacks or hooks: if you
call these with the lock held, you risk simple deadlock, or a deadly
embrace (who knows what the callback will do?). Remember, the other
programmers are out to get you, so don't do this.

Overzealous Prevention Of Deadlocks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deadlocks are problematic, but not as bad as data corruption. Code which
grabs a read lock, searches a list, fails to find what it wants, drops
the read lock, grabs a write lock and inserts the object has a race
condition.

If you don't see why, please stay the fuck away from my code.

Racing Timers: A Kernel Pastime
-------------------------------

Timers can produce their own special problems with races. Consider a
collection of objects (list, hash, etc) where each object has a timer
which is due to destroy it.

If you want to destroy the entire collection (say on module removal),
you might do the following::

    /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
       HUNGARIAN NOTATION */
    spin_lock_bh(&list_lock);

    while (list) {
            struct foo *next = list->next;
            del_timer(&list->timer);
            kfree(list);
            list = next;
    }

    spin_unlock_bh(&list_lock);


Sooner or later, this will crash on SMP, because a timer can have just
gone off before the :c:func:`spin_lock_bh()`, and it will only get
the lock after we :c:func:`spin_unlock_bh()`, and then try to free
the element (which has already been freed!).

This can be avoided by checking the result of
:c:func:`del_timer()`: if it returns 1, the timer has been deleted.
If 0, it means (in this case) that it is currently running, so we can
do::

    retry:
            spin_lock_bh(&list_lock);

            while (list) {
                    struct foo *next = list->next;
                    if (!del_timer(&list->timer)) {
                            /* Give timer a chance to delete this */
                            spin_unlock_bh(&list_lock);
                            goto retry;
                    }
                    kfree(list);
                    list = next;
            }

            spin_unlock_bh(&list_lock);


Another common problem is deleting timers which restart themselves (by
calling :c:func:`add_timer()` at the end of their timer function).
Because this is a fairly common case which is prone to races, you should
use :c:func:`del_timer_sync()` (``include/linux/timer.h``) to
handle this case. It returns the number of times the timer had to be
deleted before we finally stopped it from adding itself back in.
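
A sketch of that case, with invented names and the timer API of this
document's era::

    static struct timer_list my_timer;      /* hypothetical timer */

    static void my_timer_fn(unsigned long data)
    {
            /* ... do the periodic work ... */
            mod_timer(&my_timer, jiffies + HZ);     /* re-arms itself */
    }

    void my_cleanup(void)
    {
            /* Keeps deleting until the timer is neither pending nor
             * running, even if the handler re-adds itself meanwhile. */
            del_timer_sync(&my_timer);
    }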

Locking Speed
=============

There are three main things to worry about when considering speed of
some code which does locking. First is concurrency: how many things are
going to be waiting while someone else is holding a lock. Second is the
time taken to actually acquire and release an uncontended lock. Third is
using fewer, or smarter locks. I'm assuming that the lock is used fairly
often: otherwise, you wouldn't be concerned about efficiency.

Concurrency depends on how long the lock is usually held: you should
hold the lock for as long as needed, but no longer. In the cache
example, we always create the object without the lock held, and then
grab the lock only when we are ready to insert it in the list.

Acquisition times depend on how much damage the lock operations do to
the pipeline (pipeline stalls) and how likely it is that this CPU was
the last one to grab the lock (ie. is the lock cache-hot for this CPU):
on a machine with more CPUs, this likelihood drops fast. Consider a
700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
increment takes about 58ns, a lock which is cache-hot on this CPU takes
160ns, and a cacheline transfer from another CPU takes an additional 170
to 360ns. (These figures from Paul McKenney's `Linux Journal RCU
article <http://www.linuxjournal.com/article.php?sid=6993>`__).

These two aims conflict: holding a lock for a short time might be done
by splitting locks into parts (such as in our final per-object-lock
example), but this increases the number of lock acquisitions, and the
results are often slower than having a single lock. This is another
reason to advocate locking simplicity.

The third concern is addressed below: there are some methods to reduce
the amount of locking which needs to be done.

Read/Write Lock Variants
------------------------

Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
:c:type:`struct rw_semaphore <rw_semaphore>`. These divide
users into two classes: the readers and the writers. If you are only
reading the data, you can get a read lock, but to write to the data you
need the write lock. Many people can hold a read lock, but a writer must
be the sole holder.

If your code divides neatly along reader/writer lines (as our cache code
does), and the lock is held by readers for significant lengths of time,
using these locks can help. They are slightly slower than the normal
locks though, so in practice ``rwlock_t`` is not usually worthwhile.
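
For reference, a sketch of the ``rwlock_t`` calls (the table and the
helpers are invented)::

    static DEFINE_RWLOCK(map_lock);         /* hypothetical lock */

    int map_lookup(int id)
    {
            int ret;

            read_lock(&map_lock);           /* many readers may hold this */
            ret = __map_lookup(id);         /* hypothetical helper */
            read_unlock(&map_lock);
            return ret;
    }

    void map_update(int id, int val)
    {
            write_lock(&map_lock);          /* excludes readers and writers */
            __map_update(id, val);          /* hypothetical helper */
            write_unlock(&map_lock);
    }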

Avoiding Locks: Read Copy Update
--------------------------------

There is a special method of read/write locking called Read Copy Update.
Using RCU, the readers can avoid taking a lock altogether: as we expect
our cache to be read more often than updated (otherwise the cache is a
waste of time), it is a candidate for this optimization.

How do we get rid of read locks? Getting rid of read locks means that
writers may be changing the list underneath the readers. That is
actually quite simple: we can read a linked list while an element is
being added if the writer adds the element very carefully. For example,
adding ``new`` to a single linked list called ``list``::

    new->next = list->next;
    wmb();
    list->next = new;


The :c:func:`wmb()` is a write memory barrier. It ensures that the
first operation (setting the new element's ``next`` pointer) is complete
and will be seen by all CPUs before the second operation (putting the
new element into the list) is. This is important, since modern
compilers and modern CPUs can both reorder instructions unless told
otherwise: we want a reader to either not see the new element at all, or
see the new element with the ``next`` pointer correctly pointing at the
rest of the list.

Fortunately, there is a function to do this for standard
:c:type:`struct list_head <list_head>` lists:
:c:func:`list_add_rcu()` (``include/linux/list.h``).

Removing an element from the list is even simpler: we replace the
pointer to the old element with a pointer to its successor, and readers
will either see it, or skip over it.

::

    list->next = old->next;


There is :c:func:`list_del_rcu()` (``include/linux/list.h``) which
does this (the normal version poisons the old object, which we don't
want).

The reader must also be careful: some CPUs can look through the ``next``
pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents are wrong when the ``next``
pointer changes underneath them. Once again, there is a
:c:func:`list_for_each_entry_rcu()` (``include/linux/list.h``)
to help you. Of course, writers can just use
:c:func:`list_for_each_entry()`, since there cannot be two
simultaneous writers.
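
As a sketch of the reader side, using this document's cache (the
counting function itself is invented), a lockless traversal looks like
this::

    /* Reader: no lock taken, just an RCU read-side critical section. */
    int count_cached(int id)
    {
            struct object *i;
            int count = 0;

            rcu_read_lock();
            list_for_each_entry_rcu(i, &cache, list)
                    if (i->id == id)
                            count++;
            rcu_read_unlock();
            return count;
    }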

Our final dilemma is this: when can we actually destroy the removed
element? Remember, a reader might be stepping through this element in
the list right now: if we free this element and the ``next`` pointer
changes, the reader will jump off into garbage and crash. We need to
wait until we know that all the readers who were traversing the list
when we deleted the element are finished. We use
:c:func:`call_rcu()` to register a callback which will actually
destroy the object once all pre-existing readers are finished.
Alternatively, :c:func:`synchronize_rcu()` may be used to block
until all pre-existing readers are finished.

But how does Read Copy Update know when the readers are finished? The
method is this: firstly, the readers always traverse the list inside
:c:func:`rcu_read_lock()`/:c:func:`rcu_read_unlock()` pairs:
these simply disable preemption so the reader won't go to sleep while
reading the list.

RCU then waits until every other CPU has slept at least once: since
readers cannot sleep, we know that any readers which were traversing the
list during the deletion are finished, and the callback is triggered.
The real Read Copy Update code is a little more optimized than this, but
this is the fundamental idea.

::

    --- cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    +++ cache.c.rcupdate    2003-12-11 17:55:14.000000000 +1100
    @@ -1,15 +1,18 @@
     #include <linux/list.h>
     #include <linux/slab.h>
     #include <linux/string.h>
    +#include <linux/rcupdate.h>
     #include <linux/mutex.h>
     #include <asm/errno.h>

     struct object
     {
    -        /* These two protected by cache_lock. */
    +        /* This is protected by RCU */
             struct list_head list;
             int popularity;

    +        struct rcu_head rcu;
    +
             atomic_t refcnt;

             /* Doesn't change once created. */
    @@ -40,7 +43,7 @@
     {
             struct object *i;

    -        list_for_each_entry(i, &cache, list) {
    +        list_for_each_entry_rcu(i, &cache, list) {
                     if (i->id == id) {
                             i->popularity++;
                             return i;
    @@ -49,19 +52,25 @@
             return NULL;
     }

    +/* Final discard done once we know no readers are looking. */
    +static void cache_delete_rcu(struct rcu_head *rcu)
    +{
    +        object_put(container_of(rcu, struct object, rcu));
    +}
    +
     /* Must be holding cache_lock */
     static void __cache_delete(struct object *obj)
     {
             BUG_ON(!obj);
    -        list_del(&obj->list);
    -        object_put(obj);
    +        list_del_rcu(&obj->list);
             cache_num--;
    +        call_rcu(&obj->rcu, cache_delete_rcu);
     }

     /* Must be holding cache_lock */
     static void __cache_add(struct object *obj)
     {
    -        list_add(&obj->list, &cache);
    +        list_add_rcu(&obj->list, &cache);
             if (++cache_num > MAX_CACHE_SIZE) {
                     struct object *i, *outcast = NULL;
                     list_for_each_entry(i, &cache, list) {
    @@ -104,12 +114,11 @@
     struct object *cache_find(int id)
     {
             struct object *obj;
    -        unsigned long flags;

    -        spin_lock_irqsave(&cache_lock, flags);
    +        rcu_read_lock();
             obj = __cache_find(id);
             if (obj)
                     object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        rcu_read_unlock();
             return obj;
     }

Note that the reader will alter the popularity member in
:c:func:`__cache_find()`, and now it doesn't hold a lock. One
solution would be to make it an ``atomic_t``, but for this usage, we
don't really care about races: an approximate result is good enough, so
I didn't change it.

The result is that :c:func:`cache_find()` requires no
synchronization with any other functions, so is almost as fast on SMP as
it would be on UP.

There is a further optimization possible here: remember our original
cache code, where there were no reference counts and the caller simply
held the lock whenever using the object? This is still possible: if you
hold the lock, no one can delete the object, so you don't need to get
and put the reference count.

Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling
:c:func:`cache_find()` and :c:func:`object_put()` does not
need to actually get and put the reference count: we could expose
:c:func:`__cache_find()` by making it non-static, and such
callers could simply call that.

The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.

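A hedged sketch of such a caller, assuming :c:func:`__cache_find()`
has been made non-static (``use_object()`` is a hypothetical,
non-sleeping helper)::

    rcu_read_lock();                /* disables preemption */
    obj = __cache_find(id);
    if (obj)
            use_object(obj);        /* hypothetical; must not sleep */
    rcu_read_unlock();              /* object may be freed after this */
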
Per-CPU Data
------------

Another technique for avoiding locking which is used fairly widely is to
duplicate information for each CPU. For example, if you wanted to keep a
count of a common condition, you could use a spin lock and a single
counter. Nice and simple.

If that was too slow (it's usually not, but if you've got a really big
machine to test on and can show that it is), you could instead use a
counter for each CPU, then none of them need an exclusive lock. See
:c:func:`DEFINE_PER_CPU()`, :c:func:`get_cpu_var()` and
:c:func:`put_cpu_var()` (``include/linux/percpu.h``).

Of particular use for simple per-cpu counters is the ``local_t`` type,
and the :c:func:`cpu_local_inc()` and related functions, which are
more efficient than simple code on some architectures
(``include/asm/local.h``).

Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.

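A minimal sketch of a per-CPU event counter; ``hit_count``,
``note_hit()`` and ``total_hits()`` are hypothetical names for this
example::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, hit_count);

    static void note_hit(void)
    {
            /* get_cpu_var() disables preemption and hands us this
             * CPU's copy, so no lock is needed. */
            get_cpu_var(hit_count)++;
            put_cpu_var(hit_count);
    }

    /* Approximate total: other CPUs may be updating while we sum. */
    static unsigned long total_hits(void)
    {
            unsigned long sum = 0;
            int cpu;

            for_each_possible_cpu(cpu)
                    sum += per_cpu(hit_count, cpu);
            return sum;
    }
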
Data Which Is Mostly Used By An IRQ Handler
-------------------------------------------

If data is always accessed from within the same IRQ handler, you don't
need a lock at all: the kernel already guarantees that the irq handler
will not run simultaneously on multiple CPUs.

Manfred Spraul points out that you can still do this, even if the data
is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done like
so::

    spin_lock(&lock);
    disable_irq(irq);
    ...
    enable_irq(irq);
    spin_unlock(&lock);

The :c:func:`disable_irq()` prevents the irq handler from running
(and waits for it to finish if it's currently running on other CPUs).
The spinlock prevents any other accesses happening at the same time.
Naturally, this is slower than just a :c:func:`spin_lock_irq()`
call, so it only makes sense if this type of access happens extremely
rarely.

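For contrast, the usual alternative being compared against (which
requires the irq handler itself to take the lock)::

    spin_lock_irq(&lock);
    /* irqs are off on this CPU; the handler on any other CPU
     * will spin on the lock */
    spin_unlock_irq(&lock);
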
What Functions Are Safe To Call From Interrupts?
================================================

Many functions in the kernel sleep (ie. call schedule()) directly or
indirectly: you can never call them while holding a spinlock, or with
preemption disabled. This also means you need to be in user context:
calling them from an interrupt is illegal.

Some Functions Which Sleep
--------------------------

The most common ones are listed below, but you usually have to read the
code to find out if other calls are safe. If everyone else who calls it
can sleep, you probably need to be able to sleep, too. In particular,
registration and deregistration functions usually expect to be called
from user context, and can sleep.

-  Accesses to userspace:

   -  :c:func:`copy_from_user()`

   -  :c:func:`copy_to_user()`

   -  :c:func:`get_user()`

   -  :c:func:`put_user()`

-  :c:func:`kmalloc(GFP_KERNEL) <kmalloc>` (see the example below)

-  :c:func:`mutex_lock_interruptible()` and
   :c:func:`mutex_lock()`

There is a :c:func:`mutex_trylock()` which does not sleep.
Still, it must not be used inside interrupt context since its
implementation is not safe for that. :c:func:`mutex_unlock()`
will also never sleep. It cannot be used in interrupt context either
since a mutex must be released by the same task that acquired it.

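A minimal allocation sketch, for illustration only (``buf`` and
``size`` are placeholders)::

    /* User context: may sleep until memory is available. */
    buf = kmalloc(size, GFP_KERNEL);

    /* Interrupt or atomic context: GFP_ATOMIC never sleeps,
     * but may fail, so the NULL check is mandatory. */
    buf = kmalloc(size, GFP_ATOMIC);
    if (!buf)
            return -ENOMEM;
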
Some Functions Which Don't Sleep
--------------------------------

Some functions are safe to call from any context, or holding almost any
lock.

-  :c:func:`printk()`

-  :c:func:`kfree()`

-  :c:func:`add_timer()` and :c:func:`del_timer()`

Mutex API reference
===================

.. kernel-doc:: include/linux/mutex.h
   :internal:

.. kernel-doc:: kernel/locking/mutex.c
   :export:

Futex API reference
===================

.. kernel-doc:: kernel/futex.c
   :internal:

Further reading
===============

-  ``Documentation/locking/spinlocks.txt``: Linus Torvalds' spinlocking
   tutorial in the kernel sources.

-  Unix Systems for Modern Architectures: Symmetric Multiprocessing and
   Caching for Kernel Programmers:

   Curt Schimmel's very good introduction to kernel level locking (not
   written for Linux, but nearly everything applies). The book is
   expensive, but really worth every penny to understand SMP locking.
   [ISBN: 0201633388]

Thanks
======

Thanks to Telsa Gwynne for DocBooking, neatening and adding style.

Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
correcting, flaming, commenting.

Thanks to the cabal for having no influence on this document.

Glossary
========

preemption
  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
  context inside the kernel would not preempt each other (ie. you had
  that CPU until you gave it up, except for interrupts). With the
  addition of ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user
  context, higher priority tasks can "cut in": spinlocks were changed to
  disable preemption, even on UP.

bh
  Bottom Half: for historical reasons, functions with '_bh' in them often
  now refer to any software interrupt, e.g. :c:func:`spin_lock_bh()`
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one
  bottom half will be running at any time.

Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. :c:func:`in_irq()` returns true in a
  hardware interrupt handler.

Interrupt Context
  Not user context: processing a hardware irq or software irq. Indicated
  by the :c:func:`in_interrupt()` macro returning true.

SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines
  (``CONFIG_SMP=y``).

Software Interrupt / softirq
  Software interrupt handler. :c:func:`in_irq()` returns false;
  :c:func:`in_softirq()` returns true. Tasklets and softirqs both
  fall into the category of 'software interrupts'.

  Strictly speaking, a softirq is one of up to 32 enumerated software
  interrupts which can run on multiple CPUs at once. Sometimes used to
  refer to tasklets as well (ie. all software interrupts).

tasklet
  A dynamically-registrable software interrupt, which is guaranteed to
  only run on one CPU at a time.

timer
  A dynamically-registrable software interrupt, which is run at (or close
  to) a given time. When running, it is just like a tasklet (in fact,
  they are called from the ``TIMER_SOFTIRQ``).

UP
  Uni-Processor: Non-SMP (``CONFIG_SMP=n``).

User Context
  The kernel executing on behalf of a particular process (ie. a system
  call or trap) or kernel thread. You can tell which process with the
  ``current`` macro. Not to be confused with userspace. Can be
  interrupted by software or hardware interrupts.

Userspace
  A process executing its own code outside the kernel.