JFFS2 LOCKING DOCUMENTATION
---------------------------

At least theoretically, JFFS2 does not require the Big Kernel Lock
(BKL), which was always helpfully obtained for it by Linux 2.4 VFS
code. It has its own locking, as described below.

This document attempts to describe the existing locking rules for
JFFS2. It is not expected to remain perfectly up to date, but ought to
be fairly close.


alloc_sem
---------

The alloc_sem is a per-filesystem mutex, used primarily to ensure
contiguous allocation of space on the medium. It is automatically
obtained during space allocations (jffs2_reserve_space()) and freed
upon write completion (jffs2_complete_reservation()). Note that
the garbage collector will obtain this right at the beginning of
jffs2_garbage_collect_pass() and release it at the end, thereby
preventing any other write activity on the file system during a
garbage collect pass.

When writing new nodes, the alloc_sem must be held until the new nodes
have been properly linked into the data structures for the inode to
which they belong. This is for the benefit of NAND flash - adding new
nodes to an inode may obsolete old ones, and by holding the alloc_sem
until the new nodes are linked in, we ensure that any data in the
write-buffer at that point are part of the new node, not just something
that was written afterwards. Hence, we can ensure the newly-obsoleted
nodes don't actually get erased until the write-buffer has been flushed
to the medium.
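
A rough sketch of a write path, then (argument lists simplified; the
real jffs2_reserve_space() takes additional length/priority/summary
parameters which are omitted here):

    /* Simplified sketch, not the exact kernel code. */
    ret = jffs2_reserve_space(c, ...);     /* obtains c->alloc_sem      */
    if (ret)
        return ret;
    /* ... write the new node into the space we were given and link it
     * into the inode's lists, so that anything still sitting in the
     * write-buffer belongs to this node ... */
    jffs2_complete_reservation(c);         /* releases c->alloc_sem     */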

With the introduction of NAND flash support and the write-buffer,
the alloc_sem is also used to protect the wbuf-related members of the
jffs2_sb_info structure. Atomically reading the wbuf_len member to see
if the wbuf is currently holding any data is permitted, though.

Ordering constraints: See f->sem.


File Mutex f->sem
---------------------

This is the JFFS2-internal equivalent of the inode mutex i->i_sem.
It protects the contents of the jffs2_inode_info private inode data,
including the linked list of node fragments (but see the notes below on
erase_completion_lock), etc.

The reason that the i_sem itself isn't used for this purpose is to
avoid deadlocks with garbage collection -- the VFS will lock the i_sem
before calling a function which may need to allocate space. The
allocation may trigger garbage-collection, which may need to move a
node belonging to the inode which was locked in the first place by the
VFS. If the garbage collection code were to attempt to lock the i_sem
of the inode from which it's garbage-collecting a physical node, this
would lead to deadlock, unless we played games with unlocking the i_sem
before calling the space allocation functions.

Instead of playing such games, we just have an extra internal
mutex, which is obtained by the garbage collection code and also
by the normal file system code _after_ allocation of space.
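
Purely for illustration (simplified, not the exact call sequences),
the two paths therefore look roughly like this:

    /* Normal write path (sketch): space first, then the file mutex. */
    jffs2_reserve_space(c, ...);      /* takes alloc_sem, may trigger GC */
    mutex_lock(&f->sem);
    /* ... write and link the new node ... */
    mutex_unlock(&f->sem);
    jffs2_complete_reservation(c);    /* drops alloc_sem */

    /* Garbage collection (sketch): alloc_sem is already held for the
     * whole pass; only the victim inode's f->sem is taken, never i_sem. */
    mutex_lock(&f->sem);
    /* ... move the node ... */
    mutex_unlock(&f->sem);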

Ordering constraints:

    1. Never attempt to allocate space or lock alloc_sem with
       any f->sem held.
    2. Never attempt to lock two file mutexes in one thread.
       No ordering rules have been made for doing so.


erase_completion_lock spinlock
------------------------------

This is used to serialise access to the eraseblock lists, to the
per-eraseblock lists of physical jffs2_raw_node_ref structures, and
(NB) the per-inode list of physical nodes. The latter is a special
case - see below.

As the MTD API no longer permits erase-completion callback functions
to be called from bottom-half (timer) context (on the basis that nobody
ever actually implemented such a thing), it's now sufficient to use
a simple spin_lock() rather than spin_lock_bh().

Note that the per-inode list of physical nodes (f->nodes) is a special
case. Any changes to _valid_ nodes (i.e. ->flash_offset & 1 == 0) in
the list are protected by the file mutex f->sem. But the erase code
may remove _obsolete_ nodes from the list while holding only the
erase_completion_lock. So you can walk the list only while holding the
erase_completion_lock, and can drop the lock temporarily mid-walk as
long as the pointer you're holding is to a _valid_ node, not an
obsolete one.
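
A hedged sketch of that rule (the chaining field name is assumed here;
the validity test is the one given above):

    spin_lock(&c->erase_completion_lock);
    while (ref) {
        if (!(ref->flash_offset & 1)) {
            /* Valid node: safe to drop the lock while holding 'ref'. */
            spin_unlock(&c->erase_completion_lock);
            /* ... do something which may sleep ... */
            spin_lock(&c->erase_completion_lock);
        }
        ref = ref->next_in_ino;        /* per-inode chain (name assumed) */
    }
    spin_unlock(&c->erase_completion_lock);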

The erase_completion_lock is also used to protect the c->gc_task
pointer when the garbage collection thread exits. The code to kill the
GC thread locks it, sends the signal, then unlocks it - while the GC
thread itself locks it, zeroes c->gc_task, then unlocks on the exit path.
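
In outline (a simplified sketch, not the verbatim code from the
background thread):

    /* Killing the GC thread: */
    spin_lock(&c->erase_completion_lock);
    if (c->gc_task)
        send_sig(SIGKILL, c->gc_task, 1);
    spin_unlock(&c->erase_completion_lock);

    /* GC thread exit path: */
    spin_lock(&c->erase_completion_lock);
    c->gc_task = NULL;
    spin_unlock(&c->erase_completion_lock);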


inocache_lock spinlock
----------------------

This spinlock protects the hashed list (c->inocache_list) of the
in-core jffs2_inode_cache objects (each inode in JFFS2 has a
corresponding jffs2_inode_cache object). So, the inocache_lock
has to be locked while walking the c->inocache_list hash buckets.
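
A lookup therefore looks roughly like this (sketch only; the hash
function is omitted and 'hash_of' is just a placeholder):

    spin_lock(&c->inocache_lock);
    ic = c->inocache_list[hash_of(ino)];
    while (ic && ic->ino < ino)
        ic = ic->next;
    if (ic && ic->ino != ino)
        ic = NULL;                     /* not present */
    spin_unlock(&c->inocache_lock);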

This spinlock also covers allocation of new inode numbers, which is
currently just '++c->highest_ino', but might one day get more complicated
if we need to deal with wrapping after 4 milliard inode numbers are used.
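
So handing out a new inode number is, in sketch form:

    spin_lock(&c->inocache_lock);
    new_ino = ++c->highest_ino;        /* no wrap handling yet, as noted */
    spin_unlock(&c->inocache_lock);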

Note, the f->sem guarantees that the corresponding jffs2_inode_cache
will not be removed. So, it is allowed to access it without locking
the inocache_lock spinlock.

Ordering constraints:

If both erase_completion_lock and inocache_lock are needed, the
c->erase_completion_lock has to be acquired first.
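
That is, when both are needed at once:

    spin_lock(&c->erase_completion_lock);   /* outer lock, taken first */
    spin_lock(&c->inocache_lock);           /* inner lock */
    /* ... */
    spin_unlock(&c->inocache_lock);
    spin_unlock(&c->erase_completion_lock);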


erase_free_sem
--------------

This mutex is only used by the erase code which frees obsolete node
references and the jffs2_garbage_collect_deletion_dirent() function.
The latter function on NAND flash must read _obsolete_ nodes to
determine whether the 'deletion dirent' under consideration can be
discarded or whether it is still required to show that an inode has
been unlinked. Because reading from the flash may sleep, the
erase_completion_lock cannot be held, so an alternative, more
heavyweight lock was required to prevent the erase code from freeing
the jffs2_raw_node_ref structures in question while the garbage
collection code is looking at them.
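
The usage pattern is roughly this (heavily simplified; the real
function does much more):

    /* In jffs2_garbage_collect_deletion_dirent(), in sketch form: */
    mutex_lock(&c->erase_free_sem);
    /* ... read the obsolete nodes from flash (may sleep) and decide
     * whether the deletion dirent is still needed; meanwhile the erase
     * code cannot free the raw node refs underneath us ... */
    mutex_unlock(&c->erase_free_sem);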

Suggestions for alternative solutions to this problem would be welcomed.


wbuf_sem
--------

This read/write semaphore protects against concurrent access to the
write-behind buffer ('wbuf') used for flash chips where we must write
in blocks. It protects both the contents of the wbuf and the metadata
which indicates which flash region (if any) is currently covered by
the buffer.
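
Roughly (a sketch only): code which merely needs to see whether its
flash range is covered by the wbuf takes the semaphore for reading,
while anything which modifies the wbuf takes it for writing:

    /* Reading data which may still be in the wbuf: */
    down_read(&c->wbuf_sem);
    /* ... read from flash, then patch in any bytes still held in the
     * wbuf ... */
    up_read(&c->wbuf_sem);

    /* Flushing or refilling the wbuf: */
    down_write(&c->wbuf_sem);
    /* ... write the wbuf out to the medium, reset c->wbuf_len ... */
    up_write(&c->wbuf_sem);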

Ordering constraints:
Lock wbuf_sem last, after the alloc_sem and/or f->sem.


c->xattr_sem
------------

This read/write semaphore protects against concurrent access to the
xattr-related objects, which include state in the superblock and ic->xref.
On read-only paths, the read semaphore is sufficient; the write semaphore
must be held when creating, updating or deleting any xattr-related object.

Once xattr_sem is released, there is no guarantee that those objects
still exist. Thus, code which needs to update such an object while
holding only the read semaphore often has to retry. For example,
do_jffs2_getxattr() first holds the read semaphore to scan the xref and
xdatum; if it then finds it needs to load the name/value pair from the
medium, it releases the read semaphore and retries the whole process
holding the write semaphore.
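
The resulting pattern, in sketch form (error handling omitted;
'need_to_load' stands in for the real test):

    /* do_jffs2_getxattr()-style retry, sketched: */
    down_read(&c->xattr_sem);
    /* ... find the xref/xdatum; if the name/value pair is already in
     * memory, copy it out and we are done ... */
    if (!need_to_load) {
        up_read(&c->xattr_sem);
        return rc;
    }
    up_read(&c->xattr_sem);

    down_write(&c->xattr_sem);
    /* ... repeat the lookup and load the name/value pair from the
     * medium; the objects may have changed while no lock was held ... */
    up_write(&c->xattr_sem);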

Ordering constraints:
Lock xattr_sem last, after the alloc_sem.