============================================
Dynamic DMA mapping using the generic device
============================================

:Author: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>

This document describes the DMA API.  For a more gentle introduction
to the API (and actual examples), see Documentation/DMA-API-HOWTO.txt.

This API is split into two pieces.  Part I describes the basic API.
Part II describes extensions for supporting non-consistent memory
machines.  Unless you know that your driver absolutely has to support
non-consistent platforms (these are usually only legacy platforms) you
should only use the API described in part I.

Part I - dma_API
----------------

To get the dma_API, you must #include <linux/dma-mapping.h>.  This
provides dma_addr_t and the interfaces described below.

A dma_addr_t can hold any valid DMA address for the platform.  It can be
given to a device to use as a DMA source or target.  A CPU cannot reference
a dma_addr_t directly because there may be translation between its physical
address space and the DMA address space.

Part Ia - Using large DMA-coherent buffers
------------------------------------------

::

    void *
    dma_alloc_coherent(struct device *dev, size_t size,
                       dma_addr_t *dma_handle, gfp_t flag)

Consistent memory is memory for which a write by either the device or
the processor can immediately be read by the processor or device
without having to worry about caching effects.  (You may however need
to make sure to flush the processor's write buffers before telling
devices to read that memory.)

This routine allocates a region of <size> bytes of consistent memory.

It returns a pointer to the allocated region (in the processor's virtual
address space) or NULL if the allocation failed.

It also returns a <dma_handle> which may be cast to an unsigned integer the
same width as the bus and given to the device as the DMA address base of
the region.

Note: consistent memory can be expensive on some platforms, and the
minimum allocation length may be as big as a page, so you should
consolidate your requests for consistent memory as much as possible.
The simplest way to do that is to use the dma_pool calls (see below).

The flag parameter (dma_alloc_coherent() only) allows the caller to
specify the ``GFP_`` flags (see kmalloc()) for the allocation (the
implementation may choose to ignore flags that affect the location of
the returned memory, like GFP_DMA).
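
As a minimal sketch (the device pointer, the use of PAGE_SIZE, and the
error handling are illustrative choices, not prescribed by this document),
a driver might allocate and later free a coherent region like this, using
dma_free_coherent() as described below::

    dma_addr_t dma_handle;
    void *cpu_addr;

    /* Allocate one page of consistent memory. */
    cpu_addr = dma_alloc_coherent(dev, PAGE_SIZE, &dma_handle, GFP_KERNEL);
    if (!cpu_addr)
        return -ENOMEM;

    /* ... program the device with dma_handle; the CPU uses cpu_addr ... */

    dma_free_coherent(dev, PAGE_SIZE, cpu_addr, dma_handle);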

::

    void *
    dma_zalloc_coherent(struct device *dev, size_t size,
                        dma_addr_t *dma_handle, gfp_t flag)

Wraps dma_alloc_coherent() and also zeroes the returned memory if the
allocation attempt succeeded.

::

    void
    dma_free_coherent(struct device *dev, size_t size, void *cpu_addr,
                      dma_addr_t dma_handle)

Free a region of consistent memory you previously allocated.  dev,
size and dma_handle must all be the same as those passed into
dma_alloc_coherent().  cpu_addr must be the virtual address returned by
dma_alloc_coherent().

Note that unlike their sibling allocation calls, these routines
may only be called with IRQs enabled.


Part Ib - Using small DMA-coherent buffers
------------------------------------------

To get this part of the dma_API, you must #include <linux/dmapool.h>

Many drivers need lots of small DMA-coherent memory regions for DMA
descriptors or I/O buffers.  Rather than allocating in units of a page
or more using dma_alloc_coherent(), you can use DMA pools.  These work
much like a struct kmem_cache, except that they use the DMA-coherent
allocator, not __get_free_pages().  Also, they understand common hardware
constraints for alignment, like queue heads needing to be aligned on
N-byte boundaries.


::

    struct dma_pool *
    dma_pool_create(const char *name, struct device *dev,
                    size_t size, size_t align, size_t alloc);

dma_pool_create() initializes a pool of DMA-coherent buffers
for use with a given device.  It must be called in a context which
can sleep.

The "name" is for diagnostics (like a struct kmem_cache name); dev and size
are like what you'd pass to dma_alloc_coherent().  The device's hardware
alignment requirement for this type of data is "align" (which is expressed
in bytes, and must be a power of two).  If your device has no boundary
crossing restrictions, pass 0 for alloc; passing 4096 says memory allocated
from this pool must not cross 4KByte boundaries.

::

    void *
    dma_pool_zalloc(struct dma_pool *pool, gfp_t mem_flags,
                    dma_addr_t *handle)

Wraps dma_pool_alloc() and also zeroes the returned memory if the
allocation attempt succeeded.


::

    void *
    dma_pool_alloc(struct dma_pool *pool, gfp_t gfp_flags,
                   dma_addr_t *dma_handle);

This allocates memory from the pool; the returned memory will meet the
size and alignment requirements specified at creation time.  Pass
GFP_ATOMIC to prevent blocking, or if it's permitted (not
in_interrupt, not holding SMP locks), pass GFP_KERNEL to allow
blocking.  Like dma_alloc_coherent(), this returns two values:  an
address usable by the CPU, and the DMA address usable by the pool's
device.

::

    void
    dma_pool_free(struct dma_pool *pool, void *vaddr,
                  dma_addr_t addr);

This puts memory back into the pool.  The pool is what was passed to
dma_pool_alloc(); the CPU (vaddr) and DMA addresses are what
were returned when that routine allocated the memory being freed.

::

    void
    dma_pool_destroy(struct dma_pool *pool);

dma_pool_destroy() frees the resources of the pool.  It must be
called in a context which can sleep.  Make sure you've freed all allocated
memory back to the pool before you destroy it.
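
Putting these calls together, here is a hedged sketch of the usual pool
lifecycle (the pool name, the 32-byte buffer size, and the 8-byte
alignment are made-up values for illustration)::

    struct dma_pool *pool;
    dma_addr_t dma;
    void *desc;

    /* 32-byte buffers, 8-byte aligned, no boundary restriction. */
    pool = dma_pool_create("desc_pool", dev, 32, 8, 0);
    if (!pool)
        return -ENOMEM;

    desc = dma_pool_zalloc(pool, GFP_KERNEL, &dma);
    if (!desc) {
        dma_pool_destroy(pool);
        return -ENOMEM;
    }

    /* ... fill the descriptor via desc, hand dma to the device ... */

    dma_pool_free(pool, desc, dma);
    dma_pool_destroy(pool);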


Part Ic - DMA addressing limitations
------------------------------------

::

    int
    dma_set_mask_and_coherent(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
streaming and coherent DMA mask parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    int
    dma_set_coherent_mask(struct device *dev, u64 mask)

Checks to see if the mask is possible and updates the device
parameters if it is.

Returns: 0 if successful and a negative error if not.

::

    u64
    dma_get_required_mask(struct device *dev)

This API returns the mask that the platform requires to
operate efficiently.  Usually this means the returned mask
is the minimum required to cover all of memory.  Examining the
required mask gives drivers with variable descriptor sizes the
opportunity to use smaller descriptors as necessary.

Requesting the required mask does not alter the current mask.  If you
wish to take advantage of it, you should issue a dma_set_mask()
call to set the mask to the value returned.
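
For example, a probe routine commonly tries a large mask first and falls
back to 32 bits (the error handling shown is an illustrative sketch, not
a requirement)::

    if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(64)) &&
        dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32))) {
        /* neither mask is usable by this platform */
        dev_warn(dev, "no suitable DMA addressing available\n");
        return -ENODEV;
    }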


Part Id - Streaming DMA mappings
--------------------------------

::

    dma_addr_t
    dma_map_single(struct device *dev, void *cpu_addr, size_t size,
                   enum dma_data_direction direction)

Maps a piece of processor virtual memory so it can be accessed by the
device and returns the DMA address of the memory.

Although direction values may in principle be converted freely by
casting, the dma_API uses a strongly typed enumerator for its
direction:

======================= =============================================
DMA_NONE                no direction (used for debugging)
DMA_TO_DEVICE           data is going from the memory to the device
DMA_FROM_DEVICE         data is coming from the device to the memory
DMA_BIDIRECTIONAL       direction isn't known
======================= =============================================

.. note::

    Not all memory regions in a machine can be mapped by this API.
    Further, contiguous kernel virtual space may not be contiguous as
    physical memory.  Since this API does not provide any scatter/gather
    capability, it will fail if the user tries to map a non-physically
    contiguous piece of memory.  For this reason, memory to be mapped by
    this API should be obtained from sources which guarantee it to be
    physically contiguous (like kmalloc).

    Further, the DMA address of the memory must be within the
    dma_mask of the device (the dma_mask is a bit mask of the
    addressable region for the device, i.e., if the DMA address of
    the memory ANDed with the dma_mask is still equal to the DMA
    address, then the device can perform DMA to the memory).  To
    ensure that the memory allocated by kmalloc is within the dma_mask,
    the driver may specify various platform-dependent flags to restrict
    the DMA address range of the allocation (e.g., on x86, GFP_DMA
    guarantees to be within the first 16MB of available DMA addresses,
    as required by ISA devices).

    Note also that the above constraints on physical contiguity and
    dma_mask may not apply if the platform has an IOMMU (a device which
    maps an I/O DMA address to a physical memory address).  However, to be
    portable, device driver writers may *not* assume that such an IOMMU
    exists.

.. warning::

    Memory coherency operates at a granularity called the cache
    line width.  In order for memory mapped by this API to operate
    correctly, the mapped region must begin exactly on a cache line
    boundary and end exactly on one (to prevent two separately mapped
    regions from sharing a single cache line).  Since the cache line size
    may not be known at compile time, the API will not enforce this
    requirement.  Therefore, it is recommended that driver writers who
    don't take special care to determine the cache line size at run time
    only map virtual regions that begin and end on page boundaries (which
    are guaranteed also to be cache line boundaries).

    DMA_TO_DEVICE synchronisation must be done after the last modification
    of the memory region by the software and before it is handed off to
    the device.  Once this primitive is used, memory covered by this
    primitive should be treated as read-only by the device.  If the device
    may write to it at any point, it should be DMA_BIDIRECTIONAL (see
    below).

    DMA_FROM_DEVICE synchronisation must be done before the driver
    accesses data that may be changed by the device.  This memory should
    be treated as read-only by the driver.  If the driver needs to write
    to it at any point, it should be DMA_BIDIRECTIONAL (see below).

    DMA_BIDIRECTIONAL requires special handling: it means that the driver
    isn't sure if the memory was modified before being handed off to the
    device and also isn't sure if the device will modify it.  Thus, you
    must always sync bidirectional memory twice: once before the memory
    is handed off to the device (to make sure all memory changes are
    flushed from the processor) and once before the data may be accessed
    after being used by the device (to make sure any processor cache
    lines are updated with data that the device may have changed).

::

    void
    dma_unmap_single(struct device *dev, dma_addr_t dma_addr, size_t size,
                     enum dma_data_direction direction)

Unmaps the region previously mapped.  All the parameters must be
identical to those passed in (and returned) by the mapping API.

::

    dma_addr_t
    dma_map_page(struct device *dev, struct page *page,
                 unsigned long offset, size_t size,
                 enum dma_data_direction direction)

    void
    dma_unmap_page(struct device *dev, dma_addr_t dma_address, size_t size,
                   enum dma_data_direction direction)

API for mapping and unmapping for pages.  All the notes and warnings
for the other mapping APIs apply here.  Also, although the <offset>
and <size> parameters are provided to do partial page mapping, it is
recommended that you never use these unless you really know what the
cache width is.

::

    dma_addr_t
    dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size,
                     enum dma_data_direction dir, unsigned long attrs)

    void
    dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
                       enum dma_data_direction dir, unsigned long attrs)

API for mapping and unmapping for MMIO resources.  All the notes and
warnings for the other mapping APIs apply here.  The API should only be
used to map device MMIO resources; mapping of RAM is not permitted.

::

    int
    dma_mapping_error(struct device *dev, dma_addr_t dma_addr)

In some circumstances dma_map_single(), dma_map_page() and dma_map_resource()
will fail to create a mapping.  A driver can check for these errors by testing
the returned DMA address with dma_mapping_error().  A non-zero return value
means the mapping could not be created and the driver should take appropriate
action (e.g. reduce current DMA mapping usage or delay and try again later).
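
A hedged sketch of the usual map/check/unmap sequence (buffer and len
are hypothetical driver variables)::

    dma_addr_t dma_addr;

    dma_addr = dma_map_single(dev, buffer, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, dma_addr)) {
        /* do not use dma_addr; reduce DMA usage or retry later */
        return -ENOMEM;
    }

    /* ... tell the device to read len bytes at dma_addr ... */

    dma_unmap_single(dev, dma_addr, len, DMA_TO_DEVICE);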

::

    int
    dma_map_sg(struct device *dev, struct scatterlist *sg,
               int nents, enum dma_data_direction direction)

Returns: the number of DMA address segments mapped (this may be shorter
than <nents> passed in if some elements of the scatter/gather list are
physically or virtually adjacent and an IOMMU maps them with a single
entry).

Please note that the sg cannot be mapped again if it has been mapped once.
The mapping process is allowed to destroy information in the sg.

As with the other mapping interfaces, dma_map_sg() can fail.  When it
does, 0 is returned and a driver must take appropriate action.  It is
critical that the driver do something; in the case of a block driver,
aborting the request or even oopsing is better than doing nothing and
corrupting the filesystem.

With scatterlists, you use the resulting mapping like this::

    int i, count = dma_map_sg(dev, sglist, nents, direction);
    struct scatterlist *sg;

    for_each_sg(sglist, sg, count, i) {
        /* Program the device with each mapped segment. */
        hw_address[i] = sg_dma_address(sg);
        hw_len[i] = sg_dma_len(sg);
    }

where nents is the number of entries in the sglist.

The implementation is free to merge several consecutive sglist entries
into one (e.g. with an IOMMU, or if several pages just happen to be
physically contiguous) and returns the actual number of sg entries it
mapped them to.  On failure, 0 is returned.

Then you should loop count times (note: this can be less than nents times)
and use the sg_dma_address() and sg_dma_len() macros where you previously
accessed sg->address and sg->length as shown above.

::

    void
    dma_unmap_sg(struct device *dev, struct scatterlist *sg,
                 int nents, enum dma_data_direction direction)

Unmap the previously mapped scatter/gather list.  All the parameters
must be the same as those passed into the scatter/gather mapping
API.

Note: <nents> must be the number you passed in, *not* the number of
DMA address entries returned.
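
Continuing the scatterlist example above, the unmap therefore uses the
original nents, not the returned count::

    /* nents, not the count returned by dma_map_sg() */
    dma_unmap_sg(dev, sglist, nents, direction);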

::

    void
    dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
                            size_t size,
                            enum dma_data_direction direction)

    void
    dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
                               size_t size,
                               enum dma_data_direction direction)

    void
    dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
                        int nents,
                        enum dma_data_direction direction)

    void
    dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
                           int nents,
                           enum dma_data_direction direction)

Synchronise a single contiguous or scatter/gather mapping for the CPU
and device.  With the sync_sg API, all the parameters must be the same
as those passed into the scatter/gather mapping API.  With the sync_single
API, you can use dma_handle and size parameters that aren't identical to
those passed into the single mapping API to do a partial sync.


.. note::

    You must do this:

    - before reading values that have been written by DMA from the device
      (use the DMA_FROM_DEVICE direction)
    - after writing values that will be written to the device using DMA
      (use the DMA_TO_DEVICE direction)
    - before *and* after handing memory to the device if the memory is
      DMA_BIDIRECTIONAL

See also dma_map_single().
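
As an illustrative sketch, a driver that inspects a DMA_FROM_DEVICE
buffer between device transfers might do this (dma_addr, buffer and len
as in the dma_map_single() example above; process_rx_data() is a
hypothetical helper)::

    /* give the CPU a consistent view before reading the data */
    dma_sync_single_for_cpu(dev, dma_addr, len, DMA_FROM_DEVICE);
    process_rx_data(buffer, len);
    /* hand the buffer back to the device for the next transfer */
    dma_sync_single_for_device(dev, dma_addr, len, DMA_FROM_DEVICE);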

::

    dma_addr_t
    dma_map_single_attrs(struct device *dev, void *cpu_addr, size_t size,
                         enum dma_data_direction dir,
                         unsigned long attrs)

    void
    dma_unmap_single_attrs(struct device *dev, dma_addr_t dma_addr,
                           size_t size, enum dma_data_direction dir,
                           unsigned long attrs)

    int
    dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                     int nents, enum dma_data_direction dir,
                     unsigned long attrs)

    void
    dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sgl,
                       int nents, enum dma_data_direction dir,
                       unsigned long attrs)

The four functions above are just like the counterpart functions
without the _attrs suffixes, except that they pass an optional
set of dma_attrs.

The interpretation of DMA attributes is architecture-specific, and
each attribute should be documented in Documentation/DMA-attributes.txt.

If dma_attrs are 0, the semantics of each of these functions
is identical to those of the corresponding function
without the _attrs suffix.  As a result dma_map_single_attrs()
can generally replace dma_map_single(), etc.

As an example of the use of the ``*_attrs`` functions, here's how
you could pass an attribute DMA_ATTR_FOO when mapping memory
for DMA::

    #include <linux/dma-mapping.h>
    /* DMA_ATTR_FOO should be defined in linux/dma-mapping.h and
     * documented in Documentation/DMA-attributes.txt */
    ...

    unsigned long attrs = 0;

    attrs |= DMA_ATTR_FOO;
    ...
    n = dma_map_sg_attrs(dev, sg, nents, DMA_TO_DEVICE, attrs);
    ...

Architectures that care about DMA_ATTR_FOO would check for its
presence in their implementations of the mapping and unmapping
routines, e.g.::

    void whizco_dma_map_sg_attrs(struct device *dev, struct scatterlist *sgl,
                                 int nents, enum dma_data_direction dir,
                                 unsigned long attrs)
    {
        ...
        if (attrs & DMA_ATTR_FOO)
            /* twizzle the frobnozzle */
        ...
    }


Part II - Advanced dma usage
----------------------------

Warning: These pieces of the DMA API should not be used in the
majority of cases, since they cater for unlikely corner cases that
don't belong in usual drivers.

If you don't understand how cache line coherency works between a
processor and an I/O device, you should not be using this part of the
API at all.

::

    void *
    dma_alloc_attrs(struct device *dev, size_t size, dma_addr_t *dma_handle,
                    gfp_t flag, unsigned long attrs)

Identical to dma_alloc_coherent() except that when the
DMA_ATTR_NON_CONSISTENT flag is passed in the attrs argument, the
platform will choose to return either consistent or non-consistent memory
as it sees fit.  By using this API, you are guaranteeing to the platform
that you have all the correct and necessary sync points for this memory
in the driver should it choose to return non-consistent memory.

Note: where the platform can return consistent memory, it will
guarantee that the sync points become nops.

Warning:  Handling non-consistent memory is a real pain.  You should
only use this API if you positively know your driver will be
required to work on one of the rare (usually non-PCI) architectures
that simply cannot make consistent memory.

::

    void
    dma_free_attrs(struct device *dev, size_t size, void *cpu_addr,
                   dma_addr_t dma_handle, unsigned long attrs)

Free memory allocated by dma_alloc_attrs().  All common parameters
must be identical to those otherwise passed to dma_free_coherent(),
and the attrs argument must be identical to the attrs passed to
dma_alloc_attrs().

::

    int
    dma_get_cache_alignment(void)

Returns the processor cache alignment.  This is the absolute minimum
alignment *and* width that you must observe when either mapping
memory or doing partial flushes.

.. note::

    This API may return a number *larger* than the actual cache
    line, but it will guarantee that one or more cache lines fit exactly
    into the width returned by this call.  It will also always be a power
    of two for easy alignment.
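
For instance, to round a buffer length up to a safe width for mapping or
partial flushes (buf_len is hypothetical; ALIGN() is the standard kernel
rounding macro)::

    int align = dma_get_cache_alignment();
    size_t aligned_len = ALIGN(buf_len, align);	/* safe map/flush width */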

::

    void
    dma_cache_sync(struct device *dev, void *vaddr, size_t size,
                   enum dma_data_direction direction)

Do a partial sync of memory that was allocated by dma_alloc_attrs() with
the DMA_ATTR_NON_CONSISTENT flag starting at virtual address vaddr and
continuing on for size.  Again, you *must* observe the cache line
boundaries when doing this.
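
A hedged sketch of the non-consistent pattern (the one-page size and the
memset() fill are illustrative)::

    void *vaddr;
    dma_addr_t dma;

    vaddr = dma_alloc_attrs(dev, PAGE_SIZE, &dma, GFP_KERNEL,
                            DMA_ATTR_NON_CONSISTENT);
    if (!vaddr)
        return -ENOMEM;

    /* the CPU fills the buffer, then makes it visible to the device */
    memset(vaddr, 0, PAGE_SIZE);
    dma_cache_sync(dev, vaddr, PAGE_SIZE, DMA_TO_DEVICE);

    /* ... the device reads the buffer ... */

    dma_free_attrs(dev, PAGE_SIZE, vaddr, dma, DMA_ATTR_NON_CONSISTENT);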

::

    int
    dma_declare_coherent_memory(struct device *dev, phys_addr_t phys_addr,
                                dma_addr_t device_addr, size_t size,
                                int flags)

Declare a region of memory to be handed out by dma_alloc_coherent() when
it's asked for coherent memory for this device.

phys_addr is the CPU physical address to which the memory is currently
assigned (this will be ioremapped so the CPU can access the region).

device_addr is the DMA address the device needs to be programmed
with to actually address this memory (this will be handed out as the
dma_addr_t in dma_alloc_coherent()).

size is the size of the area (must be a multiple of PAGE_SIZE).

flags can be ORed together and are:

- DMA_MEMORY_EXCLUSIVE - only allocate memory from the declared regions.
  Do not allow dma_alloc_coherent() to fall back to system memory when
  it's out of memory in the declared region.

As a simplification for the platforms, only *one* such region of
memory may be declared per device.

For reasons of efficiency, most platforms choose to track the declared
region only at the granularity of a page.  For smaller allocations,
you should use the dma_pool() API.
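
As a sketch only (the resource-derived physical address, the 0x80000000
device address, and the 0-on-success check are illustrative assumptions,
not guarantees made by this document), a platform driver might declare
such a region in its probe routine::

    /* 1MB of device-local memory; dev_res is a hypothetical
     * struct resource describing its CPU physical address.
     */
    ret = dma_declare_coherent_memory(dev, dev_res->start, 0x80000000,
                                      SZ_1M, DMA_MEMORY_EXCLUSIVE);
    if (ret)
        return ret;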

::

    void
    dma_release_declared_memory(struct device *dev)

Remove the memory region previously declared from the system.  This
API performs *no* in-use checking for this region and will return
unconditionally having removed all the required structures.  It is the
driver's job to ensure that no parts of this memory region are
currently in use.

::

    void *
    dma_mark_declared_memory_occupied(struct device *dev,
                                      dma_addr_t device_addr, size_t size)

This is used to occupy specific regions of the declared space
(dma_alloc_coherent() will hand out the first free region it finds).

device_addr is the *device* address of the region requested.

size is the size (and should be a page-sized multiple).

The return value will be either a pointer to the processor virtual
address of the memory, or an error (via PTR_ERR()) if any part of the
region is occupied.
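
A brief sketch of checking that error return (the device address shown is
a made-up value)::

    void *vaddr;

    vaddr = dma_mark_declared_memory_occupied(dev, 0x80001000, PAGE_SIZE);
    if (IS_ERR(vaddr))
        return PTR_ERR(vaddr);  /* some part of the region was busy */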

Part III - Debug drivers use of the DMA-API
-------------------------------------------

The DMA-API as described above has some constraints.  DMA addresses must be
released with the corresponding function with the same size, for example.
With the advent of hardware IOMMUs it becomes more and more important that
drivers do not violate those constraints.  In the worst case such a violation
can result in data corruption, up to destroyed filesystems.

To debug drivers and find bugs in the usage of the DMA-API, checking code can
be compiled into the kernel which will tell the developer about those
violations.  If your architecture supports it, you can select the "Enable
debugging of DMA-API usage" option in your kernel configuration.  Enabling
this option has a performance impact.  Do not enable it in production
kernels.

If you boot the resulting kernel, it will contain code which does some
bookkeeping about what DMA memory was allocated for which device.  If this
code detects an error it prints a warning message with some details into
your kernel log.  An example warning message may look like this::

    WARNING: at /data2/repos/linux-2.6-iommu/lib/dma-debug.c:448
    check_unmap+0x203/0x490()
    Hardware name:
    forcedeth 0000:00:08.0: DMA-API: device driver frees DMA memory with wrong
    function [device address=0x00000000640444be] [size=66 bytes] [mapped as
    single] [unmapped as page]
    Modules linked in: nfsd exportfs bridge stp llc r8169
    Pid: 0, comm: swapper Tainted: G        W  2.6.28-dmatest-09289-g8bb99c0 #1
    Call Trace:
    <IRQ>  [<ffffffff80240b22>] warn_slowpath+0xf2/0x130
    [<ffffffff80647b70>] _spin_unlock+0x10/0x30
    [<ffffffff80537e75>] usb_hcd_link_urb_to_ep+0x75/0xc0
    [<ffffffff80647c22>] _spin_unlock_irqrestore+0x12/0x40
    [<ffffffff8055347f>] ohci_urb_enqueue+0x19f/0x7c0
    [<ffffffff80252f96>] queue_work+0x56/0x60
    [<ffffffff80237e10>] enqueue_task_fair+0x20/0x50
    [<ffffffff80539279>] usb_hcd_submit_urb+0x379/0xbc0
    [<ffffffff803b78c3>] cpumask_next_and+0x23/0x40
    [<ffffffff80235177>] find_busiest_group+0x207/0x8a0
    [<ffffffff8064784f>] _spin_lock_irqsave+0x1f/0x50
    [<ffffffff803c7ea3>] check_unmap+0x203/0x490
    [<ffffffff803c8259>] debug_dma_unmap_page+0x49/0x50
    [<ffffffff80485f26>] nv_tx_done_optimized+0xc6/0x2c0
    [<ffffffff80486c13>] nv_nic_irq_optimized+0x73/0x2b0
    [<ffffffff8026df84>] handle_IRQ_event+0x34/0x70
    [<ffffffff8026ffe9>] handle_edge_irq+0xc9/0x150
    [<ffffffff8020e3ab>] do_IRQ+0xcb/0x1c0
    [<ffffffff8020c093>] ret_from_intr+0x0/0xa
    <EOI> <4>---[ end trace f6435a98e2a38c0e ]---

The driver developer can find the driver and the device including a stacktrace
of the DMA-API call which caused this warning.

By default only the first error will result in a warning message; all other
errors will only be silently counted.  This limitation exists to prevent the
code from flooding your kernel log.  To support debugging a device driver
this can be disabled via debugfs.  See the debugfs interface documentation
below for details.

The debugfs directory for the DMA-API debugging code is called dma-api/.  In
this directory the following files can currently be found:

=============================== ===============================================
dma-api/all_errors              This file contains a numeric value.  If this
                                value is not equal to zero the debugging code
                                will print a warning for every error it finds
                                into the kernel log.  Be careful with this
                                option, as it can easily flood your logs.

dma-api/disabled                This read-only file contains the character 'Y'
                                if the debugging code is disabled.  This can
                                happen when it runs out of memory or if it was
                                disabled at boot time.

dma-api/error_count             This file is read-only and shows the total
                                number of errors found.

dma-api/num_errors              The number in this file shows how many
                                warnings will be printed to the kernel log
                                before it stops.  This number is initialized
                                to one at system boot and can be set by
                                writing into this file.

dma-api/min_free_entries        This read-only file can be read to get the
                                minimum number of free dma_debug_entries the
                                allocator has ever seen.  If this value goes
                                down to zero the code will disable itself
                                because it is no longer reliable.

dma-api/num_free_entries        The current number of free dma_debug_entries
                                in the allocator.

dma-api/driver_filter           You can write a name of a driver into this
                                file to limit the debug output to requests
                                from that particular driver.  Write an empty
                                string to that file to disable the filter and
                                see all errors again.
=============================== ===============================================

If you have this code compiled into your kernel it will be enabled by default.
If you want to boot without the bookkeeping anyway you can provide
'dma_debug=off' as a boot parameter.  This will disable DMA-API debugging.
Notice that you cannot enable it again at runtime.  You have to reboot to do
so.

If you want to see debug messages only for a special device driver you can
specify the dma_debug_driver=<drivername> parameter.  This will enable the
driver filter at boot time.  The debug code will only print errors for that
driver afterwards.  This filter can be disabled or changed later using
debugfs.

When the code disables itself at runtime this is most likely because it ran
out of dma_debug_entries.  These entries are preallocated at boot.  The number
of preallocated entries is defined per architecture.  If it is too low for
you, boot with 'dma_debug_entries=<your_desired_number>' to overwrite the
architectural default.
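
For example, these two parameters could be combined on the kernel command
line like this (the entry count and the forcedeth driver name are
illustrative choices, not recommendations)::

    dma_debug_entries=65536 dma_debug_driver=forcedeth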

::

    void
    debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);

The dma-debug interface debug_dma_mapping_error() helps to debug drivers
that fail to check for DMA mapping errors on addresses returned by the
dma_map_single() and dma_map_page() interfaces.  This interface clears a
flag set by debug_dma_map_page() to indicate that dma_mapping_error() has
been called by the driver.  When the driver does the unmap,
debug_dma_unmap() checks the flag and, if it is still set, prints a
warning message that includes the call trace leading up to the unmap.
This interface can be called from dma_mapping_error() routines to enable
DMA mapping error check debugging.