How To Write Linux PCI Drivers

by Martin Mares <mj@ucw.cz> on 07-Feb-2000
updated by Grant Grundler <grundler@parisc-linux.org> on 23-Dec-2006

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The world of PCI is vast and full of (mostly unpleasant) surprises.
Since each CPU architecture implements different chip-sets and PCI devices
have different requirements (erm, "features"), the result is that PCI support
in the Linux kernel is not as trivial as one would wish. This short paper
tries to introduce all potential driver authors to the Linux APIs for
PCI device drivers.

A more complete resource is the third edition of "Linux Device Drivers"
by Jonathan Corbet, Alessandro Rubini, and Greg Kroah-Hartman.
LDD3 is available for free (under a Creative Commons License) from:

	http://lwn.net/Kernel/LDD3/

However, keep in mind that all documents are subject to "bit rot".
Refer to the source code if things are not working as described here.

Please send questions/comments/patches about the Linux PCI API to the
"Linux PCI" <linux-pci@atrey.karlin.mff.cuni.cz> mailing list.



0. Structure of PCI drivers
~~~~~~~~~~~~~~~~~~~~~~~~~~~
PCI drivers "discover" PCI devices in a system via pci_register_driver().
Actually, it's the other way around. When the PCI generic code discovers
a new device, the driver with a matching "description" will be notified.
Details on this below.

pci_register_driver() leaves most of the probing for devices to
the PCI layer and supports online insertion/removal of devices [thus
supporting hot-pluggable PCI, CardBus, and Express-Card in a single driver].
The pci_register_driver() call requires passing in a table of function
pointers and thus dictates the high level structure of a driver.

Once the driver knows about a PCI device and takes ownership, the
driver generally needs to perform the following initialization:

	Enable the device
	Request MMIO/IOP resources
	Set the DMA mask size (for both coherent and streaming DMA)
	Allocate and initialize shared control data (pci_alloc_consistent())
	Access device configuration space (if needed)
	Register IRQ handler (request_irq())
	Initialize non-PCI parts of the chip (e.g. LAN/SCSI/etc.)
	Enable DMA/processing engines

When done using the device, and perhaps the module needs to be unloaded,
the driver needs to take the following steps:

	Disable the device from generating IRQs
	Release the IRQ (free_irq())
	Stop all DMA activity
	Release DMA buffers (both streaming and coherent)
	Unregister from other subsystems (e.g. scsi or netdev)
	Release MMIO/IOP resources
	Disable the device

Most of these topics are covered in the following sections.
For the rest look at LDD3 or <linux/pci.h>.

If the PCI subsystem is not configured (CONFIG_PCI is not set), most of
the PCI functions described below are defined as inline functions either
completely empty or just returning an appropriate error code to avoid
lots of ifdefs in the drivers.


1. pci_register_driver() call
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PCI device drivers call pci_register_driver() during their
initialization with a pointer to a structure describing the driver
(struct pci_driver):

field name      Description
----------      ------------------------------------------------------
id_table        Pointer to table of device IDs the driver is
                interested in.  Most drivers should export this
                table using MODULE_DEVICE_TABLE(pci,...).

probe           This probing function gets called (during execution
                of pci_register_driver() for already existing
                devices or later if a new device gets inserted) for
                all PCI devices which match the ID table and are not
                "owned" by other drivers yet. This function gets
                passed a "struct pci_dev *" for each device whose
                entry in the ID table matches the device. The probe
                function returns zero when the driver chooses to
                take "ownership" of the device or an error code
                (negative number) otherwise.
                The probe function always gets called from process
                context, so it can sleep.

remove          The remove() function gets called whenever a device
                being handled by this driver is removed (either during
                deregistration of the driver or when it's manually
                pulled out of a hot-pluggable slot).
                The remove function always gets called from process
                context, so it can sleep.

suspend         Put device into low power state.
suspend_late    Put device into low power state.

resume_early    Wake device from low power state.
resume          Wake device from low power state.

                (Please see Documentation/power/pci.txt for descriptions
                of PCI Power Management and the related functions.)

shutdown        Hook into reboot_notifier_list (kernel/sys.c).
                Intended to stop any idling DMA operations.
                Useful for enabling wake-on-lan (NIC) or changing
                the power state of a device before reboot.
                E.g. drivers/net/e100.c.

err_handler     See Documentation/PCI/pci-error-recovery.txt


The ID table is an array of struct pci_device_id entries ending with an
all-zero entry.  Definitions with static const are generally preferred.

Each entry consists of:

vendor,device   Vendor and device ID to match (or PCI_ANY_ID)

subvendor,      Subsystem vendor and device ID to match (or PCI_ANY_ID)
subdevice,

class           Device class, subclass, and "interface" to match.
                See Appendix D of the PCI Local Bus Spec or
                include/linux/pci_ids.h for a full list of classes.
                Most drivers do not need to specify class/class_mask
                as vendor/device is normally sufficient.

class_mask      Limits which sub-fields of the class field are compared.
                See drivers/scsi/sym53c8xx_2/ for an example of usage.

driver_data     Data private to the driver.
                Most drivers don't need to use the driver_data field.
                Best practice is to use driver_data as an index
                into a static list of equivalent device types,
                instead of using it as a pointer.


Most drivers only need PCI_DEVICE() or PCI_DEVICE_CLASS() to set up
a pci_device_id table.
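
For example, a minimal ID table might look like the sketch below (the
vendor/device values and the "foo" naming are hypothetical placeholders):

	static const struct pci_device_id foo_pci_tbl[] = {
		{ PCI_DEVICE(0x1234, 0xabcd) },	/* hypothetical IDs */
		{ PCI_DEVICE(0x1234, 0xabce), .driver_data = 1 },
						/* index into a board table */
		{ }				/* terminating all-zero entry */
	};
	MODULE_DEVICE_TABLE(pci, foo_pci_tbl);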

New PCI IDs may be added to a device driver pci_ids table at runtime
as shown below:

echo "vendor device subvendor subdevice class class_mask driver_data" > \
/sys/bus/pci/drivers/{driver}/new_id

All fields are passed in as hexadecimal values (no leading 0x).
The vendor and device fields are mandatory, the others are optional. Users
need only pass as many optional fields as necessary:
	o subvendor and subdevice fields default to PCI_ANY_ID (FFFFFFFF)
	o class and class_mask fields default to 0
	o driver_data defaults to 0UL.

Note that driver_data must match the value used by any of the pci_device_id
entries defined in the driver. This makes the driver_data field mandatory
if all the pci_device_id entries have a non-zero driver_data value.

Once added, the driver probe routine will be invoked for any unclaimed
PCI devices listed in its (newly updated) pci_ids list.

When the driver exits, it just calls pci_unregister_driver() and the PCI layer
automatically calls the remove hook for all devices handled by the driver.
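
Putting the pieces together, the registration boilerplate typically looks
like the following sketch; foo_probe(), foo_remove(), and foo_pci_tbl are
hypothetical driver-provided symbols:

	static struct pci_driver foo_pci_driver = {
		.name     = "foo",
		.id_table = foo_pci_tbl,
		.probe    = foo_probe,
		.remove   = foo_remove,
	};

	module_pci_driver(foo_pci_driver);

module_pci_driver() expands to the usual module_init()/module_exit() pair
that simply registers and unregisters the driver.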


1.1 "Attributes" for driver functions/data

Please mark the initialization and cleanup functions where appropriate
(the corresponding macros are defined in <linux/init.h>; a short example
follows the tips below):

__init          Initialization code. Thrown away after the driver
                initializes.
__exit          Exit code. Ignored for non-modular drivers.

Tips on when/where to use the above attributes:
o The module_init()/module_exit() functions (and all
  initialization functions called _only_ from these)
  should be marked __init/__exit.

o Do not mark the struct pci_driver.

o Do NOT mark a function if you are not sure which mark to use.
  Better to not mark the function than mark the function wrong.
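
If a driver spells out its own module_init()/module_exit() functions instead
of using module_pci_driver(), a marked-up sketch (reusing the hypothetical
foo_pci_driver from above) would be:

	static int __init foo_init(void)
	{
		/* runs once at load time and is then discarded */
		return pci_register_driver(&foo_pci_driver);
	}

	static void __exit foo_exit(void)
	{
		/* only used when the driver is built as a module */
		pci_unregister_driver(&foo_pci_driver);
	}

	module_init(foo_init);
	module_exit(foo_exit);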


2. How to find PCI devices manually
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PCI drivers should have a really good reason for not using the
pci_register_driver() interface to search for PCI devices.
The main reason PCI devices are controlled by multiple drivers
is that one PCI device implements several different HW services.
E.g. a combined serial/parallel port/floppy controller.

A manual search may be performed using the following constructs:

Searching by vendor and device ID:

	struct pci_dev *dev = NULL;
	while ((dev = pci_get_device(VENDOR_ID, DEVICE_ID, dev)) != NULL)
		configure_device(dev);

Searching by class ID (iterate in a similar way):

	pci_get_class(CLASS_ID, dev)

Searching by both vendor/device and subsystem vendor/device ID:

	pci_get_subsys(VENDOR_ID, DEVICE_ID, SUBSYS_VENDOR_ID, SUBSYS_DEVICE_ID, dev).

You can use the constant PCI_ANY_ID as a wildcard replacement for
VENDOR_ID or DEVICE_ID.  This allows searching for any device from a
specific vendor, for example.

These functions are hotplug-safe. They increment the reference count on
the pci_dev that they return. You must eventually (possibly at module unload)
decrement the reference count on these devices by calling pci_dev_put().
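
A sketch of the reference counting, using made-up MY_VENDOR_ID/MY_DEVICE_ID
constants:

	struct pci_dev *pdev;

	pdev = pci_get_device(MY_VENDOR_ID, MY_DEVICE_ID, NULL);
	if (pdev) {
		/* ... use pdev ... */
		pci_dev_put(pdev);  /* drop the reference pci_get_device() took */
	}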


3. Device Initialization Steps
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As noted in the introduction, most PCI drivers need the following steps
for device initialization:

	Enable the device
	Request MMIO/IOP resources
	Set the DMA mask size (for both coherent and streaming DMA)
	Allocate and initialize shared control data (pci_alloc_consistent())
	Access device configuration space (if needed)
	Register IRQ handler (request_irq())
	Initialize non-PCI parts of the chip (e.g. LAN/SCSI/etc.)
	Enable DMA/processing engines.

The driver can access PCI config space registers at any time.
(Well, almost. When running BIST, config space can go away...but
that will just result in a PCI Bus Master Abort and config reads
will return garbage).


3.1 Enable the PCI device
~~~~~~~~~~~~~~~~~~~~~~~~~
Before touching any device registers, the driver needs to enable
the PCI device by calling pci_enable_device(). This will:
	o wake up the device if it was in suspended state,
	o allocate I/O and memory regions of the device (if BIOS did not),
	o allocate an IRQ (if BIOS did not).

NOTE: pci_enable_device() can fail! Check the return value.

[ OS BUG: we don't check resource allocations before enabling those
  resources. The sequence would make more sense if we called
  pci_request_resources() before calling pci_enable_device().
  Currently, the device drivers can't detect the bug when two
  devices have been allocated the same range. This is not a common
  problem and unlikely to get fixed soon.

  This has been discussed before but not changed as of 2.6.19:
  http://lkml.org/lkml/2006/3/2/194
]

pci_set_master() will enable DMA by setting the bus master bit
in the PCI_COMMAND register. It also fixes the latency timer value if
it's set to something bogus by the BIOS.  pci_clear_master() will
disable DMA by clearing the bus master bit.

If the PCI device can use the PCI Memory-Write-Invalidate transaction,
call pci_set_mwi().  This enables the PCI_COMMAND bit for Mem-Wr-Inval
and also ensures that the cache line size register is set correctly.
Check the return value of pci_set_mwi() as not all architectures
or chip-sets may support Memory-Write-Invalidate.  Alternatively,
if Mem-Wr-Inval would be nice to have but is not required, call
pci_try_set_mwi() to have the system do its best effort at enabling
Mem-Wr-Inval.
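
As a sketch, the enable step of a hypothetical probe() routine could look
like this (error handling abbreviated):

	static int foo_enable(struct pci_dev *pdev)
	{
		int err;

		err = pci_enable_device(pdev);	/* can fail -- check it */
		if (err)
			return err;

		pci_set_master(pdev);	/* allow the device to master the bus */
		pci_try_set_mwi(pdev);	/* nice to have, not required */

		return 0;
	}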


3.2 Request MMIO/IOP resources
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Memory (MMIO) and I/O port addresses should NOT be read directly
from the PCI device config space. Use the values in the pci_dev structure
as the PCI "bus address" might have been remapped to a "host physical"
address by the arch/chip-set specific kernel support.

See Documentation/io-mapping.txt for how to access device registers
or device memory.

The device driver needs to call pci_request_region() to verify
no other device is already using the same address resource.
Conversely, drivers should call pci_release_region() AFTER
calling pci_disable_device().
The idea is to prevent two devices colliding on the same address range.

[ See OS BUG comment above. Currently (2.6.19), the driver can only
  determine MMIO and IO Port resource availability _after_ calling
  pci_enable_device(). ]

Generic flavors of pci_request_region() are request_mem_region()
(for MMIO ranges) and request_region() (for IO Port ranges).
Use these for address resources that are not described by "normal" PCI
BARs.

Also see pci_request_selected_regions() below.
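
A sketch of claiming and mapping BAR 0, assuming the device's registers are
behind BAR 0 and "foo" is the hypothetical driver name:

	static void __iomem *foo_map_bar0(struct pci_dev *pdev)
	{
		void __iomem *regs;

		if (pci_request_regions(pdev, "foo"))	/* claim all BARs */
			return NULL;

		regs = pci_ioremap_bar(pdev, 0);	/* map BAR 0 for MMIO */
		if (!regs)
			pci_release_regions(pdev);

		return regs;
	}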


3.3 Set the DMA mask size
~~~~~~~~~~~~~~~~~~~~~~~~~
[ If anything below doesn't make sense, please refer to
  Documentation/DMA-API.txt. This section is just a reminder that
  drivers need to indicate DMA capabilities of the device and is not
  an authoritative source for DMA interfaces. ]

While all drivers should explicitly indicate the DMA capability
(e.g. 32 or 64 bit) of the PCI bus master, devices with more than
32-bit bus master capability for streaming data need the driver
to "register" this capability by calling pci_set_dma_mask() with
appropriate parameters.  In general this allows more efficient DMA
on systems where System RAM exists above 4G _physical_ address.

Drivers for all PCI-X and PCIe compliant devices must call
pci_set_dma_mask() as they are 64-bit DMA devices.

Similarly, drivers must also "register" this capability if the device
can directly address "consistent memory" in System RAM above 4G physical
address by calling pci_set_consistent_dma_mask().
Again, this includes drivers for all PCI-X and PCIe compliant devices.
Many 64-bit "PCI" devices (before PCI-X) and some PCI-X devices are
64-bit DMA capable for payload ("streaming") data but not control
("consistent") data.


3.4 Setup shared control data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Once the DMA masks are set, the driver can allocate "consistent" (a.k.a. shared)
memory.  See Documentation/DMA-API.txt for a full description of
the DMA APIs. This section is just a reminder that it needs to be done
before enabling DMA on the device.
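
For instance, a hypothetical descriptor ring could be allocated like this
(struct foo_desc and FOO_RING_BYTES are made-up names):

	dma_addr_t ring_dma;
	struct foo_desc *ring;

	ring = pci_alloc_consistent(pdev, FOO_RING_BYTES, &ring_dma);
	if (!ring)
		return -ENOMEM;
	/* hand ring_dma to the device; the CPU uses "ring" directly */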


3.5 Initialize device registers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some drivers will need specific "capability" fields programmed
or other "vendor specific" registers initialized or reset.
E.g. clearing pending interrupts.


3.6 Register IRQ handler
~~~~~~~~~~~~~~~~~~~~~~~~
While calling request_irq() is the last step described here,
this is often just another intermediate step to initialize a device.
This step can often be deferred until the device is opened for use.

All interrupt handlers for IRQ lines should be registered with IRQF_SHARED
and use the devid to map IRQs to devices (remember that all PCI IRQ lines
can be shared).

request_irq() will associate an interrupt handler and device handle
with an interrupt number. Historically interrupt numbers represent
IRQ lines which run from the PCI device to the Interrupt controller.
With MSI and MSI-X (more below) the interrupt number is a CPU "vector".

request_irq() also enables the interrupt. Make sure the device is
quiesced and does not have any interrupts pending before registering
the interrupt handler.

MSI and MSI-X are PCI capabilities. Both are "Message Signaled Interrupts"
which deliver interrupts to the CPU via a DMA write to a Local APIC.
The fundamental difference between MSI and MSI-X is how multiple
"vectors" get allocated. MSI requires contiguous blocks of vectors
while MSI-X can allocate several individual ones.

MSI capability can be enabled by calling pci_alloc_irq_vectors() with the
PCI_IRQ_MSI and/or PCI_IRQ_MSIX flags before calling request_irq(). This
causes the PCI support to program CPU vector data into the PCI device
capability registers. Many architectures, chip-sets, or BIOSes do NOT
support MSI or MSI-X and a call to pci_alloc_irq_vectors() with just
the PCI_IRQ_MSI and PCI_IRQ_MSIX flags will fail, so try to always
specify PCI_IRQ_LEGACY as well.

Drivers that have different interrupt handlers for MSI/MSI-X and
legacy INTx should choose the right one based on the msi_enabled
and msix_enabled flags in the pci_dev structure after calling
pci_alloc_irq_vectors().
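
A sketch of that sequence; foo_msi_interrupt(), foo_intx_interrupt(), and
foo_priv are hypothetical driver symbols:

	int nvec, err;

	nvec = pci_alloc_irq_vectors(pdev, 1, 1,
			PCI_IRQ_MSIX | PCI_IRQ_MSI | PCI_IRQ_LEGACY);
	if (nvec < 0)
		return nvec;

	if (pdev->msi_enabled || pdev->msix_enabled)
		err = request_irq(pci_irq_vector(pdev, 0), foo_msi_interrupt,
				  0, "foo", foo_priv);
	else
		err = request_irq(pci_irq_vector(pdev, 0), foo_intx_interrupt,
				  IRQF_SHARED, "foo", foo_priv);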

There are (at least) two really good reasons for using MSI:
1) MSI is an exclusive interrupt vector by definition.
   This means the interrupt handler doesn't have to verify
   its device caused the interrupt.

2) MSI avoids DMA/IRQ race conditions. DMA to host memory is guaranteed
   to be visible to the host CPU(s) when the MSI is delivered. This
   is important for both data coherency and avoiding stale control data.
   This guarantee allows the driver to omit MMIO reads to flush
   the DMA stream.

See drivers/infiniband/hw/mthca/ or drivers/net/tg3.c for examples
of MSI/MSI-X usage.



4. PCI device shutdown
~~~~~~~~~~~~~~~~~~~~~~

When a PCI device driver is being unloaded, most of the following
steps need to be performed, as sketched in the remove() routine below:

	Disable the device from generating IRQs
	Release the IRQ (free_irq())
	Stop all DMA activity
	Release DMA buffers (both streaming and consistent)
	Unregister from other subsystems (e.g. scsi or netdev)
	Disable device from responding to MMIO/IO Port addresses
	Release MMIO/IO Port resource(s)

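A sketch of a remove() routine that follows this order; the foo_*() helpers
and the foo_priv fields are hypothetical:

	static void foo_remove(struct pci_dev *pdev)
	{
		struct foo_priv *priv = pci_get_drvdata(pdev);

		foo_mask_device_irqs(priv);	/* device stops generating IRQs */
		free_irq(pci_irq_vector(pdev, 0), priv);
		foo_stop_dma(priv);		/* quiesce all DMA engines */
		pci_free_consistent(pdev, FOO_RING_BYTES,
				    priv->ring, priv->ring_dma);
		foo_unregister_subsystem(priv);	/* e.g. scsi or netdev teardown */
		iounmap(priv->regs);
		pci_disable_device(pdev);
		pci_release_regions(pdev);
	}
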

4.1 Stop IRQs on the device
~~~~~~~~~~~~~~~~~~~~~~~~~~~
How to do this is chip/device specific. If it's not done, it opens
the possibility of a "screaming interrupt" if (and only if)
the IRQ is shared with another device.

When the shared IRQ handler is "unhooked", the remaining devices
using the same IRQ line will still need the IRQ enabled. Thus if the
"unhooked" device asserts the IRQ line, the system will respond assuming
it was one of the remaining devices that asserted the IRQ line. Since none
of the other devices will handle the IRQ, the system will "hang" until
it decides the IRQ isn't going to get handled and masks the IRQ (100,000
iterations later). Once the shared IRQ is masked, the remaining devices
will stop functioning properly. Not a nice situation.

This is another reason to use MSI or MSI-X if it's available.
MSI and MSI-X are defined to be exclusive interrupts and thus
are not susceptible to the "screaming interrupt" problem.


4.2 Release the IRQ
~~~~~~~~~~~~~~~~~~~
Once the device is quiesced (no more IRQs), one can call free_irq().
This function will return control once any pending IRQs are handled,
"unhook" the driver's IRQ handler from that IRQ, and finally release
the IRQ if no one else is using it.


4.3 Stop all DMA activity
~~~~~~~~~~~~~~~~~~~~~~~~~
It's extremely important to stop all DMA operations BEFORE attempting
to deallocate DMA control data. Failure to do so can result in memory
corruption, hangs, and on some chip-sets a hard crash.

Stopping DMA after stopping the IRQs can avoid races where the
IRQ handler might restart DMA engines.

While this step sounds obvious and trivial, several "mature" drivers
didn't get this step right in the past.


4.4 Release DMA buffers
~~~~~~~~~~~~~~~~~~~~~~~
Once DMA is stopped, clean up streaming DMA first.
I.e. unmap data buffers and return buffers to the "upstream"
owner, if there is one.

Then clean up "consistent" buffers which contain the control data.

See Documentation/DMA-API.txt for details on unmapping interfaces.


4.5 Unregister from other subsystems
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Most low level PCI device drivers support some other subsystem
like USB, ALSA, SCSI, NetDev, Infiniband, etc. Make sure your
driver isn't losing resources from that other subsystem.
If this happens, typically the symptom is an Oops (panic) when
the subsystem attempts to call into a driver that has been unloaded.


4.6 Disable Device from responding to MMIO/IO Port addresses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
iounmap() MMIO or IO Port resources and then call pci_disable_device().
This is the symmetric opposite of pci_enable_device().
Do not access device registers after calling pci_disable_device().


4.7 Release MMIO/IO Port Resource(s)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Call pci_release_region() to mark the MMIO or IO Port range as available.
Failure to do so usually results in the inability to reload the driver.



5. How to access PCI config space
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You can use pci_(read|write)_config_(byte|word|dword) to access the config
space of a device represented by struct pci_dev *. All these functions return 0
when successful or an error code (PCIBIOS_...) which can be translated to a text
string by pcibios_strerror. Most drivers expect that accesses to valid PCI
devices don't fail.

If you don't have a struct pci_dev available, you can call
pci_bus_(read|write)_config_(byte|word|dword) to access a given device
and function on that bus.

If you access fields in the standard portion of the config header, please
use symbolic names of locations and bits declared in <linux/pci.h>.

If you need to access PCI Capability registers, just call
pci_find_capability() for the particular capability and it will find the
corresponding register block for you.  (PCI Express Extended Capabilities,
which live in the extended config space, are located with
pci_find_ext_capability() instead.)
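
A short sketch using the symbolic names from <linux/pci.h>:

	u16 cmd, pmcsr;
	int pm;

	pci_read_config_word(pdev, PCI_COMMAND, &cmd);
	cmd |= PCI_COMMAND_MEMORY;		/* enable MMIO decoding */
	pci_write_config_word(pdev, PCI_COMMAND, cmd);

	pm = pci_find_capability(pdev, PCI_CAP_ID_PM);	/* PM capability */
	if (pm)
		pci_read_config_word(pdev, pm + PCI_PM_CTRL, &pmcsr);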


6. Other interesting functions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

pci_get_domain_bus_and_slot()   Find pci_dev corresponding to given domain,
                                bus, and slot/function number (devfn). If the
                                device is found, its reference count is
                                increased.
pci_set_power_state()           Set PCI Power Management state (0=D0 ... 3=D3)
pci_find_capability()           Find specified capability in device's capability
                                list.
pci_resource_start()            Returns bus start address for a given PCI region
pci_resource_end()              Returns bus end address for a given PCI region
pci_resource_len()              Returns the byte length of a PCI region
pci_set_drvdata()               Set private driver data pointer for a pci_dev
pci_get_drvdata()               Return private driver data pointer for a pci_dev
pci_set_mwi()                   Enable Memory-Write-Invalidate transactions.
pci_clear_mwi()                 Disable Memory-Write-Invalidate transactions.


7. Miscellaneous hints
~~~~~~~~~~~~~~~~~~~~~~

When displaying PCI device names to the user (for example when a driver wants
to tell the user what card it has found), please use pci_name(pci_dev).

Always refer to the PCI devices by a pointer to the pci_dev structure.
All PCI layer functions use this identification and it's the only
reasonable one. Don't use bus/slot/function numbers except for very
special purposes -- on systems with multiple primary buses their semantics
can be pretty complex.

Don't try to turn on Fast Back to Back writes in your driver.  All devices
on the bus need to be capable of doing it, so this is something which needs
to be handled by platform and generic code, not individual drivers.



8. Vendor and device identifications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Do not add new device or vendor IDs to include/linux/pci_ids.h unless they
are shared across multiple drivers.  You can add private definitions in
your driver if they're helpful, or just use plain hex constants.

The device IDs are arbitrary hex numbers (vendor controlled) and normally used
only in a single location, the pci_device_id table.

Please DO submit new vendor/device IDs to http://pci-ids.ucw.cz/.
There are mirrors of the pci.ids file at http://pciids.sourceforge.net/
and https://github.com/pciutils/pciids.



9. Obsolete functions
~~~~~~~~~~~~~~~~~~~~~

There are several functions which you might come across when trying to
port an old driver to the new PCI interface.  They are no longer present
in the kernel as they aren't compatible with hotplug or PCI domains or
don't have sane locking.

pci_find_device()       Superseded by pci_get_device()
pci_find_subsys()       Superseded by pci_get_subsys()
pci_find_slot()         Superseded by pci_get_domain_bus_and_slot()
pci_get_slot()          Superseded by pci_get_domain_bus_and_slot()


The alternative is the traditional PCI device driver that walks PCI
device lists. This is still possible but discouraged.



10. MMIO Space and "Write Posting"
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Converting a driver from using I/O Port space to using MMIO space
often requires some additional changes. Specifically, "write posting"
needs to be handled. Many drivers (e.g. tg3, acenic, sym53c8xx_2)
already do this. I/O Port space guarantees write transactions reach the PCI
device before the CPU can continue. Writes to MMIO space allow the CPU
to continue before the transaction reaches the PCI device. HW weenies
call this "Write Posting" because the write completion is "posted" to
the CPU before the transaction has reached its destination.

Thus, timing sensitive code should add readl() where the CPU is
expected to wait before doing other work.  The classic "bit banging"
sequence works fine for I/O Port space:

	for (i = 8; --i; val >>= 1) {
		outb(val & 1, ioport_reg);	/* write bit */
		udelay(10);
	}

The same sequence for MMIO space should be:

	for (i = 8; --i; val >>= 1) {
		writeb(val & 1, mmio_reg);	/* write bit */
		readb(safe_mmio_reg);		/* flush posted write */
		udelay(10);
	}

It is important that "safe_mmio_reg" not have any side effects that
interfere with the correct operation of the device.

Another case to watch out for is when resetting a PCI device. Use PCI
Configuration space reads to flush the writel(). This will gracefully
handle the PCI master abort on all platforms if the PCI device is
expected to not respond to a readl().  Most x86 platforms will allow
MMIO reads to master abort (a.k.a. "Soft Fail") and return garbage
(e.g. ~0). But many RISC platforms will crash (a.k.a. "Hard Fail").
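
As a sketch, a reset sequence might look like this (FOO_RESET and
FOO_RESET_REG are hypothetical register definitions):

	u32 id;

	writel(FOO_RESET, priv->regs + FOO_RESET_REG);	/* posted MMIO write */
	pci_read_config_dword(pdev, PCI_VENDOR_ID, &id); /* flush via config space */
	udelay(100);					/* made-up settle time */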