tdc - Adding test cases for tdc

Author: Lucas Bates - lucasb@mojatatu.com

ADDING TEST CASES
-----------------

User-defined tests should be added by defining a separate JSON file. This
will help prevent conflicts when updating the repository. Refer to
template.json for the required JSON format for test cases.

Include the 'id' field, but do not assign a value. Running tdc with the -i
option will generate a unique ID for that test case.

tdc will recursively search the 'tc-tests' subdirectory (or the
directories named with the -D option) for .json files. Any test case
files you create in these directories will automatically be included.
If you wish to store your custom test cases elsewhere, be sure to run
tdc with the -f argument and the path to your file, or the -D argument
and the path to your directory(ies).

Be aware of required escape characters in the JSON data, particularly
when defining the match pattern. Refer to the supplied JSON test files
for examples when in doubt. The match pattern is a Python regular
expression, but because it is stored as a JSON string, backslashes and
other special characters must be escaped according to JSON syntax.
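As an illustration (a minimal sketch, not part of tdc itself; the pattern
and output shown are made up), the following Python snippet shows how a
match pattern written with JSON escaping becomes a Python regex once the
test file is parsed:

```python
import json
import re

# A matchPattern as it would appear in a JSON test case: the regex
# backslash in \s must be doubled so it survives JSON parsing.
raw = r'{"matchPattern": "action order [0-9]+: gact action pass\\s+index 8"}'
pattern = json.loads(raw)["matchPattern"]
print(pattern)  # action order [0-9]+: gact action pass\s+index 8

# Applying the parsed pattern as it would be applied to verifyCmd output:
output = "action order 1: gact action pass index 8 ref 1 bind 0"
print(len(re.findall(pattern, output)))  # 1
```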


TEST CASE STRUCTURE
-------------------

Each test case has required data:

id:           A unique alphanumeric value to identify a particular test case.
name:         A descriptive name that explains the command under test.
skip:         An optional key; if the corresponding value is "yes", tdc
              will not execute the test case in question. The test case
              will still appear in the results output, but marked as
              skipped. This key can be placed anywhere inside the test
              case at the top level.
category:     A list of single-word descriptions covering what the command
              under test is testing. Example: filter, actions, u32, gact, etc.
setup:        The list of commands required to ensure the command under test
              succeeds. For example: if testing a filter, the command to
              create the qdisc would appear here.
              This list can be empty.
              Each command can be a string to be executed, or a list
              consisting of a string which is a command to be executed,
              followed by 1 or more acceptable exit codes for this command.
              If only a string is given for the command, an exit code of 0
              will be expected.
cmdUnderTest: The tc command being tested itself.
expExitCode:  The exit code expected from the command under test upon its
              termination. tdc will compare this value against the actual
              returned value.
verifyCmd:    The tc command to be run to verify successful execution.
              For example: if the command under test creates a gact action,
              verifyCmd should be "$TC actions show action gact".
matchPattern: A regular expression to be applied against the output of
              verifyCmd to prove the command under test succeeded. This
              pattern should be as specific as possible so that a false
              positive is not matched.
matchCount:   How many times the regex in matchPattern should match. A
              value of 0 is acceptable.
teardown:     The list of commands to clean up after the test is completed.
              The environment should be returned to the same state as when
              this test was started: qdiscs deleted, actions flushed, etc.
              This list can be empty.
              Each command can be a string to be executed, or a list
              consisting of a string which is a command to be executed,
              followed by 1 or more acceptable exit codes for this command.
              If only a string is given for the command, an exit code of 0
              will be expected.
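
To tie the fields together, here is a hypothetical test case modeled on
the supplied gact test files (the name, index value, and pattern are
illustrative; the 'id' is left empty so it can be generated with tdc -i):

```json
{
    "id": "",
    "name": "Add gact pass action with index",
    "category": [
        "actions",
        "gact"
    ],
    "setup": [
        [
            "$TC actions flush action gact",
            0,
            1,
            255
        ]
    ],
    "cmdUnderTest": "$TC actions add action pass index 8",
    "expExitCode": "0",
    "verifyCmd": "$TC actions list action gact",
    "matchPattern": "action order [0-9]+: gact action pass.*index 8 ref",
    "matchCount": "1",
    "teardown": [
        "$TC actions flush action gact"
    ]
}
```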


SETUP/TEARDOWN ERRORS
---------------------

If an error is detected during the setup/teardown process, execution of
the tests will immediately stop with an error message, and the namespace
in which the tests are run will be destroyed. This is to prevent
inaccurate results in the test cases. tdc will output a series of TAP
results for the skipped tests.

Repeated failures of the setup/teardown may indicate a problem with the
test case, or possibly even a bug in one of the commands that is not
under test.

It's possible to include acceptable exit codes with a setup/teardown
command so that an error that doesn't matter will not halt the script.
Turn the individual command into a list, with the command first, followed
by all acceptable exit codes for that command.

Example:

A pair of setup commands. The first can have exit code 0, 1 or 255; the
second must have exit code 0.

        "setup": [
            [
                "$TC actions flush action gact",
                0,
                1,
                255
            ],
            "$TC actions add action reclassify index 65536"
        ],
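
The string-vs-list convention above can be sketched in Python (a
hypothetical helper for illustration, not tdc's actual code):

```python
# Hypothetical helper illustrating the setup/teardown command convention:
# a bare string implies that only exit code 0 is acceptable, while a list
# carries the command followed by its acceptable exit codes.
def normalize_command(entry):
    if isinstance(entry, str):
        return entry, [0]
    return entry[0], list(entry[1:])

setup = [
    ["$TC actions flush action gact", 0, 1, 255],
    "$TC actions add action reclassify index 65536",
]
for entry in setup:
    cmd, allowed = normalize_command(entry)
    print(cmd, allowed)
```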