Scan configuration details

A scan configuration is a JSON file that tells Conformance Scan what it is supposed to do, such as:

  • Which API to scan
  • Which endpoint to send the requests to
  • How to authenticate to the API, if authentication is required

You can quickly create a basic scan configuration in 42Crunch Platform by providing some basic information. If you need a more complex scan configuration, you can work on it outside the platform in an editor of your choice and upload the finished configuration to the platform.

Scan configurations and tokens are specific to a scan version: you cannot run Scan v2 using a Scan v1 scan token, and vice versa. When running a scan, make sure you specify the right scan token for the scan version you are using, otherwise Conformance Scan cannot use the associated scan configuration and fails to run.

The details that a scan configuration can capture depend on whether the configuration is for Scan v1 or Scan v2.

API endpoints

By default, Conformance Scan lists endpoint URLs that are parsed directly from your OpenAPI definition. However, if you want to use a URL that is not listed, you can also enter it when you configure the scan settings for Scan v1 or edit the configuration for Scan v2.

A screenshot of selecting the API endpoint in the scan configuration wizard

If you want to override the API endpoint defined in your API definition and scan a different endpoint, make sure you specify a valid host for the paths in your API definition. For example, if your API follows OAS v2 and uses a basePath, make sure you include the basePath in the URL you enter, otherwise the basePath is ignored in the scan configuration.

The URL you enter for the API endpoint must fulfill the following criteria:

  • It is a public URL.
  • It specifies either http or https (for example, http://api.example.com or https://api.example.com/v2).
  • It is not an internal address (for example, http://localhost or 255.255.255.255).
  • It does not include parameters (for example, http://www.example.com/products?id=1&page=2).
  • It does not include anchors (for example, http://www.example.com#up).

Authentication

If you have defined authentication for your API or its operations, Conformance Scan must provide the required details to authenticate when it sends requests to the API. You must therefore configure the authentication details for the security schemes in your API that you want to use in the scan. If you have not defined any security requirements in the API definition, no authentication is required.

Conformance Scan currently supports the following authentication types:

  • Basic authentication
  • API keys in headers, query strings, or cookies
  • Bearer token
  • OAuth 2
  • OpenID Connect (OAS v3 only)

To configure OAuth2 authentication, you must first manually obtain an access token that Conformance Scan can use to authenticate to your API. In the scan configuration wizard, authentication with an OAuth token is configured in the same way as a bearer token. For more details on OAuth2, see RFC 6749.

If needed, you can also configure mutual TLS for client authentication. The client certificate must be in PKCS#12 format. For more details, see Scan API conformance.

How authentication details are configured is different for Scan v1 and Scan v2:

  • Scan v1 configuration: The configuration wizard shows all security schemes defined in the OpenAPI definition of your API. Fill in the details for the security schemes you want to use in the scan. You can leave empty any security schemes that you do not want to use, and Conformance Scan ignores them. Any API operations that use only these security schemes for authentication are skipped in the scan.

    A screenshot of authentication configuration.

  • Scan v2 configuration: The basic scan configuration is generated automatically from your OpenAPI definition, including any authentication methods it lists. The Authentication tab of the configuration provides an at-a-glance summary of the available authentication methods, while the JSON file of the configuration lists the full details of each authentication method, as well as the environment variables (for example, SCAN42C_SECURITY_OAUTH2) that are used to provide the credentials in your Docker command when running the scan (see the sketch after this list). You only need to provide credential details for authentication methods that your API actually uses. If you have defined a method in your OpenAPI definition but your API does not actually use it, you do not need to provide credentials for it.

    A compilation of three example screenshots shows how the authentication details are shown on the Authentication and Web editor tabs of a Scan v2 configuration.
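For illustration, the following is a minimal sketch of how a Scan v2 configuration can pull a credential from an environment variable. The variable name oauth2-token and the exact property layout are assumptions for this example and may differ from the configuration generated for your API:

"environments": {
    "default": {
        "variables": {
            "oauth2-token": {
                "from": "environment",
                "name": "SCAN42C_SECURITY_OAUTH2",
                "required": true
            }
        }
    }
}

When you run the scan, you would then pass the actual credential in your Docker command with -e SCAN42C_SECURITY_OAUTH2=<token> instead of hard-coding it in the configuration.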

If you run Scan v1 in 42Crunch Platform, the authentication details are only used in the current scan and are not stored anywhere. For on-premises scans, the authentication details are stored encrypted as part of the scan configuration in the platform. The authentication details are not retrievable and credentials are hidden.

With on-premises Scan v1, instead of hard-coding the authentication details in the scan configuration, you can use environment variables. See Using environment variables in Scan v1.

Additional settings

When you create a scan configuration for running Conformance Scan, there are various settings that you can choose to configure. Configuring these settings is entirely optional and not needed in most cases: the default settings that Conformance Scan uses are usually enough. However, for more advanced use, the settings let you tweak some aspects of the scan, such as memory limits.

The available settings may vary depending on whether you are running Scan v1 or Scan v2, and whether you run the scan in 42Crunch Platform or on premises. For the full list of available settings, see API Conformance Scan settings.

Scan scenarios and playbooks

This applies to Scan v2 only.

Scan scenarios allow you to configure chains of requests and responses that depend on each other for testing a particular scenario. You can, for example:

  • Tailor the tests and their input that Conformance Scan executes to suit your particular needs
  • Define complex happy path requests (scenarios chaining multiple API calls)
  • Define unhappy paths to establish the expected baseline for error codes
  • Define advanced security tests, such as testing how your API implementation handles BOLA/IDOR (Broken Object Level Authorization, also known as Insecure Direct Object Reference) or BFLA (Broken Function Level Authorization) attacks.
  • Ensure that resources needed for testing a particular API operation get created or deleted as required.

The scan playbook compiles all scan scenarios in a particular scan configuration together for easy reference.

The screenshot shows an example of scenarios in a scan configuration playbook. The playbook includes two scenarios, one for testing the happy path baseline and another for an unhappy path to get a baseline for specific error handling. The scenarios also include some actions that must be performed before and after the operation that is the test subject here.

Currently, playbooks are read-only: you can view them on the playbook tab, but if you want to change something, you must edit the configuration in the web editor or in the editor of your choice. This will be improved in a future release.

Scenario structure

Scenarios are configured in JSON as part of your scan configuration for the API to be scanned. The basic scan configuration that is generated based on the OpenAPI definition of your API already includes some scenarios that Conformance Scan normally executes for your API, but you can define as many additional scenarios in the configuration as needed.

In the scan configuration, scenarios are defined per operation, each identified by the operationId (if available) or by the path and method (verb) of the "parent" operation. Because the latter can lead to very long and complicated names, we recommend defining an operationId for each API operation in your API definition.
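For example, a minimal sketch of an OpenAPI path item with an operationId that scenarios can then be keyed on might look like this (the operation name getUser is illustrative):

"paths": {
    "/users/{id}": {
        "get": {
            "operationId": "getUser",
            ...
        }
    }
}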

Because Conformance Scan always needs to know what the expected result looks like, you must specify at least one happy path scenario ("key": "happy.path") for each operation to provide a baseline for successful API responses, typically HTTP 2XX. Defining multiple happy path scenarios lets you test all your defined response codes for success (for example, HTTP 200, 202, and 206), not just the first one that your API responds with. In addition, you can also specify one or more unhappy path scenarios ("key": "unhappy.path") to provide a baseline for API responses for errors, typically HTTP 4XX. As with happy path scenarios, defining multiple unhappy path scenarios lets you ensure that all your combinations (for example, HTTP 400, 404, and 418) are tested.

The screenshot shows an example of basic scenarios for testing both the happy path and an unhappy path for an API operation.
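For illustration, the scenario structure for an operation might be sketched as follows, with the request definitions omitted:

"scenarios": [
    {
        "key": "happy.path",
        "requests": [ ... ]
    },
    {
        "key": "unhappy.path",
        "requests": [ ... ]
    }
]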

What exactly happens in each scenario is defined as an array of one or more API requests that Conformance Scan executes. As with reusable components in your OpenAPI definitions, it is often most efficient to define requests in one place and reference them in scenarios when needed, but you can also define requests directly in scenarios.

The property fuzzing indicates which scenarios and requests are tested for conformance to the API contract during the scan: if it is set to true, Conformance Scan runs conformance tests on the scenario or request; if it is set to false, the scenario or request is ignored during conformance testing. At least one of the happy path scenarios must have fuzzing set to true, otherwise the operation is not scanned. As usual, scenarios and requests marked for conformance tests must produce an expected result to establish the baseline, otherwise they cannot be reliably scanned.
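For example, a happy path scenario that references a shared request and marks it for conformance testing might be sketched as follows (the reference path and the operation name getUser are illustrative):

{
    "key": "happy.path",
    "fuzzing": true,
    "requests": [
        {
            "$ref": "#/operations/getUser/request",
            "fuzzing": true
        }
    ]
}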

Defining order in scenarios

If you need to specify a particular order for the requests, for example, to make sure that specific resources are in place for a test, you can use the before and after objects. Both are arrays of requests that the scan executes either before (like prerequisites) or after (like cleanup) running the tests on the scenario. As with requests, if you need the same prerequisites for multiple scenarios, you can configure the required before and after flows only once and reference them in your scenarios when needed.
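As an illustration, a scenario with its own setup and cleanup might be sketched as follows (the request names createTestResource and deleteTestResource are hypothetical):

{
    "key": "happy.path",
    "before": [
        { "$ref": "#/requests/createTestResource" }
    ],
    "requests": [ ... ],
    "after": [
        { "$ref": "#/requests/deleteTestResource" }
    ]
}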

Where you reference the before and after objects in your scan configuration affects when they are run:

  • Referenced in happy path or unhappy path scenarios: The before and after requests are executed once for each test that Conformance Scan runs, both when establishing the baseline and when testing the conformance. Both blocks are always executed: for example, if you have defined a test that attempts to create a new resource through an injection and that injection works (HTTP 200 OK), the clean-up requests that return the data to a clean state are also executed.
  • Referenced in the root of your scan configuration: The before (preprocessing) and after (postprocessing) requests are executed once before Conformance Scan starts testing your API and after it finishes testing it. If the before flow fails for some reason, the scan stops before running any tests.

The screenshot shows an example of a playbook with both global preprocessing and postprocessing as well as scenario-specific before and after blocks.

For example, the Petstore API would require an instance of a pet store for most tests, so it might make sense to define a before flow in the root of the scan configuration that creates that required resource. However, as the data would not be returned to a clean state until after the API is scanned, you might need to take this into account for some tests, especially if you wanted to test the behavior when the required resource does not exist. In this case, you could either define additional before and after flows or even a separate scan configuration for testing these scenarios.
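Sketched in the root of the scan configuration, such a global flow might look like this (the request names are hypothetical and the rest of the configuration is omitted):

{
    "before": [
        { "$ref": "#/requests/createPetstoreInstance" }
    ],
    "after": [
        { "$ref": "#/requests/deletePetstoreInstance" }
    ],
    ...
}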

Scan variables and environments

This applies to Scan v2 only.

Conformance Scan uses variables ({{variable_name}}) to pass information across different blocks of the scan configuration, for example, to set up a credential or to fill in parameters or request body properties. This can happen from one stage of a scenario to another, between different scenarios, or even between different deployment environments.

An example screenshot showing variables in a scan configuration

Scan variables can be divided into two groups, depending on where they are defined:

  • Scenario-level variables: These variables are created in and specific to a particular scenario, meaning that they are used exclusively within the scope of that scenario.
  • Global variables: Variables defined in a global before block in the scan configuration have a broader scope. They can be accessed from and used in the requests and tests of any scenario in the scan configuration because Conformance Scan executes the before block before any test, making the variables it defines available globally throughout the testing process.

The values for the variables can be populated in two ways:

  • Defined in environment definitions in the scan configuration: This lets the scan configuration take different values depending on, for example, the environment where the API is deployed. For example, you might have an API that is live in both your development and testing environments, where the required host or authentication details are different. In this case, you could simply edit the scan configuration to add another set of environment variables that are used when running the scan in the other environment. This way, you can reuse the same configuration without having to recreate the parts that stay the same. A scan configuration always includes definitions for at least one environment, but you can expand the configuration as needed.

    An example screenshot of the environment configuration for Pixi API

  • Defined as part of response processing: Because the scan tests the full API implementation, chains of requests and their dependencies become crucial. Testing a particular operation might require prerequisite calls so that all necessary resources exist in the first place, and the API responses to these calls provide the details, such as UUIDs, of these resources. To make sure that the subsequent requests in the chain use the correct resources, you can set, for example, the returned UUID as the value of a particular variable and use that variable in other calls (see the sketch after this list).

    An example screenshot of setting a scan variable from a value in the API response.
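As a rough sketch of the second approach, a response block might capture an ID from the response body into a variable. The property layout here follows the pattern of generated Scan v2 configurations but is an assumption and may differ in your version; petId is an illustrative variable name:

"responses": {
    "200": {
        "expectations": {
            "httpStatus": 200
        },
        "variableAssignments": {
            "petId": {
                "from": "response",
                "in": "body",
                "contentType": "json",
                "path": {
                    "type": "jsonPointer",
                    "value": "/id"
                }
            }
        }
    }
}

Subsequent requests in the chain could then reference {{petId}} wherever the created resource's ID is needed.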

Dynamic variables when there is no schema

Previously, Conformance Scan used the dynamic variable $random to generate the values it uses for testing. However, $random always requires a schema to provide the details of the kind of value it should create, and if a schema is not available, Conformance Scan fails to generate a value it can use in tests.

In Scan v2, there are now more dynamic scan variables you can leverage to generate random values:

  • From schema: The variable $randomFromSchema corresponds to the previous $random and generates values based on the schemas defined in your OpenAPI definition. Like $random, it requires that a schema has been defined for the input that the API operation to be tested needs. The schema can provide examples of values, in which case the provided examples are used as before. Because Conformance Scan must be able to find the schema used as the basis for value creation, $randomFromSchema can only be used in requests that are tied to an API operation through operationId, so that Conformance Scan can link the request with the corresponding operation in the OpenAPI definition.
  • From defined type: If there is no schema that could be used as the basis for the dynamic variable, you can indicate what kind of value the scan should generate directly in your scan configuration by using a type-specific dynamic variable. In this case, Conformance Scan generates a suitable value for the dynamic variable based on the definition of the corresponding type in the JSON Schema specification. The generated values follow their predefined default properties in all cases and therefore should only be used when there is no schema available. Otherwise, Conformance Scan would ignore the constraints defined in the schema in favor of the dynamic variable's value, and the sent request would at no point conform to the API contract. The following dynamic variables are available:
    • {{$randomString}}: Generates a random alphanumeric ([a-zA-Z0-9]) string of 20 characters. Example: OacDH8y56XSzi12sQ9qB
    • {{$randomuint}}: Generates a random uint32 integer. Example: 138462387
    • {{$timestamp}}: Generates the current time in UNIX format. Example: 1695900968
    • {{$timestamp3339}}: Generates the current date and time as defined in RFC 3339. Example: 2023-09-28T11:40:55+00:00
    • {{$uuid}}: Generates a unique UUID. Example: 3e4666bf-d5e5-4aa7-b8ce-cefe41c7568a

For example, in your scan configuration you might define a stand-alone request to send a name:

"requestBody": {
     "mode": "json",
    "json": {
         "name": "OIXS zisQ9qB\f"
    }
}

If you hard-code the values in the scan configuration, the same values are used in all requests, which could cause problems when the same value cannot be used twice. Instead, because there is no API operation to provide a schema for what the value should look like, you could use $randomString as a scan variable instead of hard-coding the value, so that a different value is created each time the request is run:

"requestBody": {
     "mode": "json",
    "json": {
         "name": "{{$randomString}}"
    }
}

However, better results are often achieved if there is an API operation that could be leveraged to use $randomFromSchema:

"request": {
    "operationId": "editUserInfo",
    ...
    "requestBody": {
        "mode": "json",
        "json": {
            "pass": "{{$randomFromSchema}}",
            "user": "{{$randomFromSchema}}"
        }
    }
}

In this case, Conformance Scan would check the schema definition from the API operation editUserInfo and use the properties from that schema when populating the values.