API Firewall

API Firewall is an API-native application firewall that acts as the security gateway to your enterprise architecture, and is the single point of entry and exit for all API calls. It is the executor of API Protection: API Protection generates a protection configuration based on the OpenAPI (formerly Swagger) definition of your API, and API Firewall runs this configuration when it is deployed.

Both OpenAPI Specification v2 and v3 are supported.

Each API Firewall instance is based on the same base image, but tailored for the API in question based on the protection configuration, and it provides a virtual host for that particular API. API Firewall enforces OAuth and OpenID Connect configuration and runtime best practices, and filters out unwanted requests, such as bot requests or attacks, only letting through the requests that should be allowed based on the OpenAPI definition of the API it protects. API Firewall also blocks any responses from the API that have not been declared or that do not match the API definition.

For more details on the validation that API Firewall performs, see How API Firewall validates API traffic. To learn how you can add security information for API Firewall directly on your API definitions, see Protections and strategies.

Deployment architecture

API Firewall instances are best deployed as close to your APIs as possible. For example, API Firewall should be part of an API bundle that a microservices team delivers, so that the team is self-sufficient and delivers a fully secured API out of the box. If used with an API gateway that exposes the APIs, API Firewall is typically deployed in front of the API gateway, similarly to a web application firewall (WAF).

Sidecar mode

In sidecar mode, each API Firewall instance is co-located in the same Kubernetes pod with the API it protects. This way, the APIs that an individual microservice exposes are protected for both North-South and East-West traffic (traffic initiated by another microservice). Because the firewall instance is co-located with the API and can communicate only with that API, the hostname is always localhost.
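A sidecar deployment can be sketched as follows. The pod, image names, and ports below are illustrative assumptions, not values mandated by the product:

```yaml
# Hypothetical sidecar pod: the firewall container receives all traffic
# and forwards only conforming requests to the API container on localhost.
apiVersion: v1
kind: Pod
metadata:
  name: my-api                                      # example pod name (assumption)
spec:
  containers:
    - name: api-firewall
      image: my-registry/api-firewall-my-api:latest # tailored firewall image (assumption)
      ports:
        - containerPort: 443                        # listener interface; the only port the pod exposes
    - name: api
      image: my-registry/my-api:latest              # the protected API (assumption)
      # No ports exposed outside the pod: the API is reachable
      # only by the firewall, over localhost.
```

In practice you would deploy this through a Deployment and a Service that targets only the firewall's listener port, so the API container itself is never directly reachable.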

The listener interface of API Firewall exposes the endpoint through which all API traffic must pass. API Firewall filters the traffic and only allows through transactions that conform to the contract set out in the API definition. The backend resources and databases that the API points to can be safely deployed behind dedicated services in their own separate pods that only API Firewall can call.

API Firewall logs the firewall operation and API transactions, and relays the information back to 42Crunch Platform. Access and error logs from the API Firewall instance itself (in standard Apache format) are located under /opt/guardian/logs in the file system of the API Firewall container. For more details on logs, see API monitoring.

The following graphic illustrates the components in sidecar mode.

Deployment configuration

You can use variables to configure the deployment properties of API Firewall, such as the API-specific connection details, the protection configuration to run for the protected API, and the TLS settings for secure connections.

Where the variables are configured and how they are passed to API Firewall depends on the environment of the API Firewall deployments, for example:

  • A configmap object in a Kubernetes deployment
  • A values.yaml configuration file for a Helm chart
  • A task.json configuration file in an Amazon ECS on AWS Fargate deployment
  • A Docker env file when building the final API Firewall image tailored to the protected API

Regardless of your target environment, the variable names are the same. For the list of all available variables and what they do, see API Firewall variables.
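For example, in a Kubernetes deployment the variables could be carried in a ConfigMap like the sketch below. LISTEN_SSL_CERT and LISTEN_SSL_KEY are variables documented later in this article; the other entries are illustrative placeholders, so check API Firewall variables for the actual names:

```yaml
# Hypothetical ConfigMap carrying API Firewall deployment variables.
apiVersion: v1
kind: ConfigMap
metadata:
  name: firewall-props
data:
  LISTEN_SSL_CERT: fullchain-cert.pem   # documented TLS variable
  LISTEN_SSL_KEY: private-key.pem       # documented TLS variable
  TARGET_URL: "http://localhost:8090"   # placeholder: where the protected API listens
```

The firewall container would then load these values with envFrom or individual env entries in its container spec.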

For more details on how to configure and deploy API Firewall, see Protect APIs.

Protection tokens

Protection tokens tie protection configurations you create for your APIs to the running API Firewall instances.

Protection tokens are passed to API Firewall in the protection token variable. When an API Firewall instance starts, it connects to 42Crunch Platform and fetches the protection configuration matching the protection token specified for it. This ensures that the API Firewall instance runs the correct configuration for your API.

Note When API Firewall starts, it establishes a two-way, HTTP/2 gRPC connection to 42Crunch Platform at the address protection.42crunch.com on port 8001. Make sure that your network configuration (such as your network firewall) allows these connections. API Firewall uses this connection to verify the protection token and to download the protection configuration you have created. During runtime, API Firewall uses the connection to send logs and monitoring information to the platform.

The protection token must not be hard-coded in any deployment scripts. Instead, store it as a secret and retrieve the value at deployment time.
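In Kubernetes, for example, you might store the token in a Secret and inject it as an environment variable at deployment time. The secret name, key, and environment variable name below are assumptions:

```yaml
# Hypothetical Secret holding the protection token (never hard-code it).
apiVersion: v1
kind: Secret
metadata:
  name: firewall-protection-token
type: Opaque
stringData:
  token: "<paste-protection-token-here>"
---
# In the firewall container spec, reference the secret instead of a literal:
# env:
#   - name: PROTECTION_TOKEN            # variable name is an assumption
#     valueFrom:
#       secretKeyRef:
#         name: firewall-protection-token
#         key: token
```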

Security risk Always store all your tokens securely, for example, in Kubernetes Secrets. Treat tokens as you would other sensitive information, such as your passwords.

For security reasons, you cannot view the values of your existing tokens after you have created them. However, you can easily create a new one to view and copy the value.

TLS configuration

By default, API Firewall works in TLS mode and the listener interface of the firewall only accepts secure connections. The TLS profile (Mozilla Modern) is preconfigured and embedded in the generated firewall image.

This means that for the default configuration you must provide a TLS secret containing the certificates and the corresponding keys. The files to be used are specified in the API-specific variables LISTEN_SSL_CERT and LISTEN_SSL_KEY. Both the certificate file and the private key file must be in PEM format, and the certificate file must contain the full chain of certificates, sorted from leaf to root.

Tip API Firewall also supports PKCS #11. To configure this, use a PKCS #11 URI instead of the file names in the LISTEN_SSL_CERT and LISTEN_SSL_KEY variables. For more details on the PKCS #11 URI scheme, see RFC 7512.

The following graphic shows which variables are needed for the TLS configuration and what parts of the deployment each setting controls.

The TLS configuration files (including the private key) must be present in the file system of the API Firewall container. API Firewall expects to find them under /opt/guardian/conf/ssl, so you must get the files there in one of the following ways:

  • Add the files directly to the API Firewall Docker image build for your API.
  • Mount the files from Kubernetes secrets.
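The secret-mount option can be sketched as follows; the image and secret names are assumptions:

```yaml
# Hypothetical container spec fragment mounting TLS files from a secret
# into the directory where API Firewall expects them.
spec:
  containers:
    - name: api-firewall
      image: my-registry/api-firewall:latest   # assumption
      volumeMounts:
        - name: tls-files
          mountPath: /opt/guardian/conf/ssl
          readOnly: true
  volumes:
    - name: tls-files
      secret:
        secretName: api-firewall-tls           # secret created from the PEM cert and key
```

With this mount in place, LISTEN_SSL_CERT and LISTEN_SSL_KEY only need to name the files inside the secret.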

If you do not already have a certificate/key pair to use for the TLS configuration, you can create one using, for example:

  • OpenSSL
  • The cert-manager in Kubernetes-based environments
  • IDCA, a 42Crunch utility that generates certificates signed by a self-signed CA.
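As one option, in a Kubernetes environment with cert-manager installed, a certificate/key pair could be requested declaratively. The issuer and DNS names below are assumptions for illustration:

```yaml
# Hypothetical cert-manager Certificate producing a TLS secret that can
# then be mounted into the API Firewall container.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: api-firewall-cert
spec:
  secretName: api-firewall-tls   # resulting Kubernetes TLS secret
  dnsNames:
    - api.example.com            # assumption: your API's hostname
  issuerRef:
    name: my-issuer              # assumption: an existing issuer in your cluster
    kind: ClusterIssuer
```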

For more details on how to get TLS configuration to the API Firewall instance, see Deploy API Firewall for your own APIs.

Tip If you do not need HTTPS connections, you can switch API Firewall to accept HTTP connections. In this case, you do not need to configure the certificates or keys for the TLS configuration, because the listener interface ignores the HTTPS setup. For more details, see Switch API Firewall to use HTTP connections.

Multi-environment deployment

Protection tokens enable you to secure your API in multiple environments simultaneously. For example, you could deploy the same API protected with an API Firewall instance using Kubernetes in Microsoft Azure and Amazon Web Services (AWS), but create and assign separate protection tokens for each deployment.

This way, you can manage the different cloud deployments for a single API independently of one another, such as bringing one deployment down for updates while the other deployments continue to serve your API consumers.

You can create and revoke protection tokens on the Protection tab of the API as needed. You can also use a protection token in more than one deployment and manage those deployments together.

For more details, see Create or revoke protection tokens manually.

API Firewall health check

API Firewall instances provide a dedicated endpoint /hc for health check calls that, for example, load balancers can use to check that the instance is up and running.

The health check uses port 8880, so the API Firewall container (or AWS task) must expose that port. However, your load balancer must not expose the port externally: port 8880 is reserved for the health check, and API consumers must not be able to call it.

HTTPS is not supported for the health check calls. API Firewall always responds with HTTP 200 OK to indicate all is well. Health check calls do not generate any logs.
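In Kubernetes, for example, the /hc endpoint can back a liveness probe on the firewall container; the probe timings below are assumptions:

```yaml
# Hypothetical liveness probe using the plain-HTTP health check endpoint.
livenessProbe:
  httpGet:
    path: /hc
    port: 8880
    scheme: HTTP        # HTTPS is not supported for health check calls
  initialDelaySeconds: 10
  periodSeconds: 30
```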

For more details on how to set up the health check for different systems, see Configure health check for API Firewall.

Non-blocking mode

By default, active API Firewall instances protecting an API block any transactions that are in violation of the API contract. However, if needed, you can switch API Protection to non-blocking mode.

Security risk Non-blocking mode leaves your API unprotected! API Firewall instances execute security policies normally but do not block any transactions; they only report what would have been blocked. Proceed with caution, and only use non-blocking mode when absolutely necessary.

Non-blocking mode can help you, for example, to:

  • Discover potential issues when introducing API Firewall into the line of API traffic: are existing users blocked, and if so, why
  • Detect any false positives
  • Troubleshoot detected problems

For more details, see Switch API Protection to non-blocking mode.