API Firewall deployment architecture

API Firewall instances are best deployed as close to your APIs as possible. For example, API Firewall should be part of an API bundle that a microservices team delivers, so that the team is self-sufficient and delivers a fully secured API out of the box. If used with an API gateway that exposes the APIs, API Firewall is typically deployed in front of the API gateway, similarly to a web application firewall (WAF).

The API Firewall runtime is optimized so that you can deploy and run it on any container platform or orchestrator, such as Docker, Kubernetes, or Amazon ECS. Minimal latency and footprint mean that you can deploy API Firewall in front of hundreds of API endpoints with very little impact. API Firewall protects traffic in both the North-South and East-West directions.

Sidecar mode

In sidecar mode, each API Firewall instance is co-located in the same Kubernetes pod as the API it protects. This way, when an individual microservice exposes your APIs, they are protected for both North-South and East-West traffic (the latter initiated by another microservice). Because the firewall instance is co-located with the API and can communicate only with that API, the hostname is always localhost.

The listener interface of API Firewall exposes the endpoint through which all API traffic must pass. API Firewall filters the traffic and only allows through transactions that conform to the contract set out in the API definition. The backend resources and databases that the API points to can be safely deployed behind dedicated services in their own separate pods that only API Firewall can call.
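As an illustrative sketch only, a sidecar deployment might look like the pod manifest below. The image names, the API container, and the port numbers are assumptions for the example, not 42Crunch defaults:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-api
spec:
  containers:
    # The protected API listens only inside the pod and is never
    # exposed directly through a Kubernetes service.
    - name: my-api
      image: example/my-api:1.0          # hypothetical API image
      ports:
        - containerPort: 8080
    # The API Firewall sidecar is the only container exposed to
    # callers; it forwards conforming traffic to the API over
    # localhost inside the pod.
    - name: api-firewall
      image: example/api-firewall:1.0    # hypothetical firewall image
      ports:
        - containerPort: 443             # listener interface
```

A Kubernetes service would then target port 443 of the pod, so that all traffic, whether North-South or East-West, must pass the firewall before reaching the API on localhost:8080.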

API Firewall logs firewall operations and API transactions, and relays the information back to 42Crunch Platform. System logs on access and errors from the API Firewall instance itself are located under /opt/guardian/logs in the file system of the API Firewall container. For more details on logs, see API monitoring.

The following graphic illustrates the components in sidecar mode.

Deployment configuration

You can use variables to configure the deployment properties of API Firewall, such as the API-specific connection details, the protection configuration to apply to the protected API, and the TLS settings for secure connections.

Where the variables are configured and how they are passed to API Firewall depends on the environment of the API Firewall deployments, for example:

  • A ConfigMap object in a Kubernetes deployment
  • A values.yaml configuration file for a Helm chart
  • A task.json configuration file in an Amazon ECS on AWS Fargate deployment, or in the AWS CloudFormation template
  • A Docker env file when building the final API Firewall image tailored to the protected API

Regardless of your target environment, the variable names are the same. For the list of all available variables and what they do, see API Firewall variables.
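For example, in a Kubernetes deployment the variables could be supplied through a ConfigMap such as the sketch below. LISTEN_SSL_CERT and LISTEN_SSL_KEY are variables named in this guide; the remaining names and values are illustrative placeholders, so check API Firewall variables for the authoritative list:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: firewall-props
data:
  # TLS material that API Firewall expects under /opt/guardian/conf/ssl
  LISTEN_SSL_CERT: fullchain-cert.pem
  LISTEN_SSL_KEY: private-key.pem
  # Illustrative connection settings; see "API Firewall variables"
  # for the real names and defaults.
  LISTEN_PORT: "443"
  TARGET_URL: "http://localhost:8080"
```

The deployment then references the ConfigMap (for example with envFrom) so the variables reach the API Firewall container as environment variables.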

For more details on how to configure and deploy API Firewall, see Protect APIs.

Protection tokens

Protection tokens tie protection configurations you create for your APIs to the running API Firewall instances.

Protection tokens are passed to API Firewall in the protection token variable. When an API Firewall instance starts, it connects to 42Crunch Platform and fetches the protection configuration matching the protection token specified for it. This ensures that the API Firewall instance runs the correct configuration for your API.

When API Firewall starts, it establishes a two-way HTTP/2 gRPC connection to 42Crunch Platform at the address protection.<your hostname> on port 8001. Make sure that your network configuration (such as your network firewall) allows these connections. API Firewall uses this connection to verify the protection token and to download the protection configuration you have created. During runtime, API Firewall uses the same connection to send logs and monitoring information to the platform.

If you are an enterprise customer and do not access 42Crunch Platform at https://platform.42crunch.com, use the hostname from your own platform URL.

The protection token must not be hard-coded in any deployment scripts. Instead, store it as a secret and retrieve the value at deployment time.
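In Kubernetes, that could mean creating a Secret out of band and referencing it from a pod spec fragment roughly like the one below. The secret name, key, and image are illustrative assumptions; only the pattern of pulling the token from a Secret at deployment time is the point:

```yaml
# Created out of band, for example:
#   kubectl create secret generic protection-token \
#     --from-literal=PROTECTION_TOKEN=<value copied from the platform>
containers:
  - name: api-firewall
    image: example/api-firewall:1.0   # hypothetical image
    env:
      - name: PROTECTION_TOKEN        # protection token variable
        valueFrom:
          secretKeyRef:
            name: protection-token
            key: PROTECTION_TOKEN
```

This keeps the token value out of deployment scripts and version control; only a reference to the Secret is checked in.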

Always store all your tokens securely, for example in Kubernetes Secrets, and treat them as you would other sensitive information, such as your passwords.

For security reasons, you cannot view the values of your existing tokens after you have created them. However, you can easily create a new one to view and copy the value.

TLS configuration

By default, API Firewall works in TLS mode and the listener interface of the firewall only accepts secure connections. The TLS profile (Mozilla Modern) is preconfigured and embedded in the generated firewall image.

This means that for the default configuration you must provide a TLS secret containing the certificates and the corresponding keys for TLS configuration. The files to be used are specified in the API-specific variables LISTEN_SSL_CERT and LISTEN_SSL_KEY. Both the certificate file and the private key file must be in PEM format, and the certificate file must contain the full chain of certificates, sorted from leaf to root.

API Firewall also supports PKCS #11. To configure it, use a PKCS #11 URI instead of a file name in the LISTEN_SSL_CERT and LISTEN_SSL_KEY variables. For more details on the PKCS #11 URI scheme, see RFC 7512.
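For illustration, an RFC 7512 URI identifies a certificate or key on a token by attributes rather than by a file path. The values might look like the following, where the token and object labels are placeholders for your own HSM setup:

```
LISTEN_SSL_CERT=pkcs11:token=my-hsm;object=api-cert;type=cert
LISTEN_SSL_KEY=pkcs11:token=my-hsm;object=api-key;type=private
```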

The following graphic shows which variables are needed for the TLS configuration, and what parts of the deployment each setting controls:

The TLS configuration files (including the private key) must be in the file system of the API Firewall container. API Firewall expects to find the TLS configuration files under /opt/guardian/conf/ssl, so the files must be passed to the file system of the API Firewall container:

  • Add the files directly to the API Firewall Docker image build for your API.
  • Mount the files from Kubernetes Secrets.
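The second option could be sketched like this in a Kubernetes pod spec fragment; the secret name and image are assumptions, while the mount path is the one API Firewall expects:

```yaml
# Secret created for example with:
#   kubectl create secret tls firewall-tls \
#     --cert=fullchain-cert.pem --key=private-key.pem
containers:
  - name: api-firewall
    image: example/api-firewall:1.0       # hypothetical image
    volumeMounts:
      - name: tls-files
        mountPath: /opt/guardian/conf/ssl # where API Firewall looks
        readOnly: true
volumes:
  - name: tls-files
    secret:
      secretName: firewall-tls
```

Mounting from a Secret keeps the private key out of the container image, so the same firewall image can be reused across environments with different certificates.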

If you do not already have a certificate/key pair to use for the TLS configuration, you can create one using, for example:

  • OpenSSL
  • The cert-manager in Kubernetes-based environments
  • IDCA, a 42Crunch utility that generates certificates signed by a self-signed CA
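For example, with OpenSSL you could generate a key and a self-signed certificate for testing as below. The file names and subject are placeholders, and a self-signed certificate is suitable only for development, not production:

```shell
# Generate a 2048-bit RSA key and a self-signed certificate valid
# for 365 days, without a passphrase (-nodes), both in PEM format.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout private-key.pem -out fullchain-cert.pem \
  -days 365 -subj "/CN=api.example.com"
```

The resulting private-key.pem and fullchain-cert.pem can then be referenced in the LISTEN_SSL_KEY and LISTEN_SSL_CERT variables.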

For more details on how to get TLS configuration to the API Firewall instance, see Deploy API Firewall for your own APIs.

If you do not need HTTPS connections, you can switch API Firewall to accept HTTP connections. In this case, you do not need to configure the certificates or keys for the TLS configuration, because the listener interface ignores the HTTPS setup. For more details, see Switch API Firewall to use HTTP connections.

Multi-environment deployment

Protection tokens enable you to secure your API in multiple environments simultaneously. For example, you could deploy the same API protected with an API Firewall instance using Kubernetes in Microsoft Azure and Amazon Web Services (AWS), but create and assign separate protection tokens for each deployment.

This way, you can manage the different cloud deployments of a single API independently of one another, such as bringing one deployment down for updates while the other deployments continue to serve your API consumers.

You can create and revoke protection tokens on the Protection tab of the API as needed. You can also use a protection token in more than one deployment and manage those deployments together.

For more details, see Create or revoke protection tokens manually.