Agents

open-appsec can be deployed on Kubernetes integrated with NGINX Ingress Controller or Kong Gateway. It can also be added to NGINX or Kong Gateway on Linux or Docker platforms. All deployment vehicles share the same basic agent technology. In this section we explain how agents work and how the deployment vehicles differ.

Agents

Agents are small software components that can be easily deployed on top of an existing web server, reverse proxy, Kubernetes Ingress or API Gateway, without changing existing architecture and while ensuring minimal latency and maximum control.

As security processing is done locally, sensitive data does not leave the protected environment and there is no need to share certificates and private keys with third parties. Moreover, there is no dependency on third-party uptime for processing traffic.

Agents can be managed by a central service called the Fog. The Fog is a SaaS component that provides registration, policy updates, configuration updates, software updates, logging, and synchronization of learning data. Check Point operates highly available and scalable Fogs in several regions around the world.

Agents get all updates automatically and there is no need to upgrade them manually. It is possible to control the upgrade schedule.

Agents are designed to act stand-alone and will operate without disruption to traffic and security enforcement even when the Fog is unreachable. You can also run as many agents as needed to support your load, with no license constraints.

When the Fog is unreachable, some central administrative functions are not available: software and policy updates, IPS updates, logging to the cloud, and synchronization of learning data between agents. Logs are kept locally in a configurable, cyclic buffer and relayed when communication resumes. It is also possible to configure logging to a local syslog server.
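
The local buffering can be pictured with a minimal sketch, shown below for illustration only; the actual buffer size, record format, and relay mechanism in open-appsec differ and are configurable:

```python
from collections import deque

class CyclicLogBuffer:
    """Keeps the newest N log records locally while the Fog is unreachable."""

    def __init__(self, max_records=1000):
        # A bounded deque drops the oldest record once it is full,
        # which is the essence of a cyclic (ring) buffer.
        self.records = deque(maxlen=max_records)

    def add(self, record):
        self.records.append(record)

    def relay(self, send_to_fog):
        """Flush buffered records once connectivity to the Fog resumes."""
        while self.records:
            record = self.records.popleft()
            try:
                send_to_fog(record)              # hypothetical sender callback
            except ConnectionError:
                # The Fog became unreachable again: keep the record, stop flushing.
                self.records.appendleft(record)
                break
```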

Agent Main Components

The agent's main components are detailed in the following diagram and explained below:

Attachment

The Attachment connects the processes that provide HTTP data with the open-appsec security logic.

The most common Attachment is for NGINX (or OpenResty, which is based on NGINX and is also used by Kong Gateway). It is a small dynamically loadable module that runs in the process space of NGINX acting as a web server, reverse proxy, Kubernetes Ingress, or API gateway. The Attachment receives HTTP data (URL, headers, body, response) from the hosting process and delivers it to the HTTP Transaction handler. The Attachment does not keep any state and has no security logic.

To deal with potential issues where the HTTP Transaction handler is not responding, the Attachment implements a retry mechanism and a configurable fail-open/fail-close mechanism.

It is also possible to instruct the Attachment to ignore specific IP addresses or ranges, which allows for a controlled, gradual deployment. See more details below.
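
Taken together, the bypass, retry, and fail-open/fail-close behavior can be summarized in a rough sketch. The real Attachment is a native NGINX/Kong module and its configuration differs; the names and settings below are assumptions for illustration:

```python
import ipaddress

# Hypothetical settings; the real Attachment exposes equivalent options
# through its own configuration, not through these names.
FAIL_OPEN = True                                   # allow traffic if inspection is unavailable
MAX_RETRIES = 3
BYPASS_RANGES = [ipaddress.ip_network("192.0.2.0/24")]

def handle_transaction(client_ip, http_data, send_to_handler):
    """Return 'allow' or 'block' for a single HTTP transaction."""
    # Bypassed clients are never sent for inspection (gradual deployment).
    ip = ipaddress.ip_address(client_ip)
    if any(ip in net for net in BYPASS_RANGES):
        return "allow"

    # Retry delivery to the HTTP Transaction handler a bounded number of times.
    for _ in range(MAX_RETRIES):
        try:
            return send_to_handler(http_data)      # verdict from the security logic
        except TimeoutError:
            continue

    # Handler unresponsive: apply the configured fail-open/fail-close policy.
    return "allow" if FAIL_OPEN else "block"
```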

HTTP Transaction handler nano-service

A process (or multiple instances, depending on load) that receives data for processing from the Attachment, executes the open-appsec security logic, returns a verdict, and issues relevant logs.

Orchestrator

A process in charge of agent registration, obtaining policy and software updates, and other administrative operations.

Watchdog

A process in charge of making sure that all components are up and running.
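
Its job can be summarized as a small supervision loop; the sketch below conveys the idea with hypothetical component names and a placeholder restart callback, and is not the actual open-appsec implementation:

```python
import subprocess
import time

# Hypothetical component names, listed only to illustrate the supervision loop.
COMPONENTS = ["orchestrator", "http-transaction-handler"]

def is_running(name):
    # pgrep exits with a non-zero status when no matching process is found.
    return subprocess.run(["pgrep", "-f", name], capture_output=True).returncode == 0

def watchdog_loop(restart):
    """Periodically restart any component that is not running."""
    while True:
        for name in COMPONENTS:
            if not is_running(name):
                restart(name)          # hypothetical restart callback
        time.sleep(5)
```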

Deployment vehicles

open-appsec provides multiple deployment vehicles, all of which include the same agent technology:

Currently, open-appsec supports NGINX and Kong Gateway on Kubernetes, Linux, and Docker platforms. Additional integrations will follow.

Kubernetes Ingress Controller

  • Helm chart for NGINX Ingress Controller (enhanced with open-appsec)

  • Kubernetes Ingress Controller pod (based on the Ingress-NGINX Controller)

  • open-appsec Agent (as sidecar container in the Ingress Controller pod)

Kubernetes Kong Gateway

  • Helm chart for Kong (enhanced with open-appsec)

  • Kong pod (containing Kong Gateway container and optionally Kong Controller container)

  • open-appsec Agent (as sidecar container in the Kong pod)

Container setup (e.g. on Docker)

Includes two containers that communicate with each other:

  • NGINX or Kong Gateway (includes the open-appsec Attachment)

  • open-appsec Agent

Linux (NGINX or Kong Gateway)

An agent installation script for environments that are already running NGINX or Kong on Linux, which installs:

  • open-appsec attachment for NGINX or Kong

  • open-appsec Agent

We encourage, and provide assistance to, anyone who wishes to develop their own Attachments and deployment vehicles.

Secure Communication

Agents/Gateways communicate with the Fog over an encrypted and authenticated secure channel.

  • The Agent/Gateway uses encrypted communication over HTTPS (TLS, port 443).

  • One-time agent registration is done using a 256-bit key.

  • The Agent/Gateway receives a unique agent key from the Fog that is used for identification.

  • Authentication is based on OAuth 2.0 (RFC 6749).

  • The agent periodically asks for an updated JSON Web Token (JWT).
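
The registration and token-refresh flow can be illustrated with a short sketch. The endpoint paths, field names, and credential structure below are assumptions for illustration and do not describe the actual Fog API:

```python
import time
import requests

FOG_URL = "https://inext-agents.cloud.ngen.checkpoint.com"

def register_agent(registration_token):
    """One-time registration using the profile's token.
    The path and payload below are illustrative, not the actual Fog API."""
    resp = requests.post(f"{FOG_URL}/agents/register",
                         json={"registrationToken": registration_token})
    resp.raise_for_status()
    return resp.json()                 # assumed to contain per-agent credentials

def refresh_jwt(credentials):
    """Obtain a fresh JWT using an OAuth 2.0 client-credentials style exchange."""
    resp = requests.post(f"{FOG_URL}/oauth/token",
                         data={"grant_type": "client_credentials"},
                         auth=(credentials["clientId"], credentials["clientSecret"]))
    resp.raise_for_status()
    body = resp.json()
    return body["access_token"], body.get("expires_in", 3600)

def token_loop(credentials):
    while True:
        jwt, expires_in = refresh_jwt(credentials)
        # Use the JWT on all Fog API calls until shortly before it expires.
        time.sleep(expires_in * 0.8)
```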

The Check Point-operated public Fog uses the following URLs; make sure outbound communication from your agent to them is allowed:

  • https://inext-agents.cloud.ngen.checkpoint.com

  • https://downloads.openappsec.io

Profiles

Agents are associated with a Profile that simplifies management and allows applying the same settings to multiple agents. When you create the first Web Application or Web API asset using the Wizard, a Profile is automatically created. You can later re-use this profile or create a new one.

Profiles determine the following shared settings:

  • Type of deployment:

    • Kubernetes (available subtypes: NGINX Ingress Controller or Kong Gateway)

    • Linux (available subtypes: NGINX or Kong Gateway)

    • Docker (available subtypes: NGINX or Kong Gateway)

  • Registration Token for new Agents

  • Agent upgrade mode: Automatic, Scheduled, or Manual

  • SSL certificate and private key storage mode: on the Gateway or in public cloud secure storage

  • Advanced settings, such as the maximum number of agents that can connect to a profile
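
Conceptually, a profile groups these settings into a single record shared by all of its agents. The sketch below is a hypothetical data structure for illustration; the field names and defaults are not part of open-appsec's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class UpgradeMode(Enum):
    AUTOMATIC = "automatic"
    SCHEDULED = "scheduled"
    MANUAL = "manual"

@dataclass
class Profile:
    """Illustrative grouping of the settings shared by all agents on a profile."""
    deployment_type: str                      # "kubernetes", "linux" or "docker"
    subtype: str                              # e.g. "nginx-ingress-controller", "kong"
    registration_token: str                   # used by new agents to register
    upgrade_mode: UpgradeMode = UpgradeMode.AUTOMATIC
    key_storage: str = "gateway"              # "gateway" or "cloud"
    max_agents: int = 100                     # advanced setting: connection limit
```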

It is possible to delete an agent so that it will no longer be able to connect to the Fog.

The best-practice recommendation is to create an individual profile in the WebUI for each of your open-appsec deployments. Examples of "deployments" include:

  • A Kubernetes deployment using Helm or the installation tool (consisting of one or multiple open-appsec agents)

  • A redundant deployment using Docker on two or more virtual machines (each with its own agent) protecting the same web assets

  • A redundant embedded deployment on one or multiple redundant Linux machines protecting the same web assets

Security-wise, this ensures that only the policies for the assets protected by specific agents are enforced on those agents (by linking the assets to only the relevant profile(s)), and that each deployment and its associated agents use a separate token. This approach also provides the flexibility to configure the various profile-level settings individually per deployment, if required.
