For the NGINX Ingress Controller with open-appsec, the following method is recommended if you have an advanced understanding of Kubernetes and want very granular control using CRDs. For a simplified installation you can alternatively use the available installation tool, see here.
For Kong and Apache APISIX with open-appsec, follow the instructions for installation using Helm below.
Prerequisites
Kubernetes 1.16.0+ cluster with RBAC enabled and cluster admin permissions
Run the following command to install open-appsec together with the NGINX Ingress Controller, Kong API Gateway, or Apache APISIX API Gateway. It also creates the open-appsec CRDs, which add new K8s resource types used later to define protection policies, log settings, exceptions, user response, and more.
If you have persistent storage available in your cluster, change the "--set appsec.persistence.enabled=false" parameter in the following command to "true" so that open-appsec can use persistent storage for the learning data. It is set to "false" below only for maximum compatibility.
This installs APISIX with open-appsec into a new namespace "appsec-apisix" in local management mode (stand-alone).
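For illustration only, a minimal stand-alone install could look like the following; the chart reference (a locally downloaded open-appsec APISIX chart archive) and the release name appsec are placeholders, so adjust them to the chart and values you actually use:
helm install appsec ./open-appsec-k8s-apisix-latest.tgz \
  --create-namespace -n appsec-apisix \
  --set appsec.mode=standalone \
  --set appsec.persistence.enabled=false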
Optional open-appsec helm install parameters
-n <namespace>: selects the namespace that will contain the open-appsec and NGINX Ingress Controller resources; please use the appsec namespace.
--create-namespace: create namespace if it doesn't exist
--name-template: name of your deployment, used for pod naming (optional)
--set appsec.userEmail: allows you to associate your email address with your specific deployment by replacing <your-email-address> with your own email address.
This allows us to provide you with easy assistance if you run into any issues with your specific deployment in the future, and also to proactively share information regarding open-appsec in general or regarding your specific deployment. This parameter is optional and can be removed. Any automatic emails we send will include an opt-out option for similar communication in the future.
--set appsec.persistence.enabled: the persistent volume stores machine-learning data; if this is set to false, that data is lost when the appsec container is stopped or restarted.
true (default)
false
If this value is set to true (the default, unless overridden with false), you must also specify appsec.persistence.learning.storageClass.
--set appsec.persistence.learning.storageClass: Specify storage class to be used for the learning pod.
Note: the storageClass specified here must support ReadWriteMany (for example AWS EFS or Azure Files).
--set appsec.mode: Configures whether the deployment is connected to the central management WebUI (SaaS)
standalone: use this only for standalone deployment (locally managed via CRDs with no connection to central management WebUI (SaaS))
managed: use this to connect to the central management WebUI (SaaS); when this is set, appsec.agentToken must be provided as well.
--set appsec.agentToken: sets the deployment profile token from the central management WebUI (SaaS) to connect your open-appsec deployment to the central WebUI (SaaS). Make sure to also set appsec.mode to managed when you provide the token. See here how to get the token: create a profile in the web UI.
--set controller.ingressClassResource.name: specify unique ingress class name, default is 'appsec-nginx'
--set controller.ingressClassResource.controllerValue: default is 'k8s.io/appsec-nginx'
--set controller.service.externalTrafficPolicy=Local: required for Azure.
For additional available configuration values please check the values.yaml within the downloaded Helm chart and the Ingress NGINX documentation available here.
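As an illustration of how these values combine for the NGINX variant, a command along the following lines could be used; the chart path and the storage class are placeholders, and persistence is enabled here, so a ReadWriteMany-capable storage class must exist in your cluster:
helm install appsec ./open-appsec-k8s-nginx-ingress-latest.tgz \
  --create-namespace -n appsec \
  --set appsec.mode=standalone \
  --set appsec.userEmail=<your-email-address> \
  --set appsec.persistence.enabled=true \
  --set appsec.persistence.learning.storageClass=<rwx-storage-class> \
  --set controller.ingressClassResource.name=appsec-nginx \
  --set controller.ingressClassResource.controllerValue=k8s.io/appsec-nginx
On Azure, additionally pass --set controller.service.externalTrafficPolicy=Local as noted above.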
-n <namespace>: selects the namespace that will contain the open-appsec and Kong Gateway resources; please use the appsec namespace.
--create-namespace: create namespace if it doesn't exist
--name-template: name of your deployment, used for pod naming (optional)
--set appsec.userEmail: allows you to associate your email address with your specific deployment by replacing <your-email-address> with your own email address.
This allows us to provide you with easy assistance if you run into any issues with your specific deployment in the future, and also to proactively share information regarding open-appsec in general or regarding your specific deployment. This parameter is optional and can be removed. Any automatic emails we send will include an opt-out option for similar communication in the future.
--set appsec.persistence.enabled: the persistent volume stores machine-learning data; if this is set to false, that data is lost when the appsec container is stopped or restarted.
true (default)
false
If this value is set to true (the default, unless overridden with false), you must also specify appsec.persistence.learning.storageClass.
--set appsec.persistence.learning.storageClass: Specify storage class to be used for the learning pod.
Note: the storageClass specified here must support ReadWriteMany (for example AWS EFS or Azure Files).
--set appsec.mode: Configures whether the deployment is connected to the central management WebUI (SaaS)
standalone: use this only for standalone deployment (locally managed via CRDs with no connection to central management WebUI (SaaS))
managed: use this to connect to the central management WebUI (SaaS); when this is set, appsec.agentToken must be provided as well.
--set appsec.agentToken: sets the deployment profile token from the central management WebUI (SaaS) to connect your open-appsec deployment to the central WebUI (SaaS). Make sure to also set appsec.mode to managed when you provide the token. See here how to get the token: create a profile in the web UI.
--set kind: select deployment type
AppSec: Installs open-appsec and Kong as K8s Deployment (default, recommended for most scenarios)
Note: If required, in this mode you can also switch to a DaemonSet by additionally setting deployment.daemonset to true.
AppSecStateful: Installs open-appsec and Kong as a K8s StatefulSet
Vanilla: (for debugging purposes only) installs just regular Kong based on the Helm chart without open-appsec.
Note: This can be useful when debugging if a potential issue with the Kong deployment is caused by open-appsec or not.
NOTE: If Vanilla mode is used, the Kong/Kong Gateway image specified under image.repository/image.tag is used instead of the open-appsec-specific Kong/Kong Gateway image specified here: appsec.kong.image.repository / appsec.kong.image.tag
--set ingressController.ingressClass: specify desired ingress class name
For additional available configuration values please check the values.yaml within the downloaded Helm chart and the Kong documentation available here.
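For example, a managed-mode Kong deployment connected to the central WebUI (SaaS) could be installed along these lines; the chart path and the token value are placeholders:
helm install appsec ./open-appsec-k8s-kong-latest.tgz \
  --create-namespace -n appsec \
  --set kind=AppSec \
  --set appsec.mode=managed \
  --set appsec.agentToken=<profile-token-from-webui> \
  --set appsec.persistence.enabled=false \
  --set ingressController.ingressClass=kong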
-n <namespace>: selects the namespace that will contain the open-appsec and APISIX gateway resources; please use the appsec-apisix namespace.
--create-namespace: create namespace if it doesn't exist
--name-template: name of your deployment, used for pod naming (optional)
--set appsec.userEmail: allows you to associate your email address with your specific deployment by replacing <your-email-address> with your own email address.
This allows us to provide you with easy assistance if you run into any issues with your specific deployment in the future, and also to proactively share information regarding open-appsec in general or regarding your specific deployment. This parameter is optional and can be removed. Any automatic emails we send will include an opt-out option for similar communication in the future.
--set appsec.persistence.enabled: the persistent volume stores machine-learning data; if this is set to false, that data is lost when the appsec container is stopped or restarted.
true (default)
false
If this value is set to true (the default, unless overridden with false), you must also specify appsec.persistence.learning.storageClass.
--set appsec.persistence.learning.storageClass: Specify storage class to be used for the learning pod.
Note: the storageClass specified here must support ReadWriteMany (for example AWS EFS or Azure Files).
--set appsec.mode: Configures whether the deployment is connected to the central management WebUI (SaaS)
standalone: use this only for standalone deployment (locally managed via CRDs with no connection to central management WebUI (SaaS))
managed: use this to connect to the central management WebUI (SaaS); when this is set, appsec.agentToken must be provided as well.
--set appsec.agentToken: sets the deployment profile token from the central management WebUI (SaaS) to connect your open-appsec deployment to the central WebUI (SaaS). Make sure to also set appsec.mode to managed when you provide the token. See here how to get the token: create a profile in the web UI.
--set ingressController.ingressClass: specify desired ingress class name
For additional available configuration values please check the values.yaml within the downloaded Helm chart and the APISIX documentation available here.
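For example, a stand-alone APISIX deployment with persistent learning storage could be installed along these lines; the chart path, storage class, and ingress class name are placeholders:
helm install appsec ./open-appsec-k8s-apisix-latest.tgz \
  --create-namespace -n appsec-apisix \
  --set appsec.mode=standalone \
  --set appsec.persistence.enabled=true \
  --set appsec.persistence.learning.storageClass=<rwx-storage-class> \
  --set ingressController.ingressClass=apisix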
Step 3: Validate that open-appsec is installed and running
kubectl get pods -n appsec
The READY column should show 2/2 for the ingress controller pod and 1/1 for the learning deployment and shared storage deployment pods.
kubectl get pods -n appsec
The READY column typically shows 3/3 (or 2/2 if, for example, Kong is deployed without the Kong ingress controller) for the Kong pod and 1/1 for the learning deployment and shared storage deployment pods.
kubectl get pods -n appsec-apisix
The READY column should show 2/2 for the ingress controller pod and 1/1 for the learning deployment and shared storage deployment pods.
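For any of the variants above, healthy output looks roughly like the following (pod names, counts, and ages are illustrative and will differ in your cluster):
NAME                                     READY   STATUS    RESTARTS   AGE
appsec-nginx-ingress-controller-xxxxx    2/2     Running   0          2m
appsec-learning-xxxxx                    1/1     Running   0          2m
appsec-shared-storage-xxxxx              1/1     Running   0          2m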
NGINX Ingress Controller Option 1: Add protection to existing Ingress resource
open-appsec implements K8s ingress resources serving as an NGINX ingress controller with multi-layered Web App & API protection functionalities.
If you already use an NGINX Ingress today, you can easily update your existing K8s Ingress resource to use the open-appsec ingress. Once you apply the change, the ingress will reload and traffic will be protected.
This is a good approach for lab, staging, or non-critical production environments.
a. Create an open-appsec policy resource
First you must create a K8s open-appsec policy resource.
There are multiple ways to create a policy:
Use the available configuration tool as explained here to easily create a policy resource.
Run the following commands to create the "open-appsec-best-practice-policy" resource in K8s:
Make sure to use the correct name for the open-appsec policy resource which you created above.
The default mode of the open-appsec-best-practice-policy is detect-learn. It will not block any traffic, unless you change the policy mode to prevent-learn, either for a specific ingress rule or for the whole policy.
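For orientation only, a minimal policy resource could look similar to the sketch below. The apiVersion and field names shown are assumptions for illustration, so generate or copy the actual resource from the configuration tool or the policy documentation rather than from this sketch:
apiVersion: openappsec.io/v1beta1    # assumed API group/version, verify against the installed CRDs
kind: Policy
metadata:
  name: open-appsec-best-practice-policy
spec:
  default:
    mode: detect-learn    # change to prevent-learn to start blocking malicious requests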
NGINX Ingress Controller Option 2: Run a new protected Ingress in parallel
open-appsec implements K8s ingress resources serving as an NGINX ingress controller with multi-layered Web App & API protection functionalities.
Duplicate your existing ingress rules and run a new ingress side by side with your existing one. Once you are happy with the result, you can change your DNS settings to point to the new, protected ingress and take down the existing ("old") ingress.
This option allows you to test that all services are properly accessible via the new ingress, without worrying about traffic disruption.
a. Create an open-appsec policy resource
First you must create a K8s open-appsec policy resource.
There are multiple ways to create a policy:
Use the available configuration tool as explained here to easily create a policy resource.
Run the following commands to create the "open-appsec-best-practice-policy" in K8s:
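As a sketch of the parallel-ingress approach, the duplicated ingress only needs to reference the open-appsec ingress class; the host, service name, and port below are placeholders, and the class name assumes the default appsec-nginx from the Helm values above (the open-appsec policy annotation is then added as described in the policy steps):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-appsec        # new ingress, deployed alongside the existing one
spec:
  ingressClassName: appsec-nginx
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80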
Kong Gateway: Add protection to existing Ingress resource
open-appsec secures traffic by integrating directly with the Kong Gateway container, which also allows open-appsec to inspect HTTPS traffic terminated at the Kong Gateway.
For traffic to reach your API Gateway you can use the Kong Controller as an Ingress Controller alongside the Kong API Gateway (by default the Kong Controller is deployed as an additional container within the same pod as the Kong Gateway, but it is an optional component).
Alternatively you can use another ingress controller of your choice.
If you already use an Ingress for proxying traffic to your Kong Gateway, you can easily update your existing K8s Ingress resource to secure its traffic with open-appsec. Once you apply the change, the ingress will reload and traffic will be protected.
Note: Having an Ingress resource defined for traffic to the Kong Gateway is mandatory in order to protect the traffic with open-appsec, as the open-appsec policy resource has to be linked to an ingress resource via an annotation; see the steps below. Additional options will be provided in the future.
a. Create an open-appsec policy resource
First you must create a K8s open-appsec policy resource.
There are multiple ways to create a policy:
Use the available configuration tool as explained here to easily create a policy resource.
Run the following commands to create the "open-appsec-best-practice-policy" in K8s:
The default mode of this policy is detect-learn. It will not block any traffic, unless you change the policy mode to prevent-learn, either for a specific ingress rule or for the whole policy.
open-appsec will read and enforce the open-appsec policy specified in the ingress resource by this annotation even though the actual enforcement is done in the Kong Gateway and not in the Ingress Controller (this is similar to how Kong implements its declarative policy).
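Conceptually, linking the policy is just adding an annotation to the Ingress resource that fronts the Kong Gateway; the annotation key below is a placeholder, as the exact key is defined in the open-appsec policy documentation:
kubectl annotate ingress <your-kong-ingress> -n <your-namespace> \
  <open-appsec-policy-annotation-key>=open-appsec-best-practice-policy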
APISIX Gateway: Add protection to existing Ingress resource
open-appsec secures traffic by integrating directly with the APISIX Gateway container, which also allows open-appsec to inspect HTTPS traffic terminated at the APISIX Gateway.
For traffic to reach your API Gateway you can use the APISIX Ingress Controller alongside the APISIX API Gateway.
If you already use an Ingress for proxying traffic to your APISIX Gateway, you can easily update your existing K8s Ingress resource to secure its traffic with open-appsec. Once you apply the change, the ingress will reload and traffic will be protected.
a. Create an open-appsec policy resource
First you must create a K8s open-appsec policy resource.
There are multiple ways to create a policy:
Use the available configuration tool as explained here to easily create a policy resource.
Run the following commands to create the "open-appsec-best-practice-policy" resource in K8s:
Make sure to use the correct name for the open-appsec policy resource which you created above.
The default mode of the open-appsec-best-practice-policy is detect-learn. It will not block any traffic, unless you change the policy mode to prevent-learn, either for a specific ingress rule or for the whole policy.
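If you saved the policy resource as a local file (the file name below is a placeholder), applying it is a standard kubectl apply, after which the policy name can be referenced from the ingress annotation:
kubectl apply -f open-appsec-best-practice-policy.yaml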
Step 5: Validate that open-appsec works
Your existing or new Ingress is now running and you can try it out!
Generate some traffic to one of the services defined in your ingress.
Run this command to see logs:
Note the name of the ingress nginx pod by running:
kubectl get pods -n appsec
Show the logs of the open-appsec agent container by running:
kubectl logs [ingress nginx pod name] -c open-appsec -n appsec
Note the name of the Kong pod by running:
kubectl get pods -n appsec
Show the logs of the open-appsec agent container by running:
kubectl logs [kong pod name] -c open-appsec -n appsec
Note the name of the apisix gateway pod by running:
kubectl get pods -n appsec-apisix
Show the logs of the open-appsec agent container by running:
kubectl logs [apisix gateway pod name] -c open-appsec -n appsec-apisix
With the default policy, logging is done to stdout, so you can easily forward it with fluentd/fluentbit or similar to a log collector (ELK or other). It is also possible to configure open-appsec to log to syslog.
open-appsec automatically logs the first 10 HTTP requests and then by default will only log malicious requests. You can change this setting.
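For example, to confirm detection end to end you could send a request with an obviously suspicious payload to one of your services (the address and path below are placeholders) and then look for the corresponding event in the agent logs:
curl "http://<your-ingress-or-gateway-address>/?q=<script>alert(1)</script>"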
Step 6: Point your DNS to the New Ingress
After testing that your services are reachable, you can point your public DNS record to the new ingress.
In case of a problem, at any time, you can either switch open-appsec off while running the same ingress code, or change your DNS back.
For production usage you might want to switch from the Basic to the more accurate Advanced Machine Learning model, as described here: