Getting Started Securing Elasticsearch

This document serves as an introduction to using Cilium to enforce Elasticsearch-aware security policies. It is a detailed walk-through of getting a single-node Cilium environment running on your machine. It is designed to take 15-30 minutes.

If you haven’t read the Introduction to Cilium yet, we’d encourage you to do that first.

The best way to get help if you get stuck is to ask a question on the Cilium Slack channel. With Cilium contributors across the globe, there is almost always someone available to help.

Step 0: Install kubectl & minikube

  1. Install kubectl version >= 1.8.0 as described in the Kubernetes Docs.
  2. Install one of the hypervisors supported by minikube.
  3. Install minikube >= 0.22.3 as described on minikube's GitHub page.
  4. Boot a minikube VM with CNI enabled. Use the first command below for the default Docker container runtime, or the second if you plan to use CRI-O:
$ minikube start --network-plugin=cni --extra-config=kubelet.network-plugin=cni --memory=5120
$ minikube start --network-plugin=cni --container-runtime=cri-o --extra-config=kubelet.network-plugin=cni --memory=5120
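
If you want to confirm the installed tool versions before booting the cluster, you can run the following (the exact output format varies between releases):

$ kubectl version --client
$ minikube version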

After minikube has finished setting up your new Kubernetes cluster, you can check the status of the cluster by running kubectl get cs:

$ kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
  5. Install etcd as a dependency of Cilium in minikube by running:
$ kubectl create -n kube-system -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes/addons/etcd/standalone-etcd.yaml
service "etcd-cilium" created
statefulset.apps "etcd-cilium" created

To check that all pods are Running and 100% ready, including kube-dns and etcd-cilium-0, run:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY     STATUS    RESTARTS   AGE
kube-system   etcd-cilium-0                      1/1       Running   0          1m
kube-system   etcd-minikube                      1/1       Running   0          3m
kube-system   kube-addon-manager-minikube        1/1       Running   0          4m
kube-system   kube-apiserver-minikube            1/1       Running   0          3m
kube-system   kube-controller-manager-minikube   1/1       Running   0          3m
kube-system   kube-dns-86f4d74b45-lhzfv          3/3       Running   0          4m
kube-system   kube-proxy-tcd7h                   1/1       Running   0          4m
kube-system   kube-scheduler-minikube            1/1       Running   0          4m
kube-system   storage-provisioner                1/1       Running   0          4m

If you see output similar to this, you are ready to proceed to the next step.

Note

The output might differ between minikube versions; you should expect all pods to be in the READY / Running state before continuing.
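
Rather than polling manually, you can also watch the pods until they all settle into the Running state (press Ctrl-C to stop watching):

$ kubectl get pods --all-namespaces -w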

Step 1: Install Cilium

The next step is to install Cilium into your Kubernetes cluster. Cilium installation leverages the Kubernetes Daemon Set abstraction, which will deploy one Cilium pod per cluster node. This Cilium pod will run in the kube-system namespace along with all other system relevant daemons and services. The Cilium pod will run both the Cilium agent and the Cilium CNI plugin.

Choose the installation instructions for the environment in which you are deploying Cilium.

Docker Based

To deploy Cilium, run the command below, substituting your Kubernetes minor version (1.8, 1.9, 1.10, 1.11 or 1.12) into the manifest path. For example, on Kubernetes 1.12:

$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes/1.12/cilium.yaml
configmap "cilium-config" created
secret "cilium-etcd-secrets" created
daemonset.extensions "cilium" created
clusterrolebinding.rbac.authorization.k8s.io "cilium" created
clusterrole.rbac.authorization.k8s.io "cilium" created
serviceaccount "cilium" created

Kubernetes is now deploying Cilium with its RBAC settings, ConfigMap and DaemonSet as a pod on minikube. This operation is performed in the background.

Run the following command to check the progress of the deployment:

$ kubectl get daemonsets -n kube-system
NAME         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
cilium       1         1         0         1            1           <none>          3m
kube-proxy   1         1         1         1            1           <none>          8m

Wait until the cilium DaemonSet shows a CURRENT count of 1 like above (a READY value of 0 is OK for this tutorial).
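
Once the cilium pod is scheduled, you can optionally check the agent's health from inside the pod. This is a quick sanity check rather than a required step; the label selector assumes the k8s-app=cilium label set by the manifest, and <cilium-pod-name> is a placeholder for the pod name returned by the first command:

$ kubectl -n kube-system get pods -l k8s-app=cilium
$ kubectl -n kube-system exec <cilium-pod-name> -- cilium status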

CRI-O Based

To deploy Cilium, run the command below, substituting your Kubernetes minor version (1.8, 1.9, 1.10, 1.11 or 1.12) into the manifest path. For example, on Kubernetes 1.12:

$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes/1.12/cilium-crio.yaml
configmap "cilium-config" created
secret "cilium-etcd-secrets" created
daemonset.extensions "cilium" created
clusterrolebinding.rbac.authorization.k8s.io "cilium" created
clusterrole.rbac.authorization.k8s.io "cilium" created
serviceaccount "cilium" created

Kubernetes is now deploying Cilium with its RBAC settings, ConfigMap and DaemonSet as a pod on minikube. This operation is performed in the background.

Run the following command to check the progress of the deployment:

$ kubectl get daemonsets -n kube-system
NAME         DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
cilium       1         1         0         1            1           <none>          3m
kube-proxy   1         1         1         1            1           <none>          8m

Wait until the cilium DaemonSet shows a CURRENT count of 1 like above (a READY value of 0 is OK for this tutorial).

Since CRI-O does not automatically detect that a new CNI plugin has been installed, you will need to restart the CRI-O daemon for it to pick up the Cilium CNI configuration.

First, make sure the cilium pod is running:

$ kubectl get pods -n kube-system -o wide
NAME               READY     STATUS    RESTARTS   AGE       IP          NODE
cilium-mqtdz       1/1       Running   0          3m       10.0.2.15   minikube

After that you can restart CRI-O:

$ minikube ssh -- sudo systemctl restart crio

Finally, you need to restart the Cilium pod so it can re-mount /var/run/crio.sock, which was recreated by CRI-O:

$ kubectl delete -n kube-system pod cilium-mqtdz
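
The DaemonSet will immediately schedule a replacement pod. Assuming the standard k8s-app=cilium label from the manifest, you can watch for the new pod to reach the Running state before continuing:

$ kubectl -n kube-system get pods -l k8s-app=cilium -w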

Step 2: Deploy the Demo Application

Following the Cilium tradition, we will use a Star Wars-inspired example. The Empire has a large-scale Elasticsearch cluster, which is used for storing a variety of data, including:

  • index: troop_logs: Stormtrooper performance logs collected from every outpost, which are used to identify and eliminate weak performers!
  • index: spaceship_diagnostics: Spaceship diagnostics data collected from every spaceship, which is used for R&D and improvement of the spaceships.

Every outpost has an Elasticsearch client service to upload the Stormtrooper logs, and every spaceship has a service to upload diagnostics. Similarly, the Empire headquarters has a service to search and analyze the troop logs and spaceship diagnostics data. Before we look into the security concerns, let’s first create this application scenario in minikube.

Deploy the app using the command below, which will create:

  • An elasticsearch service with the selector label component: elasticsearch and a pod running Elasticsearch.
  • Three Elasticsearch clients, one each for empire-hq, outpost, and spaceship.
$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes-es/es-sw-app.yaml
serviceaccount "elasticsearch" created
service "elasticsearch" created
replicationcontroller "es" created
role "elasticsearch" created
rolebinding "elasticsearch" created
pod "outpost" created
pod "empire-hq" created
pod "spaceship" created
$ kubectl get svc,pods
NAME                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                           AGE
svc/elasticsearch   NodePort    10.111.238.254   <none>        9200:30130/TCP,9300:31721/TCP     2d
svc/etcd-cilium     NodePort    10.98.67.60      <none>        32379:31079/TCP,32380:31080/TCP   9d
svc/kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP                           9d

NAME               READY     STATUS    RESTARTS   AGE
po/empire-hq       1/1       Running   0          2d
po/es-g9qk2        1/1       Running   0          2d
po/etcd-cilium-0   1/1       Running   0          9d
po/outpost         1/1       Running   0          2d
po/spaceship       1/1       Running   0          2d
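
Before moving on, make sure the Elasticsearch backend pod (the es pod above) is Running. It carries the component: elasticsearch label that both the service selector and the security policy in Step 4 reference, so you can check it directly by label:

$ kubectl get pods -l component=elasticsearch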

Step 3: Security Risks for Elasticsearch Access

For Elasticsearch clusters, the least-privilege security challenge is to give clients access only to particular indices, and to limit the operations each client is allowed to perform on each index. In this example, the outpost Elasticsearch clients only need access to upload troop logs, and the empire-hq client only needs search access to both indices. From a security perspective, the outposts are weak spots susceptible to being captured by the rebels. Once compromised, the clients can be used to search and manipulate the critical data in Elasticsearch. We can simulate this attack, but first let’s run the commands for legitimate behavior for all the client services.

outpost client uploading troop logs

$ kubectl exec outpost -- python upload_logs.py
Uploading Stormtroopers Performance Logs
created :  {'_index': 'troop_logs', '_type': 'log', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}

spaceship uploading diagnostics

$ kubectl exec spaceship -- python upload_diagnostics.py
Uploading Spaceship Diagnostics
created :  {'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_version': 1, 'result': 'created', '_shards': {'total': 2, 'successful': 1, 'failed': 0}, 'created': True}

empire-hq running search queries for logs and diagnostics

$ kubectl exec empire-hq -- python search.py
Searching for Spaceship Diagnostics
Got 1 Hits:
{'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
 '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
             'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}}
Searching for Stormtroopers Performance Logs
Got 1 Hits:
{'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \
 '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \
             'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}

Now imagine an outpost has been captured by the rebels. In the commands below, the rebels first search all the indices from the compromised outpost and then manipulate the diagnostics data.

$ kubectl exec outpost -- python search.py
Searching for Spaceship Diagnostics
Got 1 Hits:
{'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
 '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
             'stats': '[CRITICAL] [ENGINE BURN @SPEED 5000 km/s] [CHANCE 80%]'}}
Searching for Stormtroopers Performance Logs
Got 1 Hits:
{'_index': 'troop_logs', '_type': 'log', '_id': '1', '_score': 1.0, \
 '_source': {'outpost': 'Endor', 'datetime': '33 ABY 4AM DST', 'title': 'Endor Corps 1: Morning Drill', \
             'notes': '5100 PRESENT; 15 ABSENT; 130 CODE-RED BELOW PAR PERFORMANCE'}}

The rebels then manipulate the spaceship diagnostics data so that the spaceship defects are not known to empire-hq! (Hint: the rebels have changed the stats for the tiefighter spaceship, a change that is hard to detect but has an adverse impact!)

$ kubectl exec outpost -- python update.py
Uploading Spaceship Diagnostics
{'_index': 'spaceship_diagnostics', '_type': 'stats', '_id': '1', '_score': 1.0, \
 '_source': {'spaceshipid': '3459B78XNZTF', 'type': 'tiefighter', 'title': 'Engine Diagnostics', \
             'stats': '[OK] [ENGINE OK @SPEED 5000 km/s]'}}

Step 4: Securing Elasticsearch Using Cilium

[Figure: cilium_es_gsg_topology.png (Elasticsearch demo application and policy topology)]

Following the least-privilege security principle, we want to allow the following legitimate actions and nothing more:

  • The outpost service only has upload access to index: troop_logs
  • The spaceship service only has upload access to index: spaceship_diagnostics
  • The empire-hq service only has search access to both indices

Fortunately, the Empire DevOps team is using Cilium for their Kubernetes cluster. Cilium provides L7 visibility and security policies to control Elasticsearch API access. Cilium follows the whitelist, least-privilege model for security: a CiliumNetworkPolicy contains a list of rules that define allowed requests, and any request that does not match the rules is denied.

In this example, the policy rules are defined for inbound traffic (i.e., “ingress”) connections to the elasticsearch service. Note that the endpoints selected as backend pods for the service are defined by selector labels, which use the same concept Kubernetes uses to define a service. In this example, the label component: elasticsearch defines the pods that are part of the elasticsearch service in Kubernetes.

In the policy file below, you will see the following rules for controlling index access and the actions performed:

  • fromEndpoints with label app: spaceship: only HTTP PUT is allowed, on paths matching the regex ^/spaceship_diagnostics/stats/.*$
  • fromEndpoints with label app: outpost: only HTTP PUT is allowed, on paths matching the regex ^/troop_logs/log/.*$
  • fromEndpoints with label app: empire-hq: only HTTP GET is allowed, on paths matching the regexes ^/spaceship_diagnostics/_search/??.*$ and ^/troop_logs/_search/??.*$

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: secure-empire-elasticsearch
  namespace: default
specs:
- endpointSelector:
    matchLabels:
      component: elasticsearch
  ingress:
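  # spaceship clients: allow only HTTP PUT to /spaceship_diagnostics/stats/...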
  - fromEndpoints:
    - matchLabels:
        app: spaceship
    toPorts:
    - ports:
      - port: "9200"
        protocol: TCP
      rules:
        http:
        - method: ^PUT$
          path: ^/spaceship_diagnostics/stats/.*$
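  # empire-hq client: allow only HTTP GET on _search for both indices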
  - fromEndpoints:
    - matchLabels:
        app: empire-hq
    toPorts:
    - ports:
      - port: "9200"
        protocol: TCP
      rules:
        http:
        - method: ^GET$
          path: ^/spaceship_diagnostics/_search/??.*$
        - method: ^GET$
          path: ^/troop_logs/_search/??.*$
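  # outpost clients: allow only HTTP PUT to /troop_logs/log/...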
  - fromEndpoints:
    - matchLabels:
        app: outpost
    toPorts:
    - ports:
      - port: "9200"
        protocol: TCP
      rules:
        http:
        - method: ^PUT$
          path: ^/troop_logs/log/.*$
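# Second spec: applies to all endpoints; allows egress to cluster pods and the host, and all ingress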
- egress:
  - toEndpoints:
    - matchExpressions:
      - key: k8s:io.kubernetes.pod.namespace
        operator: Exists
  - toEntities:
    - cluster
    - host
  endpointSelector: {}
  ingress:
  - {}

Apply this Elasticsearch-aware network security policy using kubectl:

$ kubectl create -f https://raw.githubusercontent.com/cilium/cilium/1.3.0/examples/kubernetes-es/es-sw-policy.yaml
ciliumnetworkpolicy "secure-empire-elasticsearch" created

Let’s test the security policies. First, search access is blocked for both outpost and spaceship, so from a compromised outpost the rebels will not be able to search and obtain knowledge about troops and spaceship diagnostics. Second, the outpost clients don’t have access to create or update index: spaceship_diagnostics.

$ kubectl exec outpost -- python search.py
GET http://elasticsearch:9200/spaceship_diagnostics/_search [status:403 request:0.008s]
...
...
elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
command terminated with exit code 1
$ kubectl exec outpost -- python update.py
PUT http://elasticsearch:9200/spaceship_diagnostics/stats/1 [status:403 request:0.006s]
...
...
elasticsearch.exceptions.AuthorizationException: TransportError(403, 'Access denied\r\n')
command terminated with exit code 1
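
To see these denials from Cilium's point of view, you can run the L7 monitor inside the Cilium agent pod while re-running one of the blocked commands. This is a hedged sketch: <cilium-pod-name> is a placeholder for your Cilium pod, and the l7 monitor type is assumed to be available in this Cilium release:

$ kubectl -n kube-system exec -ti <cilium-pod-name> -- cilium monitor --type l7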

We can re-run any of the commands below to show that the security policy still allows all legitimate requests (i.e., no 403 errors are returned).

$ kubectl exec outpost -- python upload_logs.py
...
$ kubectl exec spaceship -- python upload_diagnostics.py
...
$ kubectl exec empire-hq -- python search.py
...

Step 5: Clean Up

You have now installed Cilium, deployed a demo app, and finally deployed & tested Elasticsearch-aware network security policies. To clean up, run:

$ minikube delete