Installation on Azure AKS

This guide covers installing Cilium into an Azure AKS environment running in Advanced Networking Mode.

This is achieved using Cilium CNI chaining, with the Azure CNI plugin as the base CNI plugin and Cilium chaining on top to provide L3/L4/L7 visibility and enforcement, as well as other advanced features like transparent encryption.

Prerequisites

Ensure that you have the Azure CLI (az) installed.

To verify, confirm that the following command displays the set of available Kubernetes versions.

az aks get-versions -l westus -o table
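If the az command is not yet available, one common way to install it on Debian or Ubuntu is Microsoft's convenience script (shown here as a sketch; other platforms have their own installers, see the Azure CLI documentation):

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Confirm the CLI is installed and on your PATH
az --version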

Create an AKS Cluster in Advanced Networking Mode

For the full background on creating AKS clusters in advanced networking mode, see this guide.

If you want to use the CLI to create a dedicated set of Azure resources (resource groups, networks, etc.) specifically for this tutorial, the following commands (borrowed from the AKS documentation), run as a script or manually in the same terminal, are sufficient.

It can take 10+ minutes for the final command to complete, indicating that the cluster is ready. You can check on its progress with the provisioning-state query shown after the script below.

Note

Do NOT specify the ‘--network-policy’ flag when creating the cluster, as this will cause the Azure CNI plugin to push down unwanted iptables rules:

export RESOURCE_GROUP_NAME=myResourceGroup-NP
export CLUSTER_NAME=myAKSCluster
export LOCATION=westus

# Create a resource group
az group create --name $RESOURCE_GROUP_NAME --location $LOCATION

# Create a virtual network and subnet
az network vnet create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name myVnet \
    --address-prefixes 10.0.0.0/8 \
    --subnet-name myAKSSubnet \
    --subnet-prefix 10.240.0.0/16

# Create a service principal and capture its application ID and password
SP_ID_PASSWORD=$(az ad sp create-for-rbac --skip-assignment --query '[appId,password]' -o tsv)
SP_ID=$(echo ${SP_ID_PASSWORD} | sed -e 's/ .*//g')
SP_PASSWORD=$(echo ${SP_ID_PASSWORD} | sed -e 's/.* //g')
unset SP_ID_PASSWORD

# Wait 15 seconds to make sure the service principal has propagated
echo "Waiting for service principal to propagate..."
sleep 15

# Get the virtual network resource ID
VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP_NAME --name myVnet --query id -o tsv)

# Assign the service principal Contributor permissions to the virtual network resource
az role assignment create --assignee $SP_ID --scope $VNET_ID --role Contributor

# Get the virtual network subnet resource ID
SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP_NAME --vnet-name myVnet --name myAKSSubnet --query id -o tsv)

# Create the AKS cluster and specify the virtual network and service principal information
# Do NOT pass the --network-policy flag here (see the note above)
az aks create \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --node-count 1 \
    --generate-ssh-keys \
    --network-plugin azure \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $SUBNET_ID \
    --service-principal $SP_ID \
    --client-secret $SP_PASSWORD
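Because az aks create blocks until the cluster is ready, you can check on progress from a second terminal with a quick provisioning-state query (export the same RESOURCE_GROUP_NAME and CLUSTER_NAME variables there first):

# Prints "Succeeded" once the cluster is fully provisioned
az aks show \
    --resource-group $RESOURCE_GROUP_NAME \
    --name $CLUSTER_NAME \
    --query provisioningState -o tsv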

Configure kubectl to Point to Newly Created Cluster

Run the following commands to configure kubectl to connect to this AKS cluster:

az aks get-credentials --resource-group $RESOURCE_GROUP_NAME --name $CLUSTER_NAME
export KUBECONFIG=~/.kube/config

To verify, you should see aks in the node names when you run:

kubectl get nodes
NAME                       STATUS   ROLES   AGE     VERSION
aks-nodepool1-12032939-0   Ready    agent   8m26s   v1.13.10

Create an AKS + Cilium CNI configuration

Create a chaining.yaml file based on the following template to specify the desired CNI chaining configuration:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cni-configuration
  namespace: cilium
data:
  cni-config: |-
    {
      "cniVersion": "0.3.0",
      "name": "azure",
      "plugins": [
        {
          "type": "azure-vnet",
          "mode": "transparent",
          "bridge": "azure0",
          "ipam": {
             "type": "azure-vnet-ipam"
           }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true},
          "snat": true
        },
        {
           "name": "cilium",
           "type": "cilium-cni"
        }
      ]
    }

Create the cilium namespace:

kubectl create namespace cilium

Deploy the ConfigMap:

kubectl apply -f chaining.yaml
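As a quick sanity check, you can confirm that the ConfigMap exists in the cilium namespace:

kubectl -n cilium get configmap cni-configuration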

Prepare & Deploy Cilium

Download the Cilium release tarball and change to the kubernetes install directory:

curl -LO https://github.com/cilium/cilium/archive/1.6.3.tar.gz
tar xzvf 1.6.3.tar.gz
cd cilium-1.6.3/install/kubernetes

Install Helm in order to generate the deployment artifacts from the Helm templates.
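To confirm that helm is installed and on your PATH (helm template renders the manifests locally, so no Tiller connection is required):

# Helm 2 client version; with Helm 3 use 'helm version'
helm version --client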

Generate the required YAML file and deploy it:

helm template cilium \
  --namespace cilium \
  --set nodeinit.azure=true \
  --set global.cni.chainingMode=generic-veth \
  --set global.cni.customConf=true \
  --set global.nodeinit.enabled=true \
  --set global.cni.configMap=cni-configuration \
  --set global.tunnel=disabled \
  > cilium.yaml
kubectl create -f cilium.yaml

This will create both the main cilium daemonset and the cilium-node-init daemonset, which handles tasks like mounting the BPF filesystem and updating the existing Azure CNI plugin to run in ‘transparent’ mode.
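To confirm that both daemonsets were created in the cilium namespace, you can list them (the names come from the generated cilium.yaml):

kubectl -n cilium get daemonsets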

Validate the Installation

You can monitor Cilium and all required components as they are being installed. Since this guide installs Cilium into the cilium namespace, watch pods across all namespaces:

kubectl get pods --all-namespaces --watch
NAMESPACE     NAME                                    READY   STATUS              RESTARTS   AGE
cilium        cilium-operator-cb4578bc5-q52qk         0/1     Pending             0          8s
cilium        cilium-s8w5m                            0/1     PodInitializing     0          7s
kube-system   coredns-86c58d9df4-4g7dd                0/1     ContainerCreating   0          8m57s
kube-system   coredns-86c58d9df4-4l6b2                0/1     ContainerCreating   0          8m57s

It may take a couple of minutes for all components to come up:

cilium        cilium-operator-cb4578bc5-q52qk         1/1     Running   0          4m13s
cilium        cilium-s8w5m                            1/1     Running   0          4m12s
kube-system   coredns-86c58d9df4-4g7dd                1/1     Running   0          13m
kube-system   coredns-86c58d9df4-4l6b2                1/1     Running   0          13m
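Once the cilium pod reports Running, you can optionally ask the agent for its own health summary (substitute your cilium pod name for cilium-s8w5m):

kubectl -n cilium exec cilium-s8w5m -- cilium status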

Deploy the connectivity test

You can deploy the “connectivity-check” to test connectivity between pods.

kubectl apply -f https://raw.githubusercontent.com/cilium/cilium/1.6.3/examples/kubernetes/connectivity-check/connectivity-check.yaml

It will deploy a simple probe and echo server running with multiple replicas. The probe will only report readiness while it can successfully reach the echo server:

kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
echo-585798dd9d-ck5xc    1/1     Running   0          75s
echo-585798dd9d-jkdjx    1/1     Running   0          75s
echo-585798dd9d-mk5q8    1/1     Running   0          75s
echo-585798dd9d-tn9t4    1/1     Running   0          75s
echo-585798dd9d-xmr4p    1/1     Running   0          75s
probe-866bb6f696-9lhfw   1/1     Running   0          75s
probe-866bb6f696-br4dr   1/1     Running   0          75s
probe-866bb6f696-gv5kf   1/1     Running   0          75s
probe-866bb6f696-qg2b7   1/1     Running   0          75s
probe-866bb6f696-tb926   1/1     Running   0          75s
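Once you have verified connectivity, the test deployments can be removed again using the same manifest:

kubectl delete -f https://raw.githubusercontent.com/cilium/cilium/1.6.3/examples/kubernetes/connectivity-check/connectivity-check.yaml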