Mutual TLS

Enabling mTLS

Now it’s time to enable mTLS by applying the following Istio resources:

1 - Enable mTLS in the workshop namespace

kubectl apply -f manifests/istio/mutual-tls/authentication-enable-tls-permissive.yml

This policy configures all workloads in the workshop namespace to accept mutual TLS traffic; because the mode is PERMISSIVE, plain-text requests are still accepted as well. The name of the policy must be default.

kind: "Policy"
metadata:
  name: "default"
  namespace: "workshop"
spec:
  peers:
  - mtls:
     mode: PERMISSIVE
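
If you want the workloads to reject plain-text traffic entirely, the same policy can be applied with STRICT mode instead. The snippet below is a minimal sketch based on the same authentication API; it is not part of the workshop manifests, and with it enabled the plain HTTP health-check caveat described at the end of this section applies:

apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "default"
  namespace: "workshop"
spec:
  peers:
  - mtls:
      mode: STRICT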

2 - At this point, only the receiving side is configured to use mutual TLS. To configure the client side, you need to set destination rules to use mutual TLS. It’s possible to use multiple destination rules, one for each applicable service (or namespace), as sketched after the next example. However, it’s more convenient to use a rule with the * wildcard to match all services in the namespace.

kubectl apply -f manifests/istio/mutual-tls/destination-rule-tls.yml

apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "default"
  namespace: "workshop"
spec:
  host: "*.workshop.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
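
For comparison, a per-service rule would target a single host instead of the wildcard. The sketch below is only illustrative (the rule name and the choice of the preference service are made up for this example and are not part of the workshop manifests):

apiVersion: "networking.istio.io/v1alpha3"
kind: "DestinationRule"
metadata:
  name: "preference-mtls"
  namespace: "workshop"
spec:
  host: "preference.workshop.svc.cluster.local"
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL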

Then you can run:

curl http://${GATEWAY_IP}/
customer => preference => recommendation v1 from 'b87789c58-mfrhr': 2

And you’ll see exactly the same output, but now the communication between the services is secured.

How can you check that?

Istio automatically installs the necessary keys and certificates for mutual TLS authentication in all sidecar containers. Run the command below to confirm that the key and certificate files exist under /etc/certs:

kubectl exec $(kubectl get pod -l app=recommendation,version=v2 -o jsonpath={.items..metadata.name}) -c istio-proxy -- ls /etc/certs
Output:
cert-chain.pem
key.pem
root-cert.pem
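
Depending on your Istio release, istioctl also ships an authn tls-check subcommand that reports, per host, whether the server and client sides expect mutual TLS. Its arguments changed between releases (newer versions also require a client pod name), so treat the invocation below as an assumption to adapt to your installed version:

istioctl authn tls-check customer.workshop.svc.cluster.local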

You can also sniff the traffic between services, which is a bit more tedious. Open a new terminal tab and run the following commands:

CUSTOMER_POD=$(kubectl get pod | grep customer | awk '{ print $1}' | head -1) (1)
PREFERENCE_POD=$(kubectl get pod | grep preference | awk '{ print $1}' | head -1) (1)

kubectl exec -it $CUSTOMER_POD -c istio-proxy -- /bin/bash (2)
kubectl exec -it $PREFERENCE_POD -c istio-proxy -- /bin/bash (2)

# Inside pod shell

ifconfig (3)

sudo tcpdump -vvvv -A -i eth0 '((dst port 8080) and (net 10.0.3.3))' (4)
1 Get the customer and preference pod names
2 Open a shell inside the istio-proxy container of each pod
3 Get the IP of the current pod (usually the address of the eth0 interface)
4 Capture traffic on eth0 (or your interface) destined for port 8080 and the 10.0.3.3 network (replace with the IP you got from ifconfig)

Now all the communication between the customer service and the preference service is dumped to the console.

Now go to another terminal and execute:

curl http://${GATEWAY_IP}/
customer => preference => recommendation v1 from 'b87789c58-mfrhr': 2

Obviously, the response is exactly the same, but if you go to the terminal where tcpdump is running, you should see something like:

14:24:55.078222 IP (tos 0x0, ttl 64, id 32578, offset 0, flags [DF], proto TCP (6), length 967)
    172.17.0.15.33260 > customer-7dcd544ff9-652ds.8080: Flags [P.], cksum 0x5bf5 (incorrect -> 0x595e), seq 2211080917:2211081832, ack 2232186801, win 391, options [nop,nop,TS val 5958433 ecr 5779275], length 915: HTTP
E....B@.@._........
......j...w.....[......
.Z.!.X/K.............w$.?....&T.`n.....UX.C&)Cj....y..{.&..I..	..<.
.....A..q.;...o.9+.4..;...6|".......M.4Wm.:}.....^..v..2..?VW[&s........@}.~B.>D.k..H...r.... .L..i,.
...=..=..y..[.k..g..0..5.f%..vz|..t.....%.`.|...B..%r0.^k.y.....y.@l$O.....?...J..qc&.........Z.^&..F.....w.">7..	...[.......2.&........>......s.....5
.n$X.....l.#...... ..Q..u..jBI.Z.Eb$9.$.._...!.........~"Xx<....);........Z.
.y/E]......K......... .@s.3.\.
.i.v...#.O<..^.F.	...?..:s...).....e......*..F.Kz..i.jk..xx...#....|.U.!.......X.....@......0.....*...l.v..G)T...9...M.....i.H ..=	.a.hp..&8..L..`.s..d_o.~.T ./.......9..	;F81.......S.{.....1rE..o...`..............c+U...}.{7..Y....Q4.#..(.c]Q...[..8..$u.b...=..6.....~..9..H....R
.9x*q....h0......O......q..Fb)..E..m..=.M.....W.Yk>.......;.2eys..E.....=q.;.k ....R.f.(./^F....4.c..*Y.4....es.....TX`nh..L.z.6....(.X.>c.V.0z........GF%.%..l4P.......@.^Q........46.g.#.n...e.k.._..>.T+.S...t}....

Notice that you cannot see any details of the communication since it happens over TLS.

Now, let’s disable TLS:

kubectl delete -f manifests/istio/mutual-tls/authentication-enable-tls-permissive.yml
kubectl delete -f manifests/istio/mutual-tls/destination-rule-tls.yml

And execute again:

curl http://${GATEWAY_IP}/
customer => preference => recommendation v1 from 'b87789c58-mfrhr': 2

And check the tcpdump output again:

host: 192.168.64.70:31380
user-agent: curl/7.54.0
accept: */*
x-forwarded-for: 172.17.0.1
x-forwarded-proto: http
x-envoy-internal: true
x-request-id: e5c0b90f-341b-9edc-ac3e-7dd8b33f0e8b
x-envoy-decorator-operation: customer.workshop.svc.cluster.local:8080/
x-b3-traceid: ce289e960a639d11
x-b3-spanid: ce289e960a639d11
x-b3-sampled: 1

Now you can see that, since TLS is no longer enabled, the information is not hidden anymore but travels in clear text.

One last thing about mutual TLS

If mutual TLS is enabled, HTTP and TCP health checks from the kubelet will not work, since the kubelet does not have Istio-issued certificates. In this workshop, we used port 8080 both for regular service traffic and for the health check (see below).

kind: Service
metadata:
  name: recommendation
spec:
  ports:
  - name: http
    port: 8080
---
kind: Deployment
metadata:
  labels:
    app: recommendation
    version: v1
  name: recommendation-v1
spec:
...
    spec:
      containers:
      - env:
        ...
        livenessProbe:
          httpGet:
            path: /health
            port: 8080

So before enabling mTLS, you should be aware of this subtlety.

On the other hand, Istio supports the PERMISSIVE mode, so services can accept both plain-text and mutual TLS traffic when this mode is turned on. This can solve the health-checking issue.

As a best practice, use a separate port for the health check and enable mutual TLS only on the regular service port.
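
A minimal sketch of that separation is shown below, in the same abbreviated style as the manifest above. The status port 8081 is an assumption for illustration (your application must actually serve /health there); the Service keeps exposing only port 8080, which is the one protected by mutual TLS, while the kubelet probes port 8081 directly in plain HTTP:

kind: Service
metadata:
  name: recommendation
spec:
  ports:
  - name: http
    port: 8080               # regular service traffic, protected by mTLS
---
kind: Deployment
metadata:
  name: recommendation-v1
spec:
...
    spec:
      containers:
      - env:
        ...
        ports:
        - containerPort: 8080   # service traffic
        - containerPort: 8081   # health check only (assumed port)
        livenessProbe:
          httpGet:
            path: /health
            port: 8081          # not exposed by the Service, so the kubelet can probe it without mTLS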