How to pass CKS — Kubernetes Security Specialist exam. Part 5

Arek Borucki
4 min readApr 12, 2021

CKS requires CKA (Certified Kubernetes Administrator) to be passed first; it is a mandatory prerequisite. I shared my tips on how to pass CKA and CKAD in a different post. If you have earned CKA and know how to use kubectl and the Kubernetes documentation efficiently, you can start studying for CKS. CKS is harder than the other two K8s exams. Good preparation requires deep study of native and external Kubernetes security tools and security best practices, as well as good knowledge of Kubernetes architecture, especially the API server, etcd, and the kubelet. The exam covers the following areas: gVisor, AppArmor, RBAC, Network Policies, Auditing, Falco, Trivy, Admission Controllers, CIS Benchmark, Pod Security Policies, writing secure Dockerfiles, Secrets, Privileged Pods.

  1. Episode — Network Policies
  2. Episode — gVisor
  3. Episode — Trivy
  4. Episode — AppArmor
  5. Episode — PodSecurityPolicies
  6. Episode — RBAC
  7. Episode — Dockerfile & SecurityContext

A PodSecurityPolicy is a cluster-level resource that controls security-sensitive aspects of the pod specification. The PodSecurityPolicy objects define a set of conditions that a pod must run with in order to be accepted into the system, as well as defaults for the related fields.

Pod security policy control is implemented as an optional (but recommended) admission controller. Pod security policy can be enabled in the API server configuration file.

The API server config file kube-apiserver.yaml is located on the master node under /etc/kubernetes/manifests/; we need to add PodSecurityPolicy to the API server flag --enable-admission-plugins:

vim /etc/kubernetes/manifests/kube-apiserver.yaml

# /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.100.11:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.100.11
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy # ADD
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
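Since kube-apiserver runs as a static pod, the kubelet restarts it automatically once the manifest is saved. We can confirm it came back up and that the new flag is active, for example (the label component=kube-apiserver comes from the manifest above):

kubectl -n kube-system get pods -l component=kube-apiserver
ps aux | grep kube-apiserver | grep enable-admission-plugins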

Now we can display the default pod security policy, named default-allow-all, via kubectl get psp:

kubectl get psp
NAME                PRIV   CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
default-allow-all   true   *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *

OK, let’s create our new PodSecurityPolicy. This policy prevents the creation of privileged pods. The first step is to define the policy in a YAML file:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-example
spec:
  privileged: false # Don't allow privileged pods!
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
  - '*'
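For comparison, a policy can be much stricter than this one. The following sketch (the name stricter-psp and the exact rules are my own illustrative choice) also forbids privilege escalation, root users, and writable root filesystems:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: stricter-psp
spec:
  privileged: false
  allowPrivilegeEscalation: false # also block gaining privileges at runtime
  requiredDropCapabilities:
  - ALL
  runAsUser:
    rule: MustRunAsNonRoot # containers must not run as root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  fsGroup:
    rule: MustRunAs
    ranges:
    - min: 1
      max: 65535
  readOnlyRootFilesystem: true
  volumes: # no hostPath, no '*'
  - 'configMap'
  - 'secret'
  - 'emptyDir'
  - 'persistentVolumeClaim'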

and create it via kubectl create -f policy-file.yaml.

Our PSP has no effect yet, because we also need a service account and RBAC permissions in place. Let’s create them in the namespace psp-test!

kubectl create ns psp-test
kubectl create serviceaccount -n psp-test psp-service-account

A Role or ClusterRole needs to grant access to use the desired policies. The command can look like this (use -h to generate the correct syntax):

kubectl -n psp-test create clusterrole -h
Create a ClusterRole.
Examples:
# Create a ClusterRole named "pod-reader" that allows user to perform "get", "watch" and "list" on pods
kubectl create clusterrole pod-reader --verb=get,list,watch --resource=pods
Usage:
kubectl create clusterrole NAME --verb=verb --resource=resource.group [--resource-name=resourcename]
[--dry-run=server|client|none] [options]

and build command

kubectl -n psp-test create clusterrole psp-cluster-role --verb=use \
  --resource=podsecuritypolicies --resource-name=psp-example

or it can be done via a ready-to-use YAML:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-cluster-role
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - psp-example # psp name

Then the (Cluster)Role is bound to the newly created service account via a command (first, the help):

kubectl -n psp-test create rolebinding psp-binding -h
Create a RoleBinding for a particular Role or ClusterRole.
Usage:
kubectl create rolebinding NAME --clusterrole=NAME|--role=NAME [--user=username] [--group=groupname]
[--serviceaccount=namespace:serviceaccountname] [--dry-run=server|client|none] [options]

and build command

kubectl -n psp-test create rolebinding psp-binding \
  --clusterrole=psp-cluster-role \
  --serviceaccount=psp-test:psp-service-account

or it can be done via a ready-to-use YAML:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: psp-binding
roleRef:
  kind: ClusterRole
  name: psp-cluster-role
  apiGroup: rbac.authorization.k8s.io
subjects:
# Authorize specific service accounts:
- kind: ServiceAccount
  name: psp-service-account
  namespace: psp-test

and create both of them via kubectl create -f clusterrole.yaml and kubectl create -f rolebinding.yaml!
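A quick way to confirm that the binding works is kubectl auth can-i while impersonating the service account:

kubectl -n psp-test auth can-i use podsecuritypolicy/psp-example \
  --as=system:serviceaccount:psp-test:psp-service-account

If the ClusterRole and the binding are in place, the answer should be yes.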

Our new PSP is visible via kubectl get psp:

kubectl get psp
NAME                PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
default-allow-all   true    *      RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
psp-example         false          RunAsAny   RunAsAny    RunAsAny   RunAsAny   false            *
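Finally, it is worth verifying that the new policy really blocks privileged pods. One way to do it (a sketch; privileged-pod.yaml is a hypothetical file name, and it assumes the permissive default-allow-all policy is not also granted to this service account) is to impersonate the service account and try to create a privileged pod:

# privileged-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
  namespace: psp-test
spec:
  containers:
  - name: nginx
    image: nginx
    securityContext:
      privileged: true # should be rejected by psp-example

kubectl -n psp-test --as=system:serviceaccount:psp-test:psp-service-account \
  create -f privileged-pod.yaml

If the policy is enforced, the API server should refuse the request with a "forbidden" error mentioning PodSecurityPolicy.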

Examples from this post are available on GitHub. Thank you and see you soon in the next episode! The next post will be about RBAC.
