CKS 1.22 Practice Exam Questions
Question 1
Task weight: 1%
You have access to multiple clusters from your main terminal through kubectl contexts. Write all context names into /opt/course/1/contexts, one per line.

From the kubeconfig extract the certificate of user restricted@infra-prod and write it decoded to /opt/course/1/cert.
Explanation:
Key points
- kubectl
Solution
➜ kubectl config get-contexts -o name > /opt/course/1/contexts

# In the ~/.kube/config file find the user entry:
# - name: restricted@infra-prod
#   user:
#     client-certificate-data: LS0tLS1CRUdJ...

➜ echo LS0tLS1CRUdJ... | base64 -d > /opt/course/1/cert
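As an alternative to copying the base64 string by hand, a minimal sketch (assuming the user entry in the kubeconfig is named exactly restricted@infra-prod), the certificate can be extracted with a jsonpath query:

➜ kubectl config view --raw -o jsonpath="{.users[?(@.name == 'restricted@infra-prod')].user.client-certificate-data}" | base64 -d > /opt/course/1/cert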
Question 2
Task weight: 4%
Use context:
kubectl config use-context workload-prod
Falco is installed with default configuration on node cluster1-worker1. Connect using ssh cluster1-worker1. Use it to:

- Find a Pod running image nginx which creates unwanted package management processes inside its container.
- Find a Pod running image httpd which modifies /etc/passwd.

Save the Falco logs for case 1 under /opt/course/2/falco.log in format time,container-id,container-name,user-name. No other information should be in any line. Collect the logs for at least 30 seconds.

Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0.
Explanation:
Key points
- falco
Solution
Find the container IDs and Pods via the Falco logs.
Find a Pod running image nginx which creates unwanted package management processes inside its container:

➜ ssh cluster1-worker1
➜ root@cluster1-worker1:~# grep nginx /var/log/syslog | grep -i "package management process"
Dec 13 01:03:23 cluster1-worker1 falco[28640]: 01:03:23.564163340: Error Package management process launched in container (user=root user_loginuid=-1 command=apk container_id=3ed4079e7f61 container_name=nginx image=docker.io/library/nginx:1.19.2-alpine)
...
➜ root@cluster1-worker1:~# crictl ps -id 3ed4079e7f61
CONTAINER ID   IMAGE           NAME    ...   POD ID
3ed4079e7f61   6f715d38cfe0e   nginx   ...   7a864406b9794
➜ root@cluster1-worker1:~# crictl pods -id 7a864406b9794
POD ID          ...   NAME                      NAMESPACE   ...
7a864406b9794   ...   webapi-6cfddcd6f4-ftxg4   team-blue   ...
# This container belongs to Deployment team-blue/webapi
Find a Pod running image httpd which modifies /etc/passwd:

➜ root@cluster1-worker1:~# grep httpd /var/log/syslog | grep -i "/etc/passwd"
Dec 13 06:13:20 cluster1-worker1 falco: 06:13:20.562402988: Error File below /etc opened for writing (user=root user_loginuid=-1 command=sed -i $d /etc/passwd parent=sh pcmdline=sh -c echo hacker >> /etc/passwd; sed -i '$d' /etc/passwd; true file=/etc/passwdIKpDGh program=sed gparent=<NA> ggparent=<NA> gggparent=<NA> container_id=5c781a912497 image=docker.io/library/httpd)
➜ root@cluster1-worker1:~# crictl ps -id 5c781a912497
CONTAINER ID   IMAGE           NAME    ...   POD ID
5c781a912497   f6b40f9f8ad71   httpd   ...   595af943c3245
➜ root@cluster1-worker1:~# crictl pods -id 595af943c3245
POD ID          ...   NAME                             NAMESPACE     ...
595af943c3245   ...   rating-service-68cbdf7b7-v2p6g   team-purple   ...
# This container belongs to Deployment team-purple/rating-service
Save the Falco logs for case 1 under /opt/course/2/falco.log in format time,container-id,container-name,user-name:

From the information found above, the default rule output looks like this:

(user=root user_loginuid=-1 command=apk container_id=3ed4079e7f61 container_name=nginx image=docker.io/library/nginx:1.19.2-alpine)

The task requires the format time,container-id,container-name,user-name, so the output of the Falco rule has to be changed.
➜ root@cluster1-worker1:~# vim /etc/falco/falco_rules.yaml
...
- rule: Launch Package Management Process in Container
  desc: Package management process ran inside container
  condition: >
    spawned_process
    and container
    and user.name != "_apt"
    and package_mgmt_procs
    and not package_mgmt_ancestor_procs
    and not user_known_package_manager_in_container
  output: >
#    Package management process launched in container (user=%user.name user_loginuid=%user.loginuid
#    command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
    Package management process launched in container %evt.time,%container.id,%container.name,%user.name   # remove the two lines above and add this line
  priority: ERROR
  tags: [process, mitre_persistence]
...
Restart the Falco service and watch the log output:
➜ root@cluster1-worker1:~# systemctl restart falco
➜ root@cluster1-worker1:~# tail -f /var/log/syslog | grep "Package management process launched in container"
Dec 13 06:07:57 cluster1-worker1 falco[220990]: 06:07:57.756206761: Error Package management process launched in container 06:07:57.756206761,3ed4079e7f61,nginx,root
Dec 13 06:07:57 cluster1-worker1 falco: 06:07:57.756206761: Error Package management process launched in container 06:07:57.756206761,3ed4079e7f61,nginx,root
Dec 13 06:08:02 cluster1-worker1 falco[220990]: 06:08:02.754047763: Error Package management process launched in container 06:08:02.754047763,3ed4079e7f61,nginx,root
Dec 13 06:08:02 cluster1-worker1 falco: 06:08:02.754047763: Error Package management process launched in container 06:08:02.754047763,3ed4079e7f61,nginx,root
Dec 13 06:08:07 cluster1-worker1 falco[220990]: 06:08:07.755037087: Error Package management process launched in container 06:08:07.755037087,3ed4079e7f61,nginx,root
Dec 13 06:08:07 cluster1-worker1 falco: 06:08:07.755037087: Error Package management process launched in container 06:08:07.755037087,3ed4079e7f61,nginx,root
Dec 13 06:08:12 cluster1-worker1 falco[220990]: 06:08:12.751331220: Error Package management process launched in container 06:08:12.751331220,3ed4079e7f61,nginx,root
Dec 13 06:08:12 cluster1-worker1 falco: 06:08:12.751331220: Error Package management process launched in container 06:08:12.751331220,3ed4079e7f61,nginx,root
Dec 13 06:08:17 cluster1-worker1 falco[220990]: 06:08:17.763051479: Error Package management process launched in container 06:08:17.763051479,3ed4079e7f61,nginx,root
Dec 13 06:08:17 cluster1-worker1 falco: 06:08:17.763051479: Error Package management process launched in container 06:08:17.763051479,3ed4079e7f61,nginx,root
Dec 13 06:08:22 cluster1-worker1 falco[220990]: 06:08:22.744677645: Error Package management process launched in container 06:08:22.744677645,3ed4079e7f61,nginx,root
Write the result into /opt/course/2/falco.log:

06:07:57.756206761,3ed4079e7f61,nginx,root
06:08:02.754047763,3ed4079e7f61,nginx,root
06:08:07.755037087,3ed4079e7f61,nginx,root
06:08:12.751331220,3ed4079e7f61,nginx,root
06:08:17.763051479,3ed4079e7f61,nginx,root
06:08:22.744677645,3ed4079e7f61,nginx,root
Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0.

Based on the Pod ownership found above, scale the replicas down to 0:
➜ kubectl -n team-blue scale deployment webapi --replicas=0
➜ kubectl -n team-purple scale deployment rating-service --replicas=0
Question 3
Task weight: 3%
Use context:
kubectl config use-context workload-prod
You received a list from the DevSecOps team which performed a security investigation of the k8s cluster1 (workload-prod). The list states the following about the apiserver setup:
- Accessible through a NodePort Service
Change the apiserver setup so that:
- Only accessible through a ClusterIP Service
Explanation:
Key points
- service
Solution
Remove the apiserver startup argument:
➜ ssh cluster1-master1
➜ root@cluster1-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --kubernetes-service-node-port=31000   # remove this line
Edit the kubernetes Service and change its type from NodePort to ClusterIP:
➜ kubectl edit svc kubernetes
...
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 6443
    # nodePort: 31000   # remove
  sessionAffinity: None
  type: ClusterIP        # change
...
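A quick check that the apiserver Service is no longer exposed via NodePort:

➜ kubectl get svc kubernetes -o yaml | grep -i -e "type:" -e "nodePort"
# should only show "type: ClusterIP" and no nodePort entries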
Question 4
Task weight: 8%
Use context:
kubectl config use-context workload-prod
There is Deployment container-host-hacker in Namespace team-red which mounts /run/containerd as a hostPath volume on the Node where it's running. This means that the Pod can access various data about other containers running on the same Node.

You're asked to forbid this behavior by:

- Enabling Admission Plugin PodSecurityPolicy in the apiserver
- Creating a PodSecurityPolicy named psp-mount which allows hostPath volumes only for directory /tmp
- Creating a ClusterRole named psp-mount which allows to use the new PSP
- Creating a RoleBinding named psp-mount in Namespace team-red which binds the new ClusterRole to all ServiceAccounts in the Namespace team-red

Restart the Pod of Deployment container-host-hacker afterwards to verify new creation is prevented.

NOTE: PSPs can affect the whole cluster. Should you encounter issues you can always disable the Admission Plugin again.
Explanation:
Key points
- PodSecurityPolicy (Pod Security Policies | Kubernetes docs)
Solution
Enable the admission plugin:
➜ ssh cluster1-master1
➜ root@cluster1-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy   # change: append ,PodSecurityPolicy at the end
Create the PodSecurityPolicy, copied from the official docs example and adapted:
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp-mount    # PSPs are cluster-scoped, no namespace needed
spec:
  privileged: false  # Don't allow privileged pods!
  # The rest fills in some required fields.
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  allowedHostPaths:
  - pathPrefix: /tmp
  volumes:
  - 'hostPath'
Create the ClusterRole, copied from the official docs example and adapted:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-mount
rules:
- apiGroups: ['policy']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames:
  - psp-mount
Create the RoleBinding:
➜ kubectl -n team-red create rolebinding psp-mount --clusterrole=psp-mount --group=system:serviceaccounts
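To verify the setup, a minimal sketch; impersonating the default ServiceAccount of team-red is an assumption for illustration, any ServiceAccount of that Namespace works:

➜ kubectl -n team-red auth can-i use podsecuritypolicies.policy/psp-mount --as=system:serviceaccount:team-red:default
# should answer: yes
➜ kubectl -n team-red rollout restart deployment container-host-hacker
➜ kubectl -n team-red get pod
# no new Pod should come up; the ReplicaSet events should show the denied hostPath volume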
Question 5
Task weight: 3%
Use context:
kubectl config use-context infra-prod
You're asked to evaluate specific settings of cluster2 against the CIS Benchmark recommendations. Use the tool kube-bench which is already installed on the nodes.

Connect using ssh cluster2-master1 and ssh cluster2-worker1.

On the master node ensure (correct if necessary) that the CIS recommendations are set for:

- The --profiling argument of the kube-controller-manager
- The ownership of directory /var/lib/etcd

On the worker node ensure (correct if necessary) that the CIS recommendations are set for:

- The permissions of the kubelet configuration /var/lib/kubelet/config.yaml
- The --client-ca-file argument of the kubelet
Explanation:
Key points
- kube-bench
Solution
Check cluster2-master1 with kube-bench:
➜ ssh cluster2-master1
➜ root@cluster2-master1:~# kube-bench master
...
== Summary ==
41 checks PASS
13 checks FAIL
11 checks WARN
0 checks INFO
Fix: The --profiling argument of the kube-controller-manager

# In the output of kube-bench master, look for the information about kube-controller-manager --profiling.
# It shows:
#   1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
#   on the master node and set the below parameter.
#   --profiling=false

# Edit /etc/kubernetes/manifests/kube-controller-manager.yaml
...
    - --service-cluster-ip-range=10.96.0.0/12
    - --use-service-account-credentials=true
    - --profiling=false        # add this argument
...
Fix: The ownership of directory /var/lib/etcd

# In the output of kube-bench master, look for the information about etcd.
# It shows:
#   1.1.12 On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
#   from the below command: ps -ef | grep etcd
#   Run the below command (based on the etcd data directory found above).
#   For example, chown etcd:etcd /var/lib/etcd

# Change the owner and group of the etcd data directory
➜ root@cluster2-master1:~# chown etcd:etcd /var/lib/etcd
Check cluster2-worker1 with kube-bench:
➜ ssh cluster2-worker1
➜ root@cluster2-worker1:~# kube-bench node
...
== Summary ==
13 checks PASS
10 checks FAIL
2 checks WARN
0 checks INFO
Fix: The permissions of the kubelet configuration /var/lib/kubelet/config.yaml

# In the output of kube-bench node, look for the information about the kubelet config file.
# It shows:
#   2.2.10 Run the following command (using the config file location identified in the Audit step)
#   chmod 644 /var/lib/kubelet/config.yaml

# Change the file permissions
➜ root@cluster2-worker1:~# chmod 644 /var/lib/kubelet/config.yaml
Fix: The --client-ca-file argument of the kubelet

# In the output of kube-bench node, look for the information about client-ca-file.
# It shows:
#   [PASS] 2.1.4 Ensure that the --client-ca-file argument is set as appropriate (Scored)
#   2.2.7 Run the following command to modify the file permissions of the --client-ca-file
#   2.2.8 Run the following command to modify the ownership of the --client-ca-file
# The check already passes, so nothing needs to be done.
Question 6
Task weight: 2%
(can be solved in any kubectl context)
There are four Kubernetes server binaries located at /opt/course/6/binaries. You're provided with the following verified sha512 values for these:

kube-apiserver
f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c

kube-controller-manager
60100cc725e91fe1a949e1b2d0474237844b5862556e25c2c655a33boa8225855ec5ee22fa4927e6c46a60d43a7c4403a27268f96fbb726307d1608b44f38a60

kube-proxy
52f9d8ad045f8eee1d689619ef8ceef2d86d50c75a6a332653240d7ba5b2a114aca056d9e513984ade24358c9662714973c1960c62a5cb37dd375631c8a614c6

kubelet
4be40f2440619e990897cf956c32800dc96c2c983bf64519854a3309fa5aa21827991559f9c44595098e27e6f2ee4d64a3fdec6baba8a177881f20e3ec61e26c

Delete those binaries that don't match with the sha512 values above.
Explanation:
Key points
- comparing checksums with sha512sum
Solution
This one is straightforward: compare the sha512 values and delete the binaries whose checksums don't match.
➜ sha512sum /opt/course/6/binaries/kube-apiserver | grep f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c
➜ sha512sum /opt/course/6/binaries/kube-controller-manager | grep 60100cc725e91fe1a949e1b2d0474237844b5862556e25c2c655a33boa8225855ec5ee22fa4927e6c46a60d43a7c4403a27268f96fbb726307d1608b44f38a60
➜ sha512sum /opt/course/6/binaries/kube-proxy | grep 52f9d8ad045f8eee1d689619ef8ceef2d86d50c75a6a332653240d7ba5b2a114aca056d9e513984ade24358c9662714973c1960c62a5cb37dd375631c8a614c6
➜ sha512sum /opt/course/6/binaries/kubelet | grep 4be40f2440619e990897cf956c32800dc96c2c983bf64519854a3309fa5aa21827991559f9c44595098e27e6f2ee4d64a3fdec6baba8a177881f20e3ec61e26c
The commands that print nothing indicate a checksum mismatch; delete those files:
➜ rm /opt/course/6/binaries/kube-controller-manager /opt/course/6/binaries/kubelet
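The same comparison can be scripted with sha512sum -c, a minimal sketch assuming you first put the expected sums into a (hypothetical) checksums.sha512 file in the usual "<hash>  <filename>" format:

➜ cd /opt/course/6/binaries
➜ sha512sum -c checksums.sha512
# entries reported as FAILED are the binaries that have to be deleted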
Question 7
Task weight: 6%
Use context:
kubectl config use-context infra-prod
The Open Policy Agent and Gatekeeper have been installed to, among other things, enforce blacklisting of certain image registries. Alter the existing constraint and/or template to also blacklist images from very-bad-registry.com.

Test it by creating a single Pod using image very-bad-registry.com/image in Namespace default, it shouldn't work.

You can also verify your changes by looking at the existing Deployment untrusted in Namespace default, it uses an image from the new untrusted source. The OPA constraint should throw violation messages for this one.
Explanation:
Key points
Solution
Inspect the OPA/Gatekeeper resources:
➜ kubectl get constraints
NAME                                                                  AGE
requiredlabels.constraints.gatekeeper.sh/namespace-mandatory-labels   87d

NAME                                                           AGE
blacklistimages.constraints.gatekeeper.sh/pod-trusted-images   87d

➜ kubectl get constrainttemplates
NAME              AGE
blacklistimages   87d
requiredlabels    87d
Fix: Alter the existing constraint and/or template to also blacklist images from very-bad-registry.com

➜ kubectl edit constrainttemplates blacklistimages
...
      images {
        image := input.review.object.spec.containers[_].image
        not startswith(image, "docker-fake.io/")
        not startswith(image, "google-gcr-fake.com/")
        not startswith(image, "very-bad-registry.com/")   # add this line
      }
...
Verify:
➜ kubectl run opa-test --image=very-bad-registry.com/image
Error from server ([pod-trusted-images] not trusted image!): admission webhook "validation.gatekeeper.sh" denied the request: [pod-trusted-images] not trusted image!

➜ kubectl describe blacklistimages pod-trusted-images
...
  Total Violations:  1
  Violations:
    Enforcement Action:  deny
    Kind:                Pod
    Message:             not trusted image!
    Name:                untrusted-68c4944d48-d2mzc
    Namespace:           default
Events:                  <none>
Question 8
Task weight: 3%
Use context:
kubectl config use-context workload-prod
The Kubernetes Dashboard is installed in Namespace kubernetes-dashboard and is configured to:

- Allow users to "skip login"
- Allow insecure access (HTTP without authentication)
- Allow basic authentication
- Allow access from outside the cluster

You are asked to make it more secure by:

- Deny users to "skip login"
- Deny insecure access, enforce HTTPS (self signed certificates are ok for now)
- Add the --auto-generate-certificates argument
- Enforce authentication using a token (with possibility to use RBAC)
- Allow only cluster internal access
Explanation:
Solution
Edit the kubernetes-dashboard startup arguments:
➜ kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard
...
  template:
    spec:
      containers:
      - args:
        - --namespace=kubernetes-dashboard
        - --authentication-mode=token      # change
        - --auto-generate-certificates     # add
        #- --enable-skip-login=true        # remove
        #- --enable-insecure-login         # remove
        image: kubernetesui/dashboard:v2.0.3
        imagePullPolicy: Always
        name: kubernetes-dashboard
Edit the Service:
➜ kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
...
spec:
  clusterIP: 10.107.176.19
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32513      # remove
    port: 9090
    protocol: TCP
    targetPort: 9090
  - name: https
    nodePort: 32441      # remove
    port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: ClusterIP        # change
status:
  loadBalancer: {}
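A quick check that only cluster-internal access remains:

➜ kubectl -n kubernetes-dashboard get svc kubernetes-dashboard
# TYPE should be ClusterIP and PORT(S) should no longer show any NodePorts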
Question 9
Task weight: 3%
Use context:
kubectl config use-context workload-prod
Some containers need to run more secure and restricted. There is an existing AppArmor profile located at /opt/course/9/profile for this.

- Install the AppArmor profile on Node cluster1-worker1. Connect using ssh cluster1-worker1.
- Add label security=apparmor to the Node

Create a Deployment named apparmor in Namespace default with:

- One replica of image nginx:1.19.2
- NodeSelector for security=apparmor
- Single container named c1 with the AppArmor profile enabled

The Pod might not run properly with the profile enabled. Write the logs of the Pod into /opt/course/9/logs so another team can work on getting the application running.
Explanation:
Key points
- AppArmor (Restrict a Container's Access to Resources with AppArmor | Kubernetes docs)
Solution
Install the AppArmor profile on Node cluster1-worker1. Connect using ssh cluster1-worker1:

➜ scp /opt/course/9/profile cluster1-worker1:/tmp
➜ ssh cluster1-worker1
➜ root@cluster1-worker1:~# apparmor_parser /tmp/profile
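To confirm the profile was loaded, a quick check; the profile name very-secure is an assumption taken from the annotation used further below, verify it against the actual content of /opt/course/9/profile:

➜ root@cluster1-worker1:~# apparmor_status | grep very-secure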
Add label security=apparmor to the Node:

➜ kubectl label node cluster1-worker1 security=apparmor
Create a Deployment named apparmor in Namespace default with...:

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: apparmor
  name: apparmor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apparmor
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: apparmor
      annotations:
        container.apparmor.security.beta.kubernetes.io/c1: localhost/very-secure
    spec:
      containers:
      - image: nginx:1.19.2
        name: c1
        resources: {}
      nodeSelector:
        security: apparmor
status: {}
Write the logs of the Pod into /opt/course/9/logs:

➜ kubectl logs apparmor-85c65645dc-ctwtv > /opt/course/9/logs
Question 10
Task weight: 4%
Use context:
kubectl config use-context workload-prod
Team purple wants to run some of their workloads more secure. Worker node cluster1-worker2 has container engine containerd already installed and it's configured to support the runsc/gvisor runtime.

Create a RuntimeClass named gvisor with handler runsc.

Create a Pod that uses the RuntimeClass. The Pod should be in Namespace team-purple, named gvisor-test and of image nginx:1.19.2. Make sure the Pod runs on cluster1-worker2.

Write the dmesg output of the successfully started Pod into /opt/course/10/gvisor-test-dmesg.
Explanation:
Key points
- RuntimeClass (Runtime Class | Kubernetes docs)
Solution
Create the RuntimeClass, copied from the official docs example and adapted:
apiVersion: node.k8s.io/v1  # RuntimeClass is defined in the node.k8s.io API group
kind: RuntimeClass
metadata:
  name: gvisor              # the name used to reference the RuntimeClass
  # RuntimeClass is a cluster-scoped resource
handler: runsc              # the name of the corresponding CRI configuration
Create the Pod:
- set runtimeClassName to gvisor
- make it run on cluster1-worker2
apiVersion: v1
kind: Pod
metadata:
  name: gvisor-test
  namespace: team-purple
spec:
  runtimeClassName: gvisor     # add
  nodeName: cluster1-worker2   # add
  containers:
  - name: gvisor-test
    image: nginx:1.19.2
Run dmesg inside the Pod and write the output into /opt/course/10/gvisor-test-dmesg:

➜ kubectl -n team-purple exec gvisor-test > /opt/course/10/gvisor-test-dmesg -- dmesg
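If the Pod really runs under gVisor, the captured output should mention it, a quick check:

➜ grep -i gvisor /opt/course/10/gvisor-test-dmesg
# should show a line like "Starting gVisor..."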
Question 11
Task weight: 7%
Use context:
kubectl config use-context workload-prod
There is an existing Secret called database-access in Namespace team-green.

Read the complete Secret content directly from ETCD (using etcdctl) and store it into /opt/course/11/etcd-secret-content. Write the plain and decoded Secret's value of key "pass" into /opt/course/11/database-password.
Explanation:
Key points
- etcd, Secrets (Encrypting Secret Data at Rest | Kubernetes docs)
Solution
Read the complete Secret content directly from ETCD (using etcdctl) ...:

➜ ssh cluster1-master1
➜ root@cluster1-master1:~# ETCDCTL_API=3 etcdctl \
    --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
    --key /etc/kubernetes/pki/apiserver-etcd-client.key \
    --cacert /etc/kubernetes/pki/etcd/ca.crt \
    get /registry/secrets/team-green/database-access > etcd-secret-content
➜ scp cluster1-master1:/root/etcd-secret-content /opt/course/11/etcd-secret-content
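Since the dumped content here contains the plain base64 value of the pass key (as the decode step below shows), it can be cross-checked against the API, a minimal sketch:

➜ kubectl -n team-green get secret database-access -o jsonpath='{.data.pass}' | base64 -d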
Write the plain and decoded Secret's value of key "pass" into /opt/course/11/database-password:

➜ echo Y29uZmlkZW50aWFs | base64 -d > /opt/course/11/database-password
Question 12
Task weight: 8%
Use context:
kubectl config use-context restricted@infra-prod
You're asked to investigate a possible permission escape in Namespace restricted. The context authenticates as user restricted which has only limited permissions and shouldn't be able to read Secret values.

Try to find the password-key values of the Secrets secret1, secret2 and secret3 in Namespace restricted. Write the decoded plaintext values into files /opt/course/12/secret1, /opt/course/12/secret2 and /opt/course/12/secret3.
Explanation:
Key points
Solution
First attempt:
➜ kubectl -n restricted get secrets
Error from server (Forbidden): secrets is forbidden: User "restricted" cannot list resource "secrets" in API group "" in the namespace "restricted"

➜ kubectl -n restricted get secrets -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Error from server (Forbidden): secrets is forbidden: User "restricted" cannot list resource "secrets" in API group "" in the namespace "restricted"

➜ kubectl -n restricted get all
NAME                    READY   STATUS    RESTARTS   AGE
pod1-6dcc97847d-jfrxj   1/1     Running   0          88d
pod2-5bb9f58f9-zxnrj    1/1     Running   0          88d
pod3-59967cd4c7-pxmsk   1/1     Running   0          88d
Error from server (Forbidden): replicationcontrollers is forbidden: User "restricted" cannot list resource "replicationcontrollers" in API group "" in the namespace "restricted"
Error from server (Forbidden): services is forbidden: User "restricted" cannot list resource "services" in API group "" in the namespace "restricted"
Error from server (Forbidden): daemonsets.apps is forbidden: User "restricted" cannot list resource "daemonsets" in API group "apps" in the namespace "restricted"
Error from server (Forbidden): deployments.apps is forbidden: User "restricted" cannot list resource "deployments" in API group "apps" in the namespace "restricted"
Error from server (Forbidden): replicasets.apps is forbidden: User "restricted" cannot list resource "replicasets" in API group "apps" in the namespace "restricted"
Error from server (Forbidden): statefulsets.apps is forbidden: User "restricted" cannot list resource "statefulsets" in API group "apps" in the namespace "restricted"
Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "restricted" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "restricted"
Error from server (Forbidden): cronjobs.batch is forbidden: User "restricted" cannot list resource "cronjobs" in API group "batch" in the namespace "restricted"
Error from server (Forbidden): jobs.batch is forbidden: User "restricted" cannot list resource "jobs" in API group "batch" in the namespace "restricted"

# So the user can't read Secrets directly, but it can view Pods; try to find a leak through the Pods
Find the leak through the Pods:
➜ kubectl -n restricted describe pod pod1-6dcc97847d-jfrxj
...
    Environment:  <none>
    Mounts:
      /etc/secret-volume from secret-volume (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  secret-volume:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  secret1
    Optional:    false
...
# pod1 mounts a Secret volume

➜ kubectl -n restricted exec -it pod1-6dcc97847d-jfrxj -- /bin/sh
/ # cat /etc/secret-volume/password
you-are
# this is the plaintext value of secret1

# Next, pod2
➜ kubectl -n restricted describe pod pod2-5bb9f58f9-zxnrj
...
    Environment:
      PASSWORD:  <set to the key 'password' in secret 'secret2'>  Optional: false
    Mounts:      <none>
...
# there is an environment variable PASSWORD referencing the password key of secret2

➜ kubectl -n restricted exec -it pod2-5bb9f58f9-zxnrj -- /bin/sh
/ # echo $PASSWORD
an-amazing
# this is the plaintext value of secret2

# Next, pod3
➜ kubectl -n restricted describe pod pod3-59967cd4c7-pxmsk
...
    Environment:  <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7ggd9 (ro)
...
Volumes:
  kube-api-access-7ggd9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
...
# this Pod mounts a ServiceAccount token, so the apiserver can be queried with it to fetch the Secrets

➜ kubectl -n restricted exec -it pod3-59967cd4c7-pxmsk -- /bin/sh
/ # ls /var/run/secrets/kubernetes.io/serviceaccount/
ca.crt     namespace  token
# there is a CA certificate and a token, that makes it easy

/ # TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
/ # curl -k https://kubernetes/api/v1/namespaces/restricted/secrets -H "Authorization: Bearer $TOKEN"
{
  "kind": "SecretList",
  "apiVersion": "v1",
  "metadata": {
    "resourceVersion": "45186"
  },
  "items": [
    ...
    {
      "metadata": {
        "name": "secret1",
        "namespace": "restricted",
        ...
      },
      "data": {
        "password": "eW91LWFyZQo="
      },
      "type": "Opaque"
    },
    {
      "metadata": {
        "name": "secret2",
        "namespace": "restricted",
        ...
      },
      "data": {
        "password": "YW4tYW1hemluZwo="
      },
      "type": "Opaque"
    },
    {
      "metadata": {
        "name": "secret3",
        "namespace": "restricted",
        ...
      },
      "data": {
        "password": "cEVuRXRSYVRpT24tdEVzVGVSCg=="
      },
      "type": "Opaque"
    }
  ]
}
# all Secrets retrieved in one go

➜ echo -n cEVuRXRSYVRpT24tdEVzVGVSCg== | base64 -d
# this is the plaintext value of secret3
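Finally write the decoded plaintext values into the requested files; the first two values are shown above, the third comes from decoding the base64 string of secret3:

➜ echo you-are > /opt/course/12/secret1
➜ echo an-amazing > /opt/course/12/secret2
➜ echo pEnEtRaTiOn-tEsTeR > /opt/course/12/secret3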
Question 13
Task weight: 7%
Use context:
kubectl config use-context infra-prod
There is a metadata service available at http://192.168.100.21:32000 on which Nodes can reach sensitive data, like cloud credentials for initialisation. By default, all Pods in the cluster also have access to this endpoint. The DevSecOps team has asked you to restrict access to this metadata server.

In Namespace metadata-access:

- Create a NetworkPolicy named metadata-deny which prevents egress to 192.168.100.21 for all Pods but still allows access to everything else
- Create a NetworkPolicy named metadata-allow which allows Pods having label role: metadata-accessor to access endpoint 192.168.100.21

There are existing Pods in the target Namespace with which you can test your policies, but don't change their labels.
Explanation:
Key points
Solution
Create the NetworkPolicies, copied from the official docs example and adapted:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-deny
  namespace: metadata-access
spec:
  podSelector: {}        # selects all Pods in the Namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 192.168.100.21/32
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: metadata-allow
  namespace: metadata-access
spec:
  podSelector:
    matchLabels:
      role: metadata-accessor
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 192.168.100.21/32
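The existing Pods in the Namespace can be used to test both policies, a minimal sketch; the Pod names are placeholders, pick one without and one with the role: metadata-accessor label:

➜ kubectl -n metadata-access get pod --show-labels
➜ kubectl -n metadata-access exec <pod-without-label> -- curl -m 2 http://192.168.100.21:32000   # should time out
➜ kubectl -n metadata-access exec <pod-with-label> -- curl -m 2 http://192.168.100.21:32000      # should succeed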
Question 14
Task weight: 4%
Use context:
kubectl config use-context workload-prod
There are Pods in Namespace team-yellow. A security investigation noticed that some processes running in these Pods are using the Syscall kill, which is forbidden by a Team Yellow internal policy.

Find the offending Pod(s) and remove these by reducing the replicas of the parent Deployment to 0.
Explanation:
Key points
- syscalls
Solution
Check which Pods are running:
➜ kubectl -n team-yellow get all -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE               NOMINATED NODE   READINESS GATES
pod/collector1-7585cc58cb-6mzn9   1/1     Running   0          88d   10.44.0.11   cluster1-worker1   <none>           <none>
pod/collector1-7585cc58cb-9gmsq   1/1     Running   0          88d   10.44.0.13   cluster1-worker1   <none>           <none>
pod/collector2-8556679d96-72ctk   1/1     Running   0          88d   10.44.0.12   cluster1-worker1   <none>           <none>
pod/collector3-8b58fdc88-q5ggz    1/1     Running   0          88d   10.44.0.14   cluster1-worker1   <none>           <none>
pod/collector3-8b58fdc88-qp2b9    1/1     Running   0          88d   10.44.0.15   cluster1-worker1   <none>           <none>
Log in to the node and inspect the system calls with strace:
➜ ssh cluster1-worker1
➜ root@cluster1-worker1:~# ps aux | grep collector
root       40367  0.0  0.0 702208   308 ?     Ssl  02:13   0:05 ./collector1-process
root       40471  0.0  0.0 702208   308 ?     Ssl  02:13   0:05 ./collector1-process
root       40530  0.0  0.0 702472   888 ?     Ssl  02:13   0:05 ./collector3-process
root       40694  0.0  0.0 702472   992 ?     Ssl  02:13   0:05 ./collector3-process
root       40846  0.0  0.0 702216   304 ?     Ssl  02:13   0:05 ./collector2-process
root      301954  0.0  0.0   9032   672 pts/2 S+   12:37   0:00 grep --color=auto collector
➜ root@cluster1-worker1:~# strace -p 40367
strace: Process 40367 attached
epoll_pwait(3, [], 128, 194, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
kill(666, SIGTERM)                      = -1 ESRCH (No such process)
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, [], 128, 999, NULL, 1)   = 0
epoll_pwait(3, ^Cstrace: Process 40367 detached
 <detached ...>
# Attach strace to each process in turn (abbreviated here); in the end only collector1-process issues the kill syscall
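Checking each collector process one by one can be scripted, a rough sketch, assuming the process names all contain "collector" as shown above; strace prints to stderr, hence the redirect:

➜ root@cluster1-worker1:~# for pid in $(pgrep -f collector); do echo "== PID $pid"; timeout 5 strace -p $pid 2>&1 | grep kill; done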
Scale the replicas down to 0:
➜ kubectl -n team-yellow scale deploy collector1 --replicas=0
Question 15
Task weight: 4%
Use context:
kubectl config use-context workload-prod
In Namespace team-pink there is an existing Nginx Ingress resource named secure which accepts two paths /app and /api which point to different ClusterIP Services.

From your main terminal you can connect to it using for example:

- HTTP: curl -v http://secure-ingress.test:31080/app
- HTTPS: curl -kv https://secure-ingress.test:31443/app

Right now it uses a default generated TLS certificate by the Nginx Ingress Controller.

You're asked to instead use the key and certificate provided at /opt/course/15/tls.key and /opt/course/15/tls.crt. As it's a self-signed certificate you need to use curl -k when connecting to it.
Explanation:
Key points
Solution
Check the existing Ingress:
➜ kubectl -n team-pink get ingress
NAME     CLASS   HOSTS                 ADDRESS          PORTS     AGE
secure   nginx   secure-ingress.test   192.168.100.12   80, 443   88d
Create the TLS certificate Secret:
➜ kubectl -n team-pink create secret tls secure-ingress --key=/opt/course/15/tls.key --cert=/opt/course/15/tls.crt
Edit the Ingress to use the custom certificate:
➜ kubectl -n team-pink edit ingress secure
apiVersion: networking.k8s.io/v1
kind: Ingress
...
spec:
  ingressClassName: nginx
  rules:
  - host: secure-ingress.test
    http:
      paths:
      - backend:
          service:
            name: secure-app
            port:
              number: 80
        path: /app
        pathType: Prefix
      - backend:
          service:
            name: secure-api
            port:
              number: 80
        path: /api
        pathType: Prefix
  tls:                          # add
  - hosts:                      # add
    - secure-ingress.test       # add
    secretName: secure-ingress  # add
status:
  loadBalancer:
    ingress:
    - ip: 192.168.100.12
Verify:
➜ curl -kv https://secure-ingress.test:31443/app
...
* Server certificate:
*  subject: CN=secure-ingress.test; O=secure-ingress.test
*  start date: Sep 25 18:22:10 2020 GMT
*  expire date: Sep 20 18:22:10 2040 GMT
*  issuer: CN=secure-ingress.test; O=secure-ingress.test
*  SSL certificate verify result: self signed certificate (18), continuing anyway.
...
Question 16
Task weight: 7%
Use context:
kubectl config use-context workload-prod
There is a Deployment image-verify in Namespace team-blue which runs image registry.killer.sh:5000/image-verify:v1. DevSecOps has asked you to improve this image by:

- Changing the base image to alpine:3.12
- Not installing curl
- Updating nginx to use the version constraint >=1.18.0
- Running the main process as user myuser

Do not add any new lines to the Dockerfile, just edit existing ones. The file is located at /opt/course/16/image/Dockerfile.

Tag your version as v2. You can build, tag and push using:

cd /opt/course/16/image
podman build -t registry.killer.sh:5000/image-verify:v2 .
podman run registry.killer.sh:5000/image-verify:v2 # to test your changes
podman push registry.killer.sh:5000/image-verify:v2

Make the Deployment use your updated image tag v2.
Explanation:
Key points
- building images
Solution
Edit the Dockerfile:
FROM alpine:3.12                               # change
RUN apk update && apk add vim nginx=1.18.0-r3  # change: drop curl; either adjust the nginx version pin or start an alpine container to look up the available nginx version
RUN addgroup -S myuser && adduser -S myuser -G myuser
COPY ./run.sh run.sh
RUN ["chmod", "+x", "./run.sh"]
USER myuser                                    # change
ENTRYPOINT ["/bin/sh", "./run.sh"]
Build and push the image:
podman build -t registry.killer.sh:5000/image-verify:v2 .
podman push registry.killer.sh:5000/image-verify:v2
Update the image used by the Deployment:
➜ kubectl -n team-blue set image deployment image-verify *=registry.killer.sh:5000/image-verify:v2
# or edit it directly: kubectl -n team-blue edit deployment image-verify
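A quick check that the Deployment now runs the new tag:

➜ kubectl -n team-blue get deployment image-verify -o jsonpath='{.spec.template.spec.containers[*].image}'
# should print registry.killer.sh:5000/image-verify:v2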
Question 17
Task weight: 7%
Use context:
kubectl config use-context infra-prod
Audit Logging has been enabled in the cluster with an Audit Policy located at /etc/kubernetes/audit/policy.yaml on cluster2-master1.

Change the configuration so that only one backup of the logs is stored.

Alter the Policy in a way that it only stores logs:

- From Secret resources, level Metadata
- From "system:nodes" userGroups, level RequestResponse

After you altered the Policy make sure to empty the log file so it only contains entries according to your changes, like using truncate -s 0 /etc/kubernetes/audit/logs/audit.log.

NOTE: You can use jq to render json more readable: cat data.json | jq
Explanation:
Key points
Solution
Edit the apiserver startup arguments:
➜ ssh cluster2-master1
➜ root@cluster2-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
    - --audit-log-maxbackup=1   # change
Edit the audit policy:
➜ root@cluster2-master1:~# vim /etc/kubernetes/audit/policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:

# log Secret resources audits, level Metadata
- level: Metadata
  resources:
  - group: ""
    resources: ["secrets"]

# log node related audits, level RequestResponse
- level: RequestResponse
  userGroups: ["system:nodes"]

# for everything else don't log anything
- level: None
Restart the apiserver:
➜ root@cluster2-master1:~# cd /etc/kubernetes/manifests/
➜ root@cluster2-master1:/etc/kubernetes/manifests# mv kube-apiserver.yaml ..
# after the apiserver has stopped
➜ root@cluster2-master1:/etc/kubernetes/manifests# mv ../kube-apiserver.yaml .
Verify:
# shows Secret entries
cat /etc/kubernetes/audit/logs/audit.log | grep '"resource":"secrets"' | wc -l

# confirms Secret entries are only of level Metadata
cat /etc/kubernetes/audit/logs/audit.log | grep '"resource":"secrets"' | grep -v '"level":"Metadata"' | wc -l

# shows entries that are not of level RequestResponse
cat /etc/kubernetes/audit/logs/audit.log | grep -v '"level":"RequestResponse"' | wc -l

# shows RequestResponse level entries are only for system:nodes
cat /etc/kubernetes/audit/logs/audit.log | grep '"level":"RequestResponse"' | grep -v "system:nodes" | wc -l
Question 18
Task weight: 4%
Use context:
kubectl config use-context infra-prod
Namespace security contains five Secrets of type Opaque which can be considered highly confidential. The latest Incident-Prevention-Investigation revealed that ServiceAccount p.auster had too broad access to the cluster for some time. This SA should've never had access to any Secrets in that Namespace.

Find out which Secrets in Namespace security this SA did access by looking at the Audit Logs under /opt/course/18/audit.log.

Change the password to any new string of only those Secrets that were accessed by this SA.

NOTE: You can use jq to render json more readable: cat data.json | jq
Explanation:
Key points
- audit log analysis
Solution
Take a first look at the audit log /opt/course/18/audit.log:
{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"824df58e-4986-4c60-8eb1-6d1eb89516f0","stage":"RequestReceived","requestURI":"/readyz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["192.168.102.11"],"userAgent":"kube-probe/1.19","requestReceivedTimestamp":"2020-09-24T21:25:56.713207Z","stageTimestamp":"2020-09-24T21:25:56.713207Z"}{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"824df58e-4986-4c60-8eb1-6d1eb89516f0","stage":"ResponseComplete","requestURI":"/readyz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["192.168.102.11"],"userAgent":"kube-probe/1.19","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-09-24T21:25:56.713207Z","stageTimestamp":"2020-09-24T21:25:56.714900Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:public-info-viewer\" of ClusterRole \"system:public-info-viewer\" to Group \"system:unauthenticated\""}}{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"f9e355c9-6221-4214-abc4-ba313bb07df2","stage":"RequestReceived","requestURI":"/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s","verb":"get","user":{"username":"system:kube-controller-manager","groups":["system:authenticated"]},"sourceIPs":["192.168.102.11"],"userAgent":"kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/e199641/leader-election","objectRef":{"resource":"endpoints","namespace":"kube-system","name":"kube-controller-manager","apiVersion":"v1"},"requestReceivedTimestamp":"2020-09-24T21:25:56.880803Z","stageTimestamp":"2020-09-24T21:25:56.880803Z"} ...
The log consists of JSON records and the file has thousands of lines, so it needs to be filtered down.
➜ cat /opt/course/18/audit.log | wc -l
# 4448 lines
➜ cat /opt/course/18/audit.log | grep p.auster | wc -l
# 28 lines, still a bit much, keep filtering
➜ cat /opt/course/18/audit.log | grep p.auster | grep Secret | wc -l
# 2 lines
➜ cat /opt/course/18/audit.log | grep p.auster | grep Secret | jq
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "74fd9e03-abea-4df1-b3d0-9cfeff9ad97a",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/security/secrets/vault-token",
  "verb": "get",
  "user": {
    "username": "system:serviceaccount:security:p.auster",
    "uid": "29ecb107-c0e8-4f2d-816a-b16f4391999c",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:security",
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "192.168.102.21"
  ],
  "userAgent": "curl/7.64.0",
  "objectRef": {
    "resource": "secrets",
    "namespace": "security",
    "name": "vault-token",
    "apiVersion": "v1"
  },
  "responseObject": {
    "kind": "Secret",
    "apiVersion": "v1",
    ...
    "data": {
      "password": "bERhR0hhcERKcnVuCg=="
    },
    "type": "Opaque"
  },
  ...
}
{
  "kind": "Event",
  "apiVersion": "audit.k8s.io/v1",
  "level": "RequestResponse",
  "auditID": "aed6caf9-5af0-4872-8f09-ad55974bb5e0",
  "stage": "ResponseComplete",
  "requestURI": "/api/v1/namespaces/security/secrets/mysql-admin",
  "verb": "get",
  "user": {
    "username": "system:serviceaccount:security:p.auster",
    "uid": "29ecb107-c0e8-4f2d-816a-b16f4391999c",
    "groups": [
      "system:serviceaccounts",
      "system:serviceaccounts:security",
      "system:authenticated"
    ]
  },
  "sourceIPs": [
    "192.168.102.21"
  ],
  "userAgent": "curl/7.64.0",
  "objectRef": {
    "resource": "secrets",
    "namespace": "security",
    "name": "mysql-admin",
    "apiVersion": "v1"
  },
  "responseObject": {
    "kind": "Secret",
    "apiVersion": "v1",
    ...
    "data": {
      "password": "bWdFVlBSdEpEWHBFCg=="
    },
    "type": "Opaque"
  },
  ...
}
- From the audit records found by the filtering above, the SA accessed the two Secrets vault-token and mysql-admin.
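The same result can be extracted in one go with a jq filter, a minimal sketch; the field names follow the audit event structure shown above:

➜ cat /opt/course/18/audit.log | jq -r 'select(.objectRef.resource=="secrets" and .user.username=="system:serviceaccount:security:p.auster") | .objectRef.name' | sort -u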
Change the Secrets:
➜ echo new-pass | base64
bmV3LXBhc3MK

➜ kubectl -n security edit secret vault-token
apiVersion: v1
data:
  password: bmV3LXBhc3MK   # change
kind: Secret
metadata:
  ...
  name: vault-token
  namespace: security
type: Opaque

➜ kubectl -n security edit secret mysql-admin
apiVersion: v1
data:
  password: bmV3LXBhc3MK   # change
kind: Secret
metadata:
  ...
  name: mysql-admin
  namespace: security
type: Opaque
Question 19
Task weight: 2%
Use context:
kubectl config use-context workload-prod
The Deployment immutable-deployment in Namespace team-purple should run immutable, it's created from file /opt/course/19/immutable-deployment.yaml. Even after a successful break-in, it shouldn't be possible for an attacker to modify the filesystem of the running container.

Modify the Deployment in a way that no processes inside the container can modify the local filesystem, only the /tmp directory should be writeable. Don't modify the Docker image.

Save the updated YAML under /opt/course/19/immutable-deployment-new.yaml and update the running Deployment.
Explanation:
Key points
Solution
Edit the YAML:
➜ cp /opt/course/19/immutable-deployment.yaml /opt/course/19/immutable-deployment-new.yaml
➜ vim /opt/course/19/immutable-deployment-new.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: team-purple
  name: immutable-deployment
  labels:
    app: immutable-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: immutable-deployment
  template:
    metadata:
      labels:
        app: immutable-deployment
    spec:
      containers:
      - image: busybox:1.32.0
        command: ['sh', '-c', 'tail -f /dev/null']
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:                 # add
          readOnlyRootFilesystem: true   # add
        volumeMounts:                    # add
        - name: tmp                      # add
          mountPath: /tmp                # add
      restartPolicy: Always
      volumes:                           # add
      - name: tmp                        # add
        emptyDir: {}                     # add
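The task also asks to update the running Deployment; one simple way, assuming re-applying the changed manifest is acceptable:

➜ kubectl apply -f /opt/course/19/immutable-deployment-new.yaml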
Verify:
➜ kubectl -n team-purple exec -it immutable-deployment-7cf85b8f74-z8zb9 -- /bin/sh
/ # echo 1 > /tmp/1
/ # echo 1 > /1
/bin/sh: can't create /1: Read-only file system
Question 20
Task weight: 8%
Use context:
kubectl config use-context workload-stage
The cluster is running Kubernetes 1.21.4. Update it to 1.22.1 available via the apt package manager.

Use ssh cluster3-master1 and ssh cluster3-worker1 to connect to the instances.
Explanation:
Key points
Solution
Check the nodes:
➜ kubectl get node
NAME               STATUS   ROLES                  AGE   VERSION
cluster3-master1   Ready    control-plane,master   88d   v1.21.4
cluster3-worker1   Ready    <none>                 88d   v1.21.4
Check the kubeadm, kubectl and kubelet versions on the master node and look at the upgrade plan:
➜ ssh cluster3-master1
➜ root@cluster3-master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:44:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
➜ root@cluster3-master1:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
➜ root@cluster3-master1:~# kubelet --version
Kubernetes v1.21.4

# kubeadm is already at 1.22.1, so it doesn't need to be upgraded
# run the upgrade plan: kubeadm upgrade plan (or kubeadm upgrade plan 1.22.1)
➜ root@cluster3-master1:~# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.21.4
[upgrade/versions] kubeadm version: v1.22.1
I1214 01:16:17.575626  228047 version.go:255] remote version is much newer: v1.23.0; falling back to: stable-1.22
[upgrade/versions] Target version: v1.22.4
[upgrade/versions] Latest version in the v1.21 series: v1.21.7

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     2 x v1.21.4   v1.21.7

Upgrade to the latest version in the v1.21 series:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.4    v1.21.7
kube-controller-manager   v1.21.4    v1.21.7
kube-scheduler            v1.21.4    v1.21.7
kube-proxy                v1.21.4    v1.21.7
CoreDNS                   v1.8.0     v1.8.4
etcd                      3.4.13-0   3.4.13-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.21.7

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     2 x v1.21.4   v1.22.4

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.21.4    v1.22.4
kube-controller-manager   v1.21.4    v1.22.4
kube-scheduler            v1.21.4    v1.22.4
kube-proxy                v1.21.4    v1.22.4
CoreDNS                   v1.8.0     v1.8.4
etcd                      3.4.13-0   3.5.0-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.22.4

Note: Before you can perform this upgrade, you have to update kubeadm to v1.22.4.

_____________________________________________________________________

The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
Upgrade the master node:
➜ root@cluster3-master1:~# kubeadm upgrade apply v1.22.1
...
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.1". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

# drain the Pods running on the node
➜ root@cluster3-master1:~# kubectl drain cluster3-master1 --ignore-daemonsets
# upgrade kubelet and kubectl
➜ root@cluster3-master1:~# apt-get install kubelet=1.22.1-00 kubectl=1.22.1-00
# restart kubelet
➜ root@cluster3-master1:~# systemctl daemon-reload
➜ root@cluster3-master1:~# systemctl restart kubelet
# uncordon the node
➜ root@cluster3-master1:~# kubectl uncordon cluster3-master1
Upgrade the worker node:
➜ ssh cluster3-worker1
# check the versions (omitted here)
# upgrade the node configuration
➜ root@cluster3-worker1:~# kubeadm upgrade node
# drain the Pods running on the node
➜ root@cluster3-worker1:~# kubectl drain cluster3-worker1 --ignore-daemonsets
# upgrade kubelet and kubectl
➜ root@cluster3-worker1:~# apt-get install kubelet=1.22.1-00 kubectl=1.22.1-00
# restart kubelet
➜ root@cluster3-worker1:~# systemctl daemon-reload
➜ root@cluster3-worker1:~# systemctl restart kubelet
# uncordon the node
➜ root@cluster3-worker1:~# kubectl uncordon cluster3-worker1
Verify:
➜ kubectl get node
NAME               STATUS   ROLES                  AGE   VERSION
cluster3-master1   Ready    control-plane,master   88d   v1.22.1
cluster3-worker1   Ready    <none>                 88d   v1.22.1
Question 21
Task weight: 2%
(can be solved in any kubectl context)
The Vulnerability Scanner trivy is installed on your main terminal. Use it to scan the following images for known CVEs:

- nginx:1.16.1-alpine
- k8s.gcr.io/kube-apiserver:v1.18.0
- k8s.gcr.io/kube-controller-manager:v1.18.0
- docker.io/weaveworks/weave-kube:2.7.0

Write all images that don't contain the vulnerabilities CVE-2020-10878 or CVE-2020-1967 into /opt/course/21/good-images.
Explanation:
Key points
- trivy
Solution
Scan the images with trivy:
➜ trivy image nginx:1.16.1-alpine | egrep "CVE-2020-10878|CVE-2020-1967"
| libcrypto1.1 | CVE-2020-1967 | HIGH | 1.1.1d-r2 | 1.1.1g-r0 | openssl: Segmentation fault in |
| libssl1.1    | CVE-2020-1967 |      | 1.1.1d-r2 | 1.1.1g-r0 | openssl: Segmentation fault in |
➜ trivy image k8s.gcr.io/kube-apiserver:v1.18.0 | egrep "CVE-2020-10878|CVE-2020-1967"
| | CVE-2020-10878 | | | | perl: corruption of |
➜ trivy image k8s.gcr.io/kube-controller-manager:v1.18.0 | egrep "CVE-2020-10878|CVE-2020-1967"
| | CVE-2020-10878 | | | | perl: corruption of |
➜ trivy image docker.io/weaveworks/weave-kube:2.7.0 | egrep "CVE-2020-10878|CVE-2020-1967"
# only docker.io/weaveworks/weave-kube:2.7.0 shows no hits for these CVEs
Write the answer:
➜ echo docker.io/weaveworks/weave-kube:2.7.0 > /opt/course/21/good-images
Question 22
Task weight: 3%
(can be solved in any kubectl context)
The Release Engineering Team has shared some YAML manifests and Dockerfiles with you to review. The files are located under /opt/course/22/files.

As a container security expert, you are asked to perform a manual static analysis and find out possible security issues with respect to unwanted credential exposure. Running processes as root is of no concern in this task.

Write the filenames which have issues into /opt/course/22/security-issues.

NOTE: In the Dockerfile and YAML manifests, assume that the referred files, folders, secrets and volume mounts are present. Disregard syntax or logic errors.
Explanation:
Key points
- spotting security issues
Solution
- Check which files are there:
➜ ll /opt/course/22/files
total 48
drwxr-xr-x 2 k8s k8s 4096 Dec 14 02:15 ./
drwxr-xr-x 3 k8s k8s 4096 Sep 16 08:37 ../
-rw-r--r-- 1 k8s k8s 341 Sep 16 08:37 deployment-nginx.yaml
-rw-r--r-- 1 k8s k8s 723 Sep 16 08:37 deployment-redis.yaml
-rw-r--r-- 1 k8s k8s 384 Sep 16 08:37 Dockerfile-go