Question 1

Task weight: 1%

You have access to multiple clusters from your main terminal through kubectl contexts. Write all context names into /opt/course/1/contexts, one per line.

From the kubeconfig extract the certificate of user restricted@infra-prod and write it decoded to /opt/course/1/cert.

Question analysis:

  • Key topics

    • kubectl
  • Solution

    1. kubectl config get-contexts -o name > /opt/course/1/contexts
    2. # find the entry for this user in the .kube/config file
    3. - name: restricted@infra-prod
    4. user
    5. client-certificate-data: LS0tLS1CRUdJ...
    6. echo LS0tLS1CRUdJ... | base64 -d > /opt/course/1/cert
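    • Alternative: extract the certificate with a jsonpath query instead of copying the base64 string by hand (a minimal sketch, assuming the user entry in the kubeconfig is named exactly restricted@infra-prod):

      1. # print the base64-encoded client certificate of that user and decode it in one step
      2. kubectl config view --raw -o jsonpath='{.users[?(@.name=="restricted@infra-prod")].user.client-certificate-data}' | base64 -d > /opt/course/1/cert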

Question 2

Task weight: 4%

Use context: kubectl config use-context workload-prod

Falco is installed with default configuration on node cluster1-worker1. Connect using ssh cluster1-worker1. Use it to:

  1. Find a Pod running image nginx which creates unwanted package management processes inside its container.
  2. Find a Pod running image httpd which modifies /etc/passwd.

Save the Falco logs for case 1 under /opt/course/2/falco.log in format time,container-id,container-name,user-name. No other information should be in any line. Collect the logs for at least 30 seconds.

Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0.

Question analysis:

  • Key topics

    • falco
  • Solution

    • Find the container ID and Pod via the Falco logs

      • Find a Pod running image nginx which creates unwanted package management processes inside its container

        1. ssh cluster1-worker1
        2. root@cluster1-worker1:~# grep nginx /var/log/syslog | grep -i "package management process"
        3. Dec 13 01:03:23 cluster1-worker1 falco[28640]: 01:03:23.564163340: Error Package management process launched in container (user=root user_loginuid=-1 command=apk container_id=3ed4079e7f61 container_name=nginx image=docker.io/library/nginx:1.19.2-alpine)
        4. ...
        5. root@cluster1-worker1:~# crictl ps -id 3ed4079e7f61
        6. CONTAINER ID IMAGE NAME ... POD ID
        7. 3ed4079e7f61 6f715d38cfe0e nginx ... 7a864406b9794
        8. root@cluster1-worker1:~# crictl pods -id 7a864406b9794
        9. POD ID ... NAME NAMESPACE ...
        10. 7a864406b9794 ... webapi-6cfddcd6f4-ftxg4 team-blue ...
        11. # so this container belongs to Deployment webapi in Namespace team-blue
      • Find a Pod running image httpd which modifies /etc/passwd

        1. root@cluster1-worker1:~# grep httpd /var/log/syslog | grep -i "/etc/passwd"
        2. Dec 13 06:13:20 cluster1-worker1 falco: 06:13:20.562402988: Error File below /etc opened for writing (user=root user_loginuid=-1 command=sed -i $d /etc/passwd parent=sh pcmdline=sh -c echo hacker >> /etc/passwd; sed -i '$d' /etc/passwd; true file=/etc/passwdIKpDGh program=sed gparent=<NA> ggparent=<NA> gggparent=<NA> container_id=5c781a912497 image=docker.io/library/httpd)
        3. root@cluster1-worker1:~# crictl ps -id 5c781a912497
        4. CONTAINER ID IMAGE NAME ... POD ID
        5. 5c781a912497 f6b40f9f8ad71 httpd ... 595af943c3245
        6. root@cluster1-worker1:~# crictl pods -id 595af943c3245
        7. POD ID ... NAME NAMESPACE ...
        8. 595af943c3245 ... rating-service-68cbdf7b7-v2p6g team-purple ...
        9. # so this container belongs to Deployment rating-service in Namespace team-purple
    • Save the Falco logs for case 1 under /opt/course/2/falco.log in format time,container-id,container-name,user-name...

      • Based on what we found above, the default log output has this format:

        1. (user=root user_loginuid=-1 command=apk container_id=3ed4079e7f61 container_name=nginx image=docker.io/library/nginx:1.19.2-alpine)
      • The task requires the format time,container-id,container-name,user-name, so we modify the Falco rules file

        1. root@cluster1-worker1:~# vim /etc/falco/falco_rules.yaml
        2. ...
        3. - rule: Launch Package Management Process in Container
        4. desc: Package management process ran inside container
        5. condition: >
        6. spawned_process
        7. and container
        8. and user.name != "_apt"
        9. and package_mgmt_procs
        10. and not package_mgmt_ancestor_procs
        11. and not user_known_package_manager_in_container
        12. output: >
        13. # Package management process launched in container (user=%user.name user_loginuid=%user.loginuid
        14. # command=%proc.cmdline container_id=%container.id container_name=%container.name image=%container.image.repository:%container.image.tag)
        15. Package management process launched in container %evt.time,%container.id,%container.name,%user.name # delete the two commented lines above and add this line
        16. priority: ERROR
        17. tags: [process, mitre_persistence]
        18. ...
      • Restart the Falco service and check the log output

        1. root@cluster1-worker1:~# systemctl restart falco
        2. root@cluster1-worker1:~# tail -f /var/log/syslog | grep "Package management process launched in container"
        3. Dec 13 06:07:57 cluster1-worker1 falco[220990]: 06:07:57.756206761: Error Package management process launched in container 06:07:57.756206761,3ed4079e7f61,nginx,root
        4. Dec 13 06:07:57 cluster1-worker1 falco: 06:07:57.756206761: Error Package management process launched in container 06:07:57.756206761,3ed4079e7f61,nginx,root
        5. Dec 13 06:08:02 cluster1-worker1 falco[220990]: 06:08:02.754047763: Error Package management process launched in container 06:08:02.754047763,3ed4079e7f61,nginx,root
        6. Dec 13 06:08:02 cluster1-worker1 falco: 06:08:02.754047763: Error Package management process launched in container 06:08:02.754047763,3ed4079e7f61,nginx,root
        7. Dec 13 06:08:07 cluster1-worker1 falco[220990]: 06:08:07.755037087: Error Package management process launched in container 06:08:07.755037087,3ed4079e7f61,nginx,root
        8. Dec 13 06:08:07 cluster1-worker1 falco: 06:08:07.755037087: Error Package management process launched in container 06:08:07.755037087,3ed4079e7f61,nginx,root
        9. Dec 13 06:08:12 cluster1-worker1 falco[220990]: 06:08:12.751331220: Error Package management process launched in container 06:08:12.751331220,3ed4079e7f61,nginx,root
        10. Dec 13 06:08:12 cluster1-worker1 falco: 06:08:12.751331220: Error Package management process launched in container 06:08:12.751331220,3ed4079e7f61,nginx,root
        11. Dec 13 06:08:17 cluster1-worker1 falco[220990]: 06:08:17.763051479: Error Package management process launched in container 06:08:17.763051479,3ed4079e7f61,nginx,root
        12. Dec 13 06:08:17 cluster1-worker1 falco: 06:08:17.763051479: Error Package management process launched in container 06:08:17.763051479,3ed4079e7f61,nginx,root
        13. Dec 13 06:08:22 cluster1-worker1 falco[220990]: 06:08:22.744677645: Error Package management process launched in container 06:08:22.744677645,3ed4079e7f61,nginx,root
      • Write the result into /opt/course/2/falco.log

        1. 06:07:57.756206761,3ed4079e7f61,nginx,root
        2. 06:08:02.754047763,3ed4079e7f61,nginx,root
        3. 06:08:07.755037087,3ed4079e7f61,nginx,root
        4. 06:08:12.751331220,3ed4079e7f61,nginx,root
        5. 06:08:17.763051479,3ed4079e7f61,nginx,root
        6. 06:08:22.744677645,3ed4079e7f61,nginx,root
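      • Alternative without editing the rules file: reformat the default syslog lines with awk (a minimal sketch; the field positions are taken from the default output format shown above):

        1. grep "Package management process launched in container" /var/log/syslog | awk '{
        2.   t=$6; sub(/:$/, "", t)   # field 6 is the Falco timestamp, e.g. "01:03:23.564163340:"
        3.   cid=""; cname=""; uname=""
        4.   for (i=1; i<=NF; i++) {
        5.     if ($i ~ /^container_id=/)   { cid=$i;   sub(/^container_id=/, "", cid) }
        6.     if ($i ~ /^container_name=/) { cname=$i; sub(/^container_name=/, "", cname) }
        7.     if ($i ~ /^\(user=/)         { uname=$i; sub(/^\(user=/, "", uname) }
        8.   }
        9.   print t "," cid "," cname "," uname
        10. }' > /opt/course/2/falco.log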
    • Afterwards remove the threats (both 1 and 2) by scaling the replicas of the Deployments that control the offending Pods down to 0

      • Using the Pod ownership found above, scale the replicas down to 0

        1. kubectl -n team-blue scale deployment webapi --replicas=0
        2. kubectl -n team-purple scale deployment rating-service --replicas=0

Question 3

Task weight: 3%

Use context: kubectl config use-context workload-prod

You received a list from the DevSecOps team which performed a security investigation of the k8s cluster1 (workload-prod). The list states the following about the apiserver setup:

  • Accessible through a NodePort Service

Change the apiserver setup so that:

  • Only accessible through a ClusterIP Service

Question analysis:

  • Key topics

    • service
  • Solution

    • Remove the apiserver startup argument

      1. ssh cluster1-master1
      2. root@cluster1-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
      3. - --kubernetes-service-node-port=31000 # delete this line
    • Edit the kubernetes Service and change its type from NodePort to ClusterIP

      1. kubectl edit svc kubernetes
      2. ...
      3. ports:
      4. - name: https
      5. port: 443
      6. protocol: TCP
      7. targetPort: 6443
      8. # nodePort: 31000 # delete
      9. sessionAffinity: None
      10. type: ClusterIP # change from NodePort
      11. ...
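    • Verify the change (a minimal check; the node name and old port are taken from the steps above):

      1. kubectl get svc kubernetes   # TYPE should now be ClusterIP and no node port should be listed
      2. curl -k https://cluster1-master1:31000   # connecting to the old NodePort should now fail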

Question 4

Task weight: 8%

Use context: kubectl config use-context workload-prod

There is a Deployment container-host-hacker in Namespace team-red which mounts /run/containerd as a hostPath volume on the Node where it's running. This means that the Pod can access various data about other containers running on the same Node.

You're asked to forbid this behavior by:

  1. Enabling Admission Plugin PodSecurityPolicy in the apiserver
  2. Creating a PodSecurityPolicy named psp-mount which allows hostPath volumes only for directory /tmp
  3. Creating a ClusterRole named psp-mount which allows to use the new PSP
  4. Creating a RoleBinding named psp-mount in Namespace team-red which binds the new ClusterRole to all ServiceAccounts in the Namespace team-red

Restart the Pod of Deployment container-host-hacker afterwards to verify new creation is prevented.

NOTE: PSPs can affect the whole cluster. Should you encounter issues you can always disable the Admission Plugin again.

Question analysis:

  • Key topics

  • Solution

    • Enable the admission plugin

      1. ssh cluster1-master1
      2. root@cluster1-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
      3. - --enable-admission-plugins=NodeRestriction,PodSecurityPolicy # modify: append ,PodSecurityPolicy
    • Create the PodSecurityPolicy (copy the example from the official docs and adjust it)

      1. apiVersion: policy/v1beta1
      2. kind: PodSecurityPolicy
      3. metadata:
      4.   name: psp-mount  # PSPs are cluster-scoped, so no namespace is needed
      5. spec:
      6.   privileged: false  # Don't allow privileged pods!
      7.   # The rest fills in some required fields.
      8.   seLinux:
      9.     rule: RunAsAny
      10.   supplementalGroups:
      11.     rule: RunAsAny
      12.   runAsUser:
      13.     rule: RunAsAny
      14.   fsGroup:
      15.     rule: RunAsAny
      16.   allowedHostPaths:
      17.   - pathPrefix: /tmp  # hostPath volumes are only allowed below /tmp
      18.   volumes:
      19.   - 'hostPath'
    • Create the ClusterRole (copy the example from the official docs and adjust it)

      1. apiVersion: rbac.authorization.k8s.io/v1
      2. kind: ClusterRole
      3. metadata:
      4.   name: psp-mount
      5. rules:
      6. - apiGroups: ['policy']
      7.   resources: ['podsecuritypolicies']
      8.   verbs: ['use']
      9.   resourceNames:
      10.   - psp-mount  # the name of the PSP created above
    • Create the RoleBinding

      1. kubectl -n team-red create rolebinding psp-mount --clusterrole=psp-mount --group=system:serviceaccounts
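    • Verify that new Pod creation is now prevented (a sketch; the concrete Pod name has to be looked up first):

      1. kubectl -n team-red get pod                              # find the running container-host-hacker Pod
      2. kubectl -n team-red delete pod <pod-name>                # restart it by deleting it
      3. kubectl -n team-red get pod,rs                           # no new Pod should come up
      4. kubectl -n team-red get events --sort-by=.lastTimestamp  # the ReplicaSet events should show the hostPath volume being rejected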

Question 5

Task weight: 3%

Use context: kubectl config use-context infra-prod

You're asked to evaluate specific settings of cluster2 against the CIS Benchmark recommendations. Use the tool kube-bench which is already installed on the nodes.

Connect using ssh cluster2-master1 and ssh cluster2-worker1.

On the master node ensure (correct if necessary) that the CIS recommendations are set for:

  1. The --profiling argument of the kube-controller-manager
  2. The ownership of directory /var/lib/etcd

On the worker node ensure (correct if necessary) that the CIS recommendations are set for:

  1. The permissions of the kubelet configuration /var/lib/kubelet/config.yaml
  2. The --client-ca-file argument of the kubelet

Question analysis:

  • Key topics

    • kube-bench
  • Solution

    • Check cluster2-master1 with kube-bench

      1. ssh cluster2-master1
      2. root@cluster2-master1:~# kube-bench master
      3. ...
      4. == Summary ==
      5. 41 checks PASS
      6. 13 checks FAIL
      7. 11 checks WARN
      8. 0 checks INFO
    • Fix: The --profiling argument of the kube-controller-manager

      1. # in the kube-bench master output, look for the check about kube-controller-manager --profiling
      2. # it shows:
      3. 1.3.2 Edit the Controller Manager pod specification file /etc/kubernetes/manifests/kube-controller-manager.yaml
      4. on the master node and set the below parameter.
      5. --profiling=false
      6. # edit /etc/kubernetes/manifests/kube-controller-manager.yaml
      7. ...
      8. - --service-cluster-ip-range=10.96.0.0/12
      9. - --use-service-account-credentials=true
      10. - --profiling=false # add this argument
      11. ...
    • Fix: The ownership of directory /var/lib/etcd

      1. # in the kube-bench master output, look for the etcd-related check
      2. # it shows:
      3. 1.1.12 On the etcd server node, get the etcd data directory, passed as an argument --data-dir,
      4. from the below command:
      5. ps -ef | grep etcd
      6. Run the below command (based on the etcd data directory found above).
      7. For example, chown etcd:etcd /var/lib/etcd
      8. # change the owner and group of the etcd data directory
      9. root@cluster2-master1:~# chown etcd:etcd /var/lib/etcd
    • Check cluster2-worker1 with kube-bench

      1. ssh cluster2-worker1
      2. root@cluster2-worker1:~# kube-bench node
      3. ...
      4. == Summary ==
      5. 13 checks PASS
      6. 10 checks FAIL
      7. 2 checks WARN
      8. 0 checks INFO
    • Fix: The permissions of the kubelet configuration /var/lib/kubelet/config.yaml

      1. # in the kube-bench node output, look for the check about the kubelet config file
      2. # it shows:
      3. 2.2.10 Run the following command (using the config file location identified in the Audit step)
      4. chmod 644 /var/lib/kubelet/config.yaml
      5. # fix the file permissions
      6. root@cluster2-worker1:~# chmod 644 /var/lib/kubelet/config.yaml
    • Fix: The --client-ca-file argument of the kubelet

      1. # in the kube-bench node output, look for the client-ca-file check
      2. # it shows:
      3. [PASS] 2.1.4 Ensure that the --client-ca-file argument is set as appropriate (Scored)
      4. 2.2.7 Run the following command to modify the file permissions of the --client-ca-file
      5. 2.2.8 Run the following command to modify the ownership of the --client-ca-file .
      6. # this check already passes, nothing to do
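    • Double-check the corrected settings directly on the nodes (a minimal sketch using standard tools):

      1. # on cluster2-master1
      2. ps -ef | grep kube-controller-manager | grep -- --profiling   # should show --profiling=false
      3. stat -c %U:%G /var/lib/etcd                                   # should show etcd:etcd
      4. # on cluster2-worker1
      5. stat -c %a /var/lib/kubelet/config.yaml                       # should show 644 or more restrictive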

Question 6

Task weight: 2%

    (can be solved in any kubectl context)

    There are four Kubernetes server binaries located at /opt/course/6/binaries. You're provided with the following verified sha512 values for these:

    kube-apiserver f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c

    kube-controller-manager 60100cc725e91fe1a949e1b2d0474237844b5862556e25c2c655a33boa8225855ec5ee22fa4927e6c46a60d43a7c4403a27268f96fbb726307d1608b44f38a60

    kube-proxy 52f9d8ad045f8eee1d689619ef8ceef2d86d50c75a6a332653240d7ba5b2a114aca056d9e513984ade24358c9662714973c1960c62a5cb37dd375631c8a614c6

    kubelet 4be40f2440619e990897cf956c32800dc96c2c983bf64519854a3309fa5aa21827991559f9c44595098e27e6f2ee4d64a3fdec6baba8a177881f20e3ec61e26c

    Delete those binaries that don't match with the sha512 values above.

Question analysis:

  • Key topics

    • verifying checksums with sha512sum
  • Solution

    • Straightforward: compare the sha512 values and delete the binaries whose values don't match

      1. sha512sum /opt/course/6/binaries/kube-apiserver | grep f417c0555bc0167355589dd1afe23be9bf909bf98312b1025f12015d1b58a1c62c9908c0067a7764fa35efdac7016a9efa8711a44425dd6692906a7c283f032c
      2. sha512sum /opt/course/6/binaries/kube-controller-manager | grep 60100cc725e91fe1a949e1b2d0474237844b5862556e25c2c655a33boa8225855ec5ee22fa4927e6c46a60d43a7c4403a27268f96fbb726307d1608b44f38a60
      3. sha512sum /opt/course/6/binaries/kube-proxy | grep 52f9d8ad045f8eee1d689619ef8ceef2d86d50c75a6a332653240d7ba5b2a114aca056d9e513984ade24358c9662714973c1960c62a5cb37dd375631c8a614c6
      4. sha512sum /opt/course/6/binaries/kubelet | grep 4be40f2440619e990897cf956c32800dc96c2c983bf64519854a3309fa5aa21827991559f9c44595098e27e6f2ee4d64a3fdec6baba8a177881f20e3ec61e26c
    • Commands producing no output indicate a checksum mismatch; delete those files

      1. rm /opt/course/6/binaries/kube-controller-manager /opt/course/6/binaries/kubelet

Question 7

Task weight: 6%

Use context: kubectl config use-context infra-prod

The Open Policy Agent and Gatekeeper have been installed to, among other things, enforce blacklisting of certain image registries. Alter the existing constraint and/or template to also blacklist images from very-bad-registry.com.

Test it by creating a single Pod using image very-bad-registry.com/image in Namespace default, it shouldn't work.

You can also verify your changes by looking at the existing Deployment untrusted in Namespace default, which uses an image from the new untrusted source. The OPA constraint should throw violation messages for this one.

Question analysis:

  • Key topics

  • Solution

    • Inspect the existing OPA/Gatekeeper resources

      1. kubectl get constraints
      2. NAME AGE
      3. requiredlabels.constraints.gatekeeper.sh/namespace-mandatory-labels 87d
      4. NAME AGE
      5. blacklistimages.constraints.gatekeeper.sh/pod-trusted-images 87d
      6. kubectl get constrainttemplates
      7. NAME AGE
      8. blacklistimages 87d
      9. requiredlabels 87d
    • Fix: Alter the existing constraint and/or template to also blacklist images from very-bad-registry.com

      1. kubectl edit constrainttemplates blacklistimages
      2. ...
      3. images {
      4. image := input.review.object.spec.containers[_].image
      5. not startswith(image, "docker-fake.io/")
      6. not startswith(image, "google-gcr-fake.com/")
      7. not startswith(image, "very-bad-registry.com/") # add this line
      8. }
      9. ...
    • Verify

      1. kubectl run opa-test --image=very-bad-registry.com/image
      2. Error from server ([pod-trusted-images] not trusted image!): admission webhook "validation.gatekeeper.sh" denied the request: [pod-trusted-images] not trusted image!
      3. kubectl describe blacklistimages pod-trusted-images
      4. ...
      5. Total Violations: 1
      6. Violations:
      7. Enforcement Action: deny
      8. Kind: Pod
      9. Message: not trusted image!
      10. Name: untrusted-68c4944d48-d2mzc
      11. Namespace: default
      12. Events: <none>
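    • The admission behaviour can also be tested without leaving a Pod behind by using a server-side dry run (a sketch; it assumes the Gatekeeper webhook also evaluates dry-run requests, which it does when registered without side effects):

      1. kubectl run opa-test2 --image=very-bad-registry.com/image --dry-run=server
      2. # should be denied with the same "not trusted image!" message while nothing gets created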

Question 8

Task weight: 3%

Use context: kubectl config use-context workload-prod

The Kubernetes Dashboard is installed in Namespace kubernetes-dashboard and is configured to:

  1. Allow users to "skip login"
  2. Allow insecure access (HTTP without authentication)
  3. Allow basic authentication
  4. Allow access from outside the cluster

You are asked to make it more secure by:

  1. Deny users to "skip login"
  2. Deny insecure access, enforce HTTPS (self signed certificates are ok for now)
  3. Add the --auto-generate-certificates argument
  4. Enforce authentication using a token (with possibility to use RBAC)
  5. Allow only cluster internal access

Question analysis:

  • Solution

    • Modify the kubernetes-dashboard startup arguments

      1. kubectl -n kubernetes-dashboard edit deploy kubernetes-dashboard
      2. ...
      3. template:
      4. spec:
      5. containers:
      6. - args:
      7. - --namespace=kubernetes-dashboard
      8. - --authentication-mode=token # change to token
      9. - --auto-generate-certificates # add
      10. #- --enable-skip-login=true # delete
      11. #- --enable-insecure-login # delete
      12. image: kubernetesui/dashboard:v2.0.3
      13. imagePullPolicy: Always
      14. name: kubernetes-dashboard
    • Modify the Service

      1. kubectl -n kubernetes-dashboard edit svc kubernetes-dashboard
      2. ...
      3. spec:
      4. clusterIP: 10.107.176.19
      5. externalTrafficPolicy: Cluster
      6. ports:
      7. - name: http
      8. nodePort: 32513 # delete
      9. port: 9090
      10. protocol: TCP
      11. targetPort: 9090
      12. - name: https
      13. nodePort: 32441 # delete
      14. port: 443
      15. protocol: TCP
      16. targetPort: 8443
      17. selector:
      18. k8s-app: kubernetes-dashboard
      19. sessionAffinity: None
      20. type: ClusterIP # change from NodePort
      21. status:
      22. loadBalancer: {}
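    • Verify the new settings (a minimal check based on the changes above):

      1. kubectl -n kubernetes-dashboard get deploy kubernetes-dashboard -o jsonpath='{.spec.template.spec.containers[0].args}'
      2. kubectl -n kubernetes-dashboard get svc kubernetes-dashboard   # TYPE should now be ClusterIP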

Question 9

Task weight: 3%

Use context: kubectl config use-context workload-prod

Some containers need to run more secure and restricted. There is an existing AppArmor profile located at /opt/course/9/profile for this.

  1. Install the AppArmor profile on Node cluster1-worker1. Connect using ssh cluster1-worker1.
  2. Add label security=apparmor to the Node
  3. Create a Deployment named apparmor in Namespace default with:

    • One replica of image nginx:1.19.2
    • NodeSelector for security=apparmor
    • Single container named c1 with the AppArmor profile enabled

    The Pod might not run properly with the profile enabled. Write the logs of the Pod into /opt/course/9/logs so another team can work on getting the application running.

Question analysis:

  • Key topics

  • Solution

    • Install the AppArmor profile on Node cluster1-worker1. Connect using ssh cluster1-worker1

      1. scp /opt/course/9/profile cluster1-worker1:/tmp
      2. ssh cluster1-worker1
      3. root@cluster1-worker1:~# apparmor_parser /tmp/profile
    • Add label security=apparmor to the Node

      1. kubectl label node cluster1-worker1 security=apparmor
    • Create a Deployment named apparmor in Namespace default with...

      1. apiVersion: apps/v1
      2. kind: Deployment
      3. metadata:
      4.   creationTimestamp: null
      5.   labels:
      6.     app: apparmor
      7.   name: apparmor
      8. spec:
      9.   replicas: 1
      10.   selector:
      11.     matchLabels:
      12.       app: apparmor
      13.   strategy: {}
      14.   template:
      15.     metadata:
      16.       creationTimestamp: null
      17.       labels:
      18.         app: apparmor
      19.       annotations:
      20.         container.apparmor.security.beta.kubernetes.io/c1: localhost/very-secure
      21.     spec:
      22.       containers:
      23.       - image: nginx:1.19.2
      24.         name: c1
      25.         resources: {}
      26.       nodeSelector:
      27.         security: apparmor
      28. status: {}
    • Write the logs of the Pod into /opt/course/9/logs

      1. kubectl logs apparmor-85c65645dc-ctwtv > /opt/course/9/logs
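    • Optionally confirm the profile is in place (a sketch; the profile name very-secure is taken from the annotation above):

      1. ssh cluster1-worker1 'aa-status | grep very-secure'   # the profile should be loaded on the node
      2. kubectl get pod -l app=apparmor                        # the Pod may be failing exactly because the profile restricts nginx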

Question 10

Task weight: 4%

Use context: kubectl config use-context workload-prod

Team purple wants to run some of their workloads more securely. Worker node cluster1-worker2 has the container engine containerd already installed, and it's configured to support the runsc/gvisor runtime.

Create a RuntimeClass named gvisor with handler runsc.

Create a Pod that uses the RuntimeClass. The Pod should be in Namespace team-purple, named gvisor-test and of image nginx:1.19.2. Make sure the Pod runs on cluster1-worker2.

Write the dmesg output of the successfully started Pod into /opt/course/10/gvisor-test-dmesg.

Question analysis:

  • Key topics

  • Solution

    • Create the RuntimeClass (copy the example from the official docs and adjust it)

      1. apiVersion: node.k8s.io/v1  # RuntimeClass is defined in the node.k8s.io API group
      2. kind: RuntimeClass
      3. metadata:
      4.   name: gvisor  # the name used to reference this RuntimeClass
      5.   # RuntimeClass is a cluster-scoped resource
      6. handler: runsc  # the name of the corresponding CRI configuration
    • Create the Pod

      • Set runtimeClassName to gvisor
      • Make sure it runs on cluster1-worker2
      1. apiVersion: v1
      2. kind: Pod
      3. metadata:
      4.   name: gvisor-test
      5.   namespace: team-purple
      6. spec:
      7.   runtimeClassName: gvisor      # add
      8.   nodeName: cluster1-worker2    # add
      9.   containers:
      10.   - name: gvisor-test
      11.     image: nginx:1.19.2
    • Run dmesg in the Pod and write the output to /opt/course/10/gvisor-test-dmesg

      1. kubectl -n team-purple exec gvisor-test > /opt/course/10/gvisor-test-dmesg -- dmesg
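    • Optionally confirm the Pod really runs under gVisor (a minimal check based on the files created above):

      1. kubectl -n team-purple get pod gvisor-test -o wide   # should be scheduled on cluster1-worker2
      2. grep -i gvisor /opt/course/10/gvisor-test-dmesg      # the gVisor kernel announces itself in dmesg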

Question 11

Task weight: 7%

Use context: kubectl config use-context workload-prod

There is an existing Secret called database-access in Namespace team-green.

Read the complete Secret content directly from ETCD (using etcdctl) and store it into /opt/course/11/etcd-secret-content. Write the plain and decoded Secret's value of key "pass" into /opt/course/11/database-password.

Question analysis:

  • Key topics

  • Solution

    • Read the complete Secret content directly from ETCD (using etcdctl) ...

      1. ssh cluster1-master1
      2. root@cluster1-master1:~# ETCDCTL_API=3 etcdctl \
      3. --cert /etc/kubernetes/pki/apiserver-etcd-client.crt \
      4. --key /etc/kubernetes/pki/apiserver-etcd-client.key \
      5. --cacert /etc/kubernetes/pki/etcd/ca.crt get /registry/secrets/team-green/database-access > etcd-secret-content
      6. scp cluster1-master1:/root/etcd-secret-content /opt/course/11/etcd-secret-content
    • Write the plain and decoded Secret's value of key "pass" into /opt/course/11/database-password

      1. echo Y29uZmlkZW50aWFs | base64 -d > /opt/course/11/database-password
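    • Optionally cross-check the decoded value against the API from the main terminal (a minimal sketch):

      1. kubectl -n team-green get secret database-access -o jsonpath='{.data.pass}' | base64 -d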

Question 12

Task weight: 8%

Use context: kubectl config use-context restricted@infra-prod

You're asked to investigate a possible permission escape in Namespace restricted. The context authenticates as user restricted which has only limited permissions and shouldn't be able to read Secret values.

Try to find the password-key values of the Secrets secret1, secret2 and secret3 in Namespace restricted. Write the decoded plaintext values into files /opt/course/12/secret1, /opt/course/12/secret2 and /opt/course/12/secret3.

Question analysis:

  • Key topics

  • Solution

    • First attempts

      1. kubectl -n restricted get secrets
      2. Error from server (Forbidden): secrets is forbidden: User "restricted" cannot list resource "secrets" in API group "" in the namespace "restricted"
      3. kubectl -n restricted get secrets -o yaml
      4. apiVersion: v1
      5. items: []
      6. kind: List
      7. metadata:
      8. resourceVersion: ""
      9. selfLink: ""
      10. Error from server (Forbidden): secrets is forbidden: User "restricted" cannot list resource "secrets" in API group "" in the namespace "restricted"
      11. kubectl -n restricted get all
      12. NAME READY STATUS RESTARTS AGE
      13. pod1-6dcc97847d-jfrxj 1/1 Running 0 88d
      14. pod2-5bb9f58f9-zxnrj 1/1 Running 0 88d
      15. pod3-59967cd4c7-pxmsk 1/1 Running 0 88d
      16. Error from server (Forbidden): replicationcontrollers is forbidden: User "restricted" cannot list resource "replicationcontrollers" in API group "" in the namespace "restricted"
      17. Error from server (Forbidden): services is forbidden: User "restricted" cannot list resource "services" in API group "" in the namespace "restricted"
      18. Error from server (Forbidden): daemonsets.apps is forbidden: User "restricted" cannot list resource "daemonsets" in API group "apps" in the namespace "restricted"
      19. Error from server (Forbidden): deployments.apps is forbidden: User "restricted" cannot list resource "deployments" in API group "apps" in the namespace "restricted"
      20. Error from server (Forbidden): replicasets.apps is forbidden: User "restricted" cannot list resource "replicasets" in API group "apps" in the namespace "restricted"
      21. Error from server (Forbidden): statefulsets.apps is forbidden: User "restricted" cannot list resource "statefulsets" in API group "apps" in the namespace "restricted"
      22. Error from server (Forbidden): horizontalpodautoscalers.autoscaling is forbidden: User "restricted" cannot list resource "horizontalpodautoscalers" in API group "autoscaling" in the namespace "restricted"
      23. Error from server (Forbidden): cronjobs.batch is forbidden: User "restricted" cannot list resource "cronjobs" in API group "batch" in the namespace "restricted"
      24. Error from server (Forbidden): jobs.batch is forbidden: User "restricted" cannot list resource "jobs" in API group "batch" in the namespace "restricted"
      25. # so we cannot read Secrets directly, but we can list Pods; try to find a leak through the Pods
    • Find the leak through the Pods

      1. kubectl -n restricted describe pod pod1-6dcc97847d-jfrxj
      2. ...
      3. Environment: <none>
      4. Mounts:
      5. /etc/secret-volume from secret-volume (ro)
      6. Conditions:
      7. Type Status
      8. Initialized True
      9. Ready True
      10. ContainersReady True
      11. PodScheduled True
      12. Volumes:
      13. secret-volume:
      14. Type: Secret (a volume populated by a Secret)
      15. SecretName: secret1
      16. Optional: false
      17. ...
      18. # pod1 mounts a Secret volume
      19. k8s@terminal:~$ kubectl -n restricted exec -it pod1-6dcc97847d-jfrxj -- /bin/sh
      20. / # cat /etc/secret-volume/password
      21. you-are
      22. # this is the plaintext value of secret1
      23. # next, look at pod2
      24. kubectl -n restricted describe pod pod2-5bb9f58f9-zxnrj
      25. ...
      26. Environment:
      27. PASSWORD: <set to the key 'password' in secret 'secret2'> Optional: false
      28. Mounts: <none>
      29. ...
      30. # there is an environment variable PASSWORD referencing the password key of secret2
      31. kubectl -n restricted exec -it pod2-5bb9f58f9-zxnrj -- /bin/sh
      32. / # echo $PASSWORD
      33. an-amazing
      34. # this is the plaintext value of secret2
      35. # next, look at pod3
      36. kubectl -n restricted describe pod pod3-59967cd4c7-pxmsk
      37. ...
      38. Environment: <none>
      39. Mounts:
      40. /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7ggd9 (ro)
      41. ...
      42. Volumes:
      43. kube-api-access-7ggd9:
      44. Type: Projected (a volume that contains injected data from multiple sources)
      45. TokenExpirationSeconds: 3607
      46. ConfigMapName: kube-root-ca.crt
      47. ConfigMapOptional: <nil>
      48. DownwardAPI: true
      49. ...
      50. # this Pod has the ServiceAccount CA certificate and token mounted, so we can query the apiserver for the Secrets
      51. kubectl -n restricted exec -it pod3-59967cd4c7-pxmsk -- /bin/sh
      52. / # ls /var/run/secrets/kubernetes.io/serviceaccount/
      53. ca.crt namespace token
      54. # with the CA certificate and token available, this is straightforward
      55. TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
      56. / # curl -k https://kubernetes/api/v1/namespaces/restricted/secrets -H "Authorization: Bearer $TOKEN"
      57. {
      58. "kind": "SecretList",
      59. "apiVersion": "v1",
      60. "metadata": {
      61. "resourceVersion": "45186"
      62. },
      63. "items": [
      64. ...
      65. {
      66. "metadata": {
      67. "name": "secret1",
      68. "namespace": "restricted",
      69. ...
      70. },
      71. "data": {
      72. "password": "eW91LWFyZQo="
      73. },
      74. "type": "Opaque"
      75. },
      76. {
      77. "metadata": {
      78. "name": "secret2",
      79. "namespace": "restricted",
      80. ...
      81. },
      82. "data": {
      83. "password": "YW4tYW1hemluZwo="
      84. },
      85. "type": "Opaque"
      86. },
      87. {
      88. "metadata": {
      89. "name": "secret3",
      90. "namespace": "restricted",
      91. ...
      92. },
      93. "data": {
      94. "password": "cEVuRXRSYVRpT24tdEVzVGVSCg=="
      95. },
      96. "type": "Opaque"
      97. }
      98. ]
      99. }
      100. # all three Secrets are returned at once
      101. echo -n cEVuRXRSYVRpT24tdEVzVGVSCg== | base64 -d
      102. # this is the plaintext value of secret3
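    • Write the three decoded values into the answer files (based on the values found above):

      1. echo you-are > /opt/course/12/secret1
      2. echo an-amazing > /opt/course/12/secret2
      3. echo cEVuRXRSYVRpT24tdEVzVGVSCg== | base64 -d > /opt/course/12/secret3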

Question 13

Task weight: 7%

Use context: kubectl config use-context infra-prod

There is a metadata service available at http://192.168.100.21:32000 on which Nodes can reach sensitive data, like cloud credentials for initialisation. By default, all Pods in the cluster also have access to this endpoint. The DevSecOps team has asked you to restrict access to this metadata server.

In Namespace metadata-access:

  • Create a NetworkPolicy named metadata-deny which prevents egress to 192.168.100.21 for all Pods but still allows access to everything else
  • Create a NetworkPolicy named metadata-allow which allows Pods having label role: metadata-accessor to access endpoint 192.168.100.21

There are existing Pods in the target Namespace with which you can test your policies, but don't change their labels.

Question analysis:

  • Key topics

  • Solution

    • Create the NetworkPolicies (copy the example from the official docs and adjust it)

      1. apiVersion: networking.k8s.io/v1
      2. kind: NetworkPolicy
      3. metadata:
      4.   name: metadata-deny
      5.   namespace: metadata-access
      6. spec:
      7.   podSelector: {}   # empty selector: the policy applies to all Pods in the Namespace
      8.   policyTypes:
      9.   - Egress
      10.   egress:
      11.   - to:
      12.     - ipBlock:
      13.         cidr: 0.0.0.0/0
      14.         except:
      15.         - 192.168.100.21/32
      16. ---
      17. apiVersion: networking.k8s.io/v1
      18. kind: NetworkPolicy
      19. metadata:
      20.   name: metadata-allow
      21.   namespace: metadata-access
      22. spec:
      23.   podSelector:
      24.     matchLabels:
      25.       role: metadata-accessor
      26.   policyTypes:
      27.   - Egress
      28.   egress:
      29.   - to:
      30.     - ipBlock:
      31.         cidr: 192.168.100.21/32
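    • Test the policies with the existing Pods (a sketch; the Pod names here are placeholders, look them up with kubectl -n metadata-access get pod --show-labels):

      1. # a Pod without the role=metadata-accessor label should be blocked
      2. kubectl -n metadata-access exec <some-pod> -- curl -m 2 http://192.168.100.21:32000
      3. # a Pod carrying the label should still get through
      4. kubectl -n metadata-access exec <labeled-pod> -- curl -m 2 http://192.168.100.21:32000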

Question 14

Task weight: 4%

Use context: kubectl config use-context workload-prod

There are Pods in Namespace team-yellow. A security investigation noticed that some processes running in these Pods are using the Syscall kill, which is forbidden by a Team Yellow internal policy.

Find the offending Pod(s) and remove these by reducing the replicas of the parent Deployment to 0.

Question analysis:

  • Key topics

    • syscalls
  • Solution

    • Check which Pods are running

      1. kubectl -n team-yellow get all -o wide
      2. NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
      3. pod/collector1-7585cc58cb-6mzn9 1/1 Running 0 88d 10.44.0.11 cluster1-worker1 <none> <none>
      4. pod/collector1-7585cc58cb-9gmsq 1/1 Running 0 88d 10.44.0.13 cluster1-worker1 <none> <none>
      5. pod/collector2-8556679d96-72ctk 1/1 Running 0 88d 10.44.0.12 cluster1-worker1 <none> <none>
      6. pod/collector3-8b58fdc88-q5ggz 1/1 Running 0 88d 10.44.0.14 cluster1-worker1 <none> <none>
      7. pod/collector3-8b58fdc88-qp2b9 1/1 Running 0 88d 10.44.0.15 cluster1-worker1 <none> <none>
    • Log in to the node and inspect the syscalls with strace

      1. ssh cluster1-worker1
      2. root@cluster1-worker1:~# ps aux | grep collector
      3. root 40367 0.0 0.0 702208 308 ? Ssl 02:13 0:05 ./collector1-process
      4. root 40471 0.0 0.0 702208 308 ? Ssl 02:13 0:05 ./collector1-process
      5. root 40530 0.0 0.0 702472 888 ? Ssl 02:13 0:05 ./collector3-process
      6. root 40694 0.0 0.0 702472 992 ? Ssl 02:13 0:05 ./collector3-process
      7. root 40846 0.0 0.0 702216 304 ? Ssl 02:13 0:05 ./collector2-process
      8. root 301954 0.0 0.0 9032 672 pts/2 S+ 12:37 0:00 grep --color=auto collector
      9. root@cluster1-worker1:~# strace -p 40367
      10. strace: Process 40367 attached
      11. epoll_pwait(3, [], 128, 194, NULL, 1) = 0
      12. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      13. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      14. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      15. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      16. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      17. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      18. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      19. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      20. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      21. kill(666, SIGTERM) = -1 ESRCH (No such process)
      22. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      23. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      24. epoll_pwait(3, [], 128, 999, NULL, 1) = 0
      25. epoll_pwait(3, ^Cstrace: Process 40367 detached
      26. <detached ...>
      27. # attach strace to each process in turn (most output omitted here); only the collector1-process processes issue the kill syscall
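    • A quicker sweep that attaches strace to every collector process and filters for the kill syscall only (a sketch; the process names are taken from the ps output above):

      1. for pid in $(pgrep -f collector); do
      2.   echo "== PID $pid ($(cat /proc/$pid/comm)) =="
      3.   timeout 10 strace -f -e trace=kill -p "$pid" 2>&1 | grep -v "^strace:"
      4. done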
    • Scale the replicas down to 0

      1. kubectl -n team-yellow scale deploy collector1 --replicas=0

Question 15

Task weight: 4%

Use context: kubectl config use-context workload-prod

In Namespace team-pink there is an existing Nginx Ingress resource named secure which accepts two paths, /app and /api, pointing to different ClusterIP Services.

From your main terminal you can connect to it using for example:

  • HTTP: curl -v http://secure-ingress.test:31080/app
  • HTTPS: curl -kv https://secure-ingress.test:31443/app

Right now it uses a default generated TLS certificate by the Nginx Ingress Controller.

You're asked to instead use the key and certificate provided at /opt/course/15/tls.key and /opt/course/15/tls.crt. As it's a self-signed certificate you need to use curl -k when connecting to it.

Question analysis:

  • Key topics

  • Solution

    • Inspect the existing Ingress

      1. kubectl -n team-pink get ingress
      2. NAME CLASS HOSTS ADDRESS PORTS AGE
      3. secure nginx secure-ingress.test 192.168.100.12 80, 443 88d
    • Create the TLS Secret from the given key and certificate

      1. kubectl -n team-pink create secret tls secure-ingress --key=/opt/course/15/tls.key --cert=/opt/course/15/tls.crt
    • Edit the Ingress to use the custom certificate

      1. kubectl -n team-pink edit ingress secure
      2. apiVersion: networking.k8s.io/v1
      3. kind: Ingress
      4. ...
      5. spec:
      6. ingressClassName: nginx
      7. rules:
      8. - host: secure-ingress.test
      9. http:
      10. paths:
      11. - backend:
      12. service:
      13. name: secure-app
      14. port:
      15. number: 80
      16. path: /app
      17. pathType: Prefix
      18. - backend:
      19. service:
      20. name: secure-api
      21. port:
      22. number: 80
      23. path: /api
      24. pathType: Prefix
      25. tls: # add
      26. - hosts: # add
      27. - secure-ingress.test # add
      28. secretName: secure-ingress # add
      29. status:
      30. loadBalancer:
      31. ingress:
      32. - ip: 192.168.100.12
    • Verify

      1. curl -kv https://secure-ingress.test:31443/app
      2. ...
      3. * Server certificate:
      4. * subject: CN=secure-ingress.test; O=secure-ingress.test
      5. * start date: Sep 25 18:22:10 2020 GMT
      6. * expire date: Sep 20 18:22:10 2040 GMT
      7. * issuer: CN=secure-ingress.test; O=secure-ingress.test
      8. * SSL certificate verify result: self signed certificate (18), continuing anyway.
      9. ...

Question 16

Task weight: 7%

Use context: kubectl config use-context workload-prod

There is a Deployment image-verify in Namespace team-blue which runs image registry.killer.sh:5000/image-verify:v1. DevSecOps has asked you to improve this image by:

  1. Changing the base image to alpine:3.12
  2. Not installing curl
  3. Updating nginx to use the version constraint >=1.18.0
  4. Running the main process as user myuser

Do not add any new lines to the Dockerfile, just edit existing ones. The file is located at /opt/course/16/image/Dockerfile.

Tag your version as v2. You can build, tag and push using:

  1. cd /opt/course/16/image
  2. podman build -t registry.killer.sh:5000/image-verify:v2 .
  3. podman run registry.killer.sh:5000/image-verify:v2 # to test your changes
  4. podman push registry.killer.sh:5000/image-verify:v2

Make the Deployment use your updated image tag v2.

Question analysis:

  • Key topics

    • building container images
  • Solution

    • Modify the Dockerfile

      1. FROM alpine:3.12 # change the base image
      2. RUN apk update && apk add vim nginx=1.18.0-r3 # change: drop curl; pin an nginx version >=1.18.0 (or drop the pin); available versions can be checked in a throwaway alpine container
      3. RUN addgroup -S myuser && adduser -S myuser -G myuser
      4. COPY ./run.sh run.sh
      5. RUN ["chmod", "+x", "./run.sh"]
      6. USER myuser # change so the main process runs as myuser
      7. ENTRYPOINT ["/bin/sh", "./run.sh"]
    • Build and push the image

      1. podman build -t registry.killer.sh:5000/image-verify:v2 .
      2. podman push registry.killer.sh:5000/image-verify:v2
    • Update the Deployment to the new image tag

      1. kubectl -n team-blue set image deployment image-verify *=registry.killer.sh:5000/image-verify:v2
      2. # or edit it directly with kubectl -n team-blue edit deployment image-verify
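    • Optionally verify the rollout and the new runtime user (a sketch; kubectl exec on a Deployment picks one of its Pods):

      1. kubectl -n team-blue rollout status deploy image-verify
      2. kubectl -n team-blue get deploy image-verify -o jsonpath='{.spec.template.spec.containers[0].image}'
      3. kubectl -n team-blue exec deploy/image-verify -- id   # should show myuser, not root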

Question 17

Task weight: 7%

Use context: kubectl config use-context infra-prod

Audit Logging has been enabled in the cluster with an Audit Policy located at /etc/kubernetes/audit/policy.yaml on cluster2-master1.

Change the configuration so that only one backup of the logs is stored.

Alter the Policy in a way that it only stores logs:

  1. From Secret resources, level Metadata
  2. From "system:nodes" userGroups, level RequestResponse

After you altered the Policy make sure to empty the log file so it only contains entries according to your changes, like using truncate -s 0 /etc/kubernetes/audit/logs/audit.log.

NOTE: You can use jq to render json more readable. cat data.json | jq

Question analysis:

  • Key topics

  • Solution

    • Modify the apiserver startup arguments

      1. ssh cluster2-master1
      2. root@cluster2-master1:~# vim /etc/kubernetes/manifests/kube-apiserver.yaml
      3. - --audit-log-maxbackup=1 # change so only one backup is kept
    • Modify the audit Policy

      1. root@cluster2-master1:~# vim /etc/kubernetes/audit/policy.yaml
      2. apiVersion: audit.k8s.io/v1
      3. kind: Policy
      4. rules:
      5. # log Secret resources audits, level Metadata
      6. - level: Metadata
      7.   resources:
      8.   - group: ""
      9.     resources: ["secrets"]
      10. # log node related audits, level RequestResponse
      11. - level: RequestResponse
      12.   userGroups: ["system:nodes"]
      13. # for everything else don't log anything
      14. - level: None
    • Restart the apiserver

      1. root@cluster2-master1:~# cd /etc/kubernetes/manifests/
      2. root@cluster2-master1:/etc/kubernetes/manifests# mv kube-apiserver.yaml ..
      3. # wait until the apiserver has stopped, then move the manifest back
      4. root@cluster2-master1:/etc/kubernetes/manifests# mv ../kube-apiserver.yaml .
    • Verify

      1. # shows Secret entries
      2. cat /etc/kubernetes/audit/logs/audit.log | grep '"resource":"secrets"' | wc -l
      3. # confirms Secret entries are only of level Metadata
      4. cat /etc/kubernetes/audit/logs/audit.log | grep '"resource":"secrets"' | grep -v '"level":"Metadata"' | wc -l
      5. # counts entries that are not RequestResponse (these should only be the Metadata Secret entries)
      6. cat /etc/kubernetes/audit/logs/audit.log | grep -v '"level":"RequestResponse"' | wc -l
      7. # shows RequestResponse level entries are only for system:nodes
      8. cat /etc/kubernetes/audit/logs/audit.log | grep '"level":"RequestResponse"' | grep -v "system:nodes" | wc -l
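    • The same checks with jq (a sketch; it assumes one JSON event per line, as in the log above):

      1. # count events per level; only Metadata and RequestResponse should appear
      2. cat /etc/kubernetes/audit/logs/audit.log | jq -r .level | sort | uniq -c
      3. # Secret events must all be at level Metadata
      4. cat /etc/kubernetes/audit/logs/audit.log | jq -r 'select(.objectRef.resource=="secrets") | .level' | sort -u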

Question 18

Task weight: 4%

Use context: kubectl config use-context infra-prod

Namespace security contains five Secrets of type Opaque which can be considered highly confidential. The latest Incident-Prevention-Investigation revealed that ServiceAccount p.auster had too broad access to the cluster for some time. This SA should've never had access to any Secrets in that Namespace.

Find out which Secrets in Namespace security this SA did access by looking at the Audit Logs under /opt/course/18/audit.log.

Change the password to any new string of only those Secrets that were accessed by this SA.

NOTE: You can use jq to render json more readable. cat data.json | jq

Question analysis:

  • Key topics

    • audit log analysis
  • Solution

    • Take a first look at the audit log /opt/course/18/audit.log

      1. {"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"824df58e-4986-4c60-8eb1-6d1eb89516f0","stage":"RequestReceived","requestURI":"/readyz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["192.168.102.11"],"userAgent":"kube-probe/1.19","requestReceivedTimestamp":"2020-09-24T21:25:56.713207Z","stageTimestamp":"2020-09-24T21:25:56.713207Z"}{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"824df58e-4986-4c60-8eb1-6d1eb89516f0","stage":"ResponseComplete","requestURI":"/readyz","verb":"get","user":{"username":"system:anonymous","groups":["system:unauthenticated"]},"sourceIPs":["192.168.102.11"],"userAgent":"kube-probe/1.19","responseStatus":{"metadata":{},"code":200},"requestReceivedTimestamp":"2020-09-24T21:25:56.713207Z","stageTimestamp":"2020-09-24T21:25:56.714900Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":"RBAC: allowed by ClusterRoleBinding \"system:public-info-viewer\" of ClusterRole \"system:public-info-viewer\" to Group \"system:unauthenticated\""}}{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"f9e355c9-6221-4214-abc4-ba313bb07df2","stage":"RequestReceived","requestURI":"/api/v1/namespaces/kube-system/endpoints/kube-controller-manager?timeout=10s","verb":"get","user":{"username":"system:kube-controller-manager","groups":["system:authenticated"]},"sourceIPs":["192.168.102.11"],"userAgent":"kube-controller-manager/v1.19.0 (linux/amd64) kubernetes/e199641/leader-election","objectRef":{"resource":"endpoints","namespace":"kube-system","name":"kube-controller-manager","apiVersion":"v1"},"requestReceivedTimestamp":"2020-09-24T21:25:56.880803Z","stageTimestamp":"2020-09-24T21:25:56.880803Z"}
      2. ...
    • The entries are JSON and the file has a few thousand lines, so some filtering is needed

      1. cat /opt/course/18/audit.log | wc -l # 4448 lines
      2. cat /opt/course/18/audit.log | grep p.auster | wc -l # 28 lines, still too many, keep filtering
      3. cat /opt/course/18/audit.log | grep p.auster | grep Secret | wc -l # 2 lines
      4. cat /opt/course/18/audit.log | grep p.auster | grep Secret | jq
      5. {
      6. "kind": "Event",
      7. "apiVersion": "audit.k8s.io/v1",
      8. "level": "RequestResponse",
      9. "auditID": "74fd9e03-abea-4df1-b3d0-9cfeff9ad97a",
      10. "stage": "ResponseComplete",
      11. "requestURI": "/api/v1/namespaces/security/secrets/vault-token",
      12. "verb": "get",
      13. "user": {
      14. "username": "system:serviceaccount:security:p.auster",
      15. "uid": "29ecb107-c0e8-4f2d-816a-b16f4391999c",
      16. "groups": [
      17. "system:serviceaccounts",
      18. "system:serviceaccounts:security",
      19. "system:authenticated"
      20. ]
      21. },
      22. "sourceIPs": [
      23. "192.168.102.21"
      24. ],
      25. "userAgent": "curl/7.64.0",
      26. "objectRef": {
      27. "resource": "secrets",
      28. "namespace": "security",
      29. "name": "vault-token",
      30. "apiVersion": "v1"
      31. },
      32. "responseObject": {
      33. "kind": "Secret",
      34. "apiVersion": "v1",
      35. ...
      36. "data": {
      37. "password": "bERhR0hhcERKcnVuCg=="
      38. },
      39. "type": "Opaque"
      40. },
      41. ...
      42. }
      43. {
      44. "kind": "Event",
      45. "apiVersion": "audit.k8s.io/v1",
      46. "level": "RequestResponse",
      47. "auditID": "aed6caf9-5af0-4872-8f09-ad55974bb5e0",
      48. "stage": "ResponseComplete",
      49. "requestURI": "/api/v1/namespaces/security/secrets/mysql-admin",
      50. "verb": "get",
      51. "user": {
      52. "username": "system:serviceaccount:security:p.auster",
      53. "uid": "29ecb107-c0e8-4f2d-816a-b16f4391999c",
      54. "groups": [
      55. "system:serviceaccounts",
      56. "system:serviceaccounts:security",
      57. "system:authenticated"
      58. ]
      59. },
      60. "sourceIPs": [
      61. "192.168.102.21"
      62. ],
      63. "userAgent": "curl/7.64.0",
      64. "objectRef": {
      65. "resource": "secrets",
      66. "namespace": "security",
      67. "name": "mysql-admin",
      68. "apiVersion": "v1"
      69. }
      70. "responseObject": {
      71. "kind": "Secret",
      72. "apiVersion": "v1",
      73. ...
      74. "data": {
      75. "password": "bWdFVlBSdEpEWHBFCg=="
      76. },
      77. "type": "Opaque"
      78. },
      79. ...
      80. }
    • The filtered audit records show that the SA accessed the two Secrets vault-token and mysql-admin
    • Change those Secrets

      1. echo new-pass | base64
      2. bmV3LXBhc3MK
      3. kubectl -n security edit secret vault-token
      4. apiVersion: v1
      5. data:
      6. password: bmV3LXBhc3MK # change
      7. kind: Secret
      8. metadata:
      9. ...
      10. name: vault-token
      11. namespace: security
      12. type: Opaque
      13. kubectl -n security edit secret mysql-admin
      14. apiVersion: v1
      15. data:
      16. password: bmV3LXBhc3MK # change
      17. kind: Secret
      18. metadata:
      19. ...
      20. name: mysql-admin
      21. namespace: security
      22. type: Opaque
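    • The same lookup and update can be done non-interactively (a sketch; the jq filter assumes one JSON event per line, and the patches use the base64 value generated above):

      1. cat /opt/course/18/audit.log | jq -r 'select(.user.username=="system:serviceaccount:security:p.auster" and .objectRef.resource=="secrets" and .verb=="get") | .objectRef.name' | sort -u
      2. kubectl -n security patch secret vault-token -p '{"data":{"password":"bmV3LXBhc3MK"}}'
      3. kubectl -n security patch secret mysql-admin -p '{"data":{"password":"bmV3LXBhc3MK"}}'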

Question 19

Task weight: 2%

Use context: kubectl config use-context workload-prod

The Deployment immutable-deployment in Namespace team-purple should run immutable, it's created from file /opt/course/19/immutable-deployment.yaml. Even after a successful break-in, it shouldn't be possible for an attacker to modify the filesystem of the running container.

Modify the Deployment in a way that no processes inside the container can modify the local filesystem, only /tmp directory should be writeable. Don't modify the Docker image.

Save the updated YAML under /opt/course/19/immutable-deployment-new.yaml and update the running Deployment.

Question analysis:

  • Key topics

  • Solution

    • Modify the YAML

      1. cp /opt/course/19/immutable-deployment.yaml /opt/course/19/immutable-deployment-new.yaml
      2. vim /opt/course/19/immutable-deployment-new.yaml
      3. apiVersion: apps/v1
      4. kind: Deployment
      5. metadata:
      6.   namespace: team-purple
      7.   name: immutable-deployment
      8.   labels:
      9.     app: immutable-deployment
      10. spec:
      11.   replicas: 1
      12.   selector:
      13.     matchLabels:
      14.       app: immutable-deployment
      15.   template:
      16.     metadata:
      17.       labels:
      18.         app: immutable-deployment
      19.     spec:
      20.       containers:
      21.       - image: busybox:1.32.0
      22.         command: ['sh', '-c', 'tail -f /dev/null']
      23.         imagePullPolicy: IfNotPresent
      24.         name: busybox
      25.         securityContext:                 # add
      26.           readOnlyRootFilesystem: true   # add
      27.         volumeMounts:                    # add
      28.         - name: tmp                      # add
      29.           mountPath: /tmp                # add
      30.       restartPolicy: Always
      31.       volumes:                           # add
      32.       - name: tmp                        # add
      33.         emptyDir: {}                     # add
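    • Update the running Deployment with the new spec:

      1. kubectl apply -f /opt/course/19/immutable-deployment-new.yaml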
    • Verify

      1. kubectl -n team-purple exec -it immutable-deployment-7cf85b8f74-z8zb9 -- /bin/sh
      2. / # echo 1 > /tmp/1
      3. / # echo 1 > /1
      4. /bin/sh: can't create /1: Read-only file system

Question 20

Task weight: 8%

Use context: kubectl config use-context workload-stage

The cluster is running Kubernetes 1.21.4. Update it to 1.22.1 available via apt package manager.

Use ssh cluster3-master1 and ssh cluster3-worker1 to connect to the instances.

Question analysis:

  • Key topics

  • Solution

    • Check the nodes

      1. kubectl get node
      2. NAME STATUS ROLES AGE VERSION
      3. cluster3-master1 Ready control-plane,master 88d v1.21.4
      4. cluster3-worker1 Ready <none> 88d v1.21.4
    • Check the kubeadm, kubectl and kubelet versions on the master node and review the upgrade plan

      1. ssh cluster3-master1
      2. root@cluster3-master1:~# kubeadm version
      3. kubeadm version: &version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.1", GitCommit:"632ed300f2c34f6d6d15ca4cef3d3c7073412212", GitTreeState:"clean", BuildDate:"2021-08-19T15:44:22Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
      4. root@cluster3-master1:~# kubectl version
      5. Client Version: version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.4", GitCommit:"3cce4a82b44f032d0cd1a1790e6d2f5a55d20aae", GitTreeState:"clean", BuildDate:"2021-08-11T18:16:05Z", GoVersion:"go1.16.7", Compiler:"gc", Platform:"linux/amd64"}
      6. root@cluster3-master1:~# kubelet --version
      7. Kubernetes v1.21.4
      8. # kubeadm is already at 1.22.1, so it does not need to be upgraded
      9. # run the upgrade plan: kubeadm upgrade plan (or kubeadm upgrade plan 1.22.1)
      10. root@cluster3-master1:~# kubeadm upgrade plan
      11. [upgrade/config] Making sure the configuration is correct:
      12. [upgrade/config] Reading configuration from the cluster...
      13. [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
      14. [preflight] Running pre-flight checks.
      15. [upgrade] Running cluster health checks
      16. [upgrade] Fetching available versions to upgrade to
      17. [upgrade/versions] Cluster version: v1.21.4
      18. [upgrade/versions] kubeadm version: v1.22.1
      19. I1214 01:16:17.575626 228047 version.go:255] remote version is much newer: v1.23.0; falling back to: stable-1.22
      20. [upgrade/versions] Target version: v1.22.4
      21. [upgrade/versions] Latest version in the v1.21 series: v1.21.7
      22. Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
      23. COMPONENT CURRENT TARGET
      24. kubelet 2 x v1.21.4 v1.21.7
      25. Upgrade to the latest version in the v1.21 series:
      26. COMPONENT CURRENT TARGET
      27. kube-apiserver v1.21.4 v1.21.7
      28. kube-controller-manager v1.21.4 v1.21.7
      29. kube-scheduler v1.21.4 v1.21.7
      30. kube-proxy v1.21.4 v1.21.7
      31. CoreDNS v1.8.0 v1.8.4
      32. etcd 3.4.13-0 3.4.13-0
      33. You can now apply the upgrade by executing the following command:
      34. kubeadm upgrade apply v1.21.7
      35. _____________________________________________________________________
      36. Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
      37. COMPONENT CURRENT TARGET
      38. kubelet 2 x v1.21.4 v1.22.4
      39. Upgrade to the latest stable version:
      40. COMPONENT CURRENT TARGET
      41. kube-apiserver v1.21.4 v1.22.4
      42. kube-controller-manager v1.21.4 v1.22.4
      43. kube-scheduler v1.21.4 v1.22.4
      44. kube-proxy v1.21.4 v1.22.4
      45. CoreDNS v1.8.0 v1.8.4
      46. etcd 3.4.13-0 3.5.0-0
      47. You can now apply the upgrade by executing the following command:
      48. kubeadm upgrade apply v1.22.4
      49. Note: Before you can perform this upgrade, you have to update kubeadm to v1.22.4.
      50. _____________________________________________________________________
      51. The table below shows the current state of component configs as understood by this version of kubeadm.
      52. Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
      53. resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
      54. upgrade to is denoted in the "PREFERRED VERSION" column.
      55. API GROUP CURRENT VERSION PREFERRED VERSION MANUAL UPGRADE REQUIRED
      56. kubeproxy.config.k8s.io v1alpha1 v1alpha1 no
      57. kubelet.config.k8s.io v1beta1 v1beta1 no
      58. _____________________________________________________________________
    • Upgrade the master

      1. root@cluster3-master1:~# kubeadm upgrade apply v1.22.1
      2. ...
      3. [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.22.1". Enjoy!
      4. [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
      5. # drain the node
      6. root@cluster3-master1:~# kubectl drain cluster3-master1 --ignore-daemonsets
      7. # upgrade kubelet and kubectl
      8. root@cluster3-master1:~# apt-get install kubelet=1.22.1-00 kubectl=1.22.1-00
      9. # restart kubelet
      10. root@cluster3-master1:~# systemctl daemon-reload
      11. root@cluster3-master1:~# systemctl restart kubelet
      12. # uncordon the node
      13. root@cluster3-master1:~# kubectl uncordon cluster3-master1
    • Upgrade the worker

      1. ssh cluster3-worker1
      2. # check the versions (omitted here)
      3. # upgrade the node configuration
      4. root@cluster3-worker1:~# kubeadm upgrade node
      5. # drain the node (run kubectl drain from the main terminal if the worker has no kubeconfig)
      6. root@cluster3-worker1:~# kubectl drain cluster3-worker1 --ignore-daemonsets
      7. # upgrade kubelet and kubectl
      8. root@cluster3-worker1:~# apt-get install kubelet=1.22.1-00 kubectl=1.22.1-00
      9. # restart kubelet
      10. root@cluster3-worker1:~# systemctl daemon-reload
      11. root@cluster3-worker1:~# systemctl restart kubelet
      12. # uncordon the node
      13. root@cluster3-worker1:~# kubectl uncordon cluster3-worker1
    • Verify

      1. kubectl get node
      2. NAME STATUS ROLES AGE VERSION
      3. cluster3-master1 Ready control-plane,master 88d v1.22.1
      4. cluster3-worker1 Ready <none> 88d v1.22.1

Question 21

Task weight: 2%

(can be solved in any kubectl context)

The Vulnerability Scanner trivy is installed on your main terminal. Use it to scan the following images for known CVEs:

  • nginx:1.16.1-alpine
  • k8s.gcr.io/kube-apiserver:v1.18.0
  • k8s.gcr.io/kube-controller-manager:v1.18.0
  • docker.io/weaveworks/weave-kube:2.7.0

Write all images that don't contain the vulnerabilities CVE-2020-10878 or CVE-2020-1967 into /opt/course/21/good-images.

Question analysis:

  • Key topics

    • trivy
  • Solution

    • Scan the images with trivy

      1. trivy image nginx:1.16.1-alpine | egrep "CVE-2020-10878|CVE-2020-1967"
      2. | libcrypto1.1 | CVE-2020-1967 | HIGH | 1.1.1d-r2 | 1.1.1g-r0 | openssl: Segmentation fault in |
      3. | libssl1.1 | CVE-2020-1967 | | 1.1.1d-r2 | 1.1.1g-r0 | openssl: Segmentation fault in |
      4. trivy image k8s.gcr.io/kube-apiserver:v1.18.0 | egrep "CVE-2020-10878|CVE-2020-1967"
      5. | | CVE-2020-10878 | | | | perl: corruption of |
      6. trivy image k8s.gcr.io/kube-controller-manager:v1.18.0 | egrep "CVE-2020-10878|CVE-2020-1967"
      7. | | CVE-2020-10878 | | | | perl: corruption of |
      8. trivy image docker.io/weaveworks/weave-kube:2.7.0 | egrep "CVE-2020-10878|CVE-2020-1967"
      9. # only docker.io/weaveworks/weave-kube:2.7.0 shows neither of the two CVEs
    • Write the answer

      1. echo docker.io/weaveworks/weave-kube:2.7.0 > /opt/course/21/good-images
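    • The same scan as a loop that only writes images without a match for either CVE into the answer file (a minimal sketch):

      1. for img in nginx:1.16.1-alpine k8s.gcr.io/kube-apiserver:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0 docker.io/weaveworks/weave-kube:2.7.0; do
      2.   trivy image "$img" 2>/dev/null | grep -qE "CVE-2020-10878|CVE-2020-1967" || echo "$img" >> /opt/course/21/good-images
      3. done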

Question 22

Task weight: 3%

(can be solved in any kubectl context)

The Release Engineering Team has shared some YAML manifests and Dockerfiles with you to review. The files are located under /opt/course/22/files.

As a container security expert, you are asked to perform a manual static analysis and find out possible security issues with respect to unwanted credential exposure. Running processes as root is of no concern in this task.

Write the filenames which have issues into /opt/course/22/security-issues.

NOTE: In the Dockerfile and YAML manifests, assume that the referred files, folders, secrets and volume mounts are present. Disregard syntax or logic errors.

Question analysis:

  • Key topics

    • spotting security issues in manifests and Dockerfiles
  • Solution

    • See which files there are

      1. ll /opt/course/22/files
      2. total 48
      3. drwxr-xr-x 2 k8s k8s 4096 Dec 14 02:15 ./
      4. drwxr-xr-x 3 k8s k8s 4096 Sep 16 08:37 ../
      5. -rw-r--r-- 1 k8s k8s 341 Sep 16 08:37 deployment-nginx.yaml
      6. -rw-r--r-- 1 k8s k8s 723 Sep 16 08:37 deployment-redis.yaml
      7. -rw-r--r-- 1 k8s k8s 384 Sep 16 08:37 Dockerfile-go
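    • Then read each file and look for credentials handled insecurely, e.g. passwords hard-coded in environment variables or ConfigMaps, secrets baked into a Dockerfile, or Secret values written to logs. A rough grep can point to suspicious spots, but findings still need manual review (a sketch):

      1. grep -rniE "pass|secret|token|key" /opt/course/22/files
      2. # write the names of the problematic files into the answer file, e.g.:
      3. # echo <filename> >> /opt/course/22/security-issues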

Tags: CKS
