Kubernetes 101
These notes started as slides for an introduction to Kubernetes for NUCCDC, but might be broadly useful to anyone getting into k8s for the first time.
Why use it?
(I don't need Google scale)
- Declarative management & GitOps
- Common interface for new ops people
K8S Basics
- Workloads -- containers!
- many ways to describe running a container
- Authentication and authorization
- Role-Based Access Control
- Access the K8S control plane via a REST API on port 6443 (by default). Always over TLS.
- In-cluster resources can all be represented as YAML.
- Huge opportunity to codify all of your infrastructure
- The k8s API can be extended with new object types via "Custom Resource Definitions"
- You provide an OpenAPI spec and (optionally) a container image that knows how to handle your custom resource
- A kubeconfig file details the connection address and credentials to connect to the Kubernetes API.
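A minimal sketch of what a kubeconfig looks like (the server address, cluster name, and user are made up; real files embed base64 certificate data or point at cert files):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://203.0.113.10:6443            # the API server endpoint from above
      certificate-authority-data: "<base64 CA>"    # trust anchor for TLS
users:
  - name: admin
    user:
      client-certificate-data: "<base64 cert>"     # or a bearer token
      client-key-data: "<base64 key>"
contexts:
  - name: admin@my-cluster
    context:
      cluster: my-cluster
      user: admin
current-context: admin@my-cluster
```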
RBAC
- ServiceAccount
- Role
- some built-in! cluster-admin
- RoleBinding
- ClusterRole / ClusterRoleBinding
- cluster-admin role === root on nodes
- cluster-admin role on cloud provider sometimes == root in cloud provider...
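A sketch of how these pieces fit together: a ServiceAccount that can only read Pods in one namespace (all names are illustrative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pod-reader-sa
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo
rules:
  - apiGroups: [""]               # "" = the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-binding
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: pod-reader-sa
    namespace: demo
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```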
K8S Workload Resources
- Pod
- "base unit" of a workload -- one or more containers that share network namespaces and (optionally) volumes
- Deployment
- Pod definition + replication/rollout strategy
- StatefulSet
- Pod definition + stateful identity (stable hostnames and per-replica storage)
- DaemonSet
- Run one copy of a pod on every node (optionally limited to nodes matching a selector)
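To make "Pod definition + replication strategy" concrete, a minimal Deployment sketch (image and names are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # replication/rollout settings live here...
  selector:
    matchLabels:
      app: web
  template:                        # ...and the Pod definition lives here
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27
          ports:
            - containerPort: 80
```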
K8S Configuration Resources
- ConfigMap
- key-value pairs
- can be injected into a pod as
- files (keys are filenames, values are file contents)
- environment variables (keys are envvar names, values are envvar values)
- Secret
- let’s just take a ConfigMap and base64-encode it (encoded, not encrypted -- anyone who can read the Secret can decode it)
- same injection as ConfigMaps
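A sketch of both injection styles, with made-up names and keys (the Secret value is obviously not a real credential):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "debug"          # injected below as an environment variable
  app.conf: |                 # injected below as a file
    listen 8080
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
stringData:                   # written as plaintext here; stored base64-encoded under .data
  DB_PASSWORD: "changeme"
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: nginx:1.27
      env:
        - name: LOG_LEVEL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: LOG_LEVEL
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: app-secret
              key: DB_PASSWORD
      volumeMounts:
        - name: conf
          mountPath: /etc/app       # app.conf shows up as /etc/app/app.conf
  volumes:
    - name: conf
      configMap:
        name: app-config
        items:
          - key: app.conf
            path: app.conf
```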
Cluster Networking
- The Container Network Interface (CNI) is an abstraction over cluster networking
- Popular CNI implementations
- Canal
- Flannel
- Cilium
- Multus
Cluster Storage
- The Container Storage Interface (CSI) is an abstraction over storage backends
- Popular CSI implementations
- Cloud-provider-specific implementation (e.g. the EBS provisioner for AWS)
- Rancher's local-path-provisioner
- rook-ceph
- Longhorn
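As a sketch of how a workload actually asks one of these provisioners for storage: a PersistentVolumeClaim against a StorageClass (the local-path class name assumes Rancher's local-path-provisioner is installed):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: local-path    # provided by whichever provisioner/CSI driver you installed
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```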
K8S Distributions
- Typically include an installer with configuration options
- Some combination of CNI, CSI, and common services, for example:
- An ingress controller, like Traefik
- Common k8s distros
  - k3s ("like k8s, but smaller") -- small-ish Rancher offering
    - for real, the 3 doesn't mean anything
  - RKE2 -- Rancher's beefier and FIPS-compliant offering
  - minikube for local testing
  - kind -- Kubernetes IN Docker
  - kubeadm ("vanilla" k8s)
Managed K8S
- Amazon/Google/Microsoft/etc will manage the cluster itself for you, so you can worry about what runs inside. A couple of different deployment models exist for these:
- bring your own VM
- "fully-managed" control plane
Service Discovery
- CoreDNS provides some helpful in-cluster DNS names
  <pod-ip-with-dashes>.<namespace>.pod.cluster.local (e.g. 10-42-0-7.default.pod.cluster.local)
  <svc>.<namespace>.svc.cluster.local
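For example, a Service like this (names made up; assuming the default cluster.local domain) is reachable from any pod in the cluster as web.demo.svc.cluster.local:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: demo
spec:
  selector:
    app: web            # forwards to pods carrying this label
  ports:
    - port: 80          # -> web.demo.svc.cluster.local:80
      targetPort: 8080
```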
Navigating a Cluster
- kubectl is your friend
- try k9s for a nice TUI
Make the YAML Stop
- Sorry, no
Bring on more YAML
- Ok, sure
Helm
- YAML is great and all, but what if I want to distribute an app to others? They might need to:
- use a different storage solution
- run in a different network range
- inject their own secrets
Enter Helm
- templating for your YAML before it hits the cluster
- parameterized by a "values file" (you guessed it, more YAML)
- templates use Go's built-in template system
- example of the structure of a helm chart (sketched below)
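A rough sketch of that structure and how values flow into a template (chart name, values, and image are all made up):

```yaml
# mychart/
#   Chart.yaml        # chart name, version, and metadata
#   values.yaml       # default parameters users can override
#   templates/        # Go-templated manifests
#     deployment.yaml
#
# values.yaml
replicaCount: 2
image:
  repository: nginx
  tag: "1.27"
```

And a template that consumes those values (Helm renders this before it hits the cluster):

```yaml
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-web
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-web
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-web
    spec:
      containers:
        - name: web
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
```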
Advanced -- The Operator Pattern
- CRD is supplied to the cluster
- A deployment (the "operator") runs in the cluster and watches for new resources of that type
- The operator takes those resources (desired state), reconciles the cluster to match, and reports progress in the status subresource
- For example, cert-manager is a commonly used application for provisioning X.509 certificates in-cluster; it uses CRDs for CertificateRequests, Certificates, Issuers, and more
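A minimal, hypothetical CRD to show the moving parts (the Widget kind and example.com group are made up); the operator's deployment would watch for Widget objects and write back to their .status:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Widget
    plural: widgets
    singular: widget
  versions:
    - name: v1
      served: true
      storage: true
      subresources:
        status: {}                  # the operator reports progress here
      schema:
        openAPIV3Schema:            # the OpenAPI spec mentioned earlier
          type: object
          properties:
            spec:
              type: object
              properties:
                size: { type: integer }
            status:
              type: object
              properties:
                ready: { type: boolean }
```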
Advanced -- Virtual Clusters
- Run a k8s cluster inside your k8s cluster!
- All of the services for the nested cluster run as pods in the parent cluster
Tricks a Red Team Might Pull
- Stealing your kubeconfig to do whatever they want with your permissions
- Adding a CRD that you don't notice, and using an operator to do something nefarious...
- Virtual cluster with malicious workloads
"First-hour" Activities That Might Help
- Rotate the cluster certificate
- Rotate the token for your user account
- Remember to update your kubeconfig and not let red team get a copy
- If cluster is empty at start
- Start from RBAC's deny-by-default: grant nothing until an app needs it
- Install all new apps with their RBAC configuration