A Hands-On Tour of Kubernetes: Part 2 - Namespaces and Labels
Namespaces
We've only created one pod so far. Kubernetes wouldn't be very special if we could only run one pod, so let's try running multiple pods.
$ kubectl run app-1 --image=nginx:1.24
pod/app-1 created
$ kubectl run app-2 --image=nginx:1.24
pod/app-2 created
$ kubectl run app-3 --image=nginx:1.24
pod/app-3 created
It seems like Kubernetes was happy to create three pods. Let's list our pods to verify.
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-1 1/1 Running 0 83s
app-2 1/1 Running 0 66s
app-3 1/1 Running 0 17s
Consider that a given Kubernetes cluster may be used to run dozens of applications. What would this list look like if I created hundreds of pods?
Well, there's no fancy truncation. This list would be very long. Fortunately, Kubernetes allows us to create namespaces so that we can group related resources together.
A namespace is like any other resource in Kubernetes, meaning we can use kubectl to list our namespaces.
$ kubectl get namespaces
NAME STATUS AGE
default Active 8d
kube-system Active 8d
kube-public Active 8d
kube-node-lease Active 8d
It looks like our local cluster already has more than one namespace!
- The namespaces prefixed with kube- contain resources used by the cluster itself.
- The default namespace is where new resources will go unless otherwise specified.
Since we haven't been specifying a namespace in our kubectl commands, all the pods we've been creating have been added under the default namespace. Let's go ahead and create a new namespace for ourselves.
$ kubectl create namespace app
namespace/app created
We can list our namespaces again to verify our new namespace exists.
$ kubectl get namespaces
NAME STATUS AGE
default Active 8d
kube-system Active 8d
kube-public Active 8d
kube-node-lease Active 8d
app Active 42s
There it is! Let's now create a pod under this namespace.
$ kubectl run app-4 --image=nginx:1.24 --namespace=app
pod/app-4 created
Note the --namespace option at the end of the above command. --namespace can be added to most kubectl commands to specify which namespace to use. For example, here is how we can list our newly-created pod:
$ kubectl get pods --namespace=app
NAME READY STATUS RESTARTS AGE
app-4 1/1 Running 0 7s
Without --namespace we'd be listing the pods in the default namespace. In fact, removing --namespace is the same as explicitly setting the namespace to default.
$ kubectl get pods --namespace=default
NAME READY STATUS RESTARTS AGE
app-1 1/1 Running 0 19m
app-2 1/1 Running 0 19m
app-3 1/1 Running 0 18m
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
app-1 1/1 Running 0 19m
app-2 1/1 Running 0 19m
app-3 1/1 Running 0 18m
Many kubectl flags and options provide a long form (e.g. --namespace) and a short form (e.g. -n). The short form requires fewer keystrokes, but I will use the long form in this series since it is more descriptive. When introducing new options, I will show the short form (if one exists) alongside the long form, so feel free to use the version you prefer.
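For example, these two commands are equivalent:
$ kubectl get pods --namespace=app
$ kubectl get pods -n app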
Resource names need to be unique in a given namespace, but different namespaces can have resources with identical names.
For example, since a pod called app-1 exists in the default namespace, we cannot create another pod called app-1 in the default namespace:
$ kubectl run app-1 --image=nginx:1.24
Error from server (AlreadyExists): pods "app-1" already exists
However, we can certainly create a pod called app-1 in the app namespace:
$ kubectl run app-1 --image=nginx:1.24 --namespace=app
pod/app-1 created
We'll see this name duplication if we list out the pods in both namespaces:
$ kubectl get pods --namespace=app
NAME READY STATUS RESTARTS AGE
app-4 1/1 Running 0 5m5s
app-1 1/1 Running 0 15s
$ kubectl get pods --namespace=default
NAME READY STATUS RESTARTS AGE
app-1 1/1 Running 0 24m
app-2 1/1 Running 0 24m
app-3 1/1 Running 0 23m
It's important to emphasize that the app-1 pod in the default namespace and the app-1 pod in the app namespace are two different pods. They share nothing other than the name.
Beyond providing a convenient mechanism for organizing resources, namespaces are also central to Kubernetes' RBAC (role-based access control) model and to controlling resource allocation. We won't go into detail about either of these topics, but these characteristics of namespaces are what make multi-tenant clusters (multiple teams sharing the same cluster) practical. However, even for single-tenant clusters, it's common to create a namespace for every workload.
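As a small taste of the resource-allocation side, here's a hypothetical example (the quota name pod-cap is my own) that uses a ResourceQuota to cap our app namespace at ten pods:
$ kubectl create quota pod-cap --hard=pods=10 --namespace=app
resourcequota/pod-cap created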
You can also set the default namespace for your current context with:
$ kubectl config set-context --current --namespace <namespace>
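For example, we could make our app namespace the default for subsequent commands and then switch back (output omitted, since it includes the name of your particular context):
$ kubectl config set-context --current --namespace=app
$ kubectl get pods    # now lists the pods in the app namespace
$ kubectl config set-context --current --namespace=default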
Deleting a namespace will automatically delete all resources in that namespace. Let's delete our app namespace and see this in action.
$ kubectl delete namespace app
namespace "app" deleted
Recall that this namespace contained two pods. If we try to list the pods in this namespace, we'll see that no such resources exist.
$ kubectl get pods --namespace=app
No resources found in app namespace.
Indeed, our namespace is gone entirely.
$ kubectl get namespaces
NAME STATUS AGE
default Active 8d
kube-system Active 8d
kube-public Active 8d
kube-node-lease Active 8d
Before moving on, let's delete the other pods we created in the default namespace.
$ kubectl delete pod/app-1 pod/app-2 pod/app-3
pod "app-1" deleted
pod "app-2" deleted
pod "app-3" deleted
Labels
Namespaces are an important mechanism for grouping related resources, but they often aren't sufficient for keeping things organized. What if we want to further segment resources in a namespace? What if we want to group together resources across namespaces?
In Kubernetes, labels allow us to tag resources with arbitrary key-value pairs. We can then query resources by their labels.
For example, let's say we have three applications. These three applications are creatively named:
- app-1
- app-2
- app-3
All three of these applications consist of:
- a backend process that exposes a JSON API
- a frontend process that serves HTML/CSS/JS and consumes the API
Furthermore, suppose these applications have the following visibility:
- "app-1" and "app-2" are public, internet-facing applications
- "app-3" is an internal-facing application for support
Let's set up this fictional environment in our cluster and see how we might label these resources. To start, we'll create some namespaces for ourselves.
$ kubectl create namespace app-1
namespace/app-1 created
$ kubectl create namespace app-2
namespace/app-2 created
$ kubectl create namespace app-3
namespace/app-3 created
Next, we'll create the pods representing the backend and frontend processes for each application.
$ kubectl run app-1-backend --image=nginx:1.24 --namespace=app-1
pod/app-1-backend created
$ kubectl run app-1-frontend --image=nginx:1.24 --namespace=app-1
pod/app-1-frontend created
$ kubectl run app-2-backend --image=nginx:1.24 --namespace=app-2
pod/app-2-backend created
$ kubectl run app-2-frontend --image=nginx:1.24 --namespace=app-2
pod/app-2-frontend created
$ kubectl run app-3-backend --image=nginx:1.24 --namespace=app-3
pod/app-3-backend created
$ kubectl run app-3-frontend --image=nginx:1.24 --namespace=app-3
pod/app-3-frontend created
For good measure, we'll list the pods in each namespace and verify everything is in the right spot.
$ kubectl get pods --namespace=app-1
NAME READY STATUS RESTARTS AGE
app-1-backend 1/1 Running 0 89s
app-1-frontend 1/1 Running 0 80s
$ kubectl get pods --namespace=app-2
NAME READY STATUS RESTARTS AGE
app-2-backend 1/1 Running 0 68s
app-2-frontend 1/1 Running 0 61s
$ kubectl get pods --namespace=app-3
NAME READY STATUS RESTARTS AGE
app-3-backend 1/1 Running 0 56s
app-3-frontend 1/1 Running 0 51s
Without any other changes, how can I list all the public-facing pods? Well, I can't. I would already need to know the visibility of each application and run multiple kubectl commands.
However, we can remedy this situation by adding a visibility label to our pods. We can then query for pods by their visibility value.
Let's start by adding the proper visibility label to one of our pods.
$ kubectl label pod/app-1-frontend visibility=public --namespace=app-1
pod/app-1-frontend labeled
kubectl reported our pod was labeled, but does anything look different if we list the pods?
$ kubectl get pods --namespace=app-1
NAME READY STATUS RESTARTS AGE
app-1-backend 1/1 Running 0 6m34s
app-1-frontend 1/1 Running 0 6m25s
The output looks the same as before, other than updated ages. By default, kubectl get doesn't show resource labels. We can ask kubectl to show the value of a certain label by using the --label-columns (-L) option:
$ kubectl get pods --label-columns=visibility --namespace=app-1
NAME READY STATUS RESTARTS AGE VISIBILITY
app-1-backend 1/1 Running 0 12m
app-1-frontend 1/1 Running 0 12m public
We now see a "visibility" column, along with the value of that label for both pods.
- app-1-frontend shows the value "public". This is what we set it to previously.
- app-1-backend shows no value. We haven't set this label on this pod.
If we don't know what labels exist on our resources, we can use the --show-labels option to show all the labels that exist on a resource. The output formatting can get a little messy with this command if our resources have many labels, but it's useful for exploration.
$ kubectl get pods --show-labels --namespace=app-1
NAME READY STATUS RESTARTS AGE LABELS
app-1-backend 1/1 Running 0 14m run=app-1-backend
app-1-frontend 1/1 Running 0 13m run=app-1-frontend,visibility=public
So it seems our pods already have a run label! This label is added automatically when we use the kubectl run command to create pods. We won't use this run label for anything, but be aware that certain actions will add labels to our resources.
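Incidentally, kubectl run can attach labels at creation time via its --labels (-l) option. Had we known our labels up front, we could have skipped the separate kubectl label step entirely, e.g.:
$ kubectl run app-1-frontend --image=nginx:1.24 --namespace=app-1 --labels=visibility=public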
Let's go ahead and finish adding the correct visibility label to our other pods.
$ kubectl label pod/app-1-backend visibility=public --namespace=app-1
pod/app-1-backend labeled
$ kubectl label pod/app-2-backend pod/app-2-frontend visibility=public --namespace=app-2
pod/app-2-backend labeled
pod/app-2-frontend labeled
$ kubectl label pod/app-3-backend pod/app-3-frontend visibility=internal --namespace=app-3
pod/app-3-backend labeled
pod/app-3-frontend labeled
Alright, back to our original concern... how do we list all the pods that are public-facing? Since our pods are now sufficiently labeled, we can use the --selector (-l) option to provide a label selector to our kubectl get command.
$ kubectl get pods --selector=visibility=public --namespace=app-1
NAME READY STATUS RESTARTS AGE
app-1-frontend 1/1 Running 0 21m
app-1-backend 1/1 Running 0 21m
Hm, that was disappointing. We only received two pods, but we expect to see four. The --namespace option is still narrowing the query to our app-1 namespace. If we want to query across all namespaces, do we remove it?
$ kubectl get pods --selector=visibility=public
No resources found in default namespace.
Nope! Remember, not specifying --namespace is the same as --namespace=default, so the previous command tried to list all pods matching the given label selector in the default namespace.
For what we're trying to accomplish, we need to use the --all-namespaces (-A) option.
$ kubectl get pods --selector=visibility=public --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
app-1 app-1-frontend 1/1 Running 0 23m
app-1 app-1-backend 1/1 Running 0 23m
app-2 app-2-backend 1/1 Running 0 22m
app-2 app-2-frontend 1/1 Running 0 22m
Much better! None of the "app-3" pods are included in the output. We can list those "app-3" pods by altering the value of the visibility label in our label selector.
$ kubectl get pods --selector=visibility=internal --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
app-3 app-3-backend 1/1 Running 0 24m
app-3 app-3-frontend 1/1 Running 0 24m
This example shows how we can use labels for simple, ad-hoc analysis, but labels can also be consumed by automated processes to address things like resource auditing and cost analysis. Depending on your organization's needs, tools like Kyverno and OPA Gatekeeper can enforce the usage of certain labels.
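Label selectors can also express more than simple equality. For instance, kubectl accepts inequality and set-based expressions:
$ kubectl get pods --selector='visibility!=public' --all-namespaces               # label missing or not "public"
$ kubectl get pods --selector='visibility in (public, internal)' --all-namespaces # either value matches
$ kubectl get pods --selector=visibility --all-namespaces                         # label exists, any value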
Notably, as we'll see in upcoming material, labels are also used by other Kubernetes resources to configure their behavior.
If you need to delete a label for any reason, you can use the following command.
kubectl label <type>/<name> <label_name>-
For example, to delete the visibility label from the app-1-frontend pod, we would run:
$ kubectl label pod/app-1-frontend visibility- --namespace=app-1
pod/app-1-frontend unlabeled
Before moving on, let's delete the resources we've created. Remember that deleting a namespace automatically deletes all the resources under that namespace.
$ kubectl delete namespace/app-1 namespace/app-2 namespace/app-3
namespace "app-1" deleted
namespace "app-2" deleted
namespace "app-3" deleted
Imperative vs. Declarative
So far, we've been using kubectl to imperatively create our resources. This has been useful for getting off the ground, but Kubernetes generally favors a declarative approach. Rather than telling Kubernetes how we want to create our resources, we tell Kubernetes what to create.
Kubernetes resources are typically declared as YAML manifests, although JSON also works. Let's look at an example pod manifest.
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: default
spec:
  containers:
  - name: example
    image: nginx:1.24
Alright, so there is a bit to unpack here. Let's break down the meaning of these fields.
- Together, apiVersion and kind specify the type of resource declared in this manifest. Every resource manifest includes these fields.
  - In our example, we're using apiVersion: v1, which represents the core API resources. Resources are versioned inside an API group.
  - The kind refers to a specific resource type inside the API version. Since we want a pod, kind is set to Pod. Resource kinds use PascalCase.
- The metadata object contains various fields that are common across resources.
  - name: The name of the resource. Every resource needs a name.
  - namespace: Which namespace this resource belongs in. If this is not specified, the default namespace is used.
  - Although not shown in the example, metadata also contains fields to hold our labels, annotations, finalizers, and owner references, among others.
- As the name implies, the spec object contains the resource specification. The resource type determines which fields are included under spec.
  - Resources typically have a set of required fields alongside a set of optional fields.
  - For a pod, we need to specify at least one container. Our example defines a single container with the name example that uses the image nginx:1.24. Note that the container name doesn't necessarily need to match the pod name, although pods created using kubectl run set the pod and container name to the same value.
  - Many other fields can be set in a pod spec, but they are not included in the example because they are either not required or they assume a default value if unspecified.
Right now, these manifests are just words on a website. Somehow, we need to send a manifest to our cluster. Here is how we do that:
- Copy the contents of the manifest into a new file.
- Send the manifest to Kubernetes using kubectl apply -f <filename>.
Here is how that might look with bash:
$ cat <<EOF >pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: default
spec:
  containers:
  - name: example
    image: nginx:1.24
EOF
$ kubectl apply -f pod.yaml
pod/example created
And the same, but with PowerShell:
$ @"
apiVersion: v1
kind: Pod
metadata:
  name: example
  namespace: default
spec:
  containers:
  - name: example
    image: nginx:1.24
"@ > pod.yaml
$ kubectl apply -f pod.yaml
pod/example created
For comparison, here is how we'd create the same pod using imperative commands:
$ kubectl run example --image=nginx:1.24 --namespace=default
On the surface, the imperative approach seems much simpler. Why go through the effort of creating a file and composing a resource manifest when we're going to run a kubectl command anyway?
Despite the verbosity, the declarative approach has a couple of big advantages over imperative resource creation:
- Repeatability: When deploying your resources with a series of imperative commands (e.g. in a script), there is an implicit assumption that the environment doesn't change between runs. For certain environments this may be a safe assumption, but for "living" environments such as many production systems, this constraint is rarely satisfied. The declarative approach embraces a live environment -- rather than trying to push our desires into the environment, we'll change the environment to satisfy our desires.
- Maintainability: Applications change. New components are added to the system. Implementing these changes imperatively involves an ever-evolving set of scripts and/or continuous deployment pipelines. In contrast, the declarative approach has a simple deployment model: point kubectl to the directory containing your manifests and it will ship the resources to the cluster (see the example after this list). Should those manifests change, Kubernetes will respond appropriately.
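For example, assuming your manifests live in a directory named manifests/ (a hypothetical path), deployment boils down to a single command:
$ kubectl apply -f manifests/              # applies every manifest in the directory
$ kubectl apply -f manifests/ --recursive  # same, but also descends into subdirectories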
The Kubernetes documentation provides some guidance on when to use which technique (imperative vs. declarative), but for brevity, we'll continue using imperative commands for the remainder of these posts.
Tip: We can append --dry-run=client -o yaml to our imperative commands to view the manifest of the underlying resource being created.
$ kubectl run example --image=nginx:1.24 --namespace=default --dry-run=client -o yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: example
  name: example
  namespace: default
spec:
  containers:
  - image: nginx:1.24
    name: example
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
And if you prefer JSON, you can use -o json instead of -o yaml.
Deleting declaratively-created resources is a straightforward process with kubectl using the command:
kubectl delete -f <filename>
When you run this command, kubectl will delete the resources specified in the file. However, keep in mind that any resources not specified in the file will not be deleted.
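In our case, we can clean up the example pod using the same manifest we applied earlier:
$ kubectl delete -f pod.yaml
pod "example" deleted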
In this post, we learned about using namespaces, labels, and manifest files to organize and manage our resources. In the next post, we'll look at how pods communicate with each other and the outside world.