Let’s say you’d like to run a pod on your cluster that accepts incoming ssh connections. (There are various reasons to do this — I have one application planned for an upcoming post.)
It’s actually quite easy to just run sshd in a container and mount a public key file as /root/.ssh/authorized_keys to allow a user with the corresponding private key to ssh in as root.
It’s a little trickier, though, if you want to allow ssh access without allowing root access.
The main issue is that a non-root user can’t launch the ssh service, so you can’t simply run your pod as a non-root user. And (right now, as far as I know) you can’t mount a file with an owner different from the security context of the pod. But the ~/.ssh/authorized_keys file needs to be owned by the corresponding user in order for the ssh service to accept it…
So the trick is to create an image in which a non-root user owns its own authorized_keys file while the root user launches the ssh service.
As a tl;dr, you can just pull my image from docker hub (chamer81/ssh) and use the corresponding kubernetes manifests below. But if you’re curious, here’s how I solved the problem:
In the Dockerfile, I added a user (“sshuser”) with UID 1001 that has an empty ~/.ssh/authorized_keys file (with the correct ownership and permissions):
FROM ubuntu:latest
RUN apt update \
    && apt -y --no-install-recommends install \
       openssh-server
RUN useradd -m -s /bin/bash -u 1001 sshuser
RUN mkdir /home/sshuser/.ssh && \
    touch /home/sshuser/.ssh/id.pub && \
    touch /home/sshuser/.ssh/authorized_keys && \
    chmod 600 /home/sshuser/.ssh/authorized_keys && \
    chown sshuser:sshuser /home/sshuser/.ssh/authorized_keys
COPY sshd_config /etc/ssh/sshd_config
COPY run_cmd.sh run_cmd.sh
RUN chmod 777 run_cmd.sh
CMD ["./run_cmd.sh"]
EXPOSE 22
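If you’d rather build the image yourself than pull mine, the usual build command works; this assumes you run it from the directory containing the Dockerfile, sshd_config and run_cmd.sh, and you can of course substitute your own tag:
docker build -t chamer81/ssh .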
And then in the corresponding run_cmd.sh file, I copy the user’s public key file onto the end of the authorized_keys:
#!/bin/bash
# copy the ssh user's public key into authorized_keys:
cat /home/sshuser/.ssh/id.pub >> /home/sshuser/.ssh/authorized_keys
# Start the ssh server:
service ssh start > /var/log/customsshlog
# Tail the log to avoid exiting:
tail -f /var/log/customsshlog
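The service-plus-tail combination is just a simple way to keep the container’s main process alive. For comparison, a sketch of an alternative (not what my image does) would be to run sshd in the foreground and let it be the main process:
#!/bin/bash
# Append the mounted public key, as before:
cat /home/sshuser/.ssh/id.pub >> /home/sshuser/.ssh/authorized_keys
# sshd wants its privilege-separation directory when started directly:
mkdir -p /run/sshd
# Run sshd in the foreground (-D), logging to stderr (-e), so the
# container stays up exactly as long as the daemon does:
exec /usr/sbin/sshd -D -e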
Naturally you could build the final version of the authorized_keys file into the container image instead of copying the mounted public key file into it at runtime, but that would make the image less versatile: you would have to rebuild a separate image for every user. Following kubernetes best practices, this is exactly the sort of user-specific configuration file that should be mounted as part of the deployment.
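For the record, the baked-in variant would just be a couple of Dockerfile lines along these lines (assuming your public key sits next to the Dockerfile as id.pub); I don’t recommend it, for the reasons above:
# Baked-in alternative: one image per user, for comparison only.
COPY id.pub /home/sshuser/.ssh/authorized_keys
RUN chmod 600 /home/sshuser/.ssh/authorized_keys && \
    chown sshuser:sshuser /home/sshuser/.ssh/authorized_keys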
The sshd_config file that is copied into the image is just the standard version of the same file, except with “PasswordAuthentication no” configured to force public/private key authentication instead of password authentication. This is probably redundant, since sshuser doesn’t have a password set; it’s just an extra precaution.
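In other words, the only line that matters here is the one disabling password logins; the rest of the file is the stock Ubuntu version:
# The one relevant change in /etc/ssh/sshd_config:
# disable password logins, leaving public-key authentication
# (which is enabled by default) as the only option.
PasswordAuthentication no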
You can test this container locally as follows (note that your own key can be based on a different algorithm, e.g. id_rsa):
docker run -p 2222:22 -v /home/${USER}/.ssh/id_ed25519.pub:/home/sshuser/.ssh/id.pub chamer81/ssh
Then you can log into the running docker container as follows:
ssh -p 2222 sshuser@localhost
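One small gotcha when testing locally: if you’ve previously accepted a host key from some other container or server on localhost port 2222, ssh will complain that the host identification has changed. Clearing the stale entry from your known_hosts file is one way around it:
ssh-keygen -R "[localhost]:2222"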
Running it on kubernetes is just as simple! Here’s the deployment.yaml file to apply to the cluster:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ssh
  name: ssh
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ssh
  template:
    metadata:
      labels:
        app: ssh
    spec:
      containers:
      - image: docker.io/chamer81/ssh:latest
        name: ssh
        ports:
        - containerPort: 22
          protocol: TCP
        volumeMounts:
        - mountPath: /home/sshuser/.ssh/id.pub
          subPath: id.pub
          name: id-pub
        # uncomment this to allow root login:
        # - mountPath: /root/.ssh/authorized_keys
        #   subPath: id.pub
        #   name: id-pub
      volumes:
      - configMap:
          defaultMode: 0400
          name: id-pub
        name: id-pub
The id-pub configmap is just your id_ed25519.pub (or id_rsa.pub) file made into a configmap. You can create the configmap in many ways (e.g. a one-liner with kubectl), but I find it simplest to always just use kustomize. Here’s my corresponding kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: demo
resources:
- deployment.yaml
- ssh-service.yaml
configMapGenerator:
- name: id-pub
  files:
  - id.pub
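With your public key copied into the kustomization directory as id.pub, applying everything is then just (assuming the demo namespace doesn’t exist yet):
# Create the target namespace, then apply the whole kustomization:
kubectl create namespace demo
kubectl apply -k .
Note that the configMapGenerator appends a content hash to the generated configmap’s name and rewrites the reference in the deployment accordingly, so you don’t need to worry about the suffix.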
The observant reader will note that I also include a manifest for the corresponding service. Naturally you’ll need one if you want to connect from outside the cluster! Depending on your kubernetes provider, you may be able to create a load balancer for this (even though the service is not http/https), but it’s cheaper to use a node port. Here’s my ssh-service.yaml file:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ssh
  name: ssh
spec:
  ports:
  - port: 22
    protocol: TCP
    targetPort: 22
  selector:
    app: ssh
  sessionAffinity: None
  type: NodePort
Once all of this has been applied to the cluster, you can connect via ssh using the IP address of any of the nodes. You just need to find the node port kubernetes has chosen for you by getting the service, as follows:
$ kubectl get service -n demo
NAME   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
ssh    NodePort   10.128.227.201   <none>        22:30334/TCP   108m
In the above example you can see the port is 30334. If you don’t know the external IPs of your nodes, you can find them by running kubectl get nodes -o wide. Then you can ssh in as follows:
ssh -p <port> sshuser@<node IP>
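If you’d rather script the lookup than read it off the table, something like this should work; the jsonpath expressions assume the service is called ssh in the demo namespace and that your provider populates an ExternalIP address on the nodes:
# Look up the node port assigned to the ssh service:
PORT=$(kubectl get service ssh -n demo -o jsonpath='{.spec.ports[0].nodePort}')
# Look up the external IP of the first node:
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
ssh -p "$PORT" sshuser@"$NODE_IP"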
And that’s how you can create an ssh server in your cluster!