How to serve data from different volumes to different users using SFTP in a Rancher 2.x environment.
Recently I started migrating my online services to a dockerized environment with k8s and Rancher. It went quite smoothly, with some minor problems here and there. I will document some of my problems/findings and how I solved things here, so others might benefit from it (or so I don’t forget how I set up the system :D).
Why would you need it?
You already know why? Skip to the main part.
When hosting stuff only for yourself, it might be tempting to just use real SSH for transferring data to the server: log in, put the stuff where it belongs, do some command-line magic and such.
That’s how I started doing “server stuff” back then (hard to tell when I really got into it; let’s skip the time when I just played around with webpages in the era of Xoom, GeoCities, etc. and start counting from my first Linux vserver at Hosteurope: August 2007).
And that’s how I did well for a couple of years.
But it’s dangerous!
SSH gives you a lot of power, and if you host content for others, it gives them a lot of power too. You can trust yourself? You can trust them? Nice :) But what if someone loses the access key (or password)?
Hand out as little power as possible
Give your users, and even yourself, only as much power as needed to get the job done properly. In case of a lost key or password, you then don’t have to worry too much.
There are several ways of doing so; one comfortable option is the atmoz/sftp container. In this blog post I will describe how you can use it.
Setting up atmoz/sftp
What’s in the image?
atmoz/sftp comes with a pre-configured sshd and some scripts that nicely set up the users provided via environment variables or a config file.
It restricts all SSH capability to “sftp” only, which is a nice and secure way to limit users to listing, browsing, adding, downloading, editing and deleting files/directories (and setting some file flags). In addition, users are “chrooted” (it’s not a real chroot, but good enough for our purpose) to their home directory, meaning they cannot access other users’ data.
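Under the hood this is plain sshd configuration. The relevant part looks roughly like this (a sketch of the idea, not the image’s exact file):

```
# Only allow the built-in SFTP subsystem: no shell, no forwarding
Subsystem sftp internal-sftp
ForceCommand internal-sftp
ChrootDirectory %h
AllowTcpForwarding no
X11Forwarding no
```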
How to use it along with rancher?
Create secrets & config
First we need to provide some keys & settings that will be used by our server.
Choose the cluster / project you want to deploy your sftp to and switch to the “Resources/Secrets” tab. Create a new secret as follows:
Select “single namespace”, pick the namespace you want to deploy to, give it a name, and add “users.conf” as the first secret with your users’ usernames and the user IDs they will get. The format is described here. You could add plaintext or even encrypted passwords here, but we go for the more secure approach and use authorized keys for authenticating our users.
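As a minimal sketch, an entry in users.conf has the form name:password:uid:gid:dir. Since we authenticate with keys, the password field stays empty (the names, IDs and the “upload” directory below are made-up examples):

```
alice::1001:100:upload
bob::1002:100:upload
```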
You can save the secret now and edit it later, or leave it open, as we need a few more steps for our full config.
Adding server keys
SSH servers usually identify themselves with a certain keypair. When a client connects for the first time, it presents the server’s fingerprint to the user and asks whether that fingerprint is okay and the connection should proceed.
Once the user approves, the fingerprint is saved in the client’s settings. Whenever the client connects in the future, it re-validates that the server still presents the same fingerprint. If the fingerprint changes, most clients will refuse to connect, because they assume the server got hacked/modified, or that they’re in a malicious environment and cannot even reach the right server.
So if we consider using this in a somewhat productive way, we need to ensure the server isn’t generating new keys on every restart (the default).
That’s why we pre-create the keys and provide them to the image via a secret. The sshd config of atmoz/sftp uses an ed25519 and an RSA key, so we need to create both:
ssh-keygen -q -P "" -t ed25519 -a 100 -f ssh_host_ed25519_key
ssh-keygen -q -P "" -t rsa -a 100 -o -b 4096 -f ssh_host_rsa_key
This should generate 4 files in your current directory:
$ ls -la
drwxrwxr-x  2 jbrosi jbrosi 4096 Dec 29 18:21 .
drwxrwxr-x 11 jbrosi jbrosi 4096 Dec 29 18:21 ..
-rw-------  1 jbrosi jbrosi  399 Dec 29 18:21 ssh_host_ed25519_key
-rw-r--r--  1 jbrosi jbrosi   95 Dec 29 18:21 ssh_host_ed25519_key.pub
-rw-------  1 jbrosi jbrosi 3381 Dec 29 18:21 ssh_host_rsa_key
-rw-r--r--  1 jbrosi jbrosi  739 Dec 29 18:21 ssh_host_rsa_key.pub
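While you’re at it, you can print the fingerprints of the new host keys and hand them to your users out-of-band, so they can verify the server on first connect. The snippet below generates a throwaway key first so it runs standalone; in practice, point ssh-keygen -lf at the host keys you just created:

```shell
# Demo: create a throwaway keypair, then print its fingerprint.
# For the real server, run the last command against ssh_host_ed25519_key.pub
# and ssh_host_rsa_key.pub instead.
keyfile="$(mktemp -u)"
ssh-keygen -q -P "" -t ed25519 -f "$keyfile"
ssh-keygen -lf "${keyfile}.pub"
```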
Now copy them one by one to your secrets config. Choose the file names as key and paste the file content into the value part.
You can use “cat” to output the content of the files, e.g. cat ssh_host_ed25519_key, and copy it from there.
Adding user keys
As noted before, we don’t want the users to enter passwords when connecting; rather, we want them to use public keys.
Add the public key of each user who should be able to connect later. Ask them to provide their public keys; you don’t need their secret keys.
That’s due to the power of asymmetric encryption, I will explain that a bit more in depth in another post.
If users don’t know how to create their keypairs, you can provide them with the following information (or use it to create your own keypair):
ssh-keygen -t ed25519 -a 100
# or, to stay compatible with other systems if you want to re-use the key:
ssh-keygen -t rsa -b 4096 -o -a 100
The CLI will ask you where to save the keys and whether they should be protected with a password. By default it puts them where ssh can find them when connecting to servers, so you don’t have to change that location.
It will generate two files per keypair. The .pub ones are the public keys: you can give them to anyone without danger, but never lose your private key.
And again: only save the public keys of your users in the secrets, you don’t need the secret keys.
Deploy the server
Start as usual by selecting the cluster you want this service to run in, select the namespace you want to serve files from, and create a new Workload:
The important settings:
- The Docker image is “atmoz/sftp”.
- You need to map port 22 (SSH) to the outside world so your friends/you can connect to it. I use “HostPort” here with a value of 50022, as I’m using a stateful set anyway; see the comment on the Layer-4 load balancer below.
- If you use “local node path/disk” for persistent storage, you might want to consider using a stateful set with 1 pod. For other storage drivers, a scalable deployment of 1 pod is fine.
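If you prefer YAML over the Rancher UI, the workload boils down to something like the following StatefulSet. This is a sketch, not a complete manifest: the names, the namespace and the secret name are examples matching the steps above, and only one host-key mount is shown for brevity.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: sftp
  namespace: sftp                  # example namespace
spec:
  serviceName: sftp
  replicas: 1
  selector:
    matchLabels:
      app: sftp
  template:
    metadata:
      labels:
        app: sftp
    spec:
      containers:
        - name: sftp
          image: atmoz/sftp
          ports:
            - containerPort: 22
              hostPort: 50022      # the port clients will connect to
          volumeMounts:
            - name: host-keys
              mountPath: /etc/ssh/ssh_host_ed25519_key
              subPath: ssh_host_ed25519_key
      volumes:
        - name: host-keys
          secret:
            secretName: sftp-host-keys   # example secret name
            defaultMode: 0400
```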
Here everything comes together.
I chose to have different secrets for the host keys / user config / user keys, but you can also put them together. What matters is the mapping: choose file mode 400 for the secret keys and 444 for the public keys, and map them to /etc/ssh with the proper names.
Next map the user config to the place the image expects it, and the individual user keys to /home/<username>/.ssh/keys/<keyname>. For both, file mode 400 is fine.
Map additional volumes for the users
Now that everything is in place, your users should be able to log in with the keys whose public part you added.
Be aware that they cannot create or upload things directly in their “root” directory (which is /home/<username>, but might look like root to them because of the “chroot”). So you need to mount volumes with directories for them.
Choose “add volume” for that and give them access to the things you want.
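In plain Kubernetes terms that is one volume plus one mount per user, for example backed by a PersistentVolumeClaim (the claim name, the username and the “upload” directory are examples):

```yaml
# inside the sftp container spec:
volumeMounts:
  - name: alice-data
    mountPath: /home/alice/upload
# inside the pod spec:
volumes:
  - name: alice-data
    persistentVolumeClaim:
      claimName: alice-data
```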
How to use it?
Use your favorite SFTP client, like “winscp” on Windows, or mount it directly via sshfs:
sshfs user@your-host:. ./mounted-ssh -C -p 50022
Be sure you use the port you mapped in the “Deploy the server” step!
sshfs lets you use the SFTP share as if it was a local disk (well, almost: it’s wayyy slower and doesn’t support things like file locking). You can just drag & drop or mv files around like you’re used to.
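If you connect often, an entry in your client’s ~/.ssh/config saves you typing the port every time (the alias, host and user names below are examples):

```
Host my-sftp
    HostName your-host
    Port 50022
    User alice
    IdentityFile ~/.ssh/id_ed25519
```

Then “sftp my-sftp” and “sshfs my-sftp:. ./mounted-ssh -C” work without extra flags, since sshfs uses ssh under the hood.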
Once you know how, it’s relatively easy to set up SFTP access for your users, so they can access their backups, manipulate their data, etc.
It plays nicely with your Rancher/k8s/Docker environment and gives you access to containers’ data without entering the containers directly.