Preface
We can push files and directories into LXD and we can pull files out of LXD using lxc file.
It works pretty well if you're interacting with an instance on the same host.
If, however, you need to send files from your host (e.g. desktop computer) to an instance inside of LXD on a remote server, things are far less elegant.
Let's look at a better way.
Using lxc file
Suppose you run some service inside an LXD container and you need to send configuration files from your desktop or laptop to that instance. This is one way to do it:
rsync \
--archive \
--recursive \
--progress \
my_file_or_files \
remote-host:/tmp/
ssh remote-host \
lxc file push \
/tmp/my_file_or_files \
remote-instance/path/to/where/they/should/go
Not particularly onerous but it can be improved upon.
Configuring SSH
rsync works by running the rsync program both locally and on the remote, calculating what data must be transferred and in which direction, then transferring it, all over an SSH connection.
Note, we are not talking about the related (but different) rsyncd.
Let's configure some things, so we can alter the first command and obsolete the second to achieve our goal.
Generate an SSH Key
We must create an SSH key on the host (e.g. desktop/laptop) with which to SSH to the instance. I name my keys using the scheme <remote-host>-lxd-<instance-name>. Replace the two placeholders to create a name such as decepticons-lxd-megatron.
ssh-keygen -t ed25519 -f ~/.ssh/<remote-host>-lxd-<instance-name>
Create a stanza in your ~/.ssh/config to add the key.
# FILE: ~/.ssh/config
# LXD instance on host "decepticons"
Host decepticons-lxd-megatron
HostName megatron.lxd
User ubuntu
ProxyJump decepticons
IdentityFile ~/.ssh/decepticons-lxd-megatron
# The server on which LXD is installed
Host decepticons
User daniel
Hostname 10.12.0.1
IdentityFile ~/.ssh/decepticons
# Some other configuration you may have
Host *
AddKeysToAgent yes
IdentitiesOnly yes
ServerAliveInterval 60
ServerAliveCountMax 60
The HostName megatron.lxd entry requires that the remote host can resolve the .lxd domain via DNS (LXD's managed bridge runs a DNS server for instance names).
Notice the line beginning with ProxyJump. This is the crucial part: it lets us SSH to the instance, via the host, in one fell swoop.
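Before attempting a real connection, you can check how OpenSSH resolves the stanza without connecting anywhere: ssh -G prints the fully resolved configuration for a host alias. This sketch uses a throwaway config file mirroring the example above.

```shell
# Write the example stanza to a temporary config file.
cfg=$(mktemp)
cat > "$cfg" <<'SSHEOF'
Host decepticons-lxd-megatron
    HostName megatron.lxd
    User ubuntu
    ProxyJump decepticons
SSHEOF

# ssh -G dumps the resolved options (hostname, user, proxyjump, ...)
# for the alias and exits without opening a connection.
ssh -G -F "$cfg" decepticons-lxd-megatron
```

If the output shows the expected hostname, user, and proxyjump lines, the stanza is being picked up correctly.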
Add the public key to the ~/.ssh/authorized_keys file inside the LXD instance using your editor.
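If you'd rather not log in interactively, one way to get the key in place is to pipe it through the host with lxc exec. This is a sketch using the example host and instance names; it assumes the default ubuntu user and that ~/.ssh/authorized_keys already exists in the instance with the correct ownership and permissions.

```shell
# Append the new public key to the instance's authorized_keys, via the host.
# The inner quoting keeps the redirection on the remote side.
cat ~/.ssh/decepticons-lxd-megatron.pub |
    ssh decepticons \
        "lxc exec megatron -- sh -c 'cat >> /home/ubuntu/.ssh/authorized_keys'"
```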
sshd_config.d/
Local changes to the sshd configuration can be added as .conf files residing within /etc/ssh/sshd_config.d/.
The first change our instance requires is PubkeyAuthentication yes.
This will enable SSH keys to work with sshd.
echo 'PubkeyAuthentication yes' |
sudo tee /etc/ssh/sshd_config.d/99-custom.conf
Then reload sshd:
sudo systemctl reload ssh
On Ubuntu the service is called ssh; it may be sshd on other systems.
Configure the Firewall
Ensure that your firewall forwards traffic from the host into the containers via whatever network method you configured for LXD.
When I set up LXD I created a bridge network that LXD manages, called lxdbr0.
You may well already have the configuration in-place, but if not, this is a point to check when debugging.
Open the firewall ports inside the container, to enable SSH access:
sudo ufw allow ssh
sudo ufw reload
Testing
At this point, everything should be configured to enable you to SSH from your host into the LXD instance via the host:
ssh decepticons-lxd-megatron
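And with that working, the two-step transfer from the preface collapses into one command: rsync can target the SSH alias directly and the ProxyJump hop happens transparently. Paths here are illustrative.

```shell
rsync \
    --archive \
    --progress \
    my_file_or_files \
    decepticons-lxd-megatron:/path/to/where/they/should/go
```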
Bonus Tip: Cloud Config
We can automate adding the SSH key to new instances when we create them, either by configuring an LXD profile or by creating a file containing the cloud-config stanza and specifying it at creation time.
#cloud-config
users:
  - name: ubuntu
    ssh_authorized_keys:
      - ssh-ed25519 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa daniel@desktop
write_files:
  - content: |
      PubkeyAuthentication yes
    path: /etc/ssh/sshd_config.d/99-custom.conf
    permissions: '0600'
runcmd:
  - [ufw, allow, ssh]
If adding to a profile:
lxc profile edit <profile-name>
If supplying via CLI when creating a container:
lxc launch \
ubuntu:noble \
barricade \
--config user.user-data="$(cat cloud-config-file.yaml)"
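After launching, you can confirm the cloud-config has finished applying before testing SSH. A sketch, via the host and using the example instance name:

```shell
# Block until cloud-init inside the new instance has completed.
ssh decepticons lxc exec barricade -- cloud-init status --wait
```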
You can create multiple profiles and multiple cloud-config files to be used in tandem, to suit those times when your user is named something other than ubuntu.
Do as you will!
Round-up
A few changes and we can now ssh into the remote instance or rsync between our host and the remote instance. Easy peasy.
I've set this up for the instances on my server somewhere in continental Europe and my main workstation.
It's making life far simpler.
The one caveat I have at this time is that, even once everything is configured, each new instance must still be added manually to your ~/.ssh/config.
It would be very nice if this too could be automated to some degree.
I have thoughts on the matter but haven't enacted any plans yet.
I am considering setting up some files in ~/.ssh/config-parts, numerically ordered for globbing purposes, which can be concatenated via cat to render a complete ~/.ssh/config file.
Something along the lines of:
/home/daniel/.ssh/
├── authorized_keys
├── config
└── config-parts
├── 50-static
└── 99-last
#!/bin/bash
# FILE: ~/.local/bin/render-ssh-config
# Get the instance names from the LXD host
mapfile -t < <(
ssh decepticons \
lxc list --columns=n --format=compact |
tail -n+2 |
sed -E 's/^\s+//; s/\s+$//'
)
cat ~/.ssh/config-parts/[0-5]*
for instance in "${MAPFILE[@]}"
do
cat <<EOF
# Generated automatically
Host decepticons-lxd-${instance}
HostName ${instance}.lxd
User ubuntu
ProxyJump decepticons
IdentityFile ~/.ssh/decepticons-lxd-${instance}
EOF
done
cat ~/.ssh/config-parts/[6-9]*
exit 0
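The fiddly part of a script like this is trimming the padded output of lxc list --format=compact, which prints a NAME header and left-pads the names. That step can be exercised locally with simulated output (the instance names here are just examples):

```shell
# Simulated `lxc list --columns=n --format=compact` output: a header line
# followed by whitespace-padded instance names. Drop the header with tail,
# then strip leading and trailing whitespace with sed.
printf '  NAME     \n  megatron \n  barricade\n' |
    tail -n +2 |
    sed -E 's/^[[:space:]]+//; s/[[:space:]]+$//'
# prints "megatron" then "barricade"
```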
It may be prudent to maintain a manifest of per-instance username overrides to make this more flexible. For that I wouldn't use Bash. Bash is fine for some things, but I'd rather write this in something other than a shell-scripting language.
For now that'll do. Until next time, take care!