Jul 06

Encrypted, remote backup

So, I’m a big fan of backups. I’m an even bigger fan of differential backups, and my favorite way of doing them is the rdiff-backup script on a Linux box. But what happens when you have sensitive information that you need to back up remotely? Do you upload a fresh encrypted file of everything every night? Do you have to download the whole archive just to get at one file? If you were using an encrypted tarball or some type of encrypted loopback file, like TrueCrypt, the answer would be yes, and that is bad if you have a slow link. One way to solve that is to encrypt each file individually. To do this we can use EncFS and sshfs, both of which are in the default repositories for current Ubuntu versions.

There are five steps to making the backup happen:

  1. mount the remote filesystem via sshfs

    sshfs <user>@<machine>:<remote-path> <sshfs-local-mountpoint>

  2. mount the unencrypted version of the remote file system via encfs

    encfs <sshfs-local-mountpoint> <unencrypted-mountpoint>

  3. do the backup from wherever to the unencrypted remote filesystem mounted in step 2.
  4. unmount the unencrypted filesystem

    fusermount -u <unencrypted-mountpoint>

  5. unmount the encrypted file system

    fusermount -u <sshfs-local-mountpoint>
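
Strung together, the five steps above might look like the following sketch. Everything in it (the host, the remote path, the local mountpoints, the backup source) is a placeholder assumption; by default it only prints each command, and you would set DRY_RUN=0 to actually run them.

```shell
#!/bin/sh
# Sketch of the five steps; host, remote path, and local paths are
# placeholder assumptions. By default (DRY_RUN=1) this prints each
# command instead of running it; set DRY_RUN=0 to execute for real.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@" || exit 1; fi; }

REMOTE="backupuser@backuphost.example.com:/srv/backups"  # assumed example
SSHFS_MNT="$HOME/mnt/ciphertext"   # sshfs mountpoint (encrypted view)
PLAIN_MNT="$HOME/mnt/plaintext"    # encfs mountpoint (decrypted view)

run mkdir -p "$SSHFS_MNT" "$PLAIN_MNT"
run sshfs "$REMOTE" "$SSHFS_MNT"                            # 1. mount remote fs
run encfs "$SSHFS_MNT" "$PLAIN_MNT"                         # 2. mount decrypted view
run rdiff-backup "$HOME/documents" "$PLAIN_MNT/documents"   # 3. differential backup
run fusermount -u "$PLAIN_MNT"                              # 4. unmount decrypted view
run fusermount -u "$SSHFS_MNT"                              # 5. unmount sshfs mount
```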

That’s not too bad. Now we want to automate this so that a cron job can run it for us at night, at lunch, or whenever. That means scripting it, which requires passwordless login for both sshfs and encfs. For sshfs, we can use an RSA/DSA public/private key pair; no problem. For encfs, we have to get a little trickier. First, sshfs:

sshfs -o uid=$EUID,gid=${GROUPS[1]} -o ssh_command="ssh -i <path-to-private-key>" <user>@<machine>:<remote-path> <sshfs-local-mountpoint>

There are a lot of options, so let me break them down:

-o uid=$EUID,gid=${GROUPS[1]}

This tells sshfs to set ownership of the mounted directory and files to your running user ID and primary group ID. Why do this? Because if you don’t, you will have problems with encfs, specifically “Permission Denied” errors when trying to do anything with the encfs-mounted file system.

-o ssh_command="ssh -i <path-to-private-key>"

This is the only way I could get sshfs to use a specific private key. As far as I can tell, there is no option to tell sshfs directly which key to use when logging into a remote system. To fake it, we tell sshfs to use a different command when running ssh; this just inserts the ssh option for specifying a private key into the ssh command string. Why not just use the default key? Because I don’t want the ability to back up and restore to be tied to a specific user on my system. At this point, I assume you know how to set up keyless login via a public/private key pair. If not, search for “ssh passwordless login” to make sure you have it set up correctly.
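
If you still need to create that dedicated key pair, a minimal sketch looks like this. The key path is an assumption (a throwaway location here; on a real system you would pick somewhere permanent and keep it readable only by you):

```shell
# Generate a dedicated, passphrase-less key for the backup job.
# The key path is an assumption; protect the private key carefully.
KEY="${TMPDIR:-/tmp}/backup_key"
rm -f "$KEY" "$KEY.pub"             # avoid the interactive overwrite prompt
ssh-keygen -q -t rsa -b 2048 -f "$KEY" -N ""

# Then install the public key on the backup machine, e.g.:
#   ssh-copy-id -i "$KEY.pub" <user>@<machine>
```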

If you have a slow connection, but extra processing power, you can add compression to the ssh connection by adding -C to the command like so:

-o ssh_command="ssh -C -i <path-to-private-key>"

Next, we need to mount the unencrypted version of the file system:

    encfs --extpass="echo <password>" <sshfs-local-mountpoint> <unencrypted-mountpoint>

The only option we added there was

--extpass="echo <password>"

which tells encfs to use an external program to get the password; here we are just echoing it back to encfs. Give this script the same protection you would give the ssh login key. What about a null password, you ask? That threw errors in my testing, and encfs will still stop and wait for user input even with a null password.
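
Putting the pieces together for cron, the whole job might look like the sketch below. The host, paths, key location, and password file are all assumptions to adapt; it reads the password from a file with cat rather than embedding it in the script, a slightly safer variant of the echo trick, and $(id -u) and $(id -g) give the same user and primary group IDs as the bash expansions used earlier. As before, it only prints the commands unless DRY_RUN=0.

```shell
#!/bin/sh
# Cron-able sketch of the full backup. Everything here (host, paths, key,
# password file) is an assumption. Prints commands unless DRY_RUN=0.
run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@" || exit 1; fi; }

KEY="$HOME/.ssh/backup_key"                               # assumed dedicated key
REMOTE="backupuser@backuphost.example.com:/srv/backups"   # assumed host/path
SSHFS_MNT="$HOME/mnt/ciphertext"
PLAIN_MNT="$HOME/mnt/plaintext"
PASSFILE="$HOME/.backup-pass"      # assumed; protect it like the ssh key

run sshfs -o "uid=$(id -u),gid=$(id -g)" \
    -o "ssh_command=ssh -C -i $KEY" "$REMOTE" "$SSHFS_MNT"
run encfs --extpass="cat $PASSFILE" "$SSHFS_MNT" "$PLAIN_MNT"
run rdiff-backup "$HOME/documents" "$PLAIN_MNT/documents"
run fusermount -u "$PLAIN_MNT"
run fusermount -u "$SSHFS_MNT"
```

Once you have verified the printed commands look right, a crontab entry such as `0 2 * * * DRY_RUN=0 /path/to/backup.sh` would run it nightly.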

Now that we have both filesystems mounted, we can back up at our pleasure. Just make sure that you unmount both directories when you are done. I hope this makes life easier for you.

Permanent link to this article: http://blog.curioussystem.com/2009/07/encrypted-remote-backup/
