Ok, so I discovered a number of shortcomings in my recent attempt to sync a folder in one direction to Amazon S3 using encryption. The most important was that it wouldn't resume a failed transfer efficiently, which for large transfers wasn't at all ideal (as I learned to my own cost - damn my 256k upload speed). So, this is attempt number 2. I decided to completely rewrite the script in Python to give me some more flexibility, helped along by the availability of Boto, a nice Python library for accessing all the Amazon Web Services.
Edit: this script is deprecated in favour of a rewritten version 2.

I use Amazon S3 to host large media files, for which I want cheap, scalable bandwidth, and for expandable offsite storage of important backups. I used to have some simple incremental tar scripts for my offsite backups, but since I moved to Bacula, I've just established an alternative schedule and FileSet definition for them, covering the critical subset of data I couldn't possibly stand to lose (like company documents).
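For the curious, that sort of arrangement looks roughly like the following in Bacula's Director configuration - the resource names and path here are illustrative, not my actual config:

```
# Illustrative bacula-dir.conf fragment; names and paths are made up.
FileSet {
  Name = "Offsite-Critical"
  Include {
    Options {
      signature = MD5
      compression = GZIP
    }
    File = /srv/company-documents
  }
}

Schedule {
  Name = "Offsite-Weekly"
  Run = Full 1st sun at 02:00
  Run = Incremental mon-sat at 02:00
}
```

A Job resource then ties the FileSet and Schedule together, so the offsite subset runs independently of the main backup cycle.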
As I’ve talked about recently, I’m setting up (as a background task) a new Ubuntu server to take over the main file server, mail server, build server, backup server, web server, and you-name-it-server duties of my home office. It will eventually take over from a venerable Debian server, built on old hardware left over from retired machines (except for two new mirrored hard drives), which has basically sat in a corner being rock-solid for years without me touching it - at least until the PSU failed (and was rapidly replaced), reminding me that I’ve been meaning to upgrade it for over a year.