One of the great things about the internet is the availability of cheap or free online services, so many clients use Gmail, Dropbox, GitHub and the like for their business operations. But all too often they forget that these services are playing the oldest game in the technology industry: vendor lock-in.
The ones I mentioned are not too bad; you can cheaply and easily rescue or back up your data to another location, or move to an alternative provider. Not all services are like that.
We are in the middle of a migration project from NetSuite (a SaaS ERP system built on Oracle) to xTuple, an open source ERP system built around PostgreSQL. It is a slow and painful migration: there is no standard for ERP data, and exporting it over SOAP is slow and clumsy. Anyway, as a pleasant distraction from this large migration, the same client also wanted us to look at migrating away from Backpack, a 37signals product.
Backpack, unlike the SaaS systems I mentioned above, has deliberately made it hard, practically impossible, to migrate away from their service. Backpack's primary offering is an online file storage service that lets you give clients or suppliers access to shared files and folders. It is web-only (unlike Dropbox or Box.net): there is no desktop client, so the web interface is the only way to access the files.
So how to rescue the data? Read on for the trick.
While I did attempt to outsource this, the project was over-running so horrifically that me spending one day on it would be far more efficient than waiting for the remote developer to complete it.
The code and the process are pretty simple.
You fire up the browser, go to the site you want to download from (e.g. Backpack), log in, then press the mirror button. The tool then:
* requests the list of links on the page by running a method from inject.js and returning JSON to the console (via console.log);
* iterates through the link list, again calling a method from inject.js, using XMLHttpRequest to find out whether each link is HTML (in which case it is flagged for later parsing) or downloadable;
* if it is downloadable, requests the file, returns a JSON-encoded array of bytes via console.log() (yes, I know that's not amazingly efficient, but it worked...), and saves them to the file system;
* then iterates through the flagged HTML links, repeating the whole process.
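The injected side of those steps can be sketched roughly as below. This is a minimal, hypothetical reconstruction: the post does not show inject.js itself, so all function names and the JSON message shapes are my own illustration.

```javascript
// Hypothetical sketch of inject.js. All names and message formats here
// are illustrative, not the actual tool's API.

// Turn an XHR binary-string response into a plain array of bytes so it
// can be JSON-encoded onto the console. Inefficient, as noted above,
// but simple, and running inside the page keeps the logged-in session.
function bytesFromBinaryString(s) {
  var bytes = [];
  for (var i = 0; i < s.length; i++) {
    bytes.push(s.charCodeAt(i) & 0xff);
  }
  return bytes;
}

// Collect every link on the current page and hand the list back to the
// driver process as JSON on the console.
function listLinks() {
  var urls = Array.prototype.map.call(
    document.querySelectorAll('a[href]'),
    function (a) { return a.href; }
  );
  console.log(JSON.stringify({ type: 'links', urls: urls }));
}

// Fetch one link: HTML responses are flagged for later parsing,
// anything else is returned as a downloadable byte array.
function fetchLink(url) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, false); // synchronous, to keep the sketch simple
  xhr.overrideMimeType('text/plain; charset=x-user-defined');
  xhr.send();
  var isHtml = (xhr.getResponseHeader('Content-Type') || '')
    .indexOf('text/html') === 0;
  if (isHtml) {
    console.log(JSON.stringify({ type: 'html', url: url }));
  } else {
    console.log(JSON.stringify({
      type: 'file',
      url: url,
      bytes: bytesFromBinaryString(xhr.responseText)
    }));
  }
}
```

The driver process just watches the console output, parses each JSON line, and writes any `file` messages out to disk.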
It uses a few folders to store the current state, so if it crashes or you kill it (it can eat through memory), it can carry on where it left off.
Using this tool we managed to rescue almost 10GB of data from Backpack.