Followup: Speeding up Crashplan Backups

UPDATE

Well… I can no longer recommend CrashPlan, even using my fix below. I recently upgraded my file server, performed a computer adoption according to CrashPlan’s instructions… and CrashPlan lost all of my backups from that machine. All 16TB or so of them. It also lost my backup set definitions after the adoption.

CrashPlan support was… not particularly helpful.  They claim they see the 16TB of data attached to the right computer GUID, but I can’t see the data in my client, and when I look at my account in the CrashPlan web interface, I see nothing.  Support said it “might work” if I forced a backup to run… and it didn’t.

My CrashPlan account expires soon, and I won’t be renewing.

So, I now have a bunch more performance data from CrashPlan covering the period after I put my fix in place (from my last post).  This is just a quick followup to show that this fix does, indeed, have the desired effect.
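
For anyone landing here without having read the previous post: the fix boils down to editing CrashPlan’s conf/my.service.xml to effectively turn off data de-duplication, then restarting the CrashPlan engine. As a rough sketch on a stock headless Linux install (the element names and install path below can vary between CrashPlan versions, so verify against your own my.service.xml before running anything like this):

    # Stop the engine first so it doesn't overwrite the config file on exit
    sudo service crashplan stop

    # Cap automatic de-dup at 1 byte, which effectively disables it
    # (element names may differ in your CrashPlan version -- check first)
    sudo sed -i \
        -e 's|<dataDeDupAutoMaxFileSize>[0-9]*</dataDeDupAutoMaxFileSize>|<dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>|' \
        -e 's|<dataDeDupAutoMaxFileSizeForWan>[0-9]*</dataDeDupAutoMaxFileSizeForWan>|<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>|' \
        /usr/local/crashplan/conf/my.service.xml

    sudo service crashplan start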

They say that a picture is worth a thousand words, so voila:

[Graph: CrashPlan backup speed over time, after the fix was applied]

It’s left as an exercise for the reader to determine the point in time at which I put the fix in place.  😉

 


19 Responses to Followup: Speeding up Crashplan Backups

  1. Cleese says:

    Just a follow-up on this: I got great speeds with your fix, but for the last few days my backups have once again been crawling, at even lower speeds than the worst I was seeing before the fix. Are your backups still running at the same speed?

    • alter3d says:

      Sorry for the delay in replying — I was out of town without Internet access for 2 weeks. 🙂

      Just checked, and my backup is currently running at 8.6Mbps, so it still seems to be OK for me.

  2. It’s great that your speed has gone up, but how about the time it actually takes to do a backup? With dedupe, isn’t it possible that while your upload speed has dropped, you’re actually taking less time to do a backup than without dedupe?

    • alter3d says:

      Leaving dedupe on could, in theory, be faster. As with all other optimizations, it’s a tradeoff between the different compute resources available (CPU, RAM, network).

      That said, it’s my opinion that, in almost all cases, dedupe on CrashPlan should be turned off. Here’s why:

      – The speed reported by CrashPlan in the UI and in the logs is the “effective” backup rate — that is, taking into account compression and dedupe. There’s a clear improvement after disabling dedupe, both with my own backups and from other people I’ve talked to (or have heard from through comments on my posts).

      – From speaking with other CrashPlan users with large backup sets, the dedupe algorithm slows things down so much that by the ~5TB mark (depending on your CPU) it’s pretty much useless even on Internet connections with slow (<1Mbps) upload speed.

      – Most home users don’t have a significant amount of duplicate data, so you’re consuming a lot of CPU time calculating something for nearly no benefit. Dedupe might make sense for corporate backups if you’re doing stuff like desktop images, but even there I question its value *in CrashPlan*, because their algorithm seems too slow to be useful. When you look at storage dedupe solutions for “enterprise IT”, they generally have very high requirements (usually a really good CPU plus at least 1GB of RAM per TB of storage), which doesn’t make much sense for personal systems.

      I think that, in very limited circumstances, it makes sense to leave dedupe on, but IMHO it shouldn’t be the default, because those use cases are a very small portion of what realistically happens.

  3. Thank you for this. I was just about to ditch CrashPlan and go back to a local storage solution for backup (my dataset is about 9TB). You saved me a lot of hassle. This is working like a charm!

  4. Thanks for this post, I am so happy.
    I have a 4 TB dataset and was getting 1.2 Mbps. I made this change on an Ubuntu server and saw my rate rise to over 30 Mbps. I have Verizon FIOS, which is a pain to deal with, but I also have Cox fiber in my neighborhood, and you can’t beat competition for making service better.
    For those who haven’t seen a speed increase: did you restart the CrashPlan service? Maybe a full reboot would be best.
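
    On a stock Ubuntu install, restarting the headless engine is usually just the following (the init script name can differ depending on how CrashPlan was installed, so treat this as a sketch):

        # Restart the CrashPlan engine so it re-reads my.service.xml
        sudo service crashplan restart

        # Equivalent to calling the init script directly
        sudo /etc/init.d/crashplan restart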

  5. Pingback: Network Rockstar | Speeding up CrashPlan Backups

  6. This only works temporarily for me – I tried making the changes, and for about five minutes I’d get speeds around 10 Mbps… and supposedly nearly 100 in spikes, which is strange because my connection is generally only 5-6 Mbps upload if I’m lucky… but then it starts decreasing, and within two or three minutes at most it’s back to 1 or maybe 1.1 Mbps if I’m lucky :/ Anyone else experience this?

  7. I had the same experience as “K”. I get fast speeds for a few minutes and then it drops back down (I’m getting between 2 and 3 Mbps on Fios)…

  8. Trogdan says:

    Neat option, but buyer beware. Data-deduplication doesn’t just analyze blocks on your own local system; it also de-dups blocks against the CrashPlan servers. There are two places where this will kill you: 1) you perform a computer adoption, or 2) in my case on Linux, if my external RAID gets re-mapped to a different device (sdb -> sdc, for example), a full data-dedup pass will be performed.

    Data-dedup is MUCH faster than the WAN. On my system with a Pentium Core 2 Duo, data-dedup operations run at over 300 Mbps, whereas my uploads are usually a little less than 10 Mbps.

    I plan to use this mod to speed things up, but ONLY after dedup is completed. Otherwise, you’re just uploading the same data to the server twice.

  9. Thanks for the information on the XML file. My backup speed has doubled or so; initially it tripled, but now it’s tailed off somewhat.

    I couldn’t get your scripts to work, though. If you install the headless CrashPlan package from http://pcloadletter.co.uk/2012/01/30/crashplan-syno-package/ as I did, the file location and the method to restart are different. If you use the pcloadletter stuff, the XML file is at:
    //@appstore/CrashPlan/conf/my.service.xml.
    To edit the XML file manually, you need to log in as root; the default admin account doesn’t have enough privileges.

    You can stop and start CrashPlan interactively in DSM from the Crashplan package page if you use the pcloadletter package.

    Your scripts also require a bash shell, which I didn’t have, and the default Synology sed doesn’t support all the arguments in the script.
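
    For anyone else on the pcloadletter package, something like this plain /bin/sh version might work instead, since it avoids bash-isms and sed -i (stop CrashPlan from the DSM package page first, run it as root, then start the package again). The de-dup element names are the same ones sketched in the post above and may differ in your CrashPlan version, and the config path placeholder needs your actual volume filled in:

        #!/bin/sh
        # Fill in the full path to my.service.xml for your volume (see above)
        CONF="<your volume>/@appstore/CrashPlan/conf/my.service.xml"

        # Rewrite the de-dup limits to 1 byte without using sed -i,
        # which the stock Synology sed may not support
        sed -e 's|<dataDeDupAutoMaxFileSize>[0-9]*</dataDeDupAutoMaxFileSize>|<dataDeDupAutoMaxFileSize>1</dataDeDupAutoMaxFileSize>|' \
            -e 's|<dataDeDupAutoMaxFileSizeForWan>[0-9]*</dataDeDupAutoMaxFileSizeForWan>|<dataDeDupAutoMaxFileSizeForWan>1</dataDeDupAutoMaxFileSizeForWan>|' \
            "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"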

  10. ElectronO says:

    Today I had a 94 gig tar file which sat for a week in the “analyzing” stage, even though I have a Mac Pro with 32 gigs of RAM. I used to have no trouble sending such files. I followed your advice and now the file is happily sending and will be done in about 2 days. I might play around with setting data de-duplication to minimal via CrashPlan’s GUI to see if that enables files of this size to transfer without issue.

  11. ElectronO says:

    On OS X, the file is located here: /Library/Application Support/CrashPlan/conf
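
    In case it saves someone a search, the stop/edit/start dance on OS X looks roughly like this (quote the path because of the space in “Application Support”; the launch daemon name below assumes an all-users install and may differ):

        # Stop the CrashPlan engine before editing its config (plist name is a
        # guess for an all-users install; check /Library/LaunchDaemons first)
        sudo launchctl unload /Library/LaunchDaemons/com.crashplan.engine.plist

        # Edit my.service.xml, then start the engine again
        sudo nano "/Library/Application Support/CrashPlan/conf/my.service.xml"
        sudo launchctl load /Library/LaunchDaemons/com.crashplan.engine.plist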

  12. MikeUVA123 says:

    Your change worked great! I have gone from 4.2 Mbps to between 10 and 15 Mbps. I am not really sure why it doesn’t stay at the 60 Mbps it kept for the first 5 minutes after I made the change, but it’s still an improvement.

  13. pumpupthejam says:

    Could you share the scripts for your graphs? It would be really useful to track my own performance.

    • leeroy says:

      Yes, I would be very interested in this too.

      @alter3d From which log file are you extracting the upload speed?

  14. Pingback: CrashPlan Slow Part 2 - Cloud Storage Buzz

  15. KaHaR says:

    For those of you with multicore CPUs, it might be worth increasing the “Use up to: XXX percent CPU” setting to 100%, as CrashPlan only uses a single CPU. I was uploading at 2 Mbps before I made the change and at 8-9 Mbps after the change.

  16. wuwuwuster says:

    My Crashplan says 481 days remaining… 🙁