Straight Talk: Data backup and deduplication conclusion

Missed out on the previous parts of our Straight Talk series? Check out the introduction to data backup and deduplication, backup to tape, disk staging, enter data deduplication, sizing a disk backup system and considerations and questions for vendors.

In the previous sections, we explained the various backup complexities and how the right architecture can solve the backup and restore problem for the long term. This is the last section in the series, and it summarises the consequences of selecting the wrong backup architecture for your environment.

Disk backup with deduplication is not simply primary storage or a media server with a deduplication feature bolted on. Depending on the deduplication method and the scaling architecture you choose, much of your backup environment can be affected, with consequences such as:

  • Slow backups
  • Expanding backup windows
  • Undersized systems that hit capacity limits within months
  • Costly forklift upgrades for systems that don't scale as data grows
  • Slow restores of traditional full or image backups
  • Slow offsite tape copy
  • Slow instant recoveries of files, objects, and VMs — in hours versus minutes
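Why the deduplication method matters can be seen in a toy example. The sketch below (an illustration only, not any vendor's implementation) uses simple fixed-block hashing: each block is fingerprinted, duplicate blocks are stored once, and the deduplication ratio is total blocks divided by unique blocks. Real systems use more sophisticated chunking, but the capacity arithmetic works the same way.

```python
import hashlib

def dedup_stats(data: bytes, block_size: int = 4096):
    """Fixed-block deduplication sketch: hash each block and
    count unique blocks. Returns (total, unique, dedup_ratio)."""
    seen = set()
    total = 0
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        seen.add(hashlib.sha256(block).hexdigest())
        total += 1
    ratio = total / len(seen) if seen else 0.0
    return total, len(seen), ratio

# Two "weekly fulls" where only the last 4 KB block changed
# deduplicate very well, since most blocks repeat:
week1 = bytes(64 * 1024)                 # 64 KB of identical data
week2 = week1[:-4096] + b"new!" * 1024   # only the final block differs
total, unique, ratio = dedup_stats(week1 + week2)
```

In this contrived case, 32 blocks reduce to 2 unique blocks, a 16:1 ratio; the same mechanism explains why mostly-unchanged full backups consume far less disk than their raw size suggests.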

The choice of architecture for disk backup with deduplication can help you improve your backup environment, or it can lead you to simply replace the old tape-based challenges with new, more expensive disk-based challenges.
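The "capacity limits within months" consequence above is straightforward compound-growth arithmetic. The sketch below uses entirely illustrative numbers (110 TB usable, 100 TB protected, 30% annual growth) to show how quickly a system sized with little headroom runs out of room:

```python
def months_until_full(usable_tb: float, initial_tb: float,
                      annual_growth: float) -> int:
    """Months until protected data exceeds usable capacity,
    assuming growth compounds monthly. Figures are illustrative,
    not vendor specifications."""
    monthly = (1 + annual_growth) ** (1 / 12) - 1
    data, months = initial_tb, 0
    while data <= usable_tb:
        data *= 1 + monthly
        months += 1
    return months

# A system bought with only 10% headroom, at 30% annual data growth:
months = months_until_full(usable_tb=110.0, initial_tb=100.0,
                           annual_growth=0.30)
```

Here the system fills in roughly five months, which is why a scale-out architecture (add capacity as data grows) avoids the forklift upgrades that a fixed-controller, scale-up design can force.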

With each vendor, take the time to ask many questions and do a thorough comparison, because disk backup systems truly are not all created equal.

Bill Andrews is president and CEO of ExaGrid Systems