Proxmox Backup Server 3.4

Summary

Proxmox Backup Server 3.4 improves backup operations with optimized garbage collection, granular sync job selection, and a statically linked client. The faster garbage collection reclaims unused storage more quickly, granular sync jobs give tighter control over offsite copies, and the static client simplifies file-level backups across diverse Linux systems.


Main Story

So, Proxmox Server Solutions just dropped Proxmox Backup Server 3.4, and honestly, it’s a pretty solid upgrade, especially if you’re deep into data protection (and who isn’t these days?). They’ve really focused on making things faster and easier to use. Let’s dive in.

Smarter Garbage Collection: Speeding Things Up

If you’re not familiar, Proxmox Backup Server deduplicates backups by splitting the data into chunks, so identical chunks are stored only once. Over time, though, chunks that no backup references anymore pile up, and that’s where garbage collection (GC) comes in: it clears out the unreferenced chunks and reclaims the storage. In previous versions GC could be slow on large datastores, because the marking phase updates file metadata (the access time) for every chunk still in use. Version 3.4 speeds this up with a cache of recently marked chunks, so chunks shared by many backups aren’t touched over and over again.
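
To make that concrete, here’s a minimal Python sketch of the idea, not PBS’s actual code (which is written in Rust): a mark-and-sweep pass over a deduplicated chunk store, where a bounded LRU cache lets the mark phase skip chunks it has touched recently. All of the names and the cache shape are illustrative assumptions.

```python
# Conceptual sketch (not PBS source code): mark-and-sweep garbage collection
# over a deduplicated chunk store. A bounded LRU cache of recently marked
# chunk digests lets the mark phase skip redundant atime updates for chunks
# that many backups share.
import os
from collections import OrderedDict

class LruSet:
    """Bounded set of chunk digests with least-recently-used eviction."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._entries = OrderedDict()

    def insert(self, key):
        """Return True if key was already cached (the touch can be skipped)."""
        if key in self._entries:
            self._entries.move_to_end(key)
            return True
        self._entries[key] = None
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict the oldest entry
        return False

def mark_phase(index_files, chunk_path, cache_capacity):
    """Touch every chunk referenced by any backup index, skipping cache hits."""
    cache = LruSet(cache_capacity)
    touched = skipped = 0
    for index in index_files:           # each index lists the digests one backup uses
        for digest in index:
            if cache.insert(digest):
                skipped += 1            # marked recently by another backup: skip the syscall
            else:
                os.utime(chunk_path(digest))  # update atime/mtime to mark the chunk as in use
                touched += 1
    return touched, skipped

def sweep_phase(all_digests, chunk_path, cutoff):
    """Delete chunks whose atime is older than the cutoff, i.e. no longer referenced."""
    for digest in all_digests:
        if os.stat(chunk_path(digest)).st_atime < cutoff:
            os.remove(chunk_path(digest))
```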

Yes, the cache uses a bit more memory, but the trade-off is usually well worth it: GC finishes much faster. And as an admin, you can tune the cache capacity per datastore, striking the balance between GC speed and memory use that fits your hardware.
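
For a rough feel for that trade-off, here’s a back-of-the-envelope calculation. The 32-byte figure matches a SHA-256 chunk digest; the per-entry overhead and the capacity values are assumptions for illustration, not PBS defaults.

```python
# Back-of-the-envelope memory estimate for the GC cache. 32 bytes matches a
# SHA-256 chunk digest; the per-entry overhead and the capacities shown are
# assumed values for illustration, not PBS defaults.
DIGEST_BYTES = 32        # SHA-256 digest size
OVERHEAD_BYTES = 64      # assumed per-entry bookkeeping (hash map + LRU links)

for capacity in (1 << 20, 4 << 20, 8 << 20):   # 1 Mi, 4 Mi, 8 Mi cached digests
    mib = capacity * (DIGEST_BYTES + OVERHEAD_BYTES) / 2**20
    print(f"{capacity:>9,} cached digests ~ {mib:,.0f} MiB")
```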

Offsite Backups: More Control Than Ever

Offsite backups are, without a doubt, critical; having a copy of your data somewhere safe is just good practice. Proxmox Backup Server already uses sync jobs to copy backups between locations, and you could previously filter sync jobs by backup group, but that was fairly coarse. Version 3.4 goes further: a sync job can now be restricted to snapshots that are encrypted, verified, or both.
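
Conceptually, the selection works like a filter over the snapshot list. The sketch below is illustrative only, with made-up field names; on a real server these criteria live in the sync job configuration rather than in a script.

```python
# Illustrative only: which snapshots a sync job would pick up when it is
# restricted to encrypted and verified snapshots. Field names are invented
# for this example.
from dataclasses import dataclass

@dataclass
class Snapshot:
    group: str          # e.g. "vm/101"
    time: str           # e.g. "2025-04-01T02:00:00Z"
    encrypted: bool
    verify_state: str   # "ok", "failed", or "none" (never verified)

def eligible(snap, require_encrypted=True, require_verified=True):
    if require_encrypted and not snap.encrypted:
        return False
    if require_verified and snap.verify_state != "ok":
        return False
    return True

snapshots = [
    Snapshot("vm/101", "2025-04-01T02:00:00Z", encrypted=True,  verify_state="ok"),
    Snapshot("vm/101", "2025-04-02T02:00:00Z", encrypted=True,  verify_state="none"),
    Snapshot("ct/200", "2025-04-02T03:00:00Z", encrypted=False, verify_state="ok"),
]

to_sync = [s for s in snapshots if eligible(s)]
print([f"{s.group} @ {s.time}" for s in to_sync])   # only the encrypted, verified snapshot
```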

Think about it: you now have much finer control over your offsite backup strategy. It also makes it easier to meet compliance requirements, something we’re all getting more familiar with. Speaking of compliance, I once worked on a project where we had to prove our offsite backups were securely encrypted; this kind of feature would have been a lifesaver then.

Backups Made Simple: The Static Client

Sure, Proxmox Backup Server plays nice with Proxmox VE, but it isn’t limited to it: its command-line client can back up files from any Linux host. What’s cool in this release is the new statically linked build of that client. Because it carries no shared-library dependencies, you can drop the single binary onto practically any Linux distribution and start backing up files, regardless of what’s installed on the host. It just works, and I like that.
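
As a sketch of how you might drive it, here’s a small Python wrapper that calls the client from a scheduled job. The repository string, archive names, and paths are placeholders, and the script assumes the static binary is on the PATH and that authentication (for example PBS_PASSWORD or an API token) is supplied via the environment.

```python
# Hypothetical wrapper: drive the statically linked proxmox-backup-client from
# cron or a systemd timer on any Linux host. Repository string, archive names,
# and paths are placeholders; credentials are expected in the environment
# rather than hard-coded here.
import os
import subprocess

env = dict(
    os.environ,
    PBS_REPOSITORY="backup@pbs@192.0.2.10:offsite-store",  # placeholder repository
)

subprocess.run(
    [
        "proxmox-backup-client", "backup",
        "etc.pxar:/etc",      # archive name : directory to back up
        "home.pxar:/home",
    ],
    check=True,
    env=env,
)
```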

Other Noteworthy Improvements

But wait, there’s more!
* Faster Tape Backups: You can now tune the number of threads used for reading chunks during tape backups, which can noticeably boost throughput if you rely on tape for long-term storage (there’s a conceptual sketch of the idea right after this list).
* Up-to-date Platform: It’s built on Debian 12.10 (“Bookworm”) and uses Linux kernel 6.8.12-9 as the stable default. Plus, there’s an optional 6.14 kernel if you need it for newer hardware. ZFS 2.2.7 is in there, too, with patches for the 6.14 kernel.
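
And here’s the conceptual sketch promised above for the tape change: several reader threads fetch chunks in parallel while a single writer streams them to the sequential tape device. It’s not PBS code; the thread count, queue size, and the chunk-read stand-in are illustrative.

```python
# Conceptual sketch (not PBS code): multiple reader threads fetch chunks in
# parallel while one writer streams them to the sequential tape device.
import queue
import threading
from concurrent.futures import ThreadPoolExecutor

def read_chunk(digest):
    # Stand-in for reading a chunk from the datastore (normally disk I/O).
    return digest.encode() * 1024

def tape_backup(digests, read_threads=4):
    buffer = queue.Queue(maxsize=read_threads * 2)
    bytes_written = 0

    def writer():
        nonlocal bytes_written
        while True:
            chunk = buffer.get()
            if chunk is None:            # sentinel: nothing left to write
                break
            bytes_written += len(chunk)  # stand-in for a sequential tape write

    writer_thread = threading.Thread(target=writer)
    writer_thread.start()

    # Readers overlap their I/O; the writer drains the queue in arrival order.
    with ThreadPoolExecutor(max_workers=read_threads) as pool:
        for chunk in pool.map(read_chunk, digests):
            buffer.put(chunk)

    buffer.put(None)
    writer_thread.join()
    return bytes_written

print(tape_backup([f"chunk-{i:04x}" for i in range(64)]))
```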

So, how do you get your hands on it? Proxmox Backup Server 3.4 is available as an ISO image for new installs, or you can upgrade from previous versions using APT. It’s still free and open-source under the GNU AGPLv3 license. And, of course, Proxmox offers enterprise support subscriptions if you want access to their Enterprise Repository, updates, and support. Frankly, for the peace of mind, it’s worth considering. I think it’s a great step forward for data protection.

3 Comments

  1. The enhanced control over offsite backups, particularly for compliance, is a significant improvement. Could these granular sync options also streamline disaster recovery processes, allowing for quicker restoration of specific data sets?

    • That’s a great question! Absolutely, the granular sync options should significantly streamline disaster recovery. By selecting specific snapshots for restoration, you can reduce the amount of data that needs to be transferred, leading to quicker recovery times for critical systems. It definitely adds a layer of efficiency to the DR process!

  2. The faster garbage collection through caching seems like a significant enhancement. I’m curious how the memory usage scales with larger datasets and the impact of adjusting the caching on individual datastores. Has anyone tested this in environments with petabytes of data?
