Immutable Backup Records
AWS S3 and some S3-compatible storage providers like Backblaze B2, MinIO, and Wasabi offer a feature called “object lock”. Arq can use this feature to make your backup records immutable and therefore immune to ransomware attacks.
Arq locks your backup data for as long as you choose.
Arq can still perform budget, retention and object cleanup functions. It just can’t remove items until their locks expire.
Once you’ve locked data, there’s no way to unlock it or delete it until the lock expires, so choose your immutability days carefully!
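For context, here is roughly what locking an uploaded object looks like at the S3 API level. This is a minimal Python/boto3 sketch, not Arq's actual code, and the bucket and key names are hypothetical:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

# Hypothetical names for illustration; the bucket must have
# object lock enabled.
BUCKET = "my-arq-backups"
KEY = "objects/abc123"

# Upload an object and lock it for 40 days. In COMPLIANCE mode,
# no credentials (not even the root account) can delete this
# version or shorten its retention until the date passes.
retain_until = datetime.now(timezone.utc) + timedelta(days=40)
s3.put_object(
    Bucket=BUCKET,
    Key=KEY,
    Body=b"backup data",
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=retain_until,
)
```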
Set Up Immutable Backups with Object Lock
- Choose S3 or an S3-compatible storage provider that supports the object lock API, such as B2 (using the S3-compatible API) or Wasabi. NOTE: If you’re using B2, you must add your B2 account to Arq as an “S3-compatible” storage location; choosing “B2” as the storage location type will not allow object lock.
- Create a new bucket and enable object lock on the bucket (if you’re scripting this step, see the sketch after this list).
- Create a backup plan using that storage location.
- Edit your backup plan, click the Immutable tab, and check the ‘Make latest backup record immutable’ checkbox.
- Choose the minimum number of days to make the latest backup record immutable.
- Choose the interval for refreshing the immutability of objects.
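If you prefer to script the bucket creation rather than use your provider's console, note that on S3 object lock can only be enabled at bucket-creation time, not added to an existing bucket. A minimal boto3 sketch, with a hypothetical bucket name and region:

```python
import boto3

s3 = boto3.client("s3", region_name="us-west-2")  # hypothetical region

# Object lock must be enabled when the bucket is created; doing so
# also enables object versioning on the bucket automatically.
s3.create_bucket(
    Bucket="my-arq-backups",  # hypothetical name
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    ObjectLockEnabledForBucket=True,
)
```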
Ongoing Lock Maintenance
Arq de-duplicates data, so each new backup record points to the same data as the previous backup record except for new/modified/deleted items. Because of this, Arq refreshes the locks on objects it’s reusing.
When Arq uploads a new object, it sets the lock on the latest version of the object to expire at today’s date + the chosen minimum immutability days + the chosen refresh interval.
For reused objects where the lock expiration is currently earlier than today’s date + the chosen minimum immutability days, Arq resets the lock expiration to today’s date + the chosen minimum immutability days + the chosen refresh interval.
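Expressed as code, the refresh rule looks roughly like the sketch below, which uses boto3's put_object_retention. The function and constants are hypothetical illustrations of the rule just described, not Arq's implementation:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")

MIN_IMMUTABLE_DAYS = 30     # chosen minimum immutability
REFRESH_INTERVAL_DAYS = 10  # chosen refresh interval

def refresh_lock_if_needed(bucket, key, version_id, current_retain_until):
    """Re-lock a reused object once its remaining lock time drops
    below the minimum immutability window."""
    now = datetime.now(timezone.utc)
    if current_retain_until < now + timedelta(days=MIN_IMMUTABLE_DAYS):
        # Extending a COMPLIANCE-mode lock is allowed; shortening it is not.
        s3.put_object_retention(
            Bucket=bucket,
            Key=key,
            VersionId=version_id,
            Retention={
                "Mode": "COMPLIANCE",
                "RetainUntilDate": now + timedelta(
                    days=MIN_IMMUTABLE_DAYS + REFRESH_INTERVAL_DAYS
                ),
            },
        )
```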
Why Not Extend the Locks Every Time Arq Backs Up?
Some providers charge per transaction, so updating the locks on existing data every time would incur costs. It would also make the backup activity take longer.
Lock Maintenance Example
For example, suppose you create a backup plan that runs daily and set immutability to 30 days, refreshed every 10 days. The backup plan runs the first time and creates a backup record; all uploaded objects are new, and are locked for 40 days.
The next day the backup plan runs and uploads data for new/changed files, setting the lock on those to 40 days, and reuses the existing backup data for the rest of the backup record. So, some of the objects for the backup record are locked for 40 days, and some for 39 days.
About ten days later, the locks on objects reused since the first backup drop below 30 days remaining. When the backup plan runs, Arq uploads data for new/changed files, setting the lock on those to 40 days, and reuses the existing backup data for the rest of the backup record. It extends the lock to 40 days from now for any reused object whose lock expires in fewer than 30 days.
At any given time, all the data referenced by the latest backup record has a lock that expires between 30 and 40 days from now.
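You can sanity-check that 30-to-40-day invariant with a quick simulation of the schedule described above (plain Python, no API calls, made-up object names):

```python
from datetime import date, timedelta

MIN_DAYS, REFRESH = 30, 10
lock_expiry = {}  # object id -> lock expiration date
start = date(2024, 1, 1)

for day in range(60):  # daily backups for 60 days
    today = start + timedelta(days=day)
    # Each run uploads some new objects, locked for 40 days...
    lock_expiry[f"new-{day}"] = today + timedelta(days=MIN_DAYS + REFRESH)
    # ...and re-locks any reused object with < 30 days remaining.
    for obj, expiry in lock_expiry.items():
        if expiry < today + timedelta(days=MIN_DAYS):
            lock_expiry[obj] = today + timedelta(days=MIN_DAYS + REFRESH)

remaining = [(expiry - today).days for expiry in lock_expiry.values()]
assert all(MIN_DAYS <= r <= MIN_DAYS + REFRESH for r in remaining)
print(min(remaining), max(remaining))  # prints: 30 40
```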
What If I Set a Budget and/or Thinning?
If you configure Arq to delete old backup records (via a budget or thinning) and also configure immutability, Arq will be unable to delete backup records that are still immutable. Once their object locks expire, Arq will be able to delete them.
Ransomware Protection
If an extra-clever ransomware attack finds a way to access your backup data at S3/B2/Wasabi, it will be unable to permanently delete the backup data.
More on Object Lock and Object Versioning
Object lock also requires that object versioning be enabled on the bucket. When an object is “deleted”, S3 creates a “delete marker”. Normal queries for lists of objects don’t return that object, but queries for all versions of objects do.
Any attempt to “delete” a version that’s locked will fail with “access denied”, no matter what credentials are used.
If an attacker or anyone else “deletes” your object-locked data, they’re just creating “delete markers”. You can remove the delete markers to make your data visible again. We’ve written a small utility called “s3undelete” that can remove delete markers from any data set.
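The s3undelete utility itself isn't reproduced here, but removing delete markers is straightforward with any S3 SDK. Here is a minimal boto3 sketch of the same idea, with a hypothetical bucket name:

```python
import boto3

s3 = boto3.client("s3")
BUCKET = "my-arq-backups"  # hypothetical

# Deleting a delete marker (by its version id) "undeletes" the object:
# the newest real version becomes visible to normal queries again.
# The locked data versions themselves were never removable.
paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=BUCKET):
    for marker in page.get("DeleteMarkers", []):
        s3.delete_object(
            Bucket=BUCKET,
            Key=marker["Key"],
            VersionId=marker["VersionId"],
        )
```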