Immutable Backup Records
AWS S3, S3-compatible storage providers (Backblaze B2, MinIO, Wasabi), and Google Cloud Storage offer features that let Arq make your backup records immutable, which helps protect against ransomware attacks.
Arq locks your backup data for as long as you choose.
Arq can still perform budget, retention and object cleanup functions. It just can’t remove items until their locks expire.
Once you’ve locked data, there’s no way to unlock it or delete it until the lock expires, so choose your immutability days carefully!
Set Up Immutable Backups with S3 or S3-Compatible Storage
- Choose S3 or an S3-compatible storage provider that supports the object lock API, such as B2 (using the S3-compatible API) or Wasabi. NOTE: If you’re using B2, you must add your B2 account to Arq as an “S3-compatible” storage location; choosing “B2” as the storage location type will not allow object lock.
- Create a new bucket and enable object lock on the bucket.
- Create a backup plan using that storage location.
- Edit your backup plan, click the Immutable tab, and check the ‘Make latest backup record immutable’ checkbox.
- Choose the minimum number of days to make the latest backup record immutable.
- Choose the interval for refreshing the immutability of objects.
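If you prefer to script the bucket-creation step, a minimal sketch using boto3 might look like the following. The bucket name and region are placeholders, and AWS credentials are assumed to already be configured:

```python
# Hypothetical sketch: create an S3 bucket with object lock enabled.
# Bucket name and region are placeholders; requires AWS credentials.
import boto3

s3 = boto3.client("s3", region_name="us-west-2")

# Object lock must be enabled when the bucket is created; enabling it
# also turns on object versioning, which object lock requires.
s3.create_bucket(
    Bucket="my-arq-backups",  # placeholder name
    CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
    ObjectLockEnabledForBucket=True,
)
```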
Set Up Immutable Backups with Google Cloud Storage
- In the Google Cloud Console, create a new bucket and enable “Object retention” (choose “Unlocked” mode).
- Enable “Object versioning” on the same bucket. Both settings are required.
- Create a backup plan using that storage location.
- Edit your backup plan, click the Immutable tab, and check the ‘Make latest backup record immutable’ checkbox.
- Choose the minimum number of days to make the latest backup record immutable.
- Choose the interval for refreshing the immutability of objects.
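The GCS bucket setup can be scripted as well. Here is a minimal sketch using the google-cloud-storage Python client; the bucket name is a placeholder, application-default credentials are assumed, and the `enable_object_retention` flag requires a recent version of the client library:

```python
# Hypothetical sketch: create a GCS bucket with object retention and
# object versioning enabled. Bucket name is a placeholder.
from google.cloud import storage

client = storage.Client()

# Object retention must be enabled at bucket creation time.
bucket = client.create_bucket("my-arq-backups", enable_object_retention=True)

# Object versioning can be enabled afterward; both settings are required.
bucket.versioning_enabled = True
bucket.patch()
```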
Ongoing Lock Maintenance
Arq de-duplicates data, so each new backup record points to the same data as the previous backup record except for new/modified/deleted items. Because of this, Arq refreshes the locks on objects it’s reusing.
When Arq uploads a new object, it sets the lock on (the latest version of) the object to expire at today’s date + the chosen minimum immutability days + the chosen refresh interval.
For reused objects where the lock expiration is currently earlier than today’s date + the chosen minimum immutability days, Arq resets the lock expiration to today’s date + the chosen minimum immutability days + the chosen refresh interval.
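The two rules above can be sketched as a small function. The function name and signature are our own, for illustration only, not Arq’s actual code:

```python
from datetime import timedelta

def lock_expiration(today, current_expiry, min_days, refresh_days):
    """Sketch of the lock-maintenance rules described above.

    Returns the (possibly updated) lock expiration date for an object.
    For a newly uploaded object, pass current_expiry=None.
    """
    refreshed = today + timedelta(days=min_days + refresh_days)
    if current_expiry is None:
        # New object: lock for min_days + refresh_days.
        return refreshed
    if current_expiry < today + timedelta(days=min_days):
        # Reused object whose lock would expire too soon: extend it.
        return refreshed
    # Reused object whose lock is still comfortably in the future: leave it.
    return current_expiry
```

For example, with 30 minimum immutability days and a 10-day refresh interval, a newly uploaded object is locked for 40 days, and a reused object whose lock expires 20 days from now is extended to 40 days from now.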
Why Not Extend the Locks Every Time Arq Backs Up?
Some providers charge per transaction, so updating the locks on existing data every time would incur costs. It would also make the backup activity take longer.
Lock Maintenance Example
Suppose you create a backup plan that runs daily and set immutability to 30 days, refreshed every 10 days. The first time the backup plan runs, it creates a backup record; all uploaded objects are new, and are locked for 40 days.
The next day the backup plan runs and uploads data for new/changed files, setting the lock on those to 40 days, and reuses the existing backup data for the rest of the backup record. So, some of the objects for the backup record are locked for 40 days, and some for 39 days.
Thirty days later, the backup plan runs and Arq uploads data for new/changed files, setting the lock on those to 40 days, and reuses the existing backup data for the rest of the backup record. It extends the lock to 40 days from now for any object it reused for this backup record whose lock expires in < 30 days.
At any given time, all the data referenced by the latest backup record has a lock that expires between 30 and 40 days from now.
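A small simulation of this example (daily runs, 30-day minimum immutability, 10-day refresh interval; the one-new-object-per-run assumption is a simplification of real backups) confirms the 30-to-40-day window:

```python
from datetime import date, timedelta

MIN_DAYS, REFRESH_DAYS = 30, 10

def simulate(num_days):
    """Simulate daily backup runs and return the remaining lock
    durations (in days from the last run) of all referenced objects."""
    start = date(2024, 1, 1)
    locks = {}  # object id -> lock expiration date
    for day in range(num_days):
        today = start + timedelta(days=day)
        # One new object per run; all earlier objects are reused.
        locks[day] = today + timedelta(days=MIN_DAYS + REFRESH_DAYS)
        for obj, expiry in list(locks.items()):
            if expiry < today + timedelta(days=MIN_DAYS):
                locks[obj] = today + timedelta(days=MIN_DAYS + REFRESH_DAYS)
    now = start + timedelta(days=num_days - 1)
    return [(expiry - now).days for expiry in locks.values()]
```

Running `simulate(100)` yields remaining lock durations that all fall between 30 and 40 days, matching the claim above.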
What If I Set a Budget and/or Thinning?
If you configure Arq to delete old backup records and also configure immutability, Arq will be unable to delete old backup records that are still immutable. Once the object locks expire, Arq will be able to delete the old backup records.
Ransomware Protection
If an extra-clever ransomware attack finds a way to access your backup data at S3/B2/Wasabi/GCS, it will be unable to permanently delete the backup data.
How an Attacker Could Make Your Data Look Deleted
Object lock requires object versioning to be enabled on the bucket (for both S3 and GCS).
S3: When an object is “deleted”, S3 creates a “delete marker”. Normal queries for lists of objects don’t return that object, but queries for all versions of objects do. Any attempt to “delete” a version that’s locked will fail with “access denied”, no matter what credentials are used. So if an attacker gains access to your S3 credentials, the most they can do is create delete markers on your objects. Your backup data would appear to be gone, but the locked versions are still there.
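As an illustration of the S3 behavior described above, here is a hypothetical boto3 sketch that lists both the delete markers an attacker could create and the locked versions still hiding behind them (bucket name is a placeholder; requires AWS credentials):

```python
# Hypothetical sketch: inspect delete markers and surviving locked
# versions in an object-lock-enabled bucket. Bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
resp = s3.list_object_versions(Bucket="my-arq-backups")

# "Deleted" objects have a delete marker on top of their locked versions.
for marker in resp.get("DeleteMarkers", []):
    print("delete marker:", marker["Key"], marker["VersionId"])

# The locked versions themselves are still present underneath.
for version in resp.get("Versions", []):
    print("version still present:", version["Key"], version["VersionId"])
```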
Google Cloud Storage: GCS uses per-object retention in “Unlocked” mode, which allows the retention period to be extended but not shortened. Combined with object versioning, overwrites create new versions while old versions retain their locks. Any attempt to delete a retained object version will fail with a 403 error. An attacker could overwrite objects, but the original locked versions remain intact.
In both cases, your immutable backup data is still present – it just looks deleted or overwritten.
One remaining risk: If an attacker gains access to your cloud provider account (e.g. your AWS root account or Google Cloud account), they could delete the entire storage account or bucket itself. Object lock protects objects within a bucket, but it can’t prevent deletion of the account that contains the bucket. To mitigate this, protect your cloud provider account credentials carefully, enable multi-factor authentication, and consider using IAM policies that restrict bucket deletion.
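As a sketch of the IAM mitigation mentioned above, a policy statement like the following (a hypothetical example, with a placeholder bucket ARN) explicitly denies bucket deletion to IAM identities. Note that an explicit Deny does not bind the account root user, which is why protecting root credentials and enabling MFA still matter:

```python
import json

# Hypothetical example IAM policy: deny bucket deletion on the backup
# bucket, even for identities with otherwise broad S3 permissions.
deny_bucket_deletion = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": ["s3:DeleteBucket"],
            "Resource": "arn:aws:s3:::my-arq-backups",  # placeholder ARN
        }
    ],
}
print(json.dumps(deny_bucket_deletion, indent=2))
```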
Recovering Data After an Attack
If an attacker has created delete markers (S3) or overwritten objects (GCS) to hide your backup data, Arq can recover it.
In Arq, select the storage location in the left sidebar under “STORAGE”. If a backup set’s data has been hidden by delete markers or overwrites, it will appear with “(deleted - recoverable)” next to its name. Select it and click the Recover button.
Arq will scan the backup set and recover all objects that are recoverable by removing S3 delete markers or restoring prior GCS object versions. Once the recovery activity finishes, your backup set and its data will be visible again.
An attacker may have deleted only some objects within a backup set rather than the entire thing. In that case, the backup set won’t show “(deleted - recoverable)” but you may encounter errors when restoring. If this happens, select the backup set and click the Recover button to scan for and remove any delete markers within the backup set.
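For the curious, the S3 side of this recovery boils down to removing delete markers by version id, which is permitted even while the real object versions remain locked. A hypothetical boto3 sketch of the mechanism (Arq performs this for you via the Recover button; bucket name is a placeholder):

```python
# Hypothetical sketch of the S3 recovery mechanism: deleting a delete
# marker (by its version id) makes the underlying object visible again.
# Bucket name is a placeholder; requires AWS credentials.
import boto3

s3 = boto3.client("s3")
resp = s3.list_object_versions(Bucket="my-arq-backups")
for marker in resp.get("DeleteMarkers", []):
    # Removing the marker is allowed even though the object's real
    # versions are locked.
    s3.delete_object(
        Bucket="my-arq-backups",
        Key=marker["Key"],
        VersionId=marker["VersionId"],
    )
```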