Why this article exists

I sought to answer a few questions about Veeam and Minio buckets:

  1. How much space do Veeam backups take up on a Minio-based Object Storage repository?
  2. How much difference does the Block Size setting make to space consumed?
  3. How do the object size distributions change when the Block Size setting is changed?

In the process of answering these questions, I came across an interesting discovery. When Veeam deleted objects that were no longer required by the retention policy, it left behind DeleteMarkers that never got cleaned up. This amounted to a million DeleteMarkers created in only two weeks. These expired DeleteMarkers are zero-byte objects that Minio has to deal with. Ouch. Most of the time, these DeleteMarkers don't cause much of a problem: they don't consume a lot of space, nor do they impact the performance of backups or restores.
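For context, a delete marker is the zero-byte placeholder an S3-compatible store creates when you delete an object in a versioned bucket without naming a specific version. A quick sketch with the AWS CLI (the bucket and key names here are illustrative, not from my environment):

```shell
# In a versioned bucket, a DELETE without --version-id removes no data;
# it just stacks a zero-byte delete marker on top of the object.
aws s3api delete-object --bucket test-veeam-tj --key example-backup-object
# The response reports "DeleteMarker": true plus the new marker's VersionId.
```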

That doesn’t sound so bad…

However, they do present an issue when it comes to managing your Minio cluster and healing failed disks. A huge benefit of a distributed Minio cluster is that recovering from disk failures can be easier than with traditional RAID arrays. Additionally, Minio is a great choice for object storage because, just like Veeam, you can use many different types of hardware to build these clusters. Especially for backup storage, I'd suggest that most Minio clusters will be built using traditional spinning hard drives. So what happens when the healing process needs to transfer hundreds of millions of zero-byte objects between nodes built with those drives? Minio doesn't know they aren't valid, so it *has* to heal those objects. As you can quickly guess, this becomes a long process – often far longer than comfortable. Minimizing object counts (where applicable) quickly becomes critical to maintaining a healthy Minio cluster. That's not to say Minio can't perform with hundreds of millions of objects in its cluster – it absolutely can – but we don't need to make it work harder by leaving a bunch of digital crud lying around.

Nearly a million delete markers after only 18 days.

Nearly all the “Less than 1024 B” objects in the screenshot above are expired delete markers.

How did we get into this mess? That’s not standard Veeam behaviour!

You’re absolutely correct. Veeam should not be leaving behind these expired delete markers. There are two ways you can get into this mess.

  1. Create a plain Minio bucket without Object Locking and Versioning enabled. Toggle Versioning on, then toggle it off. Unfortunately, toggling it off doesn't actually disable versioning; it only suspends it. This is what Veeam support claims caused my problem.
  2. Run Veeam v11 with Minio acting as the capacity/archive tier. Veeam version 11 had a bug that left expired DeleteMarkers behind, even with a properly configured Minio bucket.

For the first instance, I never went back and tested creating a Minio bucket with versioning left as "unversioned" to see whether the problem recurred. Since Veeam can utilize Immutability with object storage such as Minio provides, it makes *far* more sense to create a Minio bucket with Object Locking enabled and utilize Immutability when creating the Veeam repo.
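If you're creating the bucket from scratch with the mc client, Object Locking (which requires versioning) can be enabled at creation time. A minimal sketch, reusing my n1t alias (the bucket name is illustrative):

```shell
# --with-lock creates the bucket with Object Locking (and versioning)
# enabled from day one, so versioning is never toggled after the fact.
mc mb --with-lock n1t/veeam-immutable
```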

For the second instance, Veeam v12 has the fix applied. If you are running Veeam v11 and want to apply this fix, contact Veeam support (you can reference Case #06154552) and they will provide a patch.

Now that we’re in this mess, what can we do to get out of it?

There are two ways to clean up these DeleteMarkers when they’ve occurred.

  1. Implement an ILM rule to clean up expired DeleteMarkers (if you are running Minio).
  2. Run a script to clean up the DeleteMarkers (for any other S3 storage).

Creating an ILM rule

Creating an ILM rule to clean up expired DeleteMarkers is really easy. However, it's very slow; it may take days, weeks, or months to fully purge your bucket of these dastardly objects. That said, it's a set-it-and-forget-it type of situation and takes nearly no effort to set up.

From a machine with the Minio client (mc) configured for your cluster, run the command below.

mc ilm rule add --expire-delete-marker ALIAS/BUCKET

# example where n1t is my alias for my Minio pool/node, and test-veeam-tj is my bucket
mc ilm rule add --expire-delete-marker n1t/test-veeam-tj

Validate rule exists

mc ilm rule list n1t/test-veeam-tj
Minio ilm policy for Veeam buckets

Now you can leave this running and Minio will clean up the expired DeleteMarkers periodically, or (quoting a Minio developer here):

ILM cleanup can happen in two different situations. One, i.e typically, the scanner would visit an object and based on the ILM policy configured it would expire this object. The second situation is when you perform a list operation or a stat-like operation, MinIO uses this opportunity to apply any pending ILM actions on this object.

Krishnan Parthasarathi – Minio

I managed to get Minio to clean these up faster by browsing through the bucket to initiate list operations, at which point Minio cleaned up any DeleteMarkers I stumbled upon. Unfortunately, an ILM rule may take a long time to clean this up (YMMV), so I prefer running the script below.
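Since list and stat operations give Minio a chance to apply pending ILM actions, you can nudge things along from the mc client instead of clicking through the console. A sketch against the same alias and bucket as above:

```shell
# Walk every object version in the bucket; Minio applies pending ILM
# actions (such as expiring delete markers) on the objects it lists.
mc ls --recursive --versions n1t/test-veeam-tj > /dev/null
```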

Running a cleanup script

Veeam provided me with a script which I will share here. All disclaimers are valid here (I am not responsible for any damage or disruption caused by your use of this script. I provide it freely here for you to use at your own risk).

function Remove-DeleteMarkers {
    param(
        [Parameter(Mandatory = $true)]
        [string] $Bucket,
        [Parameter(Mandatory = $false)]
        [string] $EndPoint
    )

    # Fetch one page of object versions, queue up its delete markers,
    # and return the continuation marker for the next page
    function addMarkers {
        param(
            [Parameter(Mandatory = $false)]
            [string] $NextMarker
        )
        if (!$NextMarker) {
            if ($EndPoint) { $versions = Get-S3Version -BucketName $Bucket -EndpointUrl $EndPoint }
            else { $versions = Get-S3Version -BucketName $Bucket }
        }
        else {
            if ($EndPoint) { $versions = Get-S3Version -BucketName $Bucket -KeyMarker $NextMarker -EndpointUrl $EndPoint }
            else { $versions = Get-S3Version -BucketName $Bucket -KeyMarker $NextMarker }
        }
        $markers = $versions.Versions | Where-Object IsDeleteMarker -EQ $true
        foreach ($marker in $markers) { $script:keyVersions += @{ Key = $marker.Key; VersionId = $marker.VersionId } }
        return $versions.NextKeyMarker
    }

    # Batch-delete the queued markers; the S3 DeleteObjects API accepts
    # at most 1000 keys per request, hence the [0..999] slice
    function deleteMarkers {
        param(
            [Parameter(Mandatory = $true)]
            [array] $ListToDelete
        )
        if (!$EndPoint) {
            Remove-S3Object -BucketName $Bucket -KeyAndVersionCollection $ListToDelete[0..999] -Force -ReportErrorsOnly
        }
        else {
            Remove-S3Object -BucketName $Bucket -KeyAndVersionCollection $ListToDelete[0..999] -Force -ReportErrorsOnly -EndpointUrl $EndPoint
        }
    }

    $markersCounter = 0
    do {
        $script:keyVersions = @()
        if ($token) {
            $token = addMarkers -NextMarker $token
        }
        else {
            $token = addMarkers
        }
        if ($script:keyVersions) {
            deleteMarkers -ListToDelete $script:keyVersions
            $markersCounter += $script:keyVersions.Count
            [string]$script:keyVersions.Count + " delete markers were removed..."
        }
    } while ($token)
    "Total removed delete markers: " + [string]$markersCounter
}

Ensure you have the right tools

The above PowerShell script needs a few extra steps to utilize it properly.

First, install the AWS.Tools.S3 PowerShell module.

Install-Module AWS.Tools.S3

Now set up your AWS credentials.

Set-AWSCredential -AccessKey myusername -SecretKey mysupersecretkey -StoreAs test-veeam-tj

Set your PowerShell session to use this AWS credential profile.

Set-AWSCredential -ProfileName test-veeam-tj

Finally, remove these DeleteMarkers. For my purposes, I copied and pasted the code above into my PowerShell console and ran the function directly by calling it with the appropriate parameters (as seen below).

Remove-DeleteMarkers -Bucket test-veeam-tj -EndPoint https://s3-ca-east-2.stage2data.com
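To check progress (or confirm the bucket is clean afterwards), the AWS CLI can count the remaining delete markers. This assumes the aws CLI is installed and configured with the same credentials and endpoint:

```shell
# Count remaining delete markers; the || `[]` fallback makes the query
# return 0 when the DeleteMarkers key is absent from the response.
aws s3api list-object-versions \
  --bucket test-veeam-tj \
  --endpoint-url https://s3-ca-east-2.stage2data.com \
  --query 'length(DeleteMarkers || `[]`)'
```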

For approximately 1 million DeleteMarkers on a test cluster (not a high-performance cluster by any means) it took roughly 15 hours to clean up all DeleteMarkers. YMMV.

DeleteMarkers in Minio bucket all cleaned up.

My backups were paused during those last few days until I could figure out what to do. Since you can't enable Object Locking on a bucket after it's been created, I had to delete and re-create my bucket.

Lessons Learned

Here’s the real kicker of this whole situation. If I hadn’t accidentally suspended versioning on a bucket, I wouldn’t have discovered that Minio S3 buckets used by Veeam v11 (and older) can contain millions of DeleteMarkers that require cleaning up.

It’s not hard to believe that Veeam could simply be creating millions of small objects. However, when I ran through my tests, the “Less than 1024 B” object count simply didn’t make sense. When I explored further, I could see that the delta for the number of objects deleted was always nearly the same as the number of “Less than 1024 B” objects created. Some tiny objects are to be expected, but this many? Something smelled fishy.

Learning from your mistakes is critical in life. This time it just so happened that learning from this mistake uncovered something huge.

Now, let’s get to work on cleaning up our other buckets!