Azure Innovators

A comprehensive guide for IT professionals dealing with the widespread Azure VM boot issues following Microsoft’s July Patch Tuesday updates

The Problem

This past Tuesday, July 8th, 2025, Microsoft released 137 patches. While many critical vulnerabilities were closed, the deployment of one change in particular has caused significant boot failures across Azure virtual machines worldwide. If you’re reading this, you’re likely one of the many IT professionals dealing with VMs that simply won’t restart after the patch installation.

You’re not alone. This issue has affected countless organizations globally, and it’s not something that could have been prevented through standard operational procedures. In one of our customers’ environments, 71% of their Azure VMs were impacted by this problem.

What We Know

    • Scope: Global issue affecting numerous Azure customers

    • Cause: The July update introduced changes related to Secure Boot. If Secure Boot is not enabled before the patch is installed, the changes cannot be applied correctly and the operating system becomes non-bootable.

    • Microsoft’s Response: No official fix released yet (though one is expected soon). UPDATE: Microsoft released an Out-Of-Band update to resolve this issue, KB5064489: https://support.microsoft.com/en-us/topic/july-13-2025-kb5064489-os-build-26100-4656-out-of-band-14a82ab2-100f-4dd4-8141-f490ec90c8f4

    • Timeline: Began following the July 2025 Patch Tuesday release on July 8th, 2025

Significant Impact

For those using automated deployment solutions, including Azure Update Manager, these updates were applied automatically overnight. The net result is that when users arrived in the morning, they couldn’t log on to affected systems. IT pros investigating the problem quickly found their VMs stuck on the black Hyper-V logo screen.

Verifying The Issue

    • Check Boot Diagnostics: Review the boot diagnostic screenshot in the Azure portal to determine if the VM is stuck on the Hyper-V logo screen
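
If you have many VMs to check, the same verification can be scripted. The following is a minimal sketch using the Az PowerShell module; it assumes you are already signed in with Connect-AzAccount, that the Get-AzVMBootDiagnosticsData cmdlet is available in your Az.Compute version, and the resource group and VM names are placeholders.

    # Minimal sketch: pull instance status and the boot diagnostics data for one VM.
    $rg     = "rg-production"   # placeholder resource group name
    $vmName = "vm-app-01"       # placeholder VM name

    # Instance view statuses: an affected VM typically reports as running while
    # the guest OS never completes boot.
    (Get-AzVM -ResourceGroupName $rg -Name $vmName -Status).Statuses

    # Download the boot diagnostics screenshot and serial log locally so you can
    # confirm the VM is parked on the Hyper-V logo screen.
    Get-AzVMBootDiagnosticsData -ResourceGroupName $rg -Name $vmName -Windows -LocalPath "C:\temp\bootdiag"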

Proven Solutions

After working through this issue, here are the methods we’ve found effective for getting affected Azure VMs back online. Let’s go through them from easiest (and fastest) to hardest (and slowest):

Method 1: VM Resizing

This approach sometimes resolves the boot issue by forcing a hardware reconfiguration (a scripted version of these steps follows below):

    1. Stop the affected VM through the Azure portal

    2. Navigate to the VM’s Size configuration

    3. Select a different VM size (you can change back later if needed)

    4. Apply the resize operation

    5. Start the VM – this often resolves the boot failure

    6. Optional: Resize back to original specifications once confirmed working

Success Rate: 20% of our customer’s affected servers were brought back to life using this approach
Downtime: Minimal beyond the initial failure
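
If you would rather script the resize than click through the portal, here is a minimal sketch using the Az PowerShell module. The resource group, VM name, and target size are placeholders; the size you pick only needs to be available in your region and different from the current one.

    # Method 1 as a script: stop, resize, start. Placeholder names throughout.
    $rg      = "rg-production"
    $vmName  = "vm-app-01"
    $newSize = "Standard_D4s_v5"   # any available size that differs from the current one

    # 1. Stop (deallocate) the affected VM
    Stop-AzVM -ResourceGroupName $rg -Name $vmName -Force

    # 2-4. Change the size and apply the update (the equivalent of the portal resize)
    $vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
    $vm.HardwareProfile.VmSize = $newSize
    Update-AzVM -ResourceGroupName $rg -VM $vm

    # 5. Start the VM and re-check boot diagnostics
    Start-AzVM -ResourceGroupName $rg -Name $vmName

    # 6. Optional: repeat the same stop/resize/start cycle back to the original size
    #    once the VM is confirmed to boot cleanly.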

Method 2: Enable Trusted Launch

If your VM configuration supports it, enabling Trusted Launch can resolve the boot issues (a scripted version of these steps follows below):

    1. Stop the VM completely

    2. Go to VM Configuration in the Azure portal

    3. Navigate to Security settings

    4. Enable Trusted Launch (if available for your VM type)

    5. Enable Secure Boot and vTPM

    6. Apply changes and restart the VM

Note: This option is only available for Generation 2 VMs and certain VM sizes.
Success Rate: 20% of our customer’s affected servers were brought back to life using this approach
Downtime: Minimal beyond the initial failure
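
The same change can be scripted. The sketch below assumes a Generation 2 VM on a size that supports Trusted Launch, and a recent Az.Compute module where Set-AzVMSecurityProfile and Set-AzVmUefi can be applied to an existing VM; names are placeholders, and the portal steps above remain the reference path.

    # Method 2 as a script: deallocate, switch to Trusted Launch with Secure Boot
    # and vTPM, then restart. Verify module support before relying on this.
    $rg     = "rg-production"
    $vmName = "vm-app-01"

    # 1. Stop (deallocate) the VM completely
    Stop-AzVM -ResourceGroupName $rg -Name $vmName -Force

    # 2-5. Set the security type and enable Secure Boot + vTPM, then apply
    $vm = Get-AzVM -ResourceGroupName $rg -Name $vmName
    $vm = Set-AzVMSecurityProfile -VM $vm -SecurityType "TrustedLaunch"
    $vm = Set-AzVmUefi -VM $vm -EnableSecureBoot $true -EnableVtpm $true
    Update-AzVM -ResourceGroupName $rg -VM $vm

    # 6. Restart and confirm the VM gets past the Hyper-V logo screen
    Start-AzVM -ResourceGroupName $rg -Name $vmName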

Method 3: Rescue VM Approach

A rescue VM is simply a new, temporary VM we can use to access and repair the OS on the affected VM’s boot disk (a scripted sketch of the in-guest repair follows the steps below):

    1. Create a rescue VM in the same resource group and region
        1. Ensure you select an adequate size, use Generation 2, and enable Trusted Launch including Secure Boot and vTPM

    2. Stop the affected VM and detach its OS disk

    3. Attach the OS disk to the rescue VM as a data disk

    4. Access the rescue VM
        1. Identify the drive letter assigned to the attached disk (the corrupted OS disk from the non-bootable VM) using the Disk Management MMC

        2. Open a Command Prompt

        3. Run the following command:
            dism /image:<Drive letter>:\ /cleanup-image /revertpendingactions

        4. Mark the attached disk Offline using Disk Management

        5. Install the Hyper-V role

        6. Open Hyper-V Manager

        7. Create a new VM using the existing attached disk

        8. Start this new VM, allowing it to revert the applied patches

        9. Shut down the rescue VM

    5. Stop the rescue VM in the Azure portal

    6. Detach the disk from the rescue VM

    7. Reattach it as the OS disk on the original VM

    8. Start the original VM

Success Rate: 60% of our customer’s affected servers were brought back to life using this approach
Downtime: 1-2 hours beyond the initial downtime
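
For the in-guest portion of step 4, here is a minimal PowerShell sketch you can run elevated inside the rescue VM instead of clicking through Disk Management. The disk number and drive letter are example values you read from Disk Management yourself; adjust them to match your rescue VM.

    # In-guest sketch (run elevated inside the rescue VM). Set these two values
    # after attaching the disk: the disk number of the broken OS disk and the
    # drive letter of its Windows volume. Example values shown.
    $brokenDiskNumber = 2
    $driveLetter      = "F"

    # Make sure the attached disk is online and writable before servicing it
    Set-Disk -Number $brokenDiskNumber -IsOffline $false
    Set-Disk -Number $brokenDiskNumber -IsReadOnly $false

    # Revert the half-applied July updates on the offline Windows image
    $imagePath = "$($driveLetter):\"
    dism /image:$imagePath /cleanup-image /revertpendingactions

    # Take the disk offline again so the nested Hyper-V VM can use it directly
    Set-Disk -Number $brokenDiskNumber -IsOffline $true

    # Install the Hyper-V role (this reboots the rescue VM); continue with the
    # Hyper-V Manager steps above once it is back online
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart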

Post Recovery Actions

Ensure Trusted Launch, including Secure Boot and vTPM, is enabled before the July patches are deployed again.
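
To see which VMs still need attention before the July updates are redeployed, a quick inventory of the security profile across a subscription can help. The sketch below uses the Az PowerShell module; the property names follow the Az VM object model (SecurityProfile, UefiSettings) and are worth verifying against your module version.

    # List VMs that are not yet running Trusted Launch with Secure Boot and vTPM.
    # Assumes you are signed in and the correct subscription is selected.
    Get-AzVM | ForEach-Object {
        [pscustomobject]@{
            Name          = $_.Name
            ResourceGroup = $_.ResourceGroupName
            SecurityType  = $_.SecurityProfile.SecurityType
            SecureBoot    = $_.SecurityProfile.UefiSettings.SecureBootEnabled
            vTPM          = $_.SecurityProfile.UefiSettings.VTpmEnabled
        }
    } | Where-Object {
        $_.SecurityType -ne "TrustedLaunch" -or -not $_.SecureBoot -or -not $_.vTPM
    } | Format-Table -AutoSize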

Our Experience

In our environment, we successfully restored all affected VMs without any data loss. The combination of VM resizing and Trusted Launch configuration resolved 40% of cases, with the remainder requiring the Rescue VM approach.

Key Takeaways

    • This is a widespread Microsoft issue, not isolated to individual organizations

    • Multiple proven solutions exist to restore affected VMs

    • No data loss is expected if proper recovery procedures are followed

    • This represents normal IT operational risk in cloud environments

Looking Forward

Microsoft is expected to release an official fix for this issue soon. However, the solutions outlined above provide immediate relief for affected systems. Consider this incident a reminder of the importance of having robust disaster recovery procedures and the value of understanding multiple VM recovery approaches.

Have you encountered this issue? Share your experiences and additional solutions in the comments below. Let’s help the IT community work through this together.
