I ran into an interesting issue for a customer recently when migrating their Exchange virtual machine from Azure Classic (ASM) to Azure Resource Manager (ARM).

The VM runs Windows Server 2016 with Exchange Server 2016 CU3 and is a standalone server with no DAG. All user mailboxes are hosted in Office 365; the server acts as a hybrid server used for management purposes and also as an internal relay that lets printers send scan-to-email messages up to Office 365.

I went ahead with the migration by de-allocating the server in Classic, creating a managed disk using the vhd blob of the classic VM as the source, then creating an ARM virtual machine using the newly created managed disk. After waiting patiently for the VM to boot and become available for RDP on its new IP address, I was very surprised to find, when checking the latest Azure boot diagnostics screenshot, that the server had no network connectivity.
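
For reference, this is roughly what those steps look like in PowerShell with the Az module (it was the AzureRM module at the time). All resource names, locations and URIs below are placeholders rather than the customer's actual environment:

# Assumes the classic VM is already de-allocated and you are signed in with Connect-AzAccount
# (depending on the module version you may also need -StorageAccountId for the source account)
$diskConfig = New-AzDiskConfig -Location 'uksouth' -CreateOption Import -OsType Windows `
    -SourceUri 'https://classicstore.blob.core.windows.net/vhds/exchange01.vhd'
$disk = New-AzDisk -ResourceGroupName 'rg-exchange' -DiskName 'exchange01-os' -Disk $diskConfig

# Build the ARM VM around the imported OS disk
$nic = New-AzNetworkInterface -Name 'exchange01-nic' -ResourceGroupName 'rg-exchange' `
    -Location 'uksouth' -SubnetId $subnetId   # $subnetId: resource ID of an existing subnet
$vm = New-AzVMConfig -VMName 'exchange01' -VMSize 'Standard_DS3_v2'
$vm = Set-AzVMOSDisk -VM $vm -ManagedDiskId $disk.Id -CreateOption Attach -Windows
$vm = Add-AzVMNetworkInterface -VM $vm -Id $nic.Id
New-AzVM -ResourceGroupName 'rg-exchange' -Location 'uksouth' -VM $vm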

My first thought was that there might have been something wrong with the managed disk copy, so I tried it again to be absolutely sure, but I got the same result. I then thought there might be a network connectivity problem with the particular hardware cluster the virtual machine had been placed on, so I redeployed the virtual machine, but once it came back online the problem remained.

Therefore, I downloaded the managed disk to a separate Azure VM which had Hyper-V installed, so I could use Nested Virtualisation to create a VM in Hyper-V from the vhd and gain console access to find out what was going on.

More info on Nested Virtualisation
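
If you want to replicate this, the rough steps in PowerShell look like the following. The resource names are placeholders, and this assumes the Az module plus the Hyper-V role and azcopy on the nested-virtualisation host:

# Export the managed disk with a short-lived read-only SAS and download it
$sas = Grant-AzDiskAccess -ResourceGroupName 'rg-exchange' -DiskName 'exchange01-os' `
    -Access Read -DurationInSecond 7200
azcopy copy $sas.AccessSAS 'C:\VHDs\exchange01.vhd'

# Azure VHDs boot BIOS, so create a Generation 1 VM with a network adapter attached
New-VM -Name 'exchange01-nested' -Generation 1 -MemoryStartupBytes 8GB `
    -VHDPath 'C:\VHDs\exchange01.vhd' -SwitchName 'Internal vSwitch'
Start-VM -Name 'exchange01-nested'
vmconnect.exe localhost exchange01-nested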

When starting the virtual machine, I noticed that it was taking a rather long time to get past the “Getting devices ready” stage, which is where Windows installs new drivers when different hardware is detected. In this case that is expected, as ASM and ARM run on completely different hardware clusters.

I eventually managed to log into the VM, only to find that the server had no network adapter, despite the fact that I had given the VM a network adapter in Hyper-V.

I inspected the device in Device Manager and found that the network adapter no longer had a driver installed.
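
You can see the same thing from PowerShell on Windows Server 2016, which is handy over a console session; this is a generic check rather than anything specific to this scenario:

# Show network-class devices that are in a problem state (no driver, failed start, etc.)
Get-PnpDevice -Class Net | Where-Object { $_.Status -ne 'OK' } |
    Select-Object FriendlyName, Status, InstanceId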

If I tried to install the digitally signed Microsoft Hyper-V Network Adapter driver manually, it would just time out.

Upon further inspection of the services, I found that several were stuck on starting or stopping, one of which was the Network Setup service; I believe this is why I wasn’t able to install the Hyper-V network adapter driver. Further investigation showed that this had nothing to do with resource utilisation either: I had upgraded the host VM to 8 vCPUs and 32GB of memory with Premium SSD disks in Azure, but even this change had little to no effect on the problem.
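
A quick way to spot services wedged in this state from PowerShell:

# List services stuck mid-transition, e.g. the Network Setup service (NetSetupSvc)
Get-Service | Where-Object { $_.Status -in 'StartPending', 'StopPending' } |
    Select-Object Name, DisplayName, Status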

I proceeded to restart the virtual machine in Safe Mode, set all the Exchange services to a Disabled startup type, and rebooted the virtual machine. The Windows services were then no longer stuck on stopping, and the Hyper-V network adapter installed successfully through Device Manager.
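
If you only have console access, Safe Mode can be toggled from an elevated prompt with bcdedit (standard Windows commands, shown here for convenience):

# Boot into Safe Mode on the next restart
bcdedit /set '{current}' safeboot minimal
Restart-Computer

# ...and once the services are fixed, return to a normal boot
bcdedit /deletevalue '{current}' safeboot
Restart-Computer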

Therefore, I decided to try the migration again, but this time disabling the Exchange services before turning the server off in Classic. This doesn’t actually stop the services; it just sets their startup type to Disabled, so you can do it without causing any impact to Exchange users until the server is rebooted or the services are manually stopped.

You can do this easily through PowerShell by running the following on your server:

Get-Service | Where-Object { $_.DisplayName -like "Microsoft Exchange *" } | Set-Service -StartupType Disabled

Alternatively, you could put Exchange in maintenance mode. Here is a good TechNet post which explains how to do this; it’s written for Exchange 2013, but 2016 works the same way.

Once I’d done this, I de-allocated the virtual machine, re-created the managed disk, then tried creating the virtual machine in ARM again. This time, I was able to connect to the virtual machine with no issues! I checked over the network adapter in Device Manager and everything looked perfect.
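
A couple of quick checks are worth doing at this point; the SMTP endpoint below is just an example, so substitute whatever your printers relay through:

# Confirm the Hyper-V network adapter is present and up
Get-NetAdapter

# Confirm the relay can still reach Office 365 over SMTP
Test-NetConnection -ComputerName 'contoso-com.mail.protection.outlook.com' -Port 25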

I therefore proceeded to re-enable the Exchange services. You can do this in PowerShell again like so:

Get-Service | Where-Object { $_.DisplayName -like "Microsoft Exchange *" } | Set-Service -StartupType Automatic

Give the virtual machine another reboot to allow the Exchange services to start in their preferred order and avoid any problems.
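
After the reboot, you can quickly confirm everything came back up, for example:

# Any Exchange services set to start automatically that aren't running?
Get-Service | Where-Object { $_.DisplayName -like 'Microsoft Exchange *' -and
    $_.StartType -eq 'Automatic' -and $_.Status -ne 'Running' }

# Or run Test-ServiceHealth from the Exchange Management Shell
Test-ServiceHealth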

And that’s it! All working!

So, to summarise:

  • Exchange VM migrated from Classic to Resource Manager had no network adapter.
  • Gained console access through Hyper-V to find that the Hyper-V network adapter was not installed as it had no driver.
  • I wasn’t able to manually install the Hyper-V network adapter driver as it was timing out. This was due to the Network Setup service being stuck on stopping.
  • Identified that the Exchange services were causing some Windows services to misbehave, one of which was the Network Setup service. This is believed to be due to how heavily the Exchange services interact with the Windows network stack.
  • Disabled the Exchange services in PowerShell from Safe Mode, then rebooted the server to find the Windows services behaving normally.
  • Manual installation of the Hyper-V network adapter driver was successful.
  • Tried the migration in Azure again, this time disabling the Exchange services on the Classic VM before de-allocating it. The VM came up in ARM perfectly fine, with the network adapter driver installed.
  • Re-enabled the Exchange services in ARM, rebooted the VM, and service resumed: a successful migration!

Hope that helps! 🙂
