Azure VM

All related posts for Microsoft Azure VM


Top 20 most helpful resources and checklists that any Azure pre-sales architect should keep handy in 2017.


Estimated Reading Time: 8 minutes

If you are planning to meet your customer for a large transformation and migration deal with an Azure offering, and you are done with all your homework and presentations and ready to crack the deal, hold on. Before you make any promises, please spend some time checking these 20 most helpful web pages and URLs, which may make you and your customer happy and your delivery team's life a lot easier in future. I have compiled this list based on my personal experience, and I hope it will make a big difference during any RFP/RFI, HLD/LLD, or SoW preparation on Azure.


1. Azure Pricing Calculator.

The Azure pricing calculator is something you will need at every step of your engagement with the customer; here is the link: Azure Pricing Calculator.

For Azure CSP, the pricing calculator is available in the CSP portal.

2. Azure Subscription Limits and Quotas.
A must-have URL to know the available quota per subscription, which will help with a smooth design during the HLD phase. Here is the link: Azure subscription and service limits, quotas, and constraints.

3. Cost control in Azure.
When you are in deep discussion with the customer, one of the basic questions the customer may ask is what to do if the budget overshoots in Azure, and you should be capable of answering this tricky question. Although there are a few third-party products like Cloud Cruiser available in the Azure Marketplace for cost control, they don't have support for Azure CSP. The URL below is a native feature in Azure that will serve the purpose without much effort: Setup Billing Alerts in Azure.

4. Running non-supported Windows OS in Azure.
What will happen to my legacy applications running on Windows Server 2003? Can I move them to Azure? This is one of the frequently asked questions you may face during your sessions with your customer, and you should be ready with the answer. First of all, you should know that Windows Server 2003 VMs are no longer officially supported in Azure; however, you may run them as long as you want, and more details can be found here: Windows 2003 VM's in Azure. A second option is to inform the customer about running them on a designated Hyper-V host in Azure, which can be easily built with the new nested virtualization introduced in Azure.

5. Azure Site Recovery Supported Scenarios.
Azure Site Recovery is very successful in all types of migration activities to Azure, except in a few areas where it may become a pain at a later stage for the delivery team: they may be in the middle of a migration process when they discover that a VM or physical machine can't be moved to Azure with the help of ASR due to one or another unsupported scenario. I have mentioned the same thing in one of my earlier articles, which you can find here. (Azure ASR Limitations which is difficult to bypass)

In this type of situation the customer may lose trust in your delivery team, and conflicts may arise between the delivery and pre-sales teams regarding who promised this deliverable to the customer. So it's always recommended and advisable to learn the different scenarios which are supported by the ASR process. Please find the URLs below, which can help here.

6. Running Oracle Database in Azure.
Can I run my Oracle databases in Azure? How can I move large Oracle databases to the cloud? This is also a common question if the enterprise has lots of Oracle databases in its environment. ASR may be used for Oracle databases, but if the Oracle VMs or physical machines are not supported by ASR, it's better to use Oracle Data Guard for the migration.

Here is an article that can help you answer some basic questions on Oracle migration to Azure: Supported scenarios and Migration options for the Oracle database in Azure.

7. Site connectivity in Azure
Can I connect my existing on-premises sites to Azure? Do I need to invest in new VPN routers and gateways? This is a common question you should be ready to answer for your customer. Microsoft provides a list of supported VPN routers; however, this list may not cover all the routers available in the market. For example, the TP-LINK router which I am using for my home office is not covered in this list, yet I was able to set up VPN connectivity with Azure. To know more please click here.

Please find the supported routers here: Supported VPN Routers in Azure.

8. Comparison with AWS.
Expect a set of questions when you meet your customer about the similar offerings from Amazon Web Services, so I suggest you prepare yourself with a high-level product comparison between AWS and Azure. I have recently compiled a head-to-head comparison between the Azure and AWS offerings, and I am sure this comparison is definitely going to help you.

Please find my post below: Azure VS. AWS Head to Head Comparison Q3 2017

9. Moving resources from one subscription to another.
Now this is an important question if the customer already has some footprint in Azure and there is a chance that you can onboard them to your CSP subscription, or maybe you are advising them on an EA option. The question regarding the movement of resources from one subscription to another is an important one you should be capable of answering in the first place.

Here is a post for that: Move resources from one Subscription to another.

10. Life Cycle Policy of Azure Resources.
Although this question may not be important for some customers, I have seen many customers who wanted to know if there is any impact on their applications if Microsoft changes the underlying hardware.

A detailed explanation of the Azure life cycle policy can be found in this article: Life Cycle Policy for Azure Resources.

11. Total cost of ownership (TCO) in Azure and in AWS.
This is one of the most discussed topics during the estimation and proposal preparation phase. Generally, a Microsoft pre-sales consultant must have already completed this process before the release of the RFP or bid documents; however, you should also know about it. I believe the two URLs below should help you answer any quick question on TCO during your discussion with the customer.

Total cost of Ownership for Azure.

Total cost of ownership for AWS.

12. Azure Stencils.
As an Azure pre-sales architect you will need the Azure Visio and PowerPoint stencils and icon sets, and they are available for download from the Microsoft site. This is a must-have tool for your successful presentations and for the high-level and low-level designs, and you will need it throughout the bid process and in every new deal you participate in. Please download the Azure stencils below.

Microsoft Azure, Cloud and Enterprise Symbol / Icon Set – Visio stencil, PowerPoint, PNG, SVG

13. Azure data centre compliance.
When the security folks from the customer ask you compliance-related questions about Azure, you can point them directly to this URL and they will get the answers to all their questions. So this URL should be a handy one for you; otherwise, there is a big chance that the security team will pour cold water on your presentation and switch to a different vendor who can convince them better on the security part, and no doubt the security team has an important role in all your deals.

Here is a list of the Compliance of the Azure Data Center.

14. Azure Product Availability by region.
Not all Azure products are available in all Azure regions, so before you promise anything about any particular Azure data center, please take a quick look at the URL mentioned below:

Product availability by Regions.

15. Azure Backup – Supported Scenarios.
This is an important area which has to be addressed correctly during the pre-sales bid; otherwise it may again become a pain for the delivery team. For example, recently in one project I found that the pre-sales team had promised an ASR move of the Windows 2008 R2 SP1 VMs to Azure because they are very well supported by ASR; however, after the first wave the delivery team found that they couldn't install the Azure Backup agent on the Windows 2008 VMs which are 32-bit, and that resulted in a complete back-out of the ASR move. This kind of situation can give you a bad name during the execution part, so be very careful: you must add these URLs to your checklist.

Azure Backup-FAQ

Azure VM Backup-FAQ

16. Monitoring – Azure Log Analytics-Supported Data Sources.
And here comes monitoring, which is going to be part of most of your deals. If you have chosen to prescribe the Azure monitoring solution in your offering, please don't forget to take a quick look at the supported data sources. Keep in mind that you can't monitor everything with Azure Log Analytics. For example, if the customer wants a monitoring solution for their web applications, you may need to direct them to third-party solutions available in the Azure Marketplace, like AppDynamics. For the data sources which are currently supported, you can take a look at the URL below.

Azure Log Analytics Supported Data Sources

17. Azure Reference Architecture.
Whether you are a novice or an expert in on-premises architecture design, this is the time you should spend a few days understanding Azure application architecture. You have to understand that most architecture in the Azure cloud is based on the SRH guidelines, which is nothing but scalability, resiliency, and high availability. The two URLs below should be enough to understand and master the probable architectures in Azure for your customers.

Azure Architecture Center.

Azure Reference Architecture.

18. Azure Express Route.
Azure ExpressRoute is always a point of discussion in many customer engagements, and many would like to put it in the kitty of the network team, but you should be ready with some of the FAQs on Azure ExpressRoute, and here is the URL for that.

FAQ-Azure Express Route

19. Business Continuity and Disaster Recovery in Azure.
Azure BCP/DR is something like the elephant in the room. This is something you need to plan well before the final commitment during the engagement with the customer. If required, please set up a small POC with a few sets of applications to validate your concept before finalizing the SoW.

You should also be aware of the common terms used in any DR process, which have to be agreed upon by your customer or the application owners. Some of them are listed below. You should know what needs to be recovered in case of DR.

RTO: The recovery time objective is the maximum acceptable length of time that your application can be offline.

RPO: The recovery point objective is the maximum acceptable length of time during which data might be lost due to a major incident. Note that this metric describes the length of time only; it does not address the amount or quality of the data lost.
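To make these two objectives concrete, here is a minimal Python sketch (the helper names, dates, and targets are mine, purely illustrative) that computes the worst-case data-loss window from the last good recovery point and checks both objectives against agreed targets:

```python
from datetime import datetime, timedelta

def achieved_rpo(last_backup: datetime, incident: datetime) -> timedelta:
    """Worst-case data-loss window: the time between the last good
    recovery point and the moment of the incident."""
    return incident - last_backup

def meets_objectives(rpo: timedelta, rto: timedelta,
                     rpo_target: timedelta, rto_target: timedelta) -> bool:
    # Both objectives must hold for the application to pass its DR test.
    return rpo <= rpo_target and rto <= rto_target

last_backup = datetime(2018, 6, 1, 2, 0)   # nightly backup at 02:00
incident = datetime(2018, 6, 1, 14, 30)    # outage at 14:30
rpo = achieved_rpo(last_backup, incident)  # 12.5 hours of potential data loss
print(rpo)
```

If the application owner agreed to a 4-hour RPO, a single nightly backup clearly fails the objective, which is exactly the kind of mismatch to surface before the SoW is signed.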

Here is a list of URLs which are going to help you in this process.

Business Continuity and Disaster Recovery in Azure in the Azure Paired Regions.

Disaster Recovery for the Azure Applications.

High Availability of the Azure Applications.

Designing resilient applications for Azure.

20. What is in Azure Stack?
This is a question which many consultants have been facing from customers for the last few months, and as an Azure pre-sales architect you should be aware of what is in Microsoft Azure Stack and how you can position it against the other hyper-converged vendors available in the market. Here is an article which will definitely increase your knowledge of Azure Stack.

Key features and concepts in Azure stack.

That makes the final list of 20, but this is of course not the end. As a player in a tough competition, you should constantly stay informed about innovations, new releases, and product reviews in the Azure world to get ahead of others. I hope you will like this post.

Best of luck for your next Azure Assignment.


Should you upgrade to Azure VM Backup Stack V2?




The Azure Resource Manager model has come up with the option to upgrade to VM Backup Stack V2. There are many salient features of VM Backup Stack V2; the main selling point, I believe, is the ability to take snapshot backups of disks up to 4 TB in size. In my experience this is a great capability, considering the unreliable MARS agent backup, which can fail up to 60% of the time. The snapshot backup ability will also guarantee 99.99% recovery of the snapshot disks. Scenarios where large disks were being backed up by the MARS agent will definitely be served by Azure VM Backup Stack V2, and large disk snapshot backup is possible if you upgrade.

The other feature enhancements, as per the MS site, are as follows:

  • Ability to see snapshots taken as part of a backup job that’s available for recovery without waiting for data transfer to finish. It reduces the wait time for snapshots to copy to the vault before triggering restore. Also, this ability eliminates the additional storage requirement for backing up premium VMs, except for the first backup.
  • Reduces backup and restore times by retaining snapshots locally, for seven days.
  • Support for disk sizes up to 4 TB.
  • Ability to use an unmanaged VM’s original storage accounts, when restoring. This ability exists even when the VM has disks that are distributed across storage accounts. It speeds up restore operations for a wide variety of VM configurations.

Difference between the Backup Stack V1 and Backup Stack V2

Item | Backup Stack V1 | Backup Stack V2
The process of backup | Done in two phases: first a snapshot of the VM or disk is taken, and in the next step the snapshot is sent to the Azure Recovery Services vault. | The snapshot is taken and preserved locally for 7 days before being sent to the Azure Recovery Services vault.
When the recovery point is created | A recovery point is created once phases 1 and 2 are done. | A recovery point is created as soon as the snapshot is taken.
Recovery point creation speed | Slow | Fast
Storage cost | No additional storage cost. | Local storage cost may increase, since snapshots are stored for 7 days before moving to the Recovery Services vault. According to the current pricing model, Microsoft is not charging for storing the managed disk snapshots for 7 days.
Impact of the upgrade on current backups | No impact. | No impact.

Please note that incremental snapshots are taken for unmanaged disks, but for managed disks the snapshot is taken of the full disk. So if you are planning for a 1 TB managed disk, you need to pay for a snapshot of the full disk.
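A quick sketch of the billing implication just described, under the stated assumption that unmanaged disk snapshots are incremental (roughly the used data) while managed disk snapshots cover the full disk; the function name and figures are hypothetical:

```python
def snapshot_billable_gb(disk_size_gb: float, used_gb: float, managed: bool) -> float:
    """Rough billable snapshot size under the behaviour described above:
    unmanaged disks get incremental snapshots (only the used data),
    managed disks are snapshotted as the full disk."""
    return float(disk_size_gb) if managed else float(used_gb)

# Hypothetical 1 TB disk with only 200 GB actually used:
print(snapshot_billable_gb(1024, 200, managed=True))   # 1024.0 GB billed (full disk)
print(snapshot_billable_gb(1024, 200, managed=False))  # 200.0 GB billed (incremental)
```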

How to upgrade

Log in to the Azure Portal and Go to the Recovery Services Vault.

Go to Properties. In the left pane you will see the following.

Click the Upgrade button to upgrade to Backup Stack V2.0.

Note: This upgrade is not per vault; it is per subscription. And this change is not reversible.

Conclusion: Upgrading to Azure VM Backup Stack V2.0 is a good decision if you have a large number of VMs with large disk capacity. You can go for it, since there is no additional cost involved at the moment, no additional configuration is needed in the Recovery Services vault, and the existing backups will not be impacted.

That’s all for today. You have a great day ahead.


Add a managed disk to your Azure VM without any downtime.


Estimated Reading Time: 3 minutes

Dear friends, today I am going to show you how to add a non-OS disk to an Azure VM on the fly, without any downtime. I have a Windows VM where I would like to add an additional disk. When I clicked on This PC, it showed me only the OS disk and the temporary storage drive (which is created by default with any Azure VM), so I decided to add a data disk to this VM.

To achieve this, I went to the VM blade and clicked on Disks.

The next step is to add the data disks. Click on the Add data disk button. The maximum size of a data disk that can be added is 4 TB, as you can see below.

Once I clicked on the Add data disk button, it allowed me to create a managed disk.

The difference between Azure managed disks and unmanaged disks is something I am planning to discuss in a future post. You can get an overview of Azure managed disks here.

Now that I have clicked on the create button, I can see that I have created a 100 GB disk of the Standard_LRS type. Host caching is set to Read/Write (that means the disk is read/write).

Once I clicked the save button, the disk was created, as you can see in the message below: "Updating virtual machine disks".

Now the next step is to RDP into the VM and go to Disk Management. Right-click on the disk that is shown and initialize it.

Once it was initialized, I clicked on New Simple Volume and clicked Next.

You can find the following screen, as shown below.

The next step is to give the size of the disk. I have chosen full 100 GB disk.

In the next step I have assigned a drive letter to the disk

And the last step is to format the disk

Once it’s completed I have clicked on the finish button.

At this point the disk is still being formatted.

Now the format has completed and the disk is ready for my use.

That’s all for today, I hope you will like this post. I will bring more on managed disks in my next posts.


Top 10 pieces of information required for a reasonable sizing estimate of an Azure VM


Estimated Reading Time: 1 minute

Several pieces of information are required to make a reasonable sizing estimate for Azure VMs, and here is a comprehensive list which we have followed for a long time to get a reasonable T-shirt sizing of Azure VMs.

  1. CPU utilization
  2. Memory Utilization
  3. Disk Size
  4. Disk IOPs (I/O’s per second) for C: drive and data / database drives
  5. Disk data transfer rates (Mbytes per second) for C: drive and data / database drives
  6. Backup / restore requirements
  7. Availability / redundancy approach
  8. DR approach
  9. Any specific network requirements (For example DMZ, Load balancer)
  10. VM scheduling opportunities, i.e. when it can be powered off to save money.

Out of these parameters, the first five items are the most critical for sizing, as over-provisioning in Azure can have major cost implications. The first five can be easily gathered by configuring Windows Performance Counters. The information for the next five can be gathered through discussion with the application owners. In my future posts I will show you how we can capture the first five parameters from Windows Server OS based VMs or physical servers.
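As an illustration, once the first five metrics are exported from a perfmon data collector set to CSV, they could be summarized like this (the column names and sample values below are hypothetical, not a real perfmon export format):

```python
import csv
import io

# Hypothetical perfmon export: one row per sample interval.
PERFMON_CSV = """cpu_pct,mem_gb,disk_iops,disk_mbps
35.0,2.1,120,8.5
62.5,2.4,310,12.0
48.0,2.2,205,9.7
"""

def summarize(csv_text: str) -> dict:
    """Compute max and average per metric; size against the maxima
    to avoid under-provisioning the VM template."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return {col: {"max": max(float(r[col]) for r in rows),
                  "avg": sum(float(r[col]) for r in rows) / len(rows)}
            for col in rows[0].keys()}

summary = summarize(PERFMON_CSV)
print(summary["disk_iops"]["max"])  # 310.0
```

The same summary then feeds the T-shirt sizing discussion: peaks drive the template choice, while averages flag over-provisioning.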


How to minimize Brute Force Attacks by hackers in Azure VM’s


Estimated Reading Time: 6 minutes

In one of my posts in June I mentioned the Microsoft data center public IP address ranges and provided the URL to download them. Please note that these IP ranges are also well known to hackers and are very popular in the hacker community. Hackers nowadays generally use the brute force mechanism to attack these IP ranges. As per the calculations, on average hackers make 5 login attempts per minute against these IP address ranges on RDP and SSH ports, and this is going to increase in future as more and more valuable data and information moves to Azure every day.
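A quick back-of-the-envelope calculation on the quoted rate of 5 attempts per minute shows why an exposed RDP or SSH endpoint is a real problem:

```python
# Simple arithmetic on the attack rate quoted above (5 attempts/minute).
ATTEMPTS_PER_MINUTE = 5
per_day = ATTEMPTS_PER_MINUTE * 60 * 24
per_month = per_day * 30
print(per_day)    # 7200 attempts per day against a single exposed endpoint
print(per_month)  # 216000 attempts per month
```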


There are two ways to minimize or get rid of this attack.

The first option is not to use public IP addresses for the VMs and to set up all the VMs in the local area network with private IP addresses. This is a common scenario which most large enterprises follow: they set up a site-to-site VPN or ExpressRoute between their on-premises data center and Azure, and set up a DNS server on-premises or in Azure which assigns a private IP address to each VM. When a VM is configured for a private IP, you can see the following for the public IP address: the public IP address field for this VM is blank.

The network settings of this type of VM will look like this

In this scenario the best practice is to use a jump box, which may be a terminal server in your local area network, to log in to these VMs. Once you log in, you can also ping the VM if ICMP is allowed on the Azure VMs, as you can see below.

The above approach is very much acceptable for large or medium-size organisations which also have multi-layer firewall devices to protect their hybrid environment. However, sometimes we require Azure VMs which need a public IP address. In this scenario you need to follow the second option, which will reduce the risk.

The second option is to reduce exposure to a brute force attack by limiting the amount of time that a port is open. The question is how to achieve this.

As you can see below, I have another VM which does have a public IP address and is part of a public subnet.

The best way to achieve this is to enable JIT (just-in-time access) for the Azure virtual machines. While I say this, I should explain why an NSG, which is also capable of this activity, is not the right fit on its own here. The main reason is that JIT access is a combination of Azure RBAC (role-based access control) and NSGs.

What is Just in time access for the Azure VM?

Just-in-time VM access enables you to lock down your VMs at the network level by blocking inbound traffic to specific ports. It enables you to control access and reduce the attack surface of your VMs by allowing access only upon a specific need.

Similar to an NSG, here we also need to specify the ports on the VM where we need to lock down inbound traffic. The image below shows what actually happens in the case of JIT access.

As you can see in the above diagram, when a user requests access to a VM, Security Center checks in RBAC (role-based access control) whether the user has write access to this VM. If the user has write permissions, the request is approved and Security Center automatically configures the network security groups (NSGs) to allow inbound traffic to the management ports for the amount of time you specified. After the time has expired, Security Center restores the NSGs to their previous states.
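The flow above can be sketched as a small conceptual model in Python. To be clear, this is not the real Security Center API, just an illustration of the two moving parts: the RBAC write-permission check and the time-limited NSG opening:

```python
from datetime import datetime, timedelta

class JitPolicy:
    """Conceptual model of JIT VM access: deny without RBAC write
    permission, otherwise open the port for a limited window."""

    def __init__(self, port: int, max_hours: int = 3):
        self.port = port
        self.max_hours = max_hours
        self.open_until = None  # port closed by default

    def request_access(self, user_roles: set, now: datetime) -> bool:
        if "write" not in user_roles:   # RBAC check fails: request denied
            return False
        self.open_until = now + timedelta(hours=self.max_hours)
        return True

    def is_port_open(self, now: datetime) -> bool:
        # After the window expires, the NSG is restored to its prior state.
        return self.open_until is not None and now < self.open_until

policy = JitPolicy(port=3389)           # RDP
t0 = datetime(2018, 7, 1, 9, 0)
policy.request_access({"write"}, t0)
print(policy.is_port_open(t0 + timedelta(hours=1)))  # True
print(policy.is_port_open(t0 + timedelta(hours=4)))  # False
```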

JIT is a very good option, since the Azure network administrator doesn't need to change the NSG settings again and again; however, it will incur additional charges to your Azure subscription, as it is part of the Security Center standard pricing tier. For more information on the Security Center tiers, please click this URL.

Another thing you can find here is that if you upgrade the security tier to standard, it will apply to all the eligible resources in a particular resource group. As you can see below, it will charge you USD 15 per node per month.

So it's something you should keep in mind, so that you will not be surprised after 90 days when you receive your Azure bill and it includes these charges.

Steps to enable Just in Time Access to this VM

Go to Azure Security Center

Go down to the JIT tab as you can see below

Go to the recommended tab in the JIT window

Select the VM where you want to enable JIT

Click on enable JIT on 1 VM

And you can see the default configuration here

Click on Save and JIT has been activated in this VM.

Now you can click on Request Access Button as shown below.

Here you can find the list of default ports which Security Center recommends for enabling JIT. I have selected port 3389 for RDP.

Now MyIP will automatically take the public IP address of your computer as the source IP and allow RDP access to the destination VM, which is the VM where JIT has been configured. Once it's done, you can check the Last User name below, which shows the username that has access to this VM. For example, my account, which already has write access to this VM, has been granted RDP permission on this VM for three hours.

I tried to RDP to this server, and you can see that I was able to log in without any problem.

After 3 hours, when I tried again, I was unable to RDP and got this error.

You can also edit the JIT policy by clicking the edit option in the Configured tab.

You can also audit the JIT Activity Log by going to the Activity Log Settings as shown below.

Activity log provides a filtered view of previous operations for that VM along with time, date, and subscription. You can download the log in the CSV format.

If you wanted to remove the JIT you can remove that by clicking the remove button as shown here.


A private IP address helps you restrict Azure VM access to internal users only, and just-in-time VM access in Security Center helps you control access to your Azure virtual machines when the VMs have public IP addresses, thus minimizing the risk associated with brute force attacks. I will bring more posts on Azure VM security in future.

Good luck with your Azure assignment, and enjoy the rest of your day/night.


Doing an Azure Assessment – Disk IOPS and Bandwidth have a high impact on correct sizing


Estimated Reading Time: 9 minutes

I know many of you, while doing an Azure assessment, mainly focus on memory, CPU cores, and the size and number of disks to find the right T-shirt size for the Azure VMs. However, the most important part, which many assessment tools may ignore, is the requirement for correct disk IOPS, latency, and throughput sizing. This is critical for most applications; otherwise you may need to change the template at a later stage, which will impact the overall migration cost forecast.

To know more about the sizing parameters, you can refer to one of my posts: the top 10 information which are required for a reasonable sizing estimate of the Azure VM.

People who are from a storage background are very much aware of the terms related to IOPS, latency, and throughput. In the golden days of SAN storage, disk manufacturers generally benchmarked their products with the maximum throughput the disk could deliver.

Throughput is nothing but average IO size × IOPS, and it is generally measured in MB/s.
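As a worked example of this formula (the numbers are arbitrary, chosen only for illustration):

```python
def throughput_mb_per_s(avg_io_size_kb: float, iops: float) -> float:
    """Throughput = average IO size x IOPS, expressed in MB/s
    (dividing by 1024 to convert KB to MB)."""
    return avg_io_size_kb * iops / 1024

# 64 KB average IOs at 8000 IOPS:
print(throughput_mb_per_s(64, 8000))  # 500.0 MB/s
```

The same relationship works in reverse during an assessment: given a measured throughput and average IO size, you can back out the IOPS the target template must sustain.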

This can be compared with the new bullet train which is going to be launched in India by 2020. Throughput is like the maximum speed the bullet train can reach; currently the maximum speed of the bullet train is 500 km/h. Similarly, a disk can have a throughput of 500 MB/s, which is the maximum it can deliver.

The next parameter is IOPS, which is very important for the correct sizing of disks. IOPS means IO operations per second: the number of read or write operations that can be done in one second.

Another important parameter is the IO size, which is the size of the data processed by each I/O operation.

The last, and one of the most important, parameters is latency. Latency is how fast a single I/O request is handled.

What is the best way to get the IOPS and throughput information from a Windows Server where the present application is running?

There are multiple ways and multiple tools available on the internet, like Iometer from Intel and DiskSpd; however, I recommend that if you are evaluating the disks of a Windows OS based system, you should always use the Windows Performance Counters for your assessment. Perfmon will give you the required metrics for a correct assessment.

To collect the metrics, you should configure the data collector set so that it captures the right set of metrics, and the perfmon counters should run in a period when the VM or physical machine witnesses its highest activity. For a business case, the metrics for an ERP application can be taken over a period from Monday to Friday, because the application is at its peak at that time, and that period's data should be considered for the right sizing of IOPS and throughput. However, for a few database VMs the highest peak can be at the weekend, because the team may run some jobs then; if that is the case, you should collect the data over the weekend. To know the best time period for collecting the metrics, you should contact the application owners.

Now let's consider the most important part of this article: how are we going to determine the correct size of an Azure VM? Before we understand this, we should find out how Microsoft has sized their VM templates. In my analysis I have taken a couple of on-premises VMs to understand the sizing.

I must say that Microsoft is not consistent in the parameters it has defined for the disk sizes across the templates. However, in most cases Microsoft has considered the following two parameters.

Microsoft measures the disk throughput, and they usually consider these two parameters for the throughput calculation.

  • IOPS (Input/output operations per second)
  • MBps (Bandwidth for disk traffic only, where MBps = 10^6 bytes/sec.)

Please note that IOPS is a number here and the unit of Bandwidth is in MBps.

As I informed you earlier, we can collect the server storage data with the Windows perfmon counters. I configured the items marked in red in the data collector set, which I ran for 24 hours to collect the metrics from the server.

To understand it better, let's take three use cases: the first one is a low-configuration application server, the second one is a high-configuration database server, and the third one is an old ERP server.

In the sample below, I have taken the example of a low-configuration on-premises application server. As you can see in the graph below, I collected the storage (physical disk) data of the VM.

Fig: Physical disk IOPS and throughput usage over 24 hrs. for the on-premises sample application server.

As you can see in the above example, I collected the perfmon data for 24 hours on a typical business day and plotted a graph of IOPS and throughput (disk bandwidth). In this example the maximum IOPS shown is 310, and in the graph plot for disk bandwidth only, the maximum bandwidth shown is around 12 MBps.

Based on the above metrics, I can conclude that a VM template which can support 300 to 400 IOPS and a bandwidth above 12 MBps is suitable for this application. Now let's take a look at the CPU and memory utilization of this server.

For the same VM the CPU and Memory Usage is showing as below.

If you look at the CPU utilization, you can see the average is around 40%; this system has two cores, and the average memory utilization is around 2.15 GB. Since CPU utilization is around 40%, we could choose a 1-core VM, but that will not fit since the memory requirement is higher.

So if you take a look at the general-purpose A-series VMs, you can find that a Standard_A2 template is suitable for this application.

Now you may ask: in the above table we don't see the information for the second storage parameter, the disk bandwidth, which I mentioned above. To get that information you need to refer to another table here.

So for the VM templates where you don't find the storage bandwidth, please refer to the above table, where it is mentioned that a standard-tier VM will support a max bandwidth of 60 MB/s.

As per the assessment with the data collected from the perfmon metrics, the table below describes the sizing parameters which we considered and the best fit.

| Parameters | On-Premises VM | Selected Azure Template (Standard_A2) |
| --- | --- | --- |
| CPU Cores | 2 | 2 |
| Memory | 2.2 GB (max utilized) | 3.5 GB |
| IOPS | 310 (max required) | 500 (stripe volume N/A) |
| Storage Bandwidth | 12 MBps | 60 MBps |
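The selection logic used above can be sketched in a few lines. The size table below is illustrative (a tiny stand-in for the real A-series specs), so treat the numbers as assumptions to be checked against the current Azure documentation:

```python
# Sketch of the sizing logic: pick the smallest Azure VM size whose limits
# cover the measured Perfmon maxima. Spec numbers are illustrative, not an
# authoritative Azure size table.
templates = [
    # (name, cores, memory_gb, max_iops, max_disk_mbps)
    ("Standard_A1", 1, 1.75, 500, 60),
    ("Standard_A2", 2, 3.5, 500, 60),
    ("Standard_A3", 4, 7.0, 500, 60),
]

def best_fit(cores, memory_gb, iops, disk_mbps):
    """Return the first (smallest) template that satisfies every metric."""
    for name, t_cores, t_mem, t_iops, t_mbps in templates:
        if (t_cores >= cores and t_mem >= memory_gb
                and t_iops >= iops and t_mbps >= disk_mbps):
            return name
    return None

# Measured maxima for the sample application server: 2 cores,
# 2.2 GB memory, 310 IOPS, 12 MBps throughput.
print(best_fit(cores=2, memory_gb=2.2, iops=310, disk_mbps=12))  # Standard_A2
```

Standard_A1 is rejected on cores, so Standard_A2 is the first size that clears all four thresholds, matching the conclusion above.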

Now let's talk about our second example, a database server. As you can see below, this is a high-IOPS database server, so the graph looks like the one shown below.

In the figure above you can find the IOPS going above 20000 and the disk bandwidth touching 700 Mbps. Let's now check the CPU and memory utilization of this server.

The figure above shows the CPU and memory utilization data. Apart from a few spikes in CPU, the average CPU utilization is below 30%, and the average memory utilization is below 128 GB, with occasional spikes.

For this example, let's look at the following table for the ESv3-series. The ESv3-series is a family of memory-optimized VMs; ESv3 instances are based on the 2.3 GHz Intel XEON® E5-2673 v4 (Broadwell) processor, can achieve 3.5 GHz with Intel Turbo Boost Technology 2.0, and use premium storage. These instances are ideal for memory-intensive enterprise applications.

The ESv3-series VM size table looks like this.

As you can see in the table above, the seventh column shows the maximum disk IOPS and maximum disk throughput for each size.

In the next step, concentrate on this table for premium disks. You can stripe multiple disks together to achieve the required IOPS and disk bandwidth.
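As a rough sketch of that calculation, here is how you might compute the number of striped premium disks needed to hit an IOPS and bandwidth target. The per-disk P30/P40 limits are assumed values; verify them against the premium disk table before relying on them:

```python
import math

# Per-disk limits for Azure premium disks (assumed; verify against the
# current premium storage documentation).
PREMIUM_DISKS = {
    "P30": {"iops": 5000, "mbps": 200},
    "P40": {"iops": 7500, "mbps": 250},
}

def disks_needed(disk, target_iops, target_mbps):
    """Number of striped disks of one type needed to reach both targets."""
    spec = PREMIUM_DISKS[disk]
    return max(math.ceil(target_iops / spec["iops"]),
               math.ceil(target_mbps / spec["mbps"]))

# Example: a database server needing ~20000 IOPS and ~350 MBps.
print(disks_needed("P30", 20000, 350))  # 4
```

Whichever of the two targets demands more disks wins, which is why a bandwidth-heavy workload can need more disks than its IOPS alone would suggest.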

For the Standard_E32s_v3 VM, the IOPS limit is 51200 (which I have marked in red) and the throughput is 768 MBps; it also has 32 CPU cores and 256 GB of memory, so it can be the best fit for this VM. However, if we consider the average CPU utilization, memory utilization, IOPS, and disk bandwidth, we could also select the Standard_E16s_v3 size to maximize resource utilization. This is a call the Azure system admin needs to take. Note that an Azure VM size can easily be upgraded if utilization becomes an issue. Price-wise, there is almost a 50% difference between the two sizes.

Let's verify the sizes we have considered in this exercise.

| Parameters | On-Premises VM | Selected Azure Template (Standard_E32s_v3) | Optimized Azure Template (Standard_E16s_v3) |
| --- | --- | --- | --- |
| CPU Cores | 32 | 32 | 16 |
| Memory | 128 GB (average), 250 GB (max) | 256 GB | 128 GB |
| IOPS (average) | 20000 | 51200 | 25600 |
| Storage Bandwidth (max) | 350 MBps | 768 MBps | 384 MBps |

Let's take a third example. This server is old and has only 8 CPU cores, but 48 GB of memory, which has been increased based on requirements over the last 6 years. The utilization is shown below.

On this physical server the CPU usage is 80 to 100 percent and the maximum memory usage is 90%.

If we look at the IOPS and disk bandwidth usage graph, it looks like this:

The maximum IOPS touches 18000 and the maximum disk bandwidth touches 850 MBps.

This is a special case where the IOPS and disk bandwidth requirements are very high, so we need to select a high-I/O-intensive VM size. The high-I/O VM table looks like this:

In the table above, my selection would be Standard_L8s, which fits the CPU and memory requirements; but to achieve the IOPS and bandwidth requirements we would need a striped volume of at least four P40 disks (please refer to the premium disk table above), which would help us meet the disk bandwidth requirement of 900 MBps. However, this is not guaranteed/possible, as per this article by MS. If you need a guarantee, you need to choose Standard_L32s, which is effectively a dedicated machine for your workload in Azure and very much overkill in terms of CPU and memory, but fits the bandwidth requirement well; it is also very expensive.

As per the link below, for full disk performance the VM's own IOPS/throughput limit must be higher than the combined IOPS/throughput limit of the attached disks, which discards our disk striping idea, since the combined limit the VM allows is less than what is required here. Please check this URL for more details.

Clearly, as per the statement above, the VM size cap overrides the combined striped-disk IOPS and bandwidth, which is sad news.
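That capping behavior can be expressed as a simple min() over the two sets of limits; the numbers below are illustrative, not official limits:

```python
# The VM-level cap described above: the effective IOPS/throughput of a
# striped volume is the *minimum* of the combined disk limits and the VM
# size's own limits.
def effective_limits(vm_iops, vm_mbps, disk_iops, disk_mbps, n_disks):
    return (min(vm_iops, disk_iops * n_disks),
            min(vm_mbps, disk_mbps * n_disks))

# Four striped P40s (7500 IOPS / 250 MBps each) on a VM capped at
# 40000 IOPS / 1000 MBps: the stripe's 1000 MBps survives, but a smaller
# VM size would clip it regardless of how many disks you attach.
print(effective_limits(40000, 1000, 7500, 250, 4))  # (30000, 1000)
```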

Let's verify the correct sizing if we go with the above article by MS.

| Parameters | On-Premises VM | Selected Azure Template (Standard_L32s*) |
| --- | --- | --- |
| CPU Cores | 8 | 32 |
| Memory | 48 GB (max utilized) | 256 GB |
| IOPS | 18000 (max required) | 40000 |
| Storage Bandwidth | 900 MBps | 1000 MBps |

*Standard_L32s is effectively a dedicated VM, will incur a huge cost for the enterprise, and is overkill on CPU and memory.


As we have seen in these examples, IOPS and disk bandwidth play an important role in correct VM sizing, so it is always recommended that you consider these parameters during your next Azure assessment; otherwise it will be a nightmare for you. If you are migrating an on-premises VM or physical server to Azure and you find the IOPS and bandwidth requirements are very high, you should always ask the application owners whether they can tune the application or database on the server, as that can help reduce the T-shirt size of the VM. Azure assessment is not a very easy process, and it takes time and effort to make the best use of your Azure budget.


How to take a backup of an Azure storage account, and why incremental snapshots should be the best practice to save cost


Estimated Reading Time: 5 minutes

I have frequently been asked this question at many meetups by Azure developers who have created hundreds or thousands of containers inside an Azure storage account and want to know how they can take a backup of the complete storage account.

I think this is a common question asked by many people.

To answer it: practically, it is not possible to back up an Azure storage account as a whole. What we need to do is take snapshots of the blob containers and download them for a point-in-time backup.

Fig: Azure Blob Hierarchy

What is a blob snapshot?

As per Microsoft, a snapshot is a read-only version of a blob taken at a point in time. Snapshots are useful for backing up blobs. After you create a snapshot, you can read, copy, or delete it, but you cannot modify it.

A snapshot of a blob is identical to its base blob, except that a DateTime value is appended to the blob URI to indicate the time at which the snapshot was taken.
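The URI convention can be illustrated with a short sketch; the account and container names here are made up:

```python
from urllib.parse import urlsplit, parse_qs

# A snapshot URI is the base blob URI plus a DateTime query parameter.
# Account and container names below are hypothetical.
base_uri = "https://mystorageacct.blob.core.windows.net/vhds/disk0.vhd"
snapshot_uri = base_uri + "?snapshot=2017-11-09T01:42:34.1234567Z"

# Recover the timestamp that identifies the point-in-time copy.
query = parse_qs(urlsplit(snapshot_uri).query)
print(query["snapshot"][0])  # 2017-11-09T01:42:34.1234567Z
```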

Most common use case of Snapshot.

The most common use case of a blob snapshot is a snapshot of a VHD file. A VHD stores the current contents of a VM disk, and if you have taken a snapshot of a VHD file, you can later create a VM from it. In this article I am not going to show how to do that, because it has already been shown in many videos and blogs. Instead, I will try to explain the underlying mathematics of Azure blob snapshots so that you can understand how they are billed.

We can also back up the disks of Azure VMs using snapshots; this is a common practice, and Azure administrators generally schedule such backups at regular intervals.

Why it's a cause for worry

I have lately seen many Azure admins surprised by billing issues on storage accounts where, over a period of time, multiple snapshots have incurred huge cost. There is math behind every snapshot, and it is also important to delete snapshots from time to time to save cost, so it is very important to understand how snapshot billing is done in Azure.

Understanding Snapshot Billing

To understand this better, let's take a very simple example of identical twins studying in the same school. In my example there are several pairs of identical twins in a class, and the school has a special rule: it charges only a single fee per pair of identical twins.

Scenario 1:

In the figure below there are three students in a class (left side) and their three identical twins (right side). As per the school's rule, fees are charged only for the three students on the left. Here the left-side students represent the three blocks of a base blob, and the right-side students represent the snapshot of each block. So if the fee for each student is USD 1000, the total fee to be paid is USD 3000 in this case.

Fig: Base blob (left) and its snapshot (right)

In technical terms: you have three blocks in the base blob on the left, and on the right a snapshot of those blocks taken at some point in time. After that, no change was made to the base blob, so charges are incurred only for the three unique blocks on the left.

Scenario 2:

In this scenario, let's say the third student changes the color of his uniform to green. The school now charges fees for four students instead of three, since it considers the student who changed his uniform to be another unique student.

Fig: Base blob (left) and its snapshot (right)

In technical terms: the base blob has been updated and its third block has changed, but no new snapshot has been taken. Since the third block changed, Azure charges for the three blocks held by the snapshot plus the updated third base block, i.e. four blocks in total.

Scenario 3:

In this scenario the third student on the left is completely replaced by a new student, while there is no change among the identical twins on the right. The school charges fees for four students instead of three, since it considers the replacement to be another unique student.

In technical terms: the base blob has been updated, but the snapshot has not. Block 3 on the left was replaced with a new block in the base blob, but the snapshot still reflects the original block 3. As a result, the account is charged for four blocks.

Scenario 4:

In this scenario all the students on the left have been replaced by new students and there is no change among the students on the right, so the school considers all six students unique and charges fees for all of them.

In technical terms: the base blob has been completely updated with a new set of blocks, all the original blocks have been replaced, and the snapshot blocks are unchanged, so Azure charges for all six blocks.
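All four scenarios reduce to one rule: you pay for each unique block across the base blob and its snapshots. A small sketch of that rule, reusing the USD 1000 school fee from the analogy:

```python
# Billing rule behind the four scenarios above: charge once per *unique*
# block across the base blob and its snapshot. Blocks are modelled as
# content ids, so an updated or replaced block gets a new id.
def billed_blocks(base, snapshot):
    return len(set(base) | set(snapshot))

fee = 1000  # USD per unique block, as in the school-fee analogy

# Scenario 1: snapshot identical to base -> 3 unique blocks
assert billed_blocks(["a", "b", "c"], ["a", "b", "c"]) == 3
# Scenarios 2 and 3: third block updated or replaced -> 4 unique blocks
assert billed_blocks(["a", "b", "x"], ["a", "b", "c"]) == 4
# Scenario 4: base fully rewritten -> all 6 blocks billed
assert billed_blocks(["x", "y", "z"], ["a", "b", "c"]) == 6
print(billed_blocks(["x", "y", "z"], ["a", "b", "c"]) * fee)  # 6000
```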

Can we copy the snapshot to a different storage account?

Yes, we can copy a snapshot created in one storage account to a different storage account as a blob. When a snapshot is copied from one storage account to another, it retains the full size of the base blob and incurs the same storage cost.

What is an incremental snapshot, and why is it considered the best practice at present?

An incremental snapshot is similar to an incremental backup of a database: when a new snapshot of a blob is taken, the GetPageRanges API identifies only the changes that happened since the last snapshot. Copying a complete snapshot from one storage account to another can be very slow and can consume a lot of storage space, which increases cost. With incremental snapshot backups, successive copies of the data contain only the portion that has changed since the preceding snapshot copy was made. This way, both the time to copy and the space to store backups are reduced.
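A back-of-the-envelope sketch of the savings, with assumed sizes (a 100 GB blob and roughly 2 GB of daily churn):

```python
# Sketch of why incremental snapshot copies are cheaper: a full copy moves
# the whole blob every time, while an incremental copy (GetPageRanges-style
# diffing) moves only the pages changed since the previous snapshot.
def full_copy_total(blob_size_gb, n_snapshots):
    return blob_size_gb * n_snapshots

def incremental_copy_total(blob_size_gb, changed_gb_per_snapshot, n_snapshots):
    # The first copy is full; each later copy transfers only the delta.
    return blob_size_gb + changed_gb_per_snapshot * (n_snapshots - 1)

# A 100 GB VHD snapshotted daily for a week, ~2 GB changing per day:
print(full_copy_total(100, 7))            # 700
print(incremental_copy_total(100, 2, 7))  # 112
```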


If you are building a customized backup solution for Azure blobs, snapshots are the best option available at the moment. Incremental snapshots can reduce cost and help you manage your storage spend effectively.


Dealing with Azure storage accounts – know how to secure them in all possible ways


Estimated Reading Time: 12 minutes

Although this is a technical blog, I can't stop myself from citing an incident I witnessed recently, which is related to the technical aspect of this post.

Picture Credit: Royalty free photos from

Last September a fraudulent transaction was reported by one of my very close friends. The fraudulent transactions occurred on a credit card issued in Bangalore by a top global bank. The card was issued in India, but three transactions were made in Atlanta, US. My friend, who is in India, was unaware of the transactions because in the US there is no MFA (multi-factor authentication) step; unlike card transactions in India, in the US you do not receive an OTP on your mobile phone to authenticate the transaction. He did receive alert messages after the transactions happened, but it was late at night and he was sleeping (the transactions took place between 12 AM and 1 AM IST). When he saw the SMS and reported the incident, the bank immediately blocked his card and issued him a temporary credit after three days. He also needed to send the bank a dispute form against those three transactions.

The investigation revealed that he had purchased an item from a small US-based merchant website, and his credit card information was stolen from there. The merchant confirmed that his infrastructure had recently been migrated to Azure and that a storage account had been compromised, which caused this issue for multiple customers. The bank eventually gave the full credit to my friend's account (thanks to the Reserve Bank of India notification issued in July 2017 stating zero customer liability if fraudulent transactions are reported within three days), but he faced 45 days of mental agony: fraudulent credit card transactions are not very common in India, and the bank, although it credited the amount temporarily, informed him by email that it would become permanent only after an investigation taking 45 days. Especially for people who started using credit cards only after last November's demonetization drive in India, a fraudulent transaction is a real nightmare.

Now let's see the technical side of it. If your infrastructure is in Azure and you are storing important data there, it's very important to know how to protect the Azure storage account. Azure storage is simple and easy to manage if you know some basic concepts such as the types of storage, permissions, and the use cases for each storage type. The storage account is one of the building blocks of Azure infrastructure and is required for almost everything, so you should know how to access it securely. Another problem is that most developers don't concentrate on the security side of the storage account while writing code. Security is of utmost importance, since brute-force attacks are very common nowadays, and you should use the best methods available to protect the storage account. This is becoming even more important because most large e-commerce websites are moving rapidly to the public cloud to support their seasonal flash sales, where they can easily scale out infrastructure on demand; but during these migrations security is often neglected, because it requires lots of code changes and delays the migration schedule of such projects.

Let’s see the security layers and features which are available to protect your azure storage account.

Various ways to protect and secure your Azure storage account, the data inside it, and data in transit

How to protect your storage account

The first step to protect an Azure storage account is to protect it with RBAC (Role-Based Access Control).

Understand the built-in RBAC roles available for Azure storage accounts – secure and restrict access to your storage account with RBAC roles.

When you deal with Azure storage accounts every day, there are a few things you should keep in mind. All Azure storage accounts are controlled by Azure RBAC (Role-Based Access Control). The following roles have some or full permissions on an Azure storage account:

  • Owners – They can manage everything, including access to the storage account. (The Add button used to grant users access is missing for the other roles.)

  • Contributors – They can do anything the owner can do except assign access. For example, a contributor can create a storage account and a container, but can't grant user access to that storage account.

  • Storage Account Contributor – They can manage storage accounts: read them, and create, modify, or delete them. They can also access and delete the contents of a storage account after connecting through Storage Explorer.

If a user has only been given the Storage Account Contributor permission on a storage account in a resource group, then when he goes to the resource groups tab he will not be able to see the resource group containing the storage account I gave him contributor access to.

For example, I have given a particular user access to four resource groups, three in US East and one in Australia East; separately, I have given him permission on a storage account in US West. This is what he sees when he checks his resource groups tab.

And this is what he sees when he checks the storage accounts tab in the Azure portal.

  • Reader – They can view information about the storage account, except secret information such as the storage keys.
  • User Access Administrator – They can grant users access to particular storage accounts.
  • Virtual Machine Contributor – This role allows users limited management of storage accounts. For a user to create a virtual machine, they have to be able to create the corresponding VHD file in a storage account. To do that, they need to retrieve the storage account key when creating the VM through the API; therefore, this role lets them list the storage account keys.

How to Access the Storage Account Data Securely?

You can protect access to the storage account data with the help of the storage keys.

Understand the storage keys

Every storage account in Azure has two 512-bit keys (key1 and key2) created along with the account. These keys are used to access storage account objects in a secure way. A common question asked by many admins is whether regenerating the keys of a storage account where VHDs are stored will cause any problem for the VMs. The answer: VMs whose VHD files are stored in the storage account will not be affected by regenerating the storage account keys.

How to secure your storage keys?

The best practice for securing your storage keys is to use Azure Key Vault. When you regenerate a key and update Key Vault, applications and scripts do not need to be redeployed, because they pick up the new key from Key Vault automatically. This is a common and highly secure way to protect the keys, even from admins and developers: since the keys are stored in Key Vault, developers don't need to hardcode them in script files or applications, so it can be considered an additional layer of security. In most of the scripts we use for various automation tasks, we regenerate the storage keys and write them back to Key Vault with a PowerShell script stored in an Azure runbook, which runs at an interval of every 15 minutes. This is a very good practice which can be followed in all enterprises to secure the keys. A reference article which may help you write the PowerShell script for this purpose can be found here.

Best practice: always use the same storage key throughout your application; using key1 for some pieces and key2 for others is not recommended. Key rotation will only be successful if you use the same key across the entire application.

How to cover the risk window of 15 minutes? (This interval can change based on your application requirements.)

If a hacker learns the keys, he can do a lot of damage within those 15 minutes, before the next set of keys is generated. To cover this risk, Microsoft provides another way for applications to access storage: the Shared Access Signature.

What is Shared Access Signature?

If you are an Azure developer, you should use SAS in your applications. A Shared Access Signature is a string containing a security token that can be attached to a URI, allowing you to delegate access to storage objects and specify constraints such as the permissions and the date/time range of access. With Shared Access Signatures, you can give a client just the permissions required, for a limited amount of time. Additionally, you can specify that requests made using a SAS are restricted to a certain IP address or IP address range external to Azure, and you can require that requests are made using a specific protocol (HTTPS, or HTTP and HTTPS). This means that if you only want to allow HTTPS traffic, you can set the required protocol to HTTPS only, and HTTP traffic will be blocked.
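Conceptually, a SAS token is just the constraints (permissions, expiry, protocol) plus an HMAC over them computed with the account key, so the service can verify a request without any lookup. The sketch below illustrates that idea only; it is NOT the real Azure string-to-sign format, and the field names are simplified assumptions:

```python
import base64
import hashlib
import hmac

# Illustrative SAS-style token: the constraints are encoded in the URL and
# signed with the account key. Simplified; not the Azure canonical format.
def make_sas(account_key_b64, resource, permissions, expiry, protocol="https"):
    string_to_sign = "\n".join([resource, permissions, expiry, protocol])
    key = base64.b64decode(account_key_b64)
    sig = hmac.new(key, string_to_sign.encode(), hashlib.sha256).digest()
    token = base64.b64encode(sig).decode()
    return f"?sp={permissions}&se={expiry}&spr={protocol}&sig={token}"

def verify_sas(account_key_b64, resource, permissions, expiry, protocol, sig):
    expected = make_sas(account_key_b64, resource, permissions, expiry, protocol)
    return hmac.compare_digest(expected.split("sig=")[1], sig)

key = base64.b64encode(b"demo-account-key").decode()  # hypothetical key
sas = make_sas(key, "/container/blob.txt", "r", "2018-01-01T00:00:00Z")
print(verify_sas(key, "/container/blob.txt", "r",
                 "2018-01-01T00:00:00Z", "https", sas.split("sig=")[1]))  # True
```

Because the signature covers the expiry and protocol, tampering with either field invalidates the token, which is what makes short-lived, HTTPS-only SAS tokens safer than handing out the account key.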

What to do when you suspect that the SAS also has been compromised?

  • You can issue SAS tokens with short expiration policies and wait for a compromised token to expire.
  • You can rename or delete the resource (assuming the token was scoped to a single object).
  • You can regenerate the storage account keys immediately (but this may have a larger effect and cause downtime for other applications using the same storage account).

(If you have accidentally deleted your storage account, please refer to this blog post to see what options are available to you.)

Another two ways to protect your Azure storage account data involve registering your application in Azure AD.

First way

In this case the service account is first authenticated in Azure AD, and based on that, the secret key is issued from Azure Key Vault when the application requests it.

This is an older approach where developers create an Azure AD application and embed the AD service account credential in the application code. The web application that accesses the Key Vault is the one registered in Azure Active Directory and given access to the Key Vault. However, because of the following two limitations, this method is not very secure:

  • The Azure AD application service account credentials are typically hardcoded in the source code.
  • When the Azure AD application credentials expire, the application goes down.

To limit the misuse of this approach, the storage keys can be changed/rotated frequently; however, the hardcoding of AD credentials can't be avoided unless the service account is stored in an Azure Automation account, which causes multiple authentication hops and may create latency in the application unless coded carefully.

Second Way

To avoid the loopholes mentioned above, Microsoft has recently introduced another technique, still in public preview: Azure Managed Service Identity (Preview). A managed service identity from Azure Active Directory allows your app to easily access other AAD-protected resources such as Azure Key Vault. The identity is managed by the Azure platform and does not require you to provision or rotate any secrets. When you request a token for Key Vault, you need to make sure you have added an access policy that includes your application's identity; otherwise, your calls to Key Vault will be rejected, even if they include the token. More details about Azure MSI can be found here.

How to protect the storage account data in transit?

There are three ways to protect data in transit to and from an Azure storage account.

The first is transport-level encryption.

To protect data in transit, it's always advisable to encrypt it at the transport level. This can be done with the HTTPS protocol, which ensures secure communication over the public internet. Always enable the 'Secure transfer required' option to force secure transit of data.

The following PowerShell command can be used to enable secure transfer:

Set-AzureRmStorageAccount -Name "{StorageAccountName}" -ResourceGroupName "{ResourceGroupName}" -EnableHttpsTrafficOnly $True

StorageAccountName     : {StorageAccountName}
Kind                   : Storage
EnableHttpsTrafficOnly : True

A common example is when you need to upload important data from your on-premises data center to Azure.

In the example below, I have used AzCopy to move the data from on-premises to Azure over HTTPS.

Copy data (From On-Premises to Azure Blob container)

Copy data (From Blob Container to Azure VM)

The second method is the use of Azure file shares.

You can mount an Azure file share and enable secure transfer with the SMB 3.0 protocol. In this case too, you need to enable the 'Secure transfer required' switch.

The third method is client-side encryption.

Client-side encryption is also a method for encrypting your data at rest, as the data is stored in its encrypted form. This feature allows you to programmatically encrypt your data in a client application before sending it across the wire to Azure Storage, and to decrypt it after retrieving it. To know more about client-side encryption, please click here.

How to protect the storage account data at rest?

There are three ways to protect storage data at rest. The first, mentioned above, is client-side encryption, which also secures the data in the storage account after transit; using the HTTPS protocol for data in transit alongside it is widely used and very simple to configure.

Two other methods are available for storing data in encrypted form at rest. The first is Azure Disk Encryption, which is commonly used for VMs.

Azure Disk Encryption is a newer feature, widely used nowadays for Azure IaaS VMs. It allows encryption of the OS disk and data disks of a VM. For Windows VMs, the disks are encrypted with BitLocker; for Linux VMs, with DM-Crypt.

The third technique for protecting data at rest is Storage Service Encryption (SSE). SSE writes data to the storage account in encrypted form; when a read request arrives, it decrypts the data and sends it back to the requester. This lets you secure your data without modifying or adding code in any application. As you can see below, there is a switch that enables this feature; the setting applies to the entire storage account.

How to protect the storage account at the network level?

Although there are many ways to protect a storage account, it's always good if you can also protect it at the network layer, as securing this layer can easily discard 90% of attacks. Microsoft has recently released a solution for this: Azure Storage Firewalls and Virtual Networks (preview). Like any network firewall, implementing this allows users to access the storage account only from specific allowed network subnets. Used in conjunction with the other storage account security features, this is a very good way to protect your storage account from everyday attacks. IP network rules are only allowed for public internet IP addresses; IP address ranges reserved for private networks (addresses starting with 10.*, 172.16.*, and 192.168.*) are not allowed in IP rules. Only IPv4 addresses are supported at this time. Each storage account can support up to 100 IP network rules.

Note: Currently, if you try to connect to your Azure storage account from on-premises, the traffic follows the public internet path; with this new feature you can grant access from your on-premises networks to your storage account with an IP network rule. Site-to-site VPN is not supported yet, but Azure ExpressRoute circuits are supported.

Storage Firewalls and Virtual Networks are still in preview; the capability is currently available for new and existing storage accounts in all Azure public cloud regions.


Securing the Azure storage account is an important criterion for most Azure-related development work, and it's the responsibility of the Azure admin to make developers aware of all the security features available for securing Azure storage. In this article I have tried to jot down the various ways available to secure your storage account. Azure Storage Firewalls and Virtual Networks, proper access with RBAC, and proper authorization with storage keys and/or SAS tokens make a very good combination for all future development in this area.


Unable to RDP to Azure VM after March 2018 (KB4088878 and KB4088875) Windows update – redeploy it


Estimated Reading Time: 2 minutes

Being unable to RDP to an Azure VM is a very common issue faced by many people worldwide. If you have installed Terminal Services or RDS, the RDP issue is even more common. As per Microsoft, there are multiple troubleshooting steps mentioned in the troubleshooting documentation; here is the list:

  1. Reset Remote Desktop configuration.
  2. Check Network Security Group rules / Cloud Services endpoints.
  3. Review VM console logs.
  4. Reset the NIC for the VM (if required, remove the NIC and assign a new one).
  5. Check the VM Resource Health.
  6. Reset your VM password.
  7. Restart your VM.
  8. Redeploy your VM.

Fig: How to redeploy an Azure VM

However, out of the 8 steps mentioned here, step 8 is highly popular and works in most cases.

Scenario: the VMs are up and running in the portal, but users are unable to RDP. The reason: after the March 2018 Windows patching, RDP did not work on some Windows 2008 SP2 (non-R2) VMs.

Why does redeploying the Azure VM resolve the issue?

When we redeploy a VM in Azure, the platform moves the VM to a new node in the Azure infrastructure and powers it back on. The redeployment retains all your configuration options and associated resources.

If you are using a static IP, the redeployment will move the VM to a different host and the IP will remain the same.

What if the issue is still not resolved?

If the issue is still not resolved, please open a case with MS and ask them to clone the VM and mount it on another test VM with Hyper-V enabled. They can then remove some registry settings and completely uninstall the March update, after which the VM should be accessible on the network.


The Windows March 2018 update (patches KB4088878 and KB4088875) is causing RDP session failures on many Azure VMs running Windows 2008 SP2, and a few issues have also been reported from Windows 2012 VMs. The NIC not working after the update is the most common cause of this problem.

More updates are available here.




Disaster Recovery of Azure VM – Step by step configuration guide


Estimated Reading Time: 6 minutes

I think if you are an old-school infrastructure management techie, you must have been part of many DR exercises during your various job roles. If you are a pre-sales techie, in many discussions you may have tried to convince your customers that the Microsoft Azure environment is highly reliable and available, so they don't need to set up a DR environment. However, that is hard for customers to digest because of compliance needs: requirements such as ISO 27001 still demand a provable disaster recovery solution as part of a business continuity plan (BCP). For a long time, questions about setting up a DR site in another Azure region had no concrete answer, until May 2017 when Microsoft released Disaster Recovery (Preview) for Azure VMs. However, I will say this functionality was still not fully complete at the time, since there was no support for managed disks.


Edit: Managed disks are now fully supported in ASR. Please refer to the article below.

Article for the support of managed disks.

Today we will see how we can configure the disaster recovery step by step.

Configure Azure VM disaster recovery step by step for VMs with unmanaged disks

I have selected a VM in my lab; it is located in West US 2 and runs the Windows Server 2016 operating system.

It is a Windows Server 2016 Datacenter VM; you can find the OS version below.

The next step is to go to the Disaster recovery (Preview) tab, as you can see below.

In the next step you need to configure the disaster recovery for this VM.

Select the resource group under which the replicated VM will be created when the VM is failed over.

Select the virtual network in the target region to which the failed-over VM will be associated.

Select the cache storage account. The cache storage account is located in the source region and is used as a temporary data store before changes are replicated to the target region. By default, one cache storage account is created per vault and reused. You can select a different cache storage account if you intend to customize the one used for this VM.

Data replicated from the source VM is stored in replica managed disks in the target region. For each managed disk in the source VM, one replica managed disk is created and used in the target region.

The recovery services vault contains the target VM configuration settings and orchestrates replication. In the event of a disruption where your source VM is not available, you can fail over from the recovery services vault.

The vault resource group is the resource group of the recovery services vault. The replication policy defines the settings for recovery point retention history and app-consistent snapshot frequency.

The world map below shows the Azure data centers we have chosen for replication. We have chosen to replicate the VM from West US 2 to East US 2.

The next step is to create the Azure resource.

When you check the progress, you can see the deployment is in progress.

The next screen shows the replication in progress for the VM.

Since this is part of ASR (Azure Site Recovery), it performs the same jobs that are generally done during a VM migration. The triggered jobs are shown below.

Note: For more details on Azure Site Recovery you can click here.

After some time you can see that the Enable replication job has completed.

Replication may take from 15 minutes to a few hours, depending on the size of the VM.

As you can see below, in my case 98% was completed after 20 minutes.

Since the VM was small, replication completed after 25 minutes.

What is RPO? RPO stands for Recovery Point Objective.

Recovery Point Objective (RPO) describes the interval of time that might pass during a disruption before the quantity of data lost during that period exceeds the Business Continuity Plan’s maximum allowable threshold or “tolerance.”

Example: If the last available good copy of data upon an outage is from 16 hours ago, and the RPO for this business is 20 hours, then we are still within the parameters of the Business Continuity Plan's RPO. In other words, it answers the question – "Up to what point in time could the business process's recovery proceed tolerably, given the volume of data lost during that interval?"
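The tolerance check in this example can be sketched in a few lines of Python (the function name and values are illustrative only, not part of any Azure API):

```python
from datetime import timedelta

def within_rpo(age_of_last_good_copy: timedelta, rpo_target: timedelta) -> bool:
    """Return True if the potential data loss is still within the BCP's RPO tolerance."""
    return age_of_last_good_copy <= rpo_target

# The worked example above: last good copy is 16 hours old, RPO target is 20 hours.
print(within_rpo(timedelta(hours=16), timedelta(hours=20)))  # True -> within the plan
print(within_rpo(timedelta(hours=22), timedelta(hours=20)))  # False -> tolerance exceeded
```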

Next, I shut down the primary VM just to check the RPO status after two days. After two days, the RPO showed 2 days, as you can see below.

There is also an error indicating that replication was halted.

After I started the VM, replication completed, the data between the primary site and the DR site was synced, and the RPO came down.
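The behaviour above follows directly from the definition: while replication is halted, the reported RPO is simply the age of the last replicated recovery point. A minimal sketch (the dates are illustrative):

```python
from datetime import datetime, timedelta

def current_rpo(last_recovery_point: datetime, now: datetime) -> timedelta:
    """While replication is halted, the RPO grows with the age of the last recovery point."""
    return now - last_recovery_point

# Source VM shut down for two days, so no new recovery points were created:
rpo = current_rpo(datetime(2018, 4, 1, 12, 0), datetime(2018, 4, 3, 12, 0))
print(rpo.days)  # 2 -- matching the "2 days" RPO shown in the portal
```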

Run a disaster recovery drill for Azure VMs to a secondary Azure region

To test the setup, I decided to run a test failover.

A test failover configuration is shown below.

The test failover took some time, but not very long.

After a few minutes, the failover completed successfully.

Now I can see both VMs – the one in the primary site and the one in the DR site – running in two different Azure regions.

The next step is to clean up the test failover.

You can add a note below.

Once you click OK, it starts the task to delete the test VM.

After some time the task completes, as you can see below.

That's all for today. I hope you liked my post on Azure Disaster Recovery (Preview); I will bring more on BCP and DR in Azure in future posts. For more details on each replication step, you can click here.

Happy Practical !!!!
