Azure VM

All related posts for Microsoft Azure VM


Microsoft Azure Monitor Launches to Improve Windows Virtual Desktop


Microsoft has released a tool to make it easier for organizations to manage their Windows Virtual Desktop environments. Called Azure Monitor, the tool is now generally available as of this week.

The big benefit of Azure Monitor for Windows Virtual Desktop (WVD) is the ability to gain better insights into virtual desktop environments.

Windows Virtual Desktop (WVD) allows users to virtualize Windows 7, Windows 10, Office 365 ProPlus apps, and third-party apps. With WVD, customers receive remote desktop sessions served by virtual machines running in Azure.

While WVD provides a secure way to run Windows on the cloud, it has some complex components. The aim of Azure Monitor is to make the complexities of the service less of a problem.

“With Azure Monitor for Windows Virtual Desktop, you can find and troubleshoot problems in the deployment, view the status and health of host pools, diagnose user feedback and understand resource utilization,” Microsoft’s announcement points out.


As Azure Monitor moves to general availability, it comes with the following improvements over the preview version:

  • Improved data collection and new guidance to help you optimize for cost
  • Updated setup experience with easier UI, expanded support for VM set-up, automated Windows Event Log setup, and more
  • Relocated Windows Virtual Desktop agent warnings and errors at the top of the Host Diagnostics page to help you prioritize issues with the highest impact
  • Accessibility enhancements
  • Workbook versioning: GA release is Version 1.0.0

Azure Monitor is not free, so it’s worth checking out Microsoft’s pricing page here. The company recommends customers start with pay-as-you-go pricing and adjust once more usage information is known.

Azure Monitor only works with the latest version of Windows Virtual Desktop or newer. Interestingly, Microsoft does not specifically say which version this is.

Source Winbuzzer


Apple Ready to Join the VR Party with Headset Launch in 2022


Virtual reality (VR) and augmented reality (AR) are technologies that promise to change our way of life. While they can be transformative in entertainment, VR and AR also open doors in industry and eventually the possibilities could be endless. The biggest names in tech are already taking VR and AR seriously, but one is conspicuous by its absence… Apple.

While giants like Microsoft, Google, Samsung, Sony, and others already have VR and AR hardware/platforms, Apple does not. Still, we are told regularly that the company is serious about the technology. We are told Apple’s advancements are at least on par with its rivals, minus the hardware.

We are told one day Apple’s developments will reach market in the form of a platform and device. A new report by Bloomberg suggests that may happen. However, it seems Apple remains in no rush to jump on board the VR train. In fact, the report says the company’s first headset will not arrive until 2022.

As we have come to expect from Cupertino, the product will be high-end. It will also reportedly be aimed at a niche market with Apple not expecting big sales. That said, the device will be consumer-focused and capable of gaming, communication, and video. Furthermore, it could also support AR in some capacity.

Is this Apple’s big play, developing a headset that is both VR and AR? We are used to Apple defining categories by making devices useful to everyday users. Combining VR and AR technology could be a major step forward.

High-End Specs

However, the report points out that AR is not the focus of the headset. Instead, VR will be the leading technology. Apple still has bigger plans for AR, including a pair of glasses that will likely arrive later.

Developed under the codename N301, Apple’s headset will be made of premium materials and could rival Microsoft’s HoloLens in price ($3,500). The company will use the most powerful processors it has at its disposal, along with a fan to keep the device cool. It is said to be constructed of fabric materials, with displays that best the resolution of any VR headset currently available.

That fan has caused some design problems, the report suggests. With a fan and high-end components, the headset is too big and uncomfortable, and to save space Apple removed support for wearers of glasses.

Rather than cut out the sizable percentage of the population who wear glasses, however, the company has included a way to slot prescription lenses into the headset.

How this will work remains to be seen and it is worth noting we are very much in the rumor phase with this story.

Tip of the day:

Did you know you can use Windows 10’s built-in antivirus, Microsoft Defender, with scheduled scans? In our tutorial we give you step-by-step instructions on how to program your personal scan schedule to keep your PC free of malware.

Source Winbuzzer

read more
Azure ADAzure VM

Microsoft Sustainability Calculator Aims to Help Cloud Customers Manage their Carbon Footprint


Microsoft has announced a new tool that will help cloud customers better understand their carbon emissions. Called the Microsoft Sustainability Calculator, the product has landed in private preview ahead of a future launch.

According to Redmond, the aim of the Microsoft Sustainability Calculator is to give customers more oversight of their carbon output and, more specifically, to make their environmental impact more transparent.

“It’s challenging to make and meet meaningful carbon reduction goals without the ability to measure carbon emissions,” Microsoft says.

By leveraging AI technology, the calculator provides accurate accounting of carbon usage. It highlights the impact of Microsoft cloud services across an organization’s footprint. Armed with this information, companies can make informed decisions about their environmental impact.

For example, the Microsoft Sustainability Calculator measures the impact of moving regular applications to the cloud and how this can reduce the carbon output of a company.

Carbon Negative

It looks like the tool goes hand-in-hand with Microsoft’s own push to be carbon negative by 2030. Redmond made that pledge earlier this year. The decision followed a 2017 commitment to cut 75% of its carbon emissions by the same date and builds on 2019 revisions of 70% renewable energy by 2023.

“While the world will need to reach net zero, those of us who can afford to move faster and go further should do so,” Microsoft President Brad Smith said of the new commitment. “That’s why today we are announcing an ambitious goal and a new plan to reduce and ultimately remove Microsoft’s carbon footprint.”

Source Winbuzzer


Independent Benchmarks Suggest AMD’s Ryzen H-Series Could Beat Intel Comet Lake


AMD made bold claims at CES 2020 last month, including that its Ryzen H-series of laptop chips would be on par with Intel’s high end and even the i7-9700K. However, its main comparison was against its competition’s current offering, the i7-1065G7. Intel may not be advancing at the same pace as AMD, but it has new chips due sometime this year.

Now, Twitter’s _rogame has spotted 3DMark results that pit the 4800H and an RX 5600M against Intel’s i7-10750H Comet Lake with a 2060 Max-Q. The results are quite interesting: the 4800H edges ahead by 10-11% in the individual graphics and physics scores, but Intel wins out by 7.8% in the combined test.

The graphics score of a 3DMark test measures just the GPU, and the physics score solely the CPU. As you’d expect, the combined test measures both, which indicates that Intel has found better meshing between the hardware at this early stage, perhaps due to power management.

For reference, the 4800H is an 8-core/16-thread 45 W chip with a base clock of 2.9 GHz. The i7-10750H has 6 cores and 12 threads with a base of 2.6 GHz. However, the Intel chip is able to achieve a higher boost clock than AMD’s, at 4.69 GHz vs 2.97 GHz. 3DMark tends to reward higher clock speeds over core counts, being primarily based on gaming performance.

As a result, we may see Ryzen giving even better results in scenarios that can better utilize its cores. That could include tasks like 3D rendering, editing, and general multitasking. Even so, you can’t fully trust benchmarks at this stage as they may be based on prototype hardware or have other inaccuracies.

It’ll be interesting to see how they stack up once reviewers get their hands on them, as this could be the first truly convincing laptop lineup AMD has managed to muster. The question for Microsoft fans, of course, is which chips the company’s Surface lineup will go with, the Surface Laptop 3 having somewhat unsuccessfully broken the mold with a Ryzen offering.

Source Winbuzzer


Announcing new AMD EPYC™-based Azure Virtual Machines


Microsoft is focused on giving customers industry-leading performance for all of their workloads. After being the first global cloud provider to announce the deployment of AMD EPYC™-based Azure Virtual Machines in 2017, we have been working together to keep bringing the latest innovation to enterprises.

Today, we are announcing our second-generation HB-series Azure Virtual Machines, HBv2, which feature the latest AMD EPYC 7002 processor. Customers will be able to scale HPC performance and run substantially larger workloads on Azure. We will also be bringing the AMD EPYC 7002 processors and Radeon Instinct GPUs to our family of cloud-based virtual desktops. Finally, our new Dav3- and Eav3-series Azure Virtual Machines, in preview today, give customers more choice to meet a broad range of requirements for general-purpose workloads using the new AMD EPYC™ 7452 processor.

Our growing Azure HPC offerings

Customers are choosing our Azure HPC offerings (HB-series), which use the first-generation AMD EPYC “Naples” processor, for their performance and scalability. We have seen a 33 percent memory bandwidth advantage with EPYC, and that is a key factor for many of our customers’ HPC workloads. Fluid dynamics, for example, is one workload in which this advantage pays off; Azure has a growing number of customers for whom it is a core part of their R&D and even production activities. On ANSYS Fluent, a widely used fluid dynamics application, we have measured EPYC-powered HB instances delivering a 54x performance improvement by scaling across almost 6,000 processor cores. This is also 24 percent faster than a leading bare-metal offering with an identical InfiniBand network. Furthermore, earlier this year Azure became the first cloud to scale a tightly coupled HPC application to 10,000 cores, 10x higher than what had previously been possible on any other cloud provider. Azure customers will be among the first to exploit this capability to tackle the hardest challenges and innovate with purpose.

New HPC, general-purpose, and memory-optimized Azure Virtual Machines

Azure is continuing to build its HPC capabilities, thanks in part to our collaboration with AMD. In preliminary benchmarking, HBv2 VMs featuring 120 CPU cores from the second-generation EPYC processor are showing performance gains of more than 100 percent on HPC workloads like fluid dynamics and automotive crash-test analysis. HBv2 scalability limits are also increasing with the cloud’s first deployment of 200 gigabit InfiniBand, enabled by the second-generation EPYC processor’s PCIe 4.0 capability. HBv2 virtual machines (VMs) will support up to 36,000 cores for MPI workloads in a single virtual machine scale set, and up to 80,000 cores for our largest customers.

We will also be bringing the AMD EPYC 7002 processor to our family of cloud-based remote desktops, pairing it with the Radeon Instinct MI25 GPU for customers running Windows-based environments. The new offering delivers exceptional GPU resourcing flexibility, giving customers more choice than ever to size virtual machines from as little as one-eighth of a single GPU up to a whole GPU.

Finally, we are also announcing new Azure Virtual Machines as part of the Dv3- and Ev3-series, optimized for general-purpose and memory-intensive workloads. These new VM sizes feature AMD’s EPYC™ 7452 processor. The new general-purpose Da_v3 and Das_v3 Azure Virtual Machines provide up to 64 vCPUs, 256 GiB of RAM, and 1,600 GiB of SSD-based temporary storage. In addition, the new memory-optimized Ea_v3 and Eas_v3 Azure Virtual Machines provide up to 64 vCPUs, 432 GiB of RAM, and 1,600 GiB of SSD-based temporary storage. Both VM series support Premium SSD disk storage. The new VMs are currently in preview in the East US Azure region, with availability coming soon to other regions.

Da_v3 and Das_v3 virtual machines can be used for a wide range of general-purpose applications. Example use cases include most enterprise-grade applications, relational databases, in-memory caching, and analytics. Applications that demand faster CPUs, better local disk performance, or more memory can also benefit from these new VMs. Likewise, the Ea_v3 and Eas_v3 VM series are optimized for other large in-memory, business-critical workloads.
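
To put the two announced series side by side, here is a small illustrative sketch. The figures come from the announcement above; the helper function itself is hypothetical and not part of any Azure SDK:

```python
# Maximum announced specs for the new EPYC 7452-based series
# (GiB figures taken from the announcement above).
SERIES = {
    "Da_v3": {"max_vcpus": 64, "max_ram_gib": 256, "temp_ssd_gib": 1600},
    "Ea_v3": {"max_vcpus": 64, "max_ram_gib": 432, "temp_ssd_gib": 1600},
}

def pick_series(ram_per_vcpu_gib):
    """Choose general-purpose vs memory-optimized by the RAM-per-vCPU
    a workload needs at the 64-vCPU top end of each series."""
    for name in ("Da_v3", "Ea_v3"):  # general-purpose first
        spec = SERIES[name]
        if ram_per_vcpu_gib <= spec["max_ram_gib"] / spec["max_vcpus"]:
            return name
    return None  # needs more than 432/64 = 6.75 GiB per vCPU

print(pick_series(4))  # Da_v3: 256/64 = 4 GiB per vCPU is enough
print(pick_series(6))  # Ea_v3: needs the 432 GiB memory-optimized series
```

The same ratio logic applies to the Das_v3/Eas_v3 variants, which add Premium SSD support.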

Details shortly…Azure.Microsoft


How to minimize Brute Force Attacks by hackers in Azure VM’s


Estimated Reading Time: 6 minutes

In one of my posts in June, I mentioned the Microsoft data center public IP address ranges and provided the URL to download them. Please note that these IP ranges are also well known to hackers and are very popular in the hacker community. Hackers nowadays generally use a brute force mechanism to attack this IP range. On average, hackers make 5 login attempts per minute against these IP address ranges on RDP and SSH ports, and this is going to increase in the future as more and more valuable data and information moves to Azure every day.



There are two ways to minimize or get rid of this attack.

The first option is not to use public IP addresses for the VMs, and instead to set up all the VMs on the local network with private IP addresses. This is a common scenario that most large enterprises follow: they set up a site-to-site VPN or ExpressRoute between their on-premises data center and Azure, and run a DNS server on premises or in Azure that assigns a private IP address to each VM. When a VM is configured with only a private IP, the public IP address field for the VM is blank, as shown below.

The network settings of this type of VM will look like this:

In this scenario, the best practice is to use a jump box, which may be a terminal server in your local network, to log in to these VMs. Once logged in, you can also ping the VM, provided ICMP is allowed on the Azure VMs, as you can see below.

This approach is very acceptable for large or medium-sized organizations that also have multi-layer firewall devices to protect their hybrid environment. However, we sometimes require Azure VMs that need a public IP address. In that scenario, you need to follow the second option, which will reduce the risk.

The second option is to reduce exposure to a brute force attack by limiting the amount of time that a port is open. The question is how to achieve this.

As you can see below, I have another VM that does have a public IP address and is part of a public subnet.

The best way to achieve this is to enable JIT (just-in-time) access for the Azure virtual machines. While saying this, I should explain why an NSG, which is also capable of restricting this traffic, is not the right fit on its own: JIT access is a combination of Azure RBAC (role-based access control) and NSGs.

What is just-in-time access for an Azure VM?

Just-in-time VM access enables you to lock down your VMs at the network level by blocking inbound traffic to specific ports. It lets you control access and reduce the attack surface of your VMs by allowing access only upon a specific need.

As with an NSG, we need to specify the ports on the VM where inbound traffic should be locked down. The image below shows what actually happens with JIT access.

As you can see in the diagram above, when a user requests access to a VM, Security Center checks in RBAC (role-based access control) whether the user has write access to that VM. If the user has write permissions, the request is approved and Security Center automatically configures the network security groups (NSGs) to allow inbound traffic to the management ports for the amount of time you specified. After the time has expired, Security Center restores the NSGs to their previous states.
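
That flow can be modeled in miniature. The following is a simplified, hypothetical sketch of the JIT idea (an RBAC check followed by a time-boxed allow rule); it is not the Security Center API:

```python
from datetime import datetime, timedelta

class JitPort:
    """Toy model of a just-in-time port: closed unless a grant is active."""

    def __init__(self, port):
        self.port = port
        self.open_until = None          # no active access grant

    def request_access(self, user_can_write, hours, now):
        # Security Center first checks RBAC write access on the VM.
        if not user_can_write:
            return False                # request denied
        # Approved: the NSG allows inbound traffic for the requested window.
        self.open_until = now + timedelta(hours=hours)
        return True

    def is_open(self, now):
        # Once the window expires, the NSG reverts to its deny state.
        return self.open_until is not None and now < self.open_until

now = datetime(2020, 1, 1, 9, 0)
rdp = JitPort(3389)
assert not rdp.is_open(now)                        # locked down by default
assert rdp.request_access(True, hours=3, now=now)  # writer gets a 3-hour window
assert rdp.is_open(now + timedelta(hours=2))       # still open inside the window
assert not rdp.is_open(now + timedelta(hours=4))   # closed again afterwards
```

The real service does this by rewriting NSG rules; the model just captures the time-boxed nature of the grant.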

JIT is a very good option, since the Azure network administrator doesn’t need to repeatedly change NSG settings. However, it incurs additional charges on your Azure subscription, as it is part of the Security Center Standard pricing tier. For more information on the Security Center tiers, please click this URL.

Another thing you will find here is that if you upgrade the security tier to Standard, it applies to all the eligible resources in a particular resource group. As you can see below, it charges USD 15 per node per month.

Keep this in mind so that you are not surprised after 90 days, when you receive your Azure bill and it includes these charges.

Steps to enable just-in-time access on this VM

Go to Azure Security Center

Go down to the JIT tab as you can see below

Go to the recommended tab in the JIT window

Select the VM where you want to enable JIT

Click on enable JIT on 1 VM

And you can see the default configuration here

Click on Save and JIT has been activated in this VM.

Now you can click on Request Access Button as shown below.

Here you can find the list of default ports that Security Center recommends enabling for JIT. I have selected port 3389 for RDP.

Now MyIP will automatically take the public IP address of your computer as the source IP and allow RDP access to the destination VM, i.e. the VM where JIT has been configured. Once that’s done, you can check the Last User field below, which shows the username that has access to this VM. For example, my account, which already has write access to this VM, has been granted RDP permission on it for three hours.

I tried to RDP to this server, and you can see that I was able to log in without any problem.

After three hours, when I tried again, I was unable to RDP and got this error:

You can also edit the JIT policy by clicking the edit option in the Configured tab.

You can also audit the JIT Activity Log by going to the Activity Log Settings as shown below.

Activity log provides a filtered view of previous operations for that VM along with time, date, and subscription. You can download the log in the CSV format.

If you want to remove JIT, you can do so by clicking the Remove button as shown here.


A private IP address helps you restrict Azure VM access to internal users only, and just-in-time VM access in Security Center helps you control access to your Azure virtual machines when they have public IP addresses, thus minimizing the risk associated with brute force attacks. I will bring more posts on Azure VM security in the future.


What you should know about Azure’s Low priority VM’s and possible use cases


Estimated Reading Time: 4 minutes

In a recent design discussion with the development team, we talked about deploying Azure low-priority VMs in their next project to save cost. This is quite natural: since their launch in May, low-priority VMs have drawn much attention from the press, and many of you, like other enterprises, are trying to take advantage of their low cost. However, Azure low-priority VMs need a good business case that correlates with a good use case; without both, it makes no sense to use them.

What are Azure Low-Priority VMs?

Similar to AWS Spot Instances, Microsoft came out last May with Azure low-priority VMs. Low-priority VMs are VMs available at a significantly discounted price. They are provided from the unused pool of VMs in Azure; in other words, they are allocated from Azure’s excess compute capacity to the customers who request them.

Pricing for Azure Low-Priority VMs

Low-priority Linux VMs come with an 80% discount, while Windows VMs come with a 60% discount. The discount is calculated relative to their on-demand hourly price and is available across most of the VM instance types in Azure.

A sample discounted price for the general purpose standard Av2 series windows instances can be seen below.

For more details on pricing of the Azure Low Priority VM’s please check this URL
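
The discount math itself is simple. A minimal sketch (the example on-demand rate is hypothetical, not a published Azure price):

```python
def low_priority_price(on_demand_hourly, os):
    """Apply the advertised low-priority discount:
    80% off for Linux VMs, 60% off for Windows VMs."""
    discount = {"linux": 0.80, "windows": 0.60}[os]
    return round(on_demand_hourly * (1 - discount), 6)

# Hypothetical $0.10/hour on-demand rate:
print(low_priority_price(0.10, "linux"))    # 0.02 per hour
print(low_priority_price(0.10, "windows"))  # 0.04 per hour
```
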

The features of these low-priority VMs are as follows:

  • Up to 80% discount, at a fixed price.
  • Uses surplus capacity; availability can vary at any time.
  • VMs can be seized at any time.
  • Available in all regions.
  • Do more for the same price.

Up to this point everything is fine; however, note point number three, where I mentioned that the VMs can be seized at any time. Always remember that nodes can go up and down with low-priority VMs, so not all workloads are suitable for them. The question, then, is what type of workloads are suitable for Azure low-priority VMs.

What types of workloads are suitable for Azure Low-Priority VMs?

  • Workloads that are tolerant of interruption
  • Workloads that are tolerant of reduced capacity

Suitable workloads are as follows:

  • Batch processing (asynchronous distributed jobs and tasks running on many VMs)
  • Stateless web UIs
  • Containerized applications
  • Map/reduce-type applications
  • Jobs with flexible completion times
  • Short-duration tasks

Low-priority VMs are currently available only for workloads running in Azure Batch; however, they will be extended to other workload types in the future.

As you can see, one of the most common use cases is Batch tasks, and when we talk about Batch tasks one common question comes to mind: what happens when a job is interrupted due to VM preemption?

In case of an interruption, the tasks are automatically requeued and re-executed at a later stage, when VMs are available again.

Lifecycle of a low-priority VM Batch job in case of preemption

What are the options for creating low-priority VMs?

There are multiple options available for Azure low-priority VM pool creation, depending on the target.

Option 1: Lowering the cost

In this scenario, all the VMs in the pool are configured as low-priority VMs. No dedicated VMs are available.

Option 2: Lowering the cost with a guaranteed baseline

In this scenario, the pool is configured with a fixed number of low-priority VMs and a fixed number of dedicated VMs (the low-priority VMs making up 60 to 80% of the pool).

Option 3: Lowering the cost while maintaining capacity

In this scenario, the pool runs on low-priority VMs, with dedicated VMs set to replace any that are preempted: Batch keeps full capacity on dedicated VMs when needed, but scales back to low-priority VMs as soon as they become available again.
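
Under stated assumptions (a hypothetical per-node hourly rate and the 80% Linux low-priority discount), the pool shapes above compare like this:

```python
def pool_hourly_cost(dedicated, low_priority, rate=1.0, discount=0.80):
    """Hourly cost of a Batch pool mixing dedicated and low-priority nodes.

    rate is a hypothetical on-demand price per node-hour; low-priority
    nodes cost (1 - discount) times that rate.
    """
    return round(dedicated * rate + low_priority * rate * (1 - discount), 2)

# Option 1: all low priority, no dedicated baseline.
print(pool_hourly_cost(dedicated=0, low_priority=100))   # 20.0
# Option 2: guaranteed baseline, e.g. 30 dedicated + 70 low priority.
print(pool_hourly_cost(dedicated=30, low_priority=70))   # 44.0
# All dedicated, for comparison.
print(pool_hourly_cost(dedicated=100, low_priority=0))   # 100.0
```

The gap between the first and last lines is the saving you trade against the risk of preemption.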

Steps to check the number of Low-Priority VMs currently available

First, navigate to Azure Batch Accounts

and click on Create to create a batch account in your subscription.

Once the batch account has been created, you should be able to find it as shown below.

You can view the metrics of your low-priority VMs in the Metrics tab below.

You can click on Quota to see the available quota limit for this batch account.


At present, low-priority virtual machines (VMs) can be used to reduce the cost of Batch workloads. Low-priority VMs also make new types of Batch workloads possible by providing a large amount of compute power at an economical price. If your organization runs lots of batch workloads every day, low-priority VMs will certainly be one of your options in Azure.


File Server migration strategy in Azure with Zero down time for end users without any ACL loss


Estimated Reading Time: 5 minutes

Are you planning to move your on-premises file servers to Azure? If yes, this post can help you plan the steps required for a seamless move of your file shares to Azure. Before you plan the actual move, let’s look at the important factors to consider. The most important factors for file server migrations are as follows:

Does the new file share support security, authentication, and ACLs (Access Control Lists)?

As per our testing and multiple Microsoft articles, Azure File Share currently doesn’t meet all of the above requirements. For example, Active Directory-based authentication and ACL support, one of the important requirements, is not present in Azure File Share.

How do the end users access the data?

These days, most enterprises with Windows-based file servers use DFSR as the file share technology. In Azure, if we mount a file share using SMB, we don’t have folder-level control over permissions. Instead, we can use shared access signatures (SAS) to generate tokens that have specific file permissions and are valid for a specified time interval, but that would be completely new to users and a complete change from how file shares are implemented in your current on-premises environment.

How many users/clients can access the file share simultaneously?

The current quota in Azure File Share is 2,000 concurrent clients.

What is the maximum size of the file share?

Currently, Azure File Share supports a maximum of 5 TiB of data storage. In the future it may support up to 100 TiB.

Sample Use Case:

Let’s consider a very common use case, which we have used for this article: a large enterprise with multiple locations around the globe and more than 100 file servers currently in use. None of the file servers is very big, but the total data size is around 40 TB. In this use case, we consolidated the data onto 12 Azure VMs in different Azure regions instead of the 100 on-premises servers. We achieved this with the help of DFSR.

Steps we have followed:

To achieve this, we followed the steps below.

Fig: Migration steps to move on premise Windows based File Servers to Azure IaaS

Why DFSR is still the best option: it copies files with the same set of permissions (same hash) and replicates files with the latest changes.

DFSR components: DFS Namespace – used to publish the data to end users. People access the virtual namespace to get to the files and folders on the DFSR server.

DFS Replication – used to replicate the data between the servers. We can control the schedule and bandwidth of DFSR replication, and we can also mark servers read-only; this forces the read-only attribute on the server so that no one can make changes to that specific server. DFSR replication works with replication groups: in a replication group we define the folders to be replicated between two or more servers. This can be a full mesh, or we can control it, for example hub-and-spoke, via connections. DFSR creates some hidden folders under the replicated folders and stores internal data there before processing; we should not manually add or remove content in these folders.

Comparison test between RoboCopy and AzCopy

The question came to our mind whether to use Robocopy or AzCopy to stage the data. To compare their speed, we ran the following test.

Here is the test result:

Tool | Size (GB) | Time (Min.) | Time (Sec.) | ACL (Permissions)
It’s very clear that you can’t use AzCopy, since the ACLs (permissions) are lost. (That is probably why DoubleTake uses Robocopy internally in their application.)

We used Robocopy to copy the data from one server to the other to reduce the time needed for DFSR replication. You can read this short article to understand how much faster it is to pre-seed the data with Robocopy rather than letting DFSR replicate all of it.

An example command we used to pre-populate the data is:

robocopy.exe “\\\j$\DFSR\ABNU-FS-A” “E:\ABNU-FS-A” /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:E:\RobocopyLogs\servername.log

The above command copies the folder ABNU-FS-A to the local E: drive of the server from which we run the command.

/MT:64 sets the thread count; the default is 8, and with 16 threads we can copy 200 MB in a few seconds. However, as we faced some issues with the network, we now usually run 16 threads to make sure Robocopy will not hang.

Once we have run Robocopy on the data, we check the file hashes. An example is below.

To check the data file hash on the remote source server:

Get-DfsrFileHash \\\j$\DFSR\ABNU-FS-A\* – this checks the file hash on all the folders under ABNU-FS-A.

Get-DfsrFileHash E:\ABNU-FS-A\*

Note: we need the DFSR PowerShell module to run the above commands. Once this is done, we add the E: drive folder to the replication group and let it sync with DFSR. As we have already copied the data and the file hashes match, it takes just a few hours for GBs of data. That’s all.
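
The idea behind the hash check can be sketched generically. The sketch below uses an ordinary SHA-256 over file contents, which is not the same algorithm as Get-DfsrFileHash (DFSR’s hash also covers ACLs and alternate data streams), but it illustrates the pre-seed verification step:

```python
import hashlib
from pathlib import Path

def tree_hashes(root):
    """Map each file's path (relative to root) to a SHA-256 of its contents."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def preseed_matches(source_root, staged_root):
    """True when the staged copy has the same files with the same contents."""
    return tree_hashes(source_root) == tree_hashes(staged_root)
```

You would run this against the source share and the pre-seeded E: drive folder before adding the folder to the replication group, so DFSR only has to reconcile metadata rather than re-replicate the data.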

Now, people may wonder why we have not used the new Azure File Sync, which is the buzzword these days for file shares.

Although we have not used Azure File Sync, let’s discuss a few things about it.

What is Azure FileSync?

With Azure File Sync, shares can be replicated to Windows Servers on-premises or in Azure. The users would access the file share through the Windows Server, such as through an SMB or NFS share. This is useful for scenarios in which data will be accessed and modified far away from an Azure datacenter, such as in a branch office scenario. Data may be replicated between multiple Windows Server endpoints, such as between multiple branch offices.

Why is this not the right fit for the work we are doing?

The main use case for Azure File Sync is multiple branch offices with very slow network connectivity: on-premises Windows, Linux, and macOS clients can mount a local Windows file share that acts as a fast cache of the Azure file share. Since we have very good bandwidth to Azure from all the branches over Site-to-Site connectivity, Azure File Sync does not fit here.

The data transfer methods available for pre-staging the files are as follows:

  • Azure Export/Import
  • RoboCopy
  • AzCopy
  • Azure File Sync


There are multiple options to transfer data from on-premises to Azure for file server staging, but if you want a very smooth migration where end users see no downtime, this is the best approach. Note that ACL-preserving hashes are only supported by Robocopy and Azure File Sync. An Azure file share could be created without the need to manage hardware or an OS, instead of building Azure IaaS VMs as we are doing here, but that is not a possible use case for us because we need to preserve the ACLs, and unfortunately Azure Files still does not support that at the moment.


Top 20 most helpful information/checklist that any Azure Pre Sales Architect should keep handy in 2017.


Estimated Reading Time: 8 minutes

If you are planning to meet your customer for a large transformation and migration deal with an Azure offering, and you are done with all your homework and your presentation and are ready to crack the deal, hold on… before you make any promises, please spend some time checking these 20 most helpful webpages and URLs, which may make you and your customer happy and your delivery team’s life a lot easier in the future. I have compiled this list based on my personal experience, and I hope it will make a big difference during any RFP/RFI, HLD/LLD, or SoW preparation on Azure.


1. Azure Pricing Calculator.

The Azure pricing calculator is something you will need at every step of your engagement with the customer; here is the link: Azure Pricing Calculator.

For the Azure CSP the pricing calculator is available in the CSP portal.

2. Azure Subscription Limits and Quotas.
A must-have URL for knowing the available quotas per subscription, which will help you produce a smooth design during the HLD phase; here is the link: Azure subscription and service limits, quotas, and constraints.

3. Cost control in Azure.
When you are in deep discussion with the customer, one of the basic questions they may ask is what to do if their budget overshoots in Azure, and you should be able to answer this tricky question. Although there are a few 3rd-party products for cost control, like Cloud Cruiser, available in the Azure Marketplace, they do not support Azure CSP. The URL below covers a native Azure feature that will serve the purpose without much effort: Setup Billing Alerts in Azure.

4. Running non-supported Windows OS in Azure.
What will happen to my legacy applications running on Windows 2003, and can I move them to Azure? This is one of the frequently asked questions you may face during your sessions with your customer, and you should be ready with the answer. First of all, you should know that Windows 2003 VMs are no longer officially supported in Azure; however, you may run them as long as you want, and more details can be found here: Windows 2003 VM’s in Azure. A second option is to inform the customer about running them on a designated Hyper-V host in Azure, which can easily be built with the new nested virtualization introduced in Azure.

5. Azure Site Recovery supported scenarios.
Azure Site Recovery is very successful in all types of migration activities to Azure, except for a few areas where it may become a pain for the delivery team at a later stage: in the middle of a migration they may discover that a VM or physical machine cannot be moved to Azure with ASR because of one unsupported scenario or another. I mentioned the same thing in one of my earlier articles, which you can find here (Azure ASR Limitations which is difficult to bypass).

In this type of situation the customer may lose trust in your delivery team, and conflicts may arise between delivery and pre-sales about who promised this deliverable to the customer. So it is always recommended and advisable to learn the different scenarios supported by ASR. Please find the URLs below, which can help here.

6. Running Oracle Database in Azure.
Can I run my Oracle databases in Azure, and how can I move large Oracle databases to the cloud? This is also a common question if the enterprise has lots of Oracle databases in its environment. ASR may be used for Oracle databases, but if the Oracle VMs or physical machines are not supported by ASR, it is better to use Oracle Data Guard for the migration.

Here is an article that can help you answer some basic questions on Oracle migration to Azure: Supported scenarios and Migration options for the Oracle database in Azure.

7. Site connectivity in Azure
Can I connect my existing on-premises sites to Azure, and do I need to invest in new VPN routers and gateways? This is a common question you should be ready to answer for your customer. Microsoft provides a list of supported VPN routers; however, this list may not cover all the routers available in the market. For example, the TP-LINK router I use for my home office is not covered in this list, yet I was able to set up VPN connectivity with Azure. To know more, please click here.

Please find the supported routers here: Supported VPN Routers in Azure.

8. Comparison with AWS.
Please expect a set of questions about similar offerings from Amazon Web Services when you meet your customer, so I suggest you prepare a high-level product comparison between AWS and Azure. I have recently compiled a head-to-head comparison between the Azure and AWS offerings, and I am sure it is definitely going to help you.

Please find my post below Azure VS. AWS Head to Head Comparison Q3 2017

9. Moving resources from one subscription to another.
This is an important question if the customer already has some footprint in Azure and there is a chance that you can onboard them to your CSP subscription, or maybe you are advising them on an EA option. You should be able to answer questions about moving resources from one subscription to another in the first place.

Here is a post for that Move resources from one Subscription to another.

10. Life Cycle Policy of Azure Resources.
Although this question may not be important for some customers, I have seen many customers who wanted to know whether there is any impact on their applications if Microsoft changes the underlying hardware.

A detailed explanation of the Azure life cycle policy can be found in this article: Life Cycle Policy for Azure Resources.

11. Total cost of ownership (TCO) in Azure and in AWS.
This is one of the most discussed topics during the estimation and proposal preparation phase. Generally, a Microsoft pre-sales consultant will have completed this process before the release of the RFP or bid documents; however, you should also know about it. I believe the two URLs below should help you answer any quick questions on TCO during your discussion with the customer.

Total cost of Ownership for Azure.

Total cost of ownership for AWS.

12. Azure Stencils.
As an Azure pre-sales architect you will need the Azure Visio and PowerPoint stencils and icon sets, which are available for download from the Microsoft site and will help you a lot. They are a must-have tool for successful presentations and for High Level and Low Level design, and you will need them throughout the bid process and in every new deal you participate in. Please download the Azure stencils below.

Microsoft Azure, Cloud and Enterprise Symbol / Icon Set – Visio stencil, PowerPoint, PNG, SVG

13. Azure data centre compliance.
When the security folks from the customer ask you compliance-related questions about Azure, you can point them directly to this URL and they will get answers to all their questions, so keep it handy. Otherwise there is a big chance the security team will pour cold water on your presentation and switch to a different vendor who can convince them better on security, and no doubt the security team has an important role in all your deals.

Here is a list of the Compliance of the Azure Data Center.

14. Azure Product Availability by region.
Not all Azure products are available in all Azure regions, so before you promise anything about a particular Azure data center, please take a quick look at the URL below:

Product availability by Regions.

15. Azure Backup – Supported Scenarios.
This is an important area that has to be addressed correctly during the pre-sales bid; otherwise it may again become a pain for the delivery team. For example, in a recent project I found that the pre-sales team had promised an ASR move of Windows 2008 R2 SP1 VMs to Azure because those are well supported by ASR; however, after the first wave the delivery team found that they could not install the Azure Backup agent on the Windows 2008 VMs that are 32-bit, which resulted in a complete back-out of the ASR move. This kind of situation can give you a bad name during the execution phase, so be very careful, and you must add this URL to your checklist.

Azure Backup-FAQ

Azure VM Backup-FAQ

16. Monitoring – Azure Log Analytics-Supported Data Sources.
And here comes monitoring, which is going to be part of most of your deals. If you have chosen to prescribe the Azure monitoring solution in your offering, please don’t forget to take a quick look at the supported data sources. Keep in mind that you can’t monitor everything with Azure Log Analytics. For example, if a customer wants a monitoring solution for their web applications, you may need to direct them to 3rd-party solutions available in the Azure Marketplace, like AppDynamics. For the currently supported data sources, take a look at the URL below.

Azure Log Analytics Supported Data Sources

17. Azure Reference Architecture.
Whether you are a novice or an expert in on-premises architecture design, this is the time to spend a few days understanding Azure application architecture. You have to understand that most architectures in the Azure cloud are based on the SRH guidelines, which is nothing but scalability, resiliency, and high availability. The two URLs below should be enough to understand and master the likely architectures in Azure for your customers.

Azure Architecture Center.

Azure Reference Architecture.

18. Azure Express Route.
Azure ExpressRoute is always a point of discussion in many customer engagements, and many customers would like to put it in the network team’s kitty, but you should be ready with some of the Azure ExpressRoute FAQs; here is the URL for that.

FAQ-Azure Express Route

19. Business Continuity and Disaster Recovery in Azure.
Azure BCP or DR is something like the elephant in the room. This is something you need to plan well before the final commitment during the engagement with the customer. If required, please set up a small POC with a few applications to validate your concept before finalizing the SoW.

You should also be aware of the common terms used in any DR process, as shown below; these have to be agreed with your customer or the application owners. You should know what needs to be recovered in case of DR.

RTO: The recovery time objective is the maximum acceptable length of time that your application can be offline.

RPO: The recovery point objective is the maximum acceptable length of time during which data might be lost due to a major incident. Note that this metric describes a length of time only; it does not address the amount or quality of the data lost.
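These two definitions turn into trivial checks during DR planning. The sketch below is illustrative only; the figures are hypothetical, not from any real SLA. It treats the backup interval as the worst-case data loss and compares it against the agreed RPO, and compares a measured restore time against the RTO:

```python
from datetime import timedelta

def meets_rpo(backup_interval: timedelta, rpo: timedelta) -> bool:
    """Worst-case data loss equals the backup interval: if the incident
    hits just before the next backup, everything since the last backup
    is lost."""
    return backup_interval <= rpo

def meets_rto(measured_restore: timedelta, rto: timedelta) -> bool:
    """The application must be back online within the agreed RTO."""
    return measured_restore <= rto

# Hypothetical figures for illustration only:
print(meets_rpo(timedelta(hours=1), timedelta(hours=4)))  # hourly backups vs 4 h RPO -> True
print(meets_rto(timedelta(hours=6), timedelta(hours=4)))  # 6 h restore vs 4 h RTO -> False
```

A small POC, as suggested above, is where the measured restore time would actually come from.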

Here is a list of URL which are going to help you in this process.

Business Continuity and Disaster Recovery in Azure in the Azure Paired Regions.

Disaster Recovery for the Azure Applications.

High Availability of the Azure Applications.

Designing resilient applications for Azure.

20. What is there in Azure stack?
This is a question many consultants have been facing from customers over the last few months, and as an Azure pre-sales architect you should be aware of what is in Microsoft Azure Stack and how you can compete with the other hyper-converged vendors available in the market. Here is an article that will definitely increase your knowledge of Azure Stack.

Key features and concepts in Azure stack.

That makes the final list of 20, but this is of course not the end. As a player in a tough competition, you should constantly stay informed about innovations, new releases, and product reviews in the Azure world to get ahead of others. I hope you like this post.

Best of luck for your next Azure Assignment.


Should you upgrade to Azure VM Backup Stack V2?



The Azure Resource Manager model now offers the option to upgrade to VM Backup Stack V2. There are many salient features in VM Backup Stack V2; the main selling point, I believe, is the ability to take snapshot backups of disks up to 4 TB in size. In my experience this is a great capability, given that up to 60% of MARS agent backups fail and that agent is not reliable. The ability to take snapshot backups will also guarantee 99.99% recovery of the snapshot disks. Large disks that were being backed up by the MARS agent can definitely be backed up by Azure VM Backup Stack V2, since large-disk snapshot backup is possible once you upgrade.

Another feature enhancement as per the MS site is as follows:

  • Ability to see snapshots taken as part of a backup job that’s available for recovery without waiting for data transfer to finish. It reduces the wait time for snapshots to copy to the vault before triggering restore. Also, this ability eliminates the additional storage requirement for backing up premium VMs, except for the first backup.
  • Reduces backup and restore times by retaining snapshots locally, for seven days.
  • Support for disk sizes up to 4 TB.
  • Ability to use an unmanaged VM’s original storage accounts, when restoring. This ability exists even when the VM has disks that are distributed across storage accounts. It speeds up restore operations for a wide variety of VM configurations.

Differences between Backup Stack V1 and Backup Stack V2

The process of backup
  • Backup Stack V1: Two phases: first the VM or disk snapshot is taken; then the snapshot is sent to the Azure Recovery Services vault.
  • Backup Stack V2: The snapshot is taken and preserved locally for 7 days before being sent to the Azure Recovery Services vault.

When the recovery point is created
  • Backup Stack V1: A recovery point is created once both phases are done.
  • Backup Stack V2: A recovery point is created as soon as the snapshot is taken.

Recovery point creation speed
  • Backup Stack V1: Slow.
  • Backup Stack V2: Fast.

Storage cost
  • Backup Stack V1: No additional storage cost.
  • Backup Stack V2: Local storage cost may increase, since the snapshot is kept for 7 days before moving to the Recovery Services vault. Under the current pricing model, Microsoft does not charge for storing the managed-disk snapshots for those 7 days.

Impact of the upgrade on current backups
  • No impact.

Please note that incremental snapshots are taken for unmanaged disks, but for managed disks the snapshot is taken of the full disk. So if you are planning for a 1 TB managed disk, you need to pay for a snapshot of the full disk.
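The managed vs. unmanaged difference above translates into a simple billing estimate. The sketch below only illustrates that arithmetic under the behavior described here (full-disk snapshots for managed disks, incremental snapshots for unmanaged disks); the function name and figures are hypothetical, and actual rates should come from the Azure pricing page:

```python
def snapshot_billable_gb(disk_size_gb: int, changed_gb: int, managed: bool) -> int:
    """Estimate the billable snapshot size in GB.

    Managed disks: the snapshot is billed for the full disk size.
    Unmanaged disks: snapshots are incremental, so only changed data counts.
    """
    return disk_size_gb if managed else changed_gb

# A 1 TB (1024 GB) disk with only 50 GB changed since the last snapshot:
print(snapshot_billable_gb(1024, 50, managed=True))   # 1024 (full disk billed)
print(snapshot_billable_gb(1024, 50, managed=False))  # 50 (only the delta billed)
```

This is why, for large managed disks, the snapshot cost is worth modeling before you upgrade.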

How to upgrade

Log in to the Azure Portal and go to the Recovery Services vault.

Go to Properties. In the left-side pane you will see the Upgrade option.

Click the Upgrade button to upgrade to Backup Stack V2.0.

Note: This upgrade is not per vault; it applies to the whole subscription. The change is not reversible either.

Conclusion: Upgrading to Azure VM Backup Stack V2.0 is a good decision if you have a large number of VMs with large disks. You can go for it, since there is no additional cost involved at the moment, no additional configuration is needed in the Recovery Services vault, and existing backups will not be impacted.

That’s all for today. Have a great day ahead.
