
Azure VM

All related posts for Microsoft Azure VM

Azure VM

How to minimize Brute Force Attacks by hackers on Azure VMs


Estimated Reading Time: 6 minutes

In one of my posts in June I mentioned the Microsoft data center public IP address ranges and provided the URL to download them. Please note that these IP ranges are also well known to hackers and are very popular in the hacker community. Hackers nowadays generally use the brute force mechanism to attack these ranges. On average, hackers make around five login attempts per minute against these IP ranges on the RDP and SSH ports, and this is only going to increase as more and more valuable data and information moves to Azure every day.

 

Picture Credit: FreeClipart.org

There are two ways to minimize or eliminate this type of attack.

The first option is not to use public IP addresses for the VMs at all and to set up all the VMs on the local area network with private IP addresses. This is a common scenario that most large enterprises follow: they set up a site-to-site VPN or ExpressRoute between their on-premises data center and Azure, and a DNS server on premises or in Azure that resolves the private IP address assigned to each VM. When a VM is configured this way, the public IP address field for the VM is blank, as you can see below.

The network settings of this type of VM will look like this

In this scenario the best practice is to use a jump box, which may be a terminal server in your local area network, to log in to these VMs. Once you are logged in you can also ping the VM, provided ICMP is allowed on the Azure VMs, as you can see below.

This approach is perfectly acceptable for large or medium-sized organisations which also have multi-layer firewall devices to protect their hybrid environment. However, sometimes we require Azure VMs which need a public IP address. In that case you need to follow the second option, which reduces the risk.

The second option is to reduce exposure to a brute force attack by limiting the amount of time that a port is open. The question is how to achieve this.

As you can see below, I have another VM which does have a public IP address and is part of a public subnet.

The best way to achieve this is to enable JIT (just in time access) for the Azure virtual machine. While saying this, I should explain why an NSG, which is also capable of doing this, is not the right fit here on its own. The main reason is that JIT access is a combination of Azure RBAC (Role Based Access Control) and NSGs.

What is Just in time access for the Azure VM?

Just in time VM access enables you to lock down your VMs at the network level by blocking inbound traffic to specific ports. It enables you to control access and reduce the attack surface of your VMs by allowing access only when there is a specific need.

Similar to an NSG, here also we need to specify the ports on the VM for which inbound traffic should be locked down. The image below shows what actually happens in the case of JIT access.

As you can see in the diagram, when a user requests access to a VM, Security Center checks in RBAC (Role Based Access Control) whether the user has write access to that VM. If the user has write permissions, the request is approved and Security Center automatically configures the Network Security Groups (NSGs) to allow inbound traffic to the management ports for the amount of time you specified. After the time has expired, Security Center restores the NSGs to their previous states.
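
If you prefer to script this instead of clicking through the portal, the Az.Security PowerShell module exposes JIT policies. The sketch below is a minimal, hedged example: the subscription ID, resource group, location and VM name are placeholders, and you should verify the exact parameter shapes against the module version you have installed.

    # Minimal sketch: create a JIT policy that allows requesting RDP (3389) for up to 3 hours.
    # All resource names below are placeholders, not values from this post.
    Connect-AzAccount

    $vmId = "/subscriptions/<SUBSCRIPTION-ID>/resourceGroups/demo-rg/providers/Microsoft.Compute/virtualMachines/demo-vm"

    $jitPolicy = @{
        id    = $vmId
        ports = @(@{
            number                     = 3389          # RDP
            protocol                   = "*"
            allowedSourceAddressPrefix = @("*")        # tighten this to your admin IP range in practice
            maxRequestAccessDuration   = "PT3H"        # maximum duration per access request
        })
    }

    Set-AzJitNetworkAccessPolicy -Kind "Basic" -Name "default" `
        -ResourceGroupName "demo-rg" -Location "westeurope" `
        -VirtualMachine @($jitPolicy)

Once the policy exists, access can be requested either from the Request access button described below or with the Start-AzJitNetworkAccessPolicy cmdlet.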

JIT is a very good option, since the Azure network administrator doesn't need to go back again and again to change the NSG settings; however, it incurs additional charges on your Azure subscription, as it is part of the Security Center Standard pricing tier. For more information on the Security Center tiers, please click this URL.

Another thing you will find here is that if you upgrade the security tier to Standard, it applies to all the eligible resources in a particular resource group. As you can see below, it will charge you USD 15 per node per month.

So it's something you should keep in mind, so that you are not surprised after 90 days when you receive your Azure bill and it includes these charges.

Steps to enable Just in Time Access to this VM

Go to Azure Security Center


Go down to the JIT tab as you can see below


Go to the recommended tab in the JIT window


Select the VM where you want to enable JIT


Click on enable JIT on 1 VM

And you can see the default configuration here

Click on Save and JIT has been activated in this VM.

Now you can click on Request Access Button as shown below.

Here you can find the list of default ports which Security Center recommends enabling for JIT. I have selected port 3389 for RDP.

Now MyIP will automatically take the public IP address of your computer as the source IP and allow RDP access to the destination VM, which is the VM where JIT has been configured. Once it's done you can check the Last User field below, which shows the username that has access to this VM. For example, my account, which already has write access to this VM, has been granted RDP permission on it for three hours.

I tried to RDP to this server and, as you can see, I was able to log in without any problem.

After three hours, when I tried again, I was unable to RDP and was getting this error.

You can also edit the JIT policy by clicking the edit option in the Configured tab.

You can also audit the JIT Activity Log by going to the Activity Log Settings as shown below.

Activity log provides a filtered view of previous operations for that VM along with time, date, and subscription. You can download the log in the CSV format.

If you want to remove JIT, you can do so by clicking the Remove button as shown here.

Conclusion

A private IP address helps you restrict access to an Azure VM to internal users only, and just in time VM access in Security Center helps you control access to your Azure virtual machines when they have public IP addresses, thus minimizing the risk associated with brute force attacks. I will bring more posts on Azure VM security in future.

Azure VM

What you should know about Azure's Low Priority VMs and possible use cases


Estimated Reading Time: 4 minutes

In a recent design discussion with the development team we talked about deploying Azure Low Priority VMs in their next project to save cost. This is quite natural: since its launch in May the feature has drawn much attention from the press, and many of you, like other enterprises, are trying to take advantage of the low-cost Azure Low Priority VMs. However, Azure Low Priority VMs need a good business case that can be correlated with a good use case. Without both, it makes no sense to try out the power of Azure Low Priority VMs.

What are Azure Low Priority VMs?

Similar to AWS Spot Instances, Microsoft came out last May with Azure Low Priority VMs. Low Priority VMs are VMs that are available at a significantly discounted price. They are provided from the unused pool of VMs in Azure; in other words, they are allocated from Azure's excess compute capacity to the customers who request them.

Pricing for Azure Low Priority VMs

Low Priority Linux VMs come with an 80% discount, while Windows VMs come with a 60% discount. The discount is calculated against the on-demand hourly cost. This is available across most of the VM instance types in Azure.

A sample discounted price for the general purpose standard Av2 series windows instances can be seen below.

For more details on pricing of the Azure Low Priority VM’s please check this URL

Features of these Low Priority VMs are as follows:

  • Up to 80% discount, fixed price.
  • Uses surplus capacity, so availability can vary at any time.
  • VMs can be seized (preempted) at any time.
  • Available in all regions.
  • Do more for the same price.

Up to this point everything is fine; however, note point number three above, where I mentioned that the VMs can be seized at any time. Always remember that nodes can go up and down with Low Priority VMs, so not all workloads are suitable for them. The question, then, is what types of workloads are suitable for Azure Low Priority VMs.

What types of workloads are suitable for Azure Low Priority VMs?

  • Workloads that are tolerant of interruption
  • Workloads that are tolerant of reduced capacity

Suitable workloads are as follows:

  • Batch processing (asynchronous distributed jobs and tasks running on many VMs)
  • Stateless web UIs
  • Containerized applications
  • Map/Reduce type applications
  • Jobs with flexible completion times
  • Short-duration tasks

Low-priority VMs are currently available only for workloads running in Azure Batch; however, support is expected to be extended to other workloads in the future.

As you can see, one of the most common use cases is Batch tasks, and with Batch tasks one obvious question comes to mind: what happens to a job that is interrupted because its VM is preempted?

In case of an interruption, the affected tasks are automatically requeued and re-executed at a later stage, when VMs become available again.

Lifecycle of a Batch job on Low Priority VMs in case of preemption

What are the options for creating Low Priority VMs?

There are multiple options available when creating an Azure low priority VM pool, depending on what the target is.

Option 1: Lowering the cost

In this scenario all the VMs in the pool are configured as low priority VMs. No dedicated VMs are used.

Option 2: Lowering the cost with a guaranteed baseline

In this scenario the pool is configured with a fixed number of low priority VMs and a fixed number of dedicated VMs (the low priority VMs making up roughly 60 to 80% of the pool).

Option 3: Lowering the cost while maintaining capacity

In this scenario the pool runs entirely on low priority VMs, and whenever nodes are preempted the dedicated target is set to the number of preempted nodes so that overall capacity is maintained. Batch then runs on dedicated VMs, but as soon as low priority VMs become available again it scales back down to the cheaper low priority nodes.
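
To give a feel for how the dedicated/low priority mix is expressed in practice, here is a hedged sketch that resizes an existing Batch pool using the Az.Batch PowerShell module. The account, resource group and pool names are placeholders, and the exact cmdlet and parameter names should be checked against the module version you have installed.

    # Sketch: shift an existing Batch pool towards low priority nodes (Az.Batch module assumed).
    # "mybatchaccount", "my-rg" and "mypool" are placeholder names.
    $batchContext = Get-AzBatchAccountKey -AccountName "mybatchaccount" -ResourceGroupName "my-rg"

    # Keep a small guaranteed baseline of dedicated nodes and put the bulk of the work
    # on cheaper low priority nodes (roughly Option 2 above).
    Start-AzBatchPoolResize -Id "mypool" `
        -TargetDedicatedComputeNodes 2 `
        -TargetLowPriorityComputeNodes 20 `
        -BatchContext $batchContext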

Steps to check the number of Low Priority VMs currently available

First, navigate to Azure Batch accounts

and click Create to create the Batch account in your subscription.

Once the Batch account has been created you should be able to find it as shown below.

You can view the metrics for your low priority VMs in the Metrics tab shown below.

You can click on Quotas to see the available quota limits for this Batch account.

Conclusion:

At present, low-priority virtual machines (VMs) can be used to reduce the cost of Batch workloads. Low-priority VMs make new types of Batch workloads possible by providing a large amount of compute power at an economical price. If your organization runs lots of batch workloads every day, low priority VMs will certainly be one of your best choices in Azure.

Azure VM

File server migration strategy to Azure with zero downtime for end users and without any ACL loss


Estimated Reading Time: 5 minutes

Are you planning to move your on-premises file servers to Azure? If yes, this post can help you plan the steps required for a seamless move of your file shares to Azure. Before you plan the actual move, let's look at the most important factors which need to be considered for a file server migration. They are as follows:

Does the new file share support the required security, authentication and ACLs (Access Control Lists)?

As per our testing and multiple Microsoft articles, Azure File Share currently doesn't meet all of the above requirements; for example, Active Directory-based authentication and ACL support, which is one of the important requirements, is not present in Azure File Share.

How are the end users accessing the data?

These days most enterprises with Windows-based file servers use DFSR for their file share technology. In Azure, if we mount a file share by using SMB, we don't have folder-level control over permissions; instead we can use shared access signatures (SAS) to generate tokens that have specific file permissions and are valid for a specified time interval, but that would be completely new to the users and a complete change from the way the file share is implemented in your current on-premises environment.
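
For completeness, here is a hedged sketch of what such a SAS-based grant looks like with the Az.Storage PowerShell module; the storage account, key and share names are placeholders and are not something we used in this migration.

    # Sketch: generate a read-and-list SAS token for an Azure file share, valid for 8 hours.
    # Storage account name, key and share name are placeholders.
    $context = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<ACCOUNT-KEY>"

    New-AzStorageShareSASToken -ShareName "teamdata" `
        -Permission "rl" `
        -ExpiryTime (Get-Date).AddHours(8) `
        -Context $context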

How many users/clients can access the file simultaneously?

The current limit for an Azure file share is 2,000 concurrent clients.

What is the maximum size of the File Share?

Currently an Azure file share supports a maximum of 5 TiB of data. In the future this may grow to up to 100 TiB.

Sample use case:

Let's consider a very common use case for this article: a large enterprise with multiple locations around the globe and more than 100 file servers currently in use. None of the file servers is very big, but the total data size is around 40 TB. In this use case we consolidated the data onto 12 Azure VMs in different Azure regions instead of the 100 on-premises servers, and we achieved this with the help of DFSR.

Steps we have followed:

To achieve this we have followed the below steps

Fig: Migration steps to move on premise Windows based File Servers to Azure IaaS

Why DFSR is still the best option: it copies files with the same set of permissions (same hash) and it replicates files with the latest changes.

DFSR components: DFS Namespace – this is used to publish the data to end users. People access the virtual namespace and get to the files and folders on the DFSR server.

DFS Replication: this is used to replicate the data between the servers. We can also control the schedule and bandwidth of DFSR replication, and we can mark servers as read-only; this forces the read-only attribute on that server so that no one can make changes on it. DFSR replication works with replication groups. In a replication group we define the folders to be replicated between two or more servers; this can be a full mesh, or we can control it, for example as hub and spoke, via connections. DFSR creates some hidden folders under the replicated folders and stores internal data there before processing; we should not manually remove or add content in these folders.
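
For reference, the same DFSR configuration can be scripted with the DFSR PowerShell module that ships with the File Services role. The sketch below uses the folder and source server names from this post, but the replication group name and the Azure-side server name (AZ-FS01) are placeholders of my own.

    # Sketch: create a replication group between the on-premises file server and an Azure VM.
    New-DfsReplicationGroup -GroupName "RG-ABNU"
    New-DfsReplicatedFolder -GroupName "RG-ABNU" -FolderName "ABNU-FS-A"

    Add-DfsrMember -GroupName "RG-ABNU" -ComputerName "WAI-FS01", "AZ-FS01"

    # By default this creates connections in both directions between the two members.
    Add-DfsrConnection -GroupName "RG-ABNU" -SourceComputerName "WAI-FS01" -DestinationComputerName "AZ-FS01"

    # Point each member at its local content path; the on-premises copy is the primary (authoritative) one.
    Set-DfsrMembership -GroupName "RG-ABNU" -FolderName "ABNU-FS-A" -ComputerName "WAI-FS01" `
        -ContentPath "J:\DFSR\ABNU-FS-A" -PrimaryMember $true -Force
    Set-DfsrMembership -GroupName "RG-ABNU" -FolderName "ABNU-FS-A" -ComputerName "AZ-FS01" `
        -ContentPath "E:\ABNU-FS-A" -Force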

Comparison test between RoboCopy and AzCopy

The question came to our mind whether we should use Robocopy or AzCopy to stage the data. To test the speed we ran the following comparison test.

Here is the test result:

Tool     | Size (GB) | Time (Min.) | Time (Sec.) | ACL (Permissions)
RoboCopy | 1         | 17          | 19          | Intact
AzCopy   | 1         | 2           | 8           | Lost

It’s very clear that you can’t use AzCopy since the ACL (Permissions) are lost. (Probably that is reason why DoubleTake uses Robocopy internally in their application. J)

We did Robocopy to copy the data from one server to the other to reduce the time for DFSR replication. You can read this small article to understand how fast it is to pre seed the data with Robocopy, rather letting DFSR replicate all of it.

Example command we used to prepopulate the data is:

robocopy.exe “\\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A” “E:\ABNU-FS-A” /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:E:\RobocopyLogs\servername.log

The above command copies the folder ABNU-FS-A to the local E: drive of the server from which we are running the command.

/MT:64 sets the thread count; the default is 8, and with 16 threads we can copy 200 MB in a few seconds. However, as we faced some issues with the network, we now usually run 16 threads to make sure Robocopy does not hang.

Once we have Robocopied the data we check the file hashes. An example is below.

To check the file hashes on the remote source server:

Get-DfsrFileHash \\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A\* – this checks the file hashes on all the folders under ABNU-FS-A.

Get-DfsrFileHash E:\ABNU-FS-A\*

Note: we need the DFS Replication (DFSR) PowerShell module to run the above commands. Once this is done, we add the E: drive folder to the replication group and let it sync with DFSR. Because we have already copied the data and the file hashes match, the initial sync takes just a few hours even for many GBs of data. That's all.
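
If you want to script the hash comparison rather than eyeball the two outputs, something along these lines works; note that the FileHash property name is an assumption about the Get-DfsrFileHash output objects, so check it with Get-Member first.

    # Sketch: compare the pre-seeded copy against the source by file hash.
    # Assumes the objects returned by Get-DfsrFileHash expose a FileHash property (verify with Get-Member).
    $sourceHashes = Get-DfsrFileHash -Path "\\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A\*" | Sort-Object FileHash
    $targetHashes = Get-DfsrFileHash -Path "E:\ABNU-FS-A\*" | Sort-Object FileHash

    Compare-Object -ReferenceObject $sourceHashes -DifferenceObject $targetHashes -Property FileHash
    # No output means every hash matched and DFSR will accept the pre-seeded files as-is.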

Now, people may wonder why we have not used the new Azure File Sync, which is the buzzword nowadays for file shares.

Although we have not used Azure File Sync, let's discuss a few things about it.

What is Azure File Sync?

With Azure File Sync, shares can be replicated to Windows Servers on-premises or in Azure. The users would access the file share through the Windows Server, such as through an SMB or NFS share. This is useful for scenarios in which data will be accessed and modified far away from an Azure datacenter, such as in a branch office scenario. Data may be replicated between multiple Windows Server endpoints, such as between multiple branch offices.

Why is this not the right fit for the work we are doing?

The main use case for Azure File Sync is when you have multiple branch offices with very slow network links: the best fit is an on-premises site with a slow network, where Windows, Linux and macOS clients can mount a local on-premises Windows file share that acts as a fast cache of the Azure file share. Since we have very good bandwidth to Azure from all the branches over site-to-site connectivity, Azure File Sync doesn't fit here.

The data transfer methods available for pre-staging the files are as follows:

  • Azure Import/Export
  • RoboCopy
  • AzCopy
  • Azure File Sync

Conclusion:

There are multiple options for transferring file server data from on premises to Azure, but if you want a very smooth migration where end users see no downtime, the approach described here is the best one. Note that ACLs are preserved only by RoboCopy and Azure File Sync. An Azure Files share could have been used without the need to manage hardware or an OS, instead of building the Azure IaaS VMs as we did here, but that is not a viable option in this case because we need to preserve the ACLs, and unfortunately that is still not supported by Azure File Share at the moment.

Azure VM

Top 20 most helpful pieces of information any Azure pre-sales architect should keep handy in 2017


Estimated Reading Time: 8 minutes

If you are planning to meet your customer for a large transformation and migration deal with an Azure offering, you have done all your homework, your presentation is ready and you are set to crack the deal. Hold on: before you make any promises, please spend some time checking these 20 most helpful webpages and URLs, which may make you and your customer happy and your delivery team's life a lot easier in future. I have compiled this list based on my personal experience, and I hope it will make a big difference during any RFP/RFI, HLD/LLD or SoW preparation on Azure.

Credit: “Royalty Free” photo from www.pexels.com

1. Azure Pricing Calculator.

The Azure pricing calculator is something you will need at every step of your engagement with the customer; here is the link for the Azure Pricing Calculator.

For the Azure CSP the pricing calculator is available in the CSP portal.

2.  Azure Subscription Limits and Quotas.
A must-have URL for knowing the available quotas per subscription, which will help you produce a clean design during the HLD phase; here is the link: Azure subscription and service limits, quotas, and constraints.

3. Cost control in Azure.
When you are in deep discussion with the customer, one of the basic questions they may ask is what to do if their budget overshoots in Azure, and you should be able to answer this tricky question. Although there are a few 3rd-party products like Cloud Cruiser available in the Azure Marketplace for cost control, they don't have support for Azure CSP. The URL below covers a native Azure feature that will serve the purpose without much effort: Setup Billing Alerts in Azure.

4. Running non-supported Windows OS in Azure.
What will happen to my legacy applications running on Windows 2003? Can I move them to Azure? This is one of the frequently asked questions you may face during your sessions with your customer, and you should be ready with the answer. First of all, you should know that Windows 2003 VMs are no longer officially supported in Azure; however, you may run them as long as you want, and more details can be found here: Windows 2003 VM's in Azure. A second option is to tell the customer about running them on a designated Hyper-V host in Azure, which can easily be built with the new nested virtualization introduced in Azure.

5. Azure Site Recovery supported scenarios.
Azure Site Recovery is very successful in all types of migration activities to Azure, except for a few areas where it may become a pain for the delivery team at a later stage: in the middle of a migration they may discover that a VM or physical machine can't be moved to Azure with ASR because of one or another unsupported scenario. I mentioned the same thing in one of my earlier articles, which you can find here. (Azure ASR Limitations which is difficult to bypass)

In this type of situation the customer may lose trust in your delivery team, and conflicts may arise between the delivery and pre-sales teams about who promised this deliverable to the customer. So it is always advisable to learn the different scenarios which are supported by the ASR process. Please find below the URLs which can help here.

6. Running Oracle Database in Azure.
Can I run my Oracle databases in Azure? How can I move large Oracle databases to the cloud? This is also a common question if the enterprise has lots of Oracle databases in its environment. ASR may be used for Oracle databases, but if the Oracle VMs or physical machines are not supported by ASR, it is better to use Oracle Data Guard for the migration.

Here is an article which can help you to answer some basic questions on Oracle migration to Azure Supported scenarios and Migration options for the Oracle database in Azure.

7. Site connectivity in Azure
Can I connect my existing on-premises sites to Azure? Do I need to invest in new VPN routers and gateways? This is one of the common questions you should be ready to answer for your customer. Microsoft provides a list of supported VPN routers; however, the list may not cover all the routers available in the market. For example, the TP-LINK router I am using for my home office is not covered in this list, yet I was able to set up VPN connectivity with Azure. To know more please click here.

Please find the supported routers Supported VPN Routers in Azure.

8. Comparison with AWS.
Expect a set of questions about the similar offering from Amazon Web Services when you meet your customer, so I suggest you prepare a high-level product comparison between AWS and Azure. I have recently compiled a head-to-head comparison between the Azure and AWS offerings, and I am sure it is going to help you.

Please find my post below Azure VS. AWS Head to Head Comparison Q3 2017

9. Moving resources from one subscription to another.
Now this is an important question if the customer already has some footprint in Azure and there is a chance you can onboard them to your CSP subscription, or maybe you are advising them on an EA option. The question of moving resources from one subscription to another is an important one you should be able to answer in the first place.

Here is a post for that Move resources from one Subscription to another.

10. Life Cycle Policy of Azure Resources.
Although this question may not be important for every customer, I have seen many customers who want to know whether there is any impact on their applications if Microsoft changes the underlying hardware.

A detailed explanation of the Azure life cycle policy can be found in this article: Life Cycle Policy for Azure Resources.

11. Total cost of ownership (TCO) in Azure and in AWS.
This is one of the most discussed topics during the estimation and proposal preparation phase. Generally the Microsoft pre-sales consultant will have completed this exercise before the release of the RFP or bid documents; however, you should also know about it, and I believe the two URLs below will help you answer any quick question on TCO during your discussion with the customer.

Total cost of Ownership for Azure.

Total cost of ownership for AWS.

12. Azure Stencils.
As an Azure pre-sales architect you will need the Azure Visio and PowerPoint stencils and icon sets, which are available for download from the Microsoft site and will help you a lot. They are a must-have for successful presentations and for high level and low level designs, and you will need them throughout the bid process and in every new deal you participate in. Please download the Azure stencils here.

Microsoft Azure, Cloud and Enterprise Symbol / Icon Set – Visio stencil, PowerPoint, PNG, SVG

13. Azure data centre compliance.
Compliance of the Azure data center: when the security folks from the customer ask you compliance-related questions about Azure, you can point them directly to this URL and they will get answers to all their questions, so keep it handy. Otherwise there is a big chance that the security team will pour cold water on your presentation and switch to a different vendor who can convince them better on the security part, and there is no doubt the security team has an important role in all your deals.

Here is a list of the Compliance of the Azure Data Center.

14. Azure Product Availability by region.
Not all Azure products are available in all Azure regions, so before you promise anything about a particular Azure data center, please take a quick look at the URL below:

Product availability by Regions.

15. Azure Backup – Supported Scenarios.
This is an important area which has to be addressed correctly during the pre-sales bid, otherwise it may again become a pain for the delivery team. For example, in a recent project I found that the pre-sales team had promised an ASR move of the Windows 2008 R2 SP1 VMs to Azure, because they are well supported by ASR; after the first wave, however, the delivery team found that they couldn't install the Azure Backup agent on the Windows 2008 VMs which are 32-bit, and that resulted in a complete back-out of the ASR move. This kind of situation can give you a bad name during execution, so be very careful and add these URLs to your checklist.

Azure Backup-FAQ

Azure VM Backup-FAQ

16. Monitoring – Azure Log Analytics-Supported Data Sources.
And here comes monitoring, which is going to be part of most of your deals. If you have chosen to prescribe the Azure monitoring solution in your offering, please don't forget to take a quick look at the supported data sources. Keep in mind that you can't monitor everything with Azure Log Analytics. For example, if the customer wants a monitoring solution for their web applications, you may need to direct them to 3rd-party solutions available in the Azure Marketplace, like AppDynamics. For the data sources which are currently supported, take a look at the URL below.

Azure Log Analytics Supported Data Sources

17. Azure Reference Architecture.
Whether you are a novice or an expert in on-premises architecture design, this is the time to spend a few days understanding Azure application architecture. You have to understand that most architectures in the Azure cloud are based on the SRH guidelines, which is nothing but scalability, resiliency and high availability. The two URLs below should be enough to understand and master the architectures you are likely to build in Azure for your customers.

Azure Architecture Center.

Azure Reference Architecture.

18. Azure Express Route.
Azure ExpressRoute is always a point of discussion in many customer engagements, and many would like to hand it over to the network team, but you should be ready with some of the Azure ExpressRoute FAQs; here is the URL for that.

FAQ-Azure Express Route

19. Business Continuity and Disaster Recovery in Azure.
Azure BCP or DR is something like the elephant in the room. It is something you need to plan well before the final commitment during the engagement with the customer. If required, set up a small POC with a few applications to validate your concept before finalizing the SoW.

You should also be aware of the common terms used in any DR process, as shown below, and these have to be agreed with your customer or the application owners. Some of them are listed below. You should know what needs to be recovered in case of DR.

RTO: The recovery time objective (RTO), which is the maximum acceptable length of time that your application can be offline.

RPO: A recovery point objective (RPO), which is the maximum acceptable length of time during which data might be lost due to a major incident. Note that this metric describes the length of time only; it does not address the amount or quality of the data lost.

Here is a list of URLs which are going to help you in this process.

Business Continuity and Disaster Recovery in Azure in the Azure Paired Regions.

Disaster Recovery for the Azure Applications.

High Availability of the Azure Applications.

Designing resilient applications for Azure.

20. What is there in Azure stack?
This is a question which many consultants have been facing from customers over the last few months, and as an Azure pre-sales architect you should be aware of what is in Microsoft Azure Stack and how you can position it against the other hyper-converged vendors available in the market. Here is an article which will definitely increase your knowledge of Azure Stack.

Key features and concepts in Azure stack.

That’s make the final list of 20 but this is of course not the end, being a player in tough competition, you should constantly can stay informed of innovations, new releases and product reviews of the Azure world to get ahead of others. Hope you will like this post.

Best of luck for your next Azure Assignment.

Azure Backup, Azure VM

Should you upgrade to Azure VM Backup Stack V2?


Picture Credit: Royalty-free picture from Pexels.com

The Azure Resource Manager model has come up with the option to upgrade to VM Backup Stack V2. There are many salient features of the VM Backup Stack V2; the main selling point, I believe, is the ability to take snapshot backups of disks up to 4 TB in size. In my experience this is a great capability, given that MARS agent backups of such disks are unreliable and can fail up to 60% of the time, while snapshot-based backup also offers 99.99% recovery reliability for the snapshotted disks. Scenarios where large disks were being backed up by the MARS agent can be handled by the Azure VM Backup Stack V2, and snapshot backup of large disks becomes possible if you upgrade.

Other feature enhancements, as listed on the Microsoft site, are as follows:

  • Ability to see snapshots taken as part of a backup job that’s available for recovery without waiting for data transfer to finish. It reduces the wait time for snapshots to copy to the vault before triggering restore. Also, this ability eliminates the additional storage requirement for backing up premium VMs, except for the first backup.
  • Reduces backup and restore times by retaining snapshots locally, for seven days.
  • Support for disk sizes up to 4 TB.
  • Ability to use an unmanaged VM’s original storage accounts, when restoring. This ability exists even when the VM has disks that are distributed across storage accounts. It speeds up restore operations for a wide variety of VM configurations.

Difference between the Backup Stack V1 and Backup Stack V2

Items | Backup Stack V1 | Backup Stack V2
The process of backup | Two phases: first the VM or disk snapshot is taken, and in the next step the snapshot is sent to the Azure Recovery Services vault. | The snapshot is taken and preserved locally for 7 days before being sent to the Azure Recovery Services vault.
When the recovery point is created | A recovery point is created once phases 1 and 2 are done. | A recovery point is created as soon as the snapshot is taken.
Recovery point creation speed | Slow | Fast
Storage cost | No additional storage cost. | Local storage cost may increase, since the snapshot is stored for 7 days before moving to the Recovery Services vault; under the current pricing model Microsoft is not charging for storing the managed disk snapshots for these 7 days.
Impact of the upgrade on current backups | No impact.

Please note that incremental snapshots are taken for unmanaged disks, but for managed disks the snapshot is taken of the full disk. So if you are planning for a 1 TB managed disk, you need to pay for the snapshot of the full disk.

How to upgrade

Log in to the Azure Portal and Go to the Recovery Services Vault.

Go to Properties. In the left-hand pane you will see the following.

Click on the Upgrade button.

Click Upgrade to move to Backup Stack V2.

Note: This upgrade is not applied per vault; it is applied per subscription. Also, this change is not reversible.

Conclusion: Upgrading to the Azure VM Backup Stack V2 is a good decision if you have a large number of VMs with large disks. You can go for it, since there is no additional cost involved at the moment, no additional configuration is needed in the Recovery Services vault, and the existing backups will not be impacted.

That’s all for today. You have a great day ahead.

Azure VM

Add a managed disk to your Azure VM without any downtime.


Estimated Reading Time: 3 minutes

Dear friends, today I am going to show you how to add a non-OS disk to an Azure VM on the fly, without any downtime. I have a Windows VM where I would like to add an additional disk. When I clicked on This PC it showed me only the OS disk and the temporary storage drive (which is created by default with any Azure VM), so I decided to add a data disk to this VM.

To achieve this, I went to the VM blade and clicked on Disks.

The next step is to add the data disk. Click on the Add data disk button. The maximum size of a data disk that can be added is 4 TB, as you can see below.

Once I clicked on the Add data disk button it allowed me to create a managed disk.

The difference between Azure managed disks and unmanaged disks is something I am planning to discuss in a separate post in future. You can get an overview of Azure managed disks here.

Now that I have clicked on the create button, you can see that I created a 100 GB disk with the Standard_LRS storage type and host caching set to Read/Write (that means both reads and writes to the disk are cached).

Once I clicked the Save button the disk was created, as you can see from the message below: "Updating virtual machine disks".
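
The same attach operation can also be scripted. Below is a minimal sketch using the Az PowerShell module; the resource group, VM and disk names are placeholders, so adjust them (and check the parameter availability in your Az.Compute version) before using it.

    # Sketch: attach a new 100 GB Standard_LRS managed data disk to a running VM.
    # "my-rg" and "my-vm" are placeholder names.
    $vm = Get-AzVM -ResourceGroupName "my-rg" -Name "my-vm"

    $vm = Add-AzVMDataDisk -VM $vm `
        -Name "my-vm-data01" `
        -Lun 0 `
        -CreateOption Empty `
        -DiskSizeInGB 100 `
        -Caching ReadWrite `
        -StorageAccountType Standard_LRS

    # Push the change; the VM keeps running while the disk is attached.
    Update-AzVM -ResourceGroupName "my-rg" -VM $vm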

Now the next step is to RDP into my VM and go to Disk Management. Right-click on the disk which is shown and initialize it.

Once it was initialized I clicked on New Simple Volume and then clicked Next.

You will see the following screen, as shown below.

The next step is to specify the size of the volume. I chose the full 100 GB.

In the next step I assigned a drive letter to the disk.

And the last step is to format the disk.

Once that completed I clicked on the Finish button.

Now the disk is still getting formatted

Now the format has completed and the disk is ready for my use.
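
If you prefer not to click through Disk Management, the same initialize/partition/format steps can be done in a single pipeline inside the guest. This is a generic Windows Server storage-cmdlet sketch; the volume label is just a placeholder.

    # Sketch: initialize, partition and format the newly attached raw disk from inside the VM.
    Get-Disk | Where-Object PartitionStyle -eq 'RAW' |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "DataDisk01" -Confirm:$false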

That’s all for today, I hope you will like this post. I will bring more on managed disks in my next posts.

Azure VM

Top 10 pieces of information required for a reasonable sizing estimate of an Azure VM


Estimated Reading Time: 1 minute

Several pieces of information are required to make a reasonable sizing estimate for Azure VMs, and here is a comprehensive list we have been following for a long time to arrive at a reasonable T-shirt size for Azure VMs.

  1. CPU utilization
  2. Memory Utilization
  3. Disk Size
  4. Disk IOPs (I/O’s per second) for C: drive and data / database drives
  5. Disk data transfer rates (Mbytes per second) for C: drive and data / database drives
  6. Backup / restore requirements
  7. Availability / redundancy approach
  8. DR approach
  9. Any specific network requirements (For example DMZ, Load balancer)
  10. VM scheduling opportunities – i.e. when it can be powered off to save money.

Out of these parameters, the first five items are the most critical for sizing, as over-provisioning in Azure can have major cost implications. The first five can easily be gathered by configuring Windows performance counters, for example as sketched below. The information for the remaining five can be gathered through discussion with the application owners. In future posts I will show how to capture the first five parameters from Windows Server based VMs or physical servers.
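
As a starting point, the sketch below collects the first five metrics with the Get-Counter cmdlet in Windows PowerShell; the sample interval, duration and output path are arbitrary illustrative choices, not a recommendation.

    # Sketch: sample CPU, memory and disk counters every 5 minutes for 24 hours and save them for later analysis.
    $counters = @(
        '\Processor(_Total)\% Processor Time',
        '\Memory\Available MBytes',
        '\LogicalDisk(*)\Free Megabytes',
        '\PhysicalDisk(*)\Disk Transfers/sec',   # IOPS
        '\PhysicalDisk(*)\Disk Bytes/sec'        # data transfer rate
    )

    Get-Counter -Counter $counters -SampleInterval 300 -MaxSamples 288 |
        Export-Counter -Path 'C:\PerfLogs\vm-sizing-baseline.blg' -FileFormat BLG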

Azure VM

Doing an Azure assessment – disk IOPS and bandwidth have a high impact on correct sizing


Estimated Reading Time: 9 minutes

I know many of you, while doing an Azure assessment, mainly focus on memory, CPU cores, and the size and number of disks to find the right T-shirt size for the Azure VMs. However, the most important part, which many assessment tools may ignore, is the requirement for correct disk IOPS, latency and throughput sizing. This is critical for most applications; otherwise you may need to change the template at a later stage, which will affect the overall migration cost forecast.

To know more about the sizing parameters, you can refer to one of my earlier posts on the top 10 pieces of information required for a reasonable sizing estimate of an Azure VM.

People who come from a storage background are very much aware of the terms IOPS, latency and throughput. In the golden days of SAN storage, disk manufacturers generally benchmarked their products by the maximum throughput the disk could deliver.

Throughput is nothing but average IO size × IOPS, and it is generally measured in MB/s.

This can be compared with the new bullet train which is going to be launched in India by 2020. Its headline figure is the maximum speed the train can reach, currently 500 km/h; likewise a disk can have a throughput of 500 MB/s, which is the maximum it can deliver.

The next parameter is IOPS, which is very important for the correct sizing of disks. IOPS means IO operations per second, i.e. the number of read or write operations that can be done in one second.

Another important parameter is the IO size, which is the size of the data processed by each I/O operation.

The last important parameter is latency. Latency is how fast a single I/O request is handled.
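
To make the relationship between these terms concrete, here is a small worked example with purely illustrative numbers: a disk serving 310 IOPS with an average IO size of 40 KB delivers roughly 310 × 40 KB ≈ 12,400 KB/s, or about 12 MB/s of throughput, which is the same order of magnitude as the application server example later in this post.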

What is the best way to get the IOPS and throughput information from a Windows Server where the present application is running?

There are multiple ways and multiple tools available on the internet, like Iometer from Intel and DiskSpd; however, if you are evaluating the disks of a Windows OS based system I recommend you always use Windows performance counters for your assessment. Perfmon will give you the required metrics for a correct assessment.

To collect the metrics you should configure the data collector set so that it captures the right set of counters, and the counters should be collected over a period when the VM or physical machine sees its highest activity. For a business case such as an ERP application, the metrics can be taken from Monday to Friday, because that is when the application is at its peak, and that period's data should be used for sizing the IOPS and throughput. However, for a few database VMs the peak may be at the weekend, because the team may run jobs then; if that is the case you should collect the data over the weekend. To know the best time period for collecting the metrics, contact the application owners.

Now let’s consider the main important part of this article. How we are going to determine the correct VM size of an Azure VM, before we understand this we should find out how Microsoft has sized their VM templates. In my analysis I have taken couple of on premise VM’s to understand the sizing.

I must tell that Microsoft is non consistent in the parameters which it has defined for the disk sizes across the template. However in most of the cases Microsoft has considered the following two parameters.

Microsoft measures disk throughput and usually considers these two parameters for the throughput calculation:

  • IOPS (input/output operations per second)
  • MBps (bandwidth for disk traffic only, where MBps = 10^6 bytes/sec)

Please note that IOPS is a plain number here, while bandwidth is expressed in MBps.

As I mentioned earlier, we can collect the server storage data with Windows perfmon counters. I configured the items marked in red in the data collector set, which I ran for 24 hours to collect the metrics from the server; the sketch after this paragraph shows an equivalent data collector set created from the command line.
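
If you would rather create the data collector set from the command line than through the Perfmon GUI, a logman command along the following lines does the same job; the collector name, interval and output path here are illustrative only.

    # Sketch: create and start a data collector set for the assessment counters (60-second samples).
    logman create counter AzureAssessment -si 60 -f bincirc -o "C:\PerfLogs\AzureAssessment" `
        -c "\Processor(_Total)\% Processor Time" "\Memory\Available MBytes" `
           "\PhysicalDisk(*)\Disk Transfers/sec" "\PhysicalDisk(*)\Disk Bytes/sec"

    logman start AzureAssessment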

To understand it better, let's take three use cases: the first is a low-configuration application server, the second a high-configuration database server, and the third an old ERP server.

In the sample below I have taken a low-configuration on-premises application server. As you can see in the graph, I have collected the storage (physical disk) data of the VM.

Fig: Physical disk IOPS and throughput usage over 24 hrs for the on-premises sample application server.

As you can see, in this example I collected the perfmon data for 24 hours on a typical business day and plotted the IOPS and the throughput (disk bandwidth). The maximum IOPS is around 310, and in the plot that shows only disk bandwidth the maximum is around 12 MBps.

Based on the above metrics I can conclude that a VM template which can support 300 to 400 IOPS and bandwidth above 12 MBps is suitable for this application. Now let's take a look at the CPU and memory utilization of this server.

For the same VM the CPU and memory usage is shown below.

If you look at the CPU utilization you can see that the average is around 40% on this two-core system, and the average memory utilization is around 2.15 GB. Since CPU utilization is around 40% we could choose a one-core VM; however, that will not fit, since the memory requirement is higher.

So if you take a look at the general purpose A series VMs, you will find that a Standard_A2 template is suitable for this application.

Now you may ask why the above table doesn't show the second storage parameter, disk bandwidth, which I mentioned above. To get that information you need to refer to another table, shown here.

So for the VM templates where you don't find the storage bandwidth, please refer to the table above, where it is mentioned that a Standard tier VM supports a maximum bandwidth of 60 MB/s.

Based on the data collected from the perfmon metrics, the table below describes the sizing parameters we considered and the best fit.

Parameters | On-Premises VM | Selected Azure Template (Standard_A2)
CPU Cores | 2 | 2
Memory | 2.2 GB (max utilized) | 3.5 GB
IOPS | 310 (max required) | 500 (stripe volume NA)
Storage Bandwidth | 12 MBps | 60 MBps

Now let’s talk about our 2nd example where we considered a database Server. As you can see below this is a high IOPS database server so the graph will look like this as shown below.

In this above figure you can find the IOPS is going above 20000 and network bandwidth is touching 700 Mbps. Let’s now check the CPU and Memory utilization for this server.

The above figure shows the CPU and Memory utilization data. Except few spikes in CPU we can see average CPU utilization is below 30% and the average memory utilization is below 128 GB however there are occasional spikes.

In this example let’s look into the following table for ESv3-series. The ESv3-series is a series of memory optimized VM’s, ESv3-series instances are based on the 2.3 GHz Intel XEON ® E5-2673 v4 (Broadwell) processor and can achieve 3.5GHz with Intel Turbo Boost Technology 2.0 and use premium storage. Ev3-series instances are ideal for memory-intensive enterprise applications.

The ESv3-series VM template table will look like this.

As you can find in the above table in the column number seven it will show the Max IOPS Size and Max Disk Network Size.

Now in next step you need to concentrate on this table for premium disks. You can add the number of disk to achieve the IO and Disk Bandwidth

For the Standard_D32s_v3 VM the IOPS is 51200 which I have marked in red and throughput is 768 MBps also have CPU core of 32 and Memory of 256 GB can be the best fit for this VM. However if we can consider the average CPU utilization, Memory Utilization and IOPS and Disk Network Speed we can also select the Standard_E16s_v3 template to get the max utilization of the resources. This is a call which Azure System Admin need to take. Please note that Azure VM template can be easily upgraded in case utilization causes any issue. Price wise there will be almost 50% difference in both the VM template.

Let’s verify the sizes here which we have considered in this exercise.

Parameters | On-Premises VM | Selected Azure Template (Standard_E32s_v3) | Optimized Azure Template (Standard_E16s_v3)
CPU Cores | 32 | 32 | 16
Memory | 128 GB (average), 250 GB (max) | 256 GB | 128 GB
IOPS (average) | 20000 | 51200 | 25600
Storage Bandwidth (max) | 350 MBps | 768 MBps | 384 MBps

Let’s take another 3rd example, this server is old and have only 8 core CPU but 48 GB Memory which has been increased based on the requirement in last 6 years, the utilization is showing as below

In the above physical server the CPU usage is 80 to 100 percent and the max memory usage is 90%

If we look at the IOPS and Disk Network Usage Graph it will show like this

Max IOPS is touching 18000 and max disk network bandwidth is touching 850 MBps

This is a special case where IO and Disk Network Bandwidth Requirement is very high for this case we need to select the high IO intensive VM from the template. And if you look into high IO intensive VM the table look like this

In the above table my selection will be Standard_L8s which will fit for CPU and Memory but for achieving the IOPS and Network Bandwidth requirement we need to consider striped disk volume of minimum (Please refer to above premium disk table) four disk (P40) which will help us to achieve a disk bandwidth requirement of 900 Mbps, however this is not guaranteed/possible as per this article by MS. If you need guarantee, you need to choose Standard_L32S which will be a dedicated machine for your workload in Azure and very much over kill in terms of CPU and Memory but fit well for the Network Bandwidth Usage, however it will be super expensive as well.

As per the link below, the VM's own throughput limit should be higher than the combined IOPS/throughput limit of the attached disks, because the effective performance is capped at the VM level. This discards our disk-striping idea above, since the VM-level limit of the smaller template is less than what is required here. Please check this URL for more details.

Clearly, as per the above statement, the VM template's limit overrides the combined striped-disk IOPS and bandwidth, which is sad news.

Let's verify the sizing we end up with if we go with the above article by Microsoft.

Parameters | On-Premises VM | Selected Azure Template (Standard_L32s*)
CPU Cores | 8 | 32
Memory | 48 GB (max utilized) | 256 GB
IOPS | 18000 (max required) | 40000
Storage Bandwidth | 900 MBps | 1000 MBps

* Standard_L32s is a dedicated VM, will incur a huge cost for the enterprise, and is overkill in terms of CPU and memory.

Conclusion:

As we have seen in these examples, IOPS and disk bandwidth play an important role in correct VM sizing, so it is always recommended that you consider these parameters during your next Azure assessment; otherwise it can become a nightmare. If you are migrating an on-premises VM or physical server to Azure and you find that the IOPS and bandwidth requirements are very high, you should always ask the application owners whether they can tune the application or database on the server, as that will help reduce the T-shirt size of the VM. An Azure assessment is not a very easy process, and it takes time and effort to make the best use of your Azure budget.

Azure VM

How to take a backup of an Azure storage account and why incremental snapshots should be the best practice to save cost


Estimated Reading Time: 5 minutes

I have frequently been asked this question at meetups by Azure developers who have created hundreds or thousands of containers inside an Azure storage account and want to know how they can take a backup of the complete storage account.

I think this is a common question which has been asked by many people.

To answer it: practically, it is not possible to take a backup of a whole Azure storage account. What we need to do instead is take snapshots of the blobs in a container and, if needed, download them as a point-in-time backup.

Fig: Azure Blob Hierarchy

What is a blob snapshot?

As per Microsoft the snapshot is a read-only version of a blob that’s taken at a point in time. Snapshots are useful for backing up blobs. After you create a snapshot, you can read, copy, or delete it, but you cannot modify it.

A snapshot of a blob is identical to its base blob, except that the blob URI has a DateTime value appended to the blob URI to indicate the time at which the snapshot was taken.

The most common use case of snapshots

The most common use case of a blob snapshot is a snapshot of a VHD file. A VHD stores the current state of a VM disk; if you have taken a snapshot of a VHD file you can later create a VM from that snapshot. In this article I am not going to show you how to do that, because it is already shown in many videos and blogs. In this post I will instead try to explain the underlying mathematics of Azure blob snapshots so that you can understand how they are billed.

We can also back up the disks of Azure VMs using snapshots; this is a common practice, and Azure administrators generally schedule such backups at regular intervals, for example with a script along the lines of the sketch below.
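
If the VM uses managed disks, this kind of snapshot can be scripted directly with the Az PowerShell module; the sketch below is a minimal example with placeholder resource group, disk and snapshot names (for classic unmanaged VHDs the same idea applies via blob snapshots instead).

    # Sketch: take a point-in-time snapshot of a managed disk. Names are placeholders.
    $disk = Get-AzDisk -ResourceGroupName "my-rg" -DiskName "my-vm_OsDisk"

    $snapshotConfig = New-AzSnapshotConfig -SourceUri $disk.Id -Location $disk.Location -CreateOption Copy

    New-AzSnapshot -ResourceGroupName "my-rg" `
        -SnapshotName "my-vm_OsDisk-snap-$(Get-Date -Format yyyyMMdd)" `
        -Snapshot $snapshotConfig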

Why it’s a case of worry

I have seen many Azure admins lately surprised with many billing related issues related to Azure storage account which has been created and in a period of time multiple snapshots has incurred huge billing cost as there is a math behind every snapshot and it is also important to delete the snapshot in time to time to save the cost so it is very important we should understand how the snapshot billing done in Azure.

Understanding Snapshot Billing

To understand this better, let's take a very simple example of identical twins studying in the same school. In my example there are multiple pairs of identical twins in a class, and the school has a special rule: it charges only a single fee per pair of identical twins.

Scenario 1:

In the figure below, consider three students in a class (left side) who have three identical twins (right side) in the same class. As per the rule, the school will charge fees only for the three students on the left. In this figure the left-side students represent the three blocks of a blob and the right-side students represent the snapshot of each block. So if the fee for each student is USD 1,000, the total fee to be paid is USD 3,000 in this case.

Base Blob Snapshot

In technical terms: in the figure above you have three blocks in the base blob on the left-hand side, and on the right side you have a snapshot of those blocks taken at some point in time. After that, no change has been made to the base blob, so charges are incurred only for the three unique blocks on the left-hand side.

Scenario 2:

In this scenario let’s say the 3rd student has changed the color of his uniform to green in this case the school will charge the fees from four students instead of three since the school will consider the student who has changed the color of his uniform as another unique student.

Base Blob Snapshot

In technical terms: the base blob has been updated and the third block on the left-hand side has changed, but no new snapshot has been taken. Since there is a change in the third block, Azure will charge for the three blocks in the previous snapshot plus one for the updated third block in the base blob, i.e. four blocks in total.

Scenario 3:

In this scenario the third student on the left-hand side is completely replaced by a new student, but there is no change to the identical twins on the right-hand side. In this case the school will charge fees for four students instead of three, since it considers the replacement student to be another unique student.

In technical terms: the base blob has been updated, but the snapshot has not. Block 3 on the left-hand side was replaced with a new block in the base blob, but the snapshot still reflects the original block 3. As a result, the account is charged for four blocks.

Scenario 4:

In this scenario all the students on the left-hand side have been replaced by new students, and there is no change to the identical twins on the right side, so the school will consider all six students unique and charge fees for all of them.

In technical terms: the base blob has been completely updated with a new set of blocks and all the original blocks have been replaced, while the snapshot blocks are unchanged, so Azure will charge for all six blocks present here.

Can we copy the snapshot to a different storage account?

Yes, we can copy a snapshot created in one storage account to a different storage account, where it becomes a blob. When a snapshot is copied from one storage account to another it keeps the same size as the base blob and incurs the same storage cost.

What is an incremental snapshot and why is it considered the best practice at present?

An incremental snapshot is similar to an incremental backup of a database. In the case of a page blob, when a new snapshot is created from the base blob, an API called GetPageRanges can be used to retrieve only the changes made since the last snapshot was taken. Copying a complete snapshot from one storage account to another can be very slow and can consume a lot of storage space, which increases the storage cost. With incremental snapshot backups, successive copies of the data contain only the portion that has changed since the preceding snapshot copy was made. This way, both the time to copy and the space needed to store backups are reduced.

Conclusion

If you are building a customized backup solution for Azure blobs, snapshots are the best possible option at the moment. Incremental snapshots reduce the cost and help you manage your storage spend effectively.
