Azure VM

All related posts for Microsoft Azure VM

Mozilla Firefox Overcomes Microsoft Edge Default Barrier with In-Built Button

Microsoft Edge is the default browser on Windows, as you would expect considering it is Microsoft’s own platform. However, the company gives users the choice to switch to other default browsers, among them Mozilla Firefox. We recently reported how switching to another default browser (or any program) will become more difficult on Windows 11.

To work around Microsoft's increasingly convoluted process for changing defaults, Mozilla has quietly made some changes. The company is making it easier for users to switch permanently to Firefox. The changes work on Windows 11 as well as Windows 10.

With the release of version 91 of Firefox, Mozilla included a method for setting the browser as default in Windows. In fact, this was a clever bit of reverse engineering, taking Microsoft’s one-click default choice for Edge and applying it to Firefox.

In other words, it is now possible to set Firefox as the default from directly within the browser.

“People should have the ability to simply and easily set defaults, but they don’t,” Mozilla tells The Verge. “All operating systems should offer official developer support for default status so people can easily set their apps as default. Since that hasn’t happened on Windows 10 and 11, Firefox relies on other aspects of the Windows environment to give people an experience similar to what Windows provides to Edge when users choose Firefox to be their default browser.”

Windows 11 Changes

With the launch of Windows 11 next month, making non-Microsoft apps default will be more difficult.

On Windows 11, users get just one chance to see and select browser alternatives before Microsoft hides them. Specifically, as on Windows 10, when you open a web link for the first time or install a new browser, the platform gives you the opportunity to select it as default.

Unless you choose “always use this app” that first time, the browser will not become the default. So far, this is the same as on Windows 10, where users can simply right-click a file, web link, app, or shortcut, choose the “open with” option, and then “always use this app”.

On Windows 11, this process is different and more confusing. Instead of a catch-all toggle like “always use this app” or “always open with this program”, Microsoft is changing how app defaults work. The company will now force users to set defaults for every link or file type. Instead of a universal toggle, you have to work through each format: HTML, PDF, SVG, HTTPS, and so on.

For example, to set Mozilla Firefox as the default browser, you will need to change the default across 11 individual file types. Of course, this is an overly complex process, and the only plausible explanation is that Microsoft is trying to make it harder for users to choose other browsers.

Tip of the day: Did you know that your data and privacy might be at risk if you run Windows 10 without encryption? A bootable USB with a live Linux distribution is often enough to gain access to all of your files.

If you want to change that, check out our detailed BitLocker guide where we show you how to turn on encryption for your system disk or any other drive you might be using in your computer.

Source Winbuzzer

Microsoft Launches Visual Studio 2022 Preview 3

Microsoft is now rolling out Visual Studio 2022 Preview 3 to Windows 10 users. With this latest preview for VS 2022, Microsoft is adding several new features. Among the additions are a dark theme update, support for multi-repo Git, and a new project designer.

Starting with the project designer, Microsoft says the tool works with .NET SDK projects. With the designer, Visual Studio 2022 users can create projects more easily. This is possible through a single column of actionable options that are more clearly defined.

While previous VS 2022 previews have supported dark theme, Microsoft says Preview 3 is adding improvements. In fact, the company describes these improvements as “big changes”. For example, a new accent color provides a viewing experience that is easier on the eyes.

Although it was rolled out in Preview 2, Microsoft is talking up the Hot Reload support for C++ apps in the Preview 3 changelog. Earlier this month, Microsoft published a blog post highlighting the benefits of Hot Reload.

The new Hot Reload can work alongside the existing XAML Hot Reload feature. Furthermore, it works in unison with debugger capabilities, such as breakpoints, that are already part of the Visual Studio 2022 preview.

Changelog

  • “One example is the improvements in the attach to process dialog. The dialog is now async, shows the command line arguments for processes, IIS information for w3wp.exe processes, and lastly the dialog has an optional tree view mode for showing parent-child process relationships. These capabilities reduce a lot of the friction in deciding which process to debug in advanced scenarios.
  • With Preview 3 there’s a brand-new project properties designer for .NET SDK projects. The new designer is easier to use and browse with a single column of options with clear descriptions. Best of all the new designer has built in search so it’s now easy to
  • Dark theme improvements: In preview 3 you’ll see big changes to the dark theme to improve the usability of Visual Studio. The new dark theme has a new accent color, which is less intense, and used more sparingly to reduce distraction and eyestrain. The new accent color now matches the latest product visual identity, which helps you quickly find the right window when navigating among multiple tools.
  • With Preview 2, Hot Reload now supports C++ apps.
  • Developing modern apps: With Visual Studio 2022, we are building tools to both support your existing applications and tools for building the latest types of applications. For example, in preview 3 we’re adding new capabilities to run tests in Linux environments and new project types for frontend development with React and Vue.js applications using either TypeScript or JavaScript.
  • Remote testing: With remote testing you can now get feedback from your cross-platform tests, and even debug them from the comfort of Visual Studio! The feature works with a range of remote environments such as Linux containers, WSL, and over SSH connections – empowering you to test modern cross platform .NET applications.”

Tip of the day: By default, the most used apps group in your start menu shows the six most frequently used apps. However, you can customize your Windows 10 Start Menu to exclude certain apps from the list or get rid of the most used apps section entirely.

Source Winbuzzer

Microsoft Azure Monitor Launches to Improve Windows Virtual Desktop

Microsoft has released a tool to make it easy for organizations to manage their Windows Virtual Desktop environments. Called Azure Monitor for Windows Virtual Desktop, the tool is generally available as of this week, Microsoft says.

The big benefit of Azure Monitor for Windows Virtual Desktop (WVD) is the ability to gain better insights into virtual desktop environments.

Windows Virtual Desktop (WVD) allows users to virtualize Windows 7, Windows 10, Office 365 ProPlus apps, and third-party apps. With WVD, customers receive remote desktop sessions by running these virtualized environments on Azure virtual machines.

While WVD provides a secure way to run Windows on the cloud, it has some complex components. The aim of Azure Monitor is to make the complexities of the service less of a problem.

“With Azure Monitor for Windows Virtual Desktop, you can find and troubleshoot problems in the deployment, view the status and health of host pools, diagnose user feedback and understand resource utilization,” Microsoft’s announcement points out.

Improvements

As Azure Monitor moves to general availability, it comes with the following improvements over the preview version:

  • Improved data collection and new guidance to help you optimize for cost
  • Updated setup experience with easier UI, expanded support for VM set-up, automated Windows Event Log setup, and more
  • Relocated Windows Virtual Desktop agent warnings and errors to the top of the Host Diagnostics page to help you prioritize issues with the highest impact
  • Accessibility enhancements
  • Workbook versioning: GA release is Version 1.0.0

Azure Monitor is not free, so it’s worth checking out Microsoft’s pricing page here. The company recommends customers start with pay-as-you-go pricing and adjust to scale once more usage information is known.

Azure Monitor only works with the latest version of Windows Virtual Desktop or newer. Interestingly, Microsoft does not specifically say which version this is.

Source Winbuzzer

Apple Ready to Join the VR Party with Headset Launch in 2022

Virtual reality (VR) and augmented reality (AR) are technologies that promise to change our way of life. While they can be transformative in entertainment, VR and AR also open doors in industry and eventually the possibilities could be endless. The biggest names in tech are already taking VR and AR seriously, but one is conspicuous by its absence… Apple.

While giants like Microsoft, Google, Samsung, Sony, and others already have VR and AR hardware/platforms, Apple does not. Still, we are told regularly that the company is serious about the technology. We are told Apple’s advancements are at least on par with its rivals, minus the hardware.

We are told one day Apple’s developments will reach market in the form of a platform and device. A new report by Bloomberg suggests that may happen. However, it seems Apple remains in no rush to jump on board the VR train. In fact, the report says the company’s first headset will not arrive until 2022.

As we have come to expect from Cupertino, the product will be high-end. It will also reportedly be aimed at a niche market with Apple not expecting big sales. That said, the device will be consumer-focused and capable of gaming, communication, and video. Furthermore, it could also support AR in some capacity.

Is this Apple’s big play, developing a headset that is both VR and AR? We are used to Apple defining categories by making devices useful to everyday users. Combining VR and AR technology could be a major step forward.

High-End Specs

However, the report points out that AR is not the focus of the headset. Instead, VR will be the leading technology. Apple still has bigger plans for AR, including a pair of glasses that will likely arrive later.

Developed under the codename N301, Apple’s headset will be made of premium materials and could rival Microsoft’s HoloLens in terms of price ($3,500). The company will use the most powerful processors it has at its disposal and a fan to keep the device cool. Elsewhere, it is constructed of fabric materials and displays that best the resolution of any VR headset currently available.

That fan has caused some design problems, the report suggests. With a fan and high-end components, the headset was too big and uncomfortable. To save space, Apple removed support for wearing the headset over glasses.

Rather than cutting out the large percentage of the population who wear glasses, the company has included a way to slot prescription lenses into the headset.

How this will work remains to be seen and it is worth noting we are very much in the rumor phase with this story.

Tip of the day:

Did you know you can use Windows 10’s built-in antivirus, Microsoft Defender, with scheduled scans? In our tutorial we give you step-by-step instructions on how to set up your personal scan schedule to keep your system free of malware.

Source Winbuzzer

Microsoft Sustainability Calculator Aims to Help Cloud Customers Manage their Carbon Footprint

Microsoft has announced a new tool that will help cloud customers better understand their carbon emissions. Called the Microsoft Sustainability Calculator, the product has landed in private preview ahead of a future launch.

According to Redmond, the aim of the Microsoft Sustainability Calculator is to give customers more oversight of their carbon output and, more specifically, to make their environmental impact more transparent.

“It’s challenging to make and meet meaningful carbon reduction goals without the ability to measure carbon emissions,” Microsoft says.

By leveraging AI technology, the calculator provides accurate accounting of carbon usage. It highlights the impact of Microsoft cloud services across an organization’s footprint. Armed with this information, companies can make informed decisions about their environmental impact.

For example, the Microsoft Sustainability Calculator measures the impact of moving regular applications to the cloud and how this can reduce the carbon output of a company.

Carbon Negative

It looks like the tool goes hand-in-hand with Microsoft’s own push to be carbon negative by 2030, a pledge Redmond made earlier this year. The decision followed a 2017 commitment to cut 75% of its carbon emissions by the same date and builds on a 2019 revision targeting 70% renewable energy by 2023.

“While the world will need to reach net zero, those of us who can afford to move faster and go further should do so,” Microsoft President Brad Smith said of the new commitment. “That’s why today we are announcing an ambitious goal and a new plan to reduce and ultimately remove Microsoft’s carbon footprint.”

Source Winbuzzer

Independent Benchmarks Suggest AMD’s Ryzen H-Series Could Beat Intel Comet Lake

AMD made bold claims at CES 2020 last month, including that its Ryzen H-series of laptop chips would be on par with Intel’s high-end parts, even the i7-9700K. However, its main comparison was against the competition’s current mobile offering, the i7-1065G7. Intel may not be advancing at the same pace as AMD, but it has new chips due sometime this year.

Now, Twitter’s _rogame has spotted 3DMark results that pit the Ryzen 7 4800H and an RX 5600M against Intel’s i7-10750H Comet Lake with an RTX 2060 Max-Q. The results are quite interesting. The 4800H edges ahead by 10-11% in the individual graphics and physics scores, but Intel wins out by 7.8% in the combined test.

The graphics score of a 3DMark test measures just the GPU and the physics score solely the CPU. As you’d expect, the combined test measures both, which indicates that Intel has found better meshing between the hardware at this early stage, perhaps due to power management.

For reference, the 4800H is an 8-core/16-thread, 45 W part with a base clock of 2.9 GHz. The i7-10750H has 6 cores and 12 threads with a base clock of 2.6 GHz. However, the Intel chip is able to achieve a higher boost clock than AMD’s chip, at 4.69 GHz vs 2.97 GHz in these results. 3DMark tends to reward higher clock speeds over core counts, being primarily based on gaming performance.

As a result, we may see Ryzen giving even better results in scenarios that can better utilize its cores. That could include tasks like 3D rendering, editing, and general multitasking. Even so, you can’t fully trust benchmarks at this stage as they may be based on prototype hardware or have other inaccuracies.

It’ll be interesting to see how they stack up once reviewers get their hands on them, as this could be the first truly convincing laptop lineup AMD has managed to muster. The question for Microsoft fans, of course, is which chips the company’s Surface lineup will go with, the Surface Laptop 3 having somewhat unsuccessfully broken the mold with a Ryzen offering.

Source Winbuzzer

Announcing new AMD EPYC™-based Azure Virtual Machines

Microsoft is focused on giving our customers industry-leading performance for all of their workloads. After being the first global cloud provider to announce the deployment of AMD EPYC™-based Azure Virtual Machines in 2017, we have continued working together to bring the latest innovation to enterprises.

Today, we are announcing our second-generation HB-series Azure Virtual Machines, HBv2, featuring the latest AMD EPYC 7002 processor. Customers will be able to increase HPC performance and scalability to run substantially larger workloads on Azure. We will also be bringing the AMD EPYC 7002 processors and Radeon Instinct GPUs to our family of cloud-based virtual desktops. Finally, our new Dav3 and Eav3-series Azure Virtual Machines, in preview today, give customers more choice to meet a broad range of requirements for general-purpose workloads using the new AMD EPYC™ 7452 processor.

Our growing Azure HPC offerings

Customers are choosing our Azure HPC offerings (HB-series), featuring first-generation AMD EPYC “Naples” processors, for their performance and scalability. We have seen a 33 percent memory bandwidth advantage with EPYC, and that is a key factor for many of our customers’ HPC workloads. For instance, fluid dynamics is one workload where this advantage pays off. Azure has a growing number of customers for whom this is a core part of their R&D and even production activities. On ANSYS Fluent, a widely used fluid dynamics application, we have measured EPYC-powered HB instances delivering a 54x performance improvement by scaling across nearly 6,000 processor cores. Moreover, this is 24 percent faster than a leading bare-metal deployment with an identical InfiniBand network. And earlier this year, Azure became the first cloud to scale a tightly coupled HPC application to 10,000 cores, 10x higher than what had previously been possible on any other cloud provider. Azure customers will be among the first to take advantage of this capability to tackle the hardest challenges and innovate with purpose.

New HPC, general-purpose, and memory-optimized Azure Virtual Machines

Azure is continuing to expand its HPC capabilities, thanks in part to our collaboration with AMD. In preliminary benchmarking, HBv2 VMs featuring 120 CPU cores from the second-generation EPYC processor are showing performance gains of more than 100 percent on HPC workloads such as fluid dynamics and automotive crash-test analysis. HBv2 scalability limits are also increasing with the cloud’s first deployment of 200 Gigabit InfiniBand, thanks to the second-generation EPYC processor’s PCIe 4.0 capability. HBv2 virtual machines (VMs) will support up to 36,000 cores for MPI workloads in a single virtual machine scale set, and up to 80,000 cores for our largest customers.

We will also be bringing the AMD EPYC 7002 processor to our family of cloud-based remote desktops, pairing it with the Radeon MI25 GPU for customers running Windows-based environments. The new offering provides exceptional GPU resourcing flexibility, giving customers more choice than ever to size virtual machines from 1/8th of a single GPU up to a whole GPU.

Finally, we are also announcing new Azure Virtual Machines as part of the Dv3 and Ev3-series, optimized for general-purpose and memory-intensive workloads. These new VM sizes feature AMD’s EPYC™ 7452 processor. The new general-purpose Da_v3 and Das_v3 Azure Virtual Machines provide up to 64 vCPUs, 256 GiB of RAM, and 1,600 GiB of SSD-based temporary storage. In addition, the new memory-optimized Ea_v3 and Eas_v3 Azure Virtual Machines provide up to 64 vCPUs, 432 GiB of RAM, and 1,600 GiB of SSD-based temporary storage. Both VM series support Premium SSD disk storage. The new VMs are currently in preview in the East US Azure region, with availability coming soon to other regions.

Da_v3 and Das_v3 virtual machines can be used for a wide range of general-purpose applications. Example use cases include most enterprise-grade applications, relational databases, in-memory caching, and analytics. Applications that demand faster CPUs, better local disk performance, or more memory can also benefit from these new VMs. In addition, the Ea_v3 and Eas_v3 VM series are optimized for other large in-memory, business-critical workloads.
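
For readers who want to try the preview, creating a VM of one of the new series from Azure PowerShell looks roughly like the sketch below. This is only a hedged illustration: the resource group, region, image alias and especially the size string "Standard_D16a_v3" are placeholder assumptions, since the exact preview size names depend on what your subscription has access to.

# Minimal sketch (assumes the Az.Compute module and a signed-in session via Connect-AzAccount).
# List the EPYC-based sizes actually offered in the target region first:
Get-AzVMSize -Location "eastus" | Where-Object { $_.Name -like "*a_v3*" }

# Create a general-purpose Da_v3-series VM (the size string below is a placeholder):
New-AzVM -ResourceGroupName "MyRG" `
    -Name "epyc-da-vm" `
    -Location "eastus" `
    -Image "Win2019Datacenter" `
    -Size "Standard_D16a_v3"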

Details shortly…Azure.Microsoft

How to minimize Brute Force Attacks by hackers in Azure VM’s

Estimated Reading Time: 6 minutes

In one of my posts in June, I mentioned the Microsoft data center public IP address ranges and provided the URL to download them. Please note that these IP ranges are also well known to hackers and are very popular in the hacker community. Hackers these days generally use brute force mechanisms to attack this IP range. On average, hackers make around five login attempts per minute against these IP address ranges on RDP and SSH ports, and this is going to increase in the future as more and more valuable data and information moves to Azure every day.

 

There are two ways to minimize or eliminate this type of attack.

The first option is not to use public IP addresses for the VMs and to set up all the VMs on the local area network with private IP addresses only. This is a common scenario that most large enterprises follow: they set up a site-to-site VPN or ExpressRoute between their on-premises data center and Azure, and a DNS server on-premises or in Azure that assigns a private IP address to each VM. When a VM is configured this way, the public IP address field for the VM is blank.
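
As a rough illustration of this first option, the sketch below creates a network interface with no public IP address at all, so any VM attached to it is reachable only over the private network (VPN/ExpressRoute). The module, resource names and region are assumptions for illustration, not values from this post.

# Minimal sketch (assumes the Az.Network module, plus an existing VNet, subnet and resource group).
$vnet   = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "MyRG"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "PrivateSubnet" -VirtualNetwork $vnet

# No -PublicIpAddressId is supplied, so the NIC (and the VM attached to it)
# receives only a private IP address from the subnet's range.
$nic = New-AzNetworkInterface -Name "MyPrivateVM-nic" `
    -ResourceGroupName "MyRG" `
    -Location "eastus" `
    -SubnetId $subnet.Id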

In this scenario, the best practice is to use a jump box, which may be a terminal server in your local area network, to log in to these VMs. Once logged in, you can also ping the VM if ICMP is allowed on the Azure VMs.

The above approach is very much acceptable for large or medium-sized organizations that also have multi-layer firewall devices to protect their hybrid environment. However, sometimes we require Azure VMs that need a public IP address. In that scenario, you need to follow the second option, which will reduce the risk.

The second option is to reduce exposure to a brute force attack by limiting the amount of time that a port is open. The question is how to achieve this.

In my example, I have another VM which does have a public IP address and is part of a public subnet.

The best way to achieve this is to enable JIT (just-in-time) access for the Azure virtual machines. While I say this, I should explain why an NSG alone, which is also capable of locking down these ports, is not the right fit here. The main reason is that JIT access is a combination of Azure RBAC (Role-Based Access Control) and NSGs.

What is just-in-time access for an Azure VM?

Just-in-time VM access enables you to lock down your VMs at the network level by blocking inbound traffic to specific ports. It lets you control access and reduce the attack surface of your VMs by allowing access only when there is a specific need.

Similar to an NSG, here too we need to specify the ports on the VM where inbound traffic should be locked down. Here is what actually happens with JIT access:

When a user requests access to a VM, Security Center checks via RBAC (Role-Based Access Control) whether the user has write access to that VM. If the user has write permissions, the request is approved and Security Center automatically configures the Network Security Groups (NSGs) to allow inbound traffic to the management ports for the amount of time you specified. After the time has expired, Security Center restores the NSGs to their previous states.

JIT is a very good option since the Azure network administrator doesn’t need to repeatedly change the NSG settings; however, it will incur additional charges on your Azure subscription as it is part of the Security Center Standard pricing tier. For more information on the Security Center tiers, please click this URL.

Another thing to note is that if you upgrade the security tier to Standard, it will apply to all the eligible resources in a particular resource group. It will charge you USD 15 per node per month.

It’s something you should keep in mind so that you are not surprised after 90 days when you receive your Azure bill and it includes these charges.

Steps to enable just-in-time access for this VM

  • Go to Azure Security Center.
  • Go down to the JIT tab.
  • Go to the Recommended tab in the JIT window.
  • Select the VM where you want to enable JIT.
  • Click on “Enable JIT on 1 VM” and review the default configuration.
  • Click on Save, and JIT is now activated for this VM.
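
As an alternative to the portal steps above, the same policy can be created with the Az.Security PowerShell module. This is only a sketch of the documented pattern, with a placeholder subscription ID, resource group, VM name and region; adjust the ports and maximum request duration to your own needs.

# Minimal sketch (assumes the Az.Security module and a signed-in session).
$vmId = "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyPublicVM"

# Ports Security Center should keep closed, and how long access may be requested for
$jitPolicy = @{
    id    = $vmId
    ports = @(
        @{ number = 3389; protocol = "*"; allowedSourceAddressPrefix = @("*"); maxRequestAccessDuration = "PT3H" },
        @{ number = 22;   protocol = "*"; allowedSourceAddressPrefix = @("*"); maxRequestAccessDuration = "PT3H" }
    )
}

# Create (or update) the JIT policy for this VM
Set-AzJitNetworkAccessPolicy -Kind "Basic" `
    -Location "eastus" `
    -Name "default" `
    -ResourceGroupName "MyRG" `
    -VirtualMachine @($jitPolicy)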

Now you can click on the Request Access button.

Here you will find the list of default ports which Security Center recommends for JIT. I have selected port number 3389 for RDP.

The My IP option will automatically take the public IP address of your computer as the source IP and allow RDP access to the destination VM, which is the VM where JIT has been configured. Once it’s done, you can check the Last User field, which shows the username that has access to this VM. For example, my account, which already has write access to this VM, has been granted RDP permission to it for three hours.
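
The access request itself can also be made from PowerShell instead of the portal. Again, this is a hedged sketch based on the documented Start-AzJitNetworkAccessPolicy pattern; the subscription ID, resource group, VM name, region and source IP are placeholders.

# Minimal sketch: request port 3389 to be opened for three hours from a single source IP.
$vmId = "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyPublicVM"

$accessRequest = @{
    id    = $vmId
    ports = @(
        @{
            number                     = 3389
            endTimeUtc                 = (Get-Date).ToUniversalTime().AddHours(3).ToString("o")
            allowedSourceAddressPrefix = @("203.0.113.25")   # your client's public IP ("My IP" in the portal)
        }
    )
}

Start-AzJitNetworkAccessPolicy `
    -ResourceId "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.Security/locations/eastus/jitNetworkAccessPolicies/default" `
    -VirtualMachine @($accessRequest)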

I tried to RDP to this server and was able to log in without any problem.

After three hours, when I tried again, I was unable to RDP and received an error.

You can also edit the JIT policy by clicking the edit option in the Configured tab.

You can also audit JIT activity by going to the Activity Log settings.

The activity log provides a filtered view of previous operations for that VM along with time, date, and subscription. You can download the log in CSV format.
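
If you want to pull the same audit trail from PowerShell, a sketch along the following lines should work; the cmdlet comes from the Az.Monitor module and the resource ID and output property names are assumptions.

# Minimal sketch: export the last 7 days of activity-log entries for the VM to CSV.
$vmId = "/subscriptions/<subscription-id>/resourceGroups/MyRG/providers/Microsoft.Compute/virtualMachines/MyPublicVM"

Get-AzActivityLog -ResourceId $vmId -StartTime (Get-Date).AddDays(-7) |
    Select-Object EventTimestamp, Caller, OperationName, Status |
    Export-Csv -Path ".\MyPublicVM-activity.csv" -NoTypeInformation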

If you want to remove JIT, you can do so by clicking the Remove button.

Conclusion

Private IP addresses help you restrict Azure VM access to internal users only, and just-in-time VM access in Security Center helps you control access to your Azure virtual machines when the VMs have public IP addresses, thus minimizing the risk associated with brute force attacks. I will bring more posts on Azure VM security in the future.

What you should know about Azure’s Low priority VM’s and possible use cases

Estimated Reading Time: 4 minutes

In a recent design discussion with the development team, we were talking about deploying Azure low-priority VMs in their next project to save cost. This is quite natural: since its launch in May, the offering has drawn much attention from the press, and many of you, like other enterprises, are trying to take advantage of Azure’s low-cost, low-priority VMs. However, Azure low-priority VMs need a good business case that can be correlated with a good use case. Without both, it makes little sense to try out the power of Azure low-priority VMs.

What are Azure Low-Priority VMs?

Similar to AWS Spot Instances, Microsoft came out last May with Azure low-priority VMs. Low-priority VMs are VMs that are available at a significantly discounted price. They are provided from the unused set of VMs in Azure; in other words, they are allocated from Azure’s excess compute capacity to the customers who request them.

Pricing for Azure Low-Priority VMs

Low-priority Linux VMs come with an 80% discount, while Windows VMs come with a 60% discount. The discount is calculated against their on-demand hourly cost and is available across most of the VM instance types in Azure.

A sample of the discounted prices for the general-purpose Standard Av2 series Windows instances can be found on the pricing page.

For more details on pricing of Azure low-priority VMs, please check this URL.

The features of these low-priority VMs are as follows:

  • Up to 80% discount, fixed price.
  • Uses surplus capacity, so availability can vary at any time.
  • VMs can be preempted at any time.
  • Available in all regions.
  • Do more for the same price.

Up to this point everything is fine; however, please note point number three, where I mentioned that the VMs can be preempted at any time. Always remember that nodes can go up and down with low-priority VMs, so not all workloads are suitable for them. The question, then, is what type of workloads are suitable for Azure low-priority VMs.

What types of workloads are suitable for Azure Low-Priority VMs?

  • The workload which is tolerant of interruption
  • The workload which is tolerant of reduced capacity

Suitable workloads are as follows:

  • Batch processing (asynchronous distributed jobs and tasks running on many VMs)
  • Stateless web UIs
  • Containerized applications
  • Map/Reduce-type applications
  • Jobs with flexible completion times
  • Short-duration tasks

Low-priority VMs are currently available only for workloads running in Azure Batch; however, this will be extended to other workloads in the future.

As you can see, one of the most common use cases is Batch tasks, and when we talk about Batch tasks, one common question that comes to mind is: what happens when a job is interrupted due to VM preemption?

In case of an interruption, the tasks will be automatically requeued, rescheduled, and re-executed at a later stage when the VMs are available again.

Fig: Lifecycle of a low-priority VM Batch job in case of preemption

What are the options for creating low-priority VM pools?

There are multiple options available for Azure low-priority VM pool creation, depending on what the target is.

Option 1: Lowering the cost

In this scenario, all the VMs in the pool are configured as low-priority VMs. No dedicated VMs are used.

Option 2: Lowering the cost with a guaranteed baseline

In this scenario, the pool is configured with a fixed number of low-priority VMs and a fixed number of dedicated VMs (with low-priority VMs making up roughly 60 to 80% of the pool).

Option 3: Lowering the cost while maintaining capacity

In this scenario, the pool starts with all low-priority VMs, and the dedicated target is set to match whatever gets preempted. Batch falls back to dedicated VMs to maintain capacity, but scales back to low-priority VMs when they become available again.
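
To make the options concrete, here is a rough sketch of creating an option 2-style pool with the Az.Batch PowerShell module: a small dedicated baseline plus a larger low-priority portion. The account name, pool ID, VM size and image reference are assumptions for illustration only.

# Minimal sketch (assumes the Az.Batch module and an existing Batch account).
$context = Get-AzBatchAccountKey -AccountName "mybatchaccount"

# VM configuration for the pool nodes (image/SKU values are placeholders)
$imageRef = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSImageReference" `
    -ArgumentList @("WindowsServer", "MicrosoftWindowsServer", "2016-Datacenter", "latest")
$vmConfig = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSVirtualMachineConfiguration" `
    -ArgumentList @($imageRef, "batch.node.windows amd64")

# 2 dedicated nodes as a guaranteed baseline, 8 low-priority nodes for cheap burst capacity
New-AzBatchPool -Id "lowpri-pool" `
    -VirtualMachineSize "Standard_A2_v2" `
    -VirtualMachineConfiguration $vmConfig `
    -TargetDedicatedComputeNodes 2 `
    -TargetLowPriorityComputeNodes 8 `
    -BatchContext $context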

Steps to check the number of low-priority VMs currently available

First, navigate to Azure Batch accounts.

Then click to create a Batch account in your subscription.

Once the Batch account has been created, you should be able to find it in the portal.

You can view the metrics for your low-priority VMs in the Metrics tab.

You can click on Quotas to see the available quota limits for this Batch account.
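
The same numbers can be read from PowerShell. This is a sketch only; the exact property names on the Batch account object (DedicatedCoreQuota, LowPriorityCoreQuota, PoolQuota) are assumptions that may differ between module versions.

# Minimal sketch: inspect the quotas on the Batch account (property names assumed).
Get-AzBatchAccount -AccountName "mybatchaccount" |
    Select-Object AccountName, Location, DedicatedCoreQuota, LowPriorityCoreQuota, PoolQuota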

Conclusion:

At present, low-priority virtual machines (VMs) can be used to reduce the cost of Batch workloads. Low-priority VMs make new types of Batch workloads possible by providing a large amount of compute power at an economical price. If your organization runs lots of batch workloads every day, low priority will certainly be one of your best choices in Azure.

File Server migration strategy in Azure with Zero down time for end users without any ACL loss

Estimated Reading Time: 5 minutes

Are you planning to move your on-premises file servers to Azure? If yes, this post can help you better plan the steps required for a seamless move of the file shares to Azure. Before you plan the actual move, let’s look at the most important factors which should be considered for a file server migration:

Does the new file share support security, authentication, and ACLs (Access Control Lists)?

As per our testing and multiple Microsoft articles, Azure File Share currently doesn’t meet all of the above requirements. For example, Active Directory-based authentication and ACL support are not present in Azure File Share, which is one of the most important requirements.

How are the end users accessing the data?

These days, most enterprises running Windows-based file servers use DFSR as the file share technology. In Azure, if we mount a file share using SMB, we don’t have folder-level control over permissions; instead, we can use shared access signatures (SAS) to generate tokens that have specific file permissions and are valid for a specified time interval. But that is something completely new to the users and a complete change from the way you have implemented file shares in your current on-premises environment (a rough sketch of generating such a token follows below).
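
For completeness, this is roughly how such a share-level SAS token could be produced with the Az.Storage module; the storage account, key and share name are placeholders, and the permissions and expiry shown are just one possible choice.

# Minimal sketch: a read/list SAS token for an Azure file share, valid for one day.
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<account-key>"

$sas = New-AzStorageShareSASToken -ShareName "myfileshare" `
    -Permission "rl" `
    -ExpiryTime (Get-Date).AddDays(1) `
    -Context $ctx

# Append $sas to the share URL to hand out time-limited access
Write-Output $sas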

How many users/clients can access the file share simultaneously?

The current quota in Azure File Share is 2,000 concurrent clients.

What is the maximum size of the File Share?

Currently, an Azure file share supports up to 5 TiB of data storage. In the future it may support up to 100 TiB.

Sample use case:

Let’s consider a very common use case for this article: a large enterprise with multiple locations around the globe and more than 100 file servers currently in use. None of the file servers is very big, but the total data size is around 40 TB. In this use case we consolidated the data onto 12 Azure VMs in different Azure regions, instead of 100 servers on-premises. We achieved this with the help of DFSR.

Steps we followed:

To achieve this, we followed the steps below.

Fig: Migration steps to move on-premises Windows-based file servers to Azure IaaS

Why DFSR is still the best option: it copies files with the same set of permissions (same hash) and replicates files with the latest changes.

DFSR components: DFS Namespace – this is used to publish the data to end users. People can access the virtual namespace and get to the files and folders on the DFSR server.

DFS Replication: this is used to replicate the data between the servers. We can also control the schedule and bandwidth of DFSR replication. We can mark servers as read-only too; this forces a read-only attribute on the server so no one can make changes on that specific server. DFSR works with replication groups: in a replication group we define the folders to be replicated between two or more servers. This can be a full mesh, or we can control it like hub and spoke via connections. DFSR creates some hidden folders under the replicated folders and stores internal data there before processing; we should not add or remove content manually in these folders.
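
For reference, a replication group like the one used in this migration can be scripted with the DFSR PowerShell module. The group, folder, server and path names below are placeholders that mirror the examples later in this post; treat it as a sketch rather than the exact commands we ran.

# Minimal sketch (assumes the DFSR module from the DFS Management tools on both servers).
New-DfsReplicationGroup -GroupName "AzureFS-RG" |
    New-DfsReplicatedFolder -FolderName "ABNU-FS-A" |
    Add-DfsrMember -ComputerName "WAI-FS01", "AZ-FS01"

# Connection between the on-premises server and the Azure VM
Add-DfsrConnection -GroupName "AzureFS-RG" `
    -SourceComputerName "WAI-FS01" -DestinationComputerName "AZ-FS01"

# Point each member at its local content path; the on-premises server is authoritative
Set-DfsrMembership -GroupName "AzureFS-RG" -FolderName "ABNU-FS-A" `
    -ComputerName "WAI-FS01" -ContentPath "J:\DFSR\ABNU-FS-A" -PrimaryMember $true -Force
Set-DfsrMembership -GroupName "AzureFS-RG" -FolderName "ABNU-FS-A" `
    -ComputerName "AZ-FS01" -ContentPath "E:\ABNU-FS-A" -Force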

Comparison test between RoboCopy and AzCopy

The question came to mind whether we should use Robocopy or AzCopy to stage the data. To test the speed, we ran the following comparison.

Here is the test result:

Tool       Size (GB)   Time (Min.)   Time (Sec.)   ACL (Permissions)
RoboCopy   1           17            19            Intact
AzCopy     1           2             8             Lost

It’s very clear that you can’t use AzCopy, since the ACLs (permissions) are lost. (That is probably the reason why DoubleTake uses Robocopy internally in their application.)

We used Robocopy to copy the data from one server to the other to reduce the time needed for DFSR replication. You can read this short article to understand how much faster it is to pre-seed the data with Robocopy rather than letting DFSR replicate all of it.

The example command we used to pre-populate the data is:

robocopy.exe “\\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A” “E:\ABNU-FS-A” /e /b /copyall /r:6 /w:5 /MT:64 /xd DfsrPrivate /tee /log:E:\RobocopyLogs\servername.log

The above command copies the folder ABNU-FS-A to the local E: drive on the server from which we are running the command.

/MT:64 sets the thread count (the default is 8); with 16 threads we can copy 200 MB in a few seconds. However, as we faced some issues with the network, we now usually run 16 threads to make sure Robocopy will not hang.

Once we have copied the data with Robocopy, we check the file hashes. An example is below:

To check the file hashes on the remote source server:

Get-DfsrFileHash \\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A\* – this checks the file hashes for everything under ABNU-FS-A.

Get-DfsrFileHash E:\ABNU-FS-A\*

Note: we need the DFSR PowerShell module (installed with the DFS Management tools) to run the above command. Once this is done, we add the E: drive folder to the replication group and let it sync with DFSR. As we have already copied the data and the file hashes match, it will take just a few hours for GBs of data. That’s all.
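
To spot-check that the pre-seeded copy really matches, the two hash sets can also be compared in PowerShell. This is only a sketch: the FileHash property name is assumed from the cmdlet’s default output, and the paths follow the examples above.

# Minimal sketch: compare source and destination DFSR file hashes after the Robocopy pre-seed.
$sourceHashes = Get-DfsrFileHash "\\WAI-FS01.whyazure.in\j$\DFSR\ABNU-FS-A\*"
$destHashes   = Get-DfsrFileHash "E:\ABNU-FS-A\*"

# Any output from Compare-Object indicates hashes present on one side only,
# i.e. files whose data or metadata do not match yet.
Compare-Object -ReferenceObject $sourceHashes.FileHash -DifferenceObject $destHashes.FileHash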

Now, people may wonder why we have not used the new Azure File Sync, which is the buzzword these days for file shares.

Although we have not used Azure File Sync, let’s discuss a few things about it.

What is Azure File Sync?

With Azure File Sync, shares can be replicated to Windows Servers on-premises or in Azure. The users would access the file share through the Windows Server, such as through an SMB or NFS share. This is useful for scenarios in which data will be accessed and modified far away from an Azure datacenter, such as in a branch office scenario. Data may be replicated between multiple Windows Server endpoints, such as between multiple branch offices.

Why is this not the right fit for the work we are doing?

The main use case for Azure File Sync is when you have multiple branch offices with very slow network speeds. The best scenario is on-premises with a slow network, where Windows, Linux, and macOS clients can mount a local on-premises Windows file share as a fast cache of the Azure file share. Since we have very good bandwidth to Azure from all the branches with site-to-site connectivity, Azure File Sync doesn’t fit here.

The data transfer methods available for pre-staging the files are as follows:

  • Azure Export/Import
  • RoboCopy
  • AzCopy
  • Azure FileSync

Conclusion:

There are multiple options to transfer the data from on-premises to Azure for file server staging, but if you want a very smooth migration where end users will not see any downtime, this is the best approach. Note that ACLs and file hashes are only preserved by Robocopy and Azure File Sync. Azure File Share could have been used without the need to manage hardware or an OS, instead of building the Azure IaaS VMs as we do here, but that is not a viable option because we need to preserve the ACLs, and unfortunately that is still not supported by Azure File Share at the moment.
