Australia Seeks Switch to Microsoft Azure-Based Cloud Records

Australia has announced that government organizations will soon be able to access records in the cloud. The push is part of an initiative by the Digital Transformation Agency (DTA) as it continues to digitize the country’s government.

Specifically, the Digital Records Transformation Initiative (DRTI) wants to modernize the public sector and improve productivity. Automation and cloud integration will drive the evolution. A focus will be placed on allowing government agencies to access data inter-organizationally.

Now the DTA is searching for a software-as-a-service platform to handle government digital practices and to solve current problems in records management.

“The customer wants to create an information management environment that is agile and suited to current and emerging records management needs,” the DTA wrote.

“The customer is seeking an innovative records management solution that connects to multiple business systems and repositories to automatically capture, classify, and securely dispose of records based on their content with minimal user input in accordance with the requirements of applicable records management legislation, standards, and authorities.”

In its statement, the DTA confirmed Microsoft Azure is the corporate environment underpinning the transformation. It is also leveraging Microsoft 365. Any successful solution would need to be based on this current environment.

Australia’s federal government has embarked on ambitious plans to overhaul the country’s digital capabilities.

“Government has made significant progress on our digital journey, however, there is a long way to go. If we are to deliver the high calibre of digital services that Australians deserve, we need to act quickly and strategically to lift digital capability,” the APSC, alongside the DTA, wrote in APS Digital Professional Stream Strategy.

Data Limitations in Australia?

Last year, Microsoft President and Chief Legal Officer Brad Smith claimed it was becoming increasingly difficult for companies to store data in Australia, saying the country’s encryption regulations had left customers “no longer comfortable” doing so.

In 2018, the Australian government passed new data encryption laws. A first of their kind in the world, the laws were created in an effort to stop crime and terrorism. Critics have said they could instead weaken security and compromise user privacy.

The law forces companies to hand over data, even if it is encrypted. This means companies are compelled to share information they don’t always have access to. Smith said this could mean companies stop storing data in Australia.

“But when I travel to other countries I hear companies and governments say ‘we are no longer comfortable putting our data in Australia.’ So, they are asking us to build more data centres in other countries, and we’ll have to sort through those issues.”

Source Winbuzzer

GitHub Mobile Version 1.2 Arrives on iOS and Android

Last year, Microsoft’s GitHub open-source code repository announced it was coming to mobile. In March, the company’s new GitHub mobile apps landed on Android and iOS. Since launching, the apps have proven popular.

Now GitHub mobile is receiving an update that bumps the service up to version 1.2 and adds a few interesting changes. One of the headline additions is an improved pull request review experience, including support for marking files as viewed, collapsing files, and handling deleted files.

Furthermore, image, PDF, and markdown files will now render when selected. Elsewhere, GitHub mobile now has smoother performance when comments are being typed. Badges for organizations on profiles will now link directly back to the connected organization.

Below is the full changelog:

  • Improved pull request review experience, with support for marking files as viewed, collapsing files, deleted files, and more
  • Markdown, image, and PDF files now render if you click on them while browsing code
  • Typing new comments is now smoother than ever, with no jumping or flickering
  • Labels, user statuses, and commit messages that used emoji shortcodes now properly render emojis
  • Organization badges on user profiles link to the mentioned organization
  • You can view multiple author avatars for commits on pull request timelines
  • New fork badge on repository profile that links to the parent repository
  • New “Metaphorical Technology” custom app icon!
  • New support for iPad pointer effects
  • Fixed an iPad bug where keyboard dismisses while typing a review
  • Fixed voice-over bugs in the inbox filter view

GitHub Mobile

With the mobile apps, users of the Microsoft-owned code repository service can manage their projects on mobile devices. Furthermore, developers can provide feedback, respond to comments, organize tasks, and review pull requests.

It is also possible to view code, but that is as far as the app goes in terms of code management. You won’t be able to edit code in the app, at least not yet.

You can check out the GitHub mobile app from Google Play or the App Store.

Source Winbuzzer

How to move your process time from Office 365 to Azure Batch – Part 2

Many of the services offered by Azure can be used to extend the capabilities and improve the efficiency of Office 365. This is especially the case for SharePoint, both on-premises and in the cloud. One of the services offered by Azure is “Azure Batch”, a Platform as a Service (PaaS) solution that provides all the physical infrastructure plus the core software to run programs in batches, keeping heavy processing loads off production servers.

In Part One of this article, we explored how Azure Batch works, how it works with SharePoint, and how to set it up in your environment. In Part Two, we’ll be looking at the configuration of Azure Batch, its execution, and the results the solution creates. The complete source code for the application can be found in the GitHub repository.

The Configuration of Azure Batch

Step 8 – The MainAsync routine is the principal entry point to create the Azure infrastructure. Here you need to start the batch process, gather the results, and send them back to SharePoint. This element of the task is asynchronous and takes care of the creation and decommissioning of the relevant Azure resources in a proactive way.

Figure 6. The principal part of the MainAsync method
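
As a rough guide, the principal part of MainAsync might look something like the following sketch. The helper names follow the routines described in this article; the methods live in the console application’s Program class, and constants such as StorageConnectionString, BatchAccountUrl, BatchAccountName, and BatchAccountKey are the ones set at the beginning of the code (see Step 19):

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

private static async Task MainAsync()
{
    // Steps 9-11: prepare Azure Storage, upload the XML data files and the
    // CreatePdf.exe executable, and compute the output container's SAS URL
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(StorageConnectionString);
    CloudBlobClient blobClient = ConfigureAzureStorage(storageAccount);
    Tuple<List<ResourceFile>, List<ResourceFile>> files = UploadFilesToAzureStorage(blobClient);
    string outputContainerSasUrl = GetContainerSasUrl(blobClient, "output");

    // Steps 12-17: open the Batch client and run the batch operations
    // (the operations themselves are shown in Figure 7)
    var credentials = new BatchSharedKeyCredentials(BatchAccountUrl, BatchAccountName, BatchAccountKey);
    using (BatchClient batchClient = BatchClient.Open(credentials))
    {
        // ... create the Pool, the Job and the Tasks, monitor them,
        //     gather the results, and clean up ...
    }
}
```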

Note: the batch operations (inside the “batchClient” object creation) are defined as:

Figure 7. Definition of the Batch operations
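
In outline, and assuming the method signatures sketched in the following steps, the operations inside that using block might read:

```csharp
// Inside the batchClient using block; PoolId and JobId are constants
// defined at the beginning of the code
await CreatePoolAsync(batchClient, PoolId, files.Item2);                      // Step 12: the VM Pool
await CreateJobAsync(batchClient, JobId, PoolId);                             // Step 13: the Job
await AddTasksAsync(batchClient, JobId, files.Item1, outputContainerSasUrl);  // Step 14: the Tasks
await MonitorTasks(batchClient, JobId, TimeSpan.FromMinutes(30));             // Step 15: monitoring

await DownloadBlobsFromContainerAsync(blobClient, "output");                  // Step 16: results to SharePoint
await CleanUpResources(batchClient, blobClient);                              // Step 17: delete the resources
```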

Step 9 – The storageAccount variable contains a reference to the Azure Storage account, created from its connection string. The ConfigureAzureStorage routine creates all the necessary containers and blobs in the Azure Storage and returns a variable blobClient of type CloudBlobClient.

Here, you’ll create three blobs: one to contain the XML files from SharePoint, one for the executables that will process the XML files, and one for the results of the processing operation.

Figure 8. The Azure Storage Blob Containers are created dynamically
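
A minimal sketch of what ConfigureAzureStorage might contain (the container names are illustrative placeholders):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Blob;

// Creates the three blob containers used by the process and returns the client
private static CloudBlobClient ConfigureAzureStorage(CloudStorageAccount storageAccount)
{
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();

    // One container for the XML data files from SharePoint, one for the
    // CreatePdf.exe executable and its dependencies, one for the results
    foreach (string containerName in new[] { "datafiles", "application", "output" })
    {
        CloudBlobContainer container = blobClient.GetContainerReference(containerName);
        container.CreateIfNotExistsAsync().Wait();
    }

    return blobClient;
}
```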

Step 10 – The “UploadFilesToAzureStorage” routine carries out two different tasks:

  • It reads the XML files for each item in the SharePoint List and uploads them to the data file container
  • It uploads the files of the executable CreatePdf.exe that will process the data (the code of this executable is explained at the end of the article). This routine returns a DotNet Tuple that contains two generic lists with objects of type ResourceFile containing references to each of the files uploaded to Storage

The first part of the routine loops through all the elements of the SharePoint List using the SharePoint Client Side Object Model. It then converts the attached XML file of each element to a stream, creates a generic list of Tuples containing each element’s name and XML stream, and uploads the files to the Azure storage container. The second part does the same, but for the processor’s executable files.
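
A condensed sketch of that first part, shown here as a hypothetical helper (the list title is a placeholder, and ResourceFile.FromUrl assumes a recent version of the Batch SDK; older versions use the ResourceFile constructor instead):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Azure.Batch;
using Microsoft.SharePoint.Client;
using Microsoft.WindowsAzure.Storage.Blob;

// Reads the XML attachment of every list item and uploads it to the
// data container, returning ResourceFile references for the Batch tasks
private static List<ResourceFile> UploadDataFiles(ClientContext context, CloudBlobContainer dataContainer)
{
    var resourceFiles = new List<ResourceFile>();

    List list = context.Web.Lists.GetByTitle("InvoiceData"); // placeholder list title
    ListItemCollection items = list.GetItems(CamlQuery.CreateAllItemsQuery());
    context.Load(items, collection => collection.Include(item => item.AttachmentFiles));
    context.ExecuteQuery();

    foreach (ListItem item in items)
    {
        foreach (Attachment attachment in item.AttachmentFiles)
        {
            // Open the XML attachment as a stream and push it to Azure Storage
            FileInformation fileInformation = Microsoft.SharePoint.Client.File.OpenBinaryDirect(
                context, attachment.ServerRelativeUrl);
            CloudBlockBlob blob = dataContainer.GetBlockBlobReference(attachment.FileName);
            blob.UploadFromStreamAsync(fileInformation.Stream).Wait();

            // A read-only SAS lets each Batch node download the file later
            string sasToken = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
            {
                Permissions = SharedAccessBlobPermissions.Read,
                SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2)
            });
            resourceFiles.Add(ResourceFile.FromUrl(blob.Uri + sasToken, attachment.FileName));
        }
    }

    return resourceFiles;
}
```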

Step 11 – For the Batch tasks to upload the files they produce, Azure needs the SAS (Shared Access Signature) of the output blob container in Azure Storage. The SAS is calculated by the GetContainerSasUrl routine:

Figure 9. The calculation of the SAS for the output container
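
A sketch of that calculation; the tasks only need write access, and the lifetime of the signature is an illustrative value:

```csharp
using System;
using Microsoft.WindowsAzure.Storage.Blob;

// Builds a write-enabled SAS URL so the Batch tasks can upload their results
private static string GetContainerSasUrl(CloudBlobClient blobClient, string containerName)
{
    CloudBlobContainer container = blobClient.GetContainerReference(containerName);

    var policy = new SharedAccessBlobPolicy
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddHours(2), // illustrative lifetime
        Permissions = SharedAccessBlobPermissions.Write
    };

    string sasToken = container.GetSharedAccessSignature(policy);
    return container.Uri + sasToken; // e.g. https://account.blob.core.windows.net/output?sv=...
}
```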

Step 12 – Turning to the creation of the infrastructure needed in Azure: first, the Pool that contains the processing Virtual Machines is created using the routine CreatePoolAsync:

Figure 10. The creation of the Virtual Machines Pool

During the creation of the Pool, you need to specify how many nodes (Virtual Machines) are needed (two in the example), which type of machines (“standard_d1_v2” in the example; the list of possible values can be found here) and the type of operating system (“6” in the example, for Windows Server 2019; the list of values can be found here).

It’s also possible to modify the configuration so that, if more power is needed, the Pool itself creates additional Virtual Machines. The Pool specifies a command that is executed when each Virtual Machine starts. In this case, it indicates that two working variables must be created to contain the directories where the executables will be uploaded and where the working files are to be maintained.
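
Put together, CreatePoolAsync might be sketched as follows. The start task command is illustrative: it copies the uploaded executables to the node’s shared directory so every task can find them:

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;

private static async Task CreatePoolAsync(
    BatchClient batchClient, string poolId, List<ResourceFile> applicationFiles)
{
    // Two "standard_d1_v2" nodes running OS family "6" (Windows Server 2019)
    CloudPool pool = batchClient.PoolOperations.CreatePool(
        poolId: poolId,
        targetDedicatedComputeNodes: 2,
        virtualMachineSize: "standard_d1_v2",
        cloudServiceConfiguration: new CloudServiceConfiguration(osFamily: "6"));

    // Runs when each node boots: copies the executable files from the task
    // working directory to the node's shared directory
    pool.StartTask = new StartTask
    {
        CommandLine = "cmd /c (robocopy %AZ_BATCH_TASK_WORKING_DIR% %AZ_BATCH_NODE_SHARED_DIR%) ^& IF %ERRORLEVEL% LEQ 1 exit 0",
        ResourceFiles = applicationFiles,
        WaitForSuccess = true
    };

    await pool.CommitAsync();
}
```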

Step 13 – Once we have the Pool, we need to create the Job that will contain the Tasks. This is done by the CreateJobAsync routine.

Figure 11. The creation of the Job
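
A sketch of CreateJobAsync, which only needs to tie the new Job to the Pool:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Batch;

private static async Task CreateJobAsync(BatchClient batchClient, string jobId, string poolId)
{
    CloudJob job = batchClient.JobOperations.CreateJob();
    job.Id = jobId;
    job.PoolInformation = new PoolInformation { PoolId = poolId };
    await job.CommitAsync();
}
```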

Step 14 – The AddTasksAsync method is responsible for creating the Tasks to be executed and started:

Figure 12. The creation of the Tasks

A Task must be defined for each data file that has been uploaded to the storage. Each Task is a command that needs to run; in this example, the command starts CreatePdf.exe with two parameters: a specific data file and the SAS of the Storage output blob.

Azure Batch distributes the Tasks across the available Virtual Machines, balancing the load between them. Finally, the JobOperations method starts the Azure Batch job, creates the Virtual Machines, downloads the executable and data files into them, and gives the command to start the job.
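
A sketch of AddTasksAsync under those assumptions (the executable path relies on the shared directory populated by the Pool’s start task):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;

private static async Task AddTasksAsync(
    BatchClient batchClient, string jobId,
    List<ResourceFile> inputFiles, string outputContainerSasUrl)
{
    var tasks = new List<CloudTask>();
    int taskNumber = 0;

    foreach (ResourceFile inputFile in inputFiles)
    {
        // Each Task runs CreatePdf.exe (copied to the shared directory by the
        // Pool's start task) against one data file plus the output SAS URL
        string commandLine = string.Format(
            "cmd /c %AZ_BATCH_NODE_SHARED_DIR%\\CreatePdf.exe {0} \"{1}\"",
            inputFile.FilePath, outputContainerSasUrl);

        tasks.Add(new CloudTask("task" + taskNumber++, commandLine)
        {
            ResourceFiles = new List<ResourceFile> { inputFile }
        });
    }

    // One call submits all Tasks; Azure Batch spreads them over the nodes
    await batchClient.JobOperations.AddTaskAsync(jobId, tasks);
}
```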

Step 15 – Tasks can be monitored until they are finished. The MonitorTasks routine receives the timeout as a parameter. If the timeout fires before all Tasks have finished, a message is displayed. Likewise, when a Task ends, either smoothly or with an error, a corresponding message is displayed.
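A sketch of what MonitorTasks can look like with the SDK’s TaskStateMonitor:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Common;

private static async Task MonitorTasks(BatchClient batchClient, string jobId, TimeSpan timeout)
{
    List<CloudTask> tasks = batchClient.JobOperations.ListTasks(jobId).ToList();

    // Block until every Task reaches the Completed state or the timeout fires
    TaskStateMonitor monitor = batchClient.Utilities.CreateTaskStateMonitor();
    try
    {
        await monitor.WhenAll(tasks, TaskState.Completed, timeout);
        Console.WriteLine("All tasks completed (check each task's ExecutionInformation for errors)");
    }
    catch (TimeoutException)
    {
        Console.WriteLine("The timeout expired before all tasks finished");
    }
}
```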

Step 16 – When each task finishes running, it uploads its result to the Azure Storage output blob. When all tasks are finished, the DownloadBlobsFromContainerAsync routine uploads the PDF files to the corresponding SharePoint List items using, again, the SharePoint Client Side Object Model.

Figure 13. The results are uploaded to Azure and SharePoint
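
The core of that routine, downloading one result blob and attaching it to its list item, might be sketched like this (matching blob to item by file name is an assumption of this sketch):

```csharp
using System.IO;
using Microsoft.SharePoint.Client;
using Microsoft.WindowsAzure.Storage.Blob;

// Downloads one result PDF and attaches it to its SharePoint list item
private static void AttachPdfToItem(ClientContext context, ListItem item, CloudBlockBlob resultBlob)
{
    using (var stream = new MemoryStream())
    {
        resultBlob.DownloadToStreamAsync(stream).Wait();
        stream.Position = 0;

        // CSOM attachment upload, mirroring the download done in Step 10
        var attachment = new AttachmentCreationInformation
        {
            FileName = resultBlob.Name,
            ContentStream = stream
        };
        item.AttachmentFiles.Add(attachment);
        context.ExecuteQuery();
    }
}
```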

Step 17 – Once the PDF files are uploaded to SharePoint, we do not need to keep the Azure Batch or Azure Storage infrastructure any longer (you pay for the resources as long as they are up and running). To delete all those resources, the “CleanUpResources” function is called at the end of the process.

Figure 14. Clean up the Azure resources
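
A sketch of CleanUpResources; the container names match the earlier sketches, and PoolId and JobId are the constants from the beginning of the code:

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Batch;
using Microsoft.WindowsAzure.Storage.Blob;

private static async Task CleanUpResources(BatchClient batchClient, CloudBlobClient blobClient)
{
    // Delete the Job and the Pool so the Virtual Machines stop accruing cost
    await batchClient.JobOperations.DeleteJobAsync(JobId);
    await batchClient.PoolOperations.DeletePoolAsync(PoolId);

    // Delete the Storage containers created in Step 9
    foreach (string containerName in new[] { "datafiles", "application", "output" })
    {
        await blobClient.GetContainerReference(containerName).DeleteIfExistsAsync();
    }
}
```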

Step 18 – The program run by the Tasks must be a Windows Console Application. Add a new Project in the Visual Studio Solution (called CreatePdf in the example). To create the PDFs, the example uses iText7, a C# library for creating files of this type (https://github.com/itext/itext7-dotnet; it is a commercial product, but you can use the open-source version for testing).

You need to install the iText7 NuGet package in the project and add using directives for iText.Kernel.Pdf, iText.Layout, and iText.Layout.Element. Also, use the WindowsAzure.Storage NuGet package to upload the files to Azure Storage, and add a using directive for Microsoft.WindowsAzure.Storage.Blob.

This program simply reads the XML file coming from SharePoint and creates a PDF file:

Figure 15. The creation of the PDF
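
In outline, and assuming a flat XML structure with one element per field, the PDF generation might look like:

```csharp
using System.Xml.Linq;
using iText.Kernel.Pdf;
using iText.Layout;
using iText.Layout.Element;

// Reads the XML file produced from a SharePoint item and writes a PDF
// with one line per field
private static void CreatePdfFromXml(string xmlPath, string pdfPath)
{
    XDocument data = XDocument.Load(xmlPath);

    var document = new Document(new PdfDocument(new PdfWriter(pdfPath)));
    foreach (XElement field in data.Root.Elements())
    {
        document.Add(new Paragraph(field.Name.LocalName + ": " + field.Value));
    }
    document.Close(); // flushes and closes the underlying writer too
}
```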

The UploadFileToContainer routine uploads the created PDF file to the Azure Storage output blob:

Figure 16. Upload of the PDF to the Azure Storage blob
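
A sketch of UploadFileToContainer; note that the SAS URL received as a parameter is the only credential the task needs:

```csharp
using System;
using System.IO;
using Microsoft.WindowsAzure.Storage.Blob;

private static void UploadFileToContainer(string filePath, string containerSasUrl)
{
    // The SAS embedded in the URL authenticates the write to the container
    var container = new CloudBlobContainer(new Uri(containerSasUrl));
    CloudBlockBlob blob = container.GetBlockBlobReference(Path.GetFileName(filePath));
    blob.UploadFromFileAsync(filePath).Wait();
}
```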

Step 19 – Set the constants at the beginning of the code to your real values, which completes the solution. Run the program and observe the messages in the console:

Figure 17. The output of the batch processing

When the application finishes working, the Azure resources are deleted and each item in the SharePoint List has two files as attachments (the XML and PDF files) containing the fields’ information:

Figure 18. The result files in SharePoint

Conclusion

Among the many ways to use Azure services to complement the operation of SharePoint Online or on-premises, Azure Batch enables you to move CPU- and/or memory-intensive operations outside the SharePoint tenant to remote processes in Azure Virtual Machines.

Azure Batch works in conjunction with Azure Storage to receive the data files to be processed, the executable files, and the results of processing. The system is scalable, with virtually no limits on compute capacity or memory. The use cases range from relatively straightforward examples, such as the PDF creation shown here, through extended engineering stress calculations for machinery and buildings, to time- and resource-expensive algorithms for physics or astronomy problems.

Source Practical365

Microsoft Discusses Azure Cloud Capacity Changes It Made to Handle the COVID-19 Crisis

Throughout the COVID-19 crisis, Microsoft has been consistently tweaking Azure cloud capacity to keep up with the demand of stay-at-home workers. While some countries are emerging from the crisis, many are still locked in the grip of the COVID-19 pandemic.

This week, Microsoft offered an update on how it is handling Azure cloud capacity. As usual, the focus is on how the company manages demand for Microsoft Teams, which is based on Azure. Microsoft’s workplace collaboration and video communication platform has seen a significant rise in users during COVID-19.

From a daily active userbase of around 20 million, Microsoft Teams usage has soared to over 75 million. Even that figure dates from Microsoft’s financial announcement last month, so it is reasonable to presume Teams use has continued to rise since then.

Either way, the jump in users obviously put a greater load on the underlying cloud. Microsoft spoke in April about how it was increasing capacity for some services and reducing it for others based on importance.

“Teams went from a service that was cool and convenient and something that people realized was the future of communication to very quickly becoming a mission critical, can’t-live-without-it sort of thing,” said Mark Longton, a principal group program manager for Microsoft Teams. “Really what this did was accelerate us into the future.”

Upping Capacity

In an update this week, Microsoft says employees in its datacenters have been putting in long shifts to ensure new servers are installed and running. Of course, the company points out these round-the-clock shifts observed distancing guidelines of six feet.

According to the company, a priority was put on adding servers to the regions most affected by COVID-19. Furthermore, Microsoft doubled the capacity of one of its undersea cables that runs under the Atlantic. Microsoft also “negotiated with owners of another to open up additional capacity.”

The United States and Europe were two of the worst hit regions. Microsoft tripled deployment capacity through its America Europe Connect cable.

Source Winbuzzer

Microsoft Azure Expanding to Deal with COVID-19 Demand

Microsoft has revealed it is expanding the capacity of its cloud services and Microsoft Azure to keep up with demand amid the COVID-19 pandemic. In a blog post, the company says it is constantly working to ensure there is enough service performance for all users, but is prioritizing frontline workers, first responders, and healthcare organizations.

To help handle the load, Microsoft is boosting its worldwide data center output and says it is placing resource limits on new Azure customers.

On the Microsoft Azure blog, Microsoft explains how it is managing “business continuity with Azure”, including helping customers complete urgent work and continuing to expand Azure to meet ongoing demand.

At the end of last month, Microsoft said it was tweaking cloud services like Microsoft Teams to handle growing demand. In an Azure blog post, Microsoft said “We have seen a 775 percent increase of our cloud services in regions that have enforced social distancing or shelter in place orders.”

Microsoft Teams Demand

Last week, Microsoft corrected that 775 percent figure, clarifying that the data only related to Teams in Italy. And it seems Microsoft Teams is the focus of the company’s continued push to manage Azure capacity to keep up with demand.

In the latest blog post, Microsoft explains how Teams continues to grow:

“Last month, the surging use of Teams for remote work and education due to the pandemic crossed into unprecedented territory. Although we had seen surges in specific data center regions or wider geographies before, such as in response to natural disasters, the substantial Teams demand increase from Asia and then quickly followed in Europe indicated that we were seeing something very different, and increasingly global.”

To refocus Teams and prepare it for the massive surge in users, Microsoft took these steps:

  • “Optimized and load-balanced Teams architecture without interrupting the customer experience.
  • Expediting additional server capacity to the specific regions that faced constraints.
  • Approving the backlog of customer quota requests.
  • Removing restrictions for new free and benefit subscriptions in several regions.
  • Refining our Azure demand models.”

Source Winbuzzer

Microsoft Azure’s Remote Rendering Service Reaches Public Preview

Over a year after its announcement, Azure’s Remote Rendering service is available to the public. A preview version of the tool was announced in February last year and has been refined by private testers since then.

Remote Rendering leverages cloud streaming technology to render high-quality 3D content remotely and deliver it to even low-end devices. A clear target here is the HoloLens 2, which has little space for powerful internals but still has a demand for detailed visuals.

In an enterprise environment, the availability of high-fidelity renders can be especially important. A higher polygon count can help medical students better understand operating procedures or an engineer can get guidance in a similar fidelity to the real world.

“A traditional approach to viewing 3D content on untethered devices is called decimation, which compresses the models and removes polygons. This simplifies the model to a point where it can run on slower GPU hardware,” explained Microsoft. “The result can be a loss of important detail that’s needed to make key business and design decisions. Azure Remote Rendering renders content in the cloud and streams it to devices in real time so that people can use interactive, high-quality 3D models with every detail intact and no compromise on quality.”

The experience is available via an SDK, which Microsoft says is easy to integrate into applications. Some of the same tech is likely being used in Project xCloud to deliver game streaming, so this is another example of a crossover between the company’s seemingly unrelated brands.

You can sign up for the Remote Rendering preview on the Azure site; new users get $280 in free credits and a 10-minute quickstart tutorial.

Source Winbuzzer

Microsoft Rolls out a Patch for Leaked SMBv3 Vulnerability

Microsoft is rolling out a patch for the SMBv3 vulnerability it let slip earlier this week. The ‘wormable’ bug was inadvertently revealed in the lead up to March’s Patch Tuesday, despite no mitigation rolling out with the updates.

Summaries of the bug were posted by Cisco Talos and Fortinet, who were given early access to the information and published it after a miscommunication. Attackers can exploit the bug by sending a specially crafted packet to a target SMBv3 server, allowing them to take complete control of vulnerable systems.

According to Microsoft, the issue exists in the way SMBv3 handles certain requests and is classed as a buffer overflow. To make use of the bug, an attacker would have to configure a malicious SMB server in a certain way and convince a user to connect to it.

The vulnerability has raised particular concern due to the use of SMB by the WannaCry and NotPetya ransomware. Several security researchers said it took them no more than five minutes to find the bug’s location in SMB code after the advisories were published. Some have also developed proofs of concept, suggesting it won’t be too long until we see this in use in the wild.

Thankfully, researchers think this won’t have as big an impact as the aforementioned ransomware. In the case of WannaCry, the exploit fell in SMBv1, which sees much wider usage. Rendition Security’s Jake Williams also said there may be some kernel mitigation.

“Core SMB sits in kernel space and KASLR is great at mitigating exploitation,” he tweeted. “Assuming this is kernel space, any unsuccessful exploitation results in [the blue screen of death] BSOD. Even with trigger code, you still have to remotely bypass KASLR (not an easy task). If you need proof, look at BUCKEYE. They had the EternalBlue trigger, but had to chain it with another information disclosure vulnerability to gain code execution. This isn’t easy.”

Source Winbuzzer

Microsoft PowerShell 7 Becomes Generally Available

Microsoft yesterday announced that PowerShell 7 is generally available. The latest version of Microsoft’s cross-platform automation solution comes with some interesting new features, such as pipeline parallelization.

PowerShell 7 has been in preview for over a year. It expands the capabilities of the tool that gives users a command-line shell and solution for managing scripts and modules.

“Today, we’re happy to announce the Generally Available (GA) release of PowerShell 7.0,” Microsoft’s Joey Aiello writes in the announcement. “We’d like to thank our many, many open-source contributors for making this release possible by submitting code, tests, documentation, and issue feedback. PowerShell 7 would not have been possible without your help.”
PowerShell 7 is once again built on .NET Core. As we have previously reported, the tool is also available on macOS and Linux.

“We believe that this could be occurring because existing Windows PowerShell users have existing automation that is incompatible with PowerShell Core because of unsupported modules, assemblies, and APIs,” explained Steve Lee, principal software engineer for PowerShell, at the time.

Details

Among the new features bundled into the PowerShell 7 package, Microsoft has added an easier way to see errors and new operators for the shell. Furthermore, a new compatibility layer for importing modules is also available, although only on Windows.

Other new features include automatic notifications for new versions, alongside new APIs and bug fixes.

You can get PowerShell 7 across a range of platforms. On the Windows side, the tool is available on Windows 7, 8.1, and 10, as well as Windows Server 2008 R2 or newer. Other supported platforms include macOS 10.13 or newer and several Linux variants, including Alpine Linux 3.8+, Debian 9+, Fedora 29+, openSUSE 15+, Red Hat Enterprise Linux/CentOS 7+, and Ubuntu 16.04+.

Source Winbuzzer

Leaked Surface Duo Video Shows New “Peek” Feature

We have known about the Surface Duo since it was announced by Microsoft in October. However, the company made it clear the Android smartphone was a prototype and would not launch until holiday season 2020. As such, information about how the Duo will perform and what features it has is scarce.

While some Microsoft patents shed a little light on the Surface Duo, we really know little about it. Today, a video leaked online showing a new feature that Microsoft is introducing on the Duo. Shared by WalkingCat, the video shows a feature called “Peek” that allows users to see notifications without fully unfolding the Duo.

As you probably know, the Surface Duo is a twin-screen smartphone. Both screens are located on the inside of a book-like shell form factor. The two 5.6-inch displays are separated by a small middle bezel but can be combined into an 8.3-inch tablet screen. With a 360-degree hinge, the Surface Duo will be hugely functional in numerous modes.

That said, it won’t be very functional when it is closed. In fact, it will be a hard shell on both sides with no access to the screens. This is where the Peek feature will be important, as it will allow users to interact with the handset without opening it.

Interacting with A Closed Phone

How to let users interact with a closed shell device is something other smartphone manufacturers have dealt with. Samsung and Motorola, with their folding-screen devices, have placed a small secondary notification screen on the outside.

Microsoft is seemingly not taking this approach. Instead, Peek will let users open the Surface Duo just marginally to see the notification.

I am not sure how productive this feature is. On paper, it seems pointless: if you open the Duo a little, you may as well open it the whole way.

Microsoft’s video shows that the UI will be slick but does little to convince me a screen on the outside would not be a better option.

Source Winbuzzer

Nvidia’s GeForce Now Takes on xCloud with Free Game Streaming Tier, $5 a Month Premium Deal

After years in closed beta, GeForce Now is finally available to the masses, and Nvidia’s full launch is even more generous than anyone expected. You can use the app free of charge today, so long as you’re okay with starting a new session every hour. Alternatively, priority access to GeForce servers, six-hour sessions, and RTX capabilities cost just $5 a month, which includes a 90-day free trial.

I’ve been using GeForce Now personally for around 8 months now. Though I own a gaming PC, it’s proved invaluable while traveling or at LAN parties. An Android client also means you can start it up on your phone, much like xCloud.

It’s going to be extremely interesting to see how Microsoft’s xCloud and Google Stadia respond to this. Stadia currently costs £119 just to get on board with its Premiere Edition, and you have to purchase titles again from its very limited library. In comparison, GeForce Now lets you play most titles you already own on Steam, Uplay, and the Epic Store. With this, Stadia is essentially dead in the water if it doesn’t make any changes.

However, it’s worth noting that xCloud has a slightly different value proposition. Unlike its competitors, it really is more of a ‘Netflix for Games’. Microsoft’s subscription will bundle its first-party titles, those on Game Pass, and other Xbox titles at no extra cost. If it can keep the price down to $10, that could easily be worth it.

If you want raytracing, though, Nvidia is your only option right now. Microsoft doesn’t currently have RTX-powered consoles and its current beta is based on the Xbox One S. Still, it’s worth noting that Nvidia is presumably planning to raise its pricing at the end of 2020. What exactly that will be is unclear at this point, but it calls the current $4.99 “discounted”.

Source Winbuzzer
