
SharePoint Document Management


How Throttling Impacts Tenant-to-Tenant SharePoint Online Content Migrations


Every organization faced with a large tenant-to-tenant migration is concerned about how quickly they can migrate their content. Inevitably, these organizations will raise concerns about being throttled during their SharePoint Online (SPO) content migration.

Administrators accustomed to Exchange Online throttle policies are often surprised by the limitations they encounter during an SPO migration.

What’s the Problem with Throttling Anyway?

Basically, throttling slows down the content migration process based on external limitations. A good way to think about throttling is that it’s a bit like the restrictor plate used on NASCAR race cars during selected races. The restrictor plate effectively limits the top speed of the race car, as speeds higher than 190 mph may result in cars flipping over which can cause crashes.

By controlling the migration pace and keeping a minimum threshold flow for content, throttling maintains the stability and usability of the customer’s tenants. This not only protects the content migration process, but also enables users to continue using the tenant.

How Does Throttling Work?

Each tenant implements throttling at the service level. The service throttles the Client Side Object Model (CSOM) calls and the Graph API calls. The service throttling rules and the migration API's self-throttling rules are based on Compute and SQL availability. The migration API also adjusts how many tasks run in a tenant based on the availability of the backend resources.

Microsoft does not explicitly state exactly what the throttling rules are. Nor is there an official or unofficial policy by which Microsoft will remove throttling from a tenant. However, Microsoft can and will monitor a tenant if there are concerns about heavy throttling.

When all goes well, throttling maintains a smooth flow of traffic for all SPO tenants.

What Do Throttling Errors Look Like?

When migration tools make CSOM or REST API calls that exceed usage limits, the migration service throttles any further requests from that user for a period of time. You can still be throttled when using the Graph API, and throttling also occurs when uploading batches to a public or private Azure storage container.

Below are some examples of common throttling errors:

429 Error: Too Many Requests

What you will see in response to throttling on HTTP request calls is a high volume of HTTP 429 errors (“Too Many Requests”), HTTP 503 errors (“Server Too Busy”), and/or HTTP 500 errors (“Operation Timeout”). A throttled HTTP 429 response also carries a Retry-After header, described next.

Retry-After Value

The Retry-After value is an integer indicating the number of seconds after which the request can be resent. If you send a request before the retry value has elapsed, your request is not processed, and a new Retry-After value is returned. Several asynchronous calls may each receive a Retry-After value if they are processed within the same retry window. Thus, repeatedly resending a request while still receiving a 429 error is futile.

503 Error: Server Too Busy

The Retry-After value used with 503 errors indicates in seconds how long the service is expected to be unavailable. You may see a 503 error with the message “Server Too Busy.” This error will likely appear when you are uploading a lot of content to an Azure storage container. Like 429 errors, repeatedly sending a request while still receiving a 503 error is futile.

500 Error: Operation Timeout

The 500 error is a very general HTTP status code that means something has gone wrong on the website’s server, but the server cannot be more specific about the exact problem. Sometimes, the 500 error is due to an incorrect permission on one or more files or folders.

Other times, an application is shutting down or restarting on the server. It’s difficult to know exactly what is happening, and there is no Retry-After value provided. In fact, this error usually has nothing to do with throttling, but it can be an indicator that the service is having trouble keeping up with demand.

What Does Microsoft Recommend?

Per their general migration performance guidance:

  1. Use app-based authentication (OAuth).
  2. Try to migrate during off-peak business hours.
    1. Business week evenings are obviously better than business daytime hours.
    2. Business weeknights and weekends are the best.
  3. Do not submit more than 5,000 migration jobs/requests at one time; over-queuing will create an extra load on the database and slow the migration down.
  4. Implement Microsoft’s guidance and best practices on back-off and retry code.
    1. Good practice is to implement an exponential back-off and retry – delay each subsequent request exponentially to allow the migration service to “catch up.”
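As an illustration of that guidance, the sketch below shows one way to combine an explicit Retry-After value with exponential back-off. This is a minimal sketch, not any migration vendor's actual implementation; the function names, base, and cap are my own assumptions.

```python
import random
import time

def next_delay(attempt, retry_after=None, base=2.0, cap=300.0):
    """Return seconds to wait before retrying a throttled request.

    Honors an explicit Retry-After value when the service provides one;
    otherwise falls back to capped exponential back-off with jitter.
    """
    if retry_after is not None:
        # Never retry earlier than the service asked us to.
        return float(retry_after)
    return min(cap, base ** attempt) + random.uniform(0, 1)

def send_with_backoff(do_request, max_attempts=8):
    """Retry a request while the service returns 429/503, backing off each time.

    `do_request` is a caller-supplied callable returning (status, retry_after).
    """
    for attempt in range(max_attempts):
        status, retry_after = do_request()
        if status not in (429, 503):
            return status
        time.sleep(next_delay(attempt, retry_after))
    raise RuntimeError("still throttled after maximum attempts")
```

Note that the explicit Retry-After value always wins over the computed delay: resubmitting before it elapses only resets the clock.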

What Happens When the Migration is Throttled?

In real life, throttling looks a bit like ramp meters placed on highway onramps. Ramp meters are used to control when and how often vehicles can enter the highway, and the goal is to keep traffic moving on the highway. As a result, movement on the onramps may be slower at times.

This is the same experience with throttling and migrating SPO content. The content will move smoothly until heavy congestion is detected in the backend of the tenant. Then you will start seeing 429 errors returned with Retry-After values. The Retry-After values will force new content submissions to back off and wait until the backend congestion is reduced.

Can Microsoft Turn Off Throttling to Help You with Your SPO Migration?

Officially…No. Throttling rules cannot be disabled or suspended, and opening a support ticket will not lift throttling. In a previous version of the same guidance document, Microsoft stated, “throttling is implemented to ensure the best user experience and reliability of SharePoint Online. [Throttling] is primarily used to load balance the database and can occur if you misconfigure migration settings, such as migrating all your content in a single task or attempting to migrate during peak hours.”

In my experience, I’ve never heard of an instance where Microsoft has lifted the throttling rules for a customer’s content migration, including for Microsoft Consulting Services. Microsoft’s migration tools do not have preferred App IDs that bypass throttling, and there’s no secret back entrance to avoid throttling.

What Would Happen if Throttling Was Lifted?

In the grand scheme of things, this would be bad for your tenant. Unrestricted migration of content to a tenant significantly increases the amount of content moving to the services, and the services could eventually fail due to the heavy load. The virtual network adapters could fail, or the SQL Server could stop responding to requests. Users on the tenant would see a significant drop in performance of online services – possibly a complete failure.

Of course, this situation can quickly deteriorate even more so. Tenants that share hardware environments are impacted by the heavy load placed on any one of the tenants. Each tenant will experience a degradation in performance. Thus, the problem of one tenant becomes the problem of many tenants.

What Can You Do?

Back Off and Retry Code
For starters – the migration software you choose will certainly have an impact on migration throughput. The software must implement back-off and retry code as recommended by Microsoft.

The migration software should also use OAuth authorization, an App ID, and app-based authentication, as well as the Import API to create migration jobs in the target tenant and the Export API for reading from source tenants. The use of CSOM should be limited to features that are not supported by the migration API or the Graph API – and such features do exist.

Migration Windows
Second, understand that the best times to write content to the target tenant are during off-peak times. Business daytime hours will generally see a higher probability of throttling as the SPO tenant is trying to maintain stability for M365 users.

Business week evenings are good times to migrate since there are fewer M365 users online. However, there may be backend processes running in M365 during these times. These processes may trigger throttling rules to ensure that they can complete successfully without interference from heavy migration processing.

However, the best times to migrate are business weeknights and weekends as there should be almost no M365 users online and fewer backend processes running. Weekends should be the primary target for scheduling content migrations.

Weekly Migration Throughput
Third, plan for a total weekly migration throughput based on the amount of content that can be migrated at different hours during the week. For example, a sample content migration throughput plan for OneDrive might appear as below, and you can see that the throughput during business weekday hours is only 1TB. However, the non-business weekday hours throughput is higher at 3TB, and the weekend throughput is much higher:

[Figure: sample weekly OneDrive migration throughput plan]
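To make the arithmetic concrete, here is a small sketch of such a plan. The weekday figures follow the sample plan above; the weekend figure and the 100 TB total are illustrative assumptions, not values from the source.

```python
# Hypothetical weekly throughput plan, in TB migrated per window.
# Weekday figures follow the sample plan; the weekend value is assumed.
plan_tb = {
    "business weekday hours": 1,      # heaviest throttling
    "non-business weekday hours": 3,  # lighter throttling
    "weekend hours": 6,               # assumed: least throttling
}

weekly_total = sum(plan_tb.values())
print(f"Planned weekly throughput: {weekly_total} TB")

# With an assumed 100 TB to migrate, that plan implies roughly 10 weeks:
weeks_needed = 100 / weekly_total
```

Summing per-window rates like this gives a defensible weekly estimate to share with stakeholders, rather than a single "GB per hour" number that ignores throttling windows.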

This is typical for large migrations, but you must consider the following factors:

  1. Not every migration is typical.
  2. The type of content being migrated has a significant influence on throughput.
  3. The throughput plan should indicate whether other content migrations are taking place at the same time.
    1. You cannot exclude other migrations to SPO, OneDrive or Teams in the same target tenant just because a different team is running a migration process, or a different migration tool is being used.
    2. What matters is that the content is migrating to the same tenant.

Another consideration is when the source and target tenants are in different geographical regions, as this may reduce the total amount of non-business hours available to your migration. Consider the following example: an organization is migrating content from New York, USA to Berlin, Germany. At 6PM on a Friday evening in Berlin, the migration window is open for the weekend. However, it is still 12PM in New York. The source tenant may still throttle on reads to maintain stability for users, and the rules may stay in effect for another 6 hours.

At the other end of the weekend, the throttling rules on the target tenant can start at 6AM in Berlin. However, it is only midnight in New York, and it will be another 6 hours until throttling rules take effect to protect the source tenant. Thus, your total potential migration throughput for this scenario can be reduced by 12 hours on the weekend. The same limitation exists for your evening and night-time processing.
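The time-zone arithmetic above can be checked with a short script. The specific Friday is arbitrary and chosen only for illustration; the zone names are standard IANA identifiers.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

berlin = ZoneInfo("Europe/Berlin")
new_york = ZoneInfo("America/New_York")

# 6 PM Friday in Berlin: the weekend migration window opens on the target.
window_open = datetime(2021, 5, 7, 18, 0, tzinfo=berlin)

# ...but in New York it is still midday, so the source tenant may keep
# throttling reads for roughly another six hours.
source_local = window_open.astimezone(new_york)
print(source_local.strftime("%A %H:%M"))  # Friday 12:00
```

Running the same conversion for each end of every migration window quickly shows how much of the "weekend" is actually off-peak on both tenants.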

Set Appropriate Expectations on Migration Throughput
Fourth, it’s important to set realistic expectations with your customer on what to expect for migration throughput. Factors that impact throttling include:

  • Multiple migration workloads
  • Lots of small items in lists and small files in libraries
  • Lots of permissions and metadata
  • File versions
  • Throttling applied on both the source and target tenants

For example, imagine driving on a highway where there is little traffic. Some trucks are carrying large loads, but not necessarily heavy loads – this is akin to migrating large files. Their throughput can be very high, and they can load and unload quickly.

Another type of truck is carrying a load of sugar beets. The load is like migrating thousands of small files, and this truck cannot go as fast as the other trucks. It’s on the same highway; but it is heavier, needs more time to load, travels at a slower speed, and needs more time to unload. It also takes more time to process all the sugar beets at the factory after they are unloaded.

With these two different scenarios, different expectations should be set. First, no two migration loads are the same. Second, even when the total migration sizes are equal, large files will move and process faster than small files. Thus, measuring migration throughput on total size alone sets a false expectation.

Do Not Try to Avoid Throttling by Ignoring Best Practices
Fifth, do not try to avoid throttling with inventive solutions just because you heard from someone or read online somewhere about a “recommended” approach. Here are a couple of my favorites:

  • Running multiple migration solutions concurrently. Deemed to be faster because each migration solution uses OAuth authorization and has its own App ID; thus, each migration solution will not be throttled, and you can push as much content as possible.
    • This is false – throttling is managed at the service level, not the individual migration solutions. Using OAuth and App ID allows for more throughput in comparison to CSOM, but they can still be throttled by the service.
  • Running a content migration with multiple apps installed in Azure AD. Each app uses a different App ID and service principal, and the migration solution uses every app to send content migrations through. The likelihood of being throttled is greatly reduced because multiple service principals are being used, and each service principal is seen as a unique migration process and the throttling will be determined uniquely by the service. Thus, your throughput will increase!
    • This is false – and not a best practice supported by Microsoft.
    • In fact, Microsoft will warn you if they determine you’ve implemented this solution and will ask you to remove it.

Do Not Panic Over Throttling Errors
Lastly, do not panic if you see throttling errors. This is normal and is usually an indicator that your content migration solution is pushing content to the limit of what the migration service can support. You should reduce the requests being submitted if you see warnings regarding CSOM or the REST API.

Just like racing a car, there are times when you want to see the RPMs close to the red zone, but you don’t want to see the needle in the red zone for too long. It’s not good for the engine – it could seize. For migrations, this could mean the migration service will lock you out and you’ll have to wait a few days for the service to let you back in again.

Conclusion

Your first recourse should not be to look for ways to bypass throttling, as throttling maintains the stability of your tenants. Removing throttling is not possible and would likely result in your tenant crashing anyway.

When looking for ways to expedite a tenant-to-tenant content migration, there are several actions you can take to absorb this extra time without extending deadlines. These include planning to migrate at off-peak times; setting appropriate expectations with customers; and avoiding inventive solutions that do not follow best practices.

Source Practical365


Hands-on SharePoint Syntex: Part 2


In part 1 of this series, we introduced you to SharePoint Syntex, Microsoft’s new service, which brings the power of automation to content processing and transforms your content into knowledge. We explained the licensing requirements for SharePoint Syntex and showed how to license and set up SharePoint Syntex in your Microsoft 365 environment.

In part two, we look at adding document understanding models into our newly created Syntex Content Center and how to add, classify, and train documents with SharePoint Syntex.

Finally, in part three, we consider creating forms processing models from SharePoint document libraries by using AI Builder, a feature of Microsoft PowerApps.

Setting up a Document understanding model in SharePoint Syntex

With SharePoint Syntex licensed and set up in a tenant, we can explore its real value by adding a Document understanding model and then training some documents to extract the information we want.

To create a Document understanding model within SharePoint Syntex, open the Syntex Content Center created in part one and complete the following steps.

  1. Click on New, and select Document understanding model:

Figure 1 – Creating a Document understanding model

  2. For this example, I will use PDF files of my payslips, so I will name this model Payslips. The first step is to create a new content type. A content type in SharePoint Online is a reusable collection of metadata (columns), workflow, behavior, and other settings for a category of items or documents in a SharePoint list or document library. You may also select existing content types; for more information on content types, please refer to this Microsoft article. I will also choose to apply a Retention label to any content to which this model is applied. My retention label is set to trigger a compliance administrator’s Disposition review at the end of the retention period (Figure 2). Click Create when the required settings for the model are complete.

Figure 2 – Naming the document understanding model, creating a content type, and assigning a retention label

  3. The model creation wizard takes you to the next step, where you will see four key actions to develop your newly created model (Figure 3).

Figure 3 – Key actions are shown in the document understanding model

  4. Now that we have our new model, we should add some example files. To do this, click on Add files.

The example files are used to train the model.  You may upload either files or folders. We must upload at least 5 files of the same (positive) type and 1 file of a different (negative) type.  In this instance, I have chosen to upload 5 of my payslip PDFs as positive examples and 1 negative example, a PDF of my Microsoft certification transcript (Figure 4).  Once the example files are uploaded, click Add.


Figure 4 – Adding positive and negative example files

  5. This takes you back to the main key actions page for your model. Next, we need to classify our files and run training. To do this, select the option to Train the classifier.

From the classifier screen, we need to select each of the documents we uploaded to our model in the left pane, then on the preview pane to the right, we choose yes or no to the question Is this file an example of Payslips? (Figure 5).

Note: I have redacted information displayed in my preview pane in the examples that follow to protect my confidential details.


Figure 5 – Labelling files as positive or negative examples

  6. Make your selection against each file, and then move on to the next one by clicking Next file.

Figure 6 shows that I have labeled all the payslip files as positive examples and my Microsoft transcript file as the one required negative example.

An important consideration here is that ideally, a negative file should be as close an example as possible to the positive file examples.  In this case, my negative example is a completely different format to that of the positive.  Whilst this does work, it is not the best real-world example, but it does show you how the process works.


Figure 6 – All upload example files have been labeled

  7. Now we need to run the training on our files. Click on the Train tab, and you will be prompted to add an explanation, which is required to help the model distinguish this type of document from others or identify the information to extract. Click on Add explanation as shown in Figure 7.

Figure 7 – Add an explanation

  8. For our explanation, we will give it the name Payslips and choose the Phrase list option, where we may enter words or phrases that will be used to identify the information we wish to extract. All my payslips contain the phrase PRIVATE & CONFIDENTIAL, so I have used this as my phrase (Figure 8). I have also selected the checkbox to match exact capitalization. With our explanation details completed, we may now click on Save.

Figure 8 – Choose a name and type for your explanation, and add a list of phrases

  9. Click on Train Model. If successful, you will see a Match against your files, as shown in Figure 9. However, if you see a Mismatch, you will need to add further explanations to provide more information and rerun the training.

Figure 9 – File matching completed successfully

  10. In the preview pane against each file, we can see where a file has been Correctly predicted as a positive example. Similarly, we can see where a file has been Correctly predicted as a negative example (Figure 10).

Figure 10 – Correctly predicted negative example

  11. Click on the Test tab within your classifier, and you may add and train further files if you wish or need to (Figure 11). Then click on Exit Training.

Figure 11 – Add further files if required and exit the training

  12. Next, back on the key actions page, we have an optional stage where we can create extractors that will extract specific information from our positively matched documents and display it as columns in the SharePoint document libraries to which our model is applied. Click on Create extractor.

I want to extract the date from each of my payslips, so I will create an extractor named Paid Date (Figure 12).  Click on Create.


Figure 12 – Creating a new entity extractor

  13. From the Label tab of our new extractor, we need to scroll through each example file again and highlight the required information, which is the date from each payslip (Figure 13).

Figure 13 – Highlight the required information to extract

  14. Once a file has been appropriately labeled, click on Next file to move to the next one. When reaching the last file, which is my negative example, I need to click on No label for this one and then click on Save.

Next, I will click on the Train tab to train my extractor.  I will need to add an explanation for the extractor at this point by clicking on Add explanation (Figure 14)


Figure 14 – Adding an explanation for the extractor

  15. I will name this explanation Date Paid, and this time I will choose Pattern list as the type. As the pattern list will reference a date, I can choose to add a list of patterns from a template (Figure 15).

Figure 15 – Add a name and type to your explanation, and add a list of patterns from a template

  16. You will now see a list of the available explanation templates. Here I will choose the Date (Numeric) option and click Add (Figure 16).

Figure 16 – Add the chosen explanation template

  17. The template patterns for the date format are added (Figure 17), and we may now click on Save.

Figure 17 – Save the pattern list

  18. Now we need to click on Train Model for our new extractor, and hopefully, we will see a match as shown in Figure 18.

Figure 18 – Training the model for the extractor

  19. The creation of the extractor is now complete. Click the Test tab to complete further training if required, and then Exit Training when we are satisfied that the extractor will match the content we wish to be shown in a column in our document libraries (Figure 19).

Figure 19 – Click to exit the training of the extractor

  20. The final step is to apply the model to any chosen document libraries within SharePoint Online. To do this, return to the key actions page, and click on Apply model.

Select the required document library, then click Add. The Payslips model is now applied to my chosen document library. To open this document library, click on Go to the library.

You can immediately see that the document library shows some extra columns related to our newly applied document model.  These include our extractor column of Paid Date and the Retention label column. The document model will automatically run against any new files added to this document library, or we can select files and then choose Classify and extract.

The result is that my payslips are all now shown with a Content Type of Payslips, an extracted Paid Date value, a Retention label of Disposition Review Label, a Confidence Score, and a Classification Date (Figure 20).

Figure 20 – Document model applied to the document library with added columns

Our Document understanding model is set up, complete with some compliance in the form of retention labels, and an extractor applied which shows extracted information in a column in the document libraries to which our model is applied.

Summary

This post showed you how SharePoint Syntex could be used to create document understanding models in the SharePoint Syntex Content Center.  We learned how to add, classify and train documents with SharePoint Syntex, how to extract information from the documents that you mark as positive examples, and how to apply a document model to a SharePoint document library.

In part three of this blog series, we look at how forms processing models may be created from SharePoint document libraries using the AI Builder feature of Microsoft PowerApps.

Source Practical365


How to create SharePoint Single AppPart Pages


Single Page Application (SPA) is a paradigm for creating modern web applications where the information is presented to the user through a single HTML page. This ensures that sites are more responsive and closely replicate a desktop application or native app. A SPA retrieves all the application’s code, such as HTML, JavaScript, and CSS, on the initial load. Alternatively, it may load resources dynamically in response to user activity or other events.
The Microsoft SharePoint interface shows pages built from several components called AppParts; these originate from different sources and are assembled at runtime.

Until early 2019, it wasn’t possible to install one single AppPart filling the page’s complete real estate in SharePoint and thereby simulate the behavior of SPA sites. However, since the introduction of version 1.7 of the SharePoint Framework (SPFx), it’s now possible to configure SharePoint and the AppParts so we can carry out this task. At the time of writing this article, this option is only available for SharePoint Online, not for SharePoint Server. Nonetheless, it’s certainly possible that Microsoft may add this option to the Server version in a future Service Pack.
Using Single AppPart pages in SharePoint is important because it makes it possible to create more complex AppParts that fill the full page area with more controls and visual elements, enriching the user experience.
Creating a Single AppPart page is a two-step process: create the SPFx component and configure it to fill the complete working area, then configure the SharePoint page to render it accordingly.
Create the SPFx AppPart
Initially, the creation of an SPFx AppPart for a Single AppPart page is similar to that of an AppPart. Microsoft offers extended information about the development of SPFx AppParts and the creation of a development environment on their documentation site.
Step 1 – Here, we need to use Yeoman, the tool used to generate SPFx projects, to scaffold a new AppPart for SharePoint. Ensure that you choose SharePoint Online only (latest) as the environment you want to use and select the client-side component type for the WebPart you want to create. You should then see a HelloWorld AppPart that can be run in the local Workbench with gulp serve.

Step 2 – Open the file ../src/webparts/helloWorld/HelloWorldWebPart.manifest.json using your code editor, for example, Visual Studio Code. Search for the supportedHosts section and then add a new value called SharePointFullPage as shown below.
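The resulting manifest fragment should look something like the following (per the SPFx documentation; SharePointWebPart is the generator's default value, and SharePointFullPage is the value that enables single-part pages):

```json
{
  "supportedHosts": ["SharePointWebPart", "SharePointFullPage"]
}
```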

Step 3 – Open the file “../src/webparts/helloWorld/HelloWorldWebPart.module.scss” and comment out the max-width attribute using two forward-slash characters (“//”), as indicated in Figure 4. This is not required by the Single AppPart Page, but it will show the AppPart using the full page width when it is hosted in SharePoint.

Install the AppPart and use it on a page

Step 4 – Compile the AppPart with gulp bundle --ship, following the instructions in the Microsoft documentation to create and compile SPFx AppParts, and then generate its deployment package with gulp package-solution --ship.

Step 5 – Open the SharePoint Catalog site and upload the AppPart package to the Apps for SharePoint library. Use the Make this solution available to all sites in the organization option to ensure that the AppPart will be immediately available for all site collections.

Step 6 – Create a new Blank page in one of the SharePoint site collections. Open the page and install the AppPart. It will behave like a normal AppPart, meaning it will be shown as one of the AppParts that can be installed in a zone and column of the page. The only difference is that the AppPart will use the full width of the column because we changed this option in the CSS file of the part.

Configure the page to become a Single AppPart Page

Although Microsoft describes different ways to configure SharePoint to make the page a Single AppPart Page (using JavaScript or the SharePoint CLI), the best way is to use PnP PowerShell. Patterns and Practices for SharePoint (PnP) is an open-source initiative hosted on GitHub and closely monitored by Microsoft; it enhances the SharePoint object models and PowerShell accessibility, filling gaps that Microsoft never covered natively.

Step 7 – To use the PnP PowerShell module, you must install it first: open a PowerShell console as Administrator and run the following command:
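The install command did not survive in this copy of the article; for the classic PnP module it would be along these lines (module name per the PnP documentation):

```powershell
Install-Module -Name SharePointPnPPowerShellOnline
```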

Step 8 – To log in to Office 365, use the following command and provide your credentials when asked. Replace [domain] and [SiteName] with the name of your Office 365 domain and site collection:
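The sign-in command is presumably of the following shape (placeholders as in the text; Get-Credential prompts for your account details):

```powershell
Connect-PnPOnline -Url "https://[domain].sharepoint.com/sites/[SiteName]" -Credentials (Get-Credential)
```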

Step 9 – Then, run the following script, changing the value of “[NamePage.aspx]” to the correct page name. The command must be entered as a single line of text:
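The stripped script was likely similar to the following single-line cmdlet (a sketch using the PnP cmdlet's LayoutType parameter; the page name placeholder matches the text):

```powershell
Set-PnPClientSidePage -Identity "[NamePage.aspx]" -LayoutType SingleWebPartAppPage
```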

“SingleWebPartAppPage” is the property that will convert the page to a Single AppPart page.

Step 10 – Go back to the page and refresh it.

Figure 7. The AppPart configured as full page in a SharePoint page

As you can see, the complete command bar at the top of the page and the header of the page are invisible now.

Step 11 – If you change the header of the page to Compact from the Settings in the Change the look and Header options, and remove the Quick Launch menu, the complete interface of the page will be available for the AppPart, making it look like the design of a SPA:

Figure 8. The AppPart used as full page with the quick launch menu removed

Step 12 – To return the page to normal rendering, change the PageLayoutType property in the script to Article, and run it again:
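Under the same assumptions as the sketch for Step 9, the revert would look like this:

```powershell
# Restore the default page rendering by setting the layout type back to Article.
Set-PnPClientSidePage -Identity "[NamePage.aspx]" -LayoutType Article
```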

There are a few things you should note about the Single AppPart Pages in SharePoint:

  • These pages are made to host only one AppPart
  • If you install more than one AppPart on the page and then convert it to a Single AppPart Page, only the first AppPart is rendered. The other AppParts are only hidden and become visible again if the page is configured back to “Article”
  • Single AppPart Pages can also be used by the ‘out of the box’ AppParts
  • The configuration panel of the AppParts is fully usable for both custom and ‘out of the box’ AppParts
  • There seems to be a bug in these pages for users with read-only rights: the new layout disappears, and the page is rendered as a normal SharePoint page. The bug has been reported to Microsoft.

In conclusion, Single AppPart pages in SharePoint are a useful option for SPFx AppParts that need the complete real estate of a SharePoint page. In this way, designers and developers can create richer and more usable interfaces. Additionally, the changes needed to implement it are unintrusive and easy to recognize.

Source – Practical365


SharePoint Folders vs. Metadata


Folders vs. Metadata is an endless debate in the SharePoint world. I have published a number of articles on the topic myself. Here are links to some of them:

I always wanted to create a slide deck to help my loyal blog readers visualize the issues with folders and the benefits of metadata. I finally got a few hours to create one. You can even download it from SlideShare if you wish. Enjoy!



SHAREPOINT DOCUMENT TYPES


Are you trying to organize your documents in SharePoint and scratching your head over how to name your docs? Perhaps you have different types of documents, like meeting notes, schedules, budgets, etc. While you can tag your documents with various properties, like document owner, revision, etc., one of the most common “drop-down” choices is Document Type (as shown in the image below).

SharePoint Document Library


COMMON SHAREPOINT DOCUMENT TYPES

In case you want to take advantage of this, I have summarized the most frequently used SharePoint Document Types in the slide deck below. While every organization is unique and will have its own document types as well, this list covers the most common ones. So feel free to download the slide deck and copy and paste the list into your column drop-down choices or Term Store. Enjoy!


2 WAYS TO SEARCH FOR FILES IN SHAREPOINT


One of the great advantages of SharePoint over file shares is its ability to search and find the content you are looking for. Network drives, file shares, and Dropbox are great if you want to store content. But how do you find your content (documents)? In this blog post, I would like to explain the available options for searching and finding documents in SharePoint.

HOW TO SEARCH FOR FILES IN SHAREPOINT

Option 1: Site search box

I am sure you have seen this, and I am sure you have used it already. Every site has a search box in the upper right-hand corner, which allows you to surface content based on what you have typed in.

How to search for files in SharePoint using Site Search Box

  1. Navigate to the Search Box in the upper right-hand corner of your SharePoint site
  2. Type the text/keyword you are looking for
  3. Hit Enter



Pros:

  • Works out of the box
  • Searches for the typed keyword in file names, metadata, and the text inside files (MS Office and readable PDF files only)

Cons:

  • By default, it searches the site where you typed in the search text plus any subsites beneath it. Depending on the keyword, this may return too many irrelevant results, as the scope is usually the whole site collection (unless search has been specifically configured by your SharePoint administrators)
  • By default, it might not return all the relevant results. SharePoint makes assumptions about some files and may treat them as duplicates of one another – so they won’t even show up in the search results. Reference this blog post for more info, courtesy of Mike Smith.
  • By default, it searches all types of content, not just documents. In other words, the search results will display any content (folders, events, tasks, contacts, whole sites, and libraries) that matches whatever keyword/term you typed in. So unless search has been specifically configured by your SharePoint administrators, the results might be a bit overwhelming for end users. See the image below for what I mean. You also might want to check out the related blog post “SharePoint Document Library – one or many?”


Option 2: Document Library Search Box (my favorite)

Despite having been around for quite some time (the feature became available in SharePoint 2013), not many users know about it or get to use it. Every document library in SharePoint 2013 has a search box located just above the documents themselves. The beauty of this search box is that it lets you search for documents within that specific document library only.

How to search files in SharePoint using Library Search Box

  1. Navigate to the root of the Document Library
  2. You will notice a search window in the header portion of the document library (to the right of where all the views are)
  3. Type the text/keyword you are looking for
  4. Hit Enter


Pros:

  • Works out of the box
  • Just like the “global” search in Option 1, the document library search box surfaces content based on file name, metadata, and the text inside the files themselves
  • More precise search results. Since you are searching within a specific document library, you will only get results that are documents and not other junk (sorry, I meant content) located on your site

Cons:

  • If your documents are spread across multiple libraries/sites, this option won’t help much. You will need to search each document library separately or rely on the global search described in Option 1.

Bonus: Wildcard Search

Another cool search feature you can use with both Options 1 & 2 is wildcard search. That is for when you don’t know the exact keyword, only a portion of the text you are looking for (i.e. the first few letters). In the example below, I am searching for the same keyword I searched for above (vehicle), except now I am searching by its first few letters. As you can see, I am getting the same results!


For wildcard to work in SharePoint…

  • You have to start with the first few letters of the word. In other words, for the word “vehicle”, you can’t search for the text “ehic”; it has to be “veh”
  • The wildcard character in SharePoint is “*”. You have to put the asterisk (wildcard character) after the first few letters, not before. For example, veh*, not *veh
  • You can use SharePoint wildcard search with both Options (global search and library-level search)
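For completeness, the same prefix-matching behavior can be exercised programmatically; a sketch using the PnP PowerShell Submit-PnPSearchQuery cmdlet, assuming an already connected PnP session:

```powershell
# "veh*" matches "vehicle" because SharePoint supports prefix (trailing-*)
# wildcards only; a query like "*veh" would not return the same documents.
# IsDocument:true restricts the results to documents.
$results = Submit-PnPSearchQuery -Query "veh* IsDocument:true"
$results.ResultRows
```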

SEARCH FOR DOCUMENTS IN A DOCUMENT LIBRARY USING METADATA NAVIGATION


Before we proceed, I want to mention that this filtering mechanism (officially called Metadata Navigation in SharePoint) only really makes sense if you configure your library with custom metadata/columns. If you don’t have custom metadata set up, the only filters you will have are the ones that exist by default in any library (Modified, Modified By, etc.).

Metadata Navigation is a feature in SharePoint that allows users to dynamically filter and find content in SharePoint lists and document libraries.

Step 1. Configure your metadata, upload documents

I will assume that you already know how to do this. If you don’t – you might want to check out this slide deck for step-by-step instructions.

Step 2: Activate Metadata Navigation feature

Not sure why it is set up this way, but the cool-looking filter (also known as Metadata Navigation) is not activated out of the box. You have to enable the feature at the site level first. For this, you need to be a Site Admin (or have Full Control permission on the site).

  1. Go to Site Settings > Manage Site Features.
  2. Scroll down to Metadata Navigation and Filtering and click the Activate button.


Step 3: Configure Metadata Navigation Settings

Once Metadata Navigation is activated, you can set up filters at the library level:

  1. Go to the library where you want to add the filters
  2. Go to the Library Settings tab – you will now see an option called Metadata Navigation Settings. That option did not exist previously (it only appears after you activate the Metadata Navigation feature above)
  3. On the next screen you can configure your filters. You have full control over which filters to display and in which order. The filter list on the left side contains all of the metadata filters available to you (both out of the box and custom metadata you created yourself).
  4. Click OK

You are done – enjoy your search! You can now search your library using a combination of metadata filters. Just make your choices on the left, click Apply filter, and your library will adjust the results accordingly.



SHAREPOINT DOCUMENT LIBRARY – ONE OR MANY?


Shall I put all my documents in one library or in multiple libraries? This question always comes up when it is time to create sites and migrate documents from file shares / network drives to SharePoint. With this post, I hope to answer that question and explain the pros and cons of the single vs. multiple document library approach.

SHAREPOINT DOCUMENT LIBRARY – ONE OR MANY?

To start off, if you are moving from network drives / folder structures to SharePoint, you never want to put all of your files and folders into one big SharePoint document library. As I have stated numerous times in previous blog posts, the best practice is to break that content up and place it into different sites, depending on the business function/intent and the unique security of each site. For example, all HR documents go to the HR site, all Finance documents to the Finance site, all Project documents to the Project/Team site, and so on.

Assuming you did this, here is the next dilemma you might face. Say each of your departments has its own set of policies/forms/templates they use, and you want all of these policies to be available in one spot. How do you deal with this? Well, there are two options available to you. Let me explain both.

Option 1: Each Department stores their policies, forms or templates on their respective sites

The obvious option would be to let each department manage and store their own policies, forms, or templates. However, by doing so, you are making the task of aggregating these documents in one place a very complicated one. Yes, it is possible to roll up documents from multiple libraries and sites into a single site/location; however, it cannot be done straight out of the box – it requires advanced SharePoint Web Parts like CQWP (Content Query Web Part) or CSWP (Content Search Web Part), and you need above-average Power User/Administrator knowledge of SharePoint to achieve this. And the rolled-up content will look like this…

CSWP Search Results

To put in simple terms, there is no Out of the Box way to roll up content from multiple document libraries into another document library.

Moreover, because of the decentralized nature of this approach (every department is on its own), you might not have a good mechanism or governance to standardize naming conventions and metadata tagging of those policies.
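To give a feel for what those roll-up queries involve, here is a sketch of the kind of cross-site KQL query a Content Search Web Part is configured with, run via the PnP PowerShell Submit-PnPSearchQuery cmdlet (assuming a connected PnP session; the URL is a placeholder):

```powershell
# Roll up every document stored anywhere under /sites/ - the same KQL text
# you would otherwise place in the CSWP query configuration panel.
$query = 'IsDocument:true Path:"https://[domain].sharepoint.com/sites/"'
(Submit-PnPSearchQuery -Query $query).ResultRows
```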

Option 2: All departments store their policies in one library / site

The second option is to provision one site dedicated to Policies. On that site, you can create a single document library and configure metadata with properties relevant to Policies. Examples of such metadata would be:

  • Policy Owner (example: list of Departments like Accounting, HR, IT)
  • Policy Audience (example: Department names, types of employees like Full-time, Part-Time, Contractor)
  • Policy Type (example: policy, guideline, procedure)
  • Policy Status (example: draft, approved)
  • Policy Expiration Date

Here is an example of what such a library might look like when all is said and done:

Document Library Metadata Navigation

All you need to ensure is that the policy owners from each respective department have Contribute access to this site/library and are properly trained on the new business process.

I ALWAYS ADVOCATE OPTION # 2 TO MY CLIENTS. HERE IS WHY:

  1. It is all about end users! When you have all your policies in a single document library, you are making it super convenient for your end users (content consumers) to find stuff. All they have to do is navigate to the site or library, and it is all there for them. When you have content spread across multiple sites, you are making it easier for content owners, but not for content consumers. If you were an end user, would you prefer going to one place to find all your company policies, or to many? It is like a one-stop shop!
  2. Standard categorization. Since all of the files are in a single document library and not spread over multiple sites/document libraries, it is much easier to come up with a uniform categorization (metadata) for all the policies. And there is only one document library to set up, not many!
  3. Advanced filtering criteria. Since all policies are organized in a single document library and you did your homework with metadata, finding stuff based on metadata is super easy! You can use various views and filters to group and sort your policies any way you want. Or you can enable metadata navigation to give users a nice-looking filter to search for documents. You just won’t get the same look and feel and interface when rolling up content from multiple sites.
  4. No need to roll up content or write complicated search queries. If you keep your policies on multiple sites and libraries, I hope you are an advanced power user or SharePoint administrator with intimate knowledge of how search works and the ability to write queries using the CSWP web part.

With that being said, there are obviously situations when you cannot and should not put all your documents in one library. Project files are a good example. Project files will sit in each separate project or team site, and if you want to roll up or aggregate documents from multiple project sites, you will be forced to use the search queries mentioned above. However, for certain types of content, like the policies mentioned above, just by making slight changes to your business process you can easily standardize your documents, create a nice search experience for your end users (content consumers), and spare yourself the major effort and overhead of writing queries and setting up custom searches in SharePoint.


How not to copy files in SharePoint


I usually advocate for the many wonderful features of SharePoint to be used as much as possible, but today I would like to explain a feature that I would not want you to use. I have had a few clients inquire about the best ways to copy files in SharePoint between sites and libraries – in particular, about the feature called “Send To” or “Copy”.

COPY FILES IN SHAREPOINT USING “SEND TO” COMMAND

There is a feature accessible via the SharePoint File ribbon called “Send To” or “Copy”.


In theory, the intent is great. The functionality allows you to copy a file from one library to another and establish a link, so an update in the source document library will update the file in the destination document library. I am sure you can see a number of business scenarios where this could be required. For example, HR could develop a number of company policies on the internal HR site, work through multiple changes and revisions, but only publish the official version to the HR Employee site.

SOUNDS GREAT IN THEORY, BUT NOT AS USEFUL IN PRACTICE. LET ME EXPLAIN…

  1. The feature is not user-friendly at all. You have to type in the path of the destination library. You can’t browse, and the URL has to be the root of the library, not a particular view – otherwise it will error out
  2. The file update does not occur automatically. Should you change the file in the source library, users have to manually push the updates to the destination library. This means extra steps that users won’t remember
  3. It only works on one file at a time. You can only copy one file at a time, which is a waste of time if you need to copy/send a few.
  4. The feature is useless if the destination library has custom metadata. If you use custom metadata in the destination library to tag the file, you cannot assign it from the same menu when you send/copy the file over. That means you have to go to the destination library anyway to assign metadata after the file ends up there.
  5. Can’t send/copy folders – just individual files

Based on the above, I do not recommend using the Send To/Copy functionality. While great in theory, in practice it is not user-friendly and does not add much value. You will be much better off educating your users on the manual process of uploading files to both libraries/sites. If you do decide to implement this for whatever reason, it is a sure way to kill SharePoint user adoption.
