Parsing SharePoint ULS Logs and Visualizing Them in Kibana

In an effort to consolidate our diagnostic data in Elasticsearch and Kibana, one task that landed on my plate was to figure out a way to load the relevant SharePoint ULS log data into Elasticsearch so it could be searched and visualized in Kibana. At a high level, the process is the following:

  1. Get a list of log files to load for each server based on the last incremental load.
  2. Read each log file and exclude lines based on a predefined list of events.
  3. Partition the events based on the timestamp of the log event.
    1. This step is fairly important if you have a large number of events to import into Elasticsearch.
    2. Partitioning the ULS log data by day allows you to simply drop the index in Elasticsearch for data that is no longer relevant. There is no need to query the index to find documents matching a specific retention period.
    3. For instance, if you want to clean up by month, you can use the following function from libElasticsearch.psm1 to drop the data from December 2016:
      1. Remove-ElasticsearchIndex -serverName <server name> -indexName sharepointulslog-2016.12.*
  4. Create batches of events for each partition to facilitate insertion into Elasticsearch.
  5. Send each batch of events to Elasticsearch.
  6. Once the load is finished, persist the date and time of the last file that was loaded as a checkpoint.
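The daily partitioning in step 3 can be sketched roughly as follows. This is a simplified illustration, not the actual implementation: Get-DailyIndexName is a hypothetical helper, and the real logic lives in libElasticsearch.psm1.

```powershell
# Sketch of daily partitioning; the index naming matches the
# sharepointulslog-yyyy.MM.dd pattern shown above.
function Get-DailyIndexName {
    param([string]$baseIndexName, [datetime]$timestamp)
    "$baseIndexName-$($timestamp.ToString('yyyy.MM.dd'))"
}

# Sample parsed entries; in the real script these come from the ULS log parser
$logEntries = @(
    @{ timestamp = [datetime]"2016-12-05 09:15:02"; message = "Entering monitored scope" },
    @{ timestamp = [datetime]"2016-12-05 09:15:03"; message = "Leaving monitored scope" },
    @{ timestamp = [datetime]"2016-12-06 01:00:00"; message = "Heartbeat" }
)

# Group the entries into one bucket per daily index
$partitions = $logEntries | Group-Object { Get-DailyIndexName "sharepointulslog" $_.timestamp }
$partitions | ForEach-Object { "{0}: {1} document(s)" -f $_.Name, $_.Count }
```

With data bucketed this way, retention cleanup is a cheap index drop rather than a delete-by-query.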

In order to come up with a good list of events to exclude, I iteratively loaded small batches of log files into Elasticsearch and visualized the number of events per event Id and Category. Looking at the events with the most occurrences, I checked whether they would be useful from a troubleshooting standpoint. SharePoint is quite chatty depending on the logging level that is set; there are a lot of events for things like processes starting, running and completing. After a few hours of reviewing the events in our quality assurance environment, I ended up with 271 events in the exclusion list, and that’s still a work in progress as more data comes in.
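If you want to reproduce that kind of breakdown outside of Kibana, a terms aggregation over the index pattern ranks events by occurrence. This is a sketch of a request body you could send to sharepointulslog-*/_search; the field name eventid is an assumption on my part, so adjust it to match your index mapping:

```json
{
  "size": 0,
  "aggs": {
    "events_by_id": {
      "terms": { "field": "eventid", "size": 50 }
    }
  }
}
```

The top buckets returned are the natural candidates to review for the exclusion list.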

Now let’s get into the actual details of running this process. After you have downloaded and extracted the latest release of GEM Automation, the first thing that needs to be done is to populate a file named \SharePoint\SharePointServers.csv. The file is simply a comma-delimited file that contains the list of servers for which you want to collect the logs, along with some additional information. Here’s what it looks like:

SPT1001,Quality Assurance,\\SPT1001\c$\Program Files\Common files\Microsoft Shared\Web Server Extensions\14\LOGS\
SPT1002,Quality Assurance,\\SPT1002\c$\Program Files\Common files\Microsoft Shared\Web Server Extensions\14\LOGS\
SPT1003,Production,\\SPT1003\c$\Program Files\Common files\Microsoft Shared\Web Server Extensions\14\LOGS\
SPT1004,Production,\\SPT1004\c$\Program Files\Common files\Microsoft Shared\Web Server Extensions\14\LOGS\
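As a rough sketch of how a file like this can be consumed: the file has no header row, so column names have to be supplied when importing. ULSLogNetworkBasePath matches the name the import script refers to below; the other two column names are my own stand-ins.

```powershell
# Sketch of reading the server list; in the real scripts this would be
# Import-Csv against \SharePoint\SharePointServers.csv. Inlined here so the
# example is self-contained.
$servers = @'
SPT1001,Quality Assurance,\\SPT1001\c$\Program Files\Common files\Microsoft Shared\Web Server Extensions\14\LOGS\
SPT1002,Quality Assurance,\\SPT1002\c$\Program Files\Common files\Microsoft Shared\Web Server Extensions\14\LOGS\
'@ | ConvertFrom-Csv -Header ServerName, Environment, ULSLogNetworkBasePath

foreach ($server in $servers) {
    "{0} ({1}): {2}" -f $server.ServerName, $server.Environment, $server.ULSLogNetworkBasePath
}
```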

Once that’s populated, you can change your current directory in PowerShell to .\Elasticsearch. You can now run Import-SharePointULS.ps1 as follows:

.\Import-SharePointULS.ps1 -elasticsearchHostName <server name>

Kicking this off will make the following happen:

  1. For each of the servers, a list of the log files in the ULSLogNetworkBasePath specified in the SharePointServers.csv file is gathered.
  2. Then, all the log files that have not been processed since the last checkpoint, with the exception of the log file currently in use by SharePoint, will be used for the current execution.
  3. For each of the remaining log files, Parse-SharePointULS.ps1 will be called, passing the environment specified in the SharePointServers.csv file.
    1. The environment name can be used to more easily segment your dev/test data from the production data in Kibana.
  4. Parse-SharePointULS.ps1 will then read the log file line by line and check if the line needs to be excluded based on the file SharePoint\SharePointULSLog_Exclusions.txt.
    1. If the line should be imported, a hashtable will be built with the parsed columns from the current log line.
  5. Once all the relevant lines have been processed, they can be sent to Elasticsearch using the following function from libElasticsearch.psm1:
    1. Add-ElasticSearchDocumentBatch -serverName $elasticsearchHostName -batchSize 1000 -indexName sharepointulslog -documentType ulslogentry -documents $logEntries -partitionKey "timestamp" -indexPartitionType "Daily" -indexDefinitionPath .\IndexDefinitions\SharePointULS.json
      1. A few key things to notice:
        1. The partition key and type are specified to make sure we spread all the event data across daily indexes to speed up querying in Elasticsearch. Kibana is able to issue queries against multiple indexes at a time, i.e. sharepointulslog-*.
        2. We need to pass the definition of the index, as it will be used to create each of the indexes that partition the data.
        3. Add-ElasticsearchDocumentBatch will then call:
          1. Partition-ElasticsearchDocument to split the ULS events in daily buckets
          2. Create-ElasticsearchDocumentBatch to split the documents in the partition into batches that will be sent to Elasticsearch
          3. If the index for the partition doesn’t exist it will get created at the beginning of the processing of the partition
  6. The process is then repeated for each log file that requires processing.
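The per-line parsing and exclusion check in step 4 can be sketched as below. This is a simplification of what Parse-SharePointULS.ps1 does: ULS log files are tab-delimited with the columns Timestamp, Process, TID, Area, Category, EventID, Level, Message and Correlation, and the sample line and exclusion list here are made up for illustration.

```powershell
# Normally read from SharePoint\SharePointULSLog_Exclusions.txt
$exclusions = @("8nca")

# A made-up ULS log line; real files use a tab between each column
$line = "12/05/2016 09:15:02.11`tOWSTIMER.EXE (0x0E14)`t0x0F20`tSharePoint Foundation`tMonitoring`tnasq`tMedium`tEntering monitored scope`t"
$columns = $line -split "`t"

# Column 5 is the event Id; skip the line if it is in the exclusion list
if ($exclusions -notcontains $columns[5]) {
    # Build the hashtable that will become an Elasticsearch document
    $logEntry = @{
        timestamp = [datetime]$columns[0]  # culture-sensitive cast; sample uses MM/dd/yyyy
        process   = $columns[1]
        area      = $columns[3]
        category  = $columns[4]
        eventid   = $columns[5]
        level     = $columns[6]
        message   = $columns[7]
    }
}
```

Keeping the exclusion check ahead of the hashtable construction means excluded lines cost almost nothing to skip.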

Before you can see the data in Kibana, you’ll need to configure a new index pattern:


Once that’s completed, you will be able to create custom searches over that index pattern. Those searches can then be used to build specific visualizations, which in turn can be used to build a dashboard like the following (I apologize for the cramped screenshot, I will create a better one soon):


You can now slice and dice millions of ULS log events in seconds using Kibana. For example, you could filter out your dashboard based on the Database event category to find when SQL Server connectivity issues are happening.

Another interesting aspect of sending that data to Elasticsearch is that it facilitates finding errors associated with a specific correlation ID. You can simply put correlationid:<correlation id> in the search bar in Kibana and the results will be returned in as little as a few milliseconds.
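The same lookup can also be issued directly against the Elasticsearch REST API, which is handy for scripting. This is only a sketch: the host name and port are placeholders, and the correlationid field name comes from the Kibana search above.

```powershell
# Build a term query for a specific correlation ID
$correlationId = "<correlation id>"
$body = @{ query = @{ term = @{ correlationid = $correlationId } } } | ConvertTo-Json -Depth 5

# Uncomment to run against a live cluster:
# Invoke-RestMethod -Method Post -Uri "http://<server name>:9200/sharepointulslog-*/_search" `
#     -ContentType "application/json" -Body $body
```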

If you have any questions about this, let me know in the comments below!

