Log Analytics Data Collection and Configuration with Bicep

Previously I showed you how to use Bicep to deploy Log Analytics, App Insights, Azure Sentinel, Azure Monitor for VMs, and Azure Monitor for Containers, as well as linked Automation Accounts for Change Tracking and Update Management. This week I have several more Bicep templates for you: data collection for Log Analytics, plus saved queries and functions. Have favorite queries or functions you deploy with all your Log Analytics workspaces? Want to collect certain Event Logs, metrics, or Syslog?

First, this is for what I'll call workspace data collection, where data collection is set up on the workspace itself. It is not to be confused with the new Data Collection Rules (DCRs) for the new Azure Monitor Agent; that said, I will have Bicep samples for DCRs later as well.

As always, I have personally tested everything in this blog and in my repo. That said, I'm still one person, so if you find an error or something doesn't work, please let me know and I will do my best to resolve the issue.

TLDR: Repo here

Deploy Saved Queries and Functions

Most people I know who use Log Analytics have queries or functions they like to deploy with each workspace. We can easily do this in Bicep.

One of my favorite queries to deploy is a Usage query written by a former Microsoft employee.

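In case you don't have the repo in front of you, here's a minimal sketch of what the saved query template looks like. The query body below is just a short placeholder against the Usage table, not the full Usage query from the repo:


param workspaceName string
param location string = resourceGroup().location

resource workspaceName_resource 'Microsoft.OperationalInsights/workspaces@2020-08-01' = {
    name: workspaceName
    location: location
}

// Saved query: both the symbolic name and the segment after the '/' must be unique per query
resource workspaceName_Usage 'Microsoft.OperationalInsights/workspaces/savedSearches@2020-08-01' = {
    name: '${workspaceName_resource.name}/Usage'
    properties: {
        category: 'Usage'
        displayName: 'Usage by Data Type'
        query: 'Usage\r\n| where TimeGenerated > ago(30d)\r\n| summarize TotalMB = sum(Quantity) by DataType\r\n| sort by TotalMB desc'
        version: 2
    }
}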

There are only two parameters needed: workspaceName and location.

These two values need to be unique for each query you deploy:


resource workspaceName_Usage
name: '${workspaceName_resource.name}/Usage'

So you’ll want to change them up.

The other thing I want to call out is the \r\n, which is a line break in the query. If you want your query to look nice in Log Analytics, add these line breaks; otherwise it will come in all jumbled.

Once deployed, your saved query will show up under Query Explorer on the right-hand side of the Logs view, under the category you specified; in my case that was Usage.


Unfortunately, at this time saved queries deployed by ARM don't show up in the left-hand saved queries view like one would expect.

Functions

Deploying Functions is very similar to Saved Queries.

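Functions use the same savedSearches resource type as saved queries; the main difference is the functionAlias property, which is the name you call the function by in KQL. Here's a trimmed-down sketch, assuming the same workspaceName parameter and workspace resource as above, with a placeholder query:


// Function: functionAlias is what you reference in your KQL queries
resource workspaceName_FailedLogons 'Microsoft.OperationalInsights/workspaces/savedSearches@2020-08-01' = {
    name: '${workspaceName_resource.name}/FailedLogons'
    properties: {
        category: 'Security'
        displayName: 'Failed Logons'
        functionAlias: 'FailedLogons'
        query: 'SecurityEvent\r\n| where EventID == 4625'
        version: 2
    }
}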

Again, your resource deployment names need to be unique. I did find that the category was hit or miss for functions. For instance, I tried "test" and it did not create that category like it did for saved queries; however, when I used "Security" it did deploy under that category.

There are two ways to view your saved functions in the Logs view. One is by category, in which case they'll either show up under Other or, if you used Security or another supported category, under that category. The other is by Solution, in which case they will show up under "Workspace functions".

 

Log Analytics Data Collection

Now we come to the meat and potatoes of Log Analytics: data collection. The question is whether to hard code or to parameterize, and in my repo I have provided examples of both. If you always deploy the same counters with the same collection intervals, it makes sense to hard code the template. However, if you deploy based on customer needs, these will likely be different every time. I have seen customers that want 10 counters, all at 30-second collection, and other customers that only want 5 counters at 120-second collection. So your mileage may vary, which is why I have provided both.

As mentioned previously, this is not for the new Data Collection Rules for the new Azure Monitor Agent. These templates configure the current workspace-level collection, meaning every machine connected to the workspace will collect this data.

Perf Counters

Hard coding performance counters is very straightforward:


param workspaceName string
param location string = resourceGroup().location

resource workspaceName_resource 'Microsoft.OperationalInsights/workspaces@2020-08-01' = {
    name: workspaceName
    location: location
}

resource workspaceName_perfcounter1 'Microsoft.OperationalInsights/workspaces/datasources@2015-11-01-preview' = {
    name: '${workspaceName_resource.name}/perfcounter1'
    kind: 'WindowsPerformanceCounter'
    properties: {
        objectName: 'LogicalDisk'
        instanceName: 'C:'
        intervalSeconds: 60
        counterName: '% Free Space'
    }
}

This example is from the deployWVDEventsCounters template from here. That template has both Perf Counters and Event Logs hard coded; these are the required counters for the new WVD Insights workbook provided by the WVD Product Group.

You can deploy it with this Azure CLI example:

az deployment group create --name WVDSetup --resource-group azmoneastus2 --template-file .\deployWVDEventsCounters.bicep

I have also parameterized metric deployment for both Linux and Windows counters.


param workspaceName string = 'la-blog-eastus2-cloudsma'
param metricLocation string = resourceGroup().location
@allowed([
    'WindowsPerformanceCounter'
    'LinuxPerformanceObject'
])
param metricKind string = 'WindowsPerformanceCounter'
param metricObjectName string
param metricInstanceName string = '_Total'
param metricIntervalSeconds string = '120'
param metricCounterName string = '% Processor Time'

var metricDeploymentName = '${workspaceName_resource.name}/${uniqueString(subscription().subscriptionId, deployment().name)}'

resource workspaceName_resource 'Microsoft.OperationalInsights/workspaces@2020-08-01' = {
    name: workspaceName
    location: metricLocation
}

resource workspaceName_metricDeploymentName 'Microsoft.OperationalInsights/workspaces/datasources@2020-08-01' = {
    name: metricDeploymentName
    kind: metricKind
    properties: {
        objectName: metricObjectName
        instanceName: metricInstanceName
        intervalSeconds: metricIntervalSeconds
        counterName: metricCounterName
    }
}

There's a lot going on with this one, and it took me quite a while to iron out the issues. The most important piece is the var metricDeploymentName. Because everything in Azure is a resource, even deployed metrics have to have a unique resource name, and deploying a second counter with the same resource name will overwrite the first counter. My first thought was to use the counter name, since that will always be unique, until I tried to deploy counters with % in the name: Azure resources can't have a % in the name. I then tried uniqueString, which works, but not at the resource group level, as it was still overwriting previously created counters on the workspace. What I have found works is uniqueString with the subscriptionId and the deployment name.

It's also very important to pay attention to the instance name you want to collect. For instance, with disks you can collect *, which would be every disk on every machine, or you can use C: for the instance name, which would only collect C: across your environment. The same applies to Memory and Processor, where you would typically want to use * for Memory and _Total for Processor.

Here are some examples of deploying metrics with this template:

az deployment group create --name LogicalDiskAvgWrite --resource-group azmoneastus2 --template-file .\templates\loganalytics\workspacedatacollection\deployMetrics.bicep --parameters metricObjectName='LogicalDisk' metricCounterName='Avg. Disk Bytes/Write' metricInstanceName='*'

 

az deployment group create --name PercentFreeSpaceC --resource-group azmon --template-file .\deployMetrics.bicep --parameters metricObjectName='LogicalDisk' metricCounterName='% Free Space' metricInstanceName='C:'

 

az deployment group create --name PercentProcessorTime --resource-group azmoneastus2 --template-file .\deployMetrics.bicep --parameters metricObjectName='Processor' metricCounterName='% Processor Time' metricIntervalSeconds='60'
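And for the Memory and Processor guidance above, a Memory counter collected across all instances looks something like this (the counter name here is just an illustration):

az deployment group create --name MemoryAvailableMBytes --resource-group azmoneastus2 --template-file .\deployMetrics.bicep --parameters metricObjectName='Memory' metricCounterName='Available MBytes' metricInstanceName='*'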

Note: While I have set up this template to be able to do both Linux and Windows counters, at this time Linux deployment is too inconsistent. I have experienced successful deployments that never collect any data, and successful deployments that do collect the counter but never show it under the agents configuration in Log Analytics. I am trying to get this resolved internally. My recommendation for the moment is to hard code Linux counters, as sketched below.
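If you do need a Linux counter today, a hard-coded sketch looks roughly like the following. Note this is my assumption of the shape: the LinuxPerformanceObject kind takes a performanceCounters array rather than a single counterName like its Windows counterpart.


// Hard-coded Linux counter (sketch): performanceCounters array instead of counterName
resource workspaceName_linuxperf1 'Microsoft.OperationalInsights/workspaces/datasources@2020-08-01' = {
    name: '${workspaceName_resource.name}/linuxperf1'
    kind: 'LinuxPerformanceObject'
    properties: {
        objectName: 'Processor'
        instanceName: '*'
        intervalSeconds: 60
        performanceCounters: [
            {
                counterName: '% Processor Time'
            }
        ]
    }
}

If the counter deploys but never collects, you may also need a data source of kind LinuxPerformanceCollection with state set to Enabled, similar to the LinuxSyslogCollection resource in the Syslog template below.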

Event Log


param workspaceName string = 'la-blog-eastus-cloudsma'
param location string = resourceGroup().location
param eventLogName string = 'System'
param eventLevel array = [
     'Error'
     'Warning'
     'Information'
]

var deploymentName = '${workspaceName_resource.name}/${uniqueString(subscription().subscriptionId, deployment().name)}'

resource workspaceName_resource 'Microsoft.OperationalInsights/workspaces@2020-08-01' = {
    name: workspaceName
    location: location
}

resource workspaceName_deploymentName 'Microsoft.OperationalInsights/workspaces/datasources@2020-08-01' = {
    name: deploymentName
    kind: 'WindowsEvent'
    properties: {
        eventLogName: eventLogName
        eventTypes: [for Level in eventLevel: {
            eventType: Level
        }]
    }
}

For Event Log we have our event log name and the level, which is an array so that we can pass in all three levels or just one. Again, if you want to hard code them, you can follow the examples in the WVDDeployment file.
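For reference, a hard-coded version is just the same data source with the values inlined; a sketch (not the exact contents of the WVDDeployment file) would be:


// Hard-coded System log collecting only Errors and Warnings
resource workspaceName_systemEvents 'Microsoft.OperationalInsights/workspaces/datasources@2020-08-01' = {
    name: '${workspaceName_resource.name}/systemEvents'
    kind: 'WindowsEvent'
    properties: {
        eventLogName: 'System'
        eventTypes: [
            {
                eventType: 'Error'
            }
            {
                eventType: 'Warning'
            }
        ]
    }
}

And here is how to deploy the parameterized template: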

az deployment group create --name hypervAdmin --resource-group azmon --template-file .\deployEventLog.bicep --parameters eventLogName='microsoft-windows-hyper-v-compute/admin' eventLevel="['Error','Warning']"
az deployment group create --name applogtest --resource-group azmon --template-file .\templates\loganalytics\workspacedatacollection\deployEventLog.bicep --parameters eventLogName='Application' eventLevel="['Error','Warning']"
az deployment group create --name fslogixtest --resource-group azmon --template-file .\templates\loganalytics\workspacedatacollection\deployEventLog.bicep --parameters eventLogName='FSLogix-Apps/Operational' eventLevel="['Error','Warning','Information']"

Syslog

Syslog is set up the same way as Event Log, with an array for the severity levels.


param workspaceName string = 'la-blog-eastus2-cloudsma'
param location string = resourceGroup().location
param syslogName string = 'kern'
param severityLevel array = [
     'emerg'
     'alert'
     'crit'
     'err'
     'warning'
     'notice'
     'info'
     'debug'
]

resource workspaceName_resource 'Microsoft.OperationalInsights/workspaces@2020-08-01' = {
    name: workspaceName
    location: location
}

resource workspaceName_Syslog 'Microsoft.OperationalInsights/workspaces/datasources@2020-08-01' = {
    name: '${workspaceName_resource.name}/${syslogName}'
    kind: 'LinuxSyslog'
    properties: {
        syslogName: syslogName
        syslogSeverities: [for Level in severityLevel: {
            severity: Level
        }]
    }
}

resource workspaceName_SyslogCollection 'Microsoft.OperationalInsights/workspaces/datasources@2020-08-01' = {
    name: '${workspaceName_resource.name}/Enable'
    kind: 'LinuxSyslogCollection'
    properties: {
        state: 'Enabled'
    }
}

az deployment group create --name syslogDaemon --resource-group azmoneastus2 --template-file .\deploySyslog.bicep --parameters syslogName='daemon' severityLevel="['emerg','alert']"

az deployment group create --name syslogDaemon --resource-group azmoneastus2 --template-file .\deploySyslog.bicep --parameters syslogName='daemon' severityLevel="['emerg','alert','crit','err']"

Summary

Using Bicep to configure our Log Analytics data collection is relatively easy. I hope you find these examples worthwhile. Please feel free to contribute to my repo, and if you have any suggestions for further examples, let me know.