Building an end-to-end monitoring solution with Azure Arc, Log Analytics and Workbooks–Part 3: Data persistence in Log Analytics
In part 1, I explained that we want to set up an application health dashboard to gain insight into the availability and health of the on-premises parts of our applications. Specifically, we want to monitor our application pools, scheduled tasks and Windows services. I introduced the overall architecture and explained the building blocks.
Part 2 was all about data collection, using Data Collection Rules with our Azure Arc-enabled servers.
Today I’ll focus on how we used a custom table in Log Analytics to persist our data.
Why a custom table
The built-in Windows event logs in Log Analytics (the Event table) contain a lot of data, but the format isn't optimized for health-status queries. Parsing event log XML to extract service states or scheduled task results on every query adds latency and complexity.
When you query the Event table for service state changes, you're filtering through thousands of rows, parsing semi-structured XML from the EventData column, and then correlating multiple events to determine current state. Here's what a typical query looks like:
Event
| where EventLog == "System"
| where Source == "Service Control Manager"
| where EventID in (7036, 7040) // 7036 = service entered a new state, 7040 = start type changed
| parse EventData with * '<Data Name="param1">' ServiceName '</Data>' *
| parse EventData with * '<Data Name="param2">' ServiceState '</Data>' *
| summarize arg_max(TimeGenerated, *) by ServiceName
| project TimeGenerated, Computer, ServiceName, ServiceState
That works, but it's slow at scale, and it only gives you state transitions — not current state snapshots. If a service has been running for days, there's no recent event to query.
A custom table lets us store pre-structured health snapshots — one row per component per collection interval — with a schema we designed ourselves. The equivalent query becomes:
ServiceHealth_CL
| where TimeGenerated > ago(1h)
| where ResourceType == "WindowsService"
| summarize arg_max(TimeGenerated, *) by ServerName, Name // latest snapshot per service
| project TimeGenerated, ServerName, Name, Result
It's faster, clearer, and easier to build dashboards on top of.
Designing the schema
The schema is the contract between your data collection process and your queries. Get it right up front, and everything downstream becomes easier. Get it wrong, and you'll be fighting type conversions and missing columns in every query.
For our health monitoring use case, we're tracking three component types: IIS application pools, Windows services, and scheduled tasks. They share some common attributes (name, status, timestamp) but each has unique properties too.
You have two options: create a single table with a union schema that covers all three types, or create three separate tables (one per component type). We went with a single table because:
- It's easier to query across all component types in one go (for the summary dashboard)
- The schema is still manageable — we're not creating dozens of nullable columns
- It keeps the DCR configuration simpler (one destination vs. three)
Here's the schema we landed on. These are the columns referenced throughout the rest of this post:
- TimeGenerated (datetime): timestamp of the health snapshot
- ServerName (string): the server the component runs on
- ResourceType (string): the component type, for example WindowsService
- Name (string): the name of the application pool, service or scheduled task
- Result (dynamic): the type-specific health details
Notice that we used a dynamic type for the Result column, because the information we want to capture differs between an app pool, a Windows service and a scheduled task.
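Because all three component types live in one table, a single query can roll them up for the summary dashboard. Here's a minimal sketch; it assumes the collection script writes a normalized Status field into the dynamic Result column, which is an assumption on my part, so substitute whatever fields you actually emit:
ServiceHealth_CL
| where TimeGenerated > ago(1h)
| summarize arg_max(TimeGenerated, *) by ServerName, ResourceType, Name // latest snapshot per component
| extend Status = tostring(Result.Status) // hypothetical field inside the dynamic column
| summarize Total = count(), Healthy = countif(Status == "Healthy") by ResourceType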
Creating the table
Custom tables in Log Analytics are created through the Tables management interface or via the REST API. The portal method is straightforward for getting started, and the API method is better for automation and version control.
Navigate to your Log Analytics Workspace in the Azure portal, then go to Settings > Tables and click Create > New custom log (DCR-based).
Table name: ServiceHealth_CL
The _CL suffix denotes a custom log table and is appended automatically by Azure, so enter ServiceHealth without it.
Schema definition: You'll define each column with its name and type. Azure supports the following column types for custom tables:
- string: text data, up to 32 KB per value
- int: 32-bit integer
- long: 64-bit integer
- real: floating-point number
- boolean: true/false
- datetime: timestamp
- dynamic: JSON object (stored as a string, queryable with JSON functions)
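Once the table exists and the first rows arrive, you can double-check that the column names and types came through as intended with the getschema operator:
ServiceHealth_CL
| getschema
| project ColumnName, ColumnType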
Schema evolution
One of the advantages of custom tables is that you can evolve the schema as your needs change. If you later decide you want to track additional properties — like the service account a Windows service runs under, or the priority of a scheduled task — you can add columns to the table without breaking existing data or queries.
To add a column, go to Tables > [Your Table] > Schema and click Add column. Existing rows will have null values for the new column, and new data can populate it.
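Queries that use a newly added column should tolerate those empty values. A small sketch, using a hypothetical RunAsAccount column for the service-account example above:
ServiceHealth_CL
| where TimeGenerated > ago(1h)
| where ResourceType == "WindowsService"
| extend Account = coalesce(RunAsAccount, "(not collected)") // rows ingested before the column existed are empty
| summarize Services = count() by Account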
You cannot change the type of an existing column or delete columns. If you need to make breaking schema changes, you'll need to create a new table and migrate.
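If you do end up with a replacement table, you can bridge the transition in your queries with union until the old data ages out of retention. A sketch, assuming a hypothetical ServiceHealthV2_CL successor table:
ServiceHealth_CL
| union ServiceHealthV2_CL
| summarize arg_max(TimeGenerated, *) by ServerName, Name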
Retention and cost
Log Analytics retention is configurable per table. The default is 30 days, but you can extend it up to 730 days (2 years) or reduce it to as low as 4 days.
For a health monitoring dashboard, 30 days is usually sufficient. You get a month of history for trend analysis and incident investigation without paying for long-term retention you don't need. If you want to archive data for compliance or long-term trend analysis, consider exporting to Azure Storage using a data export rule.
Cost model: You pay for data ingestion (per GB) and retention (per GB-month). Custom tables follow the same pricing as built-in tables.
For our use case, ingestion volume is low. If you're monitoring 50 components per VM across 20 VMs, collecting health snapshots every 5 minutes, that's roughly:
50 components × 20 VMs × 12 snapshots/hour × 24 hours × 30 days = 8,640,000 rows/month
Average row size ≈ 300 bytes
Total monthly ingestion ≈ 2.5 GB
At current Azure pricing (approximately $2.76/GB for ingestion in most regions), that's under $7/month for ingestion. Retention adds a small amount on top. This scales linearly — double your VMs or components, double your cost.
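Once data has been flowing for a few days, you can compare that estimate with what is actually being billed by querying the built-in Usage table (the table name below is the ServiceHealth_CL table from earlier):
Usage
| where TimeGenerated > ago(30d)
| where DataType == "ServiceHealth_CL" and IsBillable == true
| summarize IngestedMB = sum(Quantity) // Quantity is reported in megabytes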
What's next
At this point, you have a custom table in Log Analytics with a schema that matches the data your DCR is sending. The pipeline is complete: the Azure Monitor Agent is collecting health data, the DCR is transforming and routing it, and the table is ready to store it.
In Part 4, we'll build the Azure Workbook that queries this table and turns the raw data into an interactive dashboard.