Publish Application Logs to Azure Monitor Logs

This guide describes how to use the Graal Development Kit for Micronaut (GDK) to create an application that publishes logs to Azure Monitor Logs. Typically, application logs are written to stdout and to files, but the Micronaut® Logging module supports multiple logging frameworks, such as Logback, and can route log events to a variety of appenders and destinations, for example email, a database, or other systems.

Prerequisites

To follow this guide you need:

    • A JDK 17 installation
    • The Azure CLI (az), signed in to an Azure subscription
    • jq and curl
    • The GDK CLI or access to the GDK Launcher

Follow the steps below to create the application from scratch. However, you can also download the completed example. The application ZIP file is downloaded to your default downloads directory; unzip it and proceed to the next steps.

A note regarding your development environment

Consider using Visual Studio Code with the Graal Development Kit for Micronaut Extension Pack, which provides native support for developing GDK applications.

Note: If you use IntelliJ IDEA, enable annotation processing.

Windows platform: The GDK guides are compatible with Gradle only. Maven support is coming soon.

1. Create the Application

Create an application using the GDK Launcher.

  1. Open the GDK Launcher in advanced mode.

  2. Create a new project using the following selections.
    • Project Type: Application (Default)
    • Project Name: azure-logging-demo
    • Base Package: com.example (Default)
    • Clouds: Azure
    • Build Tool: Gradle (Groovy) or Maven
    • Language: Java (Default)
    • Test Framework: JUnit (Default)
    • Java Version: 17 (Default)
    • Micronaut Version: (Default)
    • Cloud Services: Logging
    • Features: GraalVM Native Image (Default)
    • Sample Code: Yes (Default)
  3. Click Generate Project, then click Download Zip. The GDK Launcher creates an application with the package com.example in a directory named azure-logging-demo, and the application ZIP file is downloaded to your default downloads directory. Unzip it, open it in your code editor, and proceed to the next steps.

Alternatively, use the GDK CLI as follows:

gdk create-app com.example.azure-logging-demo \
 --clouds=azure \
 --services=logging \
 --features=graalvm \
 --build=gradle \
 --jdk=17  \
 --lang=java

Open the micronaut-cli.yml file to see which features are packaged with the application:

features: [app-name, gdk-azure-cloud-app, gdk-azure-logging, gdk-bom, gdk-license, graalvm, http-client, java, java-application, junit, logback, maven, maven-enforcer-plugin, micronaut-http-validation, netty-server, properties, readme, serialization-jackson, shade, static-resources]

The GDK Launcher creates a multi-module project with two subprojects: azure for Microsoft Azure, and lib for common code and configuration shared across cloud platforms. You develop the application logic in the lib subproject, and keep the Microsoft Azure-specific configurations in the azure subproject.
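Abbreviated to the files used in this guide (build scripts and tests omitted), the layout looks roughly like this:

azure-logging-demo/
├── azure/
│   └── src/main/
│       ├── java/com/example/LogController.java
│       └── resources/
│           ├── application.properties
│           └── logback.xml
└── lib/
    └── src/main/java/com/example/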

The Micronaut Logging service that you selected at the project generation step bundles Logback, Jackson Databind, Azure Logging, and other necessary dependencies. The Logback appender publishes logs to Azure Monitor Logs.

1.1. Controller Class

The example code includes a controller in a file named azure/src/main/java/com/example/LogController.java, which enables you to send POST requests to publish a message to a log:

package com.example;

import io.micronaut.http.annotation.Body;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Controller
class LogController {

    private static final Logger LOG = LoggerFactory.getLogger(LogController.class);

    @Post("/log")
    void log(@Body String message) {
        LOG.info(message);
    }
}
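If you want to exercise this endpoint from a test, a minimal sketch using Micronaut's JUnit 5 test support and the bundled HTTP client might look like the following. The class name is illustrative and the test is not part of the generated example; also note that once the Azure appender is configured (see section 3), running it will attempt to publish to Azure Monitor Logs.

package com.example;

import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.client.HttpClient;
import io.micronaut.http.client.annotation.Client;
import io.micronaut.test.extensions.junit5.annotation.MicronautTest;
import jakarta.inject.Inject;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

@MicronautTest
class LogControllerTest {

    @Inject
    @Client("/")
    HttpClient client; // Client pointed at the embedded test server

    @Test
    void logEndpointAcceptsAMessage() {
        // POST a plain message to the /log endpoint exposed by LogController.
        HttpResponse<?> response = client.toBlocking()
                .exchange(HttpRequest.POST("/log", "test message"));

        // The controller returns no body, so a successful (2xx) status is all we assert.
        assertTrue(response.getStatus().getCode() >= 200 && response.getStatus().getCode() < 300);
    }
}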

2. Create Azure Cloud Resources

You will create a resource group, a Log Analytics Workspace and a table in the workspace to hold log event data, a Data Collection Endpoint, and a Data Collection Rule.

First define the following environment variables. You can customize the values as needed:

export LOCATION=eastus
export TABLE_NAME=GDKTable_CL
export COLLECTION_ENDPOINT_NAME=gdkCollectionEndpoint
export RESOURCE_GROUP_NAME=gdkguidelogging
export GDK_WORK_SPACE=gdkworkspace
export STREAM_NAME=Custom-GDKTable

2.1. Create a Resource Group

Some of the following commands use jq, which is a lightweight and flexible command-line JSON processor.

We recommend that you create a new resource group for this guide, but you can use an existing resource group instead.

Run the az group create command to create a resource group named gdkguidelogging (the value of RESOURCE_GROUP_NAME) in the eastus region:

az group create --location $LOCATION --name $RESOURCE_GROUP_NAME

If you prefer to use a region geographically closer to you, run az account list-locations to list all available regions, and update the LOCATION variable accordingly.

2.2. Add the monitor-control-service CLI Extension

Run the az extension add command to add the monitor-control-service CLI Extension:

az extension add -n monitor-control-service

2.3. Add the log-analytics CLI Extension

Run the az extension add command to add the log-analytics CLI Extension:

az extension add -n log-analytics

2.4. Create a Log Analytics Workspace

Run the az monitor log-analytics workspace create command to create a Log Analytics Workspace:

WORKSPACE_RES=$(az monitor log-analytics workspace create \
    --name $GDK_WORK_SPACE \
    --resource-group $RESOURCE_GROUP_NAME )
echo $WORKSPACE_RES

The response should look like this:

{
  "createdDate": "2024-08-19T20:11:18.4002398Z",
  "customerId": "c222d080-5b14-43e7-b648-71c2b358dc74",
  ...
  "name": "gdkworkspace",
  "provisioningState": "Creating",
   ...
}

Save the value of the customerId attribute. This is your workspace GUID, and it will be needed later.

GDK_WORK_SPACE_ID=$(echo $WORKSPACE_RES | jq -r .customerId)

2.5. Create a Data Collection Endpoint

Run the az monitor data-collection endpoint create command to create a Data Collection Endpoint:

COLLECTION_ENDPOINT_CREATE=$(az monitor data-collection endpoint create \
    --data-collection-endpoint-name $COLLECTION_ENDPOINT_NAME \
    --public-network-access Enabled \
    --resource-group $RESOURCE_GROUP_NAME )
echo $COLLECTION_ENDPOINT_CREATE

The response should look like this:

{
  ...
  "logsIngestion": {
    "endpoint": "https://gdkcollectionendpoint-xxxx.eastus-1.ingest.monitor.azure.com"
  },
  ...
  "name": "gdkCollectionEndpoint",
  ...
  "type": "Microsoft.Insights/dataCollectionEndpoints"
}

Save the value of the logsIngestion.endpoint attribute. This is the logs ingestion endpoint URL, and it will be needed later.

COLLECTION_ENDPOINT_URL=$(echo $COLLECTION_ENDPOINT_CREATE | jq -r '.logsIngestion.endpoint|ltrimstr("https://")')

2.6. Create a Table in Your Log Analytics Workspace

Run the az monitor log-analytics workspace table create command to create a table in your Log Analytics workspace to hold the log records:

az monitor log-analytics workspace table create \
    --name $TABLE_NAME \
    --workspace-name $GDK_WORK_SPACE \
    --columns EventTimestamp=long Source=string Subject=string Data=string TimeGenerated=datetime \
    --resource-group $RESOURCE_GROUP_NAME

2.7. Create a Data Collection Rule

Create a file named dcr.json with this content:

{
   "location": "LOCATION",
   "properties": {
      "streamDeclarations": {
         "STREAM_NAME": {
            "columns": [
               {
                  "name": "TimeGenerated",
                  "type": "datetime"
               },
               {
                  "name": "EventTimestamp",
                  "type": "long"
               },
               {
                  "name": "Source",
                  "type": "string"
               },
               {
                  "name": "Subject",
                  "type": "string"
               },
               {
                  "name": "Data",
                  "type": "string"
               }
            ]
         }
      },
      "destinations": {
         "logAnalytics": [
            {
               "workspaceResourceId": "/subscriptions/SUBSCRIPTION_ID/resourceGroups/RESOURCE_GROUP_NAME/providers/microsoft.operationalinsights/workspaces/GDK_WORK_SPACE",
               "name": "gdkLogDestination"
            }
         ]
      },
      "dataFlows": [
         {
            "streams": [
               "STREAM_NAME"
            ],
            "destinations": [
               "gdkLogDestination"
            ],
            "transformKql": "source | extend TimeGenerated = now()",
            "outputStream": "Custom-TABLE_NAME"
         }
      ]
   }
}

Substitute the placeholders inside the file with the values of the environment variables you defined earlier. These commands also use SUBSCRIPTION_ID; if you have not yet set it, you can obtain your Azure subscription ID with export SUBSCRIPTION_ID=$(az account show --query id --output tsv).

sed -i -e "s/LOCATION/$LOCATION/" dcr.json
sed -i -e "s/SUBSCRIPTION_ID/$SUBSCRIPTION_ID/" dcr.json
sed -i -e "s/RESOURCE_GROUP_NAME/$RESOURCE_GROUP_NAME/" dcr.json
sed -i -e "s/GDK_WORK_SPACE/$GDK_WORK_SPACE/" dcr.json
sed -i -e "s/TABLE_NAME/$TABLE_NAME/" dcr.json
sed -i -e "s/STREAM_NAME/$STREAM_NAME/" dcr.json

Run the az monitor data-collection rule create command to create a data collection rule:

RULE_RESULT=$(az monitor data-collection rule create \
    --name gdkCollectionRule \
    --location $LOCATION \
    --endpoint-id /subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Insights/dataCollectionEndpoints/$COLLECTION_ENDPOINT_NAME \
    --rule-file "dcr.json" \
    --resource-group $RESOURCE_GROUP_NAME )
echo $RULE_RESULT

The response should look like this:

{
  "dataCollectionEndpointId": "/subscriptions/fe053...",
  "dataFlows": [
    {
      "destinations": [
        "gdkLogDestination"
      ],
      "outputStream": "Custom-GDKTable_CL",
      "streams": [
        "Custom-GDKTable"
      ],
      "transformKql": "source | extend TimeGenerated = now()"
    }
  ],
  ...
  "immutableId": "dcr-e7ebfceb7df24631b64d7ae880eb8ada",
  "location": "eastus",
  "name": "gdkCollectionRule",
  ...
  "type": "Microsoft.Insights/dataCollectionRules"
}

Save the value of the immutableId attribute. This is the rule ID, and it will be needed later.

RULE_ID=$(echo $RULE_RESULT | jq -r .immutableId)

2.8. Authorize Sending Logs

If you have not already done so, define the following environment variables:

export SUBSCRIPTION_ID=<subscription-id>
export SUBSCRIPTION_EMAIL=<email>

replacing <email> with the email address associated with your account, and <subscription-id> with your Azure Subscription ID.

Authorize sending logs by assigning the Monitoring Metrics Publisher role to yourself:

az role assignment create \
    --role "Monitoring Metrics Publisher" \
    --assignee $SUBSCRIPTION_EMAIL \
    --scope "/subscriptions/$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP_NAME/providers/Microsoft.Insights/dataCollectionRules/gdkCollectionRule"

Note that it can take a few minutes for the role grant to propagate.

3. Configure the Appender and Application Properties

3.1. Logback Appender

The GDK Launcher generated a file named azure/src/main/resources/logback.xml containing the configuration for an appender that publishes log events to Azure Monitor Logs:

<configuration debug='false'>

    <!--
    You can un-comment the STDOUT appender and <appender-ref ref='STDOUT'/> in
    the cloud appender to log to STDOUT as the 'emergency' appender.
    -->

    <!--
    <appender name='STDOUT' class='ch.qos.logback.core.ConsoleAppender'>
        <encoder>
            <pattern>%cyan(%d{HH:mm:ss.SSS}) %gray([%thread]) %highlight(%-5level) %magenta(%logger{36}) - %msg%n</pattern>
        </encoder>
    </appender>
    -->

    <appender name='AZURE' class='io.micronaut.azure.logging.AzureAppender'>
        <!-- <appender-ref ref='STDOUT'/> -->
        <encoder class='ch.qos.logback.core.encoder.LayoutWrappingEncoder'>
            <layout class='ch.qos.logback.contrib.json.classic.JsonLayout'>
                <jsonFormatter class='io.micronaut.azure.logging.AzureJsonFormatter'/>
            </layout>
        </encoder>
    </appender>

    <root level='INFO'>
        <appender-ref ref='AZURE'/>
    </root>

</configuration>

Note: You can un-comment the STDOUT appender as the 'emergency' appender. See the file for details.

3.2. Set Application Configuration Properties

Update the application.properties file in the azure subproject as follows:

azure.logging.data-collection-endpoint=<endpoint>
azure.logging.rule-id=<ruleid>
azure.logging.stream-name=Custom-GDKTable

    • Replace <endpoint> with the logs ingestion endpoint URL that you saved earlier (the value of COLLECTION_ENDPOINT_URL).
    • Replace <ruleid> with the rule ID that you saved earlier (the value of RULE_ID).
    • Set azure.logging.stream-name to the name of the stream you declared in the data collection rule, "Custom-GDKTable" (the value of STREAM_NAME).
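Optionally, if you want to confirm at startup that these properties are being resolved, a small eagerly-initialized bean can log them. The class below is purely an illustrative sketch and not part of the generated example:

package com.example;

import io.micronaut.context.annotation.Context;
import io.micronaut.context.annotation.Value;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Context // Eagerly initialized, so the message below is logged at startup
class AzureLoggingConfigCheck {

    private static final Logger LOG = LoggerFactory.getLogger(AzureLoggingConfigCheck.class);

    AzureLoggingConfigCheck(@Value("${azure.logging.stream-name}") String streamName,
                            @Value("${azure.logging.rule-id}") String ruleId) {
        // Hypothetical sanity check: startup fails if either property is missing.
        LOG.info("Azure logging configured with stream '{}' and rule '{}'", streamName, ruleId);
    }
}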

With the appender and the application configuration in place, you can run the application and publish logs.

4. Run the Application, Publish and View Logs

To run the application, use the following command, which starts the application on port 8080.

./gradlew :azure:run

Send a test request with curl to publish a log message:

MESSAGE="Hello World"
POST_RES=$(curl -s -id '{"message":"'"$MESSAGE"'"}' \
       -H "Content-Type: application/json" \
       -X POST http://localhost:8080/log)
echo $POST_RES
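If you prefer Java over curl, the same request can be sent with the JDK's built-in HTTP client. The class below is a standalone, illustrative sketch (the class name is arbitrary and it is not part of the project):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PublishLogMessage {

    public static void main(String[] args) throws Exception {
        String message = "Hello World";

        // Mirror the curl request above: POST a JSON body to the /log endpoint.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/log"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"message\":\"" + message + "\"}"))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println("Status: " + response.statusCode());
    }
}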

Run the az monitor log-analytics query command to retrieve log events that were pushed while running your application:

LOG_RES=$(az monitor log-analytics query \
    --workspace $GDK_WORK_SPACE_ID \
    --analytics-query "search in ($TABLE_NAME) \"$MESSAGE\" | where TimeGenerated > ago(1h) | take 1" )
echo $LOG_RES

Note that it can take a few minutes for log entries to become available.

5. Clean up Cloud Resources

Once you are done with this guide, you can delete the Azure resources created to avoid incurring unnecessary charges.

Delete the resource group and all of its resources with:

az group delete --name $RESOURCE_GROUP_NAME

Alternatively, run these commands to delete resources individually:

az monitor data-collection rule delete --name gdkCollectionRule --resource-group $RESOURCE_GROUP_NAME
az monitor log-analytics workspace table delete --name $TABLE_NAME --workspace-name $GDK_WORK_SPACE --resource-group $RESOURCE_GROUP_NAME
az monitor data-collection endpoint delete --name $COLLECTION_ENDPOINT_NAME --resource-group $RESOURCE_GROUP_NAME
az monitor log-analytics workspace delete --workspace-name $GDK_WORK_SPACE --resource-group $RESOURCE_GROUP_NAME --force
az group delete --name $RESOURCE_GROUP_NAME

Summary

This guide demonstrated how to use the GDK to create an application that publishes logs to Azure Monitor Logs. You then ran the application, published log messages with a POST request, and used the Azure CLI to query the Log Analytics workspace and view the logs produced by the application.