Friday, December 27, 2024

CloudWatch - integration with Spring application


This document is a quick summary of all the steps you need to take to get logs into CloudWatch from a Spring application. I won't go into detail about each step, because the goal is to gather all the different areas needed for the wider task in one place and to keep this manual concise. 


📌 #1 IAM Role 

The IAM policy CloudWatchLogsFullAccess needs to be added to the IAM role attached to your EC2 instance.
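
If you manage the role from the command line, attaching the managed policy is a one-liner with the AWS CLI. A sketch of that infrastructure configuration command; the role name is a hypothetical placeholder for whatever instance role your EC2 uses:

```shell
# Hypothetical role name - replace with the instance role attached to your EC2.
ROLE_NAME="my-ec2-instance-role"

# CloudWatchLogsFullAccess is an AWS-managed policy, so its ARN is fixed.
aws iam attach-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/CloudWatchLogsFullAccess
```

Running this requires the AWS CLI to be configured with credentials that have IAM permissions.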

📌 #2 Spring and Logback

First, you need the Logback library as a Maven dependency. (Note that Spring Boot starters already pull in logback-classic transitively, so declaring it explicitly mainly matters when you want to pin the version.) 
  
  <dependency>
    <groupId>ch.qos.logback</groupId>
    <artifactId>logback-classic</artifactId>
    <version>1.5.6</version>
  </dependency>
Then you have to add a Logback configuration file at src/main/resources/logback.xml. Below is an example of the file content: 
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <include resource="org/springframework/boot/logging/logback/base.xml"/>

    <property name="LOGS" value="./logs" />

    <appender name="Console"
              class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <Pattern>
                %black(%d{ISO8601}) %highlight(%-5level) [%blue(%t)] %yellow(%C{1}): %msg%n%throwable
            </Pattern>
        </layout>
    </appender>

    <appender name="RollingFile"
              class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOGS}/your-application-standard-logger.log</file>
        <encoder
                class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
            <Pattern>%d %p %C{1} [%t] %m%n</Pattern>
        </encoder>

        <rollingPolicy
                class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOGS}/archived/your-application-standard-logger-%d{yyyy-MM-dd}.%i.log
            </fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy
                    class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>10MB</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
        </rollingPolicy>
    </appender>

    <root level="info">
        <appender-ref ref="RollingFile" />
        <appender-ref ref="Console" />
    </root>

    <logger name="com.your.application" level="trace" additivity="false">
        <appender-ref ref="RollingFile" />
        <appender-ref ref="Console" />
    </logger>

</configuration>

📌 #3 Running the Application Container 

The container with your Spring application needs to be started with a special configuration - you need to map the logs directory inside the container to a directory on the EC2 host:

docker run -d -v /path-in-the-ec2/logs:/logs <<IMAGE>>

To see the logs in the Docker Container, go into the running container and open the log file: 

docker exec -it <container_id> bash

cat /logs/your-application-standard-logger.log

Alternatively, you can view the logs directly on the EC2 host:

tail -50 /path-in-the-ec2/logs/your-application-standard-logger.log

📌 #4 CloudWatch Agent 

You need to install the CloudWatch Agent on your EC2 instance (the following command is for Amazon Linux): 

sudo yum install amazon-cloudwatch-agent

Then set up the Agent configuration file using the command sudo nano /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json; below is an example of the content I propose: 

{
    "agent": {
        "region": "your-region"
    },
    "logs": {
        "logs_collected": {
            "files": {
                "collect_list": [
                    {
                        "file_path": "/path-in-the-ec2/logs/your-application-standard-logger.log",
                        "log_group_name": "your-application-group",
                        "log_stream_name": "your-application-stream",
                        "timestamp_format": "%Y-%m-%d %H:%M:%S.%f"
                    }
                ]
            }
        },
        "force_flush_interval": 15
    }
}
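
Before (re)starting the agent, it can save some debugging time to check that the file is valid JSON. A minimal sketch, assuming python3 is available; the /tmp path and the sample content are just for illustration - point CONFIG at the real file on your instance:

```shell
# Hypothetical scratch copy of the agent config; on a real host, set CONFIG to
# /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json instead.
CONFIG=/tmp/amazon-cloudwatch-agent.json
cat > "$CONFIG" <<'EOF'
{
    "agent": {"region": "your-region"},
    "logs": {"logs_collected": {"files": {"collect_list": []}}, "force_flush_interval": 15}
}
EOF
# json.tool exits non-zero on malformed JSON, so the message only prints for a valid file.
python3 -m json.tool "$CONFIG" > /dev/null && echo "config OK"
```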

Then you can start the CloudWatch Agent with the appended configuration file:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a append-config -s -c file:/opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json


To check the status of the Agent:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status
And you can stop the Agent manually:
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a stop


📌 #5 Summary

After that, go to AWS -> CloudWatch -> Log Groups, where you should see the new log group. All of the described steps are shown in the diagram below:



Sunday, December 8, 2024

CloudWatch Insights - monitoring async workers traffic distribution

The Symfony Framework's Messenger component allows you to run parts of your system's processes asynchronously. To gain a better understanding of how your application really works, you can use the following queries. 

These queries filter all logs that come from Symfony Messenger workers and aggregate them to provide a broader perspective. They also prepare a summary of all asynchronous execution units in your system, giving you greater awareness of the overall complexity and of the building blocks that make up the whole.

So, at the very beginning, let me explain the data in my application's logs. This way you can apply these queries to your own system, keeping in mind that my log structures might differ partially or entirely from yours.

  • message - the value MESSENGER_WORKER_MESSAGE_FINISH indicates a Symfony Messenger message that was successfully handled,
  • extra.messengerWorker.class - the event/command class that was handled by the worker,
  • extra.messengerWorker.executionId - identifies each message that was handled by a Messenger worker,
  • extra.requestDuration - the duration of the whole worker run,
  • context.messengerWorker.processingDuration - the duration of a single message's execution,
  • context.messengerWorker.memoryUsageMb - the memory usage of the whole worker.

📊 List all unique Workers 

This query lets you easily check how many different async workers you have in your system. The dedup function helps a lot with removing duplicate message class names.    

fields @timestamp, @message, @logStream, @log
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| display extra.messengerWorker.class 
| dedup extra.messengerWorker.class 

The result is all the unique Messenger events/commands that are handled asynchronously. This summary gathers all the async execution units of your application (those that did any work in the given period of time) in one place. 
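
To make the dedup step concrete: outside CloudWatch, the same operation is just unique-filtering of the extracted class names. A local sketch over hypothetical sample values (the class names are made up):

```shell
# Hypothetical message class names extracted from MESSENGER_WORKER_MESSAGE_FINISH logs.
printf '%s\n' \
  'App\Message\SendEmail' \
  'App\Message\GenerateReport' \
  'App\Message\SendEmail' \
  | sort -u   # plays the role of Insights' dedup: one row per unique class
```

The three sample lines collapse to two unique classes, just as dedup collapses repeated extra.messengerWorker.class values.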

📊 Reveal traffic distribution per Worker 

This query shows the busiest workers in your system.     

fields @timestamp, @message, @logStream, @log
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| stats count(*) as executionCount by extra.messengerWorker.class  
| sort by executionCount desc

It's a simple list with the execution count for each individual worker type handled in the given period of time.

The wider the time period you set for the query, the more unique worker types you will probably get in the results. 

Changing the time period of the query to only one day would reveal fewer worker types, but the returned metrics might be more useful. 
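
The stats count(*) ... by ... aggregation has a familiar shell analogue. Assuming a hypothetical stream of handled class names, sort | uniq -c | sort -rn produces the same "busiest first" ranking:

```shell
# Hypothetical stream of handled message classes; output lists the busiest worker first.
printf '%s\n' SendEmail GenerateReport SendEmail SendEmail \
  | sort | uniq -c | sort -rn
```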

📊 More detailed Workers data 

Each row represents a single worker execution per instance: how many messages it handled, when the first and last messages were handled, and, of course, the message type it handled. Note the interesting use of the latest(fieldName) function, which allows non-aggregated values to be presented in this summary.      

fields @timestamp, @message, @logStream, @log
| parse @logStream "/php/*" as log_stream_id
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| stats latest(substr(log_stream_id, 0, 6)) as instance,
        count(extra.messengerWorker.executionId) as executions, 
        min(@timestamp) as start,
        max(@timestamp) as end,
        max(extra.requestDuration / 60) as duration_in_minutes,
        latest(extra.messengerWorker.class) as message_class
  by extra.executionId as worker_execution_id

You can use this query when you need to check the performance of a specific worker.

📊 The longest Workers executions

To identify the longest worker executions in your application, just use the query below. These spots could be the first candidates for optimization, or potential bottlenecks that slow down your application.       

fields @timestamp, @message, @logStream, @log
| filter message like /(?i)(MESSENGER_WORKER_MESSAGE_FINISH)/
| display datetime, 
          context.messengerWorker.processingDuration / 60,
          context.messengerWorker.memoryUsageMb,
          extra.messengerWorker.class
| sort context.messengerWorker.processingDuration desc

Sunday, December 1, 2024

CloudWatch Insights - revealing endpoints' clients

I'll show you how to list all the clients of your application's HTTP endpoints, how much traffic they generate, and how exactly it is distributed over a given period of time. The few queries below can easily reveal knowledge about application dependencies (with minimal effort on your end) that you might not be aware of.

First, let me show you what the log structure looks like in my case. It may look different in your application, but what matters is highlighting the values you need in order to find out who is calling your application's endpoints. 


JSON structure

  • User Agent of the client - in my case it's passed in the request headers under the key user-agent, which needs to be parsed first, because even though that key is auto-discovered by CloudWatch, it contains a dash (-) in its name and so cannot be used directly in a query,
  • message - the value that distinguishes an HTTP request log,
  • HTTP Request Method - context.method,
  • HTTP Request Name - context.route - we cannot use the exact URI called by the client, because the same URI may differ when the endpoint has a parameter inside its path, e.g. the {{id}} of the resource it refers to - which would make log aggregation impossible,  
  • Caller IP - context.clientIp.  

📊 List all the Clients per endpoint

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST" 
| display context.method, 
          context.route, 
          userAgent, 
          context.clientIp
| sort context.method, 
       context.route, 
       userAgent asc
| dedup context.route, 
        context.clientIp


📊 Count all requests done by each Client

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST" and
         ispresent(userAgent)
| stats count(userAgent) as endpointCallByExternalService by userAgent
| sort endpointCallByExternalService desc
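
As a sanity check for the parse pattern, the same extraction-and-count can be reproduced locally with sed. The log lines below are invented, but they mimic the "user-agent":["..."] shape that the query parses:

```shell
# Hypothetical raw log lines in the shape the Insights parse step expects.
printf '%s\n' \
  '{"message":"HTTP_REQUEST","headers":{"user-agent":["PostmanRuntime/7.36.0"]}}' \
  '{"message":"HTTP_REQUEST","headers":{"user-agent":["Go-http-client/1.1"]}}' \
  '{"message":"HTTP_REQUEST","headers":{"user-agent":["PostmanRuntime/7.36.0"]}}' \
  | sed -n 's/.*"user-agent":\["\([^"]*\)"\].*/\1/p' \
  | sort | uniq -c | sort -rn
```

The sed expression corresponds to the parse @message '"user-agent":["*"]' step; the trailing sort | uniq -c | sort -rn corresponds to stats count(userAgent) ... sorted descending.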


📊 Traffic distribution over time per chosen Client

Based on the results above, you need to manually parse each client that you want to see in the chart and add it to the stats section to finally display it. 
fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| parse userAgent /(?<isGo>Go-http-client?)/
| parse userAgent /(?<isSymfony>Symfony?)/
| parse userAgent /(?<isPostman>PostmanRuntime?)/
| parse userAgent /(?<isJava>Java?)/
| parse userAgent /(?<isPython>python-requests?)/
| parse userAgent /(?<isRuby>Ruby?)/
| filter message = "HTTP_REQUEST" and
         ispresent(userAgent)
| stats count(isGo),
        count(isSymfony),
        count(isPostman),
        count(isJava),
        count(isPython),
        count(isRuby)
  by bin(1h)
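
The bin(1h) grouping can also be sketched locally: truncating timestamps to the hour before counting gives the same per-hour buckets. The timestamps and client names below are invented:

```shell
# Hypothetical "timestamp client" pairs; cutting the timestamp at the hour imitates bin(1h).
printf '%s\n' \
  '2024-12-01T10:05 Go-http-client' \
  '2024-12-01T10:40 Go-http-client' \
  '2024-12-01T11:10 PostmanRuntime' \
  | awk '{split($1, t, ":"); print t[1], $2}' \
  | sort | uniq -c
```

The two 10:xx requests land in one bucket and the 11:xx request in another, which is exactly how the Insights chart buckets the counts per hour.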
        


Conclusion


Now that you have a new perspective on, and knowledge of, the system you maintain, you can start asking questions about the things that are still unrevealed:

  • is one User Agent represented by two or more systems (e.g. two applications implemented in Java)?
  • which of the User Agents are internal systems and which are external? 
  • why do they need data from your application?
  • why does the traffic distribution look exactly like this?