Sunday, December 8, 2024

CloudWatch Insights - monitoring async workers traffic distribution

Symfony Framework's Messenger component allows you to run parts of your system's processes asynchronously. To gain a better understanding of how your application really works, you can use the following queries.

These queries filter all logs that come from Symfony Messenger Workers and aggregate them to provide a broader perspective. They also prepare a summary of all asynchronous execution units in your system, giving you greater awareness of the overall complexity and the building blocks that make up the whole.

So, at the very beginning, let me explain the data from the logs in my application (a simplified sample entry is sketched after the list below). This way you can apply these queries to your own system, keeping in mind that my log structures might differ partially or entirely.

  • message - the value MESSENGER_WORKER_MESSAGE_FINISH indicates a Symfony Messenger Message that was successfully handled,
  • extra.messengerWorker.class - event/command class that was handled by the Worker,
  • extra.messengerWorker.executionId - identifies each message that was handled by the Messenger Worker,
  • extra.requestDuration - whole Worker duration,
  • context.messengerWorker.processingDuration - duration of a single Message execution,
  • context.messengerWorker.memoryUsageMb - whole Worker memory usage.
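
To make these field names easier to follow, here is a simplified sketch of what a single worker log entry could look like in my setup. All values below are made up for illustration, and I've also included extra.executionId, which (judging by the detailed query later in this post) identifies the whole worker run - your structure may of course differ:

{
  "message": "MESSENGER_WORKER_MESSAGE_FINISH",
  "extra": {
    "messengerWorker": {
      "class": "App\\Message\\Command\\SendNewsletterCommand",
      "executionId": "0c4f9a1e"
    },
    "requestDuration": 845.2,
    "executionId": "d81b3c7f"
  },
  "context": {
    "messengerWorker": {
      "processingDuration": 1.37,
      "memoryUsageMb": 64
    }
  }
}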

📊 List all unique Workers

Thanks to that query you can easily check how many different async workers you have in your system. The dedup function helps a lot with removing duplicated message class names.

fields @timestamp, @message, @logStream, @log
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| display extra.messengerWorker.class 
| dedup extra.messengerWorker.class 

All unique Messenger events/commands that are handled asynchronously. This summary gathers all the async execution units of your application (that have worked in a given period of time) in one place.

📊 Reveal traffic distribution per Worker

The query below shows the busiest workers in your system.

fields @timestamp, @message, @logStream, @log
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| stats count(*) as executionCount by extra.messengerWorker.class  
| sort by executionCount desc

It's a simple list with the execution count for each individual worker handled in a given period of time.

The wider the time period you set for the query, the more unique worker types you will probably get in the results.

Changing the time period of the query to only one day would reveal fewer worker types, but the returned metrics might be more useful.
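
If you also want to see how that traffic spreads over time, a slight variation of the query above (assuming the same log fields) can group the counts into hourly buckets per worker:

fields @timestamp, @message, @logStream, @log
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| stats count(*) as executionCount by extra.messengerWorker.class, bin(1h)
| sort executionCount desc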

📊 More detailed Worker data

Each row represents a single worker instance: how many messages it handled, when the first and last messages were handled, and, of course, the message type it handled. Take a look at the interesting usage of the latest(fieldName) function, which makes it possible to present non-aggregated data in this summary.

fields @timestamp, @message, @logStream, @log
| parse @logStream "/php/*" as log_stream_id
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| stats latest(substr(log_stream_id, 0, 6)) as instance,
        count(extra.messengerWorker.executionId) as executions, 
        min(@timestamp) as start,
        max(@timestamp) as end,
        max(extra.requestDuration / 60) as duration_in_minutes,
        latest(extra.messengerWorker.class) as message_class
  by extra.executionId as worker_execution_id

You can use that query when you need to check the performance of a specific worker.
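
For example, to narrow it down to a single worker type, you could add one more filter on the message class (the class name below is just a made-up placeholder):

fields @timestamp, @message, @logStream, @log
| parse @logStream "/php/*" as log_stream_id
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| filter extra.messengerWorker.class like "SendNewsletterCommand"
| stats latest(substr(log_stream_id, 0, 6)) as instance,
        count(extra.messengerWorker.executionId) as executions,
        min(@timestamp) as start,
        max(@timestamp) as end,
        max(extra.requestDuration / 60) as duration_in_minutes
  by extra.executionId as worker_execution_id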

📊 The longest Worker executions

To identify the longest worker executions in your application, just use the query below. These spots could be the first candidates for optimization or potential bottlenecks that slow down your application.

fields @timestamp, @message, @logStream, @log
| filter message like /(?i)(MESSENGER_WORKER_MESSAGE_FINISH)/
| display datetime, 
          context.messengerWorker.processingDuration / 60,
          context.messengerWorker.memoryUsageMb,
          extra.messengerWorker.class
| sort context.messengerWorker.processingDuration desc
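
If a single sorted list is too noisy, a rough aggregate per message class (assuming the same fields as above) shows average, 95th percentile, and maximum processing times in one view:

fields @timestamp, @message, @logStream, @log
| filter message = "MESSENGER_WORKER_MESSAGE_FINISH"
| stats avg(context.messengerWorker.processingDuration) as avg_duration,
        pct(context.messengerWorker.processingDuration, 95) as p95_duration,
        max(context.messengerWorker.processingDuration) as max_duration,
        max(context.messengerWorker.memoryUsageMb) as max_memory_mb
  by extra.messengerWorker.class
| sort p95_duration desc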

Sunday, December 1, 2024

CloudWatch Insights - revealing endpoints' clients

I'll show you how to list all the clients of your application's HTTP endpoints, how much traffic they generate, and how exactly it is distributed over a given period of time. The few queries below can easily reveal knowledge about application dependencies (with minimal effort on your end) that you might not be aware of.

At the very beginning I'm going to show you what the log structure looks like in my case. It may differ from how it looks in your application, but what matters is highlighting the values that you need to achieve our goal: finding out who is calling your application's endpoints (a simplified sample log entry is sketched after the list below).


JSON structure

  • User Agent of the client - in my case it's passed in the request headers under the key user-agent, which needs to be parsed first because, even though that key is auto-discovered by CloudWatch, it contains a dash (-) in the key name and so cannot be used in a query directly,
  • value that distinguishes an Http Request log - message,
  • Http Request Method - context.method,
  • Http Request Name - context.route - we cannot use the exact URI that was called by the client, because URIs for the same endpoint may differ when the endpoint has a param inside its path, e.g. the {{id}} of the resource it refers to, which makes log aggregation impossible,
  • Caller IP - context.clientIp.
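
Here is a simplified sketch of what such an HTTP request log entry could look like. All values are made up, and the nesting of the headers under context is only an illustration; note the dash in the user-agent header key, which is why it has to be parsed out of @message:

{
  "message": "HTTP_REQUEST",
  "context": {
    "method": "GET",
    "route": "get_invoice",
    "clientIp": "10.0.12.34",
    "headers": {
      "user-agent": ["Go-http-client/2.0"]
    }
  }
}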

📊 List all the Clients per endpoint

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST" 
| display context.method, 
          context.route, 
          userAgent, 
          context.clientIp
| sort context.method, 
       context.route, 
       userAgent asc
| dedup context.route, 
        context.clientIp
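
If the caller IPs rotate (e.g. the clients sit behind a NAT or a load balancer), a small variation that dedups by User Agent instead of IP might give a cleaner per-endpoint client list:

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST"
| display context.method,
          context.route,
          userAgent
| sort context.route asc
| dedup context.route,
        userAgent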


📊 Count all requests made by each Client

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST" and
         ispresent(userAgent)
| stats count(userAgent) as endpointCallByExternalService by userAgent
| sort endpointCallByExternalService desc
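
To see not only how much each Client calls you but also which endpoints it hits, the same query can (with the same assumptions about log fields) be grouped by both User Agent and route:

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST" and
         ispresent(userAgent)
| stats count(*) as requests by userAgent, context.route
| sort requests desc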


📊 Traffic distribution over time per selected Client

Based on the results above, you need to manually parse each Client that you want to see on the chart and add it to the stats section to finally display it.

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| parse userAgent /(?<isGo>Go-http-client?)/
| parse userAgent /(?<isSymfony>Symfony?)/
| parse userAgent /(?<isPostman>PostmanRuntime?)/
| parse userAgent /(?<isJava>Java?)/
| parse userAgent /(?<isPython>python-requests?)/
| parse userAgent /(?<isRuby>Ruby?)/
| filter message = "HTTP_REQUEST" and
         ispresent(userAgent)
| stats count(isGo),
        count(isSymfony),
        count(isPostman),
        count(isJava),
        count(isPython),
        count(isRuby)
  by bin(1h)
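
As a side note, if you'd rather skip the one-parse-per-Client lines, the strcontains function (which returns 1 when the substring is found and 0 otherwise) should let you build the same chart by summing matches per time bucket. A sketch, assuming the same user agents as above:

fields @timestamp, @message, @logStream, @log
| parse @message '"user-agent":["*"]' as userAgent
| filter message = "HTTP_REQUEST" and
         ispresent(userAgent)
| stats sum(strcontains(userAgent, "Go-http-client")) as goRequests,
        sum(strcontains(userAgent, "Symfony")) as symfonyRequests,
        sum(strcontains(userAgent, "PostmanRuntime")) as postmanRequests,
        sum(strcontains(userAgent, "python-requests")) as pythonRequests
  by bin(1h)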
        


Conclusion


Now that you have gained a new perspective on, and knowledge of, the system you're maintaining, you can start asking questions about the things that are still unrevealed:

  • is one User Agent shared by two or more systems (e.g. two applications implemented in Java)?
  • which of the User Agents are internal systems and which ones are external?
  • why do they need data from your application?
  • why does the traffic distribution look exactly like this?