
Use natural language to query Amazon CloudWatch logs and metrics (preview)


To make it easier to interact with your operational data, Amazon CloudWatch is introducing today natural language query generation for Logs and Metrics Insights. With this capability, powered by generative artificial intelligence (AI), you can describe in English the insights you are looking for, and a Logs or Metrics Insights query will be automatically generated.

This feature provides three main capabilities for CloudWatch Logs and Metrics Insights:

  • Generate new queries from a description or a question to help you get started easily.
  • Query explanation to help you learn the language including more advanced features.
  • Refine existing queries using guided iterations.

Let’s see how these work in practice with a few examples. I’ll cover logs first and then metrics.

Generate CloudWatch Logs Insights queries with natural language
In the CloudWatch console, I select Logs Insights in the Logs section. I then select the log group of an AWS Lambda function that I want to examine.

I choose the Query generator button to open a new Prompt field where I enter what I need using natural language:

Tell me the duration of the 10 slowest invocations

Then, I choose Generate new query. The following Logs Insights query is automatically generated:

fields @timestamp, @requestId, @message, @logStream, @duration 
| filter @type = "REPORT" and @duration > 1000
| sort @duration desc
| limit 10

Console screenshot.

I choose Run query to see the results.

Console screenshot.
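
The same generated query can also be run outside the console. Here is a minimal Boto3 sketch under a few assumptions: the log group name is hypothetical and the time range is the last hour; the query string is the one generated above.

import time
from datetime import datetime, timedelta, timezone

import boto3

logs = boto3.client("logs")

# Hypothetical log group name; replace it with your Lambda function's log group.
LOG_GROUP = "/aws/lambda/my-function"

end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)

# Start the generated Logs Insights query over the selected time range.
query = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(start.timestamp()),
    endTime=int(end.timestamp()),
    queryString=(
        'fields @timestamp, @requestId, @message, @logStream, @duration '
        '| filter @type = "REPORT" and @duration > 1000 '
        '| sort @duration desc '
        '| limit 10'
    ),
)

# Poll until the query reaches a terminal state, then print each result row.
while True:
    response = logs.get_query_results(queryId=query["queryId"])
    if response["status"] not in ("Scheduled", "Running"):
        break
    time.sleep(1)

for row in response["results"]:
    print({field["field"]: field["value"] for field in row})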

I notice that now there is too much information in the output. I prefer to see only the data I need, so I enter the following sentence in the Prompt and choose Update query.

Show only timestamps and latency

The query is updated based on my input and only the timestamp and duration are returned:

fields @timestamp, @duration 
| filter @type = "REPORT" and @duration > 1000
| sort @duration desc
| limit 10

I run the updated query and get a result that is easier for me to read.

Console screenshot.

Now, I want to know if there are any errors in the log. I enter this sentence in the Prompt and generate a new query:

Count the number of ERROR messages

As requested, the generated query counts the messages that contain the ERROR string:

fields @message
| filter @message like /ERROR/
| stats count()

I run the query and find out that there are more errors than I expected. I need more information.

Console screenshot.

I use this prompt to update the query and get a better distribution of the errors:

Show the errors per hour

The updated query uses the bin() function to group the results in one hour intervals.

fields @timestamp, @message
| filter @message like /ERROR/
| stats count(*) by bin(1h)

Let’s see a more advanced query about memory usage. I select the log groups of a few Lambda functions and type:

Show invocations with the most over-provisioned memory grouped by log stream

Before generating the query, I choose the gear icon to toggle the options to include my prompt and an explanation as comment. Here’s the result (I split the explanation over multiple lines for readability):

# Show invocations with the most over-provisioned memory grouped by log stream

fields @logStream, @memorySize/1000/1000 as memoryMB, @maxMemoryUsed/1000/1000 as maxMemoryUsedMB, (@memorySize/1000/1000 - @maxMemoryUsed/1000/1000) as overProvisionedMB 
| stats max(overProvisionedMB) as maxOverProvisionedMB by @logStream 
| sort maxOverProvisionedMB desc

# This query finds the amount of over-provisioned memory for each log stream by
# calculating the difference between the provisioned and maximum memory used.
# It then groups the results by log stream and calculates the maximum
# over-provisioned memory for each log stream. Finally, it sorts the results
# in descending order by the maximum over-provisioned memory to show
# the log streams with the most over-provisioned memory.
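
To run this kind of query across several log groups outside the console, the StartQuery API accepts a list of log groups. A minimal Python sketch, where the log group names are hypothetical and the time range is assumed to be the last hour:

import time

import boto3

logs = boto3.client("logs")

# Hypothetical log groups for the Lambda functions selected in the console.
log_groups = ["/aws/lambda/function-one", "/aws/lambda/function-two"]

now = int(time.time())

# start_query accepts several log groups at once via the logGroupNames parameter.
query = logs.start_query(
    logGroupNames=log_groups,
    startTime=now - 3600,  # last hour, as epoch seconds
    endTime=now,
    queryString=(
        "fields @logStream, @memorySize/1000/1000 as memoryMB, "
        "@maxMemoryUsed/1000/1000 as maxMemoryUsedMB, "
        "(@memorySize/1000/1000 - @maxMemoryUsed/1000/1000) as overProvisionedMB "
        "| stats max(overProvisionedMB) as maxOverProvisionedMB by @logStream "
        "| sort maxOverProvisionedMB desc"
    ),
)
print(query["queryId"])  # retrieve the rows later with get_query_results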

Now, I have the information I need to understand these errors. On the other side, I also have EC2 workloads. How are those instances running? Let’s look at some metrics.

Generate CloudWatch Metrics Insights queries with natural language
In the CloudWatch console, I select All metrics in the Metrics section. Then, in the Query tab, I use the Editor. If you prefer, the Query generator is also available in the Builder.

I choose Query generator like before. Then, I enter what I need using plain English:

Which 10 EC2 instances have the highest CPU utilization?

I choose Generate new query and get a result using the Metrics Insights syntax.

SELECT AVG("CPUUtilization")
FROM SCHEMA("AWS/EC2", InstanceId)
GROUP BY InstanceId
ORDER BY AVG() DESC
LIMIT 10

To see the graph, I choose Run.

Console screenshot.
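
Metrics Insights queries like this one can also be run with the GetMetricData API by passing the SELECT statement as an expression. A minimal Boto3 sketch, where the three-hour time range and the five-minute period are assumptions:

from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

end = datetime.now(timezone.utc)
start = end - timedelta(hours=3)  # assumed time range

# Pass the Metrics Insights SELECT statement as a GetMetricData expression.
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "top_cpu",
            "Expression": (
                'SELECT AVG("CPUUtilization") '
                'FROM SCHEMA("AWS/EC2", InstanceId) '
                "GROUP BY InstanceId "
                "ORDER BY AVG() DESC "
                "LIMIT 10"
            ),
            "Period": 300,  # assumed 5-minute granularity
        }
    ],
    StartTime=start,
    EndTime=end,
)

# One time series per instance, each with timestamps and CPU utilization values.
for result in response["MetricDataResults"]:
    print(result["Label"], result["Values"][:3])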

Well, it looks like my EC2 instances are not doing much. This result shows how these instances are using the CPU, but what about storage? I enter this in the prompt and choose Update query:

How about the most EBS writes?

The updated query replaces the average CPU utilization with the sum of bytes written to all EBS volumes attached to the instance. It keeps the limit to only show the top 10 results.

SELECT SUM("EBSWriteBytes")
FROM SCHEMA("AWS/EC2", InstanceId)
GROUP BY InstanceId
ORDER BY SUM() DESC
LIMIT 10

I run the query and, by looking at the result, I have a better understanding of how storage is being used by my EC2 instances.

Try entering some requests and run the generated queries over your logs and metrics to see how this works with your data.

Things to know
Amazon CloudWatch natural language query generation for logs and metrics is available in preview in the US East (N. Virginia) and US West (Oregon) AWS Regions.

There is no additional cost for using natural language query generation during the preview. You only pay for the cost of running the queries according to CloudWatch pricing.

Generated queries are produced by generative AI and depend on factors including the data selected and available in your account. For these reasons, your results may vary.

When generating a query, you can include your original request and an explanation of the query as comments. To do so, choose the gear icon in the bottom right corner of the query edit window and toggle these options.

This new capability can help you generate and update queries for logs and metrics, saving you time and effort. This approach allows engineering teams to scale their operations without worrying about specific data knowledge or query expertise.

Use natural language to analyze your logs and metrics with Amazon CloudWatch.

Danilo


