
Prerequisites

  • projectx-prod-vpc has been created with subnets configured.
  • projectx-prod-jumpbox EC2 instance exists and is accessible.
  • AWS CLI configured with appropriate credentials.
  • Your AWS username for the bucket naming convention.
  • project-x-sec-box configured with Wazuh.
  • S3 security datalake bucket has been created with the cloudtrail/ folder structure.
  • S3 bucket policy configured to allow CloudTrail to write logs.

Note

This guide is written for Wazuh version 4.9.2. UI elements and navigation paths may differ in other versions.

Network Topology

Base Layout

Overview

What is AWS CloudTrail?

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account. CloudTrail provides event history of your AWS account activity, including actions taken through the AWS Management Console, AWS SDKs, command line tools, and other AWS services.

CloudTrail logs include:

  • API Calls: Who made what API call, when, and from where
  • Resource Changes: What resources were created, modified, or deleted
  • Identity Information: Which IAM user or role performed the action
  • Source IP Addresses: Where the API calls originated from

👉 CloudTrail is essential for security monitoring, compliance, and forensic analysis of AWS account activity.
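You can preview what a CloudTrail record looks like before creating a trail. The command below is a minimal sketch that pulls one recent event from the default 90-day event history and prints the raw record, which includes eventName, userIdentity, and sourceIPAddress:

# Print the raw JSON record of one recent event
aws cloudtrail lookup-events --max-results 1 \
  --query 'Events[0].CloudTrailEvent' --output text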

About CloudTrail Trails

A CloudTrail trail is a configuration that enables delivery of events as log files to an Amazon S3 bucket. You can create a trail that applies to all regions or a trail that applies to one region.

For this guide, we'll create a management trail that logs all API activity across all regions in your AWS account and delivers those logs to your S3 security datalake.
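If you prefer the CLI, the console steps that follow can be approximated with the commands below; this is a sketch using the bucket placeholder from this guide:

# Create a multi-region management trail delivering to the datalake bucket
aws cloudtrail create-trail \
  --name projectx-prod-management-trail \
  --s3-bucket-name projectx-prod-datalake-[username] \
  --s3-key-prefix cloudtrail \
  --is-multi-region-trail \
  --enable-log-file-validation

# Trails do not log until started explicitly
aws cloudtrail start-logging --name projectx-prod-management-trail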

Create CloudTrail Trail

Open the CloudTrail service in the AWS Console.

In the left navigation pane, select Trails.

Click Create trail.

Create Trail

Configure Trail Details

Trail Name

  • Trail name: projectx-prod-management-trail

👉 Use a descriptive name that indicates this is a management trail for production.

  • Storage location: Select Use an existing S3 bucket and choose projectx-prod-datalake-[username].

  • S3 bucket prefix (optional): cloudtrail/

  • 👉 This organizes CloudTrail logs in a dedicated folder within your datalake bucket.

Trail Details

Additional Settings

  • Log file SSE-KMS encryption: Leave Disabled for this lab
  • 👉 For production, consider enabling KMS encryption for additional security

  • Log file validation: Leave Enabled.

  • 👉 Log file validation creates a digest file that can be used to verify log file integrity. A CLI check is sketched after this list.

  • SNS notification delivery: Leave Disabled

  • 👉 You can enable this to receive notifications when new log files are delivered
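With log file validation enabled, you can later verify the integrity of delivered logs from the CLI. A minimal sketch; the trail ARN below uses placeholders for your region and account ID:

# Verify digest and log file integrity from a given start time
# (adjust --start-time to when logging began)
aws cloudtrail validate-logs \
  --trail-arn arn:aws:cloudtrail:[region]:[account-id]:trail/projectx-prod-management-trail \
  --start-time 2025-01-01T00:00:00Z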

Click Next.

Configure Logging

This is where we configure which events to capture.

Event Type

  • Event type: Select both Management events and Data events.
  • 👉 This captures all management API activity plus the selected data events. A CLI equivalent is sketched after this step.

  • Management events: Leave as Read and Write.

  • Data events: Choose Lambda and Log only write events.

Event Type

Click Next.
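The same selection can be applied from the CLI with classic event selectors. This is a sketch: one selector for all management events, one for Lambda write-only data events:

# Selector 1: all management events; Selector 2: Lambda write-only data events
aws cloudtrail put-event-selectors \
  --trail-name projectx-prod-management-trail \
  --event-selectors '[
    {"ReadWriteType": "All", "IncludeManagementEvents": true},
    {"ReadWriteType": "WriteOnly", "IncludeManagementEvents": false,
     "DataResources": [{"Type": "AWS::Lambda::Function", "Values": ["arn:aws:lambda"]}]}
  ]'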

Review and Create Trail

Review your trail configuration:

  • Trail name: projectx-prod-management-trail
  • Storage location: projectx-prod-datalake-[username]/cloudtrail/
  • Event type: Management and data events
  • Management events: Read and write
  • Data events: Lambda, write-only

Click Create trail.

Trail Created

Verify Trail Status

After creation, you should see your trail listed with a status of Enabled.

👉 It may take a few minutes for CloudTrail to start delivering log files to your S3 bucket. Log files are typically delivered within 5 minutes of API activity.

Trail Status
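You can also confirm the trail status from the CLI:

# IsLogging should be true; LatestDeliveryTime appears once files reach S3
aws cloudtrail get-trail-status --name projectx-prod-management-trail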

Configure S3 Lifecycle Policy for CloudTrail Logs

To automatically delete CloudTrail logs after 14 days, we'll create a lifecycle policy that applies specifically to the cloudtrail/ prefix in your S3 bucket.

Open the S3 service in the AWS Console.

Select your bucket: projectx-prod-datalake-[username]

Go to the Management tab.

Scroll down to Lifecycle rules.

Click Create lifecycle rule.

Lifecycle Rules

Configure Lifecycle Rule

Basic Configuration

  • Lifecycle rule name: delete-cloudtrail-logs-after-14-days

  • Rule scope: Select Limit the scope of this rule using one or more filters

  • Prefix: cloudtrail/

    • 👉 This ensures the rule only applies to CloudTrail logs, not other data in the bucket.

Rule Scope

Lifecycle Rule Actions

Select Expire current versions of objects.

  • Days after object creation: 14

👉 This will automatically delete CloudTrail log files 14 days after they are created, helping manage storage costs and comply with data retention policies.

👉 Note: "Expire current versions of objects" permanently deletes the current versions of objects after the specified number of days.

Delete Action
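For reference, the same rule can be created from the CLI. A minimal sketch; note that this call replaces any existing lifecycle configuration on the bucket:

# Expire objects under the cloudtrail/ prefix 14 days after creation
aws s3api put-bucket-lifecycle-configuration \
  --bucket projectx-prod-datalake-[username] \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "delete-cloudtrail-logs-after-14-days",
      "Status": "Enabled",
      "Filter": {"Prefix": "cloudtrail/"},
      "Expiration": {"Days": 14}
    }]
  }'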

Verify Lifecycle Rule

Navigate back to Management → Lifecycle rules.

You should see your rule delete-cloudtrail-logs-after-14-days listed.

👉 The lifecycle rule will automatically apply to all objects in the cloudtrail/ prefix. Objects will be deleted 14 days after creation.

Rule List
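You can also read the rule back from the CLI:

aws s3api get-bucket-lifecycle-configuration \
  --bucket projectx-prod-datalake-[username]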

Verify CloudTrail Log Delivery

Check S3 Bucket

Navigate to your S3 bucket: projectx-prod-datalake-[username]

Click on the cloudtrail/ folder.

You should see CloudTrail log files being delivered. Log files are typically organized by date:

  • AWSLogs/[account-id]/CloudTrail/[region]/[year]/[month]/[day]/

CloudTrail Logs

👉 If you don't see log files immediately, wait a few minutes and perform some AWS console actions (like viewing EC2 instances) to generate API activity that CloudTrail will log.
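A quick CLI spot-check for delivered objects:

# List the most recently delivered CloudTrail log files
aws s3 ls s3://projectx-prod-datalake-[username]/cloudtrail/ --recursive | tail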

Verify Trail Activity

Navigate back to CloudTrail → Trails.

Click on your trail: projectx-prod-management-trail

Then open Event history in the left navigation pane to view recent API activity.

👉 Event history shows the last 90 days of events. For longer retention, logs are stored in S3.

Event History

Wazuh Integration

Once CloudTrail logs are being delivered to your S3 bucket, you can configure Wazuh to ingest and analyze these logs.

Configure Wazuh S3 Integration

Power on the project-x-sec-box VM.

Navigate to Server management Settings.

Navigate to the <wodle name="syscollector"> block.

Add the following <wodle> section below the <syscollector> block.

<wodle name="aws-s3">
  <disabled>no</disabled>
  <!-- How often to poll the bucket for new log files -->
  <interval>10m</interval>
  <run_on_start>yes</run_on_start>
  <!-- Continue past individual log files that fail to parse -->
  <skip_on_error>yes</skip_on_error>
  <bucket type="cloudtrail">
    <name>projectx-prod-datalake-[username]</name>
    <!-- Must match a profile in the AWS credentials file on this server -->
    <aws_profile>default</aws_profile>
    <!-- Must match the S3 prefix configured on the trail -->
    <path>cloudtrail/</path>
  </bucket>
</wodle>
Wazuh Events

👉 Replace [username] with your actual AWS username (e.g., projectx-prod-datalake-johnsmith).

👉 The aws_profile should match the AWS credentials profile configured on the Wazuh server. If using the IAM user created earlier, ensure the AWS credentials are configured in ~/.aws/credentials.
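If the profile does not exist yet, a minimal sketch for creating it follows. The Wazuh manager typically runs the wodle as root, so the profile is written to root's home; the key values are placeholders:

# Create the default profile the wodle reads (overwrites an existing file)
sudo mkdir -p /root/.aws
sudo tee /root/.aws/credentials > /dev/null <<'EOF'
[default]
aws_access_key_id = [access-key-id]
aws_secret_access_key = [secret-access-key]
EOF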

Select Save.

Then Restart Manager to apply changes.

Restart Wazuh Manager

Alternatively, restart the Wazuh manager from the command line:

sudo systemctl restart wazuh-manager

Verify Wazuh Integration

Check the Wazuh logs to verify CloudTrail logs are being ingested:

sudo tail -f /var/ossec/logs/ossec.log

You should see messages indicating CloudTrail logs are being processed from S3.

👉 Look for entries from the aws-s3 module indicating that the bucket is being scanned and events are being fetched from S3 without errors.
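To narrow the output to the AWS module (messages are typically tagged aws-s3 in recent Wazuh versions):

# Show the most recent aws-s3 wodle messages
sudo grep "aws-s3" /var/ossec/logs/ossec.log | tail -n 20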

View CloudTrail Events in Wazuh

Navigate to the Wazuh Dashboard.

Go to Cloud security → Amazon Web Services.

You should start to see data being collected.

Adjust the time range if needed.

Wazuh Events

You can also go to Explore → Discover.

Select the index pattern that includes CloudTrail events (typically wazuh-alerts-*).

Search for CloudTrail-related events using filters like:

  • data.aws.eventName - API event names
  • data.aws.userIdentity - User or role that performed the action

Wazuh Events
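For example, a query like the following in the Discover search bar surfaces console sign-in activity. The field names follow the Wazuh AWS decoder, so adjust them to what your events actually contain:

data.aws.eventName: "ConsoleLogin"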

Summary

Success!

Your AWS account activity is now being logged, stored in S3, and analyzed by Wazuh for security monitoring and compliance purposes. The automatic deletion policy ensures storage costs remain manageable while maintaining a 14-day retention period for security analysis.