How to Create an IT Audit Ecosystem in AWS — Part 1: Gathering Data

IT auditors have to reinvent themselves and their approach, because IT teams move fast and technology adoption keeps accelerating.

Audits have to become faster. Photo by Karol D on Pexels

The traditional approach to auditing, in which auditors complete just one audit in one, two, or even three weeks, is quite slow. It is like a race between a rabbit and a turtle, with audit teams playing the turtle.

Several important issues make IT audits slow: lack of independent access to information, few auditors against a vast and diverse technology landscape, highly customized audit programs, and auditors' limited understanding of the technology.

The good news is that in this and the following posts, I'll tackle these issues.

In Part 1, I'll show you a serverless approach for collecting metadata from many resources in a multi-account environment.

In the next parts, I'm going to show you: querying metadata from resources using Athena, creating smart reports with SageMaker, implementing custom rules in AWS Config using its Rules Development Kit (RDK), auditing deep inside RDS, and machine learning case studies for IT audit. At the end of this series, you'll have an IT Audit Ecosystem in the AWS cloud.

This is our guide:

  1. Understanding the simplified architecture at a high level.
  2. Creating the Parameter in the Parameter Store.
  3. Implementing the Lambda Boss and its role.
  4. Building a Cross-Account Role in one User Account.
  5. Creating an example of Lambda Auditor and its role.
  6. Running the whole solution.
  7. Further ideas.

1. Understanding the simplified architecture at a high level

Basically, there are two kinds of Lambda functions in the Audit Account, each with its own role, and an S3 bucket that stores the metadata collected by each Lambda Auditor i.

Each Lambda Auditor i is triggered by the Lambda Boss through an asynchronous call. The Lambda Auditors then travel across each account j under our audit scope and perform their auditing task, such as collecting data. Once the information is collected, it is stored in the S3 bucket.

Simplified architecture

Below I describe the purpose of each component.

  • Lambda Boss gets the account IDs. This can be done, for example, by reading them from the Parameter Store or by retrieving them from the Login Account. Once the Lambda Boss has the list of accounts, it makes an asynchronous call to each Lambda Auditor i.
  • Lambda Auditor i is a set of Lambda functions, where “i” stands for the type of Lambda Auditor. These Lambdas travel across each account, perform their respective audit or collect their respective metadata, and finally store the results in the resources-metadata bucket.

2. Creating the Parameter in the Parameter Store

Inside AWS Systems Manager, choose the Parameter Store service and click on “Create parameter”. Fill out the fields appropriately; in the Value field, enter the account IDs separated by commas.
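If you prefer the command line, an equivalent parameter can be created with the AWS CLI. This is just a sketch; the parameter name audit-account-ids is an assumption reused in the rest of this post:

```bash
# The parameter name is an assumption used throughout this post; the value
# is the comma-separated list of account IDs under audit scope.
aws ssm put-parameter \
    --name audit-account-ids \
    --type StringList \
    --value "111111111111,222222222222"
```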

3. Implementing the Lambda Boss and its role

To start, you have to create a Lambda function using Python 3.8. Take a look at the screenshot below to see the basic settings.

Below is a minimal sketch of the Lambda Boss code. It assumes the parameter is named audit-account-ids and that each auditor is invoked once per audited account; both choices are illustrative:
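```python
import json

import boto3

# Assumed names for illustration: adjust the parameter name and the list
# of auditor functions to match your own environment.
PARAMETER_NAME = "audit-account-ids"
AUDITOR_FUNCTIONS = ["get-meta-rds"]  # one entry per kind of Lambda Auditor i

ssm = boto3.client("ssm")
lambda_client = boto3.client("lambda")


def get_account_ids():
    """Read the comma-separated account IDs from the Parameter Store."""
    response = ssm.get_parameter(Name=PARAMETER_NAME)
    return [account.strip() for account in response["Parameter"]["Value"].split(",")]


def lambda_handler(event, context):
    accounts = get_account_ids()
    for function_name in AUDITOR_FUNCTIONS:
        for account_id in accounts:
            # InvocationType="Event" makes the call asynchronous: the boss
            # fans out and does not wait for the auditors to finish.
            lambda_client.invoke(
                FunctionName=function_name,
                InvocationType="Event",
                Payload=json.dumps({"account_id": account_id}),
            )
    return {"auditors": len(AUDITOR_FUNCTIONS), "accounts": len(accounts)}
```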

Next, you should create an IAM policy that allows your Lambda function to invoke other Lambdas and to read the parameter created in the previous step. Once you've created the policy, attach it to your Lambda Boss role.

Below is a suggested policy; the resource ARNs are placeholders you should scope to your own account and parameter:
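```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "InvokeAuditors",
      "Effect": "Allow",
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:*:<audit-account-id>:function:*"
    },
    {
      "Sid": "ReadAccountList",
      "Effect": "Allow",
      "Action": "ssm:GetParameter",
      "Resource": "arn:aws:ssm:*:<audit-account-id>:parameter/audit-account-ids"
    }
  ]
}
```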

4. Building a Cross-Account Role in one User Account

This role establishes a trust relationship between the Audit Account and the User Account, which allows the Lambdas to travel from the Audit Account to the User Accounts. You have to create this role in each user account that you want to audit.

In the picture below, complete the “Account ID” field with the account number of your Audit Account, and click Next.
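Behind the console wizard, the resulting trust policy looks roughly like this sketch, with the placeholder replaced by your Audit Account ID:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<audit-account-id>:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```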

Then choose the policies for the audit-cross-account-access-role. In this example I just pick ReadOnlyAccess, but you can change it according to the scope of your Lambda Auditor. Please take a look at the next picture.

5. Creating an example of Lambda Auditor and its role

In this example, I use Boto3, the AWS SDK for Python, to get metadata about the RDS instances and their tags. To perform this task you have to create the following four functions: get_credentials, get_tags, export_to_bucket, and lambda_handler. Let's go deeper into each function.

  • get_credentials: This function receives the role's ARN for the audited account and uses an STS client to get temporary credentials for assuming that role.
  • get_tags: This function receives as parameters the RDS client and the Amazon Resource Name (ARN) of the RDS resource, then gets all the tags associated with it.
  • export_to_bucket: This function receives as parameters the information to save, the name of the bucket in which it will be saved, the folder, the name of the new file, and the S3 client. It then saves the collected information in the S3 bucket.
  • lambda_handler: This is the Lambda's entry point, which calls all the aforementioned functions and also instantiates an RDS client to get the RDS metadata.

I called this Lambda get-meta-rds, and its runtime is Python 3.8. Below is a minimal sketch of its code, assuming each invocation receives one account ID from the Lambda Boss and using the role, bucket, and folder names from this post's examples:
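```python
import json

import boto3

# Assumed names for illustration: the role and bucket come from this post's
# examples; the folder is an arbitrary prefix for RDS metadata.
ROLE_NAME = "audit-cross-account-access-role"
BUCKET_NAME = "audit-metadata-bucket"
FOLDER = "rds"


def get_credentials(role_arn):
    """Assume the cross-account role and return temporary credentials."""
    sts = boto3.client("sts")
    response = sts.assume_role(RoleArn=role_arn, RoleSessionName="audit-session")
    return response["Credentials"]


def get_tags(rds_client, resource_arn):
    """Return all tags attached to the given RDS resource."""
    response = rds_client.list_tags_for_resource(ResourceName=resource_arn)
    return response["TagList"]


def export_to_bucket(info, bucket_name, folder, file_name, s3_client):
    """Save the collected metadata as a JSON object in the audit bucket."""
    s3_client.put_object(
        Bucket=bucket_name,
        Key=f"{folder}/{file_name}",
        Body=json.dumps(info, default=str),
    )


def lambda_handler(event, context):
    account_id = event["account_id"]
    role_arn = f"arn:aws:iam::{account_id}:role/{ROLE_NAME}"
    credentials = get_credentials(role_arn)

    # Clients built with the temporary credentials operate in the audited account.
    rds_client = boto3.client(
        "rds",
        aws_access_key_id=credentials["AccessKeyId"],
        aws_secret_access_key=credentials["SecretAccessKey"],
        aws_session_token=credentials["SessionToken"],
    )

    instances = rds_client.describe_db_instances()["DBInstances"]
    for instance in instances:
        instance["Tags"] = get_tags(rds_client, instance["DBInstanceArn"])

    # The S3 client uses the Lambda's own role, which can write to the audit bucket.
    s3_client = boto3.client("s3")
    export_to_bucket(instances, BUCKET_NAME, FOLDER, f"{account_id}.json", s3_client)
```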

Next, you should create an IAM policy that allows your Lambda function to write to the audit S3 bucket (in this case called “audit-metadata-bucket”) and to assume the previously created role in the audited accounts. Once you create the policy, attach it to your get-meta-rds Lambda role.

Below is a suggested policy; as before, scope the resources to your own bucket and role:
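```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "WriteMetadata",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::audit-metadata-bucket/*"
    },
    {
      "Sid": "AssumeCrossAccountRole",
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::*:role/audit-cross-account-access-role"
    }
  ]
}
```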

6. Running the whole solution

I used the AWS CLI to test the whole solution. I started by invoking the Lambda Boss, then listed the objects in the bucket and retrieved one object. You can see these steps in the pictures below.
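The commands look roughly like this (the function name lambda-boss and the object key are illustrative):

```bash
# Invoke the Lambda Boss, which fans out asynchronously to the auditors.
aws lambda invoke --function-name lambda-boss response.json

# List the metadata objects written by the Lambda Auditors.
aws s3api list-objects-v2 --bucket audit-metadata-bucket

# Download one collected object for inspection.
aws s3api get-object --bucket audit-metadata-bucket \
    --key rds/123456789012.json rds-metadata.json
```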

7. Further ideas

  • Create other kinds of Lambda Auditors to perform audits or collect metadata, for instance: access-control auditing, EC2 auditing, storage-resources auditing, and so on.
  • Mine all the collected information by querying it with Athena, then connect Athena to QuickSight to display your audit results in a dashboard.
  • Process this information using SageMaker and create machine learning models to aid your audits.
  • Implement this whole architecture using CloudFormation and a CI/CD strategy.
  • Schedule the Lambda Boss using CloudWatch, so you can run audit evaluations more frequently.

In the following entries I am going to explain some of these ideas.

If you found this post interesting and you want to do something similar, I'm happy to share my ideas and experiences with you. In the next post, I'll show you how to create living smart reports with SageMaker.