Implementing a Data Mesh Architecture with AWS Redshift Spectrum and Lake Formation

Abdul Rafee Wahab
May 25, 2023 · 4 min read

Use Case / Requirements

You have been tasked with integrating cross-domain data that lives in two or more separate data stores (e.g., Amazon Redshift and Amazon S3) across multiple AWS accounts, in order to derive custom analytics for business reporting. This can be achieved by implementing the Data Mesh concept.

Background

The Data Mesh concept aims to distribute data ownership and management across different teams / data domains, enabling decentralized data processing and analytics. Redshift Spectrum allows you to query data stored in S3 directly from your Redshift cluster, while Lake Formation simplifies data lake management and access control.

By combining these services, you can achieve a flexible and scalable Data Mesh architecture.

We will cover the following steps:

  1. Setting up an AWS Lake Formation Data Lake
  2. Preparing and ingesting data into the data lake
  3. Configuring Redshift Spectrum to access data in the data lake
  4. Creating external tables and querying data with Redshift Spectrum
  5. Managing access and permissions with AWS Lake Formation

Prerequisites:

You will need:

  1. An AWS account with sufficient privileges to create and manage Lake Formation and Redshift resources.
  2. Basic knowledge of AWS services (e.g., S3, Redshift, and AWS Identity and Access Management (IAM)).
  3. Familiarity with SQL and database concepts.

High-Level Architecture

Diagram Source: Abdul R. Wahab

Step 1: Setting up an AWS Lake Formation Data Lake

  1. Create an S3 bucket to serve as the data lake storage. You can follow the instructions in the AWS documentation: Creating a Bucket
  2. Enable versioning on the bucket to maintain historical data. Refer to the AWS documentation for instructions: Enabling Versioning
  3. Set up an AWS Lake Formation Data Lake by following the AWS documentation guidelines: Getting Started with AWS Lake Formation

Step 2: Preparing and ingesting data into the data lake

  1. Organize and refine your data into logical datasets based on business domains or teams.
  2. Ensure that each dataset has a consistent schema and data format.
  3. Upload the datasets to the appropriate locations within the data lake bucket. Follow the instructions in the AWS documentation: Uploading Objects into Amazon S3
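As a minimal sketch of the ingestion step, each domain's dataset can be uploaded under its own prefix with the AWS CLI. The file name, prefix, and bucket name are placeholder assumptions for this tutorial's layout:

```shell
# Upload a domain dataset (one CSV per domain prefix in this sketch).
aws s3 cp ./coffee_orders_inventory.csv \
  s3://data-mesh-tutorial-bucket/coffee_orders_inventory/coffee_orders_inventory.csv

# Verify the upload landed under the expected prefix.
aws s3 ls s3://data-mesh-tutorial-bucket/coffee_orders_inventory/
```

Keeping one prefix per logical dataset makes it straightforward to point an external table at exactly one domain's data later on.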

Step 3: Configuring Redshift Spectrum to access data in the data lake

  1. Create an Amazon Redshift cluster or use an existing one. Follow the steps outlined in the AWS documentation: Creating an Amazon Redshift Cluster
  2. Ensure that the Redshift cluster and the data lake bucket are in the same AWS region.
  3. Configure the Redshift cluster to use an IAM role with sufficient permissions to access the data lake. Refer to the AWS documentation for details on configuring IAM roles for Redshift: Using Identity-Based Policies for Amazon Redshift
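Attaching the IAM role to the cluster can be done from the CLI as well. The cluster identifier, account ID, and role name below are hypothetical placeholders:

```shell
# Attach an IAM role to the Redshift cluster so Spectrum can read the
# data lake on the cluster's behalf.
aws redshift modify-cluster-iam-roles \
  --cluster-identifier data-mesh-tutorial-cluster \
  --add-iam-roles arn:aws:iam::123456789012:role/data-mesh-tutorial-spectrum-role
```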

Step 4: Creating external tables and querying data with Redshift Spectrum

  1. Connect to the Redshift cluster using a SQL client (e.g., DBeaver) or the query editor in the AWS Console.
  2. Create an external schema that references the data lake location using the CREATE EXTERNAL SCHEMA statement. Specify the appropriate IAM role ARN for the IAM_ROLE parameter.

Example:

-- Create an external schema backed by the Data Catalog.
-- Replace the IAM_ROLE value with the role's actual ARN, in the form
-- 'arn:aws:iam::<account-id>:role/<role-name>'.
CREATE EXTERNAL SCHEMA spectrum_schema
FROM DATA CATALOG
DATABASE 'data_mesh_tutorial_db'
IAM_ROLE 'data_mesh_tutorial_iam_role_arn'
CREATE EXTERNAL DATABASE IF NOT EXISTS;

3. Create external tables that point to the datasets within the data lake using the CREATE EXTERNAL TABLE statement. Adjust the table schema, location, and other properties as needed.

Example:

-- Note: S3 bucket names may not contain underscores, so the LOCATION
-- below uses a hyphenated placeholder bucket name.
CREATE EXTERNAL TABLE spectrum_schema.coffee_orders_inventory (
    coffee_flavor_id INT,
    customer_id      INT,
    order_date       DATE,
    order_amount     DECIMAL(10,2)
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
LOCATION 's3://data-mesh-tutorial-bucket/coffee_orders_inventory/'
TABLE PROPERTIES ('skip.header.line.count'='1');

4. Query the data using standard SQL queries on the external tables. Example:

SELECT * FROM spectrum_schema.coffee_orders_inventory WHERE order_date >= '2023-05-24';
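Since the use case calls for joining cross-domain data, an external Spectrum table can also be joined directly against a local Redshift table. The `public.customers` table below is a hypothetical local table used purely for illustration:

```sql
-- Join the S3-backed external table with a hypothetical local Redshift
-- table (public.customers) to produce a cross-domain report.
SELECT c.customer_name,
       SUM(o.order_amount) AS total_spend
FROM spectrum_schema.coffee_orders_inventory o
JOIN public.customers c
  ON c.customer_id = o.customer_id
WHERE o.order_date >= '2023-05-01'
GROUP BY c.customer_name
ORDER BY total_spend DESC;
```

Redshift pushes the scan of the external table down to the Spectrum layer, so only the rows needed for the join are brought into the cluster.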

Step 5: Managing access and permissions with AWS Lake Formation

  1. Define and create appropriate data lake permissions using AWS Lake Formation. Refer to the AWS documentation for instructions: Setting Up Permissions in AWS Lake Formation
  2. Create data lake administrators, data lake users, and data lake roles using AWS Lake Formation. Follow the steps outlined in the AWS documentation: Creating and Managing Users, Groups, and Roles in AWS Lake Formation
  3. Grant access to specific datasets or tables within the data lake to different teams or users using AWS Lake Formation permissions. Refer to the AWS documentation for details: Managing Data Access with AWS Lake Formation
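A single table-level grant can be sketched with the CLI. The account ID, role name, and database/table names are placeholders matching this tutorial's earlier examples:

```shell
# Grant a consuming team's IAM role SELECT on one external table via
# Lake Formation, instead of broad S3 bucket access.
aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=arn:aws:iam::123456789012:role/analytics-team-role \
  --resource '{"Table": {"DatabaseName": "data_mesh_tutorial_db", "Name": "coffee_orders_inventory"}}' \
  --permissions SELECT
```

Granting at the table (or column) level in Lake Formation is what lets each domain team own its datasets while still exposing them to consumers.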

Benefits of Using Redshift Spectrum with Lake Formation

Using Redshift Spectrum with Lake Formation provides several benefits over using Redshift Spectrum with Glue databases alone.

Some of these benefits include:

Better Security and Governance

Lake Formation provides a central location for managing access control to your data lake, making it easier to enforce security policies and ensure compliance. This is especially important for organizations with complex data access requirements and regulatory obligations.

Better Performance

Because Lake Formation builds on the AWS Glue Data Catalog, Redshift Spectrum can use table metadata such as partition information to prune the amount of data it scans in S3, which can meaningfully improve query performance on large datasets.

Easier Management

Lake Formation simplifies the management of data in your data lake by providing a unified data catalog that can be used across multiple services. This makes data easier to manage and reduces the need for duplicate metadata across services.

Improved Flexibility

With Redshift Spectrum and Lake Formation, you can store and query data in a variety of open formats (e.g., CSV, JSON, Avro, ORC, and Parquet), which allows for greater flexibility in data ingestion and processing. You can also add new data sources to your data lake without changing existing data structures.

Closing thoughts 👏

By now, you have learned how to implement a Data Mesh architecture using Redshift Spectrum and AWS Lake Formation. You can now leverage Redshift Spectrum to query data stored in your data lake and distribute data ownership and management across different teams or domains. Remember to manage access and permissions through AWS Lake Formation to ensure data security and control.

Feel free to ask any questions / provide inputs below. Thanks!
