Combining Amazon Aurora, Lambda, and SQS to go Beyond the Native Capabilities of MySQL

Adoption of open-source database engines, such as MySQL and PostgreSQL, has become a major driver shaping the future of modern cloud-native data architectures. By combining open-source database technologies with existing commercial database solutions, companies can essentially have their cake and eat it too – use the more expensive commercial database solutions only for the specific applications that absolutely require them, while enabling expansion and flexibility of the ‘database fleet’ using lower-cost open-source alternatives. This is the 80/20 rule of modern data architectures. Yet the transition isn’t always simple.

Most commercial database systems are ‘monolithic’ and integrate a great deal of advanced functionality and capabilities directly into the database itself. As one example, both Microsoft and Oracle provide an asynchronous messaging/queuing mechanism that is part of the database engine. Oracle calls it Advanced Queuing and Microsoft calls it Service Broker. Both are solid implementations of queueing directly within the database, and many applications have been developed to rely closely on these engine-specific features.

The problem is that PostgreSQL and MySQL lack some of the built-in capabilities found in a commercial relational database system, forcing customers either to deploy third-party solutions (such as a dedicated queue system external to the database) or to develop custom functionality within the applications themselves.

But luckily for us, when it comes to running MySQL and PostgreSQL in the cloud, there are simpler alternatives for mitigating missing functionality in the database itself. For example, when you run your MySQL databases as an Aurora MySQL cluster in Amazon Web Services, you also have access to the full set of powerful AWS ecosystem services and solutions, which you can leverage where native database functionality is missing.

While Aurora MySQL itself provides increased performance and high-availability capabilities that go beyond what native MySQL can offer, there are also application-facing advantages. In this article, I will provide one such example: using a combination of Amazon Simple Queue Service (SQS), a fully managed message queue for microservices and distributed systems, and AWS Lambda, which allows you to run code without provisioning or managing servers, to provide seamless queueing capabilities from within the MySQL database itself.

Essentially, you will be performing a three-step process so that you can enqueue and dequeue messages directly from within an Amazon Aurora MySQL cluster:

  1. You will create an Amazon SQS Queue that will be used to store messages as JSON payloads.

  2. You will create two Lambda functions which will use the SQS API to send messages to and receive messages from Amazon SQS.

  3. You will invoke the Lambda functions from within an Aurora MySQL Cluster using mysql.lambda_async.

Time to get started!

Create an SQS Queue

The first step is to create the Amazon SQS queue which will store the messages or events. From within the AWS web console:

  1. Navigate to Services and type SQS. Select Simple Queue Service.

  2. From the next page, click on Create New Queue. If this is your first Amazon SQS queue, you may need to click on the Get Started Now button instead.

  3. Next, you will have to name your queue. In this example, I’ll aptly name my queue best_queue_ever. You will also need to choose the type of queue you wish to create.

    You have two options for queue type: Standard Queue and FIFO Queue. If you prioritize queue throughput over strict ordering, choose a Standard Queue type. If you need strict first-in-first-out ordering and exactly-once processing of your queue messages, at the cost of reduced total throughput, choose a FIFO Queue type. For this example, select a Standard Queue type.

  4. Next, click on Quick Create Queue to create your SQS Queue. For this simple example, you don’t have to specify any additional advanced configuration options.

  5. At this point, your SQS Queue has been created and should appear in the SQS Service main page. You will need to remember the Queue URL so that you can use it inside your Amazon Lambda function. Select the queue and copy & paste the queue URL to a text editor.

    The queue URL will be in the following format:
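    https://sqs.<region>.amazonaws.com/<account-id>/<queue-name>

    For example (the region and account ID will differ in your environment): https://sqs.us-east-1.amazonaws.com/123456789012/best_queue_ever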

  6. You will also need to assign permissions specifying who can read from and write to the new SQS queue. With the queue still selected, navigate to the Permissions tab and click on the Add a Permission button.

    On the next page, you can add permissions for the newly created queue. Select the Everybody checkbox under Principal and All SQS Actions under Actions. Note that this gives everybody permission to do anything with this queue, which keeps the demo simple and straightforward, but in a real production environment you should be more granular with the permissions on your SQS queues.

  7. As a final step for queue creation, click on the Add Permission button to apply the new permissions on your SQS queue.

Create the Lambda Functions

With the SQS Queue configured, now it’s time to create two Amazon Lambda Functions which will be used to enqueue and dequeue messages from the SQS queue. You will later see how to get your Aurora MySQL cluster to invoke the Lambda functions and interact with the queue.

  1. To create the Lambda functions, navigate to Services and type lambda. Then select Lambda.

  2. In the Lambda Functions screen, click Create Function to create the first Lambda function. This one will be used to enqueue messages, i.e., send messages, to the SQS Queue.

  3. Choose Author from scratch, as you do not need to use a blueprint for this particular Lambda function.

  4. Enter the Lambda function name. I used sqs_enqueue, but feel free to be as creative as you’d like, as long as you stick with letters, numbers, hyphens, or underscores.

  5. Select Create a custom role under the Role dropdown list. This will open the IAM Management Console in a new tab in your browser. When you create a Lambda function, you must assign permissions on which resources inside your AWS Account the Lambda function can access. Lambda Functions use an IAM role that grants your code permissions to access the AWS resources it needs.

  6. In the new IAM Management Console page, select Create a new IAM Role and specify the role name, such as lambda_role1.

  7. Click on Allow at the bottom right of the screen once finished to create your new IAM Role.

  8. Back on the Lambda Function Author from scratch page, click the Create Function button so that you can write the Lambda function code which will send messages to the SQS Queue.

  9. You will modify the following reference code to fit the requirements. In the snippet below, replace https://QUEUE_URL with the queue URL of your SQS queue. Then use it to replace the code in the code window.
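    Here is a minimal sketch of what the enqueue function might look like (Node.js, using the AWS SDK for JavaScript); the region and the https://QUEUE_URL placeholder are assumptions you should replace with your own values:

    // sqs_enqueue - sends the incoming event to the SQS queue as a JSON payload
    const AWS = require('aws-sdk');

    // Construct an SQS client in the region where the queue was created
    const sqs = new AWS.SQS({ region: 'us-east-1' });

    const QUEUE_URL = 'https://QUEUE_URL';

    exports.handler = (event, context, callback) => {
        const params = {
            QueueUrl: QUEUE_URL,
            // Forward the event that invoked the function as the message body
            MessageBody: JSON.stringify(event)
        };

        sqs.sendMessage(params, (err, data) => {
            if (err) {
                callback(err);
            } else {
                callback(null, 'Message ' + data.MessageId + ' sent to the queue');
            }
        });
    };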

    The Lambda Function is pretty straightforward:

    • It configures the SQS queue URL, so that the Lambda Function can interact with the specific queue created earlier.

    • It loads the AWS SDK so that it can use the SQS API.

    • It constructs an SQS object in the region where the SQS queue resides (using new AWS.SQS). If your SQS queue has been created in a different region, modify the region accordingly.

    • It calls the SQS API with the sqs.sendMessage call and sends the JSON Payload to the SQS Queue.

     

  10. Click on the Save button to save the sqs_enqueue Lambda function and then click Test to test its functionality by sending a mockup JSON Payload to the queue. This opens a window to create the test event. The specific contents of the JSON payload don’t really matter for testing the Lambda function. You can use the default test event using the provided Hello World template.
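    If you use the provided Hello World template, the test event will be a small JSON document along these lines (the exact keys and values are irrelevant; any valid JSON will do):

    {
        "key1": "value1",
        "key2": "value2",
        "key3": "value3"
    }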

    Give the event a name, such as DummyPayload, and click Create.

  11. Click on the Test button to send the JSON payload to the queue and verify that the sqs_enqueue Lambda function works correctly.

    You should expect to see an Execution result: succeeded message after clicking on the Test button which signifies that the message was written successfully to the queue.

  12. Copy the ARN (Amazon Resource Name) for the Lambda function you just created. An ARN is used when you need to specify a resource unambiguously across all of AWS. In this case, you will use this ARN in the Stored Procedure you will create in the Aurora MySQL database.
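    A Lambda function ARN follows the general pattern arn:aws:lambda:<region>:<account-id>:function:<function-name>, for example arn:aws:lambda:us-east-1:123456789012:function:sqs_enqueue (the region and account ID shown here are placeholders).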

  13. Repeat the same steps to create the second Lambda Function sqs_dequeue, used to dequeue (read) messages from the SQS queue.

  14. When creating the sqs_dequeue Lambda function, under role, select Choose an existing role and select the lambda_role1 role you created when you created the sqs_enqueue Lambda function.

  15. Use the code below for the sqs_dequeue Lambda function. Remember to replace the QUEUE_URL with the URL for your SQS Queue and your region. Note that this time you are using the sqs.receiveMessage API call.
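    Below is a minimal sketch of the dequeue function under the same assumptions (Node.js with the AWS SDK for JavaScript; the region and QUEUE_URL are placeholders for your own values):

    // sqs_dequeue - reads a message from the SQS queue and returns its body
    const AWS = require('aws-sdk');

    const sqs = new AWS.SQS({ region: 'us-east-1' });

    const QUEUE_URL = 'https://QUEUE_URL';

    exports.handler = (event, context, callback) => {
        const params = {
            QueueUrl: QUEUE_URL,
            // Read at most one message per invocation
            MaxNumberOfMessages: 1
        };

        sqs.receiveMessage(params, (err, data) => {
            if (err) {
                callback(err);
            } else if (data.Messages && data.Messages.length > 0) {
                callback(null, data.Messages[0].Body);
            } else {
                callback(null, 'No messages available in the queue');
            }
        });
    };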

Configure your AWS Aurora MySQL Instance

Now it’s time to set up your Amazon Aurora w/MySQL Compatibility Cluster and create the MySQL Stored Procedure which will be used to invoke the Lambda functions for dequeuing and enqueuing messages from and to SQS.

Note that there are several steps required in order for an Aurora MySQL cluster to successfully invoke Lambda functions. Otherwise, you can run into permission issues. You will need to do the following:

  • Create a new IAM Role.

  • Assign the AWSLambdaFullAccess policy to the role so that the IAM Role will be able to execute Lambda Function API calls.

  • Create a new custom DB Cluster Parameter Group.

  • Modify the aws_default_lambda_role parameter with the value of the IAM Role ARN so that the Aurora MySQL instance will have permissions to invoke AWS Lambda functions.

  • Create a new Aurora MySQL Cluster, using the custom Cluster Parameter Group.

  • Create a stored procedure invoking the Lambda function using the Lambda function ARN.

It’s actually much simpler than it looks. Let’s get started!

Configure IAM Role and Create a Cluster Parameter Group

Start by creating an IAM Role to which you will associate permissions to execute Lambda functions. Later, you will attach this IAM Role to your AWS Aurora Cluster.

  1. Navigate to the AWS IAM Service page by clicking on Services and typing iam. Select IAM.

  2. Once on the IAM Service page, click on Roles and select Create Role.

  3. Select AWS Service as the role type, RDS as the Service that will use this role and RDS – Cloud HSM and Directory Services for the Use Case. Click on Next: Permissions once done.

  4. Leave the settings as-is on the next screen and click Next: Review. On the following screen, provide the role with a name, such as rds_role, and click Create Role.

  5. After the role has been created, you need to add a policy to this role which allows for execution of Lambda functions. To accomplish that, in your IAM Service page, click on Roles and the name of the IAM Role you have just created – rds_role.

  6. Click Attach Policy to select the policies you want granted to the IAM Role.

  7. On the next screen, search for the AWSLambdaFullAccess policy name and select the checkbox next to it. The AWSLambdaFullAccess policy grants full access to AWS Lambda. In a production environment, you will probably want to be more granular with the permissions you grant, but for this demo, attaching the AWSLambdaFullAccess policy is fine. Click Attach Policy.

    Your new IAM Role now has the AWSLambdaFullAccess policy attached to it.

  8. Copy the Role ARN and store it somewhere safe. You will need to use this value for a parameter in the AWS Aurora Cluster Parameter Group.
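    An IAM Role ARN looks similar to arn:aws:iam::123456789012:role/rds_role, where the account ID shown here is a placeholder for your own.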

Create a new Cluster Parameter Group

After the IAM Role has been created, you will need to create a custom Aurora Cluster Parameter Group. This is required because, without it, your Aurora Cluster won’t have the permissions to invoke Lambda functions.

  1. Click Services and type rds. Select RDS to open the RDS Service Dashboard.

  2. Select Parameter Groups from the RDS Dashboard and click Create Parameter Group.

  3. Choose aurora-mysql5.7 as the Parameter group family (this is the Parameter Group Family for Aurora with MySQL compatibility, version 5.7), select DB Cluster Parameter Group for the Type, and enter a Group name and Description. I’m using mysql-lambda for my demo.

    Once done, click Create.

  4. Back in the RDS Service page, navigate to Parameter Groups, select the custom DB Cluster Parameter Group you have created for your AWS Aurora MySQL Cluster, click on Parameter group actions and choose Edit.

  5. Search for a parameter with the name: aws_default_lambda_role, and for the value of this parameter, enter the IAM Role ARN from the earlier step when creating the role.

    Click on Save Changes.

Create your AWS Aurora MySQL Instance

After your IAM Role and custom Cluster Parameter Group have been created, you can provision your Aurora MySQL cluster.

  1. Click Services and type rds. Select RDS to open the RDS Service Dashboard.

  2. In the RDS Dashboard main screen click on Launch an Aurora DB instance. You may have to click Get started now if this is the first time you have been to this dashboard.

  3. Select Amazon Aurora and choose the MySQL-5.7 compatible edition. Click Next.

  4. Specify the DB Instance Class for your database (which determines the number of CPUs and the amount of RAM allocated to the Aurora instances), choose the Multi-AZ Deployment option, which will create replicas of your MySQL instance in multiple Availability Zones for increased high availability and read scalability, and provide a name for the DB instance (such as mysql01). Also create a Master username and Master password.

    Click Next once done.

  5. Specify the desired network, security and database options for your AWS Aurora MySQL Cluster. For example, selecting Yes for Public accessibility will make your Aurora MySQL cluster accessible from the internet. You can also choose the VPC security group which determines the rules for allowing or disabling connections to your cluster. The default options are fine for our demo but are usually not suited for production databases.

  6. Further down the screen, provide a name for your Aurora MySQL cluster and for the database – mysql-cluster01 and mydb in my environment. Also make sure you select the custom Cluster Parameter Group you have created earlier under the DB Cluster Parameter Group option. Without doing so, you will not be able to invoke Lambda functions from within your Aurora MySQL database.

  7. Accept the defaults for the remaining options and click Launch DB instance at the bottom of the screen to create the cluster.

    You will be presented with a notification indicating that your Aurora MySQL Cluster is being created.

  8. After your AWS Aurora MySQL Database Cluster is ready, you will need to assign the IAM Role you created earlier. In the RDS Dashboard screen, click on Clusters, select the Aurora MySQL cluster you have just created and click on Actions followed by Manage IAM Roles.

  9. Choose the IAM Role you have created earlier (rds_role) and click Add role.

  10. Wait until the changes have been applied to your AWS Aurora MySQL Cluster and make sure no restart is required. After the Status changes to Active, click Done.

Create the Stored Procedure That Will Execute the Lambda Function

If you have reached this far, it means that you have completed all of the prerequisite steps required to allow Aurora MySQL to invoke your Lambda functions and, as such, to enqueue and dequeue messages to and from SQS.

    1. In the Amazon RDS console, click on Clusters and click on the name of the Aurora MySQL Cluster you have created earlier.

    2. Copy the DNS address that appears under Cluster endpoint. This is the DNS address of your Cluster and the one you will use when configuring your connection. Be sure to choose the Cluster endpoint address and not the Reader endpoint address.

      Since Amazon Aurora clusters are composed of a writer node and multiple reader nodes, the Cluster endpoint will always point to the writer node in the cluster (even if one of the reader nodes has been promoted to be the new writer, such as after a failover), while the Reader endpoint will load balance across all of the reader nodes in your cluster. Connecting to the reader endpoint will not allow you to write changes to your Aurora MySQL cluster.

    3. You will now need to connect to your Aurora MySQL cluster and invoke the Lambda functions you have created earlier. There are multiple tools available, but I usually prefer to use DBeaver, which is a free universal SQL client with good MySQL support.

    4. After you have downloaded and installed DBeaver, you will need to create a new connection to your Aurora MySQL cluster. You can accomplish that by clicking on the Create New Connection button. Note that the location of this may differ depending on your DBeaver screen layout.

    5. In the first screen of the Create new connection wizard, choose MySQL as the driver and click Next.

    6. Next, you will need to provide the connection properties of your new Aurora MySQL cluster. This includes the Cluster endpoint you copied earlier for the Server Host, as well as the database name, username, and password which you configured during the creation of your Aurora cluster. Once done, click Test Connection and, if the test is successful, click Next.

    7. You can skip the next two screens by clicking on Next twice.

    8. Complete the creation of your new MySQL connection by providing a connection name and clicking on Finish.

    9. Right click on the new connection in the left-hand pane and open a new SQL Editor window.

    10. You can invoke a Lambda function from within MySQL by using the mysql.lambda_async procedure. Specify the ARN of the Lambda function you created earlier and provide the JSON payload for the message which will be sent to the queue:
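      The call looks something like the following sketch; the ARN and JSON payload shown here are placeholders that you should replace with the ARN of your sqs_enqueue function and your own message:

      CALL mysql.lambda_async(
          -- Placeholder ARN: replace with the ARN of your sqs_enqueue function
          'arn:aws:lambda:us-east-1:123456789012:function:sqs_enqueue',
          -- The JSON payload that will be forwarded to the SQS queue
          '{"message": "Hello from Aurora MySQL!"}'
      );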

      You can execute the Lambda function call by clicking the Execute button.

Note that if you receive a “missing credential” error at this point, you have probably missed one of the steps in configuring your IAM Role or Cluster parameter group settings.

  1. An easy way to verify that your messages have been sent successfully to the SQS queue through the usage of the Lambda function is to poll the SQS queue itself. Simply navigate to the AWS SQS console.

  2. Select the queue which you have created earlier (best_queue_ever, in my environment), click on Queue Actions and choose View/Delete messages.

  3. In the pop-up screen click on the Start Polling for Messages button in the center of the screen:

  4. And if everything was successful up to this point, you should be able to see your messages appear. This indicates that you have invoked a Lambda function which sent JSON payloads to an SQS queue from within an Aurora MySQL cluster.

  5. Additionally, you can also wrap mysql.lambda_async with your own custom stored procedure that can further enhance the business logic you want to execute as well as perform error handling. Plus, you won’t have to specify the Lambda function ARN on each execution. You can find one very simple example of embedding mysql.lambda_async inside a stored procedure below.
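    The following is a minimal sketch of such a wrapper procedure; the function ARN is a placeholder, and a production version would add input validation and error handling:

    DELIMITER ;;
    CREATE PROCEDURE send_to_queue (IN message_body TEXT)
    BEGIN
        -- Placeholder ARN: replace with the ARN of your sqs_enqueue function.
        -- Note: this naive CONCAT assumes message_body contains no quote characters.
        CALL mysql.lambda_async(
            'arn:aws:lambda:us-east-1:123456789012:function:sqs_enqueue',
            CONCAT('{"message": "', message_body, '"}')
        );
    END
    ;;
    DELIMITER ;

    Once the procedure exists, a message can be enqueued with a simple CALL send_to_queue('order created'); without having to remember the Lambda function ARN.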

So there you have it! Using a Lambda function from within an Amazon Aurora MySQL cluster to read and write messages to an AWS SQS queue. This is just one example of how you can extend the core functionality of MySQL by leveraging additional cloud-native components provided by AWS. To finish up, you will probably want to delete the instances.

About the author

David Yahalom

David Yahalom is the CTO of Naya Tech, a leading database, cloud and big data solutions provider, located in San Jose, California and specializing in cloud database migrations. David is a certified database expert and a cloud solutions architect with over 15 years of experience in information technology and database platforms including Oracle, Hadoop, Amazon Web Services, Microsoft Azure, MySQL, and PostgreSQL.
