Many AWS services work just fine by themselves, but are even better together! This important aspect of our model allows you to select a single service, learn about it, get some experience with it, and then extend your scope to other related services over time. On the other hand, opportunities to make the services work together are ever-present, and we have a number of them on our customer-driven roadmap.
Today I would like to tell you about two new features for Amazon Aurora, our MySQL-compatible relational database:
Lambda Function Invocation – The stored procedures that you create within your Amazon Aurora databases can now invoke AWS Lambda functions.
Load Data From S3 – You can now import data stored in an Amazon Simple Storage Service (S3) bucket into a table in an Amazon Aurora database.
Because both of these features involve Amazon Aurora and another AWS service, you must grant Amazon Aurora permission to access the service by creating an IAM Policy and an IAM Role, and then attaching the Role to your Amazon Aurora database cluster. To learn how to do this, see Authorizing Amazon Aurora to Access Other AWS Services On Your Behalf.
Lambda Function Integration
Relational databases use a combination of triggers and stored procedures to enable the implementation of higher-level functionality. The triggers are activated before or after some operations of interest are performed on a particular database table. For example, because Amazon Aurora is compatible with MySQL, it supports triggers on the INSERT, UPDATE, and DELETE operations. Stored procedures are scripts that can be run in response to the activation of a trigger.
You can now write stored procedures that invoke Lambda functions. This new extensibility mechanism allows you to wire your Aurora-based database to other AWS services. You can send email using Amazon Simple Email Service (SES), issue a notification using Amazon Simple Notification Service (SNS), publish metrics to Amazon CloudWatch, update an Amazon DynamoDB table, and more.
At the application level, you can implement complex ETL jobs and workflows, track and audit actions on database tables, and perform advanced performance monitoring and analysis.
Your stored procedure must call the mysql.lambda_async procedure. This procedure, as the name implies, invokes your desired Lambda function asynchronously, and does not wait for it to complete before proceeding. As usual, you will need to give your Lambda function permission to access any desired AWS services or resources.
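Here is a minimal sketch of how the pieces fit together, assuming a hypothetical orders table and a hypothetical ProcessOrder function; the ARN and the JSON payload shape are placeholders, not part of the feature itself:

DELIMITER ;;

-- Hypothetical stored procedure that hands a new order off to Lambda.
-- The function ARN and the payload format are placeholders.
CREATE PROCEDURE notify_order_processor(IN order_id INT, IN amount DECIMAL(10,2))
BEGIN
  CALL mysql.lambda_async(
    'arn:aws:lambda:us-east-1:123456789012:function:ProcessOrder',
    CONCAT('{"order_id": ', order_id, ', "amount": ', amount, '}')
  );
END;;

-- Hypothetical trigger that calls the procedure after each INSERT on orders.
CREATE TRIGGER orders_after_insert
  AFTER INSERT ON orders
  FOR EACH ROW
BEGIN
  CALL notify_order_processor(NEW.id, NEW.amount);
END;;

DELIMITER ;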
To learn more, read Invoking a Lambda Function from an Amazon Aurora DB Cluster.
Load Data From S3
As another form of integration, data stored in an S3 bucket can now be imported directly into Aurora (up until now you would have had to copy the data to an EC2 instance and import it from there).
The data can be located in any AWS region that is accessible from your Amazon Aurora cluster and can be in text or XML form.
To import data in text form, use the new LOAD DATA FROM S3 command. This command accepts many of the same options as MySQL’s LOAD DATA INFILE, but does not support compressed data. You can specify the line and field delimiters and the character set, and you can ignore any desired number of lines or rows at the start of the data.
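For example, a minimal text import might look like this (the bucket, object, table, and column names are hypothetical):

-- Hypothetical bucket, file, and table; skip the header row and map three columns.
LOAD DATA FROM S3 's3://my-sample-bucket/exports/customers.csv'
    INTO TABLE customers
    CHARACTER SET utf8
    FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
    LINES TERMINATED BY '\n'
    IGNORE 1 LINES
    (id, name, email);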
To import data in XML form, use the new LOAD XML FROM S3 command. Your XML can look like this:
<row column1="value1" column2="value2" />
...
<row column1="value1" column2="value2" />
Or like this:
<row>
  <column1>value1</column1>
  <column2>value2</column2>
</row>
...
Or like this:
<row>
  <field name="column1">value1</field>
  <field name="column2">value2</field>
</row>
...
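Putting it together, a minimal XML import might look like this (the bucket, object, and table names are hypothetical; the column list matches the layouts above):

-- Hypothetical bucket, file, and table; rows are identified by the <row> element.
LOAD XML FROM S3 's3://my-sample-bucket/exports/items.xml'
    INTO TABLE items
    ROWS IDENTIFIED BY '<row>'
    (column1, column2);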
To learn more, read Loading Data Into a DB Cluster From Text Files in an Amazon S3 Bucket.
Available Now
These new features are available now and you can start using them today!
There is no charge for either feature; you’ll pay the usual charges for the use of Amazon Aurora, Lambda, and S3.
by Jeff Barr