
In previous posts we already saw how to export data from PostgreSQL and AWS RedShift, and import it on MySQL Database Service in OCI using the MySQL Shell importTable utility.

Today, we will see how we can export some tables from AWS RDS Aurora PostgreSQL and import them on MDS.

For this exercise, the data used is pagila, a port of the Sakila example database. My RDS Aurora PostgreSQL instance is of course running, and the sample data is loaded.

Exporting to S3

RDS instances have the possibility to export directly to S3, a bit like RedShift, but it requires some manual steps. The first one is to install an extension, aws_s3, in PostgreSQL:

pagila=> CREATE EXTENSION IF NOT EXISTS aws_s3 CASCADE;
NOTICE:  installing required extension "aws_commons"

Then we need to create an S3 bucket in which we will store the data.
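
As a quick sketch, creating the bucket with the AWS CLI would look like this (pgpagila and eu-west-1 are the bucket name and region used later in this post; adjust them to your own setup):

# create the S3 bucket that will receive the exported CSV files
aws s3 mb s3://pgpagila --region eu-west-1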


Permissions

We also need to grant permissions to our PostgreSQL instance for writing to S3. To achieve that, we need to create a user and a policy.

Now that the user is created, we can continue with the policy. The JSON overview should be similar to this:
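
A minimal policy that allows writing objects into the pgpagila bucket can be created like this, shown here as the equivalent CLI call rather than the console steps (the pgpagila-export name and the exact statement are my own example, not necessarily what was used here):

# minimal policy allowing the instance to write objects into the pgpagila bucket
aws iam create-policy --policy-name pgpagila-export \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:AbortMultipartUpload"],
        "Resource": "arn:aws:s3:::pgpagila/*"
      }]
    }'
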
Finally, we need to create a role that we will assign later to the database instance. We should end up with the following role:
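
As a sketch of the same step with the CLI (postgres-s3-export is just an example role name, and 123456789012 is a placeholder account id): the trust policy simply lets the RDS service assume the role, and the policy attached to it is the one created above.

# create a role that the RDS service is allowed to assume
aws iam create-role --role-name postgres-s3-export \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": { "Service": "rds.amazonaws.com" },
        "Action": "sts:AssumeRole"
      }]
    }'

# attach the S3 write policy created earlier to the role
aws iam attach-role-policy --role-name postgres-s3-export \
    --policy-arn arn:aws:iam::123456789012:policy/pgpagila-export
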
Let's assign all this to our Aurora PostgreSQL instance:
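
The assignment can be done from the RDS console; the CLI equivalent should look roughly like this (the cluster identifier and account id are placeholders, and I assume the s3Export feature name used by Aurora PostgreSQL for S3 exports):

# attach the role to the Aurora cluster for the S3 export feature
aws rds add-role-to-db-cluster \
    --db-cluster-identifier my-aurora-postgres-cluster \
    --feature-name s3Export \
    --role-arn arn:aws:iam::123456789012:role/postgres-s3-export
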
Export

We can now test the export. I don't use an s3_uri as documented in the AWS manual, as I would have to create one for each table (pgpagila is the bucket's name):

pagila=> SELECT * FROM aws_s3.query_export_to_s3('select * from actor',
         'pgpagila', 'actor', 'eu-west-1', options :='format csv');
 rows_uploaded | files_uploaded | bytes_uploaded

pagila=> SELECT * FROM aws_s3.query_export_to_s3('select * from address',
         'pgpagila', 'address', 'eu-west-1', options :='format csv');

pagila=> SELECT * FROM aws_s3.query_export_to_s3('select * from category',
         'pgpagila', 'category', 'eu-west-1', options :='format csv');
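
For reference, the s3_uri variant mentioned above would look roughly like this (a sketch based on the documented aws_commons.create_s3_uri() helper, not taken from this post; it needs one URI per exported table, which is why I skip it):

-- same export of the actor table, but going through an explicit s3_uri
pagila=> SELECT * FROM aws_s3.query_export_to_s3('select * from actor',
         aws_commons.create_s3_uri('pgpagila', 'actor', 'eu-west-1'),
         options :='format csv');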

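
Once the CSV files are in the bucket, they can be loaded into MDS with MySQL Shell. A minimal sketch, assuming the actor file has been copied to a machine that can reach the MDS instance (for example with aws s3 cp), and that a pagila schema with a matching actor table already exists there; the dialect and field options may need tuning to match the exact CSV produced by PostgreSQL:

// from MySQL Shell, connected to the MDS instance, in JavaScript mode
util.importTable("actor", {
    schema: "pagila",      // target schema on MDS
    table: "actor",        // target table
    dialect: "csv-unix"    // comma separated, LF terminated lines
})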