How to use AWS CLI to manage Amazon S3

Andrei Maksimov


The AWS Command Line Interface (CLI) is a unified tool to manage your AWS services. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through scripts. This article covers how to use AWS CLI to manage Amazon S3 buckets and objects with lots of examples that you can use during your day-to-day AWS activities.

In addition to AWS CLI, we strongly recommend installing aws-shell. This command-line shell program provides convenience and productivity features to help both new and advanced users of the AWS Command Line Interface. Key features include the following:

  • Fuzzy auto-completion for AWS CLI Commands, Options and Resources
  • Dynamic in-line documentation
  • Execution of OS shell commands
  • Export executed commands to a text editor
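For example, after installation (covered below), you launch the interactive shell with the aws-shell command and then type AWS CLI commands without the leading aws prefix:

```shell
# Start the interactive shell
aws-shell

# Inside aws-shell, commands are entered without the leading "aws", e.g.:
#   s3 ls
#   ec2 describe-instances
```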

And lastly, we recommend you install the Session Manager plugin for the AWS CLI, which allows you to use the AWS Command Line Interface (AWS CLI) to start and end sessions that connect you to your EC2 instances.
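For example, once the plugin is installed and your instance is registered with AWS Systems Manager, you can open an interactive shell session straight from the CLI (the instance ID below is hypothetical):

```shell
# Open an interactive session on a managed EC2 instance (hypothetical instance ID)
aws ssm start-session --target i-0123456789abcdef0
```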

AWS CLI Installation

You can install AWS CLI on Windows, macOS, and Linux. In addition to that, Amazon Linux AMI already contains AWS CLI as a part of the OS distribution, so you don’t have to install it manually.

Windows

For modern Windows distributions, we recommend using the Chocolatey package manager to install AWS CLI:

# AWS CLI
choco install awscli

# Session Manager plugin
choco install awscli-session-manager

# AWS-Shell
choco install python
choco install pip
pip install aws-shell

macOS

To install AWS CLI on macOS, we recommend using the Homebrew package manager:

# AWS CLI
brew install awscli

# Session Manager plugin
brew install --cask session-manager-plugin

# AWS-Shell
pip install aws-shell

Linux

Depending on your Linux distribution, the installation steps are different.

CentOS, Fedora, RHEL

For YUM-based distributions (CentOS, Fedora, RHEL), you can use the following installation steps:

# AWS CLI
sudo yum update
sudo yum install wget -y
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install epel-release-latest-7.noarch.rpm
sudo yum -y install python-pip
sudo pip install awscli

# Session Manager plugin
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm" \
  -o "session-manager-plugin.rpm"
sudo yum install -y session-manager-plugin.rpm

# AWS-Shell
pip install aws-shell

Debian, Ubuntu

For APT-based distributions (Debian, Ubuntu), you can use slightly different installation steps:

# AWS CLI
sudo apt-get install python-pip
sudo pip install awscli

# Session Manager plugin
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" \
  -o "session-manager-plugin.deb"
sudo dpkg -i session-manager-plugin.deb

# AWS-Shell
pip install aws-shell

Other Linux distributions

For other Linux distributions, you can use manual AWS CLI installation steps.

Difference between AWS s3, s3api, and s3control

The main difference between the s3, s3api, and s3control commands is that the s3 commands are high-level commands built on top of the lower-level s3api commands, which are driven by JSON models.

  • s3: these commands are specifically designed to make it easier to manage your S3 files using the CLI. Example: aws s3 ls
  • s3api: these commands are generated from JSON models, which directly model the APIs of the various AWS services. This allows the CLI to provide commands that are a near one-to-one mapping of the service’s API. Example: aws s3api list-objects-v2 --bucket my-bucket
  • s3control: these commands allow you to manage the Amazon S3 control plane. Example: aws s3control list-jobs --account-id 123456789012

If you’d like to see how to use these commands to interact with VPC endpoints, check out our Automating Access To Multi-Region VPC Endpoints using Terraform article.

AWS S3 CLI Commands

Usually, you use AWS CLI commands to manage S3 when you need to automate S3 operations in scripts or in your CI/CD automation pipeline. For example, you can configure a Jenkins pipeline to execute AWS CLI commands against any AWS account in your environment.
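As a minimal sketch of such a pipeline step (the bucket name and build directory below are hypothetical), the following script assembles an aws s3 sync call and prints it for review; in a real Jenkins job you would execute the command instead of echoing it:

```shell
#!/usr/bin/env bash
# Hypothetical CI/CD step: publish build artifacts to an S3 bucket.
set -euo pipefail

BUCKET="hands-on-cloud-artifacts"   # hypothetical artifacts bucket
BUILD_DIR="./build"                 # hypothetical build output directory

# Assemble the sync command; --delete mirrors deletions from the build directory
CMD="aws s3 sync $BUILD_DIR s3://$BUCKET/releases/ --delete"
echo "$CMD"   # replace the echo with the actual command in a real pipeline
```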

This section of the article will cover the most common examples of using AWS CLI commands to manage S3 buckets and objects.

Managing S3 buckets

AWS CLI supports create, list, and delete operations for S3 bucket management.

Create S3 bucket

To create an S3 bucket using AWS CLI, you need to use the aws s3 mb (make bucket) command:

aws s3 mb s3://hands-on-cloud-example-1

Note: the S3 bucket name must always start with the s3:// prefix.

To create an S3 bucket in a specific AWS region using AWS CLI, you need to add the --region argument to the previous command:

aws s3 mb s3://hands-on-cloud-example-2 --region us-east-2

List S3 buckets

To list S3 buckets using AWS CLI, you can use either aws s3 ls or aws s3api list-buckets commands.

aws s3 ls

The aws s3api list-buckets command produces JSON output:

aws s3api list-buckets

Using the aws s3api command allows you to use the --query parameter to perform JMESPath queries for specific members and values in the JSON output.

Let’s output only the buckets whose names start with hands-on-cloud-example:

aws s3api list-buckets --query \
  'Buckets[?starts_with(Name, `hands-on-cloud-example`) == `true`].Name'

We can extend the previous command to output only the S3 bucket names as plain text:

aws s3api list-buckets --query \
  'Buckets[?starts_with(Name, `hands-on-cloud-example`) == `true`].[Name]' \
  --output text

Delete S3 bucket

To delete an S3 bucket using AWS CLI, you can use either the aws s3 rb or the aws s3api delete-bucket command.

aws s3 rb s3://hands-on-cloud-example-1

Note: you can delete only empty S3 buckets.

If your S3 bucket contains objects, you can use the --force argument to clean up the bucket before deletion:

aws s3 rb s3://hands-on-cloud-example-2 --force

Note: the --force argument does not delete versioned objects, which would cause the bucket deletion to fail.

To delete the S3 bucket with enabled objects versioning, you have to clean it up first:

export bucket_name="hands-on-cloud-versioning-enabled"

# Deleting objects versions

aws s3api delete-objects \
    --bucket $bucket_name \
    --delete "$(aws s3api list-object-versions \
    --bucket $bucket_name \
    --output=json \
    --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"

# Deleting delete markers

aws s3api delete-objects \
    --bucket $bucket_name \
    --delete "$(aws s3api list-object-versions \
    --bucket $bucket_name \
    --output=json \
    --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"

# Deleting S3 bucket

aws s3 rb s3://$bucket_name

Managing S3 Objects

In this section of the article, we’ll cover the most common AWS CLI operations for managing S3 objects.

Upload file to S3 bucket

To upload a file to an S3 bucket using AWS CLI, you need to use the aws s3 cp command:

aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1

If required, you can change the uploaded S3 object name during the upload operation:

aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1/image.png

In addition to that, you can specify the S3 storage class during upload:

aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1 --storage-class ONEZONE_IA

Supported values for the --storage-class argument are:

  • STANDARD (default)
  • REDUCED_REDUNDANCY
  • STANDARD_IA
  • ONEZONE_IA
  • INTELLIGENT_TIERING
  • GLACIER
  • DEEP_ARCHIVE
  • GLACIER_IR

If the file has to be encrypted with default SSE encryption, you need to provide --sse argument:

aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1 --sse AES256

For KMS encryption, use the following command:

aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1 --sse 'aws:kms' --sse-kms-key-id KMS_KEY_ID

Note: replace KMS_KEY_ID in the command above with your own KMS key ID.

Upload multiple files to S3 bucket

To upload multiple files to the S3 bucket, you need to use either aws s3 cp command with --recursive argument or aws s3 sync command.

aws s3 cp ./directory s3://hands-on-cloud-example-1/directory --recursive

Note: the command above will not upload empty directories if they exist within the ./directory path (will not create the S3 objects to represent them).

You can use the same arguments as in the examples above to set up the S3 storage class or encryption if required.

In addition to that, you can use --include and --exclude arguments to specify a set of files to upload.

For example, if you need to copy only .png files from the ./directory, you can use the following command:

aws s3 cp ./directory s3://hands-on-cloud-example-1/directory --recursive --exclude "*" --include "*.png"

You can achieve the same result by using the aws s3 sync command:

aws s3 sync ./directory s3://hands-on-cloud-example-1/directory

Note: the aws s3 sync command supports the same arguments for setting up the S3 storage class and encryption.

The benefit of using the aws s3 sync command is that this command will upload only changed files from your local file system at the next execution.

You can use the --delete argument to delete objects from the S3 bucket if they were deleted on your local file system (complete synchronization):

aws s3 sync ./directory s3://hands-on-cloud-example-1/directory --delete

Download file from S3 bucket

To download a single file from the S3 bucket using AWS CLI, you need to use the aws s3 cp command:

aws s3 cp s3://hands-on-cloud-example-1/image.png ./image.png

Download multiple files from S3 bucket

To download multiple files from the S3 bucket using AWS CLI, you need to use either the aws s3 cp or aws s3 sync command:

aws s3 cp s3://hands-on-cloud-example-1/directory ./directory --recursive

Note: if the S3 bucket contains empty “directories” within the /directory prefix, the execution of the command above will create empty directories on your local file system.

Similarly to the upload operation, you can synchronize all objects from the S3 bucket within the common prefix to your local directory:

aws s3 sync s3://hands-on-cloud-example-1/directory ./directory

Note: for both commands (aws s3 cp and aws s3 sync) you can use the --include and --exclude arguments to download or synchronize only a specific set of files.

Note: using the --delete argument with the aws s3 sync command allows you to get a complete mirror of S3 objects prefix in your local folder.

List files in S3 bucket

To list files in the S3 bucket using AWS CLI, you need to use the aws s3 ls command:

aws s3 ls s3://hands-on-cloud-example-1

You can get human-readable object sizes by using the --human-readable argument:

aws s3 ls s3://hands-on-cloud-example-1 --human-readable

You can use the --recursive argument to list all S3 objects within the S3 bucket or having the same prefix:

# Recursive listing of the entire S3 bucket 
aws s3 ls s3://hands-on-cloud-example-1 --recursive

# Recursive listing for the S3 prefix 
aws s3 ls s3://hands-on-cloud-example-1/directory --recursive

Rename S3 object

To rename an S3 object using AWS CLI, you need to use the aws s3 mv command:

aws s3 mv s3://hands-on-cloud-example-1/image.png s3://hands-on-cloud-example-1/image2.png

Note: you can not only rename S3 objects but also change their storage class and encryption, for example:

aws s3 mv s3://hands-on-cloud-example-1/image2.png s3://hands-on-cloud-example-1/image.png \
   --sse AES256 --storage-class ONEZONE_IA

Rename S3 “directory”

To rename an S3 “directory” using AWS CLI, you need to use the aws s3 mv command:

aws s3 mv s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-1/directory2 --recursive

Note: the --recursive argument does not move empty “directories” within the specified S3 “directory,” so if you’re expecting a complete “directory” move, you might need to recreate empty “directories” in the target directory (aws s3api put-object command) and remove them from the source directory (see the examples below).

Create empty S3 “directory”

To create an empty S3 “directory” using AWS CLI, you need to use the aws s3api put-object command:

aws s3api put-object --bucket hands-on-cloud-example-1 --key directory_name/

Note: the / character in the object name is required to create an empty directory. Otherwise, the command above will create a file object with the name directory_name.

Copy/move files between S3 buckets

To copy files between S3 buckets using AWS CLI, you need to use either the aws s3 cp or aws s3 sync command. To move files between S3 buckets, you need to use the aws s3 mv command.

To copy files between S3 buckets within the same AWS Region:

aws s3 cp s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive

If the source and destination S3 buckets are located in different AWS regions, you need to use the --source-region and --region (specifies the destination S3 bucket’s region) arguments:

aws s3 cp s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive \
   --region us-west-2 --source-region us-east-1

To move objects between S3 buckets within the same region, you need to use the aws s3 mv command:

aws s3 mv s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive

If the source and destination S3 buckets are located in different AWS regions, you need to use the --source-region and --region (specifies the destination S3 bucket’s region) arguments:

aws s3 mv s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive \
   --region us-west-2 --source-region us-east-1

Note: you can use --storage-class and --sse arguments to specify storage class and encryption method in the target S3 bucket

Note: you can use --include and --exclude arguments to select only specific files to be copied/moved from the source S3 bucket

Note: the --recursive argument does not copy/move empty “directories” within the specified S3 prefix, so if you’re expecting a complete “directory” copy/move, you might need to recreate empty “directories” in the target directory (aws s3api put-object command). See the examples above.

To synchronize “directories” between S3 buckets, you need to use the aws s3 sync command, for example:

aws s3 sync s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory

Note: you can use arguments like --storage-class, --sse, --include and --exclude with the aws s3 sync command:

aws s3 sync s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory \
   --region us-west-2 --source-region us-east-1 --sse AES256

Deleting S3 objects

To delete S3 objects using AWS CLI, you need to use the aws s3 rm command:

aws s3 rm s3://hands-on-cloud-example-1/image.png

Note: you can use the --recursive, --include, and --exclude arguments with the aws s3 rm command.

Generate pre-signed URLs for S3 object

To generate the pre-signed URL for the S3 object using AWS CLI, you need to use the aws s3 presign command:

aws s3 presign s3://hands-on-cloud-example-1/image.png --expires-in 604800

Note: the --expires-in argument defines the pre-signed URL expiration time in seconds (default: 3600, maximum: 604800).

Now, you can use the generated pre-signed URL to download the S3 object using a web browser or the wget command, for example:

wget generated_presigned_url

Or replace the S3 object using the curl command:

curl -H "Content-Type: image/png" -T image.png generated_presigned_url

Note: if you’re getting the “The request signature we calculated does not match the signature you provided. Check your key and signing method.” error message, you have to regenerate your AWS Access Key and AWS Secret Key. The primary reason for this error is that the AWS credentials contain certain characters like +, %, and /.

Summary

In this article, we’ve covered how to use AWS CLI to manage Amazon S3 buckets and objects with lots of examples that you can use during your day-to-day AWS activities.
