The AWS Command Line Interface (CLI) is a unified tool for managing your AWS services from your terminal shell. With just one tool to download and configure, you can control multiple AWS services from the command line and automate them through shell scripts. The AWS CLI provides a set of simple file commands for efficient file transfer to and from Amazon S3, and it covers most of the functionality of the Amazon S3 web console, so you can use it instead of the console for most day-to-day tasks.
This AWS CLI tutorial contains many AWS CLI S3 command execution examples you can use during your day-to-day AWS activities to manage Amazon S3 buckets and objects.
In addition to the AWS CLI, we strongly recommend installing aws-shell. This command-line shell program provides convenience and productivity features for both new and advanced AWS Command Line Interface users (see the short usage example after the feature list). Key features include the following:
- Fuzzy auto-completion for commands, options, and resources
- Dynamic in-line documentation
- Integration with other OS shell commands
- Export executed commands to a text editor
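Once installed, you start the shell with the aws-shell command and then type AWS CLI commands without the leading aws prefix (the bucket name below is just a placeholder):
# Launch the interactive shell
aws-shell
# Inside the shell, commands are entered without the "aws" prefix, e.g.:
#   s3 ls
#   s3 mb s3://my-example-bucket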
And lastly, we recommend you install the Session Manager plugin for the AWS CLI, which allows you to use the AWS CLI to start and end sessions that connect you to your EC2 instances.
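For example, once the plugin is installed, you can open an interactive session on an instance (the instance ID below is a placeholder):
# Start a Session Manager session with an EC2 instance
aws ssm start-session --target i-0123456789abcdef0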
Did you know? The AWS CLI is not the only way to manage S3 buckets. With a little Python knowledge, you can start working with S3 in Python using the Boto3 library.
Installation process
You can install AWS CLI on Windows, macOS, and Linux. In addition to that, Amazon Linux AMI already contains AWS CLI as a part of the OS distribution, so you don’t have to install it manually.
Note: We suggest configuring the AWS CLI to use the required region for your buckets. You can set the region for the current shell session through the AWS_REGION (or AWS_DEFAULT_REGION) environment variable, persist it with the aws configure set region command, or enter it at the "Default region name" prompt of the interactive aws configure command. If no region is configured, S3 commands are sent to the us-east-1 endpoint by default, so it is worth setting the region explicitly.
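For example (us-west-2 here is just an illustration):
# Set the region for the current shell session
export AWS_REGION=us-west-2
# Or persist it in ~/.aws/config for the default profile
aws configure set region us-west-2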
Note: You can keep the AWS CLI configuration in a custom location if you want all of its files in one place. The AWS_CONFIG_FILE and AWS_SHARED_CREDENTIALS_FILE environment variables tell the CLI where to look for its config and credentials files instead of the default ~/.aws directory. Changing this location is unnecessary for most setups and is only useful if you want to keep everything in one place.
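For example (the ~/awscli folder is just an illustration):
# Point the AWS CLI at a custom config and credentials location
export AWS_CONFIG_FILE=~/awscli/config
export AWS_SHARED_CREDENTIALS_FILE=~/awscli/credentials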
To simplify managing access to multiple AWS environments, we suggest you use aws-vault.
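For example, aws-vault can execute any AWS CLI command with temporary credentials for a given profile (my-profile is a placeholder):
# Run an S3 listing using credentials stored by aws-vault
aws-vault exec my-profile -- aws s3 ls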
Windows
For modern Windows distributions, we recommend you use the Chocolatey package manager to install AWS CLI:
# AWS CLI
choco install awscli
# Session Manager plugin
choco install awscli-session-manager
# AWS-Shell
choco install python
choco install pip
pip install aws-shell
macOS
To install AWS CLI on macOS, we recommend you use the brew package manager:
# AWS CLI
brew install awscli
# Session Manager plugin
brew install --cask session-manager-plugin
# AWS-Shell
pip install aws-shell
Linux
Depending on your Linux distribution, the installation steps are different.
CentOS, Fedora, RHEL
For YUM-based distributions (CentOS, Fedora, RHEL), you can use the following installation steps:
# AWS CLI
sudo yum update
sudo yum install wget -y
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum install epel-release-latest-7.noarch.rpm
sudo yum -y install python-pip
sudo pip install awscli
# Session Manager plugin
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/linux_64bit/session-manager-plugin.rpm" \
-o "session-manager-plugin.rpm"
sudo yum install -y session-manager-plugin.rpm
# AWS-Shell
pip install aws-shell
Debian, Ubuntu
For APT-based distributions (Debian, Ubuntu), you can use slightly different installation steps:
# AWS CLI
sudo apt-get install python-pip
sudo pip install awscli
# Session Manager plugin
curl "https://s3.amazonaws.com/session-manager-downloads/plugin/latest/ubuntu_64bit/session-manager-plugin.deb" \
-o "session-manager-plugin.deb"
sudo dpkg -i session-manager-plugin.deb
# AWS-Shell
pip install aws-shell
Other Linux distributions
For other Linux distributions, you can use manual AWS CLI installation steps.
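For example, the official AWS CLI v2 bundle for Linux x86_64 can be downloaded and installed as follows:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install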
Difference between AWS s3, s3api, and s3control
The main difference between the s3, s3api, and s3control commands is that the s3 commands are high-level commands built on top of the lower-level s3api commands, which are driven by JSON models.
s3 | s3api | s3control
---|---|---
These commands are designed to make it easier to manage your S3 files using the CLI. | These commands are generated from JSON models, which directly model the APIs of the various AWS services. This allows the CLI to generate commands that are a near one-to-one mapping of the service’s API. | These commands allow you to manage the Amazon S3 control plane.
aws s3 ls | aws s3api list-objects-v2 --bucket my-bucket | aws s3control list-jobs --account-id 123456789012
If you’d like to see how to use these commands to interact with VPC endpoints, check out our Automating Access To Multi-Region VPC Endpoints using Terraform article.
AWS S3 CLI Commands
Usually, you use AWS CLI S3 commands to manage S3 when you need to automate S3 operations in scripts or in your CI/CD automation pipeline. For example, you can configure a Jenkins pipeline to execute AWS CLI commands against any AWS account in your environment.
You can check the results of executing AWS CLI commands in the AWS Management Console.
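As a sketch, a pipeline shell step could publish build artifacts to a bucket like this (the bucket name and the BUILD_NUMBER variable are placeholders supplied by your CI system):
# Upload the build output to a per-build prefix in S3
aws s3 sync ./build "s3://my-artifacts-bucket/builds/${BUILD_NUMBER}" --delete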
Managing S3 buckets
CLI S3 commands allow you to execute create, list, and delete operations for S3 bucket management.
Create S3 bucket
To create an S3 bucket using AWS CLI, you need to use the aws s3 mb (make bucket) command:
aws s3 mb s3://hands-on-cloud-example-1
Note: the S3 bucket name must always start with the s3:// prefix.
Open a terminal window and type in the following command: aws s3 mb s3://[bucketname]. Replace [bucketname] with the name you want to use for your bucket. Once the bucket has been created, you can upload files using the aws s3 cp command.
For example, to upload only files with the .jpg extension from an Images folder, you would use the following command:
aws s3 cp Images/ s3://[bucketname] --recursive --exclude "*" --include "*.jpg"
Note that the --recursive flag in the above command is necessary to upload files in subfolders, and the --exclude "*" filter ensures that only the files matched by the --include pattern are uploaded.
To create an S3 bucket using CLI in a specific AWS region, you need to add the --region argument to the previous command:
aws s3 mb s3://hands-on-cloud-example-2 --region us-east-2
List S3 buckets
To list S3 buckets using CLI, you can use either the aws s3 ls or the aws s3api list-buckets command:
aws s3 ls
The aws s3api list-buckets command produces JSON as its output:
aws s3api list-buckets
Using the aws s3api command allows you to use the --query parameter to perform JMESPath queries for specific members and values in the JSON output.
Let’s output only buckets whose names start with hands-on-cloud-example:
aws s3api list-buckets --query \
'Buckets[?starts_with(Name, `hands-on-cloud-example`) == `true`].Name'
We can extend the previous command to output only the S3 bucket names:
aws s3api list-buckets --query \
'Buckets[?starts_with(Name, `hands-on-cloud-example`) == `true`].[Name]' \
--output text
Both commands return a list of S3 buckets in your AWS account but differ in the output format. If you want to see a list of bucket names without any additional detail, use the aws s3 ls command. Alternatively, if you want machine-readable information about each bucket, such as the creation date, use the aws s3api list-buckets command, which returns a JSON object describing every bucket. To save the command output to a file, navigate to the local directory where you want to store it and run:
aws s3api list-buckets > output.txt
This creates a text file (output.txt) in your local directory containing the JSON description of all the buckets.
Delete S3 bucket
To delete an S3 bucket using CLI, you can use either the aws s3 rb or the aws s3api delete-bucket command:
aws s3 rb s3://hands-on-cloud-example-1
Note: you can delete only empty S3 buckets.
If your S3 bucket contains objects, you can use the --force argument to clean up the bucket before deletion:
aws s3 rb s3://hands-on-cloud-example-2 --force
Note: the --force argument does not delete versioned objects, which causes the bucket deletion to fail.
To delete an S3 bucket with object versioning enabled, you have to clean it up first:
export bucket_name="hands-on-cloud-versioning-enabled"
# Deleting objects versions
aws s3api delete-objects \
--bucket $bucket_name \
--delete "$(aws s3api list-object-versions \
--bucket $bucket_name \
--output=json \
--query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
# Deleting delete markers
aws s3api delete-objects \
--bucket $bucket_name \
--delete "$(aws s3api list-object-versions \
--bucket $bucket_name \
--output=json \
--query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"
# Deleting S3 bucket
aws s3 rb s3://$bucket_name
Managing S3 Objects
This section will cover the most common CLI operations for managing S3 objects.
Upload file to S3 bucket
To upload a file to the S3 bucket using CLI, you need to use the aws s3 cp command:
aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1
The aws s3 cp command takes the path of the file on your local system as the source and the S3 bucket (optionally with a key prefix or a new object name) as the destination. The uploaded object stays in that bucket until you move or delete it.
If required, you can change the uploaded S3 object name during the upload operation:
aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1/image.png
In addition to that, you can specify the S3 storage class during upload:
aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1 --storage-class ONEZONE_IA
Supported values for the --storage-class argument are:
- STANDARD – default, Amazon S3 Standard
- REDUCED_REDUNDANCY – Amazon S3 Reduced Redundancy Storage
- STANDARD_IA – Amazon S3 Standard-Infrequent Access
- ONEZONE_IA – Amazon S3 One Zone-Infrequent Access
- INTELLIGENT_TIERING – Amazon S3 Intelligent-Tiering
- GLACIER – Amazon S3 Glacier
- DEEP_ARCHIVE – Amazon S3 Glacier Deep Archive
If the file has to be encrypted with default SSE encryption, you need to provide the --sse argument:
aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1 --sse AES256
For AWS KMS encryption, use the following command:
aws s3 cp ./Managing-AWS-IAM-using-Terraform.png s3://hands-on-cloud-example-1 --sse 'aws:kms' --sse-kms-key-id KMS_KEY_ID
Note: replace KMS_KEY_ID in the command above with your own KMS key ID.
Upload multiple files to the S3 bucket
To upload files recursively to the S3 bucket, you need to use either the aws s3 cp command with the --recursive argument or the aws s3 sync command.
aws s3 cp ./directory s3://hands-on-cloud-example-1/directory --recursive
Note: the command above will not upload empty directories if they exist within the ./directory path (it will not create the S3 objects to represent them). Feel free to check out uploaded files using the AWS console.
You can use the same arguments as in the examples above to set up the S3 storage class or encryption if required.
In addition to that, you can use the --include and --exclude arguments to specify a set of files to upload.
For example, if you need to copy only .png files from the ./directory, you can use the following command:
aws s3 cp ./directory s3://hands-on-cloud-example-1/directory --recursive --exclude "*" --include "*.png"
You can achieve the same result by using the aws s3 sync command:
aws s3 sync ./directory s3://hands-on-cloud-example-1/directory
Note: the aws s3 sync command supports the same arguments for setting up the S3 storage class and encryption.
The benefit of using the aws s3 sync command is that, at the next execution, it uploads only new or changed files from your local file system.
You can use the --delete argument to delete objects from the S3 bucket if they were deleted on your local file system (complete synchronization):
aws s3 sync ./directory s3://hands-on-cloud-example-1/directory --delete
Download the file from the S3 bucket
To download a single file from the S3 bucket using CLI, you need to use the aws s3 cp command:
aws s3 cp s3://hands-on-cloud-example-1/image.png ./image.png
Here, the source is the S3 object URI and the destination is a path on your local file system. Once the download is complete, you can access the file at that local path.
Download multiple files from the S3 bucket
To download multiple files from the S3 bucket using CLI, you need to use either the aws s3 cp or the aws s3 sync command:
aws s3 cp s3://hands-on-cloud-example-1/directory ./directory --recursive
Note: if the S3 bucket contains empty “directories” within the /directory prefix, the execution of the command above will create empty directories on your local file system.
Similarly to the upload operation, you can synchronize all objects from the S3 bucket within the common prefix to your local directory:
aws s3 sync s3://hands-on-cloud-example-1/directory ./directory
Note: for both commands (aws s3 cp and aws s3 sync) you can use the --include and --exclude arguments to download or synchronize only a specific set of files.
Note: using the --delete argument with the aws s3 sync command allows you to get a complete mirror of the S3 objects prefix in your local folder.
List files in the S3 bucket
To list files in the S3 bucket using CLI, you need to use the aws s3 ls command:
aws s3 ls s3://hands-on-cloud-example-1
You can get human-readable object sizes by using the --human-readable argument:
aws s3 ls s3://hands-on-cloud-example-1 --human-readable
You can use the --recursive argument to list all S3 objects within the S3 bucket or within the same prefix:
# Recursive listing of the entire S3 bucket
aws s3 ls s3://hands-on-cloud-example-1 --recursive
# Recursive listing for the S3 prefix
aws s3 ls s3://hands-on-cloud-example-1/directory --recursive
Rename S3 object
To rename an S3 object using AWS CLI, you need to use the aws s3 mv command:
aws s3 mv s3://hands-on-cloud-example-1/image.png s3://hands-on-cloud-example-1/image2.png
Note: you can not only rename S3 objects but also change their storage class and encryption, for example:
aws s3 mv s3://hands-on-cloud-example-1/image2.png s3://hands-on-cloud-example-1/image.png \
--sse AES256 --storage-class ONEZONE_IA
Rename S3 “directory”
To rename an S3 “directory” using AWS CLI, you need to use the aws s3 mv command:
aws s3 mv s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-1/directory2 --recursive
Note: the --recursive argument does not move empty “directories” within the specified S3 “directory,” so if you’re expecting a complete “directory” move, you might need to recreate empty “directories” in the target directory (aws s3api put-object command) and remove them from the source directory (see the examples below).
Create an empty S3 “directory”
To create an empty S3 “directory” using AWS CLI, you need to use the aws s3api put-object command:
aws s3api put-object --bucket hands-on-cloud-example-1 --key directory_name/
Note: the trailing / character in the object key is required to create an empty “directory.” Otherwise, the command above will create a file object named directory_name.
Copy/move files between S3 buckets
To copy files between S3 buckets using AWS CLI, you need to use either the aws s3 cp or the aws s3 sync command. To move files between S3 buckets, you need to use the aws s3 mv command.
To copy files between S3 buckets within the same AWS Region:
aws s3 cp s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive
If the source and destination S3 buckets are located in different AWS regions, you need to use the --source-region and --region (the destination S3 bucket’s region) arguments:
aws s3 cp s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive \
--region us-west-2 --source-region us-east-1
To move objects between S3 buckets within the same region, you need to use the aws s3 mv command:
aws s3 mv s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive
Similarly, if the source and destination S3 buckets are located in different AWS regions, you need to use the --source-region and --region (the destination S3 bucket’s region) arguments:
aws s3 mv s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory --recursive \
--region us-west-2 --source-region us-east-1
Note: you can use the --storage-class and --sse arguments to specify the storage class and encryption method in the target S3 bucket.
Note: you can use the --include and --exclude arguments to select only specific files to be copied/moved from the source S3 bucket.
Note: the --recursive argument does not copy/move empty “directories” within the specified S3 prefix, so if you’re expecting a complete “directory” copy/move, you might need to recreate empty “directories” in the target directory (aws s3api put-object command). See the examples above.
To synchronize “directories” between S3 buckets, you need to use the aws s3 sync command, for example:
aws s3 sync s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory
Note: you can use arguments like --storage-class, --sse, --include, and --exclude with the aws s3 sync command:
aws s3 sync s3://hands-on-cloud-example-1/directory s3://hands-on-cloud-example-2/directory \
--region us-west-2 --source-region us-east-1 --sse AES256
Deleting S3 objects
To delete S3 objects using AWS CLI, you need to use the aws s3 rm command:
aws s3 rm s3://hands-on-cloud-example-1/image.png
Note: you can use the --recursive, --include, and --exclude arguments with the aws s3 rm command, as shown in the sketch below. Feel free to check out the result of the command in the AWS console.
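For example, to delete only the .png objects under a prefix from the earlier examples, you could run:
aws s3 rm s3://hands-on-cloud-example-1/directory --recursive --exclude "*" --include "*.png"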
Generate pre-signed URLs for S3 object
To generate a pre-signed URL for an S3 object using AWS CLI, you need to use the aws s3 presign command:
aws s3 presign s3://hands-on-cloud-example-1/image.png --expires-in 604800
Note: the --expires-in argument sets how long the pre-signed URL remains valid, in seconds; the default is 3600 (one hour), and the maximum is 604800 (seven days).
Now, you can use the generated pre-signed URL to download the S3 object using a web browser or the wget command, for example:
wget generated_presigned_url
Or replace the S3 object using the curl command:
curl -H "Content-Type: image/png" -T image.png generated_presigned_url
Note: if you’re getting the “The request signature we calculated does not match the signature you provided. Check your key and signing method.” error message, you may need to regenerate your AWS Access Key and Secret Key. A common cause of this error is AWS credentials containing characters like +, %, and /.
FAQ
aws s3 sync vs. aws s3 cp
The aws s3 cp command is primarily suited for copying single objects to, from, or between S3 buckets. The aws s3 sync command streamlines the copy operation (mirroring) of a set of files or objects grouped by the same file path on the file system or the same object prefix in the S3 bucket.
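A quick side-by-side sketch (the bucket and file names are placeholders):
# Copy a single file to a bucket
aws s3 cp ./report.pdf s3://my-bucket/reports/report.pdf
# Mirror a local directory to a prefix; only new or changed files are transferred
aws s3 sync ./reports s3://my-bucket/reports --delete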
Summary
In this article, we’ve covered how to use AWS CLI to manage Amazon S3 buckets and objects, with many examples you can use during your day-to-day AWS activities.