Table of contents
- Installing ‘curl’ on Linux
- Examples of using ‘curl’
- Related articles
There are several ways to transfer data to and from servers in the Linux terminal, and curl is one of the best tools for the job.
curl is a command-line tool that you can use for transferring data from or to a server. You can use the curl command to download and upload data over any of its supported protocols, including FTP, HTTP, HTTPS, SFTP, and SCP. curl offers a wide range of features, including bandwidth limiting, resumable transfers, user authentication, proxy support, and much more.
Below is the list of protocols currently supported by the curl command:
- HTTP and HTTPS
- FTP, FTPS, and SFTP
- SCP
- IMAP(S), POP3(S), and SMTP(S)
- LDAP(S)
- Telnet and TFTP
Additional features include:
- User and password authentication (Basic, Digest)
- Proxy tunneling
- Resume file transfer operation
- SSL certificates
- HTTP and HTTPS form-based uploads
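The resume feature from the list above can be sketched locally using curl’s file:// protocol, so the example runs without a network connection (all paths are illustrative):

```shell
# Create a "remote" source file and simulate an interrupted download
# that fetched only the first 6 bytes.
printf 'hello world' > /tmp/curl-source.txt
printf 'hello '      > /tmp/curl-partial.txt

# -C - tells curl to look at the output file, figure out how much is
# already there, and transfer only the missing tail, appending it.
curl -s -C - -o /tmp/curl-partial.txt file:///tmp/curl-source.txt

cat /tmp/curl-partial.txt
```

The same -C - flag works with HTTP(S) URLs whenever the server supports byte ranges.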
wget and curl are often compared because their functionality overlaps to some degree. Both tools can retrieve content from the Internet, but wget supports features such as recursive downloads (handy for web scraping) and is generally more user-friendly. wget is considered the better option if you only want to download files in the terminal.
On the other hand, knowing the curl command basics will help you upload and download files using advanced HTTP authentication procedures. Furthermore, wget supports only FTP and HTTP(S), while curl supports many more protocols.
That’s enough theory about the curl command. Now let’s jump straight to the terminal.
Installing ‘curl’ on Linux
If you do not have curl on your Linux system, install it using the following commands (shown for Debian-based distributions such as Ubuntu). Otherwise, skip the installation steps and move on to the examples.
sudo apt-get update
sudo apt-get install curl -y
Now, verify that curl is available on your system by checking its version:
curl --version
Examples of using ‘curl’
We can do a lot of cool things using curl. Let’s take a look at some of them.
Getting your external IP address
There’s an amazing resource on the internet that allows you to get your external IP address – https://ifconfig.me (named after the famous Linux network configuration utility, ifconfig).
If you send an HTTP request to that site using curl, it returns your external IP address in the terminal as a simple string:
curl https://ifconfig.me
So, you can easily put this result into a bash variable:
MY_EXTERNAL_IP=$(curl -s https://ifconfig.me)
echo $MY_EXTERNAL_IP
The echo command prints your external IP address. The -s argument suppresses curl’s download progress output, which would otherwise look like this:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    12  100    12    0     0    181      0 --:--:-- --:--:-- --:--:--   181
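If you want to experiment with this command-substitution pattern without network access, you can point it at a local file through curl’s file:// protocol (the path and sample address below are illustrative; 203.0.113.0/24 is a range reserved for documentation):

```shell
# Local stand-in for https://ifconfig.me.
printf '203.0.113.7' > /tmp/fake-ip.txt

# Same pattern as above, just with a file:// URL instead of https://.
MY_EXTERNAL_IP=$(curl -s file:///tmp/fake-ip.txt)
echo "$MY_EXTERNAL_IP"
```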
Do you want to feel like a hacker and display weather information right in the terminal? You can do that with curl and a terminal-friendly weather service such as wttr.in.
Saving ‘curl’ output to a text file
You can save the output of the curl command to a specified file.
Here’s an example of saving JSON output from the Star Wars demo API (SWAPI) to a file:
curl https://swapi.dev/api/planets/1/ -o Tatooine.json
curl is smart enough to detect binary file downloads. Here’s an example of downloading Terraform, one of the most popular open-source infrastructure-as-code tools:
curl https://releases.hashicorp.com/terraform/0.15.0/terraform_0.15.0_linux_amd64.zip -o terraform_0.15.0_linux_amd64.zip
Downloading multiple files
You can use curl to download multiple files at a time. Just repeat the -o argument for every URL:
curl -o file-1 https://example.com/files/file-1 -o file-2 https://example.com/files/file-2 -o file-3 https://example.com/files/file-3
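The same repeated -o pattern can be tried offline with file:// URLs (the temporary paths below are illustrative):

```shell
# Two local files stand in for the remote URLs.
printf 'first\n'  > /tmp/src-1.txt
printf 'second\n' > /tmp/src-2.txt

# One curl invocation, one -o per URL.
curl -s -o /tmp/file-1 file:///tmp/src-1.txt \
     -o /tmp/file-2 file:///tmp/src-2.txt

cat /tmp/file-1 /tmp/file-2
```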
Limiting the download speed
Another useful feature of curl is the ability to limit the file download speed. You can do that by using the --limit-rate argument and specifying the maximum rate:
curl --limit-rate 1M -O https://releases.hashicorp.com/terraform/0.15.0/terraform_0.15.0_linux_amd64.zip
The given speed is measured in bytes/second, unless a suffix is appended: ‘k’ or ‘K’ counts the number as kilobytes, ‘m’ or ‘M’ makes it megabytes, and ‘g’ or ‘G’ makes it gigabytes.
Downloading URLs list
In this example, we’ll download all the files listed in a text file. To do that, you need to use a combination of the xargs and curl commands:
xargs -n 1 curl -O < urllists.txt
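Here’s an offline sketch of the same xargs pattern, using file:// URLs and a scratch directory (all names are illustrative):

```shell
mkdir -p /tmp/downloads
printf 'one\n' > /tmp/a.txt
printf 'two\n' > /tmp/b.txt

# One URL per line, exactly like the URL list file above.
printf 'file:///tmp/a.txt\nfile:///tmp/b.txt\n' > /tmp/downloads/urls.txt

# -n 1 hands curl one URL at a time; -O names each download after the
# URL's last path segment and saves it into the current directory.
cd /tmp/downloads
xargs -n 1 curl -s -O < urls.txt

cat a.txt b.txt
```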
Basic HTTP authentication
You can use the -u argument to provide a username and password for basic HTTP authentication:
curl -u username:password -O https://example.com/files/README
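Under the hood, -u username:password becomes an `Authorization: Basic <value>` request header, where the value is the base64 encoding of `username:password`. You can preview that value yourself:

```shell
# printf (not echo) avoids a trailing newline sneaking into the encoding.
printf 'username:password' | base64
# dXNlcm5hbWU6cGFzc3dvcmQ=
```

This is also why basic authentication should only ever be sent over HTTPS: base64 is an encoding, not encryption.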
Getting the URL headers
HTTP headers are colon-separated key-value pairs that carry information such as the requested resource’s content type, user agent, encoding, etc. Headers are transferred between the client and the server with every request and response. To get the headers of any website, use the -I argument:
curl -I https://hands-on.cloud
Working with cookies
In subsequent requests to the same website, you may need to reuse the cookies.
To save the cookies received from the web server, use the following command:
curl -s -o /dev/null -c google_cookies.txt 'https://www.google.com'
Here, the arguments are:
- -s – silent mode; curl does not print download progress information
- -o /dev/null – discard the web page output instead of printing it to the terminal
- -c google_cookies.txt – save the received cookies to the file
To send the saved cookies with the next request, use the -b argument:
curl -b google_cookies.txt 'https://www.google.com'
Checking HTTP/2 support
You can use the -I, --http2, and -s arguments together to verify whether the specified site supports HTTP/2:
curl -I --http2 -s https://hands-on.cloud/ | grep HTTP
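If you want to script this check, a small helper (the function name below is my own) can wrap the same pipeline; the last line feeds it a canned status line so the snippet runs without network access:

```shell
# Succeeds (exit 0) when the first header line reports HTTP/2.
is_http2() {
  head -n 1 | grep -q '^HTTP/2'
}

# Live usage (requires network):
#   curl -I --http2 -s https://hands-on.cloud/ | is_http2 && echo "HTTP/2 supported"

# Offline demonstration with a canned response status line:
printf 'HTTP/2 200\ncontent-type: text/html\n' | is_http2 && echo "HTTP/2 supported"
```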
In this article, we covered the most commonly used examples of the curl command. We hope this article helps you get started with this powerful utility. If you liked the article, please help us spread it to the world!