Note: This article has also been featured on geeksforgeeks.com.
Requests is a versatile HTTP library in Python with various applications. One of its applications is downloading a file from the web using the file's URL.
Installation: First of all, you need to install the requests library. You can install it directly using pip by typing the following command:
pip install requests
Or download it directly from here and install manually.
# import the requests library
import requests

# URL of the image to be downloaded
image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

# send an HTTP GET request to the server and save
# the HTTP response in a response object called r
r = requests.get(image_url)

# save the received content as a PNG file in binary mode;
# r.content holds the raw bytes of the response body
with open("python_logo.png", 'wb') as f:
    f.write(r.content)
The small piece of code above downloads the following image from the web. Now check your local directory (the folder where this script resides), and you will find the image:
All we need is the URL of the image source. (You can get the URL of the image source by right-clicking the image and selecting the View Image option.)
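Before writing the response to disk, it is also worth checking that the request actually succeeded. Here is a minimal sketch (reusing the same logo URL) that uses requests' raise_for_status() to fail loudly on HTTP errors instead of silently saving an error page:

import requests

image_url = "https://www.python.org/static/community_logos/python-logo-master-v3-TM.png"

r = requests.get(image_url)

# raise an HTTPError for 4xx/5xx responses instead of
# writing an error page to disk as if it were the image
r.raise_for_status()

# optionally confirm we actually received an image
print(r.headers.get('Content-Type'))  # e.g. image/png

with open("python_logo.png", 'wb') as f:
    f.write(r.content)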
Download large files
The HTTP response content (r.content) is a single bytes object holding the entire file data. For large files, it isn't practical to hold all of the data in memory at once, so we make some changes to our program:
- Since all the file data can't be held in memory at once, we use the r.iter_content method to load the data in chunks, specifying the chunk size:
r = requests.get(URL, stream=True)
Setting the stream parameter to True causes only the response headers to be downloaded, while the connection remains open. This avoids reading the whole content into memory at once for large responses. A fixed-size chunk is loaded each time r.iter_content is iterated.
Here is an example:
import requests

file_url = "http://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"

r = requests.get(file_url, stream=True)

with open("python.pdf", "wb") as pdf:
    # write the response to file one chunk at a time
    for chunk in r.iter_content(chunk_size=1024):
        if chunk:
            pdf.write(chunk)
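Because stream=True fetches only the headers up front, you can also inspect the response before committing to the download. A minimal sketch, reusing file_url from the example above; note that not every server sends a Content-Length header:

import requests

file_url = "http://codex.cs.yale.edu/avi/db-book/db4/slide-dir/ch1-2.pdf"

r = requests.get(file_url, stream=True)

# only the headers have been downloaded at this point;
# the body is fetched lazily as we iterate over it
size = r.headers.get('Content-Length')
print("File size:", size if size else "unknown")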
In the next example, we are interested in downloading all the video lectures available on a single web-page. All the archives of these lectures are available here. So we first scrape the webpage to extract all the video links and then download the videos one by one.
import requests
from bs4 import BeautifulSoup

'''
URL of the archive web-page which provides links to all
the video lectures. It would be tiring to download each
video manually. In this example, we first crawl the webpage
to extract all the links and then download the videos.
'''

# specify the URL of the archive here
archive_url = "http://www-personal.umich.edu/~csev/books/py4inf/media/"

def get_video_links():
    # create response object
    r = requests.get(archive_url)

    # create a BeautifulSoup object
    soup = BeautifulSoup(r.content, 'html5lib')

    # find all links on the web-page
    links = soup.find_all('a')

    # filter the links ending with .mp4
    video_links = [archive_url + link['href']
                   for link in links
                   if link.get('href', '').endswith('mp4')]

    return video_links

def download_video_series(video_links):
    # iterate through all links in video_links
    # and download them one by one
    for link in video_links:
        # obtain the filename by splitting the URL
        # and taking the last part
        file_name = link.split('/')[-1]
        print("Downloading file: %s" % file_name)

        # create response object
        r = requests.get(link, stream=True)

        # start the download
        with open(file_name, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024*1024):
                if chunk:
                    f.write(chunk)

        print("%s downloaded!\n" % file_name)

    print("All videos downloaded!")

if __name__ == "__main__":
    # get all video links
    video_links = get_video_links()

    # download all videos
    download_video_series(video_links)
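As an aside, when you only need to copy the raw response straight to disk, the standard library's shutil.copyfileobj can replace the manual chunk loop. A minimal sketch, assuming a streaming request as above (file_url and output.bin are placeholders for illustration):

import shutil
import requests

file_url = "http://example.com/some/file.bin"  # hypothetical URL for illustration

with requests.get(file_url, stream=True) as r:
    # decode gzip/deflate transfer encoding, if any,
    # before copying the raw stream to disk
    r.raw.decode_content = True
    with open("output.bin", 'wb') as f:
        shutil.copyfileobj(r.raw, f)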
The advantages of using the Requests library to download web files are:
- One can easily download web directories by iterating recursively through the website (see the sketch after this list)!
- This method is browser-independent and much faster!
- One can simply scrape a web page to get all the file URLs on it and hence download all the files in a single command, as the video-lecture example above demonstrates.
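To illustrate the first point, here is a minimal sketch of a recursive crawl that downloads every file under a directory-style listing. The start_url is hypothetical, and the depth limit and the stay-below-start check are my own guard rails; a real crawler would also need politeness (rate limiting, robots.txt) on top of this:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

start_url = "http://example.com/files/"  # hypothetical directory listing

def crawl(url, depth=0, max_depth=2):
    # stop recursing once the depth limit is reached
    if depth > max_depth:
        return
    r = requests.get(url)
    soup = BeautifulSoup(r.content, 'html5lib')
    for link in soup.find_all('a'):
        target = urljoin(url, link.get('href', ''))
        # only follow links that stay below the start URL
        # (this also skips "parent directory" links)
        if not target.startswith(start_url):
            continue
        if target.endswith('/') and target != url:
            # sub-directory: recurse into it
            crawl(target, depth + 1, max_depth)
        elif not target.endswith('/'):
            # file: stream it to disk
            file_name = target.split('/')[-1]
            with requests.get(target, stream=True) as resp:
                with open(file_name, 'wb') as f:
                    for chunk in resp.iter_content(chunk_size=1024*1024):
                        if chunk:
                            f.write(chunk)

crawl(start_url)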
I hope you enjoyed reading it! 🙂