
Timeout A File Download With Python Urllib?

Python beginner here. I want to be able to time out my download of a video file if the process takes longer than 500 seconds.

import urllib
try:
    urllib.urlretrieve('http://www.

Solution 1:

A better way is to use requests, which lets you stream the response and easily check for timeouts:

import requests

# Make the actual request; set the timeout for no data to 10 seconds
# and enable streaming so we don't have to keep the whole file in memory
response = requests.get('http://www.videoURL.mp4', timeout=10, stream=True)

# Open the output file and make sure we write in binary mode
with open('filename.mp4', 'wb') as fh:
    # Walk through the response in chunks of 1024 * 1024 bytes, i.e. 1 MiB
    for chunk in response.iter_content(1024 * 1024):
        # Write the chunk to the file
        fh.write(chunk)
        # Optionally we can check here if the download is taking too long
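Note that the timeout=10 argument only bounds how long requests waits on each individual socket operation, not the download as a whole, so an explicit wall-clock check is what actually enforces a total limit. A minimal sketch of such a check, assuming the 500-second overall budget from the question (the constant and variable names are illustrative):

import time
import requests

MAX_SECONDS = 500  # overall time budget for the download (from the question)
start = time.monotonic()

response = requests.get('http://www.videoURL.mp4', timeout=10, stream=True)
with open('filename.mp4', 'wb') as fh:
    for chunk in response.iter_content(1024 * 1024):
        fh.write(chunk)
        # Abort once the download as a whole has exceeded the budget
        if time.monotonic() - start > MAX_SECONDS:
            raise TimeoutError('download exceeded %s seconds' % MAX_SECONDS)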

Solution 2:

Although urlretrieve does not have this feature, you can still set the default timeout (in seconds) for all new socket objects. Note that this timeout applies to each blocking socket operation, not to the download as a whole:

import socket
import urllib.request  # on Python 2 this was plain urllib

# Every new socket, including the ones urlretrieve creates, gets this timeout
socket.setdefaulttimeout(15)

try:
    urllib.request.urlretrieve("http://www.videoURL.mp4", "filename.mp4")
except Exception as e:
    print("error:", e)

Solution 3:

urlretrieve does not have that option, but you can easily do the same thing with urlopen and write the result to a file yourself, like so:

import urllib.request

request = urllib.request.urlopen("http://www.videoURL.mp4", timeout=500)
with open("filename.mp4", 'wb') as f:
    try:
        f.write(request.read())
    except Exception as e:
        print("error:", e)

That's if you are using Python 3, where urlopen lives in urllib.request. If you are using Python 2, you should use urllib2.urlopen instead.
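One caveat: request.read() buffers the entire file in memory, and urlopen's timeout bounds the connection attempt and each individual socket read rather than the total download time. A streaming variant (a sketch, using the same placeholder URL) could look like:

import shutil
import urllib.request

# The timeout covers the connection and each socket read, not the total time
with urllib.request.urlopen("http://www.videoURL.mp4", timeout=500) as response:
    with open("filename.mp4", "wb") as f:
        # Copy the response to disk in chunks instead of buffering it all
        shutil.copyfileobj(response, f)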
