The trick was to set stream=True in the get() call. After this, the Python process stopped hogging memory (it stays around 30 KB regardless of the size of the downloaded file). Thank you @danodonovan for your syntax; I use it here:
    import requests

    def download_file(url):
        # Name the local file after the last path segment of the URL
        local_filename = url.split('/')[-1]
        # NOTE the stream=True parameter: the body is not read into memory at once
        r = requests.get(url, stream=True)
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)
                    f.flush()
        return local_filename

See http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow for further reference.
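A quick usage sketch, assuming the function above is in scope (the URL here is a hypothetical example, not from the original answer):

    filename = download_file('http://example.com/archive.zip')
    print('Downloaded to', filename)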
python -m SimpleHTTPServer 8080
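This one-liner serves the current directory over HTTP on port 8080. SimpleHTTPServer is the Python 2 module name; on Python 3 the equivalent is:

    python -m http.server 8080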