Tags: python + 2gb + http + requests


  1. The trick was to set stream=True in the get(). After this, the Python process stopped sucking memory (it stays around 30 KB regardless of the size of the downloaded file). Thank you @danodonovan for your syntax; I use it here:

    import requests

    def download_file(url):
        local_filename = url.split('/')[-1]
        # NOTE the stream=True parameter: the body is fetched lazily in chunks
        r = requests.get(url, stream=True)
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)
                    f.flush()
        return local_filename
    See http://docs.python-requests.org/en/latest/user/advanced/#body-content-workflow for further reference.
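    For reference, a minimal usage sketch (the URL below is a placeholder, not from the original post):

        # Placeholder URL for illustration only.
        filename = download_file('http://example.com/big-file.zip')
        print('downloaded to', filename)

    For multi-gigabyte downloads, passing a larger chunk_size (e.g. 8192) to iter_content reduces the number of write calls while keeping memory usage flat.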
    2014-04-19 by klotz
