File upload using requests library in Python
I have the following script that allows me to upload files to usersfiles.com. It does not work with large files (e.g. 1 GB). What change do I need to make?
import requests
import random
import re

filehandle = open("menu.avi", "rb")
resp = requests.get("https://usersfiles.com/")
sess_id = re.search('sess_id.*=?"(.*)?"', str(resp.text)).group(1)
srv_tmp_url = re.search('srv_tmp_url.*=?"(.*)?"', str(resp.text)).group(1)
upload_type = re.search('upload_type.*=?"(.*)?"', str(resp.text)).group(1)
uid = ''
for i in range(0, 12):
    uid = uid + str(random.randint(0, 10))
url2 = "https://up11.usersfiles.com/cgi-bin/upload.cgi?upload_id=" + uid + "&js_on=1&utype=reg&upload_type=" + upload_type
r = requests.post(url2,
                  data={"upload_type": upload_type,
                        "sess_id": sess_id,
                        "srv_tmp_url": srv_tmp_url},
                  files={"file_0": filehandle})
link_usersfiles = re.search('name=.fn.>(.*?)<', str(r.text)).group(1)
This script generates the following error:
body.write(data)
MemoryError
By default, when uploading files, requests reads the entire file into memory, and is therefore liable to run out of memory when uploading large files. The easiest way around this is to install requests-toolbelt, which can stream file uploads.
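The package is available from PyPI and can be installed with pip install requests-toolbelt.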
For example, you could use this:
import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

# ... code preparing the upload ...

m = MultipartEncoder(
    fields={'upload_type': upload_type,
            'sess_id': sess_id,
            'file_0': ('filename', filehandle, 'text/plain')}
)
r = requests.post(url2, data=m, headers={'Content-Type': m.content_type})
For further information, see https://toolbelt.readthedocs.org/en/latest/uploading-data.html
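For completeness, here is a minimal sketch of how the whole script from the question might look with MultipartEncoder, assuming the usersfiles.com endpoints, field names, and regular expressions from the question are still valid; the filename and MIME type are placeholders you would adjust for your own file:

import random
import re

import requests
from requests_toolbelt.multipart.encoder import MultipartEncoder

# Scrape the session parameters from the landing page, as in the original script.
resp = requests.get("https://usersfiles.com/")
sess_id = re.search('sess_id.*=?"(.*)?"', resp.text).group(1)
srv_tmp_url = re.search('srv_tmp_url.*=?"(.*)?"', resp.text).group(1)
upload_type = re.search('upload_type.*=?"(.*)?"', resp.text).group(1)

# Build a random 12-digit upload id, as in the original script.
uid = ''.join(str(random.randint(0, 9)) for _ in range(12))
url2 = ("https://up11.usersfiles.com/cgi-bin/upload.cgi?upload_id=" + uid
        + "&js_on=1&utype=reg&upload_type=" + upload_type)

# MultipartEncoder streams the file from disk instead of reading it all into memory,
# so the MemoryError from body.write(data) no longer occurs.
with open("menu.avi", "rb") as filehandle:
    m = MultipartEncoder(fields={
        "upload_type": upload_type,
        "sess_id": sess_id,
        "srv_tmp_url": srv_tmp_url,
        "file_0": ("menu.avi", filehandle, "application/octet-stream"),
    })
    r = requests.post(url2, data=m, headers={"Content-Type": m.content_type})

link_usersfiles = re.search('name=.fn.>(.*?)<', r.text).group(1)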