Python CSV parsing fills up memory
I have a CSV file with one million rows. I am trying to parse the file and insert the rows into a database.
open(file, "rb") csvfile: re = csv.dictreader(csvfile) row in re: //insert row['column_name'] db
For CSV files below 2 MB this works fine, but anything larger ends up eating all the memory. Is it because I store DictReader's contents in a list called "re" and am not able to loop over such a huge list? I need access to the CSV file's column names, which is why I chose DictReader, since it provides column-level access to CSV files. Can anyone tell me why this is happening and how it can be avoided?
DictReader does not load the whole file into memory but reads it in chunks, as explained in this answer suggested by DhruvPathak.
But depending on the database engine, the actual write to disk may only happen at commit. That means the database (and not the CSV reader) keeps the data in memory and eventually exhausts it.
So you should try to commit every n records, with n typically between 10 and 1000 depending on the size of the lines and the available memory.
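For example, here is a minimal sketch of batched commits using sqlite3; the CSV path, database file, table name, column name, and the BATCH_SIZE value are placeholders for illustration, not details from the original question:

import csv
import sqlite3

BATCH_SIZE = 500  # commit every n rows; tune between roughly 10 and 1000

conn = sqlite3.connect("data.db")               # placeholder database file
cur = conn.cursor()

with open("huge.csv", newline="") as csvfile:   # placeholder CSV path (Python 3 text mode)
    reader = csv.DictReader(csvfile)            # yields one row at a time, not a list
    for i, row in enumerate(reader, start=1):
        cur.execute("INSERT INTO my_table (column_name) VALUES (?)",
                    (row["column_name"],))      # placeholder table and column names
        if i % BATCH_SIZE == 0:
            conn.commit()                       # flush pending rows so they do not pile up in memory

conn.commit()                                   # commit the final partial batch
conn.close()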