Python CSV parsing fills up memory


I have a CSV file with one million rows. I am trying to parse the file and insert the rows into a database.

    open(file, "rb") csvfile:          re = csv.dictreader(csvfile)         row in re:         //insert row['column_name'] db 

For CSV files below 2 MB this works, but anything larger ends up eating all my memory. I assume this is because it stores the DictReader's contents in a list called "re", and I am not able to loop over such a huge list. I need to access the CSV file's column names, which is why I chose DictReader, since it provides column-level access to the CSV file. Can anyone tell me why this is happening and how it can be avoided?

The DictReader does not load the whole file into memory; it reads it in chunks, as explained in the answer suggested by dhruvpathak.
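A quick way to convince yourself of this (a sketch, assuming Python 3 and a hypothetical file name data.csv): DictReader is an iterator that yields one dict per row, so memory stays flat as long as you do not collect the rows into a list.

    import csv

    with open("data.csv", "r", newline="") as csvfile:
        reader = csv.DictReader(csvfile)
        print(reader.fieldnames)  # column names are available without reading the whole body
        for row in reader:
            pass  # each row is a dict; nothing accumulates unless you store it yourself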

But depending on the database engine, the actual write to disk may only happen at commit. That means the database (and not the CSV reader) keeps the data in memory, and eventually exhausts it.

So you should try to commit every n records, with n typically between 10 and 1000 depending on the size of your rows and the available memory.
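A minimal sketch of that batching pattern, assuming Python 3, the built-in sqlite3 module, and hypothetical file, table and column names (data.csv, mytable, column_name); any DB-API driver with a commit() method follows the same shape:

    import csv
    import sqlite3  # assumption: substitute your own DB-API driver

    BATCH_SIZE = 1000  # tune between 10 and 1000 depending on row size and memory

    conn = sqlite3.connect("example.db")  # hypothetical database
    cur = conn.cursor()

    with open("data.csv", "r", newline="") as csvfile:
        reader = csv.DictReader(csvfile)
        for i, row in enumerate(reader, start=1):
            # hypothetical table and column names for illustration
            cur.execute("INSERT INTO mytable (column_name) VALUES (?)",
                        (row["column_name"],))
            if i % BATCH_SIZE == 0:
                conn.commit()  # flush pending rows so the DB does not buffer them all

    conn.commit()  # commit the final partial batch
    conn.close()

A smaller BATCH_SIZE trades some insert speed for a lower memory ceiling, so start small and raise it until you hit a comfortable balance.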

