Created by: dannysepler
I use csvkit a lot and absolutely love it. csvkit (and csvstat in particular) can be very slow on large CSVs, so I decided to look into it. I used snakeviz to poke around.
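Roughly, the profiling setup looks something like this (a minimal sketch, not the exact commands I ran; analyze() and the csvstat.prof filename are just stand-ins for whatever slow path you want to inspect):

# Sketch: run the slow code under cProfile, dump a .prof file, open it with snakeviz.
import cProfile
import csv
import pstats

def analyze(path):
    # Stand-in for csvstat's work; here we just sniff the dialect over the
    # whole file, which is the kind of cost that shows up in the profile.
    with open(path, newline="") as f:
        csv.Sniffer().sniff(f.read())

profiler = cProfile.Profile()
profiler.enable()
analyze("nz.csv")
profiler.disable()

profiler.dump_stats("csvstat.prof")  # then: snakeviz csvstat.prof
pstats.Stats(profiler).sort_stats("cumulative").print_stats(10)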
# Download a large dataset
~/Code/csvkit (master) $ curl https://www.stats.govt.nz/assets/Uploads/Annual-enterprise-survey/Annual-enterprise-survey-2020-financial-year-provisional/Download-data/annual-enterprise-survey-2020-financial-year-provisional-csv.csv > nz.csv
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 5743k  100 5743k    0     0   742k      0  0:00:07  0:00:07 --:--:--  924k
# See performance on master (~24 seconds)
~/Code/csvkit (master) $ time csvstat nz.csv > before.txt
csvstat nz.csv > before.txt 24.07s user 0.20s system 98% cpu 24.591 total
# See performance on feature branch (~2.6 seconds)
~/Code/csvkit (master) $ git checkout sniff-limit
Switched to branch 'sniff-limit'
~/Code/csvkit (sniff-limit) $ time csvstat nz.csv > after.txt
csvstat nz.csv > after.txt 2.38s user 0.15s system 96% cpu 2.629 total
# Check contents are identical
~/Code/csvkit (sniff-limit) $ diff before.txt after.txt
~/Code/csvkit (sniff-limit) $
This results in a roughly 10x speedup for very large CSVs! I'll leave notes alongside each change I'm making.
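For context, the gist of the idea (a minimal sketch, not the actual csvkit code; SNIFF_LIMIT and sniff_dialect are just illustrative names): instead of handing csv.Sniffer the entire file, only hand it the first chunk. Sniffing cost grows with the sample size, so capping the sample keeps dialect detection cheap while still picking up the delimiter correctly on typical files.

import csv

SNIFF_LIMIT = 1024  # sample size; an illustrative value, not csvkit's default

def sniff_dialect(path, limit=SNIFF_LIMIT):
    # Read at most `limit` characters and sniff the dialect from that sample
    # instead of reading and sniffing the full file.
    with open(path, newline="") as f:
        sample = f.read(limit) if limit else f.read()
    return csv.Sniffer().sniff(sample)

print(sniff_dialect("nz.csv").delimiter)  # e.g. ","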