Araq is always right... **_Except_** when he tells you to throw away 
[TAoUP](https://en.wikipedia.org/wiki/The_Art_of_Unix_Programming). (Or makes 
me use RST, which doesn't let me italicize a link.) 

There's the [minimalist 
school](https://en.wikipedia.org/wiki/Minimalism_\(computing\)) of software 
design, perhaps best summarized by [cat-v.org's "Harmful Software" 
page](http://harmful.cat-v.org/software/). A lot of people dismiss it as a 
[hairshirt](https://en.wikipedia.org/wiki/Cilice#In_popular_culture) cult, but 
it's still important to understand their position. I think the comparison of 
Nim and Golang is a great counter-argument against taking minimalism to 
impractical extremes, but their arguments should still be treated with respect.

Anyway, the question was "what would be the **fastest** way" (not the most 
elegant, powerful, interoperable, future-proof, etc.) to filter 100,000 CSV 
records. This seems like a real-world business problem, and re-engineering the 
whole business process to do it all "the right way" isn't always an option. And 
there are downsides to using SQLite for high-volume logs that could screw up 
much more critical operations than this log filter. So, regardless of where one 
stands on Ivory Tower anti-cat-v-ism questions, this is a matter of finding 
the fastest grep.
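
For concreteness, here's a minimal sketch of the straightforward streaming 
approach in Nim with `std/parsecsv`; the file name, the `status` column, and 
the predicate are made-up examples, not from the original question:

```nim
import std/[parsecsv, strutils]

# Minimal sketch: stream a CSV log and keep only the rows we care about.
# "requests.csv" and the "status" == "500" filter are hypothetical examples.
var p: CsvParser
p.open("requests.csv")
p.readHeaderRow()                      # first row holds the column names
while p.readRow():
  if p.rowEntry("status") == "500":    # the filter predicate
    echo p.row.join(",")               # re-emit the matching row
                                       # (naive: loses quoting on fields
                                       # that contain commas)
p.close()
```

If raw speed is the only goal, skipping full CSV parsing and just scanning 
lines in large blocks, grep-style, would likely be faster still, which is 
where the article below comes in.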

I think this rustland article provides some insights: 
[blog.burntsushi.net/ripgrep](http://blog.burntsushi.net/ripgrep/)
