@jyapayne - not according to him. :-) But your two messages clearly passed in
flight. As Araq mentioned, that `gzipfiles` module may need some work. You 
should do it! Personally, I have been avoiding gzip since at least 2007. There 
are just much better algos & tools now across every dimension except perhaps 
"nostalgia". :-) I think gzip should be retired when & where it can, but I 
understand that it remains popular.

Also, I suspect doing a `popen` implementation for Windows in Nim's stdlib and 
just making that utterance I mentioned above more portable might provide higher 
long-term value. The library approach only really adds value over external 
commands when people have many small compressed files, which is probably not
that common a case. It's very easy to separate the concerns of "which 
compressor" with external commands.

@markebbert - Consider migrating to a parallel compressor and especially
decompressor if your colleagues/data stakeholders would approve. You may also 
want to be careful about list comprehensions in terms of memory footprint. E.g.,
for one L1-cacheable row they may be fine, but for a whole O(100 GB) file they are
a bad idea. Oh, and Nim really can be as fast as C/C++/Fortran. You should give it
some more time when you have a chance. In some ways it's even simpler than 
Python (e.g. UFCS).
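
To make the footprint point concrete, here is a hypothetical Nim sketch (the file name is invented for the example): collecting every row up front holds the whole table in memory, while a plain streaming loop keeps only one line at a time.

```nim
import std/[sugar, strutils]

# "big_table.tsv" is a made-up file name for illustration.
const path = "big_table.tsv"

# Comprehension-style: materializes every split row before any work happens,
# so the whole file ends up resident in memory.
# let rows = collect(newSeq):
#   for line in lines(path):
#     line.split('\t')

# Streaming style: one line in memory at a time, roughly constant footprint.
for line in lines(path):
  let fields = line.split('\t')
  discard fields      # per-row work goes here
```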
