Hi, thanks to all of you for the help. Yes, I'm pre-calculating things. In the data orchestration process I work on, I can usually estimate the rendering time from the number of rows being processed. The process is linear, and the processing time is largely unaffected by the size of each line.
The files average several gigabytes in size, and some have tens of millions of lines. Knowing the number of lines before processing brings some real benefits to the user. I'm lucky that the data lives on an SSD server... I already saw a performance gain using the tips @cblake provided (using import memfiles). All the best, AN
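For anyone landing on this thread later, here is a minimal sketch of the memfiles-based line counting mentioned above. The proc name countLines and the sample file are my own illustration, not code from the thread; the idea is simply that memSlices iterates newline-delimited slices of a memory-mapped file without allocating a string per line:

```nim
import std/memfiles

proc countLines(path: string): int =
  ## Count lines by iterating the newline-delimited slices of a
  ## memory-mapped file; no per-line string allocation.
  var mf = memfiles.open(path)
  defer: mf.close()
  for _ in memSlices(mf):   # each slice is one '\n'-delimited line
    inc result

when isMainModule:
  # Hypothetical sample file just to demonstrate usage.
  writeFile("sample.txt", "row1\nrow2\nrow3\n")
  echo countLines("sample.txt")
```

Since the count only walks the mapped bytes once, it stays cheap even for multi-gigabyte files on an SSD, which is what makes the pre-calculation worthwhile.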