>>> why should a simple `basename` or ${name##*/} hurt the performance?
>>
>> Such data deletion also causes corresponding processing costs, doesn't it?
>> Would it be nicer to avoid them if the initial input data could be provided
>> as “basenames” already?
>
> Sorry, it seems I missed the most important question:
> What is your exact use case?
You might be looking for more information than I have provided so far.
Possible use cases:
* The reduced names can belong to already-known directories.
* File name usage can be analysed across the folder hierarchy,
  as in the sketch after this list.
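A minimal sketch for the second case, assuming GNU find together with
the standard sort and uniq tools; the start directory "." is just a
placeholder:

  # Count how often each base name occurs across a directory tree.
  find . -type f -printf '%f\n' | sort | uniq -c | sort -rn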
> We need numbers and commands to reproduce it.
I propose taking another look at the data processing style that is
being applied.
* When is it more efficient to work only with the required data from
  the beginning of an algorithm?
* Under which circumstances will you tolerate that more data is
  provided than an action actually needs?
> Otherwise, this is all just guesswork, and we could spend years to discuss
> what could maybe be considered, etc.
How much will it matter to compare the consequences of eager and lazy
evaluation? A benchmark sketch follows below.
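A minimal benchmark sketch, assuming GNU find and bash; the directory
"testdir" and the file count are hypothetical placeholders:

  mkdir -p testdir && touch testdir/file{1..10000}

  # Eager: find strips the leading directories itself.
  time find testdir -type f -printf '%f\n' > /dev/null

  # Lazy: full paths first, base names stripped in the shell.
  time find testdir -type f -print |
    while IFS= read -r name; do printf '%s\n' "${name##*/}"; done > /dev/null

A per-file call of basename(1) would additionally fork one process per
name, which is the variant most likely to show measurable costs.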
> You already got the ideal answer for probably 99.9% of the cases - "use
> find(1)"
This tool is certainly usable for a wide range of file searches.
> - so unless you describe your outstanding edge case in more detail,
> I'm afraid there's nothing more we could help you with.
I imagine that there can be a conflict of goals between the flexibility
of a general-purpose program and a search task with special constraints.
How much will the data processing behind the parameter "-printf"
influence run-time characteristics in undesired ways, compared to a
fixed output function like "basename()"? A sketch of such a comparison
follows below.
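A minimal sketch of such a comparison, reusing the hypothetical
"testdir" from above; basename(1) stands in here for the C library
function basename(3), and the GNU option "-a" batches several names
per invocation:

  # Generic format interpretation inside find:
  time find testdir -type f -printf '%f\n' > /dev/null

  # Fixed output tool, called in batches via "-exec ... +":
  time find testdir -type f -exec basename -a {} + > /dev/null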
Regards,
Markus