On 11/27/2012 01:31 PM, Joshua Niehus wrote:
On Tuesday, 27 November 2012 at 19:40:56 UTC, Charles Hixson wrote:
Is there a better way to do this? (I want to find files that match any
of several extensions, don't contain any of several other strings, and
aren't in certain directories.):

import std.algorithm : canFind;
import std.file;

...

string exts = "*.{txt,utf8,utf-8,TXT,UTF8,UTF-8}";
string[] exclude = ["/template/", "biblio.txt", "categories.txt",
"subjects.txt", "/toCDROM/"];

int limit = 1;
// Iterate a directory in depth
foreach (string name; dirEntries(sDir, exts, SpanMode.depth))
{ bool excl = false;
foreach (string part; exclude)
{ if (name.canFind(part)) // std.algorithm.canFind; `in` doesn't do substring tests
{ excl = true;
break;
}
}
if (excl) continue; // `break` here would abort the whole directory walk
etc.

maybe this:?

import std.algorithm, std.array, std.regex;
import std.stdio, std.file;
void main()
{
enum string[] exts = [`".txt"`, `".utf8"`, `".utf-8"`, `".TXT"`,
`".UTF8"`, `".UTF-8"`];
enum string exclude =
`r"/template/|biblio\.txt|categories\.txt|subjects\.txt|/toCDROM/"`;

auto x = dirEntries("/path", SpanMode.depth)
.filter!(`endsWith(a.name,` ~ exts.join(",") ~ `)`)
.filter!(`std.regex.match(a.name,` ~ exclude ~ `).empty`);

writeln(x);
}
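For what it's worth, the same chain can be written with ordinary lambdas instead of string predicates, which avoids the quoting gymnastics; a minimal sketch ("/path" is a placeholder directory):

```d
import std.algorithm.iteration : filter;
import std.algorithm.searching : endsWith;
import std.file;
import std.regex : matchFirst, regex;
import std.stdio : writeln;

void main()
{
    auto exclude = regex(`/template/|biblio\.txt|categories\.txt|subjects\.txt|/toCDROM/`);

    auto x = dirEntries("/path", SpanMode.depth)
        // endsWith with several needles returns the 1-based index of the
        // matching needle, or 0 for no match
        .filter!(e => e.name.endsWith(".txt", ".utf8", ".utf-8",
                                      ".TXT", ".UTF8", ".UTF-8") != 0)
        // keep entries whose names do NOT match the exclude pattern
        .filter!(e => e.name.matchFirst(exclude).empty);

    foreach (entry; x)
        writeln(entry.name);
}
```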

That's a good approach, except that I want to step through the matching paths rather than accumulate them all in an array... though the filter documentation suggests it returns a lazy range. So I could replace
writeln (x);
by
foreach (string name; x)
{
        ...
}
and x wouldn't have to hold all the matching strings at the same time.

But why the chained filters, rather than using the option provided by dirEntries for one of them? Is it faster? Just the way you usually do things? (Which I accept as a legitimate answer. I can see that that approach would be more flexible.)
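(For reference, a sketch of the combined form, letting dirEntries' glob parameter handle the extensions and a single filter handle the excludes; "/path" is a placeholder, and since both dirEntries and filter are lazy ranges, the foreach visits one entry at a time without accumulating anything:)

```d
import std.algorithm.iteration : filter;
import std.algorithm.searching : any, canFind;
import std.file;
import std.stdio : writeln;

void main()
{
    // exclude list as in the original post
    string[] exclude = ["/template/", "biblio.txt", "categories.txt",
                        "subjects.txt", "/toCDROM/"];

    // the glob handles the extensions; one filter handles the excludes
    auto x = dirEntries("/path", "*.{txt,utf8,utf-8,TXT,UTF8,UTF-8}",
                        SpanMode.depth)
        .filter!(e => !exclude.any!(p => e.name.canFind(p)));

    foreach (entry; x)      // lazy: nothing is stored up front
        writeln(entry.name);
}
```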
