This mess of conditionals is what a direct array of 256 entries would avoid:

        int idx;
        // map the first character to a slot: 'a'..'z' -> 0..25, '_' -> 26
        if (value[0] != '_') {
                idx = value[0] - 'a';
                if (idx == 26) return false;    // '{' would collide with the '_' slot
        } else {
                idx = 26;
        }

        // anything outside 'a'..'z' and '_' cannot start a keyword
        if (idx < 0 || idx > 26) {
                // debug writeln("kword: ", idx, ':', value[0]);
                return false;
        }

        if (keywords[idx] is null) return false;

        return keywords[idx].canFind(value);

That would gain some speed in the process, plus another layer of arrays to discern keywords by length. You see why I suggested generating the code in the first place? ;)
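
For illustration, a minimal sketch of that layout (names and the toy population are mine, not from the linked paste): a 256-entry outer table indexed directly by the raw first byte, with an inner layer indexed by length, so the lookup needs no '_' special case and no sign check.

        import std.algorithm.searching : canFind;

        // table[c][len] holds the keywords of length len starting with byte c.
        // Indexing with the raw unsigned byte keeps the index in 0 .. 255.
        string[][][256] table;

        bool isKeyword(string value)
        {
                auto byLen = table[value[0]];   // char is unsigned: always in bounds
                if (value.length >= byLen.length) return false;
                auto candidates = byLen[value.length];
                return candidates !is null && candidates.canFind(value);
        }

        void main()
        {
                // toy population; the real table would be generated
                table['i'] = new string[][](4);
                table['i'][2] = ["if", "in"];
                assert(isKeyword("if"));
                assert(!isKeyword("D"));        // table['D'] is empty, not out of bounds
        }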

BTW, what's the reason to separate keywords and type keywords? They are processed the same way in the lexer, and only the parser somewhere up above knows what to do with them anyway. Just return a different token value for each.
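
A merged table can be as simple as one mapping from keyword to token value; a sketch with hypothetical token names (the real lexer would plug this into whatever lookup structure it uses):

        enum Tok { Identifier, KwIf, KwInt }

        // one merged table: plain keywords and type keywords differ only
        // in the token value stored alongside them
        Tok[string] kwTable;

        shared static this()
        {
                kwTable = ["if": Tok.KwIf, "int": Tok.KwInt];
        }

        Tok classify(string value)
        {
                if (auto p = value in kwTable) return *p;
                return Tok.Identifier;
        }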

I changed it and merged them together.
I also use an array of 256 entries now, but I must keep the check whether idx is < 0, because 'D' - 'a' is negative.
And yes, I see what you meant.^^
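
For the record, that's because char arithmetic promotes to int; two static asserts confirm it:

        static assert('D' - 'a' == -29);                // upper-case first letters go negative
        static assert(is(typeof('D' - 'a') == int));    // the subtraction is done in int, not char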

Code: http://dpaste.1azy.net/317241c0
I still reach 215 - 230 msecs.
