02-Mar-2013 23:01, Namespace writes:
I hope I understand you right this time:
http://dpaste.1azy.net/4c2e4428

Was this your idea?
With this I reached between 215 and 230 msecs.

This mess of conditionals is what a direct array of 256 entries would avoid:

        int idx;
        // maps 'a'..'z' to 0..25 and '_' to 26
        if (value[0] != '_') {
                idx = value[0] - 'a';
                // '{' - 'a' is also 26 and would collide with the '_' slot
                if (idx == 26) return false;
        } else {
                idx = 26;
        }

        // anything outside 'a'..'z' / '_' is not a keyword start
        if (idx < 0 || idx > 26) {
                // debug writeln("kword: ", idx, ':', value[0]);
                return false;
        }

        // no keyword starts with this character
        if (keywords[idx] is null) return false;

        return keywords[idx].canFind(value);

That would also gain some speed in the process, plus another layer of arrays to discern keywords by length. You see why I suggested generating the code in the first place? ;)
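
For illustration only, roughly this kind of layout (names and the length bound are made up; an untested sketch, not the code from the paste):

        import std.algorithm : canFind;

        // Hypothetical table: one bucket per possible first byte, further
        // split by keyword length. Both indexes are unconditional, so the
        // '_' special case and the range checks above disappear.
        enum maxKeywordLength = 16; // assumed upper bound on keyword length
        string[][maxKeywordLength + 1][256] keywordTable;

        bool isKeyword(const(char)[] value)
        {
                if (value.length > maxKeywordLength)
                        return false;
                // Empty buckets are plain empty slices, so no `is null` check
                // is needed; canFind over an empty slice simply returns false.
                return keywordTable[cast(ubyte) value[0]][value.length]
                        .canFind(value);
        }

Filling keywordTable is a one-time job (or generated code), and the lookup itself becomes two plain array indexes plus a short canFind over a handful of candidates.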

BTW, what's the reason to separate keywords and type keywords? They are processed the same way in the lexer, and only the parser somewhere up above knows what to do with them anyway. Just return different token values for each.
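
Sketching that in isolation (token kind names invented for the example, associative array used only for brevity):

        // Hypothetical single merged table: the lexer does one lookup and
        // returns whatever token kind is stored there; the parser decides
        // what to do with it further up.
        enum TokenKind { identifier, keyword, typeKeyword }

        immutable TokenKind[string] keywordKinds;

        shared static this()
        {
                keywordKinds = [
                        "return": TokenKind.keyword,
                        "while" : TokenKind.keyword,
                        "int"   : TokenKind.typeKeyword,
                        "double": TokenKind.typeKeyword,
                        // ... rest of the keyword set
                ];
        }

        TokenKind classify(string value)
        {
                // One lookup instead of two separate keyword checks.
                if (auto kind = value in keywordKinds)
                        return *kind;
                return TokenKind.identifier;
        }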

--
Dmitry Olshansky
