On Tue, Jan 11, 2022 at 11:16:13AM +0000, Moth via Digitalmars-d-announce wrote:
> On Tuesday, 11 January 2022 at 03:20:22 UTC, Salih Dincer wrote:
> > [snip]
>
> glad to hear you're finding it useful! =]
One minor usability issue I found just glancing over the code: many of your methods take char[] as an argument. Generally, you want const(char)[] instead, so that it will work with both char[] and immutable(char)[]. There's no reason why you shouldn't be able to copy some immutable chars into a FixedString, for example.

Another potential issue is the range interface. Your .popFront is implemented by copying the entire buffer forward by one char, which can easily become a hidden performance bottleneck: iterating over a FixedString is currently O(N^2), which is a problem if performance is your concern. Generally, I'd advise not conflating your containers with ranges over your containers: I'd make .opSlice return a traditional D slice (i.e., const(char)[]) instead of a FixedString, and just require writing `[]` when you need to iterate over the string as a range:

	FixedString!64 mystr;
	foreach (ch; mystr[]) {	// <-- iterates over const(char)[]
		...
	}

This way, no redundant copying of data is done during iteration.

Another issue is the way concatenation is implemented. Since FixedStrings have compile-time sizes, every concatenation in your code potentially instantiates yet another FixedString. This can lead to a LOT of template bloat if you're not careful, which may quickly outweigh any benefits you gained from not using the built-in strings.

> hm, i'm not sure how i would go about fixing that double character
> issue. i know there's currently some wierdness with wchars / dchars
> equality that needs to be fixed [shouldn't be too much trouble, just
> need to set aside the time for it], but i think being able to tell how
> many chars there are in a glyph requires unicode awareness? i'll look
> into it.
[...]

Yes, you will require Unicode-awareness, and no, it will NOT be as simple as you imagine.
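Before getting to Unicode: to make the const(char)[] and .opSlice suggestions above concrete, here's a rough sketch (the FixedString internals here are my own assumption, not your actual implementation):

```d
struct FixedString(size_t n)
{
    private char[n] buf;
    private size_t len;

    // Accept const(char)[] so both char[] and immutable(char)[]
    // (i.e., string) arguments work:
    void put(const(char)[] s)
    {
        buf[len .. len + s.length] = s[];
        len += s.length;
    }

    // Return a plain D slice instead of another FixedString,
    // so iterating over it never copies the buffer:
    const(char)[] opSlice() const return
    {
        return buf[0 .. len];
    }
}
```

With something like this, `foreach (ch; mystr[])` walks the existing buffer in place, and the container itself never needs to pretend to be a range.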
First of all, you have the wide-character issue: if you're dealing with anything outside of the ASCII range, you will need to deal with code points (potentially wchar, dchar). You can take the lazy way out (FixedString!(n, wchar), FixedString!(n, dchar)), but that will exacerbate your template bloat very quickly. Plus, it wastes a lot of memory, esp. if you start using dchar[] -- at 4 bytes per character, ASCII strings potentially take up 4x more memory. (And even if you decide dchar[] isn't a concern, there's still the issue of graphemes -- see below -- which requires non-trivial decoding anyway.)

Or you can handle UTF-8, which is a better solution in terms of memory usage. But then you will immediately run into the encoding/decoding problem. Your .opSlice, for example, will not work correctly unless you auto-decode. But that would be a performance hit -- in hindsight, this is one of the design mistakes that's still plaguing Phobos today. IMO the better approach is to iterate over the string *without* decoding, only detecting code point boundaries. Regardless, you will need *some* way of iterating over code points instead of code units in order to deal with this properly.

But that's only the beginning of the story. In Unicode, a "code point" is NOT what most people imagine a "character" to be. For most European languages the two coincide, but once you go outside of those, you'll start finding things like accented characters that are composed of multiple code points. In Unicode, such a combination is called a grapheme, and here's the bad news: the length of a grapheme is technically unbounded (in practice it's usually 2, occasionally 3 -- but you *will* find more on rare occasions). And worst of all, determining the length of a grapheme requires an expensive, non-trivial algorithm that will KILL your performance if you blindly run it every time you traverse your string.
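To illustrate the "iterate without decoding" point: std.utf.stride reads only the leading byte of each UTF-8 sequence to find where the next code point starts, so you can hop across code point boundaries without ever computing the code points' values. A sketch (error handling for invalid UTF-8 omitted):

```d
import std.utf : stride;

// Count code points in a UTF-8 string without decoding each one.
size_t countCodePoints(const(char)[] s)
{
    size_t n = 0;
    for (size_t i = 0; i < s.length; i += stride(s, i))
        ++n;
    return n;
}
```

The same trick works for substring operations: a slice is valid as long as both endpoints sit on a boundary (i.e., not on a continuation byte, which always matches the bit pattern 10xxxxxx).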
And generally, you don't *want* to do grapheme segmentation anyway -- most code doesn't even care what the graphemes are; it just wants to treat strings as opaque data that you may occasionally segment into substrings (and computing substrings doesn't necessarily require grapheme segmentation, depending on the final goal). But occasionally you *will* need grapheme segmentation (e.g., if you need to know how many visual "characters" there are in a string); for that, you will need std.uni. And no, it's not something you can implement overnight: it requires some heavy-duty lookup tables and a (very careful!) implementation of UAX #29 (Unicode text segmentation).

Because of the foregoing, you have at least 4 different definitions of the length of a string:

1. The number of code units it occupies, i.e., the number of chars / wchars / dchars.

2. The number of code points it contains -- in UTF-8, a non-trivial quantity that requires iterating over the entire string to compute. Or you can just use wchar[] or dchar[], but then your memory footprint increases, potentially up to 4x.

3. The number of graphemes it contains, i.e., how many "visual characters" (the way most people understand the word "character") it contains. This requires grapheme segmentation, is expensive to compute, and generally shouldn't be done unless you have a concrete reason for doing it.

4. The rendered width of the string, i.e., how much space it occupies when displayed on the screen. Even on a monospace-font text terminal this is non-trivial, because some Unicode code points are double-width (e.g., the East Asian blocks) and some are *zero*-width (e.g., soft hyphens, zero-width spaces). It also depends on how your terminal emulator renders these characters (what Unicode defines as double-width may not actually be rendered that way). And in a GUI application, measuring the length of a string requires font metrics.
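The first three of those lengths are easy to demonstrate with Phobos (the example string is mine -- "é" written as base letter plus combining accent):

```d
import std.range : walkLength;
import std.uni : byGrapheme;
import std.utf : byDchar;

void main()
{
    // 'e' + U+0301 COMBINING ACUTE ACCENT + 'x'
    string s = "e\u0301x";

    auto codeUnits  = s.length;                // 4 UTF-8 code units
    auto codePoints = s.byDchar.walkLength;    // 3 code points
    auto graphemes  = s.byGrapheme.walkLength; // 2 graphemes: "é", "x"
}
```

Length #4 has no Phobos answer at all; on POSIX you'd reach for something like wcwidth(), and even that only approximates what terminal emulators actually do.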
Welcome to the *cough* wonderful world of Unicode, where everything is possible but nothing is simple. :-D


T

--
This sentence is false.