pitrou commented on a change in pull request #11298:
URL: https://github.com/apache/arrow/pull/11298#discussion_r741999073
##########
File path: cpp/src/arrow/compute/kernels/scalar_string.cc
##########
@@ -672,11 +678,64 @@ struct Utf8TitleTransform : public FunctionalCaseMappingTransform {
 template <typename Type>
 using Utf8Title = StringTransformExec<Type, Utf8TitleTransform>;
 
+struct Utf8NormalizeTransform : public FunctionalCaseMappingTransform {
+  using State = OptionsWrapper<Utf8NormalizeOptions>;
+
+  const Utf8NormalizeOptions* options;
+
+  explicit Utf8NormalizeTransform(const Utf8NormalizeOptions& options)
+      : options{&options} {}
+
+  int64_t MaxCodeunits(const uint8_t* input, int64_t ninputs,
+                       int64_t input_ncodeunits) override {
+    const auto option = GenerateUtf8NormalizeOption(options->method);
+    const auto n_chars =
+        utf8proc_decompose_custom(input, input_ncodeunits, NULL, 0, option, NULL, NULL);
+
+    // convert to byte length
+    return n_chars * 4;

Review comment:
Also, there is a more fundamental problem with this approach: if the input is e.g. `["a", "\u0300"]`, normalizing the entire input data to NFC results in one codepoint (`"à"` or `"\u00e0"`). However, normalizing each string independently results in one codepoint per string, i.e. two codepoints in total. So this estimate may be too low in some cases.
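For illustration, here is a minimal standalone sketch (not part of the PR) of the effect described above. It assumes utf8proc is available and calls `utf8proc_NFC` and `utf8proc_iterate` directly, outside of any Arrow kernel machinery:

```cpp
// Sketch: NFC codepoint count of the concatenated buffer vs. per string.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <string>

#include <utf8proc.h>

// Count the codepoints in a NUL-terminated UTF-8 string.
static long long CountCodepoints(const utf8proc_uint8_t* s) {
  if (s == nullptr) return 0;
  const utf8proc_ssize_t len =
      static_cast<utf8proc_ssize_t>(std::strlen(reinterpret_cast<const char*>(s)));
  long long n = 0;
  utf8proc_ssize_t i = 0;
  while (i < len) {
    utf8proc_int32_t cp;
    const utf8proc_ssize_t consumed = utf8proc_iterate(s + i, len - i, &cp);
    if (consumed <= 0) break;  // invalid UTF-8; good enough for this sketch
    i += consumed;
    ++n;
  }
  return n;
}

int main() {
  const char* a = "a";
  const char* accent = "\xcc\x80";  // UTF-8 encoding of U+0300 COMBINING GRAVE ACCENT

  // NFC over the concatenated buffer "a\u0300" composes into a single
  // codepoint U+00E0 ("à")...
  const std::string joined = std::string(a) + accent;
  utf8proc_uint8_t* nfc_joined =
      utf8proc_NFC(reinterpret_cast<const utf8proc_uint8_t*>(joined.c_str()));

  // ...but NFC applied to "a" and "\u0300" independently leaves one codepoint
  // in each string, i.e. two codepoints (and more output bytes) in total.
  utf8proc_uint8_t* nfc_a =
      utf8proc_NFC(reinterpret_cast<const utf8proc_uint8_t*>(a));
  utf8proc_uint8_t* nfc_accent =
      utf8proc_NFC(reinterpret_cast<const utf8proc_uint8_t*>(accent));

  std::printf("concatenated: %lld codepoint(s)\n", CountCodepoints(nfc_joined));
  std::printf("per string:   %lld codepoint(s)\n",
              CountCodepoints(nfc_a) + CountCodepoints(nfc_accent));

  // utf8proc_NFC() returns malloc'ed buffers.
  std::free(nfc_joined);
  std::free(nfc_a);
  std::free(nfc_accent);
  return 0;
}
```

Expected output is 1 codepoint for the concatenated buffer versus 2 for the per-string case, which is why an estimate computed over the whole input buffer can undercount the output needed when each string is normalized on its own.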