mkhludnev commented on issue #12393: URL: https://github.com/apache/lucene/issues/12393#issuecomment-1622010377
Noob says: the tokenizers used for word embeddings, https://github.com/huggingface/tokenizers, are quite different from ours. Quoting their docs: `thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.` That speed _might be_ related to Rust's vectorization.
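
For reference, here is a minimal sketch of how that library is typically driven from its Python bindings; the model name (`bert-base-uncased`) and the input file are illustrative assumptions, not anything from this issue:

```python
# Minimal sketch (illustrative, not from this issue): load a pretrained
# HuggingFace tokenizer via the Python bindings and time it on a text file.
import time
from tokenizers import Tokenizer

# Assumption: any pretrained tokenizer works here; bert-base-uncased is just an example.
tokenizer = Tokenizer.from_pretrained("bert-base-uncased")

# Assumption: corpus.txt is a plain-text file, one document per line.
with open("corpus.txt", "r", encoding="utf-8") as f:
    lines = f.readlines()

start = time.perf_counter()
# encode_batch tokenizes many strings at once on the Rust side.
encodings = tokenizer.encode_batch(lines)
elapsed = time.perf_counter() - start

total_tokens = sum(len(enc.ids) for enc in encodings)
print(f"tokenized {total_tokens} tokens in {elapsed:.1f}s")
```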
