On 2021-05-07 Fri 16:47, Joost Kremers wrote:

> On Fri, May 07 2021, Titus von der Malsburg wrote:
>>> Apparently, =json-parse-{buffer|string}= then gives you a symbol with a
>>> space in it...
>>
>> I now see that symbol names “can contain any characters whatever” [1]. But
>> many characters need to be escaped (like spaces) which isn’t pretty.
>
> Agreed. But if you pass such a symbol to =symbol-name= or to =(format "%s")=,
> the escape character is removed, so when it comes to displaying those symbols
> to users, it shouldn't matter much.
>
> Note, though, that the keys in CSL-JSON don't seem to contain any spaces or
> other weird characters. There are just lower case a-z and dash, that's all.
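Right, a quick check confirms that (assuming an Emacs 27 build with native
JSON support; the key with a space is of course artificial, since CSL-JSON
keys don’t contain any):

#+begin_src emacs-lisp
(let* ((alist (json-parse-string "{\"container title\": \"Nature\"}"
                                 :object-type 'alist))
       (key (caar alist)))        ; KEY is the symbol container\ title
  (list (prin1-to-string key)     ; => "container\\ title" (escaped read syntax)
        (symbol-name key)         ; => "container title"
        (format "%s" key)))       ; => "container title"
#+end_src

So for display purposes the escapes indeed don’t matter.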
I agree that weird characters are unlikely to be an issue. Nonetheless, strings
seem slightly more future-proof. Funky Unicode stuff is now appearing
everywhere (I’ve seen emoji being used for variable names) and the situation
could be different a couple of years down the line.

>>> This works for the Elisp library =json.el=, but Emacs 27 can be compiled
>>> with native JSON support, which, however, doesn't provide this option,
>>> unfortunately.
>>
>> I see. In this case it might make sense to propose string keys as a feature
>> for json.c. The key is a string anyway at some point during parsing, so
>> avoiding the conversion to symbol may actually be the best way to speed
>> things up.
>
> True. I'll ask on emacs-devel. Personally, I'd prefer strings, too, but I'm a
> bit hesitant about doing the conversion myself, esp. given that in Ebib, all
> the keys would need to be converted back before I can save a file.

Sure, converting all keys in parsebib is not attractive.

>>> That would be easy to support, but IMHO is better handled in
>>> bibtex-completion: just parse the buffer and then call =gethash= on the
>>> resulting hash table. Or what use-case do you have in mind?
>>
>> One use case: bibtex-completion drops fields that aren’t needed early on to
>> save memory and CPU cycles. (Some people work with truly enormous
>> bibliographies, like crypto.bib with ~60K entries.) But this means that we
>> sometimes have to read an individual entry again if we need more fields
>> that were dropped earlier. In this case I’d like to be able to read just
>> one entry without having to reparse the complete bibliography.
>
> Makes sense. For .bib sources, this should be fairly easy to do. For .json, I
> can't really say how easy it would be. It's not difficult to find the entry
> key in the buffer, but from there you'd have to be able to find the start of
> the entry in order to parse it. Currently, I don't know how to do that.

Not a big deal. Since it’s just about individual entries and the code isn’t
super central, we can easily hack something. I’ve sketched what I have in mind
in the P.S. below.

>>>> - Functions for resolving strings and cross-references.
> [...]
>>> parsebib has a lower-level API and a higher-level API, and the latter does
>>> essentially what you suggest here. I thought bibtex-completion was already
>>> using it...
>>
>> Nope. I think the high-level API didn’t exist when I wrote my code in 2014.
>
> No, it didn't. I seem to remember, though, that you gave me the idea for the
> higher-level API, which is probably why I assumed you were using it.
>
> So that part of =parsebib= hasn't been tested much... (Ebib doesn't use it,
> either). If you do decide to start using it, please test it and report any
> issues you find. And let me know if I can help with testing.

The organically grown parsing code in bibtex-completion has been bugging me
for a while, so I’m keen on rewriting it. But I may not get to it until the
summer. I’ll keep you posted when I start working on it. (A first guess at how
I’d call the higher-level API is in the P.P.S.)

Titus
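P.S. So we’re talking about the same kind of hack: for .bib sources, something
along these lines is what I have in mind for the single-entry case. Untested
sketch; =my-read-single-entry= is a made-up name, and it leans on the built-in
=bibtex.el= rather than parsebib just to keep the example self-contained:

#+begin_src emacs-lisp
(require 'bibtex)

(defun my-read-single-entry (key file)
  "Return the fields of entry KEY in FILE as an alist, or nil.
Untested sketch; a real version would go through parsebib."
  (with-temp-buffer
    (insert-file-contents file)
    (bibtex-mode)
    (goto-char (point-min))
    ;; Move point to the start of the entry named KEY, then parse just
    ;; that entry instead of the whole bibliography.
    (when (bibtex-search-entry key)
      (bibtex-parse-entry t))))
#+end_src

For .json we’d still need a way to find the start of the entry, as you say.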
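P.P.S. So you can tell me early if I’m about to test the wrong thing: if I
read the parsebib README correctly, this is roughly how the rewrite would call
the higher-level API to get @String expansion and crossref resolution in one
go. The keyword arguments are my reading of the docs, so correct me if I’ve
misread the signature:

#+begin_src emacs-lisp
(require 'parsebib)

;; Parse the whole buffer, expanding @String abbreviations and resolving
;; cross-references via inheritance (keyword names per my reading of the
;; README, untested).
(with-temp-buffer
  (insert-file-contents "references.bib")
  (parsebib-parse-bib-buffer :expand-strings t :inheritance t))
#+end_src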