In similar situations (though my files are not huge) I extract what I want into flattened CSV using one or more XQuery scripts, and then load the CSV files with J. The code is clean, compact and easy to maintain. For recurrent XQuery patterns, m4 occasionally comes to the rescue. Expect minor portability issues when switching between XQuery processors (extensions, language level, etc.).
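The J half of that pipeline can be sketched as below. The file name, column layout, and the XQuery fragment in the comments are assumptions for illustration only, not taken from this thread; `readcsv` comes from the `tables/csv` addon (the base-library `tables/dsv` offers similar readers).

```j
NB. Minimal sketch: load a CSV produced by an XQuery extraction step.
NB. The XQuery side might look something like (hypothetical):
NB.   for $p in doc("products.xml")//product
NB.   return string-join(($p/sku, $p/name, $p/price), ",")
require 'tables/csv'                NB. CSV addon (install via pacman if missing)
data =: readcsv 'products.csv'      NB. boxed table, one row per product
prices =: 0 ". > 2 {"1 data        NB. third column converted to numbers
```

From here the usual J array idioms apply: `+/ prices` totals the column, `(#~ 100 < ]) prices` filters it, and so on.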
Never got round to SAX parsing beyond tutorials, so I cannot compare.

From: Mariusz Grasko <[email protected]>
To: [email protected]
Subject: [Jprogramming] Is it a good idea to use J for reading large XML files?
Date: 10/08/2021 18:05:45 Europe/Paris

Hi,

We are an ecommerce company with a lot of supplier integrations, and product info nearly always arrives as XML files. I am thinking about using J as an analysis tool: do you think it is a good idea in J to work with large files that need to be parsed SAX-style, without reading everything at once? Also, is this even advantageous (as in, would the code be terse)?

Right now XML parsing is done in Golang, so if parsing in J is not very good we could rely more on CSV exports; CSV handling is definitely very good in J. I am hoping that XML parsing is also very good in J and the code would become much smaller. If that is the case, I would consider using J for the XMLs of new suppliers.

Best Regards
M.G.
----------------------------------------------------------------------
For information about J forums see http://www.jsoftware.com/forums.htm
