Good ideas... I've started a wiki page at http://docs.ofbiz.org/x/ZwU to track some of these ideas...feel free to add your own or correct my interpretation of your wish.
If some seem doable, or someone is actually working on implementing one or two, naturally, create a JIRA issue around it.

--- Jacopo Cappellato <[EMAIL PROTECTED]> wrote:

> This is my personal wish list:
>
> 1) a more detailed error log (in a separate file, selectable from the ui
> before the import?) where the data file and the line number containing
> the bad data are logged
>
> 2) add the ability to run the import service (from the ui) as an async
> service; the creation of a report log (summary) would also help
>
> 3) in the datafile definition (grammar), add attributes to set the
> commit strategy for the file: row-level commit, file commit, abort on
> error, continue on error, etc...
>
> Jacopo
>
>
> Chris Howe wrote:
> > It's been my limited experience that if you have a failure in your
> > datafile, you may need to make adjustments to all your records in the
> > batch, good records and bad records, since they should have been
> > entered as a group. So, if one record fails, you'll likely still want
> > the entire datafile to fail, but separate out the failed records so
> > that you can track down the error(s) quickly.
> >
> >
> > --- Daniel Kunkel <[EMAIL PROTECTED]> wrote:
> >
> >> Thanks. Great.
> >>
> >> Just to be sure I've communicated what I had in mind clearly, I'll
> >> say I didn't think you'd ever retry the good records... as they
> >> shouldn't need to be retried because they've made it into the
> >> database. However, saving the good records somewhere is a good idea,
> >> since someone is bound to get bit by a "disappearing" import.
> >> Perhaps the best solution is to write them to a new file, *.xml.done,
> >> as each record is successfully imported. Anyway, I think you got the
> >> main gist... whittle down the import so it's easy to find and fix
> >> those nagging leftover imports that fail.
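The split-good-from-bad idea above, combined with Jacopo's wish for line numbers in the error log, could be sketched roughly as follows. Everything here is hypothetical (class names, the `importRecord` stand-in, the in-memory lists standing in for the `*.xml.done` and failure files); it is not the actual OFBiz import code:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: a record-level import pass that keeps two piles --
// records that imported cleanly (candidates for a *.xml.done file) and
// failures, each tagged with its line number so the bad data is easy to
// locate. importRecord() stands in for the real import logic.
public class RecordSplitter {

    public static final class Result {
        public final List<String> done = new ArrayList<>();
        public final List<String> failed = new ArrayList<>();
    }

    // Stand-in import: in this sketch, any record containing "BAD" fails.
    static boolean importRecord(String record) {
        return !record.contains("BAD");
    }

    public static Result split(List<String> records) {
        Result result = new Result();
        for (int i = 0; i < records.size(); i++) {
            String record = records.get(i);
            if (importRecord(record)) {
                result.done.add(record);   // would be appended to *.xml.done
            } else {
                // record the 1-based line number, per wish list item 1
                result.failed.add("line " + (i + 1) + ": " + record);
            }
        }
        return result;
    }
}
```

Retrying later then only needs the failure file, and the summary report from wish list item 2 falls out of the two list sizes.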
> >>
> >> Daniel
> >>
> >>
> >> On Thu, 2007-03-01 at 20:11 -0800, Chris Howe wrote:
> >>> Daniel,
> >>>
> >>> I like it. In addition, maybe have an option to write the good
> >>> records to one file and the bad records to another... Follow
> >>> Andrew's recent counting pattern and retry both files, with the
> >>> good ones first. I'll probably put some time away for it next
> >>> weekend.
> >>>
> >>>
> >>> --- Daniel Kunkel <[EMAIL PROTECTED]> wrote:
> >>>
> >>>> Chris...
> >>>>
> >>>> Another way I've thought about doing this is to "destructively
> >>>> whittle down" a directory import.
> >>>>
> >>>> The idea is to load as many records as possible into the database,
> >>>> and delete them from the import directory as they are imported.
> >>>> The system already does this at the file level, but I was thinking
> >>>> that doing it at the record level would take care of all your
> >>>> issues, and leave someone with a much smaller mess to clean up:
> >>>> the "hopefully few" records that are left that have problems.
> >>>>
> >>>> Optionally going another step... It would be great if it could
> >>>> even handle "circular references." An implementation scheme that
> >>>> might work would do it by working backwards, moving each single
> >>>> record that causes a problem to a new directory to be re-attempted
> >>>> on the next iteration. Hopefully, when the right combination of
> >>>> problem records is removed, the circular reference will be
> >>>> accepted in one lump commit.
> >>>>
> >>>> Daniel
> >>>>
> >>>>
> >>>> On Thu, 2007-03-01 at 14:23 -0800, Chris Howe wrote:
> >>>>> The error is most likely on this side of the keyboard, but the
> >>>>> dummy-fks didn't work for me going from mysql to postgres. Even
> >>>>> with it ticked, postgres got mad about referential integrity.
> >>>>> I didn't dig into it any further; that's going to be one of the
> >>>>> things I do look into when I set aside some time.
> >>>>>
> >>>>> I'm just thinking abstractly: wouldn't something like the
> >>>>> following work for writing in the correct order?
> >>>>>
> >>>>> Start with a HashSet
> >>>>>
> >>>>> Get record
> >>>>> If it has a parent
> >>>>>   get parent
> >>>>>   Is parent in the HashSet?
> >>>>>     yes -> write record
> >>>>>     no -> does parent have a parent?
> >>>>>     ...etc
> >>>>> If it does not have a parent
> >>>>>   write record
> >>>>>
> >>>>>
> >>>>> --- "David E. Jones" <[EMAIL PROTECTED]> wrote:
> >>>>>
> >>>>>> On Mar 1, 2007, at 1:57 PM, Chris Howe wrote:
> >>>>>>
> >>>>>>> 2. Data write/load order for hierarchy fk integrity (parent*Id
> >>>>>>> -> *Id)
> >>>>>>>
> >>>>>>> I think 2 can be addressed pretty well (of course not 100%
> >>>>>>> foolproof) if the output file is written in the right order.
> >>>>>>
> >>>>>> This is actually not possible to do, ie sorting a graph with
> >>>>>> loops is NP-hard.
> >>>>>>
> >>>>>> That is why we have the dummy-fks thing, which of course should
> >>>>>> ONLY be used for a case like this where you are sure that there
> >>>>>> are no bad fk records.
> >>>>>>
> >>>>>> -David
> >>>>>>
> >>>>>>
> >>>> --
> >>>> Daniel
> >>>>
> >>>> *-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-
> >>>> Have a GREAT Day!
> >>>>
> >>>> Daniel Kunkel [EMAIL PROTECTED]
> >>>> BioWaves, LLC http://www.BioWaves.com
> >>>> 14150 NE 20th St. Suite F1
> >>>> Bellevue, WA 98007
> >>>> 800-734-3588 425-895-0050
> >>>> http://www.Apartment-Pets.com http://www.Illusion-Optical.com
> >>>> http://www.Card-Offer.com http://www.RackWine.com
> >>>> http://www.JokesBlonde.com http://www.Brain-Fun.com
> >>>> *-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-.,,.-*"*-
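Chris's HashSet sketch, combined with Daniel's set-the-problem-records-aside idea, could be made concrete roughly like this. All names and the `parentOf` map are illustrative assumptions, not actual OFBiz code; and consistent with David's point, records on a looping parent chain are not sorted but deferred for a later pass:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch: parent-first ordering over a (id -> parentId) map, in
// the spirit of the HashSet walk above. A record is written only after its
// parent chain has been written; any record whose chain loops back on itself
// is set aside ("deferred") instead of recursing forever -- the case the
// dummy-fks / whittling-down discussion is about.
public class ParentFirstWriter {

    public static final class Result {
        public final List<String> written = new ArrayList<>();
        public final List<String> deferred = new ArrayList<>();
    }

    /** parentOf maps each record id to its parent id, or null for roots. */
    public static Result order(Map<String, String> parentOf) {
        Set<String> written = new LinkedHashSet<>();
        Set<String> deferred = new LinkedHashSet<>();
        for (String id : parentOf.keySet()) {
            visit(id, parentOf, written, deferred, new HashSet<String>());
        }
        Result r = new Result();
        r.written.addAll(written);
        r.deferred.addAll(deferred);
        return r;
    }

    private static boolean visit(String id, Map<String, String> parentOf,
                                 Set<String> written, Set<String> deferred,
                                 Set<String> visiting) {
        if (written.contains(id)) {
            return true;               // parent already safely written
        }
        if (deferred.contains(id) || !visiting.add(id)) {
            deferred.add(id);          // cycle, or a chain already known bad
            return false;
        }
        String parent = parentOf.get(id);
        boolean ok = (parent == null)
                || visit(parent, parentOf, written, deferred, visiting);
        if (ok) {
            written.add(id);           // whole parent chain done: safe to write
        } else {
            deferred.add(id);          // set aside for the next iteration
        }
        return ok;
    }
}
```

A second pass could then retry just the deferred records, per Daniel's re-attempt-each-iteration idea, or they could be loaded with dummy-fks once a human has confirmed they are not genuinely bad.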