Daniel,

I like it.  In addition, maybe have an option to write the good records
to one file and the bad records to another... Follow Andrew's recent
counting pattern and retry both files, with the good ones first. I'll
probably set some time aside for it next weekend.
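
Roughly what I have in mind, as a sketch (the record type and the
storeRecord call are just placeholders, not tested against the entity
engine):

import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class SplitRetryLoader {

    // Attempts every record; failures go to a "bad" list that is retried
    // after the good ones, until a pass makes no more progress.
    // storeRecord stands in for whatever actually writes to the database
    // and returns false when an fk (or anything else) blows up.
    public static <R> List<R> loadWithRetry(List<R> records, Predicate<R> storeRecord) {
        List<R> pending = new ArrayList<>(records);
        while (!pending.isEmpty()) {
            List<R> bad = new ArrayList<>();
            for (R record : pending) {
                if (!storeRecord.test(record)) {
                    bad.add(record);        // candidate for the "bad" file
                }
            }
            if (bad.size() == pending.size()) {
                return bad;                 // no progress; these are the real mess
            }
            pending = bad;                  // retry; their parents may be in now
        }
        return new ArrayList<>();           // everything loaded
    }
}

The two lists could just as easily be written out as the good/bad files
between passes.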


--- Daniel Kunkel <[EMAIL PROTECTED]> wrote:

> Chris...
> 
> Another way I've thought about doing this is to "destructively whittle
> down" a directory import.
> 
> The idea is to load as many records as possible into the database, and
> delete them from the import directory as they are imported. The system
> already does this on a file level, but I was thinking that doing it on
> a record level would take care of all your issues, and leave someone
> with a much smaller mess to clean up with the "hopefully few" records
> that are left that have problems.
> 
> Optionally going another step... It would be great if it could even
> handle "circular references." An implementation scheme that might work
> would do it by working backwards, moving each single record that causes
> a problem to a new directory to be re-attempted the next iteration.
> Hopefully, when the right combination of problem records is removed,
> the circular reference will be accepted in one lump commit.
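> 
> (To illustrate the "one lump commit" part: as far as I know postgres
> will only take a mutually-referencing pair inside one transaction if
> the fk constraints are declared deferrable. A made-up two-table example
> over plain JDBC, untested:)
> 
> import java.sql.Connection;
> import java.sql.DriverManager;
> import java.sql.Statement;
> 
> public class LumpCommitDemo {
>     public static void main(String[] args) throws Exception {
>         // connection details are placeholders
>         try (Connection conn = DriverManager.getConnection(
>                 "jdbc:postgresql://localhost/test", "user", "pass");
>              Statement st = conn.createStatement()) {
> 
>             st.execute("CREATE TABLE a (id int PRIMARY KEY, b_id int)");
>             st.execute("CREATE TABLE b (id int PRIMARY KEY, a_id int)");
>             // DEFERRABLE is what lets the circular pair wait for commit
>             st.execute("ALTER TABLE a ADD CONSTRAINT a_b_fk FOREIGN KEY (b_id)"
>                     + " REFERENCES b (id) DEFERRABLE INITIALLY DEFERRED");
>             st.execute("ALTER TABLE b ADD CONSTRAINT b_a_fk FOREIGN KEY (a_id)"
>                     + " REFERENCES a (id) DEFERRABLE INITIALLY DEFERRED");
> 
>             conn.setAutoCommit(false);
>             st.execute("INSERT INTO a VALUES (1, 1)");  // points at b(1), not there yet
>             st.execute("INSERT INTO b VALUES (1, 1)");  // points back at a(1)
>             conn.commit();                              // fk checks run here and pass
>         }
>     }
> }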
> 
> Daniel
> 
> 
> On Thu, 2007-03-01 at 14:23 -0800, Chris Howe wrote:
> > The error is most likely on this side of the keyboard, but the
> > dummy-fks didn't work for me going from mysql to postgres.  Even with
> > it ticked, postgres got mad about referential integrity.  I didn't dig
> > into it any further; that's going to be one of the things I do look
> > into when I set aside some time.
> > 
> > I'm just thinking abstractly, but wouldn't something like the
> > following work for writing in the correct order?
> > 
> > Start with an empty HashSet of already-written record ids
> > 
> > Get record
> > If it has a parent
> >   is the parent already in the HashSet?
> >   yes -> write record, add its id to the HashSet
> >   no  -> recurse on the parent first (does it have a parent? ...etc),
> >          then write record and add its id
> > If it does not have a parent
> >   write record, add its id to the HashSet
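> > 
> > Something like this in Java (the parent lookup and the actual write
> > are placeholders):
> > 
> > import java.util.HashSet;
> > import java.util.Map;
> > import java.util.Set;
> > 
> > public class ParentFirstWriter {
> > 
> >     private final Set<String> written = new HashSet<>();
> >     private final Map<String, String> parentOf;  // childId -> parentId
> > 
> >     public ParentFirstWriter(Map<String, String> parentOf) {
> >         this.parentOf = parentOf;
> >     }
> > 
> >     // Writes the record's ancestors first, then the record itself.
> >     // Note: this recurses forever on a circular reference, which is
> >     // David's point about loops.
> >     public void write(String recordId) {
> >         if (written.contains(recordId)) {
> >             return;                     // already written
> >         }
> >         String parentId = parentOf.get(recordId);
> >         if (parentId != null) {
> >             write(parentId);            // make sure the parent goes in first
> >         }
> >         writeRecord(recordId);
> >         written.add(recordId);
> >     }
> > 
> >     private void writeRecord(String recordId) {
> >         // stand-in for the real write to the output file
> >         System.out.println("writing " + recordId);
> >     }
> > }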
> > 
> > 
> > --- "David E. Jones" <[EMAIL PROTECTED]> wrote:
> > 
> > > 
> > > On Mar 1, 2007, at 1:57 PM, Chris Howe wrote:
> > > 
> > > > 2. Data write/load order for hierarchy fk integrity (parent*Id ->
> > > > *Id)
> > > 
> > > > I think 2 can be addressed pretty well (of course not 100% fool
> > > > proof) if the output file is written in the right order.
> > > 
> > > This is actually not possible to do, ie sorting a graph with loops
> > > is NP-hard.
> > > 
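> > > (For illustration, a plain topological sort like the untested
> > > sketch below gives you a parent-first order when there are no
> > > loops, and simply can't finish when there are; the graph structure
> > > here is made up:)
> > > 
> > > import java.util.ArrayDeque;
> > > import java.util.ArrayList;
> > > import java.util.Collections;
> > > import java.util.Deque;
> > > import java.util.HashMap;
> > > import java.util.List;
> > > import java.util.Map;
> > > 
> > > public class DependencyOrder {
> > > 
> > >     // parentsOf maps a record id to the ids it references.
> > >     // Returns a parent-first write order, or null if loops remain.
> > >     public static List<String> order(Map<String, List<String>> parentsOf) {
> > >         Map<String, Integer> unmet = new HashMap<>();   // unwritten parents per record
> > >         Map<String, List<String>> childrenOf = new HashMap<>();
> > >         for (Map.Entry<String, List<String>> e : parentsOf.entrySet()) {
> > >             unmet.putIfAbsent(e.getKey(), 0);
> > >             for (String parent : e.getValue()) {
> > >                 unmet.merge(e.getKey(), 1, Integer::sum);
> > >                 unmet.putIfAbsent(parent, 0);
> > >                 childrenOf.computeIfAbsent(parent, k -> new ArrayList<>()).add(e.getKey());
> > >             }
> > >         }
> > >         Deque<String> ready = new ArrayDeque<>();
> > >         unmet.forEach((id, n) -> { if (n == 0) ready.add(id); });
> > >         List<String> out = new ArrayList<>();
> > >         while (!ready.isEmpty()) {
> > >             String id = ready.remove();
> > >             out.add(id);
> > >             for (String child : childrenOf.getOrDefault(id, Collections.emptyList())) {
> > >                 if (unmet.merge(child, -1, Integer::sum) == 0) {
> > >                     ready.add(child);   // all of its parents are written
> > >                 }
> > >             }
> > >         }
> > >         return out.size() == unmet.size() ? out : null;  // null: a loop was left
> > >     }
> > > }
> > > 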
> > > That is why we have the dummy-fks thing, which of course should
> > > ONLY be used for a case like this where you are sure that there are
> > > no bad fk records.
> > > 
> > > -David
> > > 
> > > 
> > 
> -- 
> Daniel
> 
> Daniel Kunkel           [EMAIL PROTECTED]
> BioWaves, LLC           http://www.BioWaves.com
> 14150 NE 20th St. Suite F1
> Bellevue, WA 98007
> 800-734-3588    425-895-0050
> 
> 
