That's a database error, not an ORM thing. And it's database specific. And
all the database gives you is the error string. So you HAVE to parse it if
that's what you want. The less hacky approach would be to prevent the
duplicate key error upfront with custom logic.
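
A minimal sketch of that upfront check, assuming the key can be read as a
String from each input line (extractKey() here is a placeholder for your
own parsing):

import java.util.HashSet;
import java.util.Set;

Set<String> seenKeys = new HashSet<>();
for (String line : lines) {
    String key = extractKey(line); // placeholder: pull the key from the line
    if (!seenKeys.add(key)) {
        // add() returns false when the key was already seen
        continue; // skip the duplicate instead of inserting it
    }
    // create and register the Cayenne object for this line as usual
}

Even at 600,000 rows, a HashSet of keys should fit comfortably in memory.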

On Fri, Sep 28, 2018 at 9:28 AM Tony Giaccone <[email protected]> wrote:

> Yeah, that's pretty much what I ended up doing. Even reading the file line
> by line and doing an insert after each object is created only made the run
> time go to 4 minutes, and I can live with that. What I really wanted to do
> was find a way to recover from a larger commit. It seems that's not really
> possible. The one feature that would make that failure easier to deal with
> would be some kind of data value in the commit error that would identify
> the class and key value of the object that caused the commit exception. I
> recognize that the value is there in the text, but parsing through that
> text message to find the value is a serious hack. It would be better if the
> framework included, in the commit exception, the class type and the key
> value of the entity that caused the problem.
>
> Now maybe in the larger scheme of things, it doesn't make sense to identify
> which item in the set of items being committed caused the problem. It's
> clear it makes sense in my use case, but in the general use case, maybe
> not.
>
>
> Tony
>
> On Thu, Sep 27, 2018 at 5:10 PM John Huss <[email protected]> wrote:
>
> > Commit the ObjectContext after each object/row and rollback the
> > ObjectContext on failure.
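> >
> > Roughly, as a sketch (MyEntity and populateFromLine() are placeholders
> > for your own entity class and mapping code, and "runtime" is your
> > ServerRuntime):
> >
> > ObjectContext ctx = runtime.newContext();
> >
> > for (String line : lines) {
> >     MyEntity entity = ctx.newObject(MyEntity.class);
> >     populateFromLine(entity, line);
> >     try {
> >         ctx.commitChanges(); // one row per real commit
> >     } catch (CayenneRuntimeException ex) {
> >         ctx.rollbackChanges(); // discard the offending row and keep going
> >     }
> > }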
> >
> > On Thu, Sep 27, 2018 at 3:57 PM Tony Giaccone <[email protected]> wrote:
> >
> > > So the question isn't as much about how to manage the transaction. It's
> > > more about how to recover and eliminate the offending object so that the
> > > commit can be made again.
> > >
> > > On Thu, Sep 27, 2018 at 3:52 PM John Huss <[email protected]> wrote:
> > >
> > > > I'd just wrap the whole thing in a database transaction. Then commit
> > > > your ObjectContexts as often as you want to, but the real DB commit
> > > > won't happen until the end.
> > > >
> > > > TransactionManager transactionManager = CayenneRuntime
> > > >         .getThreadInjector()
> > > >         .getInstance(TransactionManager.class);
> > > >
> > > > transactionManager.performInTransaction(new TransactionalOperation<Void>() {
> > > >
> > > >     @Override
> > > >     public Void perform() {
> > > >         return null;
> > > >     }
> > > > });
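> > > >
> > > > Filling in perform() with your batch-of-500 load, it could look roughly
> > > > like this (just a sketch; "lines", "context", and createObjectFromLine()
> > > > are placeholders for your own file handling and mapping code):
> > > >
> > > > transactionManager.performInTransaction(new TransactionalOperation<Void>() {
> > > >
> > > >     @Override
> > > >     public Void perform() {
> > > >         int count = 0;
> > > >         for (String line : lines) {
> > > >             createObjectFromLine(context, line);
> > > >             if (++count % 500 == 0) {
> > > >                 // flushes changes to the DB, but the real commit
> > > >                 // still waits for the surrounding transaction
> > > >                 context.commitChanges();
> > > >             }
> > > >         }
> > > >         context.commitChanges(); // flush any final partial batch
> > > >         return null;
> > > >     }
> > > > });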
> > > >
> > > >
> > > >
> > > > On Thu, Sep 27, 2018 at 2:36 PM Tony Giaccone <[email protected]>
> > wrote:
> > > >
> > > > > I'm processing a large number of rows, over 600,000, and the key value
> > > > > should be unique in this file, but I'd like to ensure that. I also want
> > > > > this to happen with some rapidity. To speed this process up I'm going to
> > > > > read lines from the file, create objects, and commit the changes after
> > > > > 500 have been created.
> > > > >
> > > > > The problem with this is that if I have a duplicate value I won't catch
> > > > > it till I do the commit.
> > > > >
> > > > > When I insert a second row with the same key value, the first exception
> > > > > is a db-level one: org.postgresql.util.PSQLException. Eventually this
> > > > > gets wrapped by a Cayenne commit exception.
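> > > > >
> > > > > For what it's worth, a sketch of how the failure type (though not the
> > > > > offending key) could be detected without string parsing, by walking the
> > > > > cause chain and checking the SQLState (23505 is PostgreSQL's code for
> > > > > unique_violation; SQLException is java.sql.SQLException):
> > > > >
> > > > > try {
> > > > >     context.commitChanges();
> > > > > } catch (CayenneRuntimeException e) {
> > > > >     // walk the cause chain down to the driver-level exception
> > > > >     Throwable t = e;
> > > > >     while (t != null && !(t instanceof SQLException)) {
> > > > >         t = t.getCause();
> > > > >     }
> > > > >     if (t instanceof SQLException
> > > > >             && "23505".equals(((SQLException) t).getSQLState())) {
> > > > >         // unique_violation: the key value itself is only in the message
> > > > >         System.err.println(t.getMessage());
> > > > >     }
> > > > > }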
> > > > >
> > > > > So I'd like to get a sense of what folks think, given that I want to
> > > > > balance these conflicting goals of speed and accuracy.
> > > > >
> > > > > Can I easily figure out what object or objects caused the error, and can
> > > > > I exclude them from the context and redo the commit?
> > > > >
> > > > > Is this a reasonable path to follow?
> > > > >
> > > > >
> > > > >
> > > > > Tony Giaccone
> > > > >
> > > >
> > >
> >
>
