Hi Marshall,

Thank you very much for your detailed report.

If the last step (turning the Result into the List<Foo>) takes that long, it
sounds like you're loading tons of rows. In that case, it might generally be
useful to use fetchLazy() instead of fetch(), because that will not
materialise all the rows in the client before processing them further.
Also, you might want to specify a ResultQuery.fetchSize(), which translates
to JDBC's Statement.setFetchSize(). This may help with both JDBC and jOOQ.
Some info about lazy fetching here:
https://www.jooq.org/doc/latest/manual/sql-execution/fetching/lazy-fetching
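
For illustration, a minimal sketch (assuming a generated FOO table and a
DSLContext called "ctx"; names are just placeholders):

try (Cursor<FooRecord> cursor = ctx
        .selectFrom(FOO)
        .fetchSize(250)      // translates to JDBC's Statement.setFetchSize()
        .fetchLazy()) {

    for (FooRecord record : cursor) {
        // process / map each record without materialising the whole Result
    }
}

Note that with the PostgreSQL JDBC driver, the fetch size is only respected
when auto-commit is turned off, i.e. when the query runs inside a transaction.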

Regarding your individual findings:

On Fri, Oct 19, 2018 at 12:10 AM Marshall Pierce <[email protected]>
wrote:

> The profiler indicates that creating the Result is almost entirely spent
> in `CursorRecordInitializer#setValue()`, and that is almost 60% setting
> `OffsetDateTimes`. (That's not jOOQ's fault -- that parser in the JDK is
> just slow. I actually want `Instant`, so I wonder if parsing those could be
> any faster?)
>

Oh wow, that's an interesting catch! I should check whether the PostgreSQL
JDBC driver has finally implemented native support for the JSR 310 types, in
which case we wouldn't need to pass through an intermediate String
representation on the client side.

I can't tell (yet); I'd have to see the relevant parts of your query.
However, you would probably get the best results by using a custom data type
binding for Instant:
https://www.jooq.org/doc/latest/manual/sql-building/queryparts/custom-bindings
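
For illustration, a Converter (attached to the relevant timestamp columns via
a forced type in the code generator configuration) would already give you
Instant as the user type; a minimal sketch:

public class OffsetDateTimeToInstantConverter
implements Converter<OffsetDateTime, Instant> {

    @Override
    public Instant from(OffsetDateTime databaseObject) {
        return databaseObject == null ? null : databaseObject.toInstant();
    }

    @Override
    public OffsetDateTime to(Instant userObject) {
        return userObject == null ? null : userObject.atOffset(ZoneOffset.UTC);
    }

    @Override
    public Class<OffsetDateTime> fromType() {
        return OffsetDateTime.class;
    }

    @Override
    public Class<Instant> toType() {
        return Instant.class;
    }
}

Note that a plain Converter still goes through OffsetDateTime (and thus
through the parsing you've measured), so if the parsing itself is the
bottleneck, a full Binding that reads from the JDBC ResultSet directly would
be the way to go.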

I do wonder, however, whether jOOQ should support the Instant type out of the
box. The mapping is straightforward, and we could bypass the parsing that
you've measured:
https://github.com/jOOQ/jOOQ/issues/7952


> When transforming the `Result` into the type I want, that time is almost
> entirely spent on `Record.into()` hydrating `BlahRecord` instances for each
> row. (I create tuples of the various `*Record` types, and then traverse
> over them grouping, etc, as needed to assemble the types the rest of the
> system uses.) `into()` is mostly spent in `RowImpl#indexOf()` -- perhaps
> caching field offsets might help?
>

There are some caches, but they only work under specific conditions. Perhaps
your case could be cached as well... Could you provide an example piece of
code that resembles your real code and helps reproduce this particular part
of the problem?
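
Just to make sure we're looking at the same pattern, I imagine your mapping
code looks roughly like this (hypothetical BOOK / AUTHOR schema):

Result<Record> result = ctx
    .select()
    .from(BOOK)
    .join(AUTHOR).on(AUTHOR.ID.eq(BOOK.AUTHOR_ID))
    .fetch();

for (Record r : result) {
    // Each into() call resolves the target fields via RowImpl#indexOf()
    BookRecord book = r.into(BOOK);
    AuthorRecord author = r.into(AUTHOR);
    // ... group the tuples and assemble your domain types
}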


> Anyway, while there are clearly some optimizations I could perform (rely
> on sorting to eliminate some grouping, for instance), I'm wondering if
> there's anything obviously wrong I'm doing, or do I simply need to slice
> the query up into different pieces so the combinatorial explosion of joined
> rows isn't so egregious?
>

Yes, that definitely helps. Mapping to-many relationships using joins is not
always the best solution; in some cases, it is simply wrong. Consider:

class Book {
  List<Author> authors;
  List<Category> categories;
}

If you now join both the authors to-many relationship and the categories
to-many relationship in a single query, you will get a cartesian product
between authors and categories, and that could lead to wrong results later
on, especially when you transitively join further to-many relationships
inside authors and/or categories.
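
For example, with a hypothetical BOOK / BOOK_AUTHOR / AUTHOR / BOOK_CATEGORY /
CATEGORY schema, a single query joining both relationships looks like this:

// A book with 3 authors and 4 categories produces 3 x 4 = 12 rows here,
// repeating each author 4 times and each category 3 times:
ctx.select()
   .from(BOOK)
   .leftJoin(BOOK_AUTHOR).on(BOOK_AUTHOR.BOOK_ID.eq(BOOK.ID))
   .leftJoin(AUTHOR).on(AUTHOR.ID.eq(BOOK_AUTHOR.AUTHOR_ID))
   .leftJoin(BOOK_CATEGORY).on(BOOK_CATEGORY.BOOK_ID.eq(BOOK.ID))
   .leftJoin(CATEGORY).on(CATEGORY.ID.eq(BOOK_CATEGORY.CATEGORY_ID))
   .fetch();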

The SQL solution here would be the MULTISET operator, which allows for
ad-hoc nesting of tables:
https://github.com/jOOQ/jOOQ/issues/3884

Unfortunately, it's not yet supported by jOOQ, nor by most databases (it
can be emulated with XML or JSON, though).

Another option, which would probably be best in your case and might even
bring the query time down from 900ms to something more reasonable, would be
to run more than one query, as you've mentioned. If done correctly, this will
not lead to N+1 problems, but only to 3-4 queries in total.
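
For example (reusing the hypothetical book/author schema from above), two
queries are enough to load all books and all of their authors without any
N+1 issues; a third query would do the same for categories:

// Query 1: the books themselves
Map<Long, BookRecord> books = ctx
    .selectFrom(BOOK)
    .where(BOOK.PUBLISHED_AT.isNotNull())     // whatever your predicate is
    .fetchMap(BOOK.ID);

// Query 2: all authors of those books, grouped by book id
Map<Long, Result<Record>> authorsByBook = ctx
    .select()
    .from(BOOK_AUTHOR)
    .join(AUTHOR).on(AUTHOR.ID.eq(BOOK_AUTHOR.AUTHOR_ID))
    .where(BOOK_AUTHOR.BOOK_ID.in(books.keySet()))
    .fetchGroups(BOOK_AUTHOR.BOOK_ID);

// authorsByBook.get(bookId).into(AUTHOR) then yields the AuthorRecords per book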

I hope this helps.
Lukas
