Thanks for bringing this up! My initial theory is that this table has a ton
of stats data that you have to read. That could happen in a couple of cases.

First, you might have large values in some columns. Parquet suppresses its
stats when values are larger than 4k, and those stats are what Iceberg
uses. But values below that limit can still leave you storing two 1k+
objects for each large column (the lower and upper bounds). With a lot of
data files, that adds up quickly. The solution here is to implement #113
<https://github.com/apache/incubator-iceberg/issues/113> so that we don't
store the actual min and max for string or binary columns, but instead
truncated values that sort just below the min and just above the max.
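
To sketch the idea behind #113 (a rough illustration with hypothetical
helper names, not the actual Iceberg API): a truncated lower bound only has
to sort at or below the real minimum, and a truncated upper bound at or
above the real maximum, so something like this stays safe for pruning:

```
// Sketch of bound truncation for string columns (illustration only;
// works at the char level and ignores code-point details).
public class BoundTruncation {
  // A truncated lower bound only needs to sort at or below the real
  // minimum, so a plain prefix is enough.
  static String truncateLower(String min, int length) {
    return min.length() <= length ? min : min.substring(0, length);
  }

  // A truncated upper bound must sort at or above the real maximum: take a
  // prefix and increment its last incrementable character. If no character
  // can be incremented, fall back to the full value.
  static String truncateUpper(String max, int length) {
    if (max.length() <= length) {
      return max;
    }
    char[] prefix = max.substring(0, length).toCharArray();
    for (int i = prefix.length - 1; i >= 0; i--) {
      if (prefix[i] < Character.MAX_VALUE) {
        prefix[i]++;
        return new String(prefix, 0, i + 1);
      }
    }
    return max; // no safe truncated upper bound exists
  }

  public static void main(String[] args) {
    System.out.println(truncateLower("aardvark", 4)); // aard <= aardvark
    System.out.println(truncateUpper("aardvark", 4)); // aare >= aardvark
  }
}
```

That keeps each bound to a few bytes while the bounds still bracket every
value in the file, so pruning remains correct.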

The second case is when you have a lot of columns. Each column stores both
a lower and an upper bound, so 1,000 columns could easily take 8k per file.
If this is the problem, then maybe we want a way to turn off column stats.
We could also think about changing how stats are stored in the manifest
files, but that only helps if we move to a columnar format for manifests,
so it is probably not a short-term fix.
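
As a rough back-of-the-envelope (a hypothetical estimate, not Iceberg code;
it assumes ~4-byte bounds for narrow primitive columns and ignores field
ids and Avro encoding overhead):

```
// Rough estimate of per-data-file stats size carried in a manifest entry.
public class StatsSizeEstimate {
  static long estimateStatsBytes(int columnCount, int avgBoundSize) {
    long bounds = 2L * columnCount * avgBoundSize; // lower + upper bound per column
    long counts = 3L * columnCount * Long.BYTES;   // column size, value count, null count
    return bounds + counts;
  }

  public static void main(String[] args) {
    // 1,000 narrow columns: 8,000 bytes of bounds alone, and the total is
    // repeated for every data file tracked in the manifest.
    System.out.println(estimateStatsBytes(1_000, 4)); // 32000
  }
}
```

All of that has to be read and deserialized for every data file during
split planning, which is where the extra traffic would come from.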

If you can share a bit more information about this table, we can probably
tell which case you're hitting. My guess is that it's the large values
problem.

On Thu, Apr 18, 2019 at 11:52 AM Gautam <gautamkows...@gmail.com> wrote:

> Hello folks,
>
> I have been testing Iceberg reads with and without stats built into the
> Iceberg dataset manifests and found that there's a huge jump in network
> traffic with the latter.
>
>
> In my test I am comparing two Iceberg datasets, both written in Iceberg
> format, one with and the other without stats collected in the Iceberg
> manifests. In particular, the difference between the writers used for the
> two datasets is this PR:
> https://github.com/apache/incubator-iceberg/pull/63/files which uses
> Iceberg's writers for writing Parquet data. I captured tcpdump from query
> scans run on these two datasets. The partition being scanned contains 1
> manifest, 1 Parquet data file and ~3,700 rows in both datasets. There's a
> 30x jump in network traffic to the remote filesystem (ADLS) when I switch
> to the stats-based Iceberg dataset. Both queries used the same Iceberg
> reader code to access both datasets.
>
> ```
> root@d69e104e7d40:/usr/local/spark#  tcpdump -r
> iceberg_geo1_metrixx_qc_postvalues_batch_query.pcap | grep
> perfanalysis.adlus15.projectcabostore.net | grep ">" | wc -l
> reading from file iceberg_geo1_metrixx_qc_postvalues_batch_query.pcap,
> link-type EN10MB (Ethernet)
>
> 8844
>
>
> root@d69e104e7d40:/usr/local/spark# tcpdump -r
> iceberg_scratch_pad_demo_11_batch_query.pcap | grep
> perfanalysis.adlus15.projectcabostore.net | grep ">" | wc -l
> reading from file iceberg_scratch_pad_demo_11_batch_query.pcap, link-type
> EN10MB (Ethernet)
>
> 269708
>
> ```
>
> As a consequence, query response times are affected drastically
> (illustrated below). I must confess that I am on a slow internet
> connection via VPN connecting to the remote FS, but the dataset without
> stats took just 1m 49s while the dataset with stats took 26m 48s to read
> the same amount of data. Most of that time for the latter dataset was
> spent in split planning: reading manifests and evaluating stats.
>
> ```
> all=> select count(*)  from iceberg_geo1_metrixx_qc_postvalues where
> batchId = '4a6f95abac924159bb3d7075373395c9';
>  count(1)
> ----------
>      3627
> (1 row)
> Time: 109673.202 ms (01:49.673)
>
> all=>  select count(*) from iceberg_scratch_pad_demo_11  where
> _ACP_YEAR=2018 and _ACP_MONTH=01 and _ACP_DAY=01 and batchId =
> '6d50eeb3e7d74b4f99eea91a27fc8f15';
>  count(1)
> ----------
>      3808
> (1 row)
> Time: 1608058.616 ms (26:48.059)
>
> ```
>
> Has anyone faced this? I'm wondering if there's some caching or
> parallelism option here that can be leveraged. Would appreciate some
> guidance. If there isn't a straightforward fix and others feel this is a
> problem, I can raise an issue and look into it further.
>
>
> Cheers,
> -Gautam.
>

-- 
Ryan Blue
Software Engineer
Netflix
