The numbers you posted seem to show that the query elapsed time is highly
impacted by the number of scan minor fragments (the scan parallelization
degree).

In Drill, the scan parallelization degree is capped at the minimum of the
number of parquet row groups and 70% of the CPU cores. In your original
configuration, since you only have 3 files, each with one row group, Drill
will have at most 3 scan minor fragments (you can confirm that by looking at
the query profile). With a decreased blocksize, you have more parquet files,
hence a higher scan parallelization degree and better performance. In the
case of 4 cores, the scan parallelization degree is capped at 4 * 70% = 2.8,
rounded down to 2, which probably explains why reducing the blocksize does
not help.

The 900MB total parquet file size is relatively small. If you want to tune
Drill for such a small dataset, you probably need a smaller parquet file
size. In the case of 4 cores, you may consider bumping up the following
parameter.

`planner.width.max_per_node`
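
For example, a sketch (the value 4 here is just an illustration for a 4-core
box; check what you currently have before changing it):

-- inspect the current setting first
SELECT * FROM sys.options WHERE name = 'planner.width.max_per_node';
-- then raise it for the session
ALTER SESSION SET `planner.width.max_per_node` = 4;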





On Fri, Jul 28, 2017 at 7:03 AM, Dan Holmes <dhol...@revenueanalytics.com>
wrote:

> Thank you for the tips.  I have used 4 different block sizes.  Performance
> appears to scale linearly with vCPUs, and anything less than the 512
> blocksize performed similarly.  I rounded the numbers to whole seconds.
> The data is local to the EC2 instance; I did not put the data on EFS.  I
> used the same data files: after I created them the first time, I put the
> data on s3 and copied it to the other instances.
>
> If there are other configurations that someone is interested in, I would
> be willing to try them out.  I have something to gain in that too.
>
> Here's the data for the interested.
>
> Query time in seconds, by instance type (vCPUs) and blocksize (MB):
>
>                       64      128     256     512
> m3.xlarge - 16 vCPU    6       6       5      12
> c3.2xlarge - 8 vCPU   11      11      11      20
> c4.4xlarge - 4 vCPU   20      20      20      20
>
>
> Dan Holmes | Revenue Analytics, Inc.
> Direct: 770.859.1255
> www.revenueanalytics.com
>
> -----Original Message-----
> From: Kunal Khatua [mailto:kkha...@mapr.com]
> Sent: Friday, July 28, 2017 2:38 AM
> To: user@drill.apache.org
> Subject: RE: Drill performance tuning parquet
>
> Look at the "Operator Profiles - Overview" section of the query profile (in
> the UI).  The % Query Time is a good indicator of which operator consumes
> the most CPU.  Changing planner.width.max_per_node will actually affect
> this (positively or negatively, depending on the load).
>
> Within the same Operator Profile, also look at Average and Max Wait times.
> See if the numbers are unusually high.
>
> Scrolling further down (since you are working with parquet), the Parquet
> RowGroup Scan operator can be expanded to show the minor fragment
> (worker/leaf fragment) metrics.  Since you have 3 files, you will probably
> see only 3 entries, since each fragment will scan 1 row group in the
> parquet file (I'm making the assumption that you have only 1 rowgroup per
> file).  Just at the end of that table, you'll see "OperatorMetrics".  This
> gives you the time (in nanoseconds) and other metrics for these fragments
> to be handed data by the pool of Async Parquet Reader threads.
>
> Most likely, increasing the number of Parquet files being produced by the
> CTAS (i.e. with smaller file sizes) would actually help leverage the
> surplus CPU capacity you have.  For that, you'll need to tweak (and
> experiment with) parquet block sizes of less than 512MB.  You could try
> 128MB (or even 64MB).  All of this depends on the nature of the data, so
> it's a bit of experimenting that's needed from here on.
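>
> As a sketch of that experiment (134217728 bytes = 128MB; the target table
> name is illustrative, and this assumes the dfs.tmp workspace is writable):
>
> ALTER SESSION SET `store.parquet.block-size` = 134217728;
> CREATE TABLE dfs.tmp.sales_p_128 AS SELECT * FROM dfs.root.sales_p;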
>
> Beyond that, look at the memory usage (in "Fragment Profiles - Overview")
> and see whether you need to bump up the memory setting.
>
> It is also possible that, since you're running on AWS, the compute and
> storage layers are not as tightly coupled as Athena is with their own S3,
> which would make sense, since they need an incentive for users to try
> Athena on their AWS infrastructure. :)
>
> Happy Drilling!
>
> -----Original Message-----
> From: Dan Holmes [mailto:dhol...@revenueanalytics.com]
> Sent: Thursday, July 27, 2017 6:23 PM
> To: user@drill.apache.org
> Subject: RE: Drill performance tuning parquet
>
> Let's pretend there is only this one query, or a similar style of
> aggregation, perhaps with a WHERE clause.  I am trying to understand how to
> get more out of the Drill instance I have.  That set of parquet files is
> ~900MB.  It was 6.25 GB as PSV.  There are 13 million records.
>
> How can I tell if I am IO bound and need more reader threads?  If there
> were more files, would that be better?
>
> I don't think I am CPU bound, based on the stats that EC2 gave me.  I am
> using this example both to learn how to scale Drill and to understand it
> better.
>
> We are considering using it for our text file processing: as an exploratory
> tool, for ETL (since it will convert to parquet), and, because of its
> ability to join disparate data sources, as a DB layer for tools like
> Tableau.
>
> Another tool we have thought of is Athena.  It is crazy fast.  That same
> query against the text files runs in ~3 seconds.  (My 4 vCPU Drill instance
> did that same query against s3 txt files in 180 seconds.)  But it does have
> drawbacks.  It only works on AWS, so my on-premise solutions would have to
> be designed differently.
>
> I don't need performance parity, but to do this right I need to understand
> Drill better.  That is the essence of this inquiry.
>
> Dan Holmes | Revenue Analytics, Inc.
> Direct: 770.859.1255
> www.revenueanalytics.com
>
> -----Original Message-----
> From: Saurabh Mahapatra [mailto:saurabhmahapatr...@gmail.com]
> Sent: Thursday, July 27, 2017 6:52 PM
> To: user@drill.apache.org
> Subject: Re: Drill performance tuning parquet
>
> Hi Dan,
>
> Here are some thoughts from my end.
>
> So this is just one query, and you have the numbers.  But how about a
> representative collection?  Do you have the use cases?  Now, I know from
> experience that if you can predict the pattern of about 60% of your
> queries, that would be great.  The rest could be ad hoc, and you could plan
> for it.
>
> For that 60%, it would be good to share some numbers along these lines:
>
> 1. SQL query, response time measured, response time expected, and the size
> of the tables that are part of the query
> 2. Do you have any data skew?
> 3. What is the EC2 configuration you have: memory, CPU cores?
>
> So the approach would be to tune for the entire set (which means you will
> end up trading off the various parameters) and then scale out.  (Scaling
> out is not cheap.)
>
> Thanks,
> Saurabh
>
> On Thu, Jul 27, 2017 at 1:37 PM, Kunal Khatua <kkha...@mapr.com> wrote:
>
> > You haven't specified what kind of query you are running.
> >
> > The Async Parquet Reader tuning should be more than sufficient in your
> > use case, since you seem to be processing only 3 files.
> >
> > The feature introduces a small fixed pool of threads that are
> > responsible for the actual fetching of bytes from the disk, without
> > blocking the fragments that already have some data available to work on.
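> >
> > (A hedged aside: the asynchronous-parquet-reader docs list an option to
> > toggle the feature, which is handy for ruling it in or out while testing.)
> >
> > ALTER SESSION SET `store.parquet.reader.pagereader.async` = false;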
> >
> > The "store.parquet.reader.pagereader.buffersize" option might be of
> > interest.  The default for this is 4MB; it can be tuned to match the
> > parquet page size (usually 1MB), which can reduce memory pressure and
> > improve pipelining behavior.
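> >
> > For example (1048576 bytes = 1MB, matching the typical page size; tune it
> > to your actual page size):
> >
> > ALTER SESSION SET `store.parquet.reader.pagereader.buffersize` = 1048576;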
> >
> > Apart from this, the primary factors affecting your query performance
> > are the number of cores (which is what you seem to be tuning) and memory.
> > By design, the parallelization level is a function of the number of
> > cores, and from the look of things, that is helping.  You can try further
> > tuning it with this:
> > planner.width.max_per_node (default is 70% of num-of-cores)
> >
> > For memory,
> > planner.memory.max_query_memory_per_node (default is 2GB)
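> >
> > For instance (4294967296 bytes = 4GB is only an illustrative value; size
> > it to what the instance can spare):
> >
> > ALTER SESSION SET `planner.memory.max_query_memory_per_node` = 4294967296;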
> >
> >
> > This is where you'll find more about this:
> > https://drill.apache.org/docs/performance-tuning/
> >
> > ~ Kunal
> >
> > -----Original Message-----
> > From: Dan Holmes [mailto:dhol...@revenueanalytics.com]
> > Sent: Thursday, July 27, 2017 1:06 PM
> > To: user@drill.apache.org
> > Subject: RE: Drill performance tuning parquet
> >
> > I did not partition the data when I created the parquet files (CTAS
> > without a PARTITION BY).
> >
> > Here is the file list.
> >
> > Thank you.
> >
> >
> > [dholmes@ip-10-20-49-40 sales_p]$ ll
> > total 1021372
> > -rw-rw-r-- 1 dholmes dholmes 393443418 Jul 27 19:05 1_0_0.parquet
> > -rw-rw-r-- 1 dholmes dholmes 321665234 Jul 27 19:06 1_1_0.parquet
> > -rw-rw-r-- 1 dholmes dholmes 330758061 Jul 27 19:06 1_2_0.parquet
> >
> > Dan Holmes | Revenue Analytics, Inc.
> > Direct: 770.859.1255
> > www.revenueanalytics.com
> >
> > -----Original Message-----
> > From: Dan Holmes [mailto:dhol...@revenueanalytics.com]
> > Sent: Thursday, July 27, 2017 3:59 PM
> > To: user@drill.apache.org
> > Subject: Drill performance tuning parquet
> >
> > I am performance testing a single Drill instance with different vCPU
> > configurations in AWS.  I have parquet files on an EFS volume and use the
> > same data for each EC2 instance.
> >
> > I have used 4 vCPUs, 8, and 16.  Drill performance is ~25 seconds, 15,
> > and 12, respectively.  I have not changed any of the options.  This is an
> > out-of-the-box 1.11 installation.
> >
> > What Drill tuning options should I experiment with?  I have read
> > https://drill.apache.org/docs/asynchronous-parquet-reader/ but it is so
> > technical that I can't fully consume it; it reads like the default
> > options are the best ones.
> >
> > The query looks like this:
> >
> > SELECT store_key, SUM(sales_dollars) sd
> > FROM dfs.root.sales_p
> > GROUP BY store_key
> > ORDER BY sd DESC
> > LIMIT 10
> >
> > Dan Holmes | Architect | Revenue Analytics, Inc.
> >
> >
>
