Re: "broadcast" tablet replication for kudu?

2019-04-24 Thread Clifford Resnick
Probably a narrow reach, but do these particular dimension tables happen to have 
a common column that can be transitively joined with the other dimension tables, 
possibly after some light denormalization? If so, you can add a (redundant) 
predicate so that only the filtered set from that dim table is broadcast (at 
least with Impala).

For instance, this will broadcast all of DIM_2:

SELECT f.a, d1.b, d2.c
FROM FACT f
INNER JOIN DIM_1 d1 ON f.dim_1_id = d1.id
INNER JOIN DIM_2 d2 ON f.dim_2_id = d2.id
WHERE f.dim_1_id = 123;

This equivalent query will broadcast only the filtered rowset:

SELECT f.a, d1.b, d2.c
FROM FACT f
INNER JOIN DIM_1 d1 ON f.dim_1_id = d1.id
INNER JOIN DIM_2 d2 ON f.dim_2_id = d2.id
WHERE f.dim_1_id = 123
  AND d2.dim_1_id = d1.id;

(The added d2.dim_1_id = d1.id predicate lets Impala propagate the f.dim_1_id = 123 
filter through d1.id to the DIM_2 scan, so only matching rows are broadcast.)




From: Boris Tyukin 
Reply-To: "user@kudu.apache.org" 
Date: Wednesday, April 24, 2019 at 12:02 PM
To: "user@kudu.apache.org" 
Subject: Re: "broadcast" tablet replication for kudu?

Sorry to revive the old thread, but I'm curious if there is a better solution one 
year later... We have a few small tables (under 300k rows) which are used with 
practically every single query and, to make things worse, joined more than once in 
the same query.

Is there a way to replicate this table on every node to improve performance and 
avoid broadcasting this table every time?

On Mon, Jul 23, 2018 at 10:52 AM Todd Lipcon <t...@cloudera.com> wrote:

On Mon, Jul 23, 2018, 7:21 AM Boris Tyukin <bo...@boristyukin.com> wrote:
Hi Todd,

Are you saying that your earlier comment below is no longer valid with Impala 
2.11, and that if I replicate a table to all our Kudu nodes Impala can benefit 
from this?

No, the earlier comment is still valid. Just saying that in some cases exchange 
can be faster in the new Impala version.


"
It's worth noting that, even if your table is replicated, Impala's planner is 
unaware of this fact and it will give the same plan regardless. That is to say, 
rather than every node scanning its local copy, instead a single node will 
perform the whole scan (assuming it's a small table) and broadcast it from 
there within the scope of a single query. So, I don't think you'll see any 
performance improvements on Impala queries by attempting something like an 
extremely high replication count.

I could see bumping the replication count to 5 for these tables since the extra 
storage cost is low and it will ensure higher availability of the important 
central tables, but I'd be surprised if there is any measurable perf impact.
"

On Mon, Jul 23, 2018 at 9:46 AM Todd Lipcon <t...@cloudera.com> wrote:
Are you on the latest release of Impala? It switched from using Thrift for RPC 
to a new implementation (actually borrowed from kudu) which might help 
broadcast performance a bit.

Todd

On Mon, Jul 23, 2018, 6:43 AM Boris Tyukin <bo...@boristyukin.com> wrote:
Sorry to revive the old thread, but I am curious if there is a good way to speed 
up requests to frequently used tables in Kudu.

On Thu, Apr 12, 2018 at 8:19 AM Boris Tyukin <bo...@boristyukin.com> wrote:
Bummer.. After reading you guys' conversation, I wish there was an easier way... 
We will have the same issue, as we have a few dozen tables which are used very 
frequently in joins, and I was hoping there was an easy way to replicate them on 
most of the nodes to avoid broadcasts every time.

On Thu, Apr 12, 2018 at 7:26 AM, Clifford Resnick <cresn...@mediamath.com> wrote:
The table in our case is 12x hashed and ranged by month, so the broadcasts were 
often to all (12) nodes.
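
For reference, a rough Impala DDL sketch of a layout like the one described above 
(12 hash buckets plus range partitions by month); the table and column names are 
illustrative, not taken from this thread:

CREATE TABLE fact_events (
  id BIGINT,
  event_month INT,
  dim_1_id BIGINT,
  amount DOUBLE,
  PRIMARY KEY (id, event_month)
)
PARTITION BY HASH (id) PARTITIONS 12,
  RANGE (event_month) (
    PARTITION VALUE = 201801,
    PARTITION VALUE = 201802
  )
STORED AS KUDU;
-- Each monthly range partition is split into 12 hash buckets, so its tablets
-- (and the exchanges feeding joins against it) can span up to 12 tablet servers.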

On Apr 12, 2018 12:58 AM, Mauricio Aristizabal <mauri...@impact.com> wrote:
Sorry I left that out Cliff, FWIW it does seem to have been broadcast.

Not sure though how a shuffle would be much different from a broadcast if the 
entire table is 1 file/block on 1 node.

On Wed, Apr 11, 2018 at 8:52 PM, Cliff Resnick <cre...@gmail.com> wrote:
From the screenshot it does not look like there was a broadcast of the 
dimension table(s), so it could be the case here that the multiple smaller 
sends help. Our dim tables are generally in the single-digit millions of rows and 
Impala chooses to broadcast them. Since the fact result cardinality is always 
much smaller, we've found that forcing a [SHUFFLE] dimension join is actually 
faster, since it sends the dims only once rather than to all nodes. The 
degraded performance of broadcast is especially obvious when the query 
returns zero results. I don't have much experience here, but it does seem that 
Kudu's efficient predicate scans can sometimes "break" Impala's query plan.

-Cliff
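
As an aside, forcing the shuffle strategy described above can be done with an 
Impala join hint; a minimal sketch, with illustrative table names rather than ones 
from this thread:

SELECT f.a, d.b
FROM FACT f
INNER JOIN /* +SHUFFLE */ DIM d ON f.dim_id = d.id
WHERE f.event_month = 201804;

The [SHUFFLE] / [BROADCAST] bracket form placed right after the JOIN keyword works 
as well; without a hint, Impala picks the strategy from its cardinality estimates.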

On Wed, Apr 11, 2018 at 5:41 PM, Mauricio Aristizabal <mauri...@impact.com> wrote:
@Todd not to belabor the point, but when I suggested breaking up small dim 
tables into multiple parquet files (and in this thread's context perhaps 
partition the kudu table, even if small, into multiple tablets), it was to speed 
up joins/exchanges, not to parallelize the scan.

Re: "broadcast" tablet replication for kudu?

2018-07-23 Thread Clifford Resnick
Great! We’re on 2.11 now. I’ll do some before/after benchmarks this week.

From: Todd Lipcon <t...@cloudera.com>
Reply-To: "user@kudu.apache.org" <user@kudu.apache.org>
Date: Monday, July 23, 2018 at 10:05 AM
To: "user@kudu.apache.org" <user@kudu.apache.org>
Subject: Re: "broadcast" tablet replication for kudu?

Impala 2.12. The external RPC protocol is still Thrift.

Todd

On Mon, Jul 23, 2018, 7:02 AM Clifford Resnick <cresn...@mediamath.com> wrote:
Is this impala 3.0? I’m concerned about breaking changes and our RPC to Impala 
is thrift-based.


Re: "broadcast" tablet replication for kudu?

2018-07-23 Thread Clifford Resnick
Is this impala 3.0? I’m concerned about breaking changes and our RPC to Impala 
is thrift-based.

From: Todd Lipcon <t...@cloudera.com>
Reply-To: "user@kudu.apache.org" <user@kudu.apache.org>
Date: Monday, July 23, 2018 at 9:46 AM
To: "user@kudu.apache.org" <user@kudu.apache.org>
Subject: Re: "broadcast" tablet replication for kudu?

Are you on the latest release of Impala? It switched from using Thrift for RPC 
to a new implementation (actually borrowed from kudu) which might help 
broadcast performance a bit.

Todd


On Wed, Apr 11, 2018 at 5:41 PM, Mauricio Aristizabal <mauri...@impact.com> wrote:
@Todd not to belabor the point, but when I suggested breaking up small dim 
tables into multiple parquet files (and in this thread's context perhaps 
partition kudu table, even if small, into multiple tablets), it was to speed up 
joins/exchanges, not to parallelize the scan.

For example, recently we ran into a slow query where the 14M record dimension 
fit into a single file & block, so it got scanned on a single node, though still 
pretty quickly (300ms); however, it caused the join to take 25+ seconds and 
bogged down the entire query. See the highlighted fragment and its parent.
So we broke it into several small files the way I described in my previous 
post, and now join and query are fast (6s).

-m


On Fri, Mar 16, 2018 at 3:55 PM, Todd Lipcon <t...@cloudera.com> wrote:
I suppose in the case that the dimension table scan makes up a non-trivial portion 
of your workload time, then yea, parallelizing the scan as you suggest would be 
beneficial. That said, in typical analytic queries, scanning the dimension 
tables is very quick compared to scanning the much-larger fact tables, so the 
extra parallelism on the dim table scan isn't worth too much.

-Todd

On Fri, Mar 16, 2018 at 2:56 PM, Mauricio Aristizabal <mauri...@impactradius.com> wrote:
@Todd I know working with parquet in the past I've seen small dimensions that 
fit in a single file/block limit the parallelism of join/exchange/aggregation 
nodes, and I've forced those dims to spread across 20 or so blocks by 
leveraging SET PARQUET_FILE_SIZE=8m; or similar when doing INSERT OVERWRITE to 
load them, which then allows these operations to parallelize across that many 
nodes.
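
A minimal sketch of that load pattern (PARQUET_FILE_SIZE is a standard Impala 
query option; the table names are hypothetical):

SET PARQUET_FILE_SIZE=8m;
INSERT OVERWRITE dim_small
SELECT * FROM dim_small_staging;
-- With an ~8 MB target file size, even a modest dimension is written as many
-- small Parquet files, so subsequent scans and the exchanges feeding joins can
-- run on more than one node.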

Wouldn't it be useful here for Cliff's small dims to be partitioned into a 
couple tablets to similarly improve parallelism?

-m

On Fri, Mar 16, 2018 at 2:29 PM, Todd Lipcon <t...@cloudera.com> wrote:
On Fri, Mar 16, 2018 at 2:19 PM, Cliff Resnick <cre...@gmail.com> wrote:
Hey Todd,

Thanks for that explanation, as well as all the great work you're doing -- 
it's much appreciated! I just have one last follow-up question. Reading about 
BROADCAST operations (Kudu, Spark, Flink, etc.) it seems the smaller table is 
always copied in its entirety

Re: "broadcast" tablet replication for kudu?

2018-03-16 Thread Clifford Resnick
I thought I had read that the Kudu client can configure a scan for 
CLOSEST_REPLICA and assumed this was a way to take advantage of data 
collocation. If this exists, then how far out of context is my understanding of 
it? Reading about HDFS cache replication, I do know that Impala will choose a 
random replica there to more evenly distribute load. But especially compared to 
Kudu upsert, managing mutable data using Parquet is painful. So, perhaps to sum 
things up: if nearly 100% of my metadata scans are single primary key lookups 
followed by a tiny broadcast, then am I really just splitting hairs 
performance-wise between Kudu and HDFS-cached Parquet?
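
For context on the upsert point, this is the kind of statement Kudu makes trivial 
through Impala, whereas with Parquet the same change would mean rewriting files or 
partitions; the table and columns here are hypothetical:

UPSERT INTO metadata_dim (id, name, updated_at)
VALUES (123, 'new name', now());
-- Inserts the row if the key is new, otherwise updates it in place.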

From: Todd Lipcon <t...@cloudera.com>
Reply-To: "user@kudu.apache.org" <user@kudu.apache.org>
Date: Friday, March 16, 2018 at 2:51 PM
To: "user@kudu.apache.org" <user@kudu.apache.org>
Subject: Re: "broadcast" tablet replication for kudu?

It's worth noting that, even if your table is replicated, Impala's planner is 
unaware of this fact and it will give the same plan regardless. That is to say, 
rather than every node scanning its local copy, instead a single node will 
perform the whole scan (assuming it's a small table) and broadcast it from 
there within the scope of a single query. So, I don't think you'll see any 
performance improvements on Impala queries by attempting something like an 
extremely high replication count.

I could see bumping the replication count to 5 for these tables since the extra 
storage cost is low and it will ensure higher availability of the important 
central tables, but I'd be surprised if there is any measurable perf impact.

-Todd

On Fri, Mar 16, 2018 at 11:35 AM, Clifford Resnick <cresn...@mediamath.com> wrote:
Thanks for that, glad I was wrong there! Aside from replication considerations, 
is it also recommended that the number of tablet servers be odd?

I will check the forums as you suggested, but from what I read after searching, 
Impala relies on user-configured caching strategies using the HDFS cache. The 
write workload for these tables is very light, maybe a dozen or so records per 
hour across 6 or 7 tables. The size of the tables ranges from thousands to low 
millions of rows, so sub-partitioning would not be required. So perhaps this is 
not a typical use case, but I think it could work quite well with Kudu.

From: Dan Burkert <danburk...@apache.org>
Reply-To: "user@kudu.apache.org" <user@kudu.apache.org>
Date: Friday, March 16, 2018 at 2:09 PM
To: "user@kudu.apache.org" <user@kudu.apache.org>
Subject: Re: "broadcast" tablet replication for kudu?

The replication count is the number of tablet servers on which Kudu will host 
copies. So if you set the replication level to 5, Kudu will put the data on 
5 separate tablet servers. There's no built-in broadcast table feature; upping 
the replication factor is the closest thing. A couple of things to keep in 
mind:

- Always use an odd replication count. This is important due to how the Raft 
algorithm works. Recent versions of Kudu won't even let you specify an even 
number without flipping some flags.
- We don't test much beyond 5 replicas. It should work, but you may run 
into issues since it's a relatively rare configuration. With a heavy write 
workload and many replicas you are even more likely to encounter issues.
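
A minimal sketch of setting the replication factor from Impala when creating a 
Kudu table; the table is hypothetical, and kudu.num_tablet_replicas is the table 
property Impala uses for this:

CREATE TABLE small_dim (
  id BIGINT,
  name STRING,
  PRIMARY KEY (id)
)
PARTITION BY HASH (id) PARTITIONS 2
STORED AS KUDU
TBLPROPERTIES ('kudu.num_tablet_replicas' = '5');
-- Every tablet of this table is kept on 5 tablet servers instead of the default 3.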

It's also worth checking in an Impala forum whether it has features that make 
joins against small broadcast tables better; perhaps Impala can cache small 
tables locally when doing joins.

- Dan

On Fri, Mar 16, 2018 at 10:55 AM, Clifford Resnick <cresn...@mediamath.com> wrote:
The problem is, AFAIK, that the replication count is not necessarily the 
distribution count, so you can't guarantee all tablet servers will have a copy.

On Mar 16, 2018 1:41 PM, Boris Tyukin <bo...@boristyukin.com> wrote:
I'm new to Kudu but we are also going to use Impala mostly with Kudu. We have a 
few tables that are small but used a lot. My plan is to replicate them more than 
3 times. When you create a Kudu table, you can specify the number of replicated 
copies (3 by default), and I guess you can put there a number corresponding to 
your node count in the cluster. The downside: you cannot change that number 
unless you recreate the table.

On Fri, Mar 16, 2018 at 10:42 AM, Cliff Resnick <cre...@gmail.com> wrote:
We will soon be moving our analytics from AWS Redshift

Re: Casual meetup/happy hour at Strata?

2016-09-17 Thread Clifford Resnick
+1. We're just starting with Kudu, but it would be nice to meet other users, 
and a casual Q & A would be great if you're up for it!

On Sep 17, 2016 9:05 PM, Todd Lipcon wrote:
Hey Kudu users,

I'll be in NYC for the last week in September for Strata/Hadoop World. I 
imagine some others might be as well, and wanted to gauge interest in doing a 
casual meetup or happy hour on Wednesday night of that week. No presentations 
or anything, just pick a time and place and whoever's around can drop by and 
put some faces to names.

Let me know if you're interested - if not enough people are around, I'll can 
the idea, but if it seems there are at least a few people in town it might be 
fun.

-Todd
--
Todd Lipcon
Software Engineer, Cloudera