Re: [ANNOUNCE] Welcoming Bankim Bhavsar as Kudu committer and PMC member
Congrats Bankim!

On Tue, Apr 21, 2020 at 1:25 PM Bankim Bhavsar wrote:
> Thank you all! Looking forward to more contributions!
>
> -Bankim.
>
> On Tue, Apr 21, 2020 at 12:59 PM Grant Henke wrote:
>> Congrats Bankim! Thanks for all the contributions! Looking forward to more.
>>
>> On Tue, Apr 21, 2020 at 2:50 PM Alexey Serbin wrote:
>> > Congratulations Bankim! Great to see these valuable contributions, keep it up!
>> >
>> > Best regards,
>> >
>> > Alexey
>> >
>> > On Sat, Apr 18, 2020 at 11:04 PM Hao Hao wrote:
>> > > Congrats Bankim! Well deserved!
>> > >
>> > > Best,
>> > > Hao
>> > >
>> > > On Sat, Apr 18, 2020 at 5:45 PM Andrew Wong wrote:
>> > >> Congratulations Bankim! Keep up the great work 🎉
>> > >>
>> > >> On Sat, Apr 18, 2020 at 3:28 PM Adar Dembo wrote:
>> > >>> Hi Kudu community,
>> > >>>
>> > >>> I'm happy to announce that the Kudu PMC has voted to add Bankim Bhavsar as a new committer and PMC member.
>> > >>>
>> > >>> Bankim has been actively writing Kudu code for the last six months or so. Aside from various bug fixes, his major contribution has been to replace the existing Bloom filter predicate code with a more full-featured implementation that should also be more robust and efficient. One of the challenges here has been integration with Apache Impala, and providing a common abstraction that can be used by both codebases. This work is still ongoing but is drawing to a close.
>> > >>>
>> > >>> Please join me in congratulating Bankim!

>> --
>> Grant Henke
>> Software Engineer | Cloudera
>> gr...@cloudera.com | twitter.com/gchenke | linkedin.com/in/granthenke
Re: Partitioning Rules of Thumb
> I think your hardware situation hurts not only for the number of tablets but also for Kudu + Impala. Impala afaik will only use one core per host per query, so is a poor fit for large complex queries on vertical hardware.

This is basically true as of current releases of Impala, but I'm working on addressing it. It's been possible to set mt_dop per-query on queries without joins or table sinks for a long time now (those limitations make it of limited use; a minimal example of setting the option is sketched below). Rough join and table sink support is behind a hidden flag in 3.3 and 3.4. I've been working on making it performant with joins (and doing all the requisite testing), which should land in master very soon and, if things go remotely to plan, be a fully supported option in the next release after 3.4. We saw huge speedups on a lot of queries (like 10x or more). Some queries didn't benefit much if they were limited by scan performance (including when the runtime filters pushed into the scans were filtering out most data before the joins).

On Mon, Mar 16, 2020 at 2:58 PM Boris Tyukin wrote:
> appreciate your thoughts, Cliff
>
> On Mon, Mar 16, 2020 at 11:18 AM Cliff Resnick wrote:
>> Boris,
>>
>> I think the crux of the problem is that "real-time analytics" over deeply nested storage does not really exist. I'll qualify that statement with "real-time" meaning streaming per-record ingestion, and "analytics" as columnar query access. The closest thing I know of today is Google BigQuery or Snowflake, but those are actually micro-batch ingestion done well, not per-record like Kudu. The only real-time analytics solution I know of that has a modicum of nesting is Druid, with a single level of "GroupBy" dimensions that get flattened into virtual rows as part of its non-SQL rollup API. Columnar storage and real-time are probably always going to be a difficult pairing; in fact, Kudu's "real-time" storage format is row-based, and Druid requires a whole lambda-architecture batch compaction. As we all know, nothing is free, whether the trade-off for nested data is column-shredding into something like Kudu as Adar suggested, micro-batching to Parquet, or some combination of both. I won't even go into how nested analytics are handled on the SQL/query side because it gets weird there, too.
>>
>> I think your hardware situation hurts not only for the number of tablets but also for Kudu + Impala. Impala afaik will only use one core per host per query, so is a poor fit for large complex queries on vertical hardware.
>>
>> In conclusion, I don't know how far you've gone into your investigations, but I can say that if your needs are to support "power users" at a premium, then something like Snowflake is great for your problem space. But if analytics are also more centrally integrated in pipelines, then Parquet is hard to beat for price and flexibility, as is Kudu for dashboards or other intelligence that leverages upsert/key semantics. Ultimately, like many of us, you may find that you'll need all of the above.
>>
>> Cliff
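[Editor's note: a minimal sketch of the per-query option discussed above, in Impala SQL. The value 4 and the table name are arbitrary placeholders, and as noted in the thread, at the time of writing joins and table sinks were only supported behind a hidden flag.]

    -- request up to 4 fragment instances per host for subsequent queries
    SET MT_DOP=4;
    -- a hypothetical scan-heavy aggregation over a Kudu-backed table
    SELECT host, COUNT(*) FROM metrics GROUP BY host;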
>> On Sun, Mar 15, 2020 at 1:11 PM Boris Tyukin wrote:
>>> thanks Adar for your comments and thinking this through! I really think Kudu has tons of potential, and I'm very happy we made the decision to use it for our real-time pipeline.
>>>
>>> You asked about use cases for support of nested and large binary/text objects. I work for a large healthcare organization, and there is a lot going on with the newer HL7 FHIR standard. FHIR documents are highly nested JSON objects that can get as big as a few GB. We could not use Kudu for storing FHIR bundles/documents due to their size and the inability to store FHIR objects natively. We ended up using Hive/Spark for that, but that solution is not real-time. And it is not only about storing, but also about fast search in those FHIR documents. I know we could use HBase for that, but HBase is really bad when it comes to analytics-type workloads.
>>>
>>> As for large text/binaries, a good and common example is narrative notes (physician notes, progress notes, etc.). They can be in the form of RTF or PDF documents or images.
>>>
>>> The 300-column limit has not been a big deal so far, but we have high-density nodes with two 44-core CPUs and twelve 12 TB drives, and we cannot use their full power due to Kudu's limits on the number of tablets per node. That is more concerning than the 300-column limit.
>>>
>>> On Sat, Mar 14, 2020 at 3:45 AM Adar Lieber-Dembo wrote:
>>>> Snowflake's micro-partitions sound an awful lot like Kudu's rowsets:
>>>> 1. Both are created transparently and automatically as data is inserted.
>>>> 2. Both may overlap with one another based on the sort column (Snowflake lets you choose the sort column; in Kudu this is always the PK).
>>>> 3. Both are pruned at scan time if the scan's predicates allow it.
>>>> 4. In Kudu, a tablet with a large clu
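[Editor's note: since the thread keeps returning to tablet counts per node, here is a minimal Impala DDL sketch of how hash and range partitioning determine that count. Table and column names are hypothetical; the rule of thumb is tablets = hash buckets x range partitions, multiplied by the replication factor and spread across tablet servers.]

    -- 8 hash buckets x 2 range partitions = 16 tablets (x3 replicas by default)
    CREATE TABLE metrics (
      host STRING,
      ts BIGINT,      -- event time as Unix epoch millis
      value DOUBLE,
      PRIMARY KEY (host, ts)
    )
    PARTITION BY HASH (host) PARTITIONS 8,
                 RANGE (ts) (
      PARTITION VALUES < 1577836800000,     -- before 2020-01-01
      PARTITION 1577836800000 <= VALUES     -- 2020-01-01 onward
    )
    STORED AS KUDU;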
Re: impala with kudu write become very slow
Also including the Kudu list in case someone there recognises the problem.

On Thu, Jul 18, 2019 at 8:05 AM lk_hadoop wrote:
> I0718 18:42:22.677520 51139 coordinator.cc:357] starting execution on 5 backends for query_id=2e4a3fbec0d7d721:2ec73c1c
> I0718 18:42:22.679605 12873 impala-internal-service.cc:44] ExecQueryFInstances(): query_id=2e4a3fbec0d7d721:2ec73c1c
> I0718 18:42:22.679620 12873 query-exec-mgr.cc:46] StartQueryFInstances() query_id=2e4a3fbec0d7d721:2ec73c1c coord=realtimeanalysis-kudu-04-10-8-50-58:22000
> I0718 18:42:22.679625 12873 query-state.cc:178] Buffer pool limit for 2e4a3fbec0d7d721:2ec73c1c: 17179869184
> I0718 18:42:22.679675 12873 initial-reservations.cc:60] Successfully claimed initial reservations (4.00 MB) for query 2e4a3fbec0d7d721:2ec73c1c
> I0718 18:42:22.679769 51332 query-state.cc:309] StartFInstances(): query_id=2e4a3fbec0d7d721:2ec73c1c #instances=2
> I0718 18:42:22.680196 51332 query-state.cc:322] descriptor table for query=2e4a3fbec0d7d721:2ec73c1c
> tuples:
> Tuple(id=2 size=567 slots=[
>   Slot(id=52 type=INT col_path=[] offset=464 null=(offset=563 mask=20) slot_idx=29 field_idx=-1),
>   Slot(id=53 type=STRING col_path=[] offset=0 null=(offset=560 mask=1) slot_idx=0 field_idx=-1),
>   Slot(id=54 type=STRING col_path=[] offset=16 null=(offset=560 mask=2) slot_idx=1 field_idx=-1),
>   Slot(id=55 type=STRING col_path=[] offset=32 null=(offset=560 mask=4) slot_idx=2 field_idx=-1),
>   Slot(id=56 type=STRING col_path=[] offset=48 null=(offset=560 mask=8) slot_idx=3 field_idx=-1),
>   Slot(id=57 type=STRING col_path=[] offset=64 null=(offset=560 mask=10) slot_idx=4 field_idx=-1),
>   Slot(id=58 type=STRING col_path=[] offset=80 null=(offset=560 mask=20) slot_idx=5 field_idx=-1),
>   Slot(id=59 type=STRING col_path=[] offset=96 null=(offset=560 mask=40) slot_idx=6 field_idx=-1),
>   Slot(id=60 type=STRING col_path=[] offset=112 null=(offset=560 mask=80) slot_idx=7 field_idx=-1),
>   Slot(id=61 type=STRING col_path=[] offset=128 null=(offset=561 mask=1) slot_idx=8 field_idx=-1),
>   Slot(id=62 type=STRING col_path=[] offset=144 null=(offset=561 mask=2) slot_idx=9 field_idx=-1),
>   Slot(id=63 type=STRING col_path=[] offset=160 null=(offset=561 mask=4) slot_idx=10 field_idx=-1),
>   Slot(id=64 type=INT col_path=[] offset=468 null=(offset=563 mask=40) slot_idx=30 field_idx=-1),
>   Slot(id=65 type=INT col_path=[] offset=472 null=(offset=563 mask=80) slot_idx=31 field_idx=-1),
>   Slot(id=66 type=INT col_path=[] offset=476 null=(offset=564 mask=1) slot_idx=32 field_idx=-1),
>   Slot(id=67 type=INT col_path=[] offset=480 null=(offset=564 mask=2) slot_idx=33 field_idx=-1),
>   Slot(id=68 type=STRING col_path=[] offset=176 null=(offset=561 mask=8) slot_idx=11 field_idx=-1),
>   Slot(id=69 type=STRING col_path=[] offset=192 null=(offset=561 mask=10) slot_idx=12 field_idx=-1),
>   Slot(id=70 type=STRING col_path=[] offset=208 null=(offset=561 mask=20) slot_idx=13 field_idx=-1),
>   Slot(id=71 type=STRING col_path=[] offset=224 null=(offset=561 mask=40) slot_idx=14 field_idx=-1),
>   Slot(id=72 type=STRING col_path=[] offset=240 null=(offset=561 mask=80) slot_idx=15 field_idx=-1),
>   Slot(id=73 type=STRING col_path=[] offset=256 null=(offset=562 mask=1) slot_idx=16 field_idx=-1),
>   Slot(id=74 type=INT col_path=[] offset=484 null=(offset=564 mask=4) slot_idx=34 field_idx=-1),
>   Slot(id=75 type=INT col_path=[] offset=488 null=(offset=564 mask=8) slot_idx=35 field_idx=-1),
>   Slot(id=76 type=INT col_path=[] offset=492 null=(offset=564 mask=10) slot_idx=36 field_idx=-1),
>   Slot(id=77 type=INT col_path=[] offset=496 null=(offset=564 mask=20) slot_idx=37 field_idx=-1),
>   Slot(id=78 type=INT col_path=[] offset=500 null=(offset=564 mask=40) slot_idx=38 field_idx=-1),
>   Slot(id=79 type=INT col_path=[] offset=504 null=(offset=564 mask=80) slot_idx=39 field_idx=-1),
>   Slot(id=80 type=INT col_path=[] offset=508 null=(offset=565 mask=1) slot_idx=40 field_idx=-1),
>   Slot(id=81 type=STRING col_path=[] offset=272 null=(offset=562 mask=2) slot_idx=17 field_idx=-1),
>   Slot(id=82 type=STRING col_path=[] offset=288 null=(offset=562 mask=4) slot_idx=18 field_idx=-1),
>   Slot(id=83 type=INT col_path=[] offset=512 null=(offset=565 mask=2) slot_idx=41 field_idx=-1),
>   Slot(id=84 type=STRING col_path=[] offset=304 null=(offset=562 mask=8) slot_idx=19 field_idx=-1),
>   Slot(id=85 type=STRING col_path=[] offset=320 null=(offset=562 mask=10) slot_idx=20 field_idx=-1),
>   Slot(id=86 type=STRING col_path=[] offset=336 null=(offset=562 mask=20) slot_idx=21 field_idx=-1),
>   Slot(id=87 type=STRING col_path=[] offset=352 null=(offset=562 mask=40) slot_idx=22 field_idx=-1),
>   Slot(id=88 type=STRING col_path=[] offset=368 null=(offset=562 mask=80) slot_idx=23 field_idx=-1),
>   Slot(id=89 type=INT col_path=[] offset=516 null=(offset=565 mask=4) slot_idx=42 field_idx=-1),
>   Slot(id=90 type=INT col_path=[] offset=520 null=(offset=565 mask=8) slot_idx=43 field_idx=-1),
>   S
Re: using Kudu binary column in Impala
We don't support Kudu binary columns in Impala: https://issues.apache.org/jira/browse/IMPALA-5323. At least with Impala/Kudu, using a string should work fine. We use strings internally in Impala for storing HLL intermediates for stats computation.

On Sat, Dec 15, 2018 at 7:17 PM Cliff Resnick wrote:
> We're doing some testing storing HyperLogLog synopses in Kudu. It works well in Spark, but the hope is to also query through Impala with a UDF. Spark would remain the writer, with Impala read-only. To work with Impala, I'm wondering if it's best to define the HLL data as the Kudu string type with plain encoding, or perhaps it's possible to keep it as binary but declare it as string in an external table definition? I presume the latter is not possible, since Kudu's generated external table script does not do this. Please forgive me for not conducting my own experimentation, but I figured someone here has run up against this before; if so, please let me know!
>
> -Cliff
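[Editor's note: for readers of the archive, a sketch of the string-with-plain-encoding table Cliff describes. Table and column names are made up; Impala's Kudu DDL does accept per-column ENCODING and COMPRESSION attributes.]

    CREATE TABLE hll_synopses (
      key_id BIGINT,
      -- HLL intermediate bytes stored as STRING; PLAIN_ENCODING skips
      -- dictionary building, which buys little on high-entropy sketch data
      hll_sketch STRING ENCODING PLAIN_ENCODING COMPRESSION LZ4,
      PRIMARY KEY (key_id)
    )
    PARTITION BY HASH (key_id) PARTITIONS 4
    STORED AS KUDU;

In the setup Cliff describes, Spark would upsert into this table while Impala reads the hll_sketch column through a UDF.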