Build Update for apache/druid
-
Build: #25785
Status: Broken
Duration: 56 secs
Commit: 32cd47b (master)
Author: Clint Wylie
Message: Fix home view styling (#9444)
View the changeset:
https://github.com/apache/druid/compare/301605717834127ca0db4a8d05d978571d03
Yeah, I think the primary objective here is a standalone writer from Spark
to Druid.
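For context on what a standalone writer would ultimately produce: one common batch path is to stage partition output to deep storage and then submit a native-batch `index_parallel` ingestion spec to the Overlord. Here is a rough, illustrative sketch in plain Python of assembling such a spec (the datasource, bucket, and column names are made up; the spec shape follows Druid's native batch ingestion format, but values are not tuned defaults):

```python
def build_ingestion_spec(datasource, s3_uris, timestamp_col, dimensions):
    """Assemble a Druid native-batch (index_parallel) ingestion spec.

    A Spark writer could first stage each partition's output as JSON on S3,
    then submit a spec like this to the Overlord. All values here are
    illustrative.
    """
    return {
        "type": "index_parallel",
        "spec": {
            "dataSchema": {
                "dataSource": datasource,
                "timestampSpec": {"column": timestamp_col, "format": "iso"},
                "dimensionsSpec": {"dimensions": dimensions},
                "granularitySpec": {
                    "segmentGranularity": "DAY",
                    "queryGranularity": "NONE",
                },
            },
            "ioConfig": {
                "type": "index_parallel",
                "inputSource": {"type": "s3", "uris": s3_uris},
                "inputFormat": {"type": "json"},
            },
            "tuningConfig": {"type": "index_parallel"},
        },
    }

# Hypothetical usage: one staged file per Spark partition.
spec = build_ingestion_spec(
    "wikipedia", ["s3://bucket/staged/part-0.json"], "ts", ["page", "user"]
)
```

A writer built this way leans on Druid's own indexing service for segment creation, rather than writing segment files from Spark executors directly.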
On Thu, Mar 5, 2020 at 11:43 AM itai yaffe wrote:
> Thanks Julian!
> I'm actually aiming for this connector to provide write capabilities (at
> least as a first phase), rather than focusing on read capabilities
Thanks Julian!
I'm actually aiming for this connector to provide write capabilities (at least
as a first phase), rather than focusing on read capabilities.
Having said that, I definitely see the value (even for the use-cases in my
company) of having a reader that queries S3 segments directly! Fu
Ah, that seems like a good reason to send mails to the dev list. I _think_
I just whitelisted the address. (I'm not totally sure, since it's the first
time I've done it, and the mailing list interface is a bit esoteric.)
On Wed, Mar 4, 2020 at 12:14 PM Chi Cao Minh wrote:
> The travis emails to
The spark-druid-connector you shared brings up another design decision we
should probably talk through. That connector effectively wraps an HTTP
query client with Spark plumbing. An alternative approach (and the one I
ended up building due to our business requirements) is to build a reader
that ope
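For reference, the "wrap a query client" approach boils down to two translations: turn a Spark scan into a Druid native query, and turn the Broker's JSON response back into rows. A hedged sketch (the query follows Druid's native scan-query shape as a Broker accepts at POST /druid/v2; the helper names and sample response are purely illustrative):

```python
def build_scan_query(datasource, interval, columns, batch_size=20000):
    # Druid native "scan" query; compactedList returns each event as a
    # plain list in column order, which maps cleanly onto Spark rows.
    return {
        "queryType": "scan",
        "dataSource": datasource,
        "intervals": [interval],
        "columns": columns,
        "resultFormat": "compactedList",
        "batchSize": batch_size,
    }

def rows_from_response(response):
    # Flatten the per-segment result batches into row tuples, the shape a
    # Spark partition reader would iterate over.
    for batch in response:
        for event in batch["events"]:
            yield tuple(event)

# Illustrative response shape for resultFormat=compactedList.
sample = [{"segmentId": "seg1", "columns": ["page", "user"],
           "events": [["Main_Page", "alice"], ["Druid", "bob"]]}]
rows = list(rows_from_response(sample))
```

The trade-off versus reading segments off S3 directly is that every read goes through the Broker and Historicals, so Spark jobs compete with interactive queries for cluster capacity.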
I'll let Julian answer, but in the meantime, I just wanted to point out we
might be able to draw some inspiration from this Spark-Redshift connector
(https://github.com/databricks/spark-redshift#scala).
Though it's somewhat outdated, it can probably serve as a reference for this
new Spark-Druid connector.