There was SPARK-12008, which was closed.
Not sure if there is an active JIRA in this regard.
On Tue, Aug 2, 2016 at 6:40 PM, 马晓宇 wrote:
> Hi guys,
>
> I wonder if anyone is already working on SQL-based authorization.
>
> This is something we need badly right now.
Found a few issues:
[SPARK-6810] Performance benchmarks for SparkR
[SPARK-2833] performance tests for linear regression
[SPARK-15447] Performance test for ALS in Spark 2.0
Haven't found one for Spark core.
On Fri, Jul 8, 2016 at 8:58 AM, Michael Allman wrote:
> Hello,
>
Running the following command:
build/mvn clean -Phive -Phive-thriftserver -Pyarn -Phadoop-2.6 -Psparkr -Dhadoop.version=2.7.0 package
The build stopped with this test failure:
- SPARK-9757 Persist Parquet relation with decimal column *** FAILED ***
On Wed, Jul 6, 2016 at 6:25 AM,
Congratulations, Felix.
On Mon, Aug 8, 2016 at 11:15 AM, Matei Zaharia
wrote:
> Hi all,
>
> The PMC recently voted to add Felix Cheung as a committer. Felix has been
> a major contributor to SparkR and we're excited to have him join
> officially. Congrats and welcome,
Though no HBase release includes the hbase-spark module, you can find the
backport patch on HBASE-14160 (for Spark 1.6).
You can build the hbase-spark module yourself.
Cheers
On Wed, Jan 25, 2017 at 3:32 AM, Chetan Khatri
wrote:
> Hello Spark Community Folks,
>
>
The references are vendor specific.
I suggest contacting the vendor's mailing list for your PR.
My initial assumption was that the HBase repository involved was Apache's.
Cheers
On Wed, Jan 25, 2017 at 7:38 AM, Chetan Khatri <chetan.opensou...@gmail.com>
wrote:
> @Ted Yu, Correct but HBase-Spa
Makes sense.
I trust Hyukjin, Holden and Cody's judgement in respective areas.
I just wish to see more participation from the committers.
Thanks
> On Oct 8, 2016, at 8:27 AM, Sean Owen wrote:
>
> Hyukjin
I think only committers should resolve JIRAs that were not created by the
resolvers themselves.
> On Oct 8, 2016, at 6:53 AM, Hyukjin Kwon wrote:
>
> I am uncertain too. It'd be great if these are documented too.
>
> FWIW, in my case, I privately asked and told Sean first that
'Spark 1.x and Scala 2.10 & 2.11' was repeated.
I guess your second line should read:
org.bdgenomics.adam:adam-{core,apis,cli}-spark2_2.1[0,1] for Spark 2.x and
Scala 2.10 & 2.11
On Wed, Aug 24, 2016 at 9:41 AM, Michael Heuer wrote:
> Hello,
>
> We're a project downstream
This should be in netty-all :
$ jar tvf
/home/x/.m2/repository/io/netty/netty-all/4.0.29.Final/netty-all-4.0.29.Final.jar
| grep ThreadLocalRandom
967 Tue Jun 23 11:10:30 UTC 2015 io/netty/util/internal/ThreadLocalRandom$1.class
1079 Tue Jun 23 11:10:30 UTC 2015
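The same membership check can be scripted. A minimal sketch using Python's zipfile module (a jar is just a zip archive; the path below is an example, not a fixed location):

```python
import zipfile

def find_entries(jar_path, needle):
    """Return all entries in the jar whose name contains `needle`."""
    with zipfile.ZipFile(jar_path) as jar:
        return [name for name in jar.namelist() if needle in name]

# Usage (hypothetical local path):
# find_entries("netty-all-4.0.29.Final.jar", "ThreadLocalRandom")
```

This is equivalent to `jar tvf ... | grep ...` but returns the matches as a list, which is handy when checking several jars on a classpath.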
I haven't used Gobblin.
You can consider asking the Gobblin mailing list about the first option.
The second option would work.
On Wed, Dec 21, 2016 at 2:28 AM, Chetan Khatri
wrote:
> Hello Guys,
>
> I would like to understand different approach for Distributed
of processing
is delivered to hbase.
Cheers
On Wed, Dec 21, 2016 at 8:00 AM, Chetan Khatri <chetan.opensou...@gmail.com>
wrote:
> Ok, Sure will ask.
>
> But what would be a generic best-practice solution for incremental load
> from HBase?
>
> On Wed, Dec 21, 2016 at 8:42 PM, Ted
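One generic pattern for incremental loads, independent of any HBase-specific API, is watermark-based filtering: remember the highest write timestamp seen so far and only pull rows newer than it on the next run. A minimal sketch (the row shape and names are hypothetical, for illustration only):

```python
# Hypothetical rows: (row_key, value, write_timestamp)
def incremental_load(rows, watermark):
    """Return rows written after `watermark`, plus the new watermark."""
    fresh = [r for r in rows if r[2] > watermark]
    new_watermark = max((r[2] for r in fresh), default=watermark)
    return fresh, new_watermark
```

In HBase terms, the same idea maps to restricting a scan to a time range on the server side instead of filtering client-side.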
Timur:
Mind starting a new thread ?
I have the same question as you have.
> On Mar 20, 2017, at 11:34 AM, Timur Shenkao wrote:
>
> Hello guys,
>
> Spark benefits from stable versions not frequent ones.
> A lot of people still have 1.6.x in production. Those who wants the
For Cassandra, I found:
https://www.instaclustr.com/multi-data-center-sparkcassandra-benchmark-round-2/
My coworker (on vacation at the moment) was doing a benchmark with HBase.
When he comes back, the results can be published.
Note: it is hard to find comparison results with the same setup (hardware,
You can find the JIRA handle of the person you want to mention by going to
a JIRA where that person has commented.
e.g. you want to find the handle for Joseph.
You can go to:
https://issues.apache.org/jira/browse/SPARK-6635
and click on his name in comment:
Does adding -X to the mvn command give you more information?
Cheers
On Sun, Jun 25, 2017 at 5:29 AM, 萝卜丝炒饭 <1427357...@qq.com> wrote:
> Hi all,
>
> Today I used a new PC to compile Spark.
> At the beginning, it worked well,
> but it stopped at some point.
> The content in the console is:
>
Congratulations, Jerry !
On Mon, Aug 28, 2017 at 6:28 PM, Matei Zaharia
wrote:
> Hi everyone,
>
> The PMC recently voted to add Saisai (Jerry) Shao as a committer. Saisai
> has been contributing to many areas of the project for a long time, so it’s
> great to see him
Is there going to be another RC ?
With KafkaContinuousSourceSuite hanging, it is hard to get the rest of the
tests going.
Cheers
On Sat, Jan 13, 2018 at 7:29 AM, Sean Owen wrote:
> The signatures and licenses look OK. Except for the missing k8s package,
> the contents look
Did you include any picture?
Looks like the picture didn't go through.
Please use a third-party site.
Thanks
Original message From: Tomasz Gawęda
Date: 1/15/18 2:07 PM (GMT-08:00) To:
dev@spark.apache.org, u...@spark.apache.org Subject: Broken SQL
spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:203)
>
> 18/08/20 22:29:33 INFO AbstractCoordinator: Marking the coordinator
> :9093 (id: 2147483647 rack: null) dead for group
> spark-kafka-source-1aa50598-99d1-4c53-a73c-fa6637a219b2--13387949
If you have picked up all the changes for SPARK-18057, the Kafka “broker”
supporting v1.0+ should be compatible with Spark's Kafka adapter.
Can you post more details about the “failed to send SSL close message”
errors?
(The default Kafka version is 2.0.0 in the Spark Kafka adapter after SPARK-18057.)
+1
Original message From: Reynold Xin Date:
8/30/18 11:11 PM (GMT-08:00) To: Felix Cheung Cc:
dev Subject: Re: SPIP: Executor Plugin (SPARK-24918)
I actually had a similar use case a while ago, but not entirely the same. In my
use case, Spark is already up, but I want to
Interesting.
Should requiredClustering return a Set of Expressions?
This way, we can determine the order of Expressions by looking at what
requiredOrdering() returns.
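The idea can be illustrated with a small sketch. The names mirror the discussion, but the implementation is hypothetical, not Spark's actual API: given an unordered set of clustering expressions, derive their order from the required ordering.

```python
def ordered_clustering(required_clustering, required_ordering):
    """Order the clustering expressions by their position in the
    required ordering; expressions absent from the ordering go last."""
    rank = {expr: i for i, expr in enumerate(required_ordering)}
    return sorted(required_clustering, key=lambda e: rank.get(e, len(rank)))
```

With expressions modeled as strings, `ordered_clustering({"b", "a"}, ["a", "b", "c"])` yields `["a", "b"]`: the set of clustering expressions gets a deterministic order from requiredOrdering().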
On Mon, Mar 26, 2018 at 5:45 PM, Ryan Blue
wrote:
> Hi Pat,
>
> Thanks for starting the
<cloud0...@gmail.com> wrote:
> Actually clustering is already supported, please take a look at
> SupportsReportPartitioning
>
> Ordering is not proposed yet, might be similar to what Ryan proposed.
>
> On Mon, Mar 26, 2018 at 6:11 PM, Ted Yu <yuzhih...@gmail.c
+1
Original message From: Ryan Blue <rb...@netflix.com> Date:
3/30/18 2:28 PM (GMT-08:00) To: Patrick Woody <patrick.woo...@gmail.com> Cc:
Russell Spitzer <russell.spit...@gmail.com>, Wenchen Fan <cloud0...@gmail.com>,
Ted Yu <yuzhih...@gmai
>>>>>>>>>> I can tell, a hash function for combining clusters into tasks or a way to
>>>>>>>>>> provide Spark a hash function for the other side of a join. It seems
Congratulations, Zhenhua
Original message From: 雨中漫步 <601450...@qq.com> Date: 4/1/18
11:30 PM (GMT-08:00) To: Yuanjian Li , Wenchen Fan
Cc: dev Subject: Re: Welcome
Zhenhua Wang as a Spark committer
+1
Original message From: Denny Lee Date:
9/30/18 10:30 PM (GMT-08:00) To: Stavros Kontopoulos
Cc: Sean Owen , Wenchen
Fan , dev Subject: Re: [VOTE] SPARK
2.4.0 (RC2)
+1 (non-binding)
On Sat, Sep 29, 2018 at 10:24 AM Stavros Kontopoulos
wrote:
+1
Stavros
On Sat, Sep
Congratulations to all !
Original message From: Jungtaek Lim Date:
10/3/18 2:41 AM (GMT-08:00) To: Marco Gaido Cc: dev
Subject: Re: welcome a new batch of committers
Congrats all! You all deserved it.
On Wed, 3 Oct 2018 at 6:35 PM Marco Gaido wrote:
Congrats you all!
Il
+1
Original message From: Sean Owen Date:
8/31/18 6:40 AM (GMT-08:00) To: Darcy Shen Cc:
dev@spark.apache.org Subject: Re: Upgrade SBT to the latest
Certainly worthwhile. I think this should target Spark 3, which should come
after 2.4, which is itself already just about
+1
Original message From: Dongjin Lee Date:
9/19/18 7:20 AM (GMT-08:00) To: dev Subject: Re:
from_csv
Another +1.
I already experienced this case several times.
On Mon, Sep 17, 2018 at 11:03 AM Hyukjin Kwon wrote:
+1 for this idea since text parsing in CSV/JSON is quite