> Another clarification: not databricks, but the Apache Spark PMC grants
> access to the JIRA / wiki. That said... I'm not actually sure how it's done.
word. i'll make the changes if we need to.
>
> i can't give you permissions -- that has to be (most likely) through
> someone @ databricks, like michael.
>
Another clarification: not databricks, but the Apache Spark PMC grants
access to the JIRA / wiki. That said... I'm not actually sure how it's done.
Yep. Let's hold on. :)
On Tue, May 24, 2016 at 3:45 PM, shane knapp wrote:
> Sure, could you give me the permission for Spark Jira?
>
> Although we haven't decided yet, I can add Travis related section
> (summarizing current configurations and expected VM HW, etc).
>
i can't give you permissions -- that has to be (most likely) through
someone @ databricks, like michael.
Thank you, Shane.
Sure, could you give me the permission for Spark Jira?
Although we haven't decided yet, I can add Travis related section
(summarizing current configurations and expected VM HW, etc).
That will be helpful for further discussions.
It's just a Wiki, you can delete the Travis
> As Sean said, Vanzin made a PR for JDK7 compilation. We can ignore the issue
> of JDK7 compilation.
>
vanzin and i are working together on this right now... we currently
have java 7u79 installed on all of the workers. if some random test
failures keep happening during his tests, i will roll
Hi, All.
As Sean said, Vanzin made a PR for JDK7 compilation. We can ignore the
issue of JDK7 compilation.
The remaining issues are the java-linter and maven installation test.
To: Michael
For the rate limit, the Apache Software Foundation seems to use 30 concurrent
jobs, according to the INFRA blog.
Thanks, Koert. This is great. Please keep them coming.
On Tue, May 24, 2016 at 9:27 AM, Koert Kuipers wrote:
> https://issues.apache.org/jira/browse/SPARK-15507
>
> On Tue, May 24, 2016 at 12:21 PM, Ted Yu wrote:
>
>> Please log a JIRA.
>>
>> Thanks
>>
https://issues.apache.org/jira/browse/SPARK-15507
On Tue, May 24, 2016 at 12:21 PM, Ted Yu wrote:
> Please log a JIRA.
>
> Thanks
>
> On Tue, May 24, 2016 at 8:33 AM, Koert Kuipers wrote:
>
>> hello,
>> as we continue to test spark 2.0 SNAPSHOT in-house
+1 (non-binding)
I think this is an important step to improve Spark as an Apache project.
.. Owen
On Mon, May 23, 2016 at 11:18 AM, Holden Karau wrote:
> +1 non-binding (as a contributor anything which speed things up is worth
> a try, and git blame is a good enough
Please log a JIRA.
Thanks
On Tue, May 24, 2016 at 8:33 AM, Koert Kuipers wrote:
> hello,
> as we continue to test spark 2.0 SNAPSHOT in-house we ran into the
> following trying to port an existing application from spark 1.6.1 to spark
> 2.0.0-SNAPSHOT.
>
> given this code:
>
The first item as a whole should be null; please refer to the JIRA.
Sent from my iPhone
> On May 24, 2016, at 7:31 AM, Koert Kuipers wrote:
>
> got it, but i assume that's an internal implementation detail, and it should
> show null not -1?
>
>> On Tue, May 24, 2016 at 3:10
hello,
as we continue to test spark 2.0 SNAPSHOT in-house we ran into the
following trying to port an existing application from spark 1.6.1 to spark
2.0.0-SNAPSHOT.
given this code:
case class Test(a: Int, b: String)
val rdd = sc.parallelize(List(Row(List(Test(5, "ha"), Test(6, "ba")
val
got it, but i assume that's an internal implementation detail, and it should
show null not -1?
On Tue, May 24, 2016 at 3:10 AM, Zhan Zhang wrote:
> The reason for "-1" is that the default value for Integer is -1 if the
> value
> is null
>
> def defaultValue(jt: String):
Do you need more information?
> On 23 May 2016, at 19:16, Ovidiu-Cristian MARCU
> wrote:
>
> Yes,
>
> git log
> commit dafcb05c2ef8e09f45edfb7eabf58116c23975a0
> Author: Sameer Agarwal
> Date: Sun May 22
The reason for "-1" is that the default value for Integer is -1 if the value
is null
def defaultValue(jt: String): String = jt match {
...
case JAVA_INT => "-1"
...
}
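To make the sentinel issue concrete, here is a small standalone Scala sketch (plain Scala, not Spark's actual internals; the names here are illustrative): a primitive JVM `Int` has no null representation, so a sentinel like -1 must stand in when unboxing a null, whereas `Option[Int]` (or keeping the boxed `java.lang.Integer`) carries the nullness through.

```scala
// Standalone illustration of why a primitive Int field cannot hold null.
// This mirrors the sentinel approach in the quoted defaultValue snippet;
// the object and method names are hypothetical, not Spark's internals.
object NullDefaultDemo {
  // A sentinel default, as in the quoted defaultValue(JAVA_INT) => "-1".
  def defaultInt: Int = -1

  // Unboxing a possibly-null Integer into a primitive loses nullness:
  // a null and a genuine -1 become indistinguishable.
  def unboxOrDefault(boxed: java.lang.Integer): Int =
    if (boxed == null) defaultInt else boxed.intValue()

  // Carrying the value as Option[Int] preserves the distinction.
  def toOption(boxed: java.lang.Integer): Option[Int] =
    Option(boxed).map(_.intValue())

  def main(args: Array[String]): Unit = {
    val missing: java.lang.Integer = null
    println(unboxOrDefault(missing)) // prints -1: null collapsed to the sentinel
    println(toOption(missing))       // prints None: nullness survives
  }
}
```

This is why Koert sees -1 rather than null above: once the value lands in a primitive slot, the sentinel is all that is left.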
I think it is by design that FileInputDStream doesn't report input info:
FileInputDStream has no event/record concept (it is file based), so it is
hard to define how to correctly report the input info. Currently, input info
reporting is supported for all receiver-based InputDStreams.
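To make that distinction concrete, here is a minimal standalone Scala sketch (the type and method names are illustrative, not Spark's actual API): a receiver-based source sees individual records and can count them per batch, while a file-based source only discovers new paths, so there is no obvious record count to report.

```scala
// Minimal sketch (not Spark's API) of why per-batch record counts are
// natural for receiver-based sources but ill-defined for file-based ones.
object InputInfoSketch {
  sealed trait BatchInput
  // A receiver hands the system individual records as they arrive.
  case class ReceivedRecords(records: Seq[String]) extends BatchInput
  // A file-based source only discovers new paths; their contents are opaque
  // until the job actually reads them.
  case class NewFiles(paths: Seq[String]) extends BatchInput

  // Hypothetical "input info": a record count, when one can be defined.
  def inputInfo(batch: BatchInput): Option[Long] = batch match {
    case ReceivedRecords(rs) => Some(rs.size.toLong) // records are first-class
    case NewFiles(_)         => None                 // no record concept to count
  }
}
```

Under this framing, the Streaming UI showing 0 events for a file stream is the `None` case: nothing is wrong with the job, there is simply no record count defined at input time.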
Hi,
I'm trying to run a simple Spark Streaming application with file streaming,
and it's working properly, but when I try to monitor the number of events in
the Streaming UI it shows 0. Is this an issue, and are there any plans to
fix it? Attached is a screenshot of what the UI shows.