Re: PSA: Java 8 unidoc build

2017-02-06 Thread Hyukjin Kwon
(One more: https://github.com/apache/spark/pull/15999. This describes some
more cases that can easily be missed.)

On 7 Feb 2017 9:21 a.m., "Hyukjin Kwon"  wrote:

> Oh, Joseph, thanks. It is good to announce this on the dev mailing list.
>
> Let me leave another PR here for reference,
>
> https://github.com/apache/spark/pull/16013
>
> and the JIRA you kindly opened,
>
> https://issues.apache.org/jira/browse/SPARK-18692
>
>
> On 7 Feb 2017 9:13 a.m., "Joseph Bradley"  wrote:
>
> Public service announcement: Our doc build has worked with Java 8 for
> brief time periods, but new changes keep breaking the Java 8 unidoc build.
> Please be aware of this, and try to test doc changes with Java 8!  In
> general, it is stricter than Java 7 for docs.
>
> A shout out to @HyukjinKwon and others who have made many fixes for this!
> See these sample PRs for some issues causing failures (especially around
> links):
> https://github.com/apache/spark/pull/16741
> https://github.com/apache/spark/pull/16604
>
> Thanks,
> Joseph
>
> --
>
> Joseph Bradley
>
> Software Engineer - Machine Learning
>
> Databricks, Inc.
>
>
>
>


Re: PSA: Java 8 unidoc build

2017-02-06 Thread Hyukjin Kwon
Oh, Joseph, thanks. It is good to announce this on the dev mailing list.

Let me leave another PR here for reference,

https://github.com/apache/spark/pull/16013

and the JIRA you kindly opened,

https://issues.apache.org/jira/browse/SPARK-18692


On 7 Feb 2017 9:13 a.m., "Joseph Bradley"  wrote:

Public service announcement: Our doc build has worked with Java 8 for brief
time periods, but new changes keep breaking the Java 8 unidoc build.
Please be aware of this, and try to test doc changes with Java 8!  In
general, it is stricter than Java 7 for docs.

A shout out to @HyukjinKwon and others who have made many fixes for this!
See these sample PRs for some issues causing failures (especially around
links):
https://github.com/apache/spark/pull/16741
https://github.com/apache/spark/pull/16604

Thanks,
Joseph

-- 

Joseph Bradley

Software Engineer - Machine Learning

Databricks, Inc.



PSA: Java 8 unidoc build

2017-02-06 Thread Joseph Bradley
Public service announcement: Our doc build has worked with Java 8 for brief
time periods, but new changes keep breaking the Java 8 unidoc build.
Please be aware of this, and try to test doc changes with Java 8!  In
general, it is stricter than Java 7 for docs.

A shout out to @HyukjinKwon and others who have made many fixes for this!
See these sample PRs for some issues causing failures (especially around
links):
https://github.com/apache/spark/pull/16741
https://github.com/apache/spark/pull/16604
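
As an illustration of what "stricter" means here (class and method names are
invented for the example, not taken from Spark), Java 8's doclint fails the
javadoc build on problems that Java 7 silently ignored, such as malformed HTML
and dead {@code @link} references. A doc comment that passes Java 8's checks
looks roughly like this:

```java
/** Illustrative sketch (not Spark code): Javadoc that passes Java 8's doclint. */
public class DoclintExample {

    /**
     * Java 8's doclint rejects malformed HTML (for example an unclosed
     * &lt;p&gt; tag) and references to symbols that do not exist; Java 7's
     * javadoc let both slide. This comment closes its tags and links only
     * to a real symbol, {@link java.lang.String}.
     *
     * @param n an input value
     * @return the input, unchanged
     */
    public static int identity(int n) {
        return n;
    }

    public static void main(String[] args) {
        // The method itself is trivial; the point is the doc comment above.
        System.out.println(identity(42));
    }
}
```

Running `javadoc` from a Java 8 JDK over a file with an unclosed tag or a
broken link reports an error and a non-zero exit status, which is what breaks
the unidoc build.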

Thanks,
Joseph

-- 

Joseph Bradley

Software Engineer - Machine Learning

Databricks, Inc.



[SPARK-19405] Looking for committer to review kinesis-asl PR

2017-02-06 Thread Budde, Adam
Hey all,

Apologies for spamming this to the entire dev list, but I haven’t been able to 
find reviewers for a PR I have open to add new authorization options to 
KinesisReceiver/KinesisUtils:

https://github.com/apache/spark/pull/16744

The PR merges cleanly and passes all tests. @srowen gave it an initial look 
over but we’re still looking for someone who can review the substance of the 
code. Thanks in advance!

Adam




[STREAMING] Looking for committer to review kinesis-asl PR

2017-02-06 Thread Adam Budde
Hey all,

Apologies for spamming the entire dev list, but I'm having some difficulty
finding reviewers for a PR I have open to add additional authorization
options to KinesisUtils/KinesisReceiver:

https://github.com/apache/spark/pull/16744

The change passes all tests and merges cleanly. @srowen gave it an initial
look over but we are still looking for someone to review the substance of
the commit. Thanks in advance!

Adam



Re: [SQL]SQLParser fails to resolve nested CASE WHEN statement with parentheses in Spark 2.x

2017-02-06 Thread Herman van Hövell tot Westerflier
Hi Stan,

I have opened https://github.com/apache/spark/pull/16821 to fix this.

On Mon, Feb 6, 2017 at 1:41 PM, StanZhai  wrote:

> Hi all,
>
> SQLParser fails to resolve nested CASE WHEN statement like this:
>
> select case when
>   (1) +
>   case when 1>0 then 1 else 0 end = 2
> then 1 else 0 end
> from tb
>
> ==== Exception ====
> Exception in thread "main"
> org.apache.spark.sql.catalyst.parser.ParseException:
> mismatched input 'then' expecting {'.', '[', 'OR', 'AND', 'IN', NOT,
> 'BETWEEN', 'LIKE', RLIKE, 'IS', 'WHEN', EQ, '<=>', '<>', '!=', '<', LTE,
> '>', GTE, '+', '-', '*', '/', '%', 'DIV', '&', '|', '^'}(line 5, pos 0)
>
> == SQL ==
>
> select case when
>   (1) +
>   case when 1>0 then 1 else 0 end = 2
> then 1 else 0 end
> ^^^
> from tb
>
> But removing the parentheses works fine:
>
> select case when
>   1 +
>   case when 1>0 then 1 else 0 end = 2
> then 1 else 0 end
> from tb
>
> I've already filed a JIRA for this:
> https://issues.apache.org/jira/browse/SPARK-19472
> 
>
> Any help is greatly appreciated!
>
> Best,
> Stan
>
>
>
>
> --
> View this message in context: http://apache-spark-developers-list.1001551.n3.nabble.com/SQL-SQLParser-fails-to-resolve-nested-CASE-WHEN-statement-with-parentheses-in-Spark-2-x-tp20867.html
> Sent from the Apache Spark Developers List mailing list archive at
> Nabble.com.
>
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>
>


-- 




Herman van Hövell

Software Engineer

Databricks Inc.

hvanhov...@databricks.com

+31 6 420 590 27

databricks.com



[SQL]SQLParser fails to resolve nested CASE WHEN statement with parentheses in Spark 2.x

2017-02-06 Thread StanZhai
Hi all,

SQLParser fails to resolve nested CASE WHEN statement like this:

select case when
  (1) +
  case when 1>0 then 1 else 0 end = 2
then 1 else 0 end
from tb

==== Exception ====
Exception in thread "main"
org.apache.spark.sql.catalyst.parser.ParseException: 
mismatched input 'then' expecting {'.', '[', 'OR', 'AND', 'IN', NOT,
'BETWEEN', 'LIKE', RLIKE, 'IS', 'WHEN', EQ, '<=>', '<>', '!=', '<', LTE,
'>', GTE, '+', '-', '*', '/', '%', 'DIV', '&', '|', '^'}(line 5, pos 0)

== SQL ==

select case when
  (1) +
  case when 1>0 then 1 else 0 end = 2
then 1 else 0 end
^^^
from tb

But removing the parentheses works fine:

select case when
  1 +
  case when 1>0 then 1 else 0 end = 2
then 1 else 0 end
from tb
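
For reference, the query itself is well-defined; mirroring its semantics in
plain Java (a sketch of the arithmetic only, not Spark code) shows the
expected result is 1:

```java
public class CaseWhenCheck {

    // Mirrors: CASE WHEN (1) + CASE WHEN 1>0 THEN 1 ELSE 0 END = 2 THEN 1 ELSE 0 END
    static int evaluate() {
        int inner = (1 > 0) ? 1 : 0;       // inner CASE WHEN evaluates to 1
        return ((1) + inner == 2) ? 1 : 0; // (1) + 1 = 2, so outer CASE yields 1
    }

    public static void main(String[] args) {
        System.out.println(evaluate()); // prints 1
    }
}
```

So the parenthesized and unparenthesized forms should return the same value;
only the parser disagrees.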

I've already filed a JIRA for this: 
https://issues.apache.org/jira/browse/SPARK-19472
  

Any help is greatly appreciated!

Best,
Stan




--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/SQL-SQLParser-fails-to-resolve-nested-CASE-WHEN-statement-with-parentheses-in-Spark-2-x-tp20867.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: [SQL]A confusing NullPointerException when creating table using Spark2.1.0

2017-02-06 Thread StanZhai
This issue has been fixed by  https://github.com/apache/spark/pull/16820
  .



--
View this message in context: 
http://apache-spark-developers-list.1001551.n3.nabble.com/SQL-A-confusing-NullPointerException-when-creating-table-using-Spark2-1-0-tp20851p20866.html
Sent from the Apache Spark Developers List mailing list archive at Nabble.com.

-
To unsubscribe e-mail: dev-unsubscr...@spark.apache.org



Re: FileNotFoundException, while file is actually available

2017-02-06 Thread censj
If you deploy in YARN mode, you can run yarn logs -applicationId <yourApplicationId>
to get the full YARN logs in detail, then look through them for the error info.
===
Name: cen sujun
Mobile: 13067874572
Mail: ce...@lotuseed.com

> On 6 Feb 2017, at 05:33, Evgenii Morozov  wrote:
> 
> Hi, 
> 
> I see a lot of exceptions like the following during our machine learning 
> pipeline calculation. Spark version 2.0.2.
> Sometimes it’s just few executors that fails with this message, but the job 
> is successful. 
> 
> I’d appreciate any hint you might have.
> Thank you.
> 
> 2017-02-05 07:56:47.022 [task-result-getter-1] WARN  
> o.a.spark.scheduler.TaskSetManager - Lost task 0.0 in stage 151558.0 (TID 
> 993070, 10.61.12.43):
> java.io.FileNotFoundException: File file:/path/to/file does not exist
>at 
> org.apache.hadoop.fs.RawLocalFileSystem.deprecatedGetFileStatus(RawLocalFileSystem.java:611)
>at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileLinkStatusInternal(RawLocalFileSystem.java:824)
>at 
> org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:601)
>at 
> org.apache.hadoop.fs.FilterFileSystem.getFileStatus(FilterFileSystem.java:421)
>at 
> org.apache.hadoop.fs.ChecksumFileSystem$ChecksumFSInputChecker.<init>(ChecksumFileSystem.java:142)
>at 
> org.apache.hadoop.fs.ChecksumFileSystem.open(ChecksumFileSystem.java:346)
>at org.apache.hadoop.fs.FileSystem.open(FileSystem.java:769)
>at 
> org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:109)
>at 
> org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
>at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:245)
>at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:208)
>at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:101)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:319)
>at org.apache.spark.rdd.RDD.iterator(RDD.scala:283)
>at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:79)
>at 
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:47)
>at org.apache.spark.scheduler.Task.run(Task.scala:86)
>at 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:274)
>at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>at java.lang.Thread.run(Thread.java:745)
> 
> 
> -
> To unsubscribe e-mail: dev-unsubscr...@spark.apache.org
>