[jira] [Reopened] (PIG-4164) After Pig job finishes, Pig client spends too much time retrying to connect to AM

2014-09-26 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai reopened PIG-4164:
-

Rolling back the patch temporarily since it breaks local mode.

> After Pig job finishes, Pig client spends too much time retrying to connect to AM
> ---
>
> Key: PIG-4164
> URL: https://issues.apache.org/jira/browse/PIG-4164
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4164-0.patch
>
>
> For some scripts, after the job finishes, Pig spends a lot of time trying to connect to the AM 
> before being redirected to the JobHistoryServer. Here is the message we saw:
> {code}
> 2014-09-10 15:13:55,370 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 0 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:56,371 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 1 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,372 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 2 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,476 [main] INFO  
> org.apache.hadoop.mapred.ClientServiceDelegate - Application state is 
> completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PIG-4204) UDFContext needs concept of Hadoop 2 Application Master

2014-09-26 Thread Akihiro Matsukawa (JIRA)
Akihiro Matsukawa created PIG-4204:
--

 Summary: UDFContext needs concept of Hadoop 2 Application Master
 Key: PIG-4204
 URL: https://issues.apache.org/jira/browse/PIG-4204
 Project: Pig
  Issue Type: Improvement
Reporter: Akihiro Matsukawa
Priority: Minor


Because Pig's UDFs are instantiated both on the client and in the 
backend, many UDFs rely on the UDFContext.isFrontend() method to determine 
their behavior.

This distinction worked fine in Hadoop 1, but in Hadoop 2 the UDF can now be 
instantiated in a third environment, the Application Master. This is an 
in-between environment where we are neither on the client nor on a worker node.

For example, in Hadoop 2.4, split computation was pushed to the AM 
(MAPREDUCE-207). If a loader needs to do some initialization that depends on 
UDFContext.isFrontend, its behavior on the AM is undefined (currently it 
performs the backend action, while the AM might not have the full information a 
task node would, causing breakage).

Unfortunately I've found no easy way to check that we are in the AM rather 
than in a map/reduce task, but I'm filing this ticket in case others have 
insights.
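
For illustration, a minimal sketch (not from this ticket) of the client/backend 
pattern described above; the class name, property key and setup logic are 
hypothetical, while UDFContext.getUDFContext(), isFrontend() and 
getUDFProperties() are the actual Pig APIs being discussed. On the AM, 
isFrontend() returns false, so the backend branch runs even though the 
task-local information it assumes may be missing.

{code}
// Hypothetical UDF showing the frontend/backend split discussed above.
// Only UDFContext.getUDFContext(), isFrontend() and getUDFProperties() are
// real Pig APIs; the class name, property key and setup logic are made up.
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;
import org.apache.pig.impl.util.UDFContext;

public class MyUdf extends EvalFunc<String> {

    public MyUdf() {
        UDFContext ctx = UDFContext.getUDFContext();
        if (ctx.isFrontend()) {
            // Client side: plan-time setup, e.g. stash settings for the backend.
            ctx.getUDFProperties(MyUdf.class).setProperty("my.udf.mode", "planned");
        } else {
            // Backend side: task-time setup. In Hadoop 2 this branch also runs
            // on the AM (e.g. during split computation), where the task-local
            // information it assumes may not be available.
        }
    }

    @Override
    public String exec(Tuple input) throws IOException {
        return (input == null || input.size() == 0) ? null : String.valueOf(input.get(0));
    }
}
{code}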



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: Pig-trunk #1666

2014-09-26 Thread Apache Jenkins Server
See 

Changes:

[daijy] PIG-4176: Fix tez e2e test Bloom_[1-3]

[daijy] PIG-4164: After Pig job finishes, Pig client spends too much time retrying to 
connect to AM

--
[...truncated 104 lines...]
   [javacc] File "ParseException.java" does not exist.  Will create one.
   [javacc] File "Token.java" does not exist.  Will create one.
   [javacc] File "JavaCharStream.java" does not exist.  Will create one.
   [javacc] Parser generated successfully.
   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] Warning: Lookahead adequacy checking not being performed since 
option LOOKAHEAD is more than 1.  Set option FORCE_LA_CHECK to true to force 
checking.
   [javacc] File "TokenMgrError.java" does not exist.  Will create one.
   [javacc] File "ParseException.java" does not exist.  Will create one.
   [javacc] File "Token.java" does not exist.  Will create one.
   [javacc] File "JavaCharStream.java" does not exist.  Will create one.
   [javacc] Parser generated with 0 errors and 1 warnings.
   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] File "TokenMgrError.java" is being rebuilt.
   [javacc] File "ParseException.java" is being rebuilt.
   [javacc] File "Token.java" is being rebuilt.
   [javacc] File "JavaCharStream.java" is being rebuilt.
   [javacc] Parser generated successfully.
   [jjtree] Java Compiler Compiler Version 4.2 (Tree Builder)
   [jjtree] (type "jjtree" with no arguments for help)
   [jjtree] Reading from file 

 . . .
   [jjtree] File "Node.java" does not exist.  Will create one.
   [jjtree] File "SimpleNode.java" does not exist.  Will create one.
   [jjtree] File "DOTParserTreeConstants.java" does not exist.  Will create one.
   [jjtree] File "JJTDOTParserState.java" does not exist.  Will create one.
   [jjtree] Annotated grammar generated successfully in 

   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] File "TokenMgrError.java" does not exist.  Will create one.
   [javacc] File "ParseException.java" does not exist.  Will create one.
   [javacc] File "Token.java" does not exist.  Will create one.
   [javacc] File "SimpleCharStream.java" does not exist.  Will create one.
   [javacc] Parser generated successfully.

prepare:
[mkdir] Created dir: 


genLexer:

genParser:

genTreeParser:

gen:

compile:
 [echo] *** Building Main Sources ***
 [echo] *** To compile with all warnings enabled, supply -Dall.warnings=1 
on command line ***
 [echo] *** Else, you will only be warned about deprecations ***
[javac] Compiling 972 source files to 

[javac] 
/home/jenkins/.ivy2/cache/org.apache.hbase/hbase-common/jars/hbase-common-0.96.0-hadoop2.jar(org/apache/hadoop/hbase/io/ImmutableBytesWritable.class):
 warning: Cannot find annotation method 'value()' in type 'SuppressWarnings': 
class file for edu.umd.cs.findbugs.annotations.SuppressWarnings not found
[javac] 
/home/jenkins/.ivy2/cache/org.apache.hbase/hbase-common/jars/hbase-common-0.96.0-hadoop2.jar(org/apache/hadoop/hbase/io/ImmutableBytesWritable.class):
 warning: Cannot find annotation method 'justification()' in type 
'SuppressWarnings'
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.
[javac] 2 warnings
 [copy] Copying 1 file to 

 [copy] Copying 1 file to 

 [copy] Copying 2 files to 

 [copy] Copying 2 files to 


ivy-buildJar:

jar:
 [echo] svnString 1627905
  [jar] Building jar: 


[jira] [Commented] (PIG-4178) HCatDDL_[1-3] fail on Windows

2014-09-26 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4178?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150109#comment-14150109
 ] 

Rohini Palaniswamy commented on PIG-4178:
-

+1

> HCatDDL_[1-3] fail on Windows
> -
>
> Key: PIG-4178
> URL: https://issues.apache.org/jira/browse/PIG-4178
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4178-1.patch
>
>
> Pig fails to invoke "python hcat.py", which is the supposed way to invoke 
> hcat on Windows.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4182) e2e tests Scripting_[1-12] fail on Windows

2014-09-26 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4182?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150102#comment-14150102
 ] 

Rohini Palaniswamy commented on PIG-4182:
-

Wouldn't the Windows file name start with a drive letter (C:\\somedir\somefile)? 
Earlier it was removing the ':' after the drive letter on Windows and would also 
remove the '/' in absolute paths on Linux. I don't understand what case the new 
regex addresses. What is it for?
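
To make the earlier behavior concrete, here is a toy sketch of the path 
normalization described in this comment (drop the drive-letter colon on Windows 
and the leading separator); this is not the regex from the patch, only an 
illustration of the described behavior.

{code}
// Illustration only: a toy version of the normalization described above,
// NOT the actual regex from the PIG-4182 patch.
public class PathNormalizeSketch {
    static String normalize(String path) {
        return path.replaceFirst("^[A-Za-z]:", "")  // drop a "D:" style drive prefix
                   .replaceFirst("^[/\\\\]", "");   // drop a leading / or \
    }

    public static void main(String[] args) {
        // prints: hdp\pig\scriptingudf.py
        System.out.println(normalize("D:\\hdp\\pig\\scriptingudf.py"));
        // prints: home/user/scriptingudf.py
        System.out.println(normalize("/home/user/scriptingudf.py"));
    }
}
{code}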

> e2e tests Scripting_[1-12] fail on Windows
> --
>
> Key: PIG-4182
> URL: https://issues.apache.org/jira/browse/PIG-4182
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4182-1.patch
>
>
> Error message:
> {code}
> 2014-09-11 12:37:56,622 [main] ERROR org.apache.pig.tools.pigstats.PigStats - 
> ERROR 0: org.apache.pig.backend.executionengine.ExecException: ERROR 2997: 
> Unable to recreate exception from backed error: 
> AttemptID:attempt_1410405156681_1228_m_00_3 Info:Error: 
> java.io.IOException: Deserialization error: could not instantiate 
> 'org.apache.pig.scripting.jython.JythonFunction' with arguments 
> '[D:\hdp\pig-0.14.0.2.2.0.0-1181\test\e2e\pig\testdist\libexec\python\scriptingudf.py,
>  square]'
>   at 
> org.apache.pig.impl.util.ObjectSerializer.deserialize(ObjectSerializer.java:62)
>   at 
> org.apache.pig.backend.hadoop.executionengine.mapReduceLayer.PigGenericMapBase.setup(PigGenericMapBase.java:180)
>   at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:142)
>   at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:784)
>   at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
>   at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:415)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
>   at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
> Caused by: java.lang.RuntimeException: could not instantiate 
> 'org.apache.pig.scripting.jython.JythonFunction' with arguments 
> '[D:\hdp\pig-0.14.0.2.2.0.0-1181\test\e2e\pig\testdist\libexec\python\scriptingudf.py,
>  square]'
>   at 
> org.apache.pig.impl.PigContext.instantiateFuncFromSpec(PigContext.java:778)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.instantiateFunc(POUserFunc.java:124)
>   at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.expressionOperators.POUserFunc.readObject(POUserFunc.java:584)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1891)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1348)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>   at java.util.ArrayList.readObject(ArrayList.java:733)
>   at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1891)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
>   at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1348)
>   at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
>   at java.util.HashMap.readObject(HashMap.java:1155)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:601)
>   at 
> java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:1004)
>   at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1891)
>   at 
> java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1796)
>   at java.io.ObjectInputStream.readObje

[jira] [Commented] (PIG-4180) e2e test Native_3 fails on Hadoop 2

2014-09-26 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150094#comment-14150094
 ] 

Rohini Palaniswamy commented on PIG-4180:
-

+1

> e2e test Native_3 fails on Hadoop 2
> --
>
> Key: PIG-4180
> URL: https://issues.apache.org/jira/browse/PIG-4180
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4180-1.patch
>
>
> The test harness should use hadoop-0.23.0-streaming.jar to run Native_3 on Hadoop 
> 2. The failure is seen on Windows; not sure why it does not manifest on Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4050) HadoopShims.getTaskReports() can cause OOM with Hadoop 2

2014-09-26 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4050?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150079#comment-14150079
 ] 

Daniel Dai commented on PIG-4050:
-

Patch LGTM. +1.

> HadoopShims.getTaskReports() can cause OOM with Hadoop 2
> 
>
> Key: PIG-4050
> URL: https://issues.apache.org/jira/browse/PIG-4050
> Project: Pig
>  Issue Type: Bug
>Reporter: Rohini Palaniswamy
>Assignee: Rohini Palaniswamy
> Fix For: 0.14.0
>
> Attachments: PIG-4050-1.patch, PIG-4050-2.patch
>
>
> Details in 
> https://issues.apache.org/jira/browse/PIG-4043?focusedCommentId=14046878&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14046878
>  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PIG-4176) Fix tez e2e test Bloom_[1-3]

2014-09-26 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai resolved PIG-4176.
-
  Resolution: Fixed
Hadoop Flags: Reviewed

Patch committed to trunk. Thanks Rohini for review!

> Fix tez e2e test Bloom_[1-3]
> 
>
> Key: PIG-4176
> URL: https://issues.apache.org/jira/browse/PIG-4176
> Project: Pig
>  Issue Type: Bug
>  Components: tez
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4176-1.patch
>
>
> Tests fail with the message:
> {code}
> : ], TaskAttempt 1 failed, info=[Error: Failure while 
> running task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: F: 
> Store(/user/pig/out/daijy-1410899730-nightly.conf/Bloom_1.out:org.apache.pig.builtin.PigStorage)
>  - scope-62 Operator Key: scope-62): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught 
> error from UDF: Bloom 
> [./tmp_daijy-1410899730-nightly.conf_mybloom_1/part-r-0 (No such file or 
> directory)]
> : at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:310)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.POStoreTez.getNextTuple(POStoreTez.java:113)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.PigProcessor.runPipeline(PigProcessor.java:319)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.PigProcessor.run(PigProcessor.java:198)
> : at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> : at java.security.AccessController.doPrivileged(Native 
> Method)
> : at javax.security.auth.Subject.doAs(Subject.java:394)
> : at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
> : at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> : at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138)
> : at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> : at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> : at java.lang.Thread.run(Thread.java:695)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PIG-4173) Move to Spark 1.x

2014-09-26 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated PIG-4173:
--
Attachment: PIG-4173_2.patch

Adding javax.servlet dependency

> Move to Spark 1.x
> -
>
> Key: PIG-4173
> URL: https://issues.apache.org/jira/browse/PIG-4173
> Project: Pig
>  Issue Type: Sub-task
>  Components: spark
>Reporter: bc Wong
>Assignee: Richard Ding
> Attachments: PIG-4173.patch, PIG-4173_2.patch
>
>
> The Spark branch is using Spark 0.9: 
> https://github.com/apache/pig/blob/spark/ivy.xml#L438. We should probably 
> switch to Spark 1.x asap, due to Spark interface changes since 1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4176) Fix tez e2e test Bloom_[1-3]

2014-09-26 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150014#comment-14150014
 ] 

Rohini Palaniswamy commented on PIG-4176:
-

+1

> Fix tez e2e test Bloom_[1-3]
> 
>
> Key: PIG-4176
> URL: https://issues.apache.org/jira/browse/PIG-4176
> Project: Pig
>  Issue Type: Bug
>  Components: tez
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4176-1.patch
>
>
> Tests fail with the message:
> {code}
> : ], TaskAttempt 1 failed, info=[Error: Failure while 
> running task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: F: 
> Store(/user/pig/out/daijy-1410899730-nightly.conf/Bloom_1.out:org.apache.pig.builtin.PigStorage)
>  - scope-62 Operator Key: scope-62): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught 
> error from UDF: Bloom 
> [./tmp_daijy-1410899730-nightly.conf_mybloom_1/part-r-0 (No such file or 
> directory)]
> : at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:310)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.POStoreTez.getNextTuple(POStoreTez.java:113)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.PigProcessor.runPipeline(PigProcessor.java:319)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.PigProcessor.run(PigProcessor.java:198)
> : at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> : at java.security.AccessController.doPrivileged(Native 
> Method)
> : at javax.security.auth.Subject.doAs(Subject.java:394)
> : at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
> : at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> : at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138)
> : at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> : at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> : at java.lang.Thread.run(Thread.java:695)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4179) e2e test Limit_2 fails on some platforms

2014-09-26 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150011#comment-14150011
 ] 

Rohini Palaniswamy commented on PIG-4179:
-

If the Load statement is not loading the fields as numbers, then how is the sorted 
order numeric?

> e2e test Limit_2 fails on some platforms
> --
>
> Key: PIG-4179
> URL: https://issues.apache.org/jira/browse/PIG-4179
> Project: Pig
>  Issue Type: Bug
>  Components: e2e harness
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4179-1.patch
>
>
> E.g., on CentOS. The sort syntax is wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (PIG-4164) After Pig job finishes, Pig client spends too much time retrying to connect to AM

2014-09-26 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai resolved PIG-4164.
-
  Resolution: Fixed
Hadoop Flags: Reviewed

Patch committed to trunk. Thanks Rohini for review!

> After Pig job finishes, Pig client spends too much time retrying to connect to AM
> ---
>
> Key: PIG-4164
> URL: https://issues.apache.org/jira/browse/PIG-4164
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4164-0.patch
>
>
> For some scripts, after the job finishes, Pig spends a lot of time trying to connect to the AM 
> before being redirected to the JobHistoryServer. Here is the message we saw:
> {code}
> 2014-09-10 15:13:55,370 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 0 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:56,371 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 1 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,372 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 2 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,476 [main] INFO  
> org.apache.hadoop.mapred.ClientServiceDelegate - Application state is 
> completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PIG-4173) Move to Spark 1.x

2014-09-26 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding updated PIG-4173:
--
Attachment: PIG-4173.patch

Attaching the initial patch to upgrade Spark to 1.1.0.

I made some local changes so that the patch now compiles with the latest Spark 
jar.

I have a question though: why don't we use JavaRDD throughout the code? Is this 
due to performance concerns?

> Move to Spark 1.x
> -
>
> Key: PIG-4173
> URL: https://issues.apache.org/jira/browse/PIG-4173
> Project: Pig
>  Issue Type: Sub-task
>  Components: spark
>Reporter: bc Wong
>Assignee: Richard Ding
> Attachments: PIG-4173.patch
>
>
> The Spark branch is using Spark 0.9: 
> https://github.com/apache/pig/blob/spark/ivy.xml#L438. We should probably 
> switch to Spark 1.x asap, due to Spark interface changes since 1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (PIG-4173) Move to Spark 1.x

2014-09-26 Thread Richard Ding (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4173?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Richard Ding reassigned PIG-4173:
-

Assignee: Richard Ding

> Move to Spark 1.x
> -
>
> Key: PIG-4173
> URL: https://issues.apache.org/jira/browse/PIG-4173
> Project: Pig
>  Issue Type: Sub-task
>  Components: spark
>Reporter: bc Wong
>Assignee: Richard Ding
>
> The Spark branch is using Spark 0.9: 
> https://github.com/apache/pig/blob/spark/ivy.xml#L438. We should probably 
> switch to Spark 1.x asap, due to Spark interface changes since 1.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4164) After Pig job finishes, Pig client spends too much time retrying to connect to AM

2014-09-26 Thread Rohini Palaniswamy (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149863#comment-14149863
 ] 

Rohini Palaniswamy commented on PIG-4164:
-

That's fine. +1.

> After Pig job finishes, Pig client spends too much time retrying to connect to AM
> ---
>
> Key: PIG-4164
> URL: https://issues.apache.org/jira/browse/PIG-4164
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4164-0.patch
>
>
> For some scripts, after the job finishes, Pig spends a lot of time trying to connect to the AM 
> before being redirected to the JobHistoryServer. Here is the message we saw:
> {code}
> 2014-09-10 15:13:55,370 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 0 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:56,371 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 1 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,372 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 2 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,476 [main] INFO  
> org.apache.hadoop.mapred.ClientServiceDelegate - Application state is 
> completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: need to use "ant jar -Dhadoopversion=23" to build for the e2e tests to work?

2014-09-26 Thread Daniel Dai
Most probably your HADOOP_HOME is not defined correctly. Do you have
"-Dharness.hadoop.home" on the command line?

On Fri, Sep 26, 2014 at 8:21 AM, Rohini Palaniswamy
 wrote:
> No. Seems like a bug introduced after breaking down the fat jar. Please
> file a jira.
>
> On Thu, Sep 25, 2014 at 10:56 PM, Zhang, Liyun 
> wrote:
>
>> Hi all,
>>   I'm using the e2e tests of Pig (
>> https://cwiki.apache.org/confluence/display/PIG/HowToTest#HowToTest-HowtoRune2eTests).
>> My Hadoop environment is Hadoop 1.
>> Because I use Hadoop 1, I use "ant jar" to build.
>>
>> When I execute the following command:
>>
>> ant -Dharness.old.pig=old_pig -Dharness.cluster.conf=hadoop_conf_dir
>> -Dharness.cluster.bin=hadoop_script test-e2e-deploy
>>
>> the following error is found:
>> 67 Going to run /home/zly/prj/oss/pig/test/e2e/pig/../../../bin/pig -e
>> mkdir /user/pig/out/root-1411632015-nightly.conf/
>> 168 Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar',
>> and try again
>>
>>
>> It seems that if I use "ant jar -Dhadoopversion=23" to build, then
>> test-e2e-deploy can succeed.
>>
>> Can anyone tell me whether my understanding is right?
>>
>>
>> Best Regards
>> Zhang,Liyun
>>
>>



[jira] [Commented] (PIG-4164) After Pig job finishes, Pig client spends too much time retrying to connect to AM

2014-09-26 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149746#comment-14149746
 ] 

Daniel Dai commented on PIG-4164:
-

I checked the code again and I cannot find a good answer. Ideally we could create a 
Cluster object outside the loop; however, Cluster is Hadoop 2 only, so it could only go 
into the shims, which would inevitably make the code much more complex. Considering that 
the number of Cluster objects created is linear in the number of jobs, it does not seem 
too bad. I would like to check in the code as is; does that sound ok?
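
For context, a rough sketch of the alternative being weighed here (one shared 
org.apache.hadoop.mapreduce.Cluster for all job lookups instead of one per job); 
Cluster, Job and JobID are real Hadoop 2 APIs, while the method, loop and job-id 
list are hypothetical, and this is not the committed patch.

{code}
// Sketch of the "create Cluster outside the loop" alternative discussed above.
// Cluster, Job and JobID are real Hadoop 2 (org.apache.hadoop.mapreduce) APIs;
// the surrounding method and the jobIds list are hypothetical.
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Cluster;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.JobID;

public class JobStatusSketch {
    static void printStates(Configuration conf, List<String> jobIds)
            throws IOException, InterruptedException {
        // One Cluster (and one client connection setup) shared by all lookups,
        // rather than a new Cluster per iteration.
        Cluster cluster = new Cluster(conf);
        try {
            for (String id : jobIds) {
                Job job = cluster.getJob(JobID.forName(id));
                if (job != null) {
                    System.out.println(id + " -> " + job.getJobState());
                }
            }
        } finally {
            cluster.close();
        }
    }
}
{code}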

> After Pig job finishes, Pig client spends too much time retrying to connect to AM
> ---
>
> Key: PIG-4164
> URL: https://issues.apache.org/jira/browse/PIG-4164
> Project: Pig
>  Issue Type: Bug
>  Components: impl
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4164-0.patch
>
>
> For some scripts, after the job finishes, Pig spends a lot of time trying to connect to the AM 
> before being redirected to the JobHistoryServer. Here is the message we saw:
> {code}
> 2014-09-10 15:13:55,370 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 0 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:56,371 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 1 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,372 [main] INFO  org.apache.hadoop.ipc.Client - Retrying 
> connect to server: daijymacpro-2.local/10.11.2.30:55223. Already tried 2 
> time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, 
> sleepTime=1000 MILLISECONDS)
> 2014-09-10 15:13:57,476 [main] INFO  
> org.apache.hadoop.mapred.ClientServiceDelegate - Application state is 
> completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PIG-4203) Implement sparse JOIN on tables using bloom filter

2014-09-26 Thread Daniel Dai (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daniel Dai updated PIG-4203:

Assignee: Ashwin Shankar

> Implement sparse JOIN on tables using bloom filter
> --
>
> Key: PIG-4203
> URL: https://issues.apache.org/jira/browse/PIG-4203
> Project: Pig
>  Issue Type: New Feature
>Reporter: Ashwin Shankar
>Assignee: Ashwin Shankar
>
> Currently, when users want to join tables where one of the tables is 
> sparse (i.e., only a small percentage of records match during the join), they can 
> use bloom filters to make the join efficient (see PIG-2328).
> However, this involves writing some code and calling a couple of UDFs: 
> BuildBloom and Bloom. 
> It would be great if building the bloom filters in these cases were done 
> automatically, i.e., Pig automatically inserts them into the MR plan when users 
> specify some keyword.
> I'll call this keyword "sparse" if no one has any objections.
> E.g.: C = JOIN A BY a1, B BY b1 USING 'sparse';
> The assumption here is that the table mentioned on the right side of the join is 
> the smaller table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4176) Fix tez e2e test Bloom_[1-3]

2014-09-26 Thread Daniel Dai (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149687#comment-14149687
 ] 

Daniel Dai commented on PIG-4176:
-

bump

> Fix tez e2e test Bloom_[1-3]
> 
>
> Key: PIG-4176
> URL: https://issues.apache.org/jira/browse/PIG-4176
> Project: Pig
>  Issue Type: Bug
>  Components: tez
>Reporter: Daniel Dai
>Assignee: Daniel Dai
> Fix For: 0.14.0
>
> Attachments: PIG-4176-1.patch
>
>
> Tests fail with the message:
> {code}
> : ], TaskAttempt 1 failed, info=[Error: Failure while 
> running task:org.apache.pig.backend.executionengine.ExecException: ERROR 0: 
> Exception while executing (Name: F: 
> Store(/user/pig/out/daijy-1410899730-nightly.conf/Bloom_1.out:org.apache.pig.builtin.PigStorage)
>  - scope-62 Operator Key: scope-62): 
> org.apache.pig.backend.executionengine.ExecException: ERROR 2078: Caught 
> error from UDF: Bloom 
> [./tmp_daijy-1410899730-nightly.conf_mybloom_1/part-r-0 (No such file or 
> directory)]
> : at 
> org.apache.pig.backend.hadoop.executionengine.physicalLayer.PhysicalOperator.processInput(PhysicalOperator.java:310)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.POStoreTez.getNextTuple(POStoreTez.java:113)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.PigProcessor.runPipeline(PigProcessor.java:319)
> : at 
> org.apache.pig.backend.hadoop.executionengine.tez.PigProcessor.run(PigProcessor.java:198)
> : at 
> org.apache.tez.runtime.LogicalIOProcessorRuntimeTask.run(LogicalIOProcessorRuntimeTask.java:324)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:180)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable$1.run(TezTaskRunner.java:172)
> : at java.security.AccessController.doPrivileged(Native 
> Method)
> : at javax.security.auth.Subject.doAs(Subject.java:394)
> : at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:172)
> : at 
> org.apache.tez.runtime.task.TezTaskRunner$TaskRunnerCallable.call(TezTaskRunner.java:167)
> : at 
> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> : at 
> java.util.concurrent.FutureTask.run(FutureTask.java:138)
> : at 
> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
> : at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
> : at java.lang.Thread.run(Thread.java:695)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-4203) Implement sparse JOIN on tables using bloom filter

2014-09-26 Thread Ashwin Shankar (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149677#comment-14149677
 ] 

Ashwin Shankar commented on PIG-4203:
-

Working on this.

> Implement sparse JOIN on tables using bloom filter
> --
>
> Key: PIG-4203
> URL: https://issues.apache.org/jira/browse/PIG-4203
> Project: Pig
>  Issue Type: New Feature
>Reporter: Ashwin Shankar
>
> Currently, when users want to join tables where one of the tables is 
> sparse (i.e., only a small percentage of records match during the join), they can 
> use bloom filters to make the join efficient (see PIG-2328).
> However, this involves writing some code and calling a couple of UDFs: 
> BuildBloom and Bloom. 
> It would be great if building the bloom filters in these cases were done 
> automatically, i.e., Pig automatically inserts them into the MR plan when users 
> specify some keyword.
> I'll call this keyword "sparse" if no one has any objections.
> E.g.: C = JOIN A BY a1, B BY b1 USING 'sparse';
> The assumption here is that the table mentioned on the right side of the join is 
> the smaller table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (PIG-4203) Implement sparse JOIN on tables using bloom filter

2014-09-26 Thread Ashwin Shankar (JIRA)

 [ 
https://issues.apache.org/jira/browse/PIG-4203?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashwin Shankar updated PIG-4203:

Summary: Implement sparse JOIN on tables using bloom filter  (was: 
Implement sparse JOIN on table using bloom filter)

> Implement sparse JOIN on tables using bloom filter
> --
>
> Key: PIG-4203
> URL: https://issues.apache.org/jira/browse/PIG-4203
> Project: Pig
>  Issue Type: New Feature
>Reporter: Ashwin Shankar
>
> Currently, when users want to join tables where one of the tables is 
> sparse (i.e., only a small percentage of records match during the join), they can 
> use bloom filters to make the join efficient (see PIG-2328).
> However, this involves writing some code and calling a couple of UDFs: 
> BuildBloom and Bloom. 
> It would be great if building the bloom filters in these cases were done 
> automatically, i.e., Pig automatically inserts them into the MR plan when users 
> specify some keyword.
> I'll call this keyword "sparse" if no one has any objections.
> E.g.: C = JOIN A BY a1, B BY b1 USING 'sparse';
> The assumption here is that the table mentioned on the right side of the join is 
> the smaller table.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (PIG-4203) Implement sparse JOIN on table using bloom filter

2014-09-26 Thread Ashwin Shankar (JIRA)
Ashwin Shankar created PIG-4203:
---

 Summary: Implement sparse JOIN on table using bloom filter
 Key: PIG-4203
 URL: https://issues.apache.org/jira/browse/PIG-4203
 Project: Pig
  Issue Type: New Feature
Reporter: Ashwin Shankar


Currently, when users want to join tables where one of the tables is 
sparse (i.e., only a small percentage of records match during the join), they can use 
bloom filters to make the join efficient (see PIG-2328).
However, this involves writing some code and calling a couple of UDFs: 
BuildBloom and Bloom. 
It would be great if building the bloom filters in these cases were done automatically, 
i.e., Pig automatically inserts them into the MR plan when users specify some 
keyword.
I'll call this keyword "sparse" if no one has any objections.
E.g.: C = JOIN A BY a1, B BY b1 USING 'sparse';

The assumption here is that the table mentioned on the right side of the join is 
the smaller table.
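
For readers unfamiliar with the manual approach mentioned above, a minimal Java 
sketch of the underlying idea (build a bloom filter from the small relation's 
keys, then drop non-candidate rows from the large relation before the join), 
using Hadoop's org.apache.hadoop.util.bloom classes, which, to my understanding, 
are what the BuildBloom/Bloom UDFs use under the hood; the key sets and filter 
sizing are made up, and this is not the proposed Pig-level implementation.

{code}
// Sketch of the bloom-filter pre-join idea described above. BloomFilter, Key
// and Hash are real Hadoop APIs; the data and sizing below are made up.
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.List;
import org.apache.hadoop.util.bloom.BloomFilter;
import org.apache.hadoop.util.bloom.Key;
import org.apache.hadoop.util.hash.Hash;

public class SparseJoinSketch {
    public static void main(String[] args) {
        // Join keys of the small relation B (the right side of the join).
        List<String> smallKeys = Arrays.asList("k1", "k7", "k42");
        // Join keys of the large relation A; only a few of them match B.
        List<String> largeKeys = Arrays.asList("k1", "x2", "y3", "k42", "z4");

        // Build the filter from the small side (vector size and hash count are
        // arbitrary here; BuildBloom derives them from the desired false-positive rate).
        BloomFilter filter = new BloomFilter(1024, 3, Hash.MURMUR_HASH);
        for (String k : smallKeys) {
            filter.add(new Key(k.getBytes(StandardCharsets.UTF_8)));
        }

        // On the large side, discard rows that cannot possibly join.
        for (String k : largeKeys) {
            if (filter.membershipTest(new Key(k.getBytes(StandardCharsets.UTF_8)))) {
                System.out.println("join candidate: " + k); // k1, k42 (plus rare false positives)
            }
        }
    }
}
{code}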



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (PIG-3996) Delete zebra from svn

2014-09-26 Thread Ankit Kamboj (JIRA)

[ 
https://issues.apache.org/jira/browse/PIG-3996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14149591#comment-14149591
 ] 

Ankit Kamboj commented on PIG-3996:
---

Thanks Daniel for the information!

> Delete zebra from svn
> -
>
> Key: PIG-3996
> URL: https://issues.apache.org/jira/browse/PIG-3996
> Project: Pig
>  Issue Type: Bug
>Reporter: Cheolsoo Park
>Assignee: Cheolsoo Park
> Fix For: 0.13.0, 0.14.0
>
> Attachments: PIG-3996-1.patch
>
>
> Zebra has been deprecated for a while. Let's delete dead code!



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: need to use "ant jar -Dhadoopversion=23" to build for the e2e tests to work?

2014-09-26 Thread Rohini Palaniswamy
No. Seems like a bug introduced after breaking down the fat jar. Please
file a jira.

On Thu, Sep 25, 2014 at 10:56 PM, Zhang, Liyun 
wrote:

> Hi all,
>   I'm using the e2e tests of Pig (
> https://cwiki.apache.org/confluence/display/PIG/HowToTest#HowToTest-HowtoRune2eTests).
> My Hadoop environment is Hadoop 1.
> Because I use Hadoop 1, I use "ant jar" to build.
>
> When I execute the following command:
>
> ant -Dharness.old.pig=old_pig -Dharness.cluster.conf=hadoop_conf_dir
> -Dharness.cluster.bin=hadoop_script test-e2e-deploy
>
> the following error is found:
> 67 Going to run /home/zly/prj/oss/pig/test/e2e/pig/../../../bin/pig -e
> mkdir /user/pig/out/root-1411632015-nightly.conf/
> 168 Cannot locate pig-core-h2.jar. do 'ant -Dhadoopversion=23 jar',
> and try again
>
>
> It seems that if I use "ant jar -Dhadoopversion=23" to build, then
> test-e2e-deploy can succeed.
>
> Can anyone tell me whether my understanding is right?
>
>
> Best Regards
> Zhang,Liyun
>
>


Build failed in Jenkins: Pig-trunk #1665

2014-09-26 Thread Apache Jenkins Server
See 

Changes:

[daijy] PIG-3870: STRSPLITTOBAG UDF

--
[...truncated 99 lines...]
[mkdir] Created dir: 

[mkdir] Created dir: 

[mkdir] Created dir: 

[mkdir] Created dir: 

 [move] Moving 1 file to 


cc-compile:
   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] File "TokenMgrError.java" does not exist.  Will create one.
   [javacc] File "ParseException.java" does not exist.  Will create one.
   [javacc] File "Token.java" does not exist.  Will create one.
   [javacc] File "JavaCharStream.java" does not exist.  Will create one.
   [javacc] Parser generated successfully.
   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] Warning: Lookahead adequacy checking not being performed since 
option LOOKAHEAD is more than 1.  Set option FORCE_LA_CHECK to true to force 
checking.
   [javacc] File "TokenMgrError.java" does not exist.  Will create one.
   [javacc] File "ParseException.java" does not exist.  Will create one.
   [javacc] File "Token.java" does not exist.  Will create one.
   [javacc] File "JavaCharStream.java" does not exist.  Will create one.
   [javacc] Parser generated with 0 errors and 1 warnings.
   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] File "TokenMgrError.java" is being rebuilt.
   [javacc] File "ParseException.java" is being rebuilt.
   [javacc] File "Token.java" is being rebuilt.
   [javacc] File "JavaCharStream.java" is being rebuilt.
   [javacc] Parser generated successfully.
   [jjtree] Java Compiler Compiler Version 4.2 (Tree Builder)
   [jjtree] (type "jjtree" with no arguments for help)
   [jjtree] Reading from file 

 . . .
   [jjtree] File "Node.java" does not exist.  Will create one.
   [jjtree] File "SimpleNode.java" does not exist.  Will create one.
   [jjtree] File "DOTParserTreeConstants.java" does not exist.  Will create one.
   [jjtree] File "JJTDOTParserState.java" does not exist.  Will create one.
   [jjtree] Annotated grammar generated successfully in 

   [javacc] Java Compiler Compiler Version 4.2 (Parser Generator)
   [javacc] (type "javacc" with no arguments for help)
   [javacc] Reading from file 

 . . .
   [javacc] File "TokenMgrError.java" does not exist.  Will create one.
   [javacc] File "ParseException.java" does not exist.  Will create one.
   [javacc] File "Token.java" does not exist.  Will create one.
   [javacc] File "SimpleCharStream.java" does not exist.  Will create one.
   [javacc] Parser generated successfully.

prepare:
[mkdir] Created dir: 


genLexer:

genParser:

genTreeParser:

gen:

compile:
 [echo] *** Building Main Sources ***
 [echo] *** To compile with all warnings enabled, supply -Dall.warnings=1 
on command line ***
 [echo] *** Else, you will only be warned about deprecations ***
[javac] Compiling 972 source files to 

[javac] 
/home/jenkins/.ivy2/cache/org.apache.hbase/hbase-common/jars/hbase-common-0.96.0-hadoop2.jar(org/apache/hadoop/hbase/io/ImmutableBytesWritable.class):
 warning: Cannot find annotation method 'value()' in type 'SuppressWarnings': 
class file for edu.umd.cs.findbugs.annotations.SuppressWarnings not found
[javac] 
/home/jenkins/.ivy2/cache/org.apache.hbase/hbase-common/jars/hbase-common-0.96.0-hadoop2.jar(org/apache/hadoop/hbase/io/ImmutableBytesWritable.class):
 warning: Cannot find annotation method 'justification()' in type 
'SuppressWarnings'
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint