-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/57317/#review169547
-----------------------------------------------------------



Review for page 2


src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/plans/PhyPlanVisitor.java
Lines 372-375 (original), 371-374 (patched)
<https://reviews.apache.org/r/57317/#comment241943>

    Can you remove the commented-out code?



src/org/apache/pig/backend/hadoop/executionengine/spark/JobGraphBuilder.java
Lines 108-116 (patched)
<https://reviews.apache.org/r/57317/#comment241944>

    Throw a new VisitorException instead.



src/org/apache/pig/backend/hadoop/executionengine/spark/JobGraphBuilder.java
Lines 195 (patched)
<https://reviews.apache.org/r/57317/#comment242048>

    Pass the whole exception instead of just e.getMessage() so the stack trace 
is preserved.



src/org/apache/pig/backend/hadoop/executionengine/spark/JobGraphBuilder.java
Lines 263 (patched)
<https://reviews.apache.org/r/57317/#comment242050>

    Can you rename this to successorPlan?



src/org/apache/pig/backend/hadoop/executionengine/spark/JobGraphBuilder.java
Lines 266 (patched)
<https://reviews.apache.org/r/57317/#comment242049>

    Shouldn't exception be thrown in this case?



src/org/apache/pig/backend/hadoop/executionengine/spark/JobGraphBuilder.java
Lines 326 (patched)
<https://reviews.apache.org/r/57317/#comment242051>

    If the assumption is that it will always be a LinkedHashSet, then declare 
the type as LinkedHashSet instead of Set.



src/org/apache/pig/backend/hadoop/executionengine/spark/JobMetricsListener.java
Lines 97 (patched)
<https://reviews.apache.org/r/57317/#comment242019>

    Why capture metrics at the task level? The Pig client might run out of 
memory when there are a lot of tasks.



src/org/apache/pig/backend/hadoop/executionengine/spark/JobMetricsListener.java
Lines 179 (patched)
<https://reviews.apache.org/r/57317/#comment242018>

    What is the point of wait() if you just return false afterwards? Shouldn't 
the wait be done in a loop until finishedJobIds.contains(jobId)?
    
    jobMetricsListener.waitForJobToEnd(jobID), which calls this method, does not 
even check the return value. Should the return type be changed to void?
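The wait-in-a-loop pattern the comment asks for could look like this minimal, self-contained sketch (class, field, and method names here are illustrative, not Pig's actual ones):

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: block until a job id is marked finished, waiting in a
// loop so spurious wakeups and early notifications are handled correctly.
public class JobEndWaiter {
    private final Set<Integer> finishedJobIds = new HashSet<>();

    // Called from the listener when Spark reports the job has ended.
    public synchronized void jobFinished(int jobId) {
        finishedJobIds.add(jobId);
        notifyAll();
    }

    // Loops on the condition instead of waiting once and returning a flag,
    // so there is no boolean return value for callers to ignore.
    public synchronized void waitForJobToEnd(int jobId) throws InterruptedException {
        while (!finishedJobIds.contains(jobId)) {
            wait();
        }
    }
}
```

The caller simply blocks until jobFinished() records the id; there is no return value to check, which matches the void return type suggested above.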



src/org/apache/pig/backend/hadoop/executionengine/spark/KryoSerializer.java
Lines 29-30 (patched)
<https://reviews.apache.org/r/57317/#comment242016>

    Do not use Hive classes.



src/org/apache/pig/backend/hadoop/executionengine/spark/KryoSerializer.java
Lines 58 (patched)
<https://reviews.apache.org/r/57317/#comment241961>

    serializeJobConf and deserialize do not use Kryo. Add these as general 
methods to Pig's ObjectSerializer instead. Use object serialization instead of 
jobConf.write(), and compress it similar to the other methods there to keep the 
size small, e.g.:
    
    public static byte[] serializeWithoutEncoding(Object obj) throws IOException {
        if (obj == null) {
            return new byte[0];
        }
        try {
            ByteArrayOutputStream serialObj = new ByteArrayOutputStream();
            Deflater def = new Deflater(Deflater.BEST_COMPRESSION);
            ObjectOutputStream objStream = new ObjectOutputStream(
                    new DeflaterOutputStream(serialObj, def));
            objStream.writeObject(obj);
            objStream.close();
            return serialObj.toByteArray();
        } catch (Exception e) {
            throw new IOException("Serialization error: " + e.getMessage(), e);
        }
    }
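For completeness, a self-contained round-trip sketch of the compress-and-serialize approach described above, including the matching deserialize (the class and method names here are assumptions, not Pig's actual ObjectSerializer API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.zip.Deflater;
import java.util.zip.DeflaterOutputStream;
import java.util.zip.InflaterInputStream;

// Illustrative sketch of compressed Java object serialization, as suggested
// in the review comment. Names are hypothetical.
public class CompressedSerde {

    public static byte[] serializeWithoutEncoding(Object obj) throws IOException {
        if (obj == null) {
            return new byte[0];
        }
        ByteArrayOutputStream serialObj = new ByteArrayOutputStream();
        Deflater def = new Deflater(Deflater.BEST_COMPRESSION);
        // Closing the stream finishes the deflater and flushes all bytes.
        try (ObjectOutputStream objStream =
                new ObjectOutputStream(new DeflaterOutputStream(serialObj, def))) {
            objStream.writeObject(obj);
        }
        return serialObj.toByteArray();
    }

    public static Object deserializeWithoutEncoding(byte[] bytes)
            throws IOException, ClassNotFoundException {
        if (bytes == null || bytes.length == 0) {
            return null;
        }
        // InflaterInputStream's default Inflater matches the default Deflater wrap.
        try (ObjectInputStream objStream = new ObjectInputStream(
                new InflaterInputStream(new ByteArrayInputStream(bytes)))) {
            return objStream.readObject();
        }
    }
}
```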



src/org/apache/pig/backend/hadoop/executionengine/spark/MapReducePartitionerWrapper.java
Lines 79 (patched)
<https://reviews.apache.org/r/57317/#comment242011>

    Any reason for using an IndexedKey to store the PigNullableWritable instead 
of using PigNullableWritable directly? Is it to have a Serializable 
implementation?
    
    The comparators for PigNullableWritable take care of a lot of conditions for 
the different data types, and IndexedKey can miss some of that. Also, since you 
are repeating the index and adding another object wrapper, the size of each 
record will be larger.



src/org/apache/pig/backend/hadoop/executionengine/spark/MapReducePartitionerWrapper.java
Lines 102-103 (patched)
<https://reviews.apache.org/r/57317/#comment242012>

    Reflection for the method is not required; you can just call getPartition() 
directly on the Partitioner.
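A sketch of the direct call, using a hypothetical stand-in for the Partitioner contract (the real class is Hadoop's; everything here is illustrative):

```java
// Stand-in for Hadoop's Partitioner contract; illustrative only.
interface Partitioner<K, V> {
    int getPartition(K key, V value, int numPartitions);
}

public class DirectCallDemo {
    // Hash-based implementation, mirroring the shape of Hadoop's HashPartitioner.
    public static class HashPartitioner<K, V> implements Partitioner<K, V> {
        public int getPartition(K key, V value, int numPartitions) {
            return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
        }
    }

    public static void main(String[] args) {
        Partitioner<String, String> p = new HashPartitioner<>();
        // No Method.lookup/invoke needed: the interface method is called directly,
        // which is both simpler and avoids per-record reflection overhead.
        int part = p.getPartition("someKey", null, 4);
        System.out.println(part);
    }
}
```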



src/org/apache/pig/backend/hadoop/executionengine/spark/MapReducePartitionerWrapper.java
Lines 121-123 (patched)
<https://reviews.apache.org/r/57317/#comment242013>

    Can this be removed? It is too much debug information.



src/org/apache/pig/backend/hadoop/executionengine/spark/MapReducePartitionerWrapper.java
Lines 130 (patched)
<https://reviews.apache.org/r/57317/#comment242014>

    Why do we even need equals() and hashCode() implementations for the 
Partitioner class?



src/org/apache/pig/backend/hadoop/executionengine/spark/MapReducePartitionerWrapper.java
Lines 133-134 (patched)
<https://reviews.apache.org/r/57317/#comment242015>

    Use a better naming convention for these variables.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkEngineConf.java
Lines 49-51 (patched)
<https://reviews.apache.org/r/57317/#comment242010>

    Nitpick: the formatting is off.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 167 (patched)
<https://reviews.apache.org/r/57317/#comment241964>

    Why call the explain method in debug statements? SparkOperPlan has toString() 
implemented, so you can just do
    LOG.info("Spark plan:\n" + sparkplan);
    It will also get rid of the System.out. Same for the other places which call 
the explain method.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 175-177 (patched)
<https://reviews.apache.org/r/57317/#comment241965>

    Avoid the random job id.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 179 (patched)
<https://reviews.apache.org/r/57317/#comment241966>

    You can reuse the ScriptState id (pig.script.id) instead of creating another 
random UUID for the job group id.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 249 (patched)
<https://reviews.apache.org/r/57317/#comment241995>

    No System.out statements. Please use LOG.debug. Same for other occurrences 
as well.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 298 (patched)
<https://reviews.apache.org/r/57317/#comment241997>

    LOG.info("Running clean up");
    
    A lot of log messages start with a lowercase letter. Can you fix them to 
start with an uppercase letter?



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 302 (patched)
<https://reviews.apache.org/r/57317/#comment242000>

    Why do we have to delete ship files and cache files in local mode? We should 
not have to do that, and this will end up deleting user files.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 354 (patched)
<https://reviews.apache.org/r/57317/#comment242002>

    "Adding files to Spark Job"
    
    Most of the log messages start with a lowercase letter, use camelCase words, 
or read more like developer debug messages. Can we make them all proper 
sentences starting with a capital letter, since they are seen by users?



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 370 (patched)
<https://reviews.apache.org/r/57317/#comment242001>

    The log statement is not required; addResourceToSparkJobWorkingDirectory 
will print the file name again.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 387-388 (patched)
<https://reviews.apache.org/r/57317/#comment242003>

    Why copy to a local directory if it is already in HDFS?



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 437 (patched)
<https://reviews.apache.org/r/57317/#comment242004>

    In local mode you should not have to copy any files, and you especially 
should not be copying anything to the current user's directory.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 545 (patched)
<https://reviews.apache.org/r/57317/#comment242007>

    Remove the "PigOnSpark:" prefix from the job name; it will already be 
prefixed with "PigLatin:".



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 576-580 (patched)
<https://reviews.apache.org/r/57317/#comment241970>

    This is redundant; it sets the same setting to true again.



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 604 (patched)
<https://reviews.apache.org/r/57317/#comment241968>

    new ArrayList<>(allOperKeys.keySet());



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java
Lines 651-658 (patched)
<https://reviews.apache.org/r/57317/#comment241969>

    Shouldn't calling sparkContext.stop() work for this as well?



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkUtil.java
Lines 62-63 (patched)
<https://reviews.apache.org/r/57317/#comment241957>

    Keep state like the default parallelism and broadcast variables in some 
singleton class like PigSparkContext instead of SparkUtil.
    
    You should also give preference to Pig's default_parallel over 
spark.default.parallelism.
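The suggested precedence could be sketched like this (the property keys and the method name are assumptions, used only to illustrate the ordering):

```java
import java.util.Properties;

// Hypothetical sketch: Pig's default_parallel wins over Spark's
// spark.default.parallelism, with a caller-supplied fallback last.
public class ParallelismResolver {
    public static int resolveParallelism(Properties pigProps, Properties sparkProps,
                                         int fallback) {
        String pigDefault = pigProps.getProperty("default_parallel");
        if (pigDefault != null) {
            return Integer.parseInt(pigDefault);   // Pig's setting takes precedence
        }
        String sparkDefault = sparkProps.getProperty("spark.default.parallelism");
        if (sparkDefault != null) {
            return Integer.parseInt(sparkDefault); // then Spark's default
        }
        return fallback;                           // finally a hard-coded fallback
    }
}
```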



src/org/apache/pig/backend/hadoop/executionengine/spark/SparkUtil.java
Lines 94-97 (patched)
<https://reviews.apache.org/r/57317/#comment241963>

    Setting random job and attempt ids instead of the actual ones is not a good 
idea; I have seen user UDFs access them as well. If it is tricky to fix and 
will take time, create a new JIRA to be fixed after the merge.



src/org/apache/pig/backend/hadoop/executionengine/spark/UDFJarsFinder.java
Lines 54 (patched)
<https://reviews.apache.org/r/57317/#comment241945>

    throw new VisitorException(e);



src/org/apache/pig/backend/hadoop/executionengine/spark/converter/CollectedGroupConverter.java
Lines 51-52 (patched)
<https://reviews.apache.org/r/57317/#comment241955>

    Remove unused variables



src/org/apache/pig/backend/hadoop/executionengine/spark/converter/CounterConverter.java
Lines 41 (patched)
<https://reviews.apache.org/r/57317/#comment241956>

    Minor. Fix whitespaces in this class.



src/org/apache/pig/backend/hadoop/executionengine/spark/converter/DistinctConverter.java
Lines 69-70 (patched)
<https://reviews.apache.org/r/57317/#comment241958>

    Can remove. The code is too simple to warrant debug statements for the in 
and out tuples.



src/org/apache/pig/backend/hadoop/executionengine/spark/converter/DistinctConverter.java
Lines 76-77 (patched)
<https://reviews.apache.org/r/57317/#comment241959>

    Can remove



src/org/apache/pig/backend/hadoop/executionengine/spark/converter/ForEachConverter.java
Lines 82-92 (patched)
<https://reviews.apache.org/r/57317/#comment241960>

    Shouldn't this also go into initialize() method?


- Rohini Palaniswamy


On March 17, 2017, 6:35 a.m., kelly zhang wrote:
> 
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> https://reviews.apache.org/r/57317/
> -----------------------------------------------------------
> 
> (Updated March 17, 2017, 6:35 a.m.)
> 
> 
> Review request for pig, Daniel Dai and Rohini Palaniswamy.
> 
> 
> Bugs: PIG-4059 and PIG-4854;
>     https://issues.apache.org/jira/browse/PIG-4059
>     https://issues.apache.org/jira/browse/PIG-4854;
> 
> 
> Repository: pig-git
> 
> 
> Description
> -------
> 
> Merge all changes from spark branch
> 
> 
> Diffs
> -----
> 
>   bin/pig e1212fa 
>   build.xml a0d2ca8 
>   ivy.xml 42daec9 
>   ivy/libraries.properties 481066e 
>   src/META-INF/services/org.apache.pig.ExecType 5c034c8 
>   src/docs/src/documentation/content/xdocs/start.xml c9a1491 
>   
> src/org/apache/pig/backend/hadoop/executionengine/mapReduceLayer/PigSplit.java
>  e866b28 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/PhysicalOperator.java
>  0e35273 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/expressionOperators/POUserFunc.java
>  ecf780c 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/plans/PhyPlanVisitor.java
>  3bad98b 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/plans/PhysicalPlan.java
>  2376d03 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POBroadcastSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POCollectedGroup.java
>  bcbfe2b 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POFRJoin.java
>  d80951a 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POFRJoinSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POForEach.java
>  4dc6d54 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POGlobalRearrange.java
>  52cfb73 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POMergeCogroup.java
>  4923d3f 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POMergeJoin.java
>  13f70c0 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POPoissonSample.java
>  f2830c2 
>   
> src/org/apache/pig/backend/hadoop/executionengine/physicalLayer/relationalOperators/POSort.java
>  c3a82c3 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/JobGraphBuilder.java 
> PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/JobMetricsListener.java
>  PRE-CREATION 
>   src/org/apache/pig/backend/hadoop/executionengine/spark/KryoSerializer.java 
> PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/MapReducePartitionerWrapper.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/SparkEngineConf.java 
> PRE-CREATION 
>   src/org/apache/pig/backend/hadoop/executionengine/spark/SparkExecType.java 
> PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/SparkExecutionEngine.java
>  PRE-CREATION 
>   src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLauncher.java 
> PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/SparkLocalExecType.java
>  PRE-CREATION 
>   src/org/apache/pig/backend/hadoop/executionengine/spark/SparkUtil.java 
> PRE-CREATION 
>   src/org/apache/pig/backend/hadoop/executionengine/spark/UDFJarsFinder.java 
> PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/BroadcastConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/CollectedGroupConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/CounterConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/DistinctConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/FRJoinConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/FilterConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/ForEachConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/GlobalRearrangeConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/IndexedKey.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/IteratorTransform.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/JoinGroupSparkConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/LimitConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/LoadConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/LocalRearrangeConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/MergeCogroupConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/MergeJoinConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/OutputConsumerIterator.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/PackageConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/PigSecondaryKeyComparatorSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/PoissonSampleConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/RDDConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/RankConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/ReduceByConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/SecondaryKeySortUtil.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/SkewedJoinConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/SortConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/SparkSampleSortConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/SplitConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/StoreConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/StreamConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/converter/UnionConverter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/operator/NativeSparkOperator.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/operator/POGlobalRearrangeSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/operator/POJoinGroupSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/operator/POPoissonSampleSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/operator/POReduceBySpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/operator/POSampleSortSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/AccumulatorOptimizer.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/CombinerOptimizer.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/JoinGroupOptimizerSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/MultiQueryOptimizerSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/NoopFilterRemover.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/ParallelismSetter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/optimizer/SecondaryKeyOptimizerSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/DotSparkPrinter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkCompiler.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkCompilerException.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkOpPlanVisitor.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkOperPlan.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkOperator.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkPOPackageAnnotator.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/SparkPrinter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/plan/XMLSparkPrinter.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/running/PigInputFormatSpark.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/spark/streaming/SparkExecutableManager.java
>  PRE-CREATION 
>   
> src/org/apache/pig/backend/hadoop/executionengine/util/AccumulatorOptimizerUtil.java
>  c4b44ad 
>   
> src/org/apache/pig/backend/hadoop/executionengine/util/CombinerOptimizerUtil.java
>  889c01b 
>   
> src/org/apache/pig/backend/hadoop/executionengine/util/SecondaryKeyOptimizerUtil.java
>  0b59c9c 
>   src/org/apache/pig/backend/hadoop/streaming/HadoopExecutableManager.java 
> 951146f 
>   src/org/apache/pig/data/SelfSpillBag.java d17f0a8 
>   src/org/apache/pig/impl/plan/OperatorPlan.java 8b2e2e7 
>   src/org/apache/pig/impl/util/UDFContext.java 09afc0a 
>   src/org/apache/pig/tools/pigstats/PigStatsUtil.java e97625f 
>   src/org/apache/pig/tools/pigstats/spark/SparkCounter.java PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkCounterGroup.java PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkCounters.java PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkJobStats.java PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkPigStats.java PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkPigStatusReporter.java 
> PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkScriptState.java PRE-CREATION 
>   src/org/apache/pig/tools/pigstats/spark/SparkStatsUtil.java PRE-CREATION 
>   test/e2e/pig/build.xml 1ec9cf6 
>   test/e2e/pig/conf/spark.conf PRE-CREATION 
>   test/e2e/pig/drivers/TestDriverPig.pm bcec317 
>   test/e2e/pig/tests/streaming.conf 18f2fb2 
>   test/excluded-tests-spark PRE-CREATION 
>   
> test/org/apache/pig/newplan/logical/relational/TestLocationInPhysicalPlan.java
>  94b34b3 
>   test/org/apache/pig/spark/TestIndexedKey.java PRE-CREATION 
>   test/org/apache/pig/spark/TestSecondarySortSpark.java PRE-CREATION 
>   test/org/apache/pig/test/MiniGenericCluster.java 9347269 
>   test/org/apache/pig/test/SparkMiniCluster.java PRE-CREATION 
>   test/org/apache/pig/test/TestAssert.java 6d4b5c6 
>   test/org/apache/pig/test/TestCase.java c9bb2fa 
>   test/org/apache/pig/test/TestCollectedGroup.java a958d33 
>   test/org/apache/pig/test/TestCombiner.java df44293 
>   test/org/apache/pig/test/TestCubeOperator.java de96e6c 
>   test/org/apache/pig/test/TestEmptyInputDir.java a9a46af 
>   test/org/apache/pig/test/TestEvalPipeline.java 48ece69 
>   test/org/apache/pig/test/TestEvalPipeline2.java c8f51d7 
>   test/org/apache/pig/test/TestEvalPipelineLocal.java c12d595 
>   test/org/apache/pig/test/TestFinish.java f18c103 
>   test/org/apache/pig/test/TestForEachNestedPlanLocal.java 63d8f67 
>   test/org/apache/pig/test/TestGrunt.java f16ff60 
>   test/org/apache/pig/test/TestHBaseStorage.java 864985e 
>   test/org/apache/pig/test/TestLimitVariable.java 53b9dae 
>   test/org/apache/pig/test/TestLineageFindRelVisitor.java e8e6aeb 
>   test/org/apache/pig/test/TestMapSideCogroup.java 2c78b4a 
>   test/org/apache/pig/test/TestMergeJoinOuter.java 81aee55 
>   test/org/apache/pig/test/TestMultiQuery.java c32eab7 
>   test/org/apache/pig/test/TestMultiQueryLocal.java b9ac035 
>   test/org/apache/pig/test/TestNativeMapReduce.java c4f6573 
>   test/org/apache/pig/test/TestNullConstant.java 3ea4509 
>   test/org/apache/pig/test/TestPigRunner.java 25380e4 
>   test/org/apache/pig/test/TestPigServer.java 8e28646 
>   test/org/apache/pig/test/TestPigServerLocal.java fbabd03 
>   test/org/apache/pig/test/TestProjectRange.java 2e3e7b8 
>   test/org/apache/pig/test/TestPruneColumn.java f05e0ec 
>   test/org/apache/pig/test/TestRank1.java 9e4ef62 
>   test/org/apache/pig/test/TestRank2.java fc802a9 
>   test/org/apache/pig/test/TestRank3.java 43af10d 
>   test/org/apache/pig/test/TestSecondarySort.java 8991010 
>   test/org/apache/pig/test/TestSkewedJoin.java 947a31b 
>   test/org/apache/pig/test/TestStoreBase.java eb3b253 
>   test/org/apache/pig/test/TezMiniCluster.java 0bf7c5a 
>   test/org/apache/pig/test/Util.java 18b241e 
>   test/org/apache/pig/test/YarnMiniCluster.java PRE-CREATION 
> 
> 
> Diff: https://reviews.apache.org/r/57317/diff/3/
> 
> 
> Testing
> -------
> 
> all test pass
> 
> 
> Thanks,
> 
> kelly zhang
> 
>
