See <https://builds.apache.org/job/Mahout-Quality/3525/display/redirect?page=changes>

Changes:

[rawkintrevo] rebased off of master fresh

[rawkintrevo] viennacl and viennacl-omp Pall-scala

[rawkintrevo] better cleaning

[rawkintrevo] failing on h2o, but works otherwise

[rawkintrevo] added tests

[rawkintrevo] fix antrun copy issue on h2o

[rawkintrevo] h2o dependency reduced

[rawkintrevo] add scala version to spark profiles or math-scala doesn't build

[rawkintrevo] moved integration module to end

[rawkintrevo] moved distribution module to end

[rawkintrevo] distribution was looking for jar named with classifier

[trevor.d.grant] throwing poo at the wall

[trevor.d.grant] still broken, but profile order matters

------------------------------------------
[...truncated 320.30 KB...]
X2              -4.15265                +1.78491                -2.32654                +0.08056
X3              -5.67991                +1.88687                -3.01022                +0.03954
X4              +163.17933              +51.91530               +3.14318                +0.03474
F-statistic: 16.385423615806292 on 4 and 4 DF,  p-value: 0.009545

Mean Squared Error: 6.4571565392236
R^2: 0.9424805502529814
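(A quick consistency check on the summary above, assuming the standard OLS F-test with the reported 4 model and 4 residual degrees of freedom: F = (R^2/4) / ((1 - R^2)/4) = 0.942481 / 0.057519 ≈ 16.3854, which agrees with the printed F-statistic.)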
- fittness tests
- durbinWatsonTest test
RegressionSuite:
- ordinary least squares
- cochrane-orcutt
BlasSuite:
AB' num partitions = 3.
{
 0 =>   {0:26.0,1:38.0}
 1 =>   {0:38.0,1:56.0}
 2 =>   {0:50.0,1:74.0}
}
- ABt
- A * B Hadamard
- A + B Elementwise
- A - B Elementwise
- A / B Elementwise
{
 0 =>   {0:5.0,1:8.0}
 1 =>   {0:8.0,1:13.0}
}
{
 0 =>   {0:5.0,1:8.0}
 1 =>   {0:8.0,1:13.0}
}
- AtA slim
{
 0 =>   {0:1.0,1:2.0,2:3.0}
 1 =>   {0:2.0,1:3.0,2:4.0}
 2 =>   {0:3.0,1:4.0,2:5.0}
}
- At
- verbosity
SimilarityAnalysisSuite:
- Cross-occurrence [A'A], [B'A] boolean data using LLR
- Cross-occurrence [A'A], [B'A] double data using LLR
- Cross-occurrence [A'A], [B'A] integer data using LLR
- Cross-occurrence two matrices with different number of columns
- Cross-occurrence two IndexedDatasets
- Cross-occurrence two IndexedDatasets different row ranks
- Cross-occurrence two IndexedDatasets LLR threshold
- LLR calc
- downsampling by number per row
ClassifierStatsSparkTestSuite:
- testFullRunningAverageAndStdDev
- testBigFullRunningAverageAndStdDev
- testStddevFullRunningAverageAndStdDev
- testFullRunningAverage
- testFullRunningAveragCopyConstructor
- testInvertedRunningAverage
- testInvertedRunningAverageAndStdDev
- testBuild
- GetMatrix
- testPrecisionRecallAndF1ScoreAsScikitLearn
RLikeDrmOpsSuite:
- A.t
{
 0 =>   {0:11.0,1:17.0}
 1 =>   {0:25.0,1:39.0}
}
{
 0 =>   {0:11.0,1:17.0}
 1 =>   {0:25.0,1:39.0}
}
- C = A %*% B
{
 0 =>   {0:11.0,1:17.0}
 1 =>   {0:25.0,1:39.0}
}
{
 0 =>   {0:11.0,1:17.0}
 1 =>   {0:25.0,1:39.0}
}
Q=
{
 0 =>   {0:0.40273861426601687,1:-0.9153150324187648}
 1 =>   {0:0.9153150324227656,1:0.40273861426427493}
}
- C = A %*% B mapBlock {}
- C = A %*% B incompatible B keys
- Spark-specific C = At %*% B , join
- C = At %*% B , join, String-keyed
- C = At %*% B , zippable, String-keyed
- C = A %*% B.t
{
 0 =>   {0:26.0,1:35.0,2:46.0,3:51.0}
 1 =>   {0:50.0,1:69.0,2:92.0,3:105.0}
 2 =>   {0:62.0,1:86.0,2:115.0,3:132.0}
 3 =>   {0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = A %*% inCoreB
{
 0 =>   {0:26.0,1:35.0,2:46.0,3:51.0}
 1 =>   {0:50.0,1:69.0,2:92.0,3:105.0}
 2 =>   {0:62.0,1:86.0,2:115.0,3:132.0}
 3 =>   {0:74.0,1:103.0,2:138.0,3:159.0}
}
- C = inCoreA %*%: B
- C = A.t %*% A
- C = A.t %*% A fat non-graph
- C = A.t %*% A non-int key
- C = A + B
A=
{
 0 =>   {0:1.0,1:2.0,2:3.0}
 1 =>   {0:3.0,1:4.0,2:5.0}
 2 =>   {0:5.0,1:6.0,2:7.0}
}
B=
{
 0 =>   {0:0.9642417991285117,1:0.8912109669210633,2:0.9068284737240229}
 1 =>   {0:0.21510881935800774,1:0.2520809477156928,2:0.11200953068118802}
 2 =>   {0:0.5065984433451026,1:0.9865993528224783,2:0.02122628475914201}
}
C=
{
 0 =>   {0:1.9642417991285117,1:2.8912109669210633,2:3.9068284737240226}
 1 =>   {0:3.215108819358008,1:4.252080947715693,2:5.112009530681188}
 2 =>   {0:5.506598443345102,1:6.986599352822479,2:7.021226284759142}
}
- C = A + B, identically partitioned
- C = A + B side test 1
- C = A + B side test 2
- C = A + B side test 3
- Ax
- A'x
- colSums, colMeans
- rowSums, rowMeans
- A.diagv
- numNonZeroElementsPerColumn
- C = A cbind B, cogroup
- C = A cbind B, zip
- B = 1 cbind A
- B = A cbind 1
- B = A + 1.0
- C = A rbind B
- C = A rbind B, with empty
- scalarOps
optimized:OpAewUnaryFunc(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@33015f28,<function1>,false)
- A * A -> sqr(A) rewrite 
optimizer rewritten:OpAewUnaryFuncFusion(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@35c1444b,List(OpAewUnaryFunc(OpAewUnaryFunc(OpAewUnaryFunc(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@35c1444b,<function1>,false),<function1>,false),<function1>,true), OpAewUnaryFunc(OpAewUnaryFunc(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@35c1444b,<function1>,false),<function1>,false), OpAewUnaryFunc(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@35c1444b,<function1>,false)))
- B = 1 + 2 * (A * A) ew unary function fusion
- functional apply()
- C = A + B missing rows
- C = cbind(A, B) with missing rows
collected A = 
{
 0 =>   {0:1.0,1:2.0,2:3.0}
 1 =>   {}
 2 =>   {}
 3 =>   {0:3.0,1:4.0,2:5.0}
}
collected B = 
{
 0 =>   {0:2.0,1:3.0,2:4.0}
 1 =>   {0:1.0,1:1.0,2:1.0}
 2 =>   {0:1.0,1:1.0,2:1.0}
 3 =>   {0:4.0,1:5.0,2:6.0}
}
- B = A + 1.0 missing rows
in-core mul ms: 1256
a'b plan:OpAtB(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@68396a6a,org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@11798a9b)
a'b plan contains 3 partitions.
distributed mul ms: 2339.
- A'B, bigger
- C = At %*% B , zippable
TextDelimitedReaderWriterSuite:
- indexedDatasetDFSRead should read sparse matrix file with null rows
PreprocessorSuite:
OpMapBlock(org.apache.mahout.sparkbindings.drm.CheckpointedDrmSpark@3fe34ebd,<function1>,8,-1,true)
{1:3.0,2:5.0,3:6.0}
- asfactor test
{0:2.0,1:5.0,2:-4.0}
{0:0.8164965809277263,1:3.2659863237109037,2:8.286535263104035}
- standard scaler test
- mean center test
TFIDFSparkTestSuite:
- TF test
- TFIDF test
- MLlib TFIDF test
Run completed in 1 minute, 56 seconds.
Total number of tests run: 131
Suites: completed 19, aborted 0
Tests: succeeded 131, failed 0, canceled 0, ignored 1, pending 0
All tests passed.
[INFO] 
[INFO] --- build-helper-maven-plugin:1.9.1:remove-project-artifact (remove-old-mahout-artifacts) @ mahout-spark_2.10 ---
[INFO] /home/jenkins/.m2/repository/org/apache/mahout/mahout-spark_2.10 removed.
[INFO] 
[INFO] --- maven-site-plugin:3.4:attach-descriptor (attach-descriptor) @ mahout-spark_2.10 ---
[INFO] 
[INFO] --- maven-antrun-plugin:1.4:run (copy) @ mahout-spark_2.10 ---
project.artifactId
[INFO] Executing tasks
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Apache Mahout
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO]                                                                         
[INFO] ------------------------------------------------------------------------
[INFO] Skipping Mahout Build Tools
[INFO] This project has been banned from the build due to previous failures.
[INFO] ------------------------------------------------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO] 
[INFO] Mahout Build Tools ................................. SUCCESS [  2.910 s]
[INFO] Apache Mahout ...................................... SUCCESS [  0.796 s]
[INFO] Mahout Math ........................................ SUCCESS [01:37 min]
[INFO] Mahout HDFS ........................................ SUCCESS [  6.212 s]
[INFO] Mahout Map-Reduce .................................. SUCCESS [12:34 min]
[INFO] Mahout Integration ................................. SUCCESS [01:02 min]
[INFO] Mahout Examples .................................... SUCCESS [ 28.389 s]
[INFO] Mahout Math Scala bindings ......................... SUCCESS [04:56 min]
[INFO] Mahout Spark bindings .............................. FAILURE [02:59 min]
[INFO] Mahout H2O backend ................................. SKIPPED
[INFO] Mahout Release Package ............................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 23:52 min
[INFO] Finished at: 2017-12-18T04:10:55+00:00
[INFO] Final Memory: 69M/965M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.4:run (copy) on project mahout-spark_2.10: An Ant BuildException has occured: Only one of tofile and todir may be set. -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
[ERROR] 
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :mahout-spark_2.10
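The BuildException above comes from Ant's <copy> task, which rejects a copy that sets both tofile and todir on the same task. For reference, a minimal sketch of what a valid antrun copy execution can look like, keeping only todir; the plugin version and execution id match the log, but the phase, file name, and target directory below are illustrative assumptions, not the actual Mahout pom.xml:

<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-antrun-plugin</artifactId>
  <version>1.4</version>
  <executions>
    <execution>
      <id>copy</id>
      <phase>package</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <tasks>
          <!-- Ant's <copy> accepts either tofile (copy and rename one file)
               or todir (copy into a directory), never both; dropping one of
               the two attributes clears the BuildException. -->
          <copy file="${project.build.directory}/${project.artifactId}-${project.version}.jar"
                todir="${project.build.directory}/release"/>
        </tasks>
      </configuration>
    </execution>
  </executions>
</plugin>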
Build step 'Invoke top-level Maven targets' marked build as failure
[PMD] Skipping publisher since build result is FAILURE
[TASKS] Skipping publisher since build result is FAILURE
Archiving artifacts
[Fast Archiver] Compressed 153.73 MB of artifacts by 89.2% relative to #3524
Recording test results
Publishing Javadoc
