[ https://issues.apache.org/jira/browse/MAHOUT-1541?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14088622#comment-14088622 ]
Hudson commented on MAHOUT-1541:
--------------------------------

SUCCESS: Integrated in Mahout-Quality #2733 (See [https://builds.apache.org/job/Mahout-Quality/2733/])
MAHOUT-1541, MAHOUT-1568, MAHOUT-1569: refactored the options parser and option defaults to DRY up individual driver code, putting more in base classes; tightened up the test suite with a better way of comparing actual with expected results (pat: rev a80974037853c5227f9e5ef1c384a1fca134746e)
* math-scala/src/main/scala/org/apache/mahout/math/cf/CooccurrenceAnalysis.scala
* spark/src/main/scala/org/apache/mahout/drivers/ReaderWriter.scala
* spark/src/main/scala/org/apache/mahout/sparkbindings/io/MahoutKryoRegistrator.scala
* spark/src/main/scala/org/apache/mahout/drivers/MahoutOptionParser.scala
* spark/src/main/scala/org/apache/mahout/drivers/IndexedDataset.scala
* spark/src/main/scala/org/apache/mahout/drivers/MahoutDriver.scala
* spark/src/main/scala/org/apache/mahout/cf/CooccurrenceAnalysis.scala
* spark/src/main/scala/org/apache/mahout/sparkbindings/drm/CheckpointedDrmSpark.scala
* spark/src/main/scala/org/apache/mahout/drivers/TextDelimitedReaderWriter.scala
* spark/src/test/scala/org/apache/mahout/drivers/ItemSimilarityDriverSuite.scala
* spark/src/main/scala/org/apache/mahout/drivers/ItemSimilarityDriver.scala
* spark/src/main/scala/org/apache/mahout/drivers/Schema.scala
* spark/src/test/scala/org/apache/mahout/cf/CooccurrenceAnalysisSuite.scala

> Create CLI Driver for Spark Cooccurrence Analysis
> -------------------------------------------------
>
>                 Key: MAHOUT-1541
>                 URL: https://issues.apache.org/jira/browse/MAHOUT-1541
>             Project: Mahout
>          Issue Type: New Feature
>          Components: CLI
>            Reporter: Pat Ferrel
>            Assignee: Pat Ferrel
>
> Create a CLI driver to import data in a flexible manner, create an IndexedDataset with BiMap ID-translation dictionaries, call the Spark CooccurrenceAnalysis with the appropriate parameters, then write output with external IDs optionally reattached.
> Ultimately it should be able to read input as the legacy MapReduce code does, but it will also support externally defined IDs and flexible formats. Output will be either the legacy format or text files in a user-specified format with the external item IDs reattached.
> Support for legacy formats is an open question; users can always use the legacy code if they want it. Internal to the IndexedDataset is a Spark DRM, so pipelining can be accomplished without writing to an actual file, which means the legacy sequence-file output may not be needed.
> Opinions?

--
This message was sent by Atlassian JIRA
(v6.2#6252)
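The ID-translation step described above, mapping arbitrary external IDs to the contiguous integer row/column indices a DRM needs and back again on output, can be sketched roughly as below. This is a minimal illustration, not Mahout's actual implementation (which uses Guava BiMaps inside IndexedDataset); the `IdDictionary` class and its method names are hypothetical.

```scala
import scala.collection.mutable

// Hypothetical stand-in for a BiMap-backed ID dictionary: forward map assigns
// contiguous internal ints, reverse lookup reattaches external IDs on output.
class IdDictionary {
  private val toInternal = mutable.HashMap[String, Int]()
  private val toExternal = mutable.ArrayBuffer[String]()

  // Look up, or assign the next free internal index for, an external ID.
  def internalize(externalId: String): Int =
    toInternal.getOrElseUpdate(externalId, {
      toExternal += externalId
      toExternal.size - 1
    })

  // Reverse translation, used when writing results with external IDs.
  def externalize(internalId: Int): String = toExternal(internalId)

  def size: Int = toExternal.size
}

object IdDictionaryDemo extends App {
  val userDict = new IdDictionary
  // Repeated external IDs map to the same internal index.
  val rows = Seq("u1", "u2", "u1", "u3").map(userDict.internalize)
  println(rows)                     // List(0, 1, 0, 2)
  println(userDict.externalize(2))  // u3
}
```

In the driver, one such dictionary per row and column ID space would be built during import, the DRM computed over the integer indices, and the reverse maps consulted only when writing the final text output.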