[
https://issues.apache.org/jira/browse/BAHIR-104?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16279429#comment-16279429
]
ASF GitHub Bot commented on BAHIR-104:
--------------------------------------
Github user ckadner commented on the issue:
https://github.com/apache/bahir/pull/55
@emlaver -- could you take a look at the build failure? Thanks
```
CloudantChangesDFSuite:
- load and save data from Cloudant database *** FAILED ***
0 did not equal 1967 (CloudantChangesDFSuite.scala:51)
```
... and with a bit more log context:
```
[INFO] --- scalatest-maven-plugin:1.0:test (test) @ spark-sql-cloudant_2.11 ---
Discovery starting.
Sql-cloudant tests that require Cloudant databases have been enabled by the environment variables CLOUDANT_USER and CLOUDANT_PASSWORD.
Discovery completed in 187 milliseconds.
Run starting. Expected test count is: 22
CloudantOptionSuite:
- invalid api receiver option throws an error message
- empty username option throws an error message
- empty password option throws an error message
- empty databaseName throws an error message
ClientSparkFunSuite:
CloudantChangesDFSuite:
- load and save data from Cloudant database *** FAILED ***
0 did not equal 1967 (CloudantChangesDFSuite.scala:51)
- load and count data from Cloudant search index
- load data and verify deleted doc is not in results
- load data and count rows in filtered dataframe
- save filtered dataframe to database
- save dataframe to database using createDBOnSave=true option
- load and count data from view
- load data from view with MapReduce function
- load data and verify total count of selector, filter, and view option
CloudantSparkSQLSuite:
- verify results from temp view of database n_airportcodemapping
- verify results from temp view of index in n_flight
CloudantAllDocsDFSuite:
- load and save data from Cloudant database
- load and count data from Cloudant search index
- load data and count rows in filtered dataframe
- save filtered dataframe to database
- save dataframe to database using createDBOnSave=true option
- load and count data from view
- load data from view with MapReduce function
Run completed in 3 minutes, 8 seconds.
Total number of tests run: 22
Suites: completed 6, aborted 0
Tests: succeeded 21, failed 1, canceled 0, ignored 0, pending 0
*** 1 TEST FAILED ***
[INFO]
------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Bahir - Parent POM .......................... SUCCESS [  4.355 s]
[INFO] Apache Bahir - Spark SQL Cloudant DataSource ....... FAILURE [06:50 min]
[INFO] Apache Bahir - Spark Streaming Akka ................ SKIPPED
[INFO] Apache Bahir - Spark SQL Streaming Akka ............ SKIPPED
[INFO] Apache Bahir - Spark Streaming MQTT ................ SKIPPED
[INFO] Apache Bahir - Spark SQL Streaming MQTT ............ SKIPPED
[INFO] Apache Bahir - Spark Streaming Twitter ............. SKIPPED
[INFO] Apache Bahir - Spark Streaming ZeroMQ .............. SKIPPED
[INFO] Apache Bahir - Spark Streaming Google PubSub ....... SKIPPED
[INFO] Apache Bahir - Spark Extensions Distribution ....... SKIPPED
[INFO]
------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO]
------------------------------------------------------------------------
[INFO] Total time: 06:55 min
[INFO] Finished at: 2017-12-05T14:33:29-08:00
[INFO] Final Memory: 67M/2606M
[INFO]
------------------------------------------------------------------------
[ERROR] Failed to execute goal org.scalatest:scalatest-maven-plugin:1.0:test (test) on project spark-sql-cloudant_2.11: There are test failures -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
[ERROR]
[ERROR] After correcting the problems, you can resume the build with the command
[ERROR]   mvn <goals> -rf :spark-sql-cloudant_2.11
```
> MQTT Dstream returned by the new multi topic support API is not a pairRDD
> -------------------------------------------------------------------------
>
> Key: BAHIR-104
> URL: https://issues.apache.org/jira/browse/BAHIR-104
> Project: Bahir
> Issue Type: Bug
> Components: Spark Streaming Connectors
> Affects Versions: Spark-2.1.0
> Reporter: Francesco Beneventi
> Labels: MQTT, SPARK
>
> The new multi topic support API added with [BAHIR-89], when used in pyspark,
> does not return a DStream of (topic, message) tuples.
> Example:
> In pyspark, when creating a DStream using the new API ( mqttstream =
> MQTTUtils.createPairedStream(ssc, brokerUrl, topics) ), the expected content
> of mqttstream should be a collection of tuples:
> (topic, message), (topic, message), (topic, message), ...
> Instead, the current content is a flattened list:
> topic, message, topic, message, topic, message, ...
> which is hard to use.
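The flattening described above could, as a stopgap, be undone on the pyspark side by re-pairing consecutive elements. A minimal sketch, assuming the flat sequence preserves the alternating topic/message order; `repair_pairs` is a hypothetical helper, not part of the Bahir API:

```python
# Hypothetical workaround sketch for the flattened output described above.
# Assumes the flat sequence keeps alternating topic/message order.

def repair_pairs(flat_seq):
    """Turn [topic, message, topic, message, ...] back into
    [(topic, message), (topic, message), ...]."""
    it = iter(flat_seq)
    # zip over the same iterator consumes two elements per tuple
    return list(zip(it, it))

# Example with the shape reported in the issue:
flat = ["topic1", "msg1", "topic2", "msg2", "topic1", "msg3"]
print(repair_pairs(flat))
# [('topic1', 'msg1'), ('topic2', 'msg2'), ('topic1', 'msg3')]
```

On an actual DStream this would have to be applied per partition (e.g. via `mapPartitions`), which only helps if the runtime keeps each topic adjacent to its message within a partition; the proper fix is for the API to emit tuples in the first place.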
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)