[
https://issues.apache.org/jira/browse/BAHIR-130?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16205855#comment-16205855
]
Romeo Kienzer commented on BAHIR-130:
-------------------------------------
[~emlaver] I've tried with 3 partitions; that doesn't work. I've tried with one
partition, and that works. But do we have a guarantee that it works in all cases? I
think we could still use my fix to be on the safe side. I've been using this fix in
production for a couple of weeks now for my Coursera course
[https://www.coursera.org/learn/exploring-visualizing-iot-data]
What I really like about my implementation is its purity: it is fully functional, and
no internal state has to be kept (e.g. counting the number of retries). The only
catch I see is that we are changing the semantics from "ERROR" to "BLOCKING".
But you would encounter the same semantics when reading a very large database,
since IMHO the connector still copies the complete result into Apache Spark (no
push-down trait is used, unlike in the MongoDB connector).
I recommend accepting this PR as it is, but I'm happy to discuss...
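For context, here is a minimal sketch of what such a stateless, blocking retry could
look like. It is not necessarily the actual change in the PR; the doRequest parameter
is a hypothetical stand-in for the real HTTP call inside JsonStoreDataAccess, and the
403/too_many_requests check follows the error payload quoted in the issue below.
{code:scala}
import scala.annotation.tailrec

object BlockingRetry {
  // doRequest stands in for the Cloudant HTTP call; it returns (statusCode, body).
  @tailrec
  def fetchBlocking(doRequest: () => (Int, String), delayMs: Long = 200L): String = {
    val (status, body) = doRequest()
    if (status == 403 && body.contains("too_many_requests")) {
      // Instead of failing the Spark task ("ERROR"), wait and try again ("BLOCKING").
      // No retry counter or other mutable state is kept.
      Thread.sleep(delayMs)
      fetchBlocking(doRequest, delayMs)
    } else {
      body
    }
  }
}
{code}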
> Support Cloudant Lite Plan
> --------------------------
>
> Key: BAHIR-130
> URL: https://issues.apache.org/jira/browse/BAHIR-130
> Project: Bahir
> Issue Type: Improvement
> Components: Spark SQL Data Sources
> Affects Versions: Spark-2.0.0, Spark-2.0.1, Spark-2.0.2, Spark-2.1.0,
> Spark-2.1.1, Spark-2.2.0
> Environment: ApacheSpark, any
> Reporter: Romeo Kienzer
> Assignee: Romeo Kienzer
> Priority: Minor
> Fix For: Spark-2.1.1, Spark-2.2.1
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> Cloudant has a plan called "Lite" that supports only five requests per second.
> Once that limit is exceeded, you end up with the following exception:
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 4 in
> stage 0.0 failed 10 times, most recent failure: Lost task 4.9 in stage 0.0
> (TID 42, yp-spark-dal09-env5-0040): java.lang.RuntimeException: Database
> harlemshake2 request error: {"error":"too_many_requests","reason":"You've
> exceeded your current limit of 5 requests per second for query class. Please
> try later.","class":"query","rate":5}
> at
> org.apache.bahir.cloudant.common.JsonStoreDataAccess.getQueryResult(JsonStoreDataAccess.scala:158)
> at
> org.apache.bahir.cloudant.common.JsonStoreDataAccess.getIterator(JsonStoreDataAccess.scala:72)
> Suggestion: Change JsonStoreDataAccess.scala so that when a 403 HTTP status
> code is returned, the response is parsed to obtain the rate limit and the query is
> throttled down to that limit. In addition, issue a WARNING in the log.
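As a rough illustration of that suggestion (not the actual connector code: the
regex-based parsing and the slf4j logger are assumptions, although Spark does ship
slf4j on the classpath; a real implementation would reuse the connector's existing
JSON handling):
{code:scala}
import org.slf4j.LoggerFactory

object RateLimitThrottle {
  private val logger = LoggerFactory.getLogger(getClass)

  // Pulls the "rate" field out of an error body such as
  // {"error":"too_many_requests", ..., "class":"query", "rate":5}.
  private val ratePattern = """"rate"\s*:\s*(\d+)""".r

  // Returns the delay (in ms) between requests needed to stay under the limit,
  // or None if the body does not contain a rate, and logs a WARNING.
  def throttleDelayMs(errorBody: String): Option[Long] =
    ratePattern.findFirstMatchIn(errorBody).map { m =>
      val rate = m.group(1).toLong
      logger.warn(s"Cloudant rate limit of $rate requests/second reached; throttling queries")
      math.max(1000L / rate, 1L)
    }
}
{code}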
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)