[
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16932616#comment-16932616
]
Qihong Chen commented on KAFKA-7500:
Hi [~ryannedolan], Thanks for your quick response (y)(y)(y)
Now
[
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931743#comment-16931743
]
Qihong Chen edited comment on KAFKA-7500 at 9/17/19 7:22 PM:
-
Hi
[
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16931743#comment-16931743
]
Qihong Chen commented on KAFKA-7500:
Hi [~ryannedolan], just listened to your Kafka Power Chat talk
[
https://issues.apache.org/jira/browse/KAFKA-7500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16927863#comment-16927863
]
Qihong Chen edited comment on KAFKA-7500 at 9/11/19 6:13 PM:
-
[~ryannedolan
+1 to merge
On Fri, Jul 24, 2015 at 9:54 AM, Dan Smith dsm...@pivotal.io wrote:
+1 to merge.
-Dan
On Thu, Jul 23, 2015 at 11:44 PM, Jianxia Chen jc...@pivotal.io wrote:
+1 for merge
On Thu, Jul 23, 2015 at 4:27 PM, Anilkumar Gingade aging...@pivotal.io
wrote:
+1 for merge.
Qihong Chen created GEODE-137:
-
Summary: Spark Connector: should connect to local GemFire server
if possible
Key: GEODE-137
URL: https://issues.apache.org/jira/browse/GEODE-137
Project: Geode
[
https://issues.apache.org/jira/browse/GEODE-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Qihong Chen resolved GEODE-120.
---
Resolution: Fixed
RDD.saveToGemfire() cannot handle big dataset (1M entries per partition)
[
https://issues.apache.org/jira/browse/GEODE-114?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Qihong Chen closed GEODE-114.
-
There's a race condition in DefaultGemFireConnection.getRegionProxy
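The connector source for this issue isn't shown here, but a check-then-create proxy cache is the classic shape of such a race: two threads both miss the lookup and each create a proxy for the same region. A minimal hedged sketch (the class and factory below are hypothetical stand-ins, not the actual Geode fix) shows how an atomic `computeIfAbsent` avoids it:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration only: a region-proxy cache where the factory
// runs at most once per key. A plain HashMap with "if absent then put"
// would let two concurrent callers create two proxies for one region.
public class RegionProxyCache {
    private final ConcurrentHashMap<String, Object> proxies =
        new ConcurrentHashMap<>();

    public Object getRegionProxy(String regionPath) {
        // computeIfAbsent is atomic: concurrent callers for the same
        // key block until one factory invocation completes, then share
        // the single cached proxy.
        return proxies.computeIfAbsent(regionPath, path -> new Object());
    }
}
```

Repeated calls for the same region path return the identical cached object, which is the property the buggy check-then-put version fails to guarantee under contention.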
[
https://issues.apache.org/jira/browse/GEODE-120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Qihong Chen updated GEODE-120:
--
Summary: RDD.saveToGemfire() cannot handle big dataset (1M entries per
partition)
---
This is an automatically generated e-mail. To reply, visit:
https://reviews.apache.org/r/36408/#review91625
---
Ship it!
Ship It!
- Qihong Chen
On July 10, 2015, 11:32 p.m
The problem is caused by multiple major dependencies with different release
cycles. The Spark Geode Connector depends on two products, Spark and Geode (not
counting other dependencies); Spark moves much faster than Geode, and some
features/code are not backward compatible.
Our initial connector
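One common way to manage the two fast-diverging upstreams described above is to pin both versions as build properties so each connector release targets one explicit Spark/Geode pair. A hedged sketch of a Maven fragment follows; all coordinates and version numbers here are illustrative assumptions, not the connector's actual build file:

```xml
<!-- Illustrative only: pin each upstream version in one place so a
     connector release compiles against exactly one Spark/Geode pair. -->
<properties>
  <!-- hypothetical versions, chosen for illustration -->
  <spark.version>1.3.1</spark.version>
  <geode.version>1.0.0-incubating</geode.version>
</properties>
<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.10</artifactId>
    <version>${spark.version}</version>
    <scope>provided</scope>
  </dependency>
  <!-- hypothetical Geode coordinates -->
  <dependency>
    <groupId>org.apache.geode</groupId>
    <artifactId>geode-core</artifactId>
    <version>${geode.version}</version>
  </dependency>
</dependencies>
```

Marking Spark as `provided` is one conventional choice for connectors, since the Spark runtime supplies those classes when the job is submitted.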