[jira] [Commented] (KAFKA-6052) Windows: Consumers not polling when isolation.level=read_committed

2018-03-21 Thread Pegerto Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16408105#comment-16408105
 ] 

Pegerto Fernandez commented on KAFKA-6052:
--

Hello

We did some tests on 1.0.1 and it seems the issue is resolved, but this ticket is 
still open. Can anybody else confirm whether this is solved with 1.0.1?

Regards.

> Windows: Consumers not polling when isolation.level=read_committed 
> ---
>
> Key: KAFKA-6052
> URL: https://issues.apache.org/jira/browse/KAFKA-6052
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer, producer 
>Affects Versions: 0.11.0.0
> Environment: Windows 10. All processes running in embedded mode.
>Reporter: Ansel Zandegran
>Assignee: Vahid Hashemian
>Priority: Major
>  Labels: windows
> Attachments: Prducer_Consumer.log, Separate_Logs.zip, kafka-logs.zip, 
> logFile.log
>
>
> *The same code is running fine in Linux.* I am trying to send a transactional 
> record with exactly-once semantics. These are my producer, consumer, and 
> broker setups. 
> public void sendWithTTemp(String topic, EHEvent event) {
> Properties props = new Properties();
> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG,
>   "localhost:9092,localhost:9093,localhost:9094");
> //props.put("bootstrap.servers", "34.240.248.190:9092,52.50.95.30:9092,52.50.95.30:9092");
> props.put(ProducerConfig.ACKS_CONFIG, "all");
> props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);
> props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1");
> props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
> props.put(ProducerConfig.LINGER_MS_CONFIG, 1);
> props.put("transactional.id", "TID" + transactionId.incrementAndGet());
> props.put(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG, "5000");
> Producer producer =
> new KafkaProducer<>(props,
> new StringSerializer(),
> new StringSerializer());
> Logger.log(this, "Initializing transaction...");
> producer.initTransactions();
> Logger.log(this, "Initializing done.");
> try {
>   Logger.log(this, "Begin transaction...");
>   producer.beginTransaction();
>   Logger.log(this, "Begin transaction done.");
>   Logger.log(this, "Sending events...");
>   producer.send(new ProducerRecord<>(topic,
>  event.getKey().toString(),
>  event.getValue().toString()));
>   Logger.log(this, "Sending events done.");
>   Logger.log(this, "Committing...");
>   producer.commitTransaction();
>   Logger.log(this, "Committing done.");
> } catch (ProducerFencedException | OutOfOrderSequenceException
> | AuthorizationException e) {
>   producer.close();
>   e.printStackTrace();
> } catch (KafkaException e) {
>   producer.abortTransaction();
>   e.printStackTrace();
> }
> producer.close();
>   }
> *In Consumer*
> I have set isolation.level=read_committed
> *In 3 Brokers*
> I'm running with the following properties
>   Properties props = new Properties();
>   props.setProperty("broker.id", "" + i);
>   props.setProperty("listeners", "PLAINTEXT://:909" + (2 + i));
>   props.setProperty("log.dirs", Configuration.KAFKA_DATA_PATH + "\\B" + 
> i);
>   props.setProperty("num.partitions", "1");
>   props.setProperty("zookeeper.connect", "localhost:2181");
>   props.setProperty("zookeeper.connection.timeout.ms", "6000");
>   props.setProperty("min.insync.replicas", "2");
>   props.setProperty("offsets.topic.replication.factor", "2");
>   props.setProperty("offsets.topic.num.partitions", "1");
>   props.setProperty("transaction.state.log.num.partitions", "2");
>   props.setProperty("transaction.state.log.replication.factor", "2");
>   props.setProperty("transaction.state.log.min.isr", "2");
> I am not getting any records in the consumer. When I set 
> isolation.level=read_uncommitted, I get the records. I assume that the 
> records are not getting committed. What could be the problem? Log attached.
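For reference, a minimal consumer-side sketch with isolation.level=read_committed; 
the reporter's consumer code is not shown above, so the topic name, group id, and 
offset reset below are assumptions:

  import java.util.Collections;
  import java.util.Properties;
  import org.apache.kafka.clients.consumer.ConsumerConfig;
  import org.apache.kafka.clients.consumer.ConsumerRecord;
  import org.apache.kafka.clients.consumer.ConsumerRecords;
  import org.apache.kafka.clients.consumer.KafkaConsumer;
  import org.apache.kafka.common.serialization.StringDeserializer;

  public class ReadCommittedConsumerSketch {
    public static void main(String[] args) {
      Properties props = new Properties();
      props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092,localhost:9093,localhost:9094");
      props.put(ConsumerConfig.GROUP_ID_CONFIG, "read-committed-test");  // assumed group id
      props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");    // assumed reset policy
      // Only records from committed transactions are returned with this setting.
      props.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
      try (KafkaConsumer<String, String> consumer =
               new KafkaConsumer<>(props, new StringDeserializer(), new StringDeserializer())) {
        consumer.subscribe(Collections.singletonList("test-topic"));     // assumed topic name
        ConsumerRecords<String, String> records = consumer.poll(5000);   // poll(long), as in 0.11.x/1.0.x
        for (ConsumerRecord<String, String> record : records) {
          System.out.printf("offset=%d key=%s value=%s%n",
                            record.offset(), record.key(), record.value());
        }
      }
    }
  }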



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-5876) IQ should throw different exceptions for different errors

2018-01-30 Thread Pegerto Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344789#comment-16344789
 ] 

Pegerto Fernandez commented on KAFKA-5876:
--

Hi,

In addition to the original comment:

When the local store is not created, or is not yet assigned to a task, 
InvalidStateStoreException is ambiguous and does not indicate whether the store 
does not exist or is simply not yet available.
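For illustration only (not code from this ticket): a sketch of the retry loop the 
current ambiguity forces on callers, where a misspelled or never-created store name 
is indistinguishable from a store that is still restoring, so the loop simply spins 
until its attempt budget runs out.

  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.errors.InvalidStateStoreException;
  import org.apache.kafka.streams.state.QueryableStoreTypes;
  import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

  public final class StoreLookup {
    // Retries until the store is queryable or the attempt budget runs out.
    static ReadOnlyKeyValueStore<String, String> waitForStore(KafkaStreams streams,
                                                              String storeName,
                                                              int maxAttempts)
        throws InterruptedException {
      for (int attempt = 0; attempt < maxAttempts; attempt++) {
        try {
          return streams.store(storeName, QueryableStoreTypes.<String, String>keyValueStore());
        } catch (InvalidStateStoreException e) {
          // Ambiguous: "rebalancing, try again" and "no such store, will never succeed"
          // both surface as the same exception type.
          Thread.sleep(100);
        }
      }
      throw new InvalidStateStoreException(
          "Store " + storeName + " not queryable after " + maxAttempts + " attempts");
    }
  }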

Regards

> IQ should throw different exceptions for different errors
> -
>
> Key: KAFKA-5876
> URL: https://issues.apache.org/jira/browse/KAFKA-5876
> Project: Kafka
>  Issue Type: Task
>  Components: streams
>Reporter: Matthias J. Sax
>Assignee: Vito Jeng
>Priority: Major
>  Labels: needs-kip, newbie++
> Fix For: 1.2.0
>
>
> Currently, IQ only throws {{InvalidStateStoreException}} for all errors that 
> occur. However, we have different types of errors and should throw different 
> exceptions for those types.
> For example, if a store was migrated it must be rediscovered, while if a store 
> cannot be queried yet because it is still being re-created after a rebalance, 
> the user just needs to wait until store re-creation is finished.
> There might be other examples, too.
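A hypothetical sketch of what the split could look like from the caller's side; the 
subclass names below are placeholders invented for illustration, not an existing 
Kafka API.

  import org.apache.kafka.streams.errors.InvalidStateStoreException;

  public class DistinctIqErrors {
    // Placeholder subclasses standing in for the split this ticket requests.
    static class StoreMigratedException extends InvalidStateStoreException {
      StoreMigratedException(String msg) { super(msg); }
    }
    static class StoreNotYetReadyException extends InvalidStateStoreException {
      StoreNotYetReadyException(String msg) { super(msg); }
    }

    static void handle(InvalidStateStoreException e) throws InterruptedException {
      if (e instanceof StoreMigratedException) {
        // Store moved to another instance: rediscover its host via the streams metadata.
      } else if (e instanceof StoreNotYetReadyException) {
        // Store is still being restored after a rebalance: back off, then retry the query.
        Thread.sleep(500);
      } else {
        throw e; // unknown store name or other fatal condition
      }
    }
  }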



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (KAFKA-6497) streams#store ambiguous InvalidStateStoreException

2018-01-30 Thread Pegerto Fernandez (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16344757#comment-16344757
 ] 

Pegerto Fernandez commented on KAFKA-6497:
--

Hello [~mjsax]

Yes, it does. I am sorry for the duplication; I did a search before raising the 
ticket but failed to locate KAFKA-5876.

I am linking and closing this ticket.

> streams#store ambiguous InvalidStateStoreException
> --
>
> Key: KAFKA-6497
> URL: https://issues.apache.org/jira/browse/KAFKA-6497
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Pegerto Fernandez
>Priority: Major
>
> When using the Streams API and deploying new materialised views, access to the 
> store always throws InvalidStateStoreException. This can be caused by a rebalance, 
> but it is also caused by the topology that creates the view not having run yet, 
> for example on an empty topic.
>  
> In this case, when the topology is running but the local store does not contain 
> any data, the behaviour should be different: for example, wait versus assume the 
> expected key is not found.
>  
> I am relatively new to streams, so please correct me if I missed something.
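For illustration (assumed store name "view-store", not code from this ticket): once 
the topology is running and the store is queryable, a missing key comes back as null 
from get(), whereas InvalidStateStoreException is only raised while the store is not 
queryable at all, which is the distinction this report asks to surface.

  import org.apache.kafka.streams.KafkaStreams;
  import org.apache.kafka.streams.errors.InvalidStateStoreException;
  import org.apache.kafka.streams.state.QueryableStoreTypes;
  import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

  public class MaterialisedViewLookup {
    static String lookup(KafkaStreams streams, String key) {
      try {
        ReadOnlyKeyValueStore<String, String> store =
            streams.store("view-store", QueryableStoreTypes.<String, String>keyValueStore());
        // Topology running, store queryable but empty: this is simply null, not an exception.
        return store.get(key);
      } catch (InvalidStateStoreException e) {
        // Store not queryable: rebalancing, not created, or not assigned to this instance.
        return null;
      }
    }
  }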



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6497) streams#store ambiguous InvalidStateStoreException

2018-01-30 Thread Pegerto Fernandez (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Pegerto Fernandez resolved KAFKA-6497.
--
Resolution: Duplicate

> streams#store ambiguous InvalidStateStoreException
> --
>
> Key: KAFKA-6497
> URL: https://issues.apache.org/jira/browse/KAFKA-6497
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Pegerto Fernandez
>Priority: Major
>
> When using the Streams API and deploying new materialised views, access to the 
> store always throws InvalidStateStoreException. This can be caused by a rebalance, 
> but it is also caused by the topology that creates the view not having run yet, 
> for example on an empty topic.
>  
> In this case, when the topology is running but the local store does not contain 
> any data, the behaviour should be different: for example, wait versus assume the 
> expected key is not found.
>  
> I am relatively new to streams, so please correct me if I missed something.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-6497) streams#store ambiguous InvalidStateStoreException

2018-01-29 Thread Pegerto Fernandez (JIRA)
Pegerto Fernandez created KAFKA-6497:


 Summary: streams#store ambiguous InvalidStateStoreException
 Key: KAFKA-6497
 URL: https://issues.apache.org/jira/browse/KAFKA-6497
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 1.0.0
Reporter: Pegerto Fernandez


When using the Streams API and deploying new materialised views, access to the 
store always throws InvalidStateStoreException. This can be caused by a rebalance, 
but it is also caused by the topology that creates the view not having run yet, 
for example on an empty topic.

In this case, when the topology is running but the local store does not contain 
any data, the behaviour should be different: for example, wait versus assume the 
expected key is not found.

I am relatively new to streams, so please correct me if I missed something.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)