2019-09-26 11:06:34 UTC - Christophe Bornet: Great news!
----
2019-09-26 11:36:01 UTC - Rajiv Abraham: Thanks for the explanation!
----
2019-09-26 13:34:41 UTC - Jesse Zhang (Bose): @Matteo Merli, it is great that 
we can use negativeAcks with server 2.3.1. I have tested it, and it works. 
However, I noticed the `NackRedeliveryDelay` is not behaving as expected: I set 
a 200-second delay, but messages are being redelivered after only ~57 seconds.

See my log:
```
1 - ID: (10,0,-1,-1)    // at second 1, processed message #0
2 - ID: (10,1,-1,-1)
3 - ID: (10,2,-1,-1)
4 - ID: (10,3,-1,-1)
5 - ID: (10,4,-1,-1)
6 - ID: (10,5,-1,-1)
7 - ID: (10,6,-1,-1)
8 - ID: (10,7,-1,-1)
9 - ID: (10,8,-1,-1)
10 - ID: (10,9,-1,-1)   // at second 10, processed message #9

68 - ID: (10,9,-1,-1)   // at second 68, processed message #9 again: delay was only 57 seconds
69 - ID: (10,0,-1,-1)   // at second 69, processed message #0 again: delay was only 68 seconds
70 - ID: (10,1,-1,-1)
71 - ID: (10,2,-1,-1)
72 - ID: (10,3,-1,-1)
73 - ID: (10,4,-1,-1)
74 - ID: (10,5,-1,-1)
75 - ID: (10,6,-1,-1)
76 - ID: (10,7,-1,-1)
```
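For context, a minimal sketch of the setup being described, using the Pulsar Java 
client. The service URL, topic, and subscription names are illustrative 
assumptions, not from the original report:
```java
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class NackDelayRepro {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // assumed broker URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("my-topic")                       // hypothetical topic
                .subscriptionName("my-sub")              // hypothetical subscription
                .subscriptionType(SubscriptionType.Shared)
                // the delay under discussion: nacked messages should come back
                // after 200 seconds, not ~57
                .negativeAckRedeliveryDelay(200, TimeUnit.SECONDS)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        // negative-ack instead of acknowledge, so the broker redelivers later
        consumer.negativeAcknowledge(msg);
    }
}
```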
----
2019-09-26 13:49:52 UTC - Asif Ameer: @Asif Ameer has joined the channel
----
2019-09-26 13:55:41 UTC - Jesse Zhang (Bose): Every time it retries, the 
`availablePermits` drops by the number of retried messages. In my case I have 
10 messages retrying; after 5 rounds I see `"availablePermits": 950`.
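For reference, the same counter can be read programmatically. A rough sketch 
against the Java admin client; the admin URL and topic name are assumptions, 
and the direct field access matches the 2.x-era stats objects:
```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.TopicStats;

public class PermitsCheck {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")  // assumed admin endpoint
                .build();

        TopicStats stats =
                admin.topics().getStats("persistent://public/default/my-topic");
        // each consumer reports its own availablePermits; watching this across
        // redelivery cycles shows whether permits are being returned
        stats.subscriptions.forEach((subName, sub) ->
                sub.consumers.forEach(c -> System.out.println(
                        subName + ": availablePermits=" + c.availablePermits)));

        admin.close();
    }
}
```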
----
2019-09-26 15:39:51 UTC - Jesse Zhang (Bose): Another finding: if I run the 
same client as 2 instances on the same shared subscription, I can see BOTH 
clients retrying the same messages independently. I understand Pulsar tries to 
redeliver the nacked messages to another client, but I have seen cases where 
my message was delivered to more than 1 client at the same moment for 
processing. Is this expected behavior?
----
2019-09-26 16:43:38 UTC - Vladimir Shchur: Hi, will there be a video? I'd like 
some context about Jiffy; I couldn't find any information about it.
----
2019-09-26 17:18:16 UTC - Matteo Merli: Within a single shared subscription, 
the message will not be given at the same time to 2 consumers, though after a 
nack, it might be replayed to a different consumer.
+1 : Jesse Zhang (Bose)
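To make that guarantee concrete, here is a rough sketch of two consumers on one 
shared subscription (all names and the URL are illustrative assumptions): at any 
moment a message is outstanding on at most one of them, but a nack may move it 
to the other:
```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class SharedSubscriptionSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")    // assumed broker URL
                .build();

        Consumer<byte[]> c1 = shared(client, "consumer-1");
        Consumer<byte[]> c2 = shared(client, "consumer-2");

        Message<byte[]> msg = c1.receive();
        // after this nack the message may be replayed to c1 OR c2,
        // but never to both at the same time
        c1.negativeAcknowledge(msg);
    }

    static Consumer<byte[]> shared(PulsarClient client, String name)
            throws Exception {
        return client.newConsumer()
                .topic("my-topic")                        // hypothetical topic
                .subscriptionName("my-shared-sub")        // same subscription for both
                .subscriptionType(SubscriptionType.Shared)
                .consumerName(name)
                .subscribe();
    }
}
```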
----
2019-09-26 17:42:05 UTC - Jesse Zhang (Bose): @Matteo Merli, thanks for the 
reply.  Do you have some ideas on the `NackRedeliveryDelay` issue I reported?
----
2019-09-26 18:00:16 UTC - Nicolas Ha: I am just adding tracing to my app, and 
it turns out sending a new message is pretty fast (~6ms), but the bulk of the 
time is spent creating a `newConsumer` or a `newProducer` (~20ms to 30ms). 
Are those numbers expected?
(I am using the synchronous creation methods; my understanding is that the 
async versions would scale better but would not change the performance on 
small loads.)
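For comparison, a minimal sketch of the async creation path (broker URL and 
topic are assumptions). It moves the ~20-30ms handshake off the calling thread 
rather than making it cheaper, which is why producers and consumers are 
normally created once and reused:
```java
import java.util.concurrent.CompletableFuture;

import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class AsyncCreateSketch {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // assumed broker URL
                .build();

        // createAsync() returns immediately; the broker round-trip completes
        // the future in the background
        CompletableFuture<Producer<byte[]>> producerFuture = client.newProducer()
                .topic("my-topic")                       // hypothetical topic
                .createAsync();

        producerFuture
                .thenCompose(p -> p.sendAsync("hello".getBytes()))
                .thenAccept(msgId -> System.out.println("sent " + msgId))
                .join();

        client.close();
    }
}
```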
----
2019-09-26 18:10:43 UTC - Gilberto Muñoz Hernández: hi @Matteo Merli, any news 
about this bug?
----
2019-09-26 18:13:43 UTC - Matteo Merli: It was fixed in 
<https://github.com/apache/pulsar/pull/5152>, which is scheduled for the 2.4.2 
release; that should come out shortly.
----
2019-09-26 20:24:54 UTC - Naby: @Naby has joined the channel
----
2019-09-26 23:38:29 UTC - Poule: `[CRITICAL] python_instance.py: Haven't 
received health check from spawner in a while. Stopping instance...`
----
2019-09-27 01:58:43 UTC - Poule: I tried this:
```
bookkeeper shell recover 192.168.5.55:2181 192.168.5.55:3181
JMX enabled by default
The provided bookie dest 192.168.5.55:3181 will be ignored!
Bookies : [192.168.5.55:2181]
Are you sure to recover them : (Y/N) (Y or N) Y
```
and I get:
```
21:56:21.413 [main] INFO  org.apache.zookeeper.ZooKeeper - Initiating client 
connection, connectString=localhost:2181 sessionTimeout=30000 
watcher=org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase@5b619d14
21:56:21.431 [main-SendThread(localhost:2181)] INFO  
org.apache.zookeeper.ClientCnxn - Opening socket connection to server 
localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown 
error)
```
----
2019-09-27 03:26:39 UTC - Chao Wang: @Chao Wang has joined the channel
----
2019-09-27 03:34:42 UTC - Chao Wang: Hi all, nice to join this fantastic group 
here.
wave : Matteo Merli, Ali Ahmed, Poule
hugging_face : Poule
----
2019-09-27 04:24:14 UTC - Poule: ok.. what do I do with that:
```
        Ideal = [0, 1]
        Live = []
        Pending = []
        ToStart = [0, 1]
        ToStop = []
Caused by: org.apache.bookkeeper.statelib.api.exceptions.StateStoreException: 
Failed to restore rocksdb 
000000000000000001/000000000000000001/000000000000000000
        at 
org.apache.bookkeeper.statelib.impl.rocksdb.checkpoint.RocksCheckpointer.restore(RocksCheckpointer.java:84)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]
        at 
org.apache.bookkeeper.statelib.impl.kv.RocksdbKVStore.loadRocksdbFromCheckpointStore(RocksdbKVStore.java:161)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]
        at 
org.apache.bookkeeper.statelib.impl.kv.RocksdbKVStore.init(RocksdbKVStore.java:223)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]
        at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$initializeLocalStore$5(AbstractStateStoreWithJournal.java:202)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]
        at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$executeIO$16(AbstractStateStoreWithJournal.java:471)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]
        ... 12 more
Caused by: java.io.FileNotFoundException: 
000000000000000001/000000000000000001/000000000000000000/checkpoints/eb6c9de5-fce8-4ae2-a424-6e5780a66245/metadata
```
----
2019-09-27 04:24:42 UTC - Poule: is it `bookkeeper shell recover`?
----
2019-09-27 04:27:01 UTC - Poule: currently running `bookkeeper autorecovery` ...
----
2019-09-27 04:53:18 UTC - Poule: the above error then makes the Docker 
container die with
```
04:48:58.883 [main] ERROR org.apache.pulsar.PulsarStandaloneStarter - Failed to 
start pulsar service.
io.grpc.StatusRuntimeException: INTERNAL: http2 exception
```
----
2019-09-27 04:53:47 UTC - Poule: well, I guess it's a consequence of the 
RocksDB restore problem
----
2019-09-27 04:54:17 UTC - yaotailin: @yaotailin has joined the channel
----
2019-09-27 05:31:01 UTC - Poule: how can I configure standalone.conf so the 
main process does not kill itself before I fix the RocksDB problem? I've 
changed some settings, with no luck.
----
2019-09-27 06:04:27 UTC - Nicolas Ha: 
<https://pulsar.apache.org/api/client/org/apache/pulsar/client/api/MessageId.html#fromByteArrayWithTopic-byte:A-TopicName->
`fromByteArray` and `fromByteArrayWithTopic` - is there any reason to prefer 
one over the other?
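For what it's worth, a rough sketch of the round-trip with both methods (broker 
URL, topic, and subscription names are assumptions). The `WithTopic` variant 
re-attaches the topic to the deserialized id, which matters when a consumer 
spans multiple or partitioned topics and the bytes alone don't say which topic 
the id belongs to:
```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;

public class MessageIdRoundTrip {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // assumed broker URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("my-topic")                       // hypothetical topic
                .subscriptionName("my-sub")              // hypothetical subscription
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        byte[] bytes = msg.getMessageId().toByteArray(); // e.g. to store externally

        // enough when the topic is already known from context
        MessageId plain = MessageId.fromByteArray(bytes);

        // keeps the topic association, for multi-topic/partitioned consumers
        MessageId qualified = MessageId.fromByteArrayWithTopic(bytes, "my-topic");

        consumer.acknowledge(plain);                     // ack by the restored id
        client.close();
    }
}
```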
----
