2020-05-25 09:19:07 UTC - Adelina Brask: Thanks a lot guys, you are the best.
----
2020-05-25 09:22:35 UTC - Frans Guelinckx: Also, I opened a ticket with Debezium for this: <https://issues.redhat.com/browse/DBZ-2112>
----
2020-05-25 11:09:44 UTC - Luke Stephenson: The compacted topic is large (over a billion messages). As such I had planned to partition the topic, both so publishers can produce messages faster and to distribute the load on the brokers. The entities in the compacted topic are distinct, and ordering only needs to be guaranteed for all versions of an entity with the same id (e.g. if it were a user bank account balance, I can handle state for different bank accounts out of order, but all state for a single account needs to be in order and received by the same consumer).
From my Kafka background, I'd initially just assumed I'd spin up multiple consumers to handle one subscription and each one would be assigned a different partition. However, that didn't appear to be supported.
----
2020-05-25 11:31:11 UTC - feynmanlin: @feynmanlin has joined the channel
----
2020-05-25 12:44:32 UTC - Adriaan de Haan: @Adriaan de Haan has joined the 
channel
----
2020-05-25 12:46:38 UTC - Adriaan de Haan: Hi all, I am just starting out with Pulsar and have a basic question about subscriptions and the way the backlog is managed.
----
2020-05-25 12:50:30 UTC - Adriaan de Haan: I am using Pulsar as a task queue for a few different workers, and I want each task added to the topic to be processed once. It works simply and easily: using a shared subscription, new workers can join and others can leave, and they keep consuming correctly from the cursor maintained by Pulsar.
The problem, however, is when all the consumer applications die - when the last of them dies, the subscription is removed and the cursor is lost; after the application is started up again it starts at the latest message and all intermediate messages are lost.
----
2020-05-25 12:51:31 UTC - Adriaan de Haan: I looked into starting from the "earliest" message instead - but then I get double-processing because processed messages are not deleted from the topic immediately.
----
2020-05-25 12:52:15 UTC - Adriaan de Haan: The easy solution is to have a 
"permanent" subscription that remains even after my consumer applications are 
all shut down - is there an easy way to achieve this?
----
2020-05-25 12:52:41 UTC - Adriaan de Haan: I am using the Java client
----
2020-05-25 12:53:25 UTC - Patrik Kleindl: We did a similar thing with Kafka and 
provided a central topic with the state of a business object for multiple 
downstream applications.
Compaction was used to keep the storage growth in check and keep the last 
version of the business object for reference.
So in our case there would have to be multiple key-shared subscriptions on this 
topic.
----
2020-05-25 12:59:16 UTC - Adriaan de Haan: Yeah, I've just implemented exactly 
that (sending the reply topic as part of the message) - but the reply topic is 
long lived since basically every new application that "binds" and starts using 
the service will be sending and receiving a lot of messages. So I'm lazily 
instantiating and maintaining the producers for the topics.
----
2020-05-25 13:02:10 UTC - Penghui Li: > The problem however is when all the consumer applications die - when the last of them dies, the subscription is removed and the cursor is lost; after it is started up again it starts at the latest message and all intermediate messages are lost.
The cursor will not be lost after all consumers have died. The `earliest` or `latest` setting only applies the first time the subscription is created.
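For illustration, a minimal Java client sketch (the topic and subscription names here are made up): it creates a durable shared subscription that starts from the earliest message only the first time the subscription is created; on later restarts the broker resumes from the persisted cursor.

```java
import org.apache.pulsar.client.api.*;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// The initial position only matters when the subscription does not exist yet;
// afterwards the broker keeps the cursor for this durable subscription.
Consumer<byte[]> consumer = client.newConsumer()
        .topic("persistent://public/default/tasks")        // hypothetical topic
        .subscriptionName("task-workers")                   // hypothetical subscription
        .subscriptionType(SubscriptionType.Shared)
        .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
        .subscribe();
```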
----
2020-05-25 13:04:34 UTC - Penghui Li: Are you using a `non-persistent` topic?
----
2020-05-25 14:02:42 UTC - Tymm: Hi there, I am running Pulsar standalone with around 10 instances of a function on a server with 16 GB of RAM. There are around 100 messages published to the server from multiple sources (all using WebSocket to publish), and the server is constantly using close to full memory while CPU usage is quite low. Is this expected? Should I upgrade my server or look into the code of the functions? Thanks
----
2020-05-25 14:31:25 UTC - Tymm: Hi guys, I get the following error when I run "sudo bin/pulsar-admin functions list":
`null`
`Reason: javax.ws.rs.ProcessingException: Connection refused: localhost/127.0.0.1:80`

and in the pulsar standalone log:
`java.util.concurrent.CompletionException: 
org.apache.bookkeeper.statelib.api.exceptions.StateStoreException: Failed to 
restore rocksdb 000000000000000000/000000000000000000/000000000000000000`
        `at 
java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:292)
 ~[?:1.8.0_252]`
        `at 
java.util.concurrent.CompletableFuture.completeThrowable(CompletableFuture.java:308)
 ~[?:1.8.0_252]`
        `at 
java.util.concurrent.CompletableFuture.uniCompose(CompletableFuture.java:957) 
~[?:1.8.0_252]`
        `at 
java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:940)
 ~[?:1.8.0_252]`
        `at 
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488) 
~[?:1.8.0_252]`
        `at 
java.util.concurrent.CompletableFuture.completeExceptionally(CompletableFuture.java:1990)
 ~[?:1.8.0_252]`
        `at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$executeIO$16(AbstractStateStoreWithJournal.java:474)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_252]`
        `at 
com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
 [com.google.guava-guava-25.1-jre.jar:?]`
        `at 
com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:57)
 [com.google.guava-guava-25.1-jre.jar:?]`
        `at 
com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
 [com.google.guava-guava-25.1-jre.jar:?]`
        `at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
[?:1.8.0_252]`
        `at java.util.concurrent.FutureTask.run(FutureTask.java:266) 
[?:1.8.0_252]`
        `at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:1.8.0_252]`
        `at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:1.8.0_252]`
        `at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_252]`
        `at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_252]`
        `at 
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
 [io.netty-netty-all-4.1.32.Final.jar:4.1.32.Final]`
        `at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]`
`Caused by: org.apache.bookkeeper.statelib.api.exceptions.StateStoreException: 
Failed to restore rocksdb 
000000000000000000/000000000000000000/000000000000000000`
        `at 
org.apache.bookkeeper.statelib.impl.rocksdb.checkpoint.RocksCheckpointer.restore(RocksCheckpointer.java:84)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.kv.RocksdbKVStore.loadRocksdbFromCheckpointStore(RocksdbKVStore.java:161)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.kv.RocksdbKVStore.init(RocksdbKVStore.java:223)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$initializeLocalStore$5(AbstractStateStoreWithJournal.java:202)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$executeIO$16(AbstractStateStoreWithJournal.java:471)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `... 12 more`
`Caused by: java.io.FileNotFoundException: 000000000000000000/000000000000000000/000000000000000000/checkpoints/ec0064e2-56e5-44b3-9388-5d81e653df1c/metadata`
        `at 
org.apache.bookkeeper.statelib.impl.rocksdb.checkpoint.dlog.DLCheckpointStore.openInputStream(DLCheckpointStore.java:92)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.rocksdb.checkpoint.RocksCheckpointer.getLatestCheckpoint(RocksCheckpointer.java:117)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.rocksdb.checkpoint.RocksCheckpointer.restore(RocksCheckpointer.java:52)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.kv.RocksdbKVStore.loadRocksdbFromCheckpointStore(RocksdbKVStore.java:161)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.kv.RocksdbKVStore.init(RocksdbKVStore.java:223)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$initializeLocalStore$5(AbstractStateStoreWithJournal.java:202)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `at 
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal.lambda$executeIO$16(AbstractStateStoreWithJournal.java:471)
 ~[org.apache.bookkeeper-statelib-4.9.2.jar:4.9.2]`
        `... 12 more`
`14:27:44.740 [io-write-scheduler-OrderedScheduler-1-0] INFO  
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal - 
closing async state store 
000000000000000000/000000000000000001/000000000000000000`
`14:27:44.741 [io-read-scheduler-OrderedScheduler-1-0] INFO  
org.apache.bookkeeper.statelib.impl.journal.AbstractStateStoreWithJournal - 
Successfully close the log stream of state store 
000000000000000000/000000000000000001/000000000000000000`

Please advise. Thanks.
----
2020-05-25 14:41:49 UTC - Adriaan de Haan: I am using a persistent topic
----
2020-05-25 14:42:12 UTC - Adriaan de Haan: From my testing that was not the case
----
2020-05-25 14:43:29 UTC - Adriaan de Haan: I sent msg1, msg2, msg3, created a subscription (with earliest) and it read msg1 and msg2; then I killed the process and started it up again, and the subscription again read msg1 and msg2.
----
2020-05-25 15:12:26 UTC - Penghui Li: Did you acknowledge the received messages? If the messages are not acknowledged by the client, the broker will resend them to the client.
----
2020-05-25 15:14:23 UTC - Penghui Li: Also, message acknowledgements are accumulated into a batch before being sent to the server, so you should wait a moment before killing the process.
----
2020-05-25 15:16:14 UTC - Deepak Sah: Hi everyone, is this `$ pulsar-admin 
brokers update-dynamic-config brokerShutdownTimeoutMs 100`  a correct way of 
updating broker configuration? I’m getting this error `Was passed main 
parameter 'brokerDeleteInactiveTopicsEnabled' but no main parameter was 
defined` .
----
2020-05-25 15:19:23 UTC - Penghui Li: Which version are you using? I tried `bin/pulsar-admin brokers update-dynamic-config --config brokerShutdownTimeoutMs --value 100` on the master branch and it works.
+1 : Deepak Sah
----
2020-05-25 15:21:06 UTC - Deepak Sah: Yes, I also tried with the `--config` and `--value` flags and it worked. The command I pasted was from the docs.
----
2020-05-25 15:23:31 UTC - Penghui Li: Could you please paste the doc link here 
or create an issue/PR for fixing the related doc?
----
2020-05-25 15:23:59 UTC - Deepak Sah: I am creating a PR for it
----
2020-05-25 15:25:49 UTC - Penghui Li: Great, thanks!
+1 : Deepak Sah
----
2020-05-25 15:38:18 UTC - Sijie Guo: > From my Kafka background, I'd initially just assumed I'd spin up multiple consumers to handle one subscription and each one would be assigned a different partition. However that didn't appear to be supported.
If you use a partitioned topic, the data is already partitioned. Each partition will be compacted if you enable compaction, so each partition has its own compacted state.
----
2020-05-25 15:38:51 UTC - Sijie Guo: Then you can read the compacted topic using a Failover subscription.
----
2020-05-25 15:39:05 UTC - Sijie Guo: This would achieve the exact same thing as 
you did in Kafka.
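Roughly, a Java consumer for that setup could look like the sketch below (topic and subscription names are made up; `readCompacted(true)` is optional and only allowed on Exclusive/Failover subscriptions):

```java
import org.apache.pulsar.client.api.*;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

// On a partitioned topic, a Failover subscription spreads the partitions over
// the connected consumers, and per-key ordering is preserved within a partition.
// readCompacted(true) reads the compacted view (latest message per key) where
// compaction has already run.
Consumer<byte[]> consumer = client.newConsumer()
        .topic("persistent://public/default/accounts")   // hypothetical partitioned topic
        .subscriptionName("account-state")                // hypothetical subscription
        .subscriptionType(SubscriptionType.Failover)
        .readCompacted(true)
        .subscribe();
```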
----
2020-05-25 15:40:08 UTC - Sijie Guo: @Patrik Kleindl

> in our case there would have to be multiple key-shared subscriptions on this topic.
Does Failover subscription work for you? Or do you need key_shared subscription 
for your use case?
----
2020-05-25 16:22:57 UTC - Deepak Sah: What version of Pulsar are you using? I'm getting a `Remotely Closed` error
----
2020-05-25 16:22:58 UTC - Adriaan de Haan: thanks! it seems my acks were not 
happening!
----
2020-05-25 16:23:22 UTC - Adriaan de Haan: It is a shared subscription - for those the acks don't get batched, right?
----
2020-05-25 16:24:36 UTC - Adriaan de Haan: thanks a lot for helping out a 
newbie with a stupid mistake!
----
2020-05-25 16:52:31 UTC - Tymm: 2.4.2
----
2020-05-25 17:20:05 UTC - Chris DiGiovanni: @Jared Mackey I'm seeing the issue in our production instance running 2.4.1 and in our development instance running 2.5.2.

@Tanner Nilsson Yes, the metrics endpoint still responds. Right now we have a Prometheus alert for 5xx on the proxies, which gives us a heads-up that something is wrong.
----
2020-05-25 18:26:21 UTC - Sijie Guo: Are you using ZooKeeper for broker discovery, or setting brokerServiceURL?
----
2020-05-25 18:30:31 UTC - Chris DiGiovanni: We are using the zookeeperServers config field for broker discovery. The brokerServiceURL and brokerWebServiceURL config params are left blank.
----
2020-05-25 18:32:22 UTC - Chris DiGiovanni: This Friday we'll enable debug logs for our development 2.5.2 environment, as the default logging hasn't been enough to point to what the actual issue may be. Since the issue hasn't occurred in dev over the weekend, I'm thinking this has to be load related.
----
2020-05-25 20:52:26 UTC - Nicolas Ha: Hello :)
I have been reading <https://github.com/apache/pulsar/wiki/PIP-13:-Subscribe-to-topics-represented-by-regular-expressions>
My understanding is that the regex consumer wraps other consumers, but how does it deal with the cursor / reader?
----
2020-05-25 23:21:40 UTC - Sijie Guo: If you are running on k8s, I would suggest trying brokerServiceURL instead of ZooKeeper servers.
----
2020-05-25 23:22:13 UTC - Sijie Guo: I suspect the issues you are facing are related to ZooKeeper session expiries.
----
2020-05-25 23:23:01 UTC - Sijie Guo: The regex consumer is just a wrapper over multiple consumers. It discovers topics and creates corresponding consumers.
----
2020-05-25 23:23:19 UTC - Sijie Guo: So the acknowledgement actually happens on the topic the message belongs to.
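A small sketch of the regex subscription for reference (the pattern and subscription name are made up); each matching topic gets its own internal consumer and cursor, and an acknowledge is applied on the topic the message came from:

```java
import java.util.regex.Pattern;
import org.apache.pulsar.client.api.*;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")
        .build();

Consumer<byte[]> consumer = client.newConsumer()
        .topicsPattern(Pattern.compile("persistent://public/default/events-.*"))  // hypothetical pattern
        .patternAutoDiscoveryPeriod(1)      // re-check for new matching topics every minute
        .subscriptionName("regex-sub")      // hypothetical subscription
        .subscribe();

Message<byte[]> msg = consumer.receive();
consumer.acknowledge(msg);   // the ack is routed to the underlying consumer for msg's topic
```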
----
2020-05-26 02:54:51 UTC - Deepa: @Sijie Guo - can you please help with your inputs on the above query?
----
2020-05-26 04:47:25 UTC - Raman Gupta: I'm in the process of decommissioning a bookie. I found these docs: <https://bookkeeper.apache.org/docs/latest/admin/decomission/> -- however the decommission command has been running for a long time, even without much storage to deal with. I also see that step 1 of this procedure is "Stop the bookie", which is odd to me: how are the ledgers the bookie contains supposed to be copied to another bookie if the bookie is stopped?
----
2020-05-26 06:02:25 UTC - Sijie Guo: Ping/pong happens on both sides.
----
2020-05-26 06:03:20 UTC - Sijie Guo: The auto-recovery process will read the data from the remaining replicas in order to re-replicate it.
----
2020-05-26 06:03:50 UTC - Sijie Guo: Did you enable auto-recovery?
----
2020-05-26 06:04:23 UTC - Sijie Guo: Auto-recovery should be enabled before running decommissioning.
----
2020-05-26 06:12:30 UTC - Tymm: Hello, I'm running 2.5.2 standalone, however when I run "sudo bin/pulsar-admin functions list", I get the following error:
null

Reason: javax.ws.rs.ProcessingException: Connection refused: localhost/127.0.0.1:8080

Please advise, thanks
----
2020-05-26 06:14:15 UTC - Raman Gupta: I did not. Looks like it is another 
process I need to run? Is there a way to simply configure my bookies to run 
auto-recovery in-process?
----
2020-05-26 06:15:36 UTC - Ken Huang: Hi, I want to create a function using the REST API. After reading the documentation, I still don't know how to set the request body - is there any sample?
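Not a raw REST payload, but if the Java admin client is an option, it wraps the same functions endpoint; a minimal sketch (the function name, class, topics, and jar path are all made up):

```java
import java.util.Collections;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.functions.FunctionConfig;

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")
        .build();

FunctionConfig config = new FunctionConfig();
config.setTenant("public");
config.setNamespace("default");
config.setName("my-function");                              // hypothetical function name
config.setClassName("com.example.MyFunction");              // hypothetical function class
config.setInputs(Collections.singletonList("persistent://public/default/in"));
config.setOutput("persistent://public/default/out");

// Uploads the jar and creates the function via the admin REST API.
admin.functions().createFunction(config, "/path/to/my-function.jar");  // hypothetical jar path
```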
----
2020-05-26 06:16:47 UTC - Raman Gupta: I see `autoRecoveryDaemonEnabled`. However, is it a better architecture / safer to run it separately?
----
2020-05-26 06:36:56 UTC - Patrik Kleindl: Thanks, I think I understand it now; at the partition level that should be equivalent. Just to verify one thought: if you run multiple failover subscriptions on e.g. 2 instances and they use the same consumer name in all subscriptions, then all the messages would go to a single instance and the other would be idle. You could only balance this by manually "balancing" the consumer names between the instances, right?
Our use case was only an example; we are doing a lot of stuff with Kafka Streams, which is not yet an option with KOP, but it would be an interesting experiment.
----
2020-05-26 08:08:10 UTC - Raman Gupta: Never mind, I've followed the Helm chart example and deployed it as a separate StatefulSet. The auto-recovery process seems to be working, but decommissioning is still not working. In the recovery process I see lots of errors like `NotEnoughBookiesException` and `Error while recovering ledger`. All of my Pulsar namespaces have been updated with ensemble/write/ack settings that should be compatible with the smaller bookie cluster. What is going on?
----
2020-05-26 08:10:34 UTC - Raman Gupta: Also is it normal that the recovery 
process keeps exiting (normally, with exit code 0)?
----
2020-05-26 08:26:58 UTC - Raman Gupta: Looking at the ledger metadata, there 
are still many that have a larger ensemble size than the updated policy setting 
in Pulsar. Maybe Pulsar only applies those policies to *new* ledgers? How can I 
reduce my cluster size?
----
2020-05-26 08:38:16 UTC - Patrik Kleindl: Hi, regarding the state used in Pulsar Functions via BookKeeper:
Does the RocksDB on the BK side contain the actual K/V pair or only a reference to where it is stored?
And is it one global RocksDB or one per state?
The main reason I am asking is whether this is viable for state-heavy processing, as my experience with Kafka Streams is that the memory used by RocksDB can get out of hand quickly and sits outside JVM memory… especially when you have multiple instances of RocksDB on a single machine.
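For context, this is the state API in question on the function side; a minimal sketch (the class name and state keys are made up), with the state persisted by BookKeeper's table service:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Stateful function sketch: state is accessed through the Context and keyed
// per function; putState works on ByteBuffers, incrCounter on longs.
public class LastValueFunction implements Function<String, Void> {
    @Override
    public Void process(String input, Context context) {
        context.incrCounter("seen", 1);          // hypothetical counter key
        context.putState("last-value",           // hypothetical state key
                ByteBuffer.wrap(input.getBytes(StandardCharsets.UTF_8)));
        return null;
    }
}
```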
----
