2020-04-22 12:35:45 UTC - Ben: Hi all, thanks for Pulsar, I was previously 
using Kafka and I had all sorts of problems due to the fact that my workers 
take a variable length of time and I had to constantly work around the 
high-water-mark acking. Being able to ack messages individually is a real 
bonus. Multiple consumers on a single shared topic is also great.

I do have a question: I have a large pool of workers accessible over a 
high-latency connection, who connect into my cluster through a proxy. When I 
start the pool of workers up, they all connect in to the broker, but initially 
one of them gets all the jobs and the others get starved. The number of 
available permits on the worker that gets all the jobs goes massively negative 
at this time. After a while, things seem to sort themselves out a bit, but is 
there anything I can do to prevent this behaviour?
----
2020-04-22 12:37:57 UTC - Ben: Here's a screenshot of the Pulsar dashboard, 
taken 12 minutes after the workers have started. You can see things have 
started to stabilize but there are still a couple of workers which have 
massively exceeded their received queue size (which is 10).
----
2020-04-22 12:39:29 UTC - Ben: I am running Pulsar 2.5.0. Wondering if this 
could be related to <https://github.com/apache/pulsar/issues/6054>.
----
2020-04-22 12:41:09 UTC - Ben: Here's the output of `pulsar-admin topics 
stats-internal` on the topic.
----
2020-04-22 12:43:34 UTC - Chris Bartholomew: @Ben I suggest shrinking the 
receiverQueueSize on your consumers. By default it is 1000. It looks like that 
first consumer is grabbing all the messages. You may want to shrink the queue 
size down to 1 if you have a lot of consumers and an unreliable connection so 
that a consumer can only have 1 message to work on at a time. That way you 
should get better work distribution.
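For reference, here is a minimal sketch of that configuration with the Java client (Ben is on the Go client, so the builder names will differ slightly); the service URL, topic, and subscription names below are placeholders:
```java
import org.apache.pulsar.client.api.*;

public class SmallQueueWorker {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")            // placeholder service URL
                .build();

        // Shared subscription with a tiny receiver queue: each worker can only
        // prefetch one message at a time, so the broker spreads work more evenly.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/jobs")        // placeholder topic
                .subscriptionName("workers")                      // placeholder subscription
                .subscriptionType(SubscriptionType.Shared)
                .receiverQueueSize(1)
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        // ... process the task ...
        consumer.acknowledge(msg);

        consumer.close();
        client.close();
    }
}
```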
----
2020-04-22 12:44:51 UTC - Ben: @Chris Bartholomew Thanks -- the 
receiverQueueSize on these consumers is already set to 10. I tried using a 
non-partitioned topic and a receiverQueueSize of 0 (well, -1 in the go client, 
0 in the underlying C library) but that seemed to make the effect worse.
----
2020-04-22 12:47:25 UTC - Ben: The other thing I'm confused about, which might 
be relevant, is that according to my publish stats there should be about 3-4 
million messages on this topic waiting to be actioned, but the backlog is 
showing much less than that.
----
2020-04-22 12:53:39 UTC - Chris Bartholomew: Are you using message batching 
when producing the messages?
----
2020-04-22 13:20:06 UTC - Ben: Yes: batches of max 1000 with a 1s max delay. 
I'm checking the error in the callback on SendAsync(), and crashing if there's 
an error. That bit seems to be working. Also on the producer, I'm using LZ4 
compression and a SendTimeout of -1 (infinite), with BlockIfQueueFull set true.
----
2020-04-22 13:25:37 UTC - Ben: On the external workers, I do get frequent 
messages of the form `Destroyed producer which was not properly closed`, which 
seem to be coming from the `ProducerImpl::~ProducerImpl()` destructor in the 
C++ library. I don't see these messages on the producers which are internal to 
the cluster.
----
2020-04-22 13:26:21 UTC - Ben: None of this is a show-stopper for me, I'm just 
interested to know whether there is something I could be tuning to get smoother 
behaviour.
----
2020-04-22 13:27:31 UTC - Chris Bartholomew: When messages are batched at the 
producer, they are processed as a single message by the broker. So, a message on 
the broker can have up to 1K messages in it.
----
2020-04-22 13:27:48 UTC - Ben: Ah!
----
2020-04-22 13:28:19 UTC - Ben: That solves that mystery :thumbsup: :smile:
----
2020-04-22 13:30:08 UTC - Ben: I could stand to batch up these kinds of 
messages (which are task specifications) much less aggressively. Would that 
feed through to the workers? i.e. if I make the producer batches smaller, will 
the workers get smaller batches?
----
2020-04-22 13:30:43 UTC - Chris Bartholomew: Yes, I think so.
----
2020-04-22 13:31:19 UTC - Ben: This all makes a lot of sense now. I'm going to 
try that. Thank you very much for your help!
----
2020-04-22 13:33:14 UTC - Chris Bartholomew: Let me know how it goes.
----
2020-04-22 13:54:04 UTC - Patrik Kleindl: @Patrik Kleindl has joined the channel
----
2020-04-22 14:16:35 UTC - Patrick Schuh: @Patrick Schuh has joined the channel
----
2020-04-22 14:30:43 UTC - Ben: Yes. That was very definitely the problem, and 
your suggestion very definitely fixed it. I dropped the batching size from 1000 
to 10, for this message type. The consumers now have a very smooth load, and 
the "available permits" number is now between -40 and 20, with a median of 
around zero.
+1 : Chris Bartholomew
----
2020-04-22 14:31:25 UTC - Ben: Also, the delivery rate shown in Grafana has 
gone from being very spiky, to very smooth indeed.
----
2020-04-22 14:31:41 UTC - Ben: So that is definitely the lesson: if you want to 
consume small batches of messages, you must produce small batches of messages.
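For anyone who finds this later, here is a rough sketch of the smaller-batch producer settings using the Java client (my actual code uses the Go client, so this is only an approximate translation; the service URL and topic are placeholders):
```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.*;

public class SmallBatchProducer {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")             // placeholder service URL
                .build();

        // Keep producer batches small so consumers with a small receiver queue
        // also receive (and ack) messages in small units.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/tasks")        // placeholder topic
                .enableBatching(true)
                .batchingMaxMessages(10)                           // was 1000
                .batchingMaxPublishDelay(1, TimeUnit.SECONDS)
                .compressionType(CompressionType.LZ4)
                .blockIfQueueFull(true)
                .sendTimeout(0, TimeUnit.SECONDS)                  // 0 = no send timeout
                .create();

        producer.sendAsync("task-spec".getBytes())
                .whenComplete((msgId, ex) -> {
                    if (ex != null) {
                        ex.printStackTrace();
                        System.exit(1);                            // fail hard on publish errors
                    }
                });

        producer.flush();
        producer.close();
        client.close();
    }
}
```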
----
2020-04-22 16:00:06 UTC - Sijie Guo: Yes it is maintained. 
----
2020-04-22 16:00:46 UTC - Sijie Guo: We are working on adding the support for 
more resources.
+1 : Alexander Ursu
----
2020-04-22 19:59:05 UTC - Michael Gokey: Has anyone documented the changes that 
were necessary to implement "Transport Encryption using TLS" and client 
authentication using tokens based on JWT in Pulsar? I am having problems and 
would like to see if there is anything I missed. Thanks
----
2020-04-22 20:29:59 UTC - Pushkar Sawant: Is there any troubleshooting guide 
for operational issues with a Pulsar cluster? We started using the Pulsar service 
in late 2019 and it's been relatively stable so far. Lately we have been 
having a few operational issues. This could be related to increased usage, but in 
terms of CPU utilization the cluster is well below 25%. These are some 
of the issues we are experiencing:
1. Could not send message to broker within given timeout: I believe the timeout 
is 30 seconds. I don't see any networking or cluster issues, but producers 
started throwing this error recently.
2. Backlog quota reached: our backlog quota is set to 20 GB. Even though the 
number of messages is low, the backlog quota is reached. Peeking into the 
messages, the data doesn't look like 20 GB in size.
Has anyone experienced similar issues? We are running Pulsar 2.3.0
----
2020-04-22 20:51:10 UTC - Sijie Guo: What kind of problems did you encounter when 
following the instructions here?

<http://pulsar.apache.org/docs/en/security-tls-transport/>
----
2020-04-22 20:54:12 UTC - Sijie Guo: > Could not send message to broker 
within given timeout:
Timeouts can occur if you have high latency and items are pending in the 
producers' pending queue. So you might need to check your produce latency and 
other factors to see if you can find any clues.

> Even though the number of messages is low the backlog quota is reached.
Use the `topics stats` command to look into the stats of a given topic and find 
out if there are any dangling subscriptions.
----
2020-04-22 21:02:11 UTC - Pushkar Sawant: Thanks @Sijie Guo
For the producer pending queue, is this a configuration setting or just the 
message generation rate on the producer side?
I have been using `topics stats` to view them. We only have 1 subscription on 
these topics. For now the workaround is to clear the backlog for the subscription.
----
2020-04-22 21:18:12 UTC - Sijie Guo: • maxPendingMessages is the setting in 
your producer configuration (see the sketch below). You can also check your 
message generation rate.
• If you have 1 subscription, do you know why this subscription doesn't 
acknowledge the messages?
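As a rough illustration only, in the Java client this is set on the producer builder (a fragment that assumes a `PulsarClient client` has already been built, as in the earlier sketches; the topic name is a placeholder):
```java
Producer<byte[]> producer = client.newProducer()
        .topic("persistent://public/default/my-topic")   // placeholder topic
        .maxPendingMessages(1000)                         // size of the per-producer pending queue
        .blockIfQueueFull(true)                           // block instead of failing when the queue is full
        .create();
```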
----
2020-04-22 21:18:29 UTC - Sijie Guo: If you can share the stats with me, I can 
help you look into it as well.
----
2020-04-22 21:36:50 UTC - Pushkar Sawant: Sure. I don't have any topics in this 
particular situation right now. I will share the stats output once I have one.
----
2020-04-22 21:37:20 UTC - Sijie Guo: thanks
----
2020-04-22 22:40:10 UTC - JG: Hello, can someone tell me the difference between 
the Pulsar Reader and Presto SQL? It seems that Presto doesn't retrieve all 
streams, while the Pulsar Reader is a continuous reader...
----
2020-04-22 23:04:40 UTC - Addison Higham: hrm... is there a way to restart 
geo-replication for a given topic/namespace? I am seeing a weird issue that I 
want to try and recreate, ideally without having to restart a broker
----
2020-04-22 23:06:02 UTC - Matteo Merli: Try unloading the topic (`pulsar-admin 
topics unload $TOPIC`); that will close and re-open the topic immediately
----
2020-04-22 23:06:18 UTC - Addison Higham: it seems like multiple brokers are 
all trying to do replication of the same topic
----
2020-04-22 23:06:41 UTC - Addison Higham: and then I get a ton of `Producer 
with name 'pulsar.repl....' is already connected to topic`
----
2020-04-22 23:08:03 UTC - Addison Higham: on which cluster? the cluster doing 
the actual replication?
----
2020-04-23 00:36:02 UTC - Kai Levy: What would be the best way of accessing the 
most recent message for a topic / subscription at the time of connecting? For 
example, I want to connect a consumer to a topic and get the most recent 
message, _but not_ messages that are produced after the time of connecting. So 
ideally, I'd like to seek(latest - 1), but I don't see a way to do that with 
the api. Thanks!
----
2020-04-23 01:43:00 UTC - Carlos Olmos: @Carlos Olmos has joined the channel
----
2020-04-23 02:07:16 UTC - Carlos Olmos: Hello, I'm trying to enable the 
StreamStorageLifecycleComponent in a new Pulsar cluster. But in the logs, it 
seems like it's trying to use a ZooKeeper service on localhost, even though 
zkServers is set to an external cluster. I am getting these kinds of messages:
``` bookkeeper: 01:33:49.615 
[io-write-scheduler-OrderedScheduler-1-0-SendThread(localhost:2181)] INFO  
org.apache.zookeeper.ClientCnxn - Opening socket connection to server 
localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown 
error)
 bookkeeper: 01:33:49.606 [DLM-/stream/storage-OrderedScheduler-0-0] ERROR 
org.apache.distributedlog.bk.SimpleLedgerAllocator - Error creating ledger for 
allocating 
/stream/storage/streams_000000000000000000_000000000000000000_000000000000000000/<default>/allocation
 :
bookkeeper: 
org.apache.distributedlog.ZooKeeperClient$ZooKeeperConnectionException: Problem 
connecting to servers: localhost:2181```
Is there any additional configuration I have to add for the functions state 
storage to use the ZK cluster?
----
2020-04-23 03:34:01 UTC - Fayce: @Fayce has joined the channel
----
2020-04-23 03:47:09 UTC - Fayce: Rookie question: I installed Pulsar 2.5.0 on 
my Ubuntu 18.04 machine, and when I try to run it in standalone mode in a 
terminal, the process stops with these last messages:

13:45:39.424 [main-EventThread] INFO  
org.apache.pulsar.zookeeper.ZookeeperClientFactoryImpl - ZooKeeper session 
established: State:CONNECTED Timeout:30000 sessionid:0x1000117b23d000b 
local:/127.0.0.1:49698 remoteserver:localhost/127.0.0.1:2181 lastZxid:0 xid:1 
sent:1 recv:1 queuedpkts:0 pendingresp:0 queuedevents:0
        at org.apache.pulsar.PulsarStandalone.start(PulsarStandalone.java:318) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        at 
org.apache.pulsar.PulsarStandaloneStarter.main(PulsarStandaloneStarter.java:119)
 [org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
Caused by: org.apache.pulsar.broker.PulsarServerException: 
java.io.IOException: Failed to bind to /0.0.0.0:8080
        at org.apache.pulsar.broker.web.WebService.start(WebService.java:212) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:461) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        ... 2 more
Caused by: java.io.IOException: Failed to bind to /0.0.0.0:8080
        at 
org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:346)
 ~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307) 
~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
 ~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231) 
~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
 ~[org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
        at org.eclipse.jetty.server.Server.doStart(Server.java:385) 
~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
 ~[org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
        at org.apache.pulsar.broker.web.WebService.start(WebService.java:195) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:461) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        ... 2 more
        at 
org.eclipse.jetty.server.ServerConnector.openAcceptChannel(ServerConnector.java:342)
 ~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.server.ServerConnector.open(ServerConnector.java:307) 
~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.server.AbstractNetworkConnector.doStart(AbstractNetworkConnector.java:80)
 ~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.server.ServerConnector.doStart(ServerConnector.java:231) 
~[org.eclipse.jetty-jetty-server-9.4.20.v20190813.jar:9.4.20.v20190813]
        at 
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
 ~[org.eclipse.jetty-jetty-util-9.4.20.v20190813.jar:9.4.20.v20190813]
        at org.apache.pulsar.broker.web.WebService.start(WebService.java:195) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
        at org.apache.pulsar.broker.PulsarService.start(PulsarService.java:461) 
~[org.apache.pulsar-pulsar-broker-2.5.0.jar:2.5.0]
13:45:41.655 [Thread-1] INFO  org.eclipse.jetty.server.handler.ContextHandler - 
Stopped o.e.j.s.ServletContextHandler@5481f204{/admin,null,UNAVAILABLE}
13:45:41.668 [Thread-1] INFO  org.apache.pulsar.broker.web.WebService - Web 
service closed
13:45:41.668 [Thread-1] INFO  org.apache.pulsar.broker.service.BrokerService - 
Shutting down Pulsar Broker service
13:45:41.671 [Thread-1] ERROR org.apache.pulsar.broker.service.BrokerService - 
Failed to disable broker from loadbalancer list null
:Closed type:None path:null
13:45:41.800 [pulsar-ordered-OrderedExecutor-4-0-EventThread] INFO  
org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 
0x1000117b23d000a
13:45:41.908 [Thread-1] INFO  org.apache.zookeeper.ZooKeeper - Session: 
0x1000117b23d0009 closed
ed log segment (logrecs_000000000000000008 : [LogSegmentId:44, firstTxId:8, 
lastTxId:8, version:VERSION_V5_SEQUENCE_ID, completionTime:1587613541943, 
recordCount:1, regio
13:45:41.950 [DLM-/stream/storage-OrderedScheduler-10-0] INFO  
org.apache.distributedlog.BKLogWriteHandler - Completed 
inprogress_000000000000000008 to logrecs_0000000000.
13:45:41.953 [io-read-scheduler-OrderedScheduler-1-0] INFO  
org.apache.bookkeeper.stream.storage.impl.sc.ZkStorageContainerManager 
 - Successfully stopped storage container (1)
13:45:41.953 [io-read-scheduler-OrderedScheduler-1-0] INFO  
org.apache.bookkeeper.stream.storage.impl.sc.ZkStorageContainerManager 
 - Storage container (1) is removed from live set.
13:45:41.961 [io-read-scheduler-OrderedScheduler-0-0] INFO  
org.apache.bookkeeper.stream.storage.impl.sc.ZkStorageContainerManager 
 - Successfully stopped storage container (0)
13:45:41.961 [io-read-scheduler-OrderedScheduler-0-0] INFO  
org.apache.bookkeeper.stream.storage.impl.sc.ZkStorageContainerManager 
 - Storage container (0) is removed from live set.
13:45:42.031 [Thread-1] INFO  org.apache.distributedlog.BookKeeperClient - 
BookKeeper Client closed 
bk:distributedlog://127.0.0.1:2181/stream/storage:factory_writer_share
13:45:42.034 [DL-io-0] INFO  org.apache.bookkeeper.proto.PerChannelBookieClient - 
Disconnected from bookie channel [id: 0x402469cc, L:/127.0.0.1:33566 ! 
R:localhost/127.0
13:45:42.035 [Thread-1] INFO  org.apache.distributedlog.ZooKeeperClient - 
Close zookeeper client 
bk:distributedlog://127.0.0.1:2181/stream/storage:factory_writer_shared:zk.
13:45:42.035 [Thread-1] INFO  org.apache.distributedlog.ZooKeeperClient - 
Closing zookeeper client 
bk:distributedlog://127.0.0.1:2181/stream/storage:factory_writer_shared:zk.
13:45:42.139 [io-write-scheduler-OrderedScheduler-0-0-EventThread] INFO  
org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 
0x1000117b23d0008
13:45:42.139 [Thread-1] INFO  org.apache.zookeeper.ZooKeeper - Session: 
0x1000117b23d0008 closed
13:45:42.139 [Thread-1] INFO  org.apache.distributedlog.ZooKeeperClient - 
Closed zookeeper client 
bk:<distributedlog://127.0.0.1:2181/stream/storage:factory_writer_shared:zk>.
13:45:42.140 [Thread-1] INFO  org.apache.distributedlog.ZooKeeperClient - Close 
zookeeper client 
dlzk:<distributedlog://127.0.0.1:2181/stream/storage:factory_writer_shared>.
13:45:42.247 [Thread-1] INFO  org.apache.distributedlog.ZooKeeperClient - 
Closed zookeeper client 
dlzk:<distributedlog://127.0.0.1:2181/stream/storage:factory_writer_shared>.
13:45:42.250 [Thread-1] INFO  org.apache.distributedlog.impl.BKNamespaceDriver 
- Release external resources used by channel factory.
13:45:42.251 [Thread-1] INFO  org.apache.distributedlog.impl.BKNamespaceDriver 
- Stopped request timer
13:45:42.252 [Thread-1] INFO  
org.apache.distributedlog.BKDistributedLogNamespace - Executor Service Stopped.
13:45:42.252 [Curator-Framework-0] INFO  
org.apache.curator.framework.imps.CuratorFrameworkImpl - 
backgroundOperationsLoop exiting
13:45:42.359 [Thread-1] INFO  org.apache.zookeeper.ZooKeeper - Session: 
0x1000117b23d0004 closed
13:45:42.359 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn - 
EventThread shut down for session: 0x1000117b23d0004
13:45:42.359 [Thread-1] INFO  org.apache.bookkeeper.proto.BookieServer - 
Shutting down BookieServer
13:45:42.374 [Thread-1] INFO  org.apache.bookkeeper.bookie.Bookie - Turning 
bookie to read only during shut down
13:45:42.374 [Thread-1] INFO  org.apache.bookkeeper.bookie.SyncThread - 
Shutting down SyncThread
13:45:42.375 [SyncThread-7-1] INFO  
org.apache.bookkeeper.bookie.EntryLogManagerBase - Creating a new entry log 
file because current active log channel has not initialized yet
13:45:42.378 [SyncThread-7-1] INFO  
org.apache.bookkeeper.bookie.EntryLoggerAllocator - Created new entry log file 
data/standalone/bookkeeper0/current/e.log for logId 14.
13:45:42.380 [pool-5-thread-1] INFO  
org.apache.bookkeeper.bookie.EntryLoggerAllocator - Created new entry log file 
data/standalone/bookkeeper0/current/f.log for logId 15.
13:45:42.402 [SyncThread-7-1] INFO  org.apache.bookkeeper.bookie.Journal - 
garbage collected journal 171a4e0e756.txn
13:45:42.403 [SyncThread-7-1] INFO  org.apache.bookkeeper.bookie.SyncThread - 
Flush ledger storage at checkpoint CheckpointList{checkpoints=[LogMark: 
logFileId - 1587609134940 , logFileOffset - 3072]}.
13:45:42.407 [Thread-1] INFO  org.apache.bookkeeper.bookie.Journal - Shutting 
down Journal
13:45:42.407 [ForceWriteThread] INFO  org.apache.bookkeeper.bookie.Journal - 
ForceWrite thread interrupted
13:45:42.408 [BookieJournal-3181] INFO  org.apache.bookkeeper.bookie.Journal - 
Journal exits when shutting down
13:45:42.408 [BookieJournal-3181] INFO  org.apache.bookkeeper.bookie.Journal - 
Journal exited loop!
13:45:42.408 [Thread-1] INFO  org.apache.bookkeeper.bookie.Journal - Finished 
Shutting down Journal thread
13:45:42.408 [Bookie-3181] INFO  org.apache.bookkeeper.bookie.Bookie - Journal 
thread(s) quit.
13:45:42.434 [Thread-1] INFO  
org.apache.bookkeeper.bookie.GarbageCollectorThread - Shutting down 
GarbageCollectorThread
13:45:42.434 [Thread-1] INFO  org.apache.bookkeeper.bookie.EntryLogger - 
Stopping EntryLogger
13:45:42.437 [Thread-1] INFO  org.apache.bookkeeper.bookie.EntryLoggerAllocator 
- Stopped entry logger preallocator.
13:45:42.438 [Thread-1] INFO  org.apache.bookkeeper.bookie.LedgerDirsMonitor - 
Shutting down LedgerDirsMonitor
13:45:42.542 [Thread-1] INFO  org.apache.zookeeper.ZooKeeper - Session: 
0x1000117b23d0001 closed
13:45:42.542 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn - 
EventThread shut down for session: 0x1000117b23d0001
13:45:42.648 [Thread-1] INFO  org.apache.zookeeper.ZooKeeper - Session: 
0x1000117b23d0000 closed
13:45:42.648 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn - 
EventThread shut down for session: 0x1000117b23d0000
13:45:42.648 [Thread-1] INFO  org.apache.zookeeper.server.ZooKeeperServer - 
shutting down
13:45:42.649 [Thread-1] INFO  org.apache.zookeeper.server.SessionTrackerImpl - 
Shutting down
13:45:42.649 [Thread-1] INFO  org.apache.zookeeper.server.PrepRequestProcessor 
- Shutting down
13:45:42.649 [Thread-1] INFO  org.apache.zookeeper.server.SyncRequestProcessor 
- Shutting down
13:45:42.649 [ProcessThread(sid:0 cport:2181):] INFO  
org.apache.zookeeper.server.PrepRequestProcessor - PrepRequestProcessor exited 
loop!
13:45:42.649 [SyncThread:0] INFO  
org.apache.zookeeper.server.SyncRequestProcessor - SyncRequestProcessor exited!
13:45:42.650 [Thread-1] INFO  org.apache.zookeeper.server.FinalRequestProcessor 
- shutdown of request processor complete
13:45:42.653 [ConnnectionExpirer] INFO  
org.apache.zookeeper.server.NIOServerCnxnFactory - ConnnectionExpirerThread 
interrupted
13:45:42.655 [NIOServerCxnFactory.AcceptThread:0.0.0.0/0.0.0.0:2181] INFO  
org.apache.zookeeper.server.NIOServerCnxnFactory - accept thread exitted run 
method
13:45:42.655 [main-SendThread(127.0.0.1:2181)] INFO  
org.apache.zookeeper.ClientCnxn - Unable to read additional data from server 
sessionid 0x1000117b23d000b, likely server has closed socket, closing socket 
connection and attempting reconnect
13:45:42.655 [NIOServerCxnFactory.SelectorThread-1] INFO  
org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run 
method
13:45:42.660 [NIOServerCxnFactory.SelectorThread-0] INFO  
org.apache.zookeeper.server.NIOServerCnxnFactory - selector thread exitted run 
method
----
2020-04-23 03:47:49 UTC - Fayce: I can't see any clear error messages, so it's 
very difficult to understand what is wrong...
----
2020-04-23 03:47:58 UTC - Fayce: Can someone help? Thanks a lot
----
2020-04-23 04:47:17 UTC - Yuvaraj Loganathan: `Caused by: java.io.IOException: 
Failed to bind to /0.0.0.0:8080` suggests some other process is already 
listening on port 8080. Use `netstat -tupln` to figure out which PID is 
listening on 8080, then kill it and start Pulsar again.
----
2020-04-23 04:50:17 UTC - Fayce: Hi Yuva, thanks for the help. Indeed, it's one 
of my processes that is listening on this port. Is there a way to assign 
another port to Pulsar?
----
2020-04-23 04:51:19 UTC - Yuvaraj Loganathan: You can change this config 
<https://github.com/apache/pulsar/blob/master/conf/standalone.conf#L31>
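For example, the relevant properties in `conf/standalone.conf` look roughly like this (names taken from the 2.5.x config, so double-check your own copy):
```
# HTTP port used by the web service / admin API (this is what clashed on 8080)
webServicePort=8081

# Binary protocol port used by clients (default)
brokerServicePort=6650
```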
----
2020-04-23 04:52:08 UTC - Fayce: Indeed, I stopped my application and it looks 
like Pulsar is running OK now, but I need the other process to run too. So the 
easiest solution would be to assign another port to Pulsar...
----
2020-04-23 04:52:37 UTC - Fayce: Thanks a lot I will change it in the conf 
file. cheers
----
2020-04-23 04:52:46 UTC - Yuvaraj Loganathan: :thumbsup:
----
2020-04-23 06:21:39 UTC - Pierre Zemb: Hi everyone :wave:
I do have a question for the Pulsar SQL users: is it possible to define indexes 
to reduce the number of ledgers the Presto workers have to open? I'm thinking 
about a use case where I will always query with a time interval on an 
infinite topic, thanks to tiered storage :slightly_smiling_face:
----
2020-04-23 06:28:54 UTC - Sijie Guo: Pulsar SQL currently uses Presto as the 
query engine. So it is an interactive query engine that processes the “static” 
data present at the point in time you submit a query.

Pulsar SQL executes queries using a segment reader.

Pulsar Reader is a non-durable consumer that reads events in sequence.
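For concreteness, a minimal Reader sketch with the Java client (service URL and topic are placeholders):
```java
import org.apache.pulsar.client.api.*;

public class SequentialReader {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")             // placeholder service URL
                .build();

        // A Reader is a non-durable consumer: it keeps no subscription state on the
        // broker and simply reads the topic in sequence from a chosen start position.
        Reader<byte[]> reader = client.newReader()
                .topic("persistent://public/default/my-topic")     // placeholder topic
                .startMessageId(MessageId.earliest)
                .create();

        while (reader.hasMessageAvailable()) {
            Message<byte[]> msg = reader.readNext();
            System.out.println(msg.getMessageId());
        }

        reader.close();
        client.close();
    }
}
```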
----
2020-04-23 06:33:32 UTC - Sijie Guo: 1. Consumer#getLastMessageId
2. You can cast MessageId to MessageIdImpl. Then you can try to construct the 
previous message id.
It is not ideal, but that's a way you can hack around it at the moment.
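A rough sketch of that hack with the Java client follows. `MessageIdImpl` is an implementation class, and simply decrementing the entry id is not guaranteed to land on a valid message (ledger boundaries, batched messages), so treat it as best-effort; the service URL, topic, and subscription name are placeholders:
```java
import org.apache.pulsar.client.api.*;
import org.apache.pulsar.client.impl.MessageIdImpl;

public class LastMessageAtConnect {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")             // placeholder service URL
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/my-topic")     // placeholder topic
                .subscriptionName("latest-snapshot")               // placeholder subscription
                .subscribe();

        // 1. Ask the broker for the id of the last message currently in the topic.
        MessageIdImpl last = (MessageIdImpl) consumer.getLastMessageId();

        // 2. Build the "previous" id and seek to it; the intent is that the next
        //    receive() returns the last message that existed at connect time.
        MessageId previous = new MessageIdImpl(
                last.getLedgerId(), last.getEntryId() - 1, last.getPartitionIndex());
        consumer.seek(previous);

        Message<byte[]> latest = consumer.receive();
        System.out.println(new String(latest.getData()));

        consumer.close();
        client.close();
    }
}
```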
----
2020-04-23 06:37:19 UTC - Sijie Guo: @Pierre Zemb The only “index” is the 
publish timestamp “index”. The publish time can be used for reducing the number 
of ledgers to be scanned.
----
2020-04-23 07:16:39 UTC - Pierre Zemb: That works for my use case 
:slightly_smiling_face: How does it work? Is there some metadata associated 
with a sealed ledger, like the first and last publish time?
----
2020-04-23 07:45:12 UTC - fangwei: @fangwei has joined the channel
----
2020-04-23 07:51:33 UTC - Fernando Miguélez: @Fernando Miguélez has joined the 
channel
----
2020-04-23 07:55:18 UTC - Fernando Miguélez: I can download the Docker image from 
<http://hub.docker.com|hub.docker.com>, but the libraries are not yet available on 
Maven Central. Is there any Maven repo I can download the 2.5.1 libs from?
----
2020-04-23 08:43:36 UTC - Ben: Anyone had any experience of segfaults in the 
C++ client whilst de-serializing messages? 
<https://github.com/apache/pulsar/issues/6806>
----
2020-04-23 08:47:06 UTC - tuteng: <http://pulsar.apache.org/en/download/>
----
2020-04-23 09:08:27 UTC - Sijie Guo: @tuteng I think Fernando is referring to the 
Maven artifacts.
----
