2018-10-04 09:54:33 UTC - Markus Paasovaara: @Markus Paasovaara has joined the 
channel
----
2018-10-04 10:07:02 UTC - Markus Paasovaara: Hi, I'm using the Java PulsarClient. If the connection to Pulsar drops, the client will try to reconnect automatically. Is there a way to disable this behaviour? I'd rather just let Consumer.receive() throw so I can handle the error myself.
----
2018-10-04 10:07:22 UTC - Markus Paasovaara: Right now the re-connection logic seems to happen inside the io-thread without giving me any visibility into the fact that the connection was lost.
----
2018-10-04 10:16:15 UTC - Ivan Kelly: I don't think this is configurable
----
2018-10-04 10:27:39 UTC - Markus Paasovaara: OK. I also noticed that if I use receive() with a timeout, then after the timeout I can call receive() again without getting an exception either. Basically the consumer acts like there are no problems at all, even though there is no TCP/IP connection to the broker.
----
2018-10-04 10:28:20 UTC - Markus Paasovaara: As a workaround I can probably check after each timeout whether the connection is still alive, but I'd expect the consumer to throw.
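A minimal sketch of that workaround (the broker URL, topic, and the `brokerLooksReachable()` helper are placeholders I made up, not something the client provides):
```java
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.PulsarClientException;

public class ReceiveWithTimeout {
    public static void main(String[] args) throws PulsarClientException {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")          // placeholder broker URL
                .build();
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/my-topic")  // placeholder topic
                .subscriptionName("my-sub")
                .subscribe();

        while (true) {
            // receive(timeout) returns null when nothing arrives in time;
            // it does not throw just because the broker connection dropped
            Message<byte[]> msg = consumer.receive(5, TimeUnit.SECONDS);
            if (msg == null) {
                // hypothetical liveness probe; the client itself keeps
                // reconnecting in the background regardless
                if (!brokerLooksReachable()) {
                    break;  // handle the outage explicitly
                }
                continue;
            }
            consumer.acknowledge(msg);
        }

        consumer.close();
        client.close();
    }

    // placeholder for whatever connectivity check fits the deployment
    private static boolean brokerLooksReachable() {
        return true;
    }
}
```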
----
2018-10-04 13:55:34 UTC - Brian F: @Brian F has joined the channel
----
2018-10-04 14:54:16 UTC - Grant Wu: I believe they mean in bound paper
----
2018-10-04 15:17:29 UTC - Ryan Waters: @Ryan Waters has joined the channel
----
2018-10-04 16:55:02 UTC - Alex Mault: @Alex Mault has joined the channel
----
2018-10-04 17:00:22 UTC - Alex Mault: Hello! Does anyone know if Pulsar offers a way to get notified when new topics are created?
A simple example use case: if something began attempting to read `/ten1/n1/dogs` (where topic `dogs` did not yet exist and was created on first read), a service would get notified of the new topic and then begin producing messages on that topic (say, for example, the contents “dog” in this case… as a sort of “echo” program).
----
2018-10-04 17:03:58 UTC - Matteo Merli: Currently there’s no way to do this. There has been some thinking about adding system topics for these kinds of notifications, though we haven’t started any work on that yet.
----
2018-10-04 17:10:09 UTC - Alex Mault: Thanks for the quick reply! If I were to 
implement this and PR it, would it be of interest to the project?
----
2018-10-04 17:10:29 UTC - Matteo Merli: Of course!
----
2018-10-04 17:10:34 UTC - Alex Mault: What other system-event type topics did 
you guys have in mind?
----
2018-10-04 17:10:54 UTC - Alex Mault: Is there some mailing list or chat 
history I can look at to make sure I have the whole picture of what people were 
thinking?
----
2018-10-04 17:16:45 UTC - Matteo Merli: For this kind of feature it’s good to start with a design doc to gather feedback from the community (examples at <https://github.com/apache/pulsar/wiki>).
Share that on the <mailto:[email protected]|[email protected]> list to get a conversation started (the mailing list is a better place for a more in-depth discussion).

For the system topic specifically, I’d think of events like:
 * Topic creation / deletion
 * Alerts, e.g. backlog quota reached (and probably a few others)
 * Policy updates
 * Other events (compaction, offloading)

One point is that I’d keep one system topic per namespace, so that each tenant application can consume its own notifications.

Topic-creation events would also be very useful for improving the topic-regex consumer, so that it can discover new topics without periodically polling the list of topics (see the sketch below).
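For reference, a rough sketch of how the regex consumer looks today (topic and subscription names are placeholders, and `patternAutoDiscoveryPeriod` may not be available in older client versions):
```java
import java.util.regex.Pattern;

import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")   // placeholder broker URL
        .build();

// Subscribes to every topic in the namespace matching the pattern.
// Today, new topics are found by periodically re-listing the namespace;
// a topic-creation event on a system topic would remove that polling.
Consumer<byte[]> consumer = client.newConsumer()
        .topicsPattern(Pattern.compile("persistent://ten1/n1/.*"))
        .patternAutoDiscoveryPeriod(1)           // polling interval, in minutes
        .subscriptionName("echo-service")
        .subscribe();
```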
----
2018-10-04 17:21:15 UTC - Alex Mault: Thanks!
----
2018-10-04 17:58:38 UTC - Dave Southwell: Yesterday I was asking about how Pulsar and BookKeeper clean up full ledger directories, and I thought I understood that over time garbage collection would come along and clean up unneeded ledgers.  But today I find the directories (disks) are still full.  :disappointed:
----
2018-10-04 18:00:15 UTC - Matteo Merli: You should check the storage size for 
the topics
----
2018-10-04 18:00:48 UTC - Matteo Merli: You can get that from the metrics (in Prometheus) or by querying the topic stats: `pulsar-admin topics stats my-topic`
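If it's easier to do programmatically, roughly the same number is available from the Java admin client; a sketch, assuming a reachable web-service URL (the exact stats class and field names vary a bit between versions):
```java
import org.apache.pulsar.client.admin.PulsarAdmin;

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://localhost:8080")   // placeholder web-service URL
        .build();

// storageSize is the number of bytes the topic still retains in BookKeeper;
// ledgers are only garbage-collected once no subscription needs them anymore
System.out.println(
        admin.topics().getStats("persistent://public/default/my-topic").storageSize);
```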
----
2018-10-04 18:11:46 UTC - Dave Southwell: I'm getting 500's when looking at 
topic stats
----
2018-10-04 18:12:23 UTC - Matteo Merli: Can you check the broker log for an 
exception and paste it here?
----
2018-10-04 18:18:50 UTC - Dave Southwell: I don't see anything of interest in the broker log, but in the BookKeeper log I see the one runnable node trying to connect to the other non-runnable nodes.  The other nodes are not runnable because the data disk is full, and the BookKeeper service fails to start because of it.  Catch-22.  I could remove all the ledgers from the data disk on the two nodes that have full disks, but that kind of defeats the purpose of figuring out why the garbage wasn't cleaned up on its own. :disappointed:
----
2018-10-04 18:19:58 UTC - Dave Southwell: Or, I could add another node and add 
it to the ensemble I guess.
----
2018-10-04 18:20:19 UTC - Matteo Merli: yes, that should unblock it
----
2018-10-04 18:26:20 UTC - Dave Southwell: It still seems that the two nodes that have full ledger disks will never recover if I can't get BookKeeper to start up on them because that disk is full.  Here's the error from those nodes when trying to start BookKeeper.
```
17:54:11.772 [bookie-io-1-12] INFO  org.apache.bookkeeper.proto.AuthHandler - Authentication success on server side
17:54:11.796 [main] ERROR org.apache.bookkeeper.server.Main - Failed to build bookie server
java.io.IOException: Error open RocksDB database
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:175) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.<init>(KeyValueStorageRocksDB.java:79) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.storage.ldb.KeyValueStorageRocksDB.lambda$static$0(KeyValueStorageRocksDB.java:54) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.storage.ldb.LedgerMetadataIndex.<init>(LedgerMetadataIndex.java:70) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.storage.ldb.SingleDirectoryDbLedgerStorage.<init>(SingleDirectoryDbLedgerStorage.java:170) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.newSingleDirectoryDbLedgerStorage(DbLedgerStorage.java:126) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.storage.ldb.DbLedgerStorage.initialize(DbLedgerStorage.java:112) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.bookie.Bookie.<init>(Bookie.java:721) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.proto.BookieServer.newBookie(BookieServer.java:115) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.proto.BookieServer.<init>(BookieServer.java:96) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.server.service.BookieService.<init>(BookieService.java:42) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.server.Main.buildBookieServer(Main.java:299) ~[org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.server.Main.doMain(Main.java:219) [org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.server.Main.main(Main.java:201) [org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
        at org.apache.bookkeeper.proto.BookieServer.main(BookieServer.java:252) [org.apache.bookkeeper-bookkeeper-server-4.7.1.jar:4.7.1]
Caused by: org.rocksdb.RocksDBException: While appending to file: /var/opt/pulsar/data/current/ledgers/005869.sst: No space left on device
        at org.rocksdb.RocksDB.open(Native Method) ~[org.rocksdb-rocksdbjni-5.13.3.jar:?]
```
----
2018-10-04 20:46:11 UTC - Jerry Peng: @Rajan Dhabalia What is the purpose of this config:
<https://github.com/apache/pulsar/blob/master/pulsar-functions/worker/src/main/java/org/apache/pulsar/functions/worker/WorkerConfig.java#L60>
When will this config ever be anything other than the worker hostname and port?
----
2018-10-04 21:01:06 UTC - Dave Southwell: Has anyone successfully used this: 
<https://pulsar.apache.org/docs/latest/admin/Dashboard/>
----
2018-10-04 21:02:01 UTC - Ali Ahmed: Hi Dave, the dashboard works fine.
----
2018-10-04 21:02:59 UTC - Dave Southwell: Hmm, not here.  I have it up and it seems to pick up that I have a cluster called `s-pulsar`, but there's no other data on any of the tabs.
----
2018-10-04 21:03:24 UTC - Ali Ahmed: The refresh interval is about once a minute.
----
2018-10-04 21:03:59 UTC - Ali Ahmed: You can look at this compose file to get an idea of how it's set up:
<https://github.com/apache/pulsar/tree/master/docker-compose/standalone-dashboard>
----
2018-10-04 21:05:04 UTC - Dave Southwell: Interesting.  But I'm not in a standalone config; I have a three-node ensemble.
----
2018-10-04 21:05:54 UTC - Matteo Merli: are the brokers reachable from the 
Docker container where the dashboard runs?
----
2018-10-04 21:07:11 UTC - Dave Southwell: I'll double check, but it seems so, as it does show the name of my (non-default) cluster; it just doesn't show anything else.
----
2018-10-04 21:08:07 UTC - Matteo Merli: Also, is the cluster URL set in the 
metadata?
----
2018-10-04 21:08:54 UTC - Matteo Merli: The collector will first hit the 
serviceURL to discover the cluster list, then for each cluster it will fetch 
the list of brokers and contact each of them directly
----
2018-10-04 21:09:54 UTC - Dave Southwell: Do you mean when I initialize?  
Here's my initialize command.  `/opt/pulsar/bin/pulsar 
initialize-cluster-metadata --cluster s-pulsar --zookeeper 10.128.15.0:2184 
--global-zookeeper 10.128.15.0:2181 --web-service-url <http://localhost:8080>`
----
2018-10-04 21:10:10 UTC - Dave Southwell: Maybe setting web-service-url to localhost is incorrect?
----
2018-10-04 21:10:54 UTC - Matteo Merli: Yes, set that to the same IP
----
2018-10-04 21:11:07 UTC - Matteo Merli: you can update the cluster metadata 
with:
----
2018-10-04 21:11:38 UTC - Matteo Merli: `bin/pulsar-admin clusters update ...`
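Roughly the same thing can be done from the Java admin client; a sketch, assuming the 2.x-era `ClusterData` constructor and a placeholder broker address:
```java
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.ClusterData;

PulsarAdmin admin = PulsarAdmin.builder()
        .serviceHttpUrl("http://<broker-ip>:8080")   // any reachable broker
        .build();

// Replace the localhost web-service URL stored at cluster-init time with an
// address the dashboard container can actually reach
admin.clusters().updateCluster("s-pulsar",
        new ClusterData("http://<broker-ip>:8080"));
```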
----
2018-10-04 21:12:26 UTC - Matteo Merli: By "same IP" I mean at least the IP of one of the brokers.
+1 : Dave Southwell
----
2018-10-04 21:32:13 UTC - Shalin: There seems to be some issue consuming or publishing messages larger than ~7 KB to my brokers. It worked completely fine before; I used to publish/consume messages of around 6 MB to my topics. Now, when sending data to the worker, the client goes through this cycle:
`Connection lost`
`Schedule reconnection`
`Could not send pair message on connection: system:104 Connection reset by peer`
`Connection closed`
`Destroyed connection`
`Getting connection from pool`
`Removing stale connection`
`Connected to broker`
----
2018-10-04 22:58:04 UTC - Alex Mault: Did a new docs site just get deployed, or 
am I crazy?
----
2018-10-04 22:59:57 UTC - Grant Wu: A new one was deployed recently
----
2018-10-04 23:00:12 UTC - Grant Wu: But not “just”
----
2018-10-04 23:01:13 UTC - Alex Mault: Strange, maybe the airline's MITM of the docs site (and its injection of annoying ads) made it change a bit. TLS/HTTPS fixes the issue.
----
2018-10-05 01:15:04 UTC - Nathanial Murphy: Is there any way to force the java 
client to _not_ use batches or to acknowledge only a single payload within a 
batch?
----
2018-10-05 01:15:41 UTC - Sijie Guo: @Nathanial Murphy there is a setting on ProducerBuilder to disable batching
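Something along these lines (a sketch; the broker URL and topic are placeholders):
```java
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

PulsarClient client = PulsarClient.builder()
        .serviceUrl("pulsar://localhost:6650")          // placeholder broker URL
        .build();

// With batching disabled, every message is written as its own entry, so an
// acknowledgement never covers more than one payload
Producer<byte[]> producer = client.newProducer()
        .topic("persistent://public/default/my-topic")  // placeholder topic
        .enableBatching(false)
        .create();
```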
----
2018-10-05 01:20:00 UTC - Nathanial Murphy: thanks
----
2018-10-05 02:19:33 UTC - Pablo Valdes: Correct
----
2018-10-05 04:57:04 UTC - Amitabh Akolkar: @Amitabh Akolkar has joined the 
channel
----
2018-10-05 07:21:26 UTC - dba: Hi all.
I have Windows 10 Pro with the latest stable version of Docker. When running Pulsar (using: `docker run -it --name pulsar -p 6650:6650 -p 8080:8080 apachepulsar/pulsar:2.1.1-incubating bin/pulsar standalone`) it will shut down by itself after some time. Let me give you two examples:
Nr 1:
```
16:21:25.351 [ProcessThread(sid:0 cport:2181):] INFO  org.apache.zookeeper.server.PrepRequestProcessor - Got user-level KeeperException when processing sessionid:0x1000000beba0008 type:delete cxid:0x14f zxid:0x1da2 txntype:-1 reqpath:n/a Error Path:/ledgers/00/0000 Error:KeeperErrorCode = Directory not empty for /ledgers/00/0000
......
16:28:41.191 [zk-storage-container-manager] INFO  org.apache.bookkeeper.stream.storage.impl.sc.ZkStorageContainerManager - Process container changes:
         Ideal = [0, 1]
         Live = [0, 1]
         Pending = []
         ToStart = []
         ToStop = []
16:28:47.480 [main-SendThread(localhost:2181)] WARN  org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 4024ms for sessionid 0x1000000beba0000 ..................
16:29:36.262 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Unable to reconnect to ZooKeeper service, session 0x1000000beba0000 has expired, closing socket connection
16:29:36.264 [SyncThread:0] INFO  org.apache.zookeeper.server.NIOServerCnxn - Closed socket connection for client /127.0.0.1:37512 which had sessionid 0x1000000beba0004
16:29:36.261 [main-EventThread] INFO  org.apache.zookeeper.ClientCnxn - EventThread shut down for session: 0x1000000beba0000
```

Online: About 38 min

Nr 2:
```
19:36:03.226 [main-SendThread(localhost:2181)] WARN  org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 10046ms for sessionid 0x10000a052e60007
19:36:03.491 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] INFO  org.apache.zookeeper.server.ZooKeeperServer - Client attempting to renew session 0x10000a052e60007 at /127.0.0.1:38038
19:36:03.815 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] INFO  org.apache.zookeeper.server.ZooKeeperServer - Invalid session 0x10000a052e60007 for client /127.0.0.1:38038, probably expired
19:36:04.106 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Client session timed out, have not heard from server in 10046ms for sessionid 0x10000a052e60007, closing socket connection and attempting reconnect
19:36:04.537 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
19:36:04.815 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
19:36:05.847 [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181] INFO  org.apache.zookeeper.server.NIOServerCnxnFactory - Accepted socket connection from /127.0.0.1:38042
19:36:07.720 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
19:36:08.619 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
19:36:09.223 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
19:36:09.669 [main-SendThread(localhost:2181)] INFO  org.apache.zookeeper.ClientCnxn - Socket connection established to localhost/127.0.0.1:2181, initiating session
```

Online: About 51 min

Looks like ZooKeeper suddenly dies? Any idea what is wrong? I was not using Pulsar in any way in the above cases; it was just running and then shut down.
----
2018-10-05 07:33:49 UTC - Steven King: Possibly a _silly_ question, and it could well be nothing to do with it, but have you configured Docker with enough memory?
----
2018-10-05 07:37:10 UTC - dba: Hi Steven. Good question. I am only running RabbitMQ and Pulsar, and I have 2 GB of memory and 1 GB of swap. I will give it more and see what happens :slightly_smiling_face:
----
2018-10-05 07:39:04 UTC - Steven King: I normally give Docker a minimum of 4 GB, if that helps.
+1 : dba, Ali Ahmed
----
2018-10-05 07:43:22 UTC - Nicolas Ha: I am trying to move my tests into a 
docker-compose environment. I get this exception when trying to create the 
client:
```
18-10-05 07:37:42 a3970cc399aa ERROR 
[org.apache.pulsar.client.impl.BinaryProtoLookupService:54] - Invalid 
service-url <pulsar://pulsar_standalone:6650> provided hostname can't be null
```
However I can do this from the same test container:
```
docker exec -it my_test_container curl 
<http://pulsar_standalone:8080/admin/v2/brokers/configuration>
```
Any idea what could be going on here?
----
2018-10-05 07:45:28 UTC - Ali Ahmed: Try changing the hostname in your compose file to one without an underscore.
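As far as I can tell, the reason is that hostnames containing `_` are not valid host names per the URI spec, so `java.net.URI` parses the address but returns null from `getHost()`, which is what the lookup service then trips over. A quick way to see it:
```java
import java.net.URI;

// The underscore makes the authority fail host-name validation, so getHost()
// comes back null even though the URI itself parses without error
URI uri = URI.create("pulsar://pulsar_standalone:6650");
System.out.println(uri.getHost());   // prints "null"
```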
----
2018-10-05 07:51:43 UTC - Nicolas Ha: this is the stacktrace
```
             org.apache.pulsar.client.api.PulsarClient.create              PulsarClient.java:   58
        org.apache.pulsar.client.impl.PulsarClientImpl.<init>          PulsarClientImpl.java:   84
        org.apache.pulsar.client.impl.PulsarClientImpl.<init>          PulsarClientImpl.java:   99
org.apache.pulsar.client.impl.BinaryProtoLookupService.<init>  BinaryProtoLookupService.java:   52
                            java.net.InetSocketAddress.<init>         InetSocketAddress.java:  216
                         java.net.InetSocketAddress.checkHost         InetSocketAddress.java:  149

java.lang.IllegalArgumentException: hostname can't be null
```
----
2018-10-05 07:51:51 UTC - Nicolas Ha: trying that :slightly_smiling_face:
----
