2019-06-04 09:46:01 UTC - Alexandre DUVAL: Hi, is there a performance cost
difference between 20 message properties per message with no "raw" data vs. 0
message properties and everything packed into the "raw" data that then gets
parsed? (I can't use Avro or a JSON schema here.)
----
2019-06-04 10:12:19 UTC - Sébastien de Melo: Hi, we would like to upgrade our
Pulsar cluster from 2.3.1 to 2.3.2 with neither downtime nor data loss. What
would be the best procedure to follow? I found an old upgrade documentation
page
(<https://github.com/apache/pulsar/blob/a8a1453486de6e94a76284503f80c3148bf819c3/site/docs/latest/admin/Upgrade.md>)
but I am not sure it applies to a Kubernetes deployment. I tried a rolling
update (by deleting pods successively) but was unable to avoid downtime. Are
there commands to run to cleanly remove a ZooKeeper, BookKeeper, broker or
proxy node from a cluster?
----
2019-06-04 14:06:31 UTC - Richard Sherman: I currently have a fork where I
have been working on some additions to the dashboard to add some administration
functionality: namely, being able to peek at messages in the backlog, clear a
backlog, and delete a redundant subscription. While these can be done with
the CLI, there is a desire for a UI solution where I currently work. Is
there any interest in a PR for this?
+1 : David Kjerrumgaard, Karthik Ramasamy
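For reference, a minimal sketch of those same backlog operations against the Java admin client (the dashboard itself is Python/Django and would call the equivalent REST endpoints; the admin URL, topic and subscription names here are placeholder assumptions):
```
import java.util.List;
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.client.api.Message;

public class BacklogAdminSketch {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
                .build();

        String topic = "persistent://public/default/my-topic"; // placeholder
        String subscription = "my-sub";                         // placeholder

        // Peek at messages sitting in the subscription's backlog.
        List<Message<byte[]>> peeked = admin.topics().peekMessages(topic, subscription, 10);
        peeked.forEach(m -> System.out.println(new String(m.getData())));

        // Clear the subscription's backlog.
        admin.topics().skipAllMessages(topic, subscription);

        // Delete a redundant subscription.
        admin.topics().deleteSubscription(topic, subscription);

        admin.close();
    }
}
```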
----
2019-06-04 14:25:41 UTC - David Kjerrumgaard: @Richard Sherman That sounds
great. Would love to see a PR with those changes
----
2019-06-04 14:31:42 UTC - Richard Sherman: Be kind, this is my first piece of
Python and/or Django work. I'm mostly a Java dev and recently helped create
the camel-pulsar component. <https://github.com/apache/pulsar/pull/4468>
----
2019-06-04 14:32:11 UTC - Grant Wu: Sounds useful to me as well. Probably
worth noting that this has security implications.
----
2019-06-04 15:58:16 UTC - Matteo Merli: Currently, there’s no direct way to get
that information (across restarts)
----
2019-06-04 15:58:47 UTC - Matteo Merli: There would be no big difference. The
properties are stored in protobuf format within the message metadata.
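As a minimal sketch of the two approaches with the Java client (the service URL, topic, property names and payload format are placeholder assumptions):
```
import java.nio.charset.StandardCharsets;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class PropertiesVsRawPayload {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder broker URL
                .build();
        Producer<byte[]> producer = client.newProducer()
                .topic("my-topic") // placeholder topic
                .create();

        // Option 1: carry the fields as message properties (stored as
        // key/value pairs in the protobuf message metadata).
        producer.newMessage()
                .property("orderId", "42")
                .property("region", "eu-west-1")
                .value("payload".getBytes(StandardCharsets.UTF_8))
                .send();

        // Option 2: pack everything into the raw payload and parse it on
        // the consumer side (the format here is an arbitrary example).
        byte[] raw = "orderId=42;region=eu-west-1;payload".getBytes(StandardCharsets.UTF_8);
        producer.newMessage().value(raw).send();

        producer.close();
        client.close();
    }
}
```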
----
2019-06-04 16:30:30 UTC - Addison Higham: @Matteo Merli that makes sense. By
metadata, I assume you are talking about the ledger metadata? So, assuming I
started EBS snapshots of both the BK and ZK volumes at the same time, the
biggest issue would be any changes near the "edges" of the ledger? That is,
either losing metadata about newly added segments, or retaining metadata about
segments that should have been purged?
----
2019-06-04 16:33:29 UTC - Matteo Merli: That’s correct. If the ledger was
“deleted” from metadata, the bookies will delete it from disk as well. If you
restore the bookie from a backup, it will come back and redo the same operation
again.
In general, there will be a disconnect between the state of the world according
to the metadata and what’s on the bookie’s disk after the backup. E.g.: you
rolled back a bookie to the latest backup from 1h ago and it’s missing all the
newer data, though the metadata might still be pointing to it.
----
2019-06-04 16:34:39 UTC - Matteo Merli: For that, it’s much easier to rely on
replication of data across multiple bookies, and the same goes for ZK, because
everything is handled in a consistent way (e.g.: bookie auto-recovery
re-establishes the replication factor after a bookie crash).
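As a sketch of what the replication factor refers to here: the per-namespace persistence policy controls how many bookies each ledger entry is written to, and auto-recovery restores that count if a bookie is lost. The admin URL, namespace and quorum values below are assumed for illustration:
```
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistencePolicies;

public class ReplicationPolicySketch {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080") // placeholder admin URL
                .build();

        // Write each entry to 3 bookies (ensemble=3, write quorum=3) and
        // wait for 2 acks before acknowledging the client.
        admin.namespaces().setPersistence("public/default",
                new PersistencePolicies(3, 3, 2, 0.0));

        admin.close();
    }
}
```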
----
2019-06-04 16:43:09 UTC - Addison Higham: Yeah, so I am less worried about
losing disks etc. and more about the "someone deleted a topic, oops" scenario,
which still seems pretty well covered for any active topics, but we may have
some use cases that are read from/written to more sporadically.
----
2019-06-04 18:57:45 UTC - Ryan Samo: Ok thanks @Matteo Merli
----
2019-06-04 23:08:04 UTC - Devin G. Bost: @Jerry Peng Do I need to have a local
instance of ZooKeeper running for this to work?
I'm getting:
> 17:05:55.181 [main-SendThread(localhost:2181)] INFO
org.apache.zookeeper.ClientCnxn - Socket error occurred:
localhost/127.0.0.1:2181: Connection refused
----
2019-06-04 23:14:54 UTC - Devin G. Bost: I assumed that the test was
self-contained, but it seems like:
```bkEnsemble = new LocalBookkeeperEnsemble(3, ZOOKEEPER_PORT, () ->
PortManager.nextFreePort());
bkEnsemble.start();```
is already expecting a ZK instance to be running (in a different process),
unless I'm not reading this correctly.
----
2019-06-04 23:25:57 UTC - Jerry Peng: everything (broker, BK, ZK) should be
running in the same process
----
2019-06-04 23:26:35 UTC - Jerry Peng: the ports for all of those components are
dynamically generated based on free ports
----
2019-06-04 23:39:34 UTC - Devin G. Bost: @Jerry Peng Thanks for clarifying
that. Any idea why it might have trouble connecting?
----
2019-06-04 23:53:34 UTC - Jerry Peng: might not have specified the correct
ports?
----
2019-06-04 23:53:46 UTC - Jerry Peng: ZooKeeper is not going to be on
127.0.0.1:2181
----
2019-06-04 23:54:00 UTC - Jerry Peng: it will be a randomly assigned port
----
2019-06-05 03:07:59 UTC - yzli: @yzli has joined the channel
----
2019-06-05 05:48:03 UTC - StevevanderMerwe: @StevevanderMerwe has joined the
channel
----