2020-05-03 10:14:22 UTC - Vladimir Shchur: To broadcast, each consumer should
use a different subscription name
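A quick sketch of this with the `pulsar-client` CLI, assuming a broker at the default service URL and a hypothetical topic `my-topic` — each consumer attaches under its own subscription name, so every one of them receives every message:

```shell
# Two consumers, each with its OWN subscription name -> both receive every message
bin/pulsar-client consume persistent://public/default/my-topic -s sub-a -n 0 &
bin/pulsar-client consume persistent://public/default/my-topic -s sub-b -n 0 &

# Both subscriptions above get a copy of this message
bin/pulsar-client produce persistent://public/default/my-topic -m "broadcast"
```

If the consumers shared a single subscription name instead, messages would be load-balanced across them (Shared) or delivered to only one (Exclusive/Failover).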
----
2020-05-03 10:23:08 UTC - JG: Hi there, apparently it takes about 1 minute for a
Pulsar function to become active after it has started:
12:16:02.338 [pulsar-client-io-1-1] INFO Produced message with ID 15:0:-1:0
--> 12:16:*02*
And finally after 1 minute: ( 53 secs )
12:16:55.369 [pulsar-external-listener-3-1] INFO Received message with an ID of
12:0:-1:0 and a payload --> 12:16:*55*
Can someone explain this behavior? Why does it take so long for a Pulsar
function to become effective? (It started and is running correctly without
exceptions.) Does the broker need to refresh something?
----
2020-05-03 12:06:08 UTC - Liam Clarke: @Liam Clarke has joined the channel
----
2020-05-03 15:24:33 UTC - Hiroyuki Yamada: @David Kjerrumgaard Thank you for
the reply.
Are you saying bookie data doesn’t need to be backed up?
How do you usually handle a disk failure on a bookie node?
----
2020-05-03 16:43:53 UTC - David Kjerrumgaard: If you use the default settings
then there are 3 replicas of the same data spread across different bookies.
This allows you to survive up to two failures (disks, nodes, etc) without
losing the data.
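For reference, a hedged sketch of where those defaults live — the broker's managed-ledger settings control how widely each ledger is spread across bookies (the values below are illustrative; check your own `broker.conf`):

```properties
# broker.conf — how many copies of each entry BookKeeper writes
managedLedgerDefaultEnsembleSize=3   # bookies a ledger is striped across
managedLedgerDefaultWriteQuorum=3    # copies written per entry
managedLedgerDefaultAckQuorum=2      # acks required before a write succeeds
```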
----
2020-05-03 17:28:12 UTC - alex kurtser: @alex kurtser has joined the channel
----
2020-05-03 17:33:41 UTC - alex kurtser: Hello. We have an issue with Pulsar.
The disks that store the ledger files on all three bookies are almost full,
so the bookies are in read-only mode. The problem is that the brokers are
constantly crashing and restarting
----
2020-05-03 17:36:39 UTC - alex kurtser: So we have two questions. How can we
stop the brokers from crashing? But the second, more important question is: how
can we recover from a full-disk situation on the bookies? The entire environment
is running on Kubernetes, and currently we don't have the ability to resize disks
----
2020-05-03 17:37:44 UTC - alex kurtser: that is the right approach to handle
this situation ? Somebody has encountered with it ?
----
2020-05-03 18:12:27 UTC - David Kjerrumgaard: @alex kurtser The best approach
is to scale up the number of the bookies, which will add storage capacity to
the cluster. I would recommend adding at least 3 new bookies to ensure that you
can successfully write 3 copies of the data across 3 bookies.
----
2020-05-03 18:14:05 UTC - David Kjerrumgaard: @alex kurtser If you don't have
the ability/budget to scale up the bookies, then the next option is to fine
tune your retention policies to decrease the amount of data that is retained on
disk. This will force BK to delete some of the data which will free up disk
space.
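As a sketch with `pulsar-admin` (namespace name and limits here are illustrative — pick values that match your own policy):

```shell
# Keep at most 10 GB / 24 hours of acknowledged data per namespace
bin/pulsar-admin namespaces set-retention public/default --size 10G --time 24h

# Verify the new policy
bin/pulsar-admin namespaces get-retention public/default
```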
----
2020-05-03 18:15:55 UTC - David Kjerrumgaard: @Ruian Which version of Pulsar
are you running?
----
2020-05-03 18:23:06 UTC - alex kurtser: @David Kjerrumgaard Thanks for the
answer :)
Actually, this is exactly what we did.
But now we are trying to reduce the number of bookie instances within
Kubernetes.
The problem is that the official instructions for bookie decommissioning don't
seem to work in a Kubernetes/container environment, because they require
shutting down the bookie process first, and we can't do that since it runs
inside a container :(
If we simply remove the redundant bookies to get back down to 3 instances, we
could lose some messages stored on those instances
----
2020-05-03 18:23:52 UTC - alex kurtser: So do you know how to safely remove
them (scale down) without losing data?
----
2020-05-03 18:24:38 UTC - alex kurtser: And the major question: why do the
brokers crash when there is no space on the bookies?
----
2020-05-03 18:33:00 UTC - David Kjerrumgaard: @alex kurtser You should be able
to stop a BK pod, and then run the decommission command from a different BK
pod. If you are running the decommission command for a target bookie from
another bookie node, pass the target bookie's id via the `-bookieid`
argument: `bin/bookkeeper shell decommissionbookie -bookieid <target-bookie-id>`
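A hedged sketch of that sequence on Kubernetes — the StatefulSet and pod names below are assumptions, substitute your own:

```shell
# 1. Scale the bookie StatefulSet down so the highest-ordinal pod stops
kubectl scale statefulset bookie --replicas=3

# 2. From a surviving bookie pod, decommission the stopped bookie by its id
kubectl exec bookie-0 -- bin/bookkeeper shell decommissionbookie \
    -bookieid bookie-3.bookie.default.svc.cluster.local:3181
```

The decommission command re-replicates the stopped bookie's ledgers onto the remaining bookies before removing it from the cluster metadata.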
----
2020-05-03 18:51:13 UTC - Kirill Merkushev: Hello, is there any ready-to-try
functions example with testcontainers/docker-compose or something like that? My
exclamation function jar just fails to start with no useful exception except:
```Caused by: io.grpc.StatusRuntimeException: UNAVAILABLE: io exception
...
	at io.grpc.internal.SerializingExecutor.run(SerializingExecutor.java:123) ~[io.grpc-grpc-core-1.18.0.jar:1.18.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[?:1.8.0_232]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[?:1.8.0_232]
	... 1 more
Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: /127.0.0.1:42431
Caused by: java.net.ConnectException: Connection refused
	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_232]
	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) ~[?:1.8.0_232]```
----
2020-05-03 18:52:22 UTC - alex kurtser: Thanks:) i will try
----
2020-05-03 18:58:22 UTC - David Kjerrumgaard: I have some in my repo that you
can try. <https://github.com/david-streamlio/pulsar-in-action>
+1 : Kirill Merkushev
heart_eyes : Konstantinos Papalias
----
2020-05-03 19:14:33 UTC - David Kjerrumgaard: If you have any difficulties
please let me know. :smiley:
----
2020-05-03 21:37:31 UTC - JG: Does nobody have an idea about this?
----
2020-05-03 23:00:19 UTC - JG: Does Pulsar support SSE, or only WebSockets?
----
2020-05-03 23:50:18 UTC - Hiroyuki Yamada: @David Kjerrumgaard Thank you. Yes,
but how do you recover from there?
If 2 replicas are gone and only 1 replica is still alive and up-to-date, how do
you get back to a state where 3 replicas are up-to-date? Do you use
manual/auto `Recovery`?
----
2020-05-04 00:40:51 UTC - David Kjerrumgaard: You should use
<https://bookkeeper.apache.org/docs/latest/admin/autorecovery/#manual-recovery>
----
2020-05-04 00:41:52 UTC - David Kjerrumgaard: That will bring the replicas back
to the desired count.
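For reference, a hedged sketch of the manual variant, run from any healthy bookie node (the failed bookie's `host:port` id is a placeholder):

```shell
# Re-replicate every ledger fragment that lived on the failed bookie
bin/bookkeeper shell recover <failed-bookie-host>:3181
```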
----
2020-05-04 00:42:20 UTC - David Kjerrumgaard: SSE?
----
2020-05-04 00:42:31 UTC - David Kjerrumgaard: Secure socket extensions?
----
2020-05-04 00:58:54 UTC - Guilherme Perinazzo: Probably server-sent events
----
2020-05-04 00:59:24 UTC - Ali Ahmed: Pulsar currently doesn't support SSE
----
2020-05-04 01:22:53 UTC - Tymm: Apparently it was due to me running the Pulsar
server with the -nss option. After clearing all data and running the server with
pulsar-daemon, the state seems to be working fine
----
2020-05-04 02:53:02 UTC - Hiroyuki Yamada: @David Kjerrumgaard Thank you. It’s
getting clearer.
But wouldn't it be too time-consuming if the data is big, say more than a
terabyte (which I assume is pretty common with Pulsar)?
Or is the recovery process expected to be about as fast as copying backup files
from some storage?
Also, do you have a recommended way to back up ZooKeeper data?
Sorry for asking so many questions.
----
2020-05-04 03:37:42 UTC - Ruian: 2.5.1
----
2020-05-04 03:39:48 UTC - Ruian: I have already figured out why it took so long
to resolve the IO exception. It was because I used the --cpu 0.01 flag to limit
the function's CPU resources.
----
2020-05-04 04:41:38 UTC - Franck Schmidlin: I'm doing a PoC and using Pulsar
Beam to push messages from a topic to HTTP endpoints.
It works great and was easy to set up.
However, wouldn't I be better off implementing this as a Pulsar function?
Any reason why I shouldn't?
----
2020-05-04 05:28:52 UTC - Subash Kunjupillai: Hi,
I was looking through the
<http://pulsar.apache.org/docs/en/security-encryption|End-to-End Encryption>
documentation, and I have the following queries, as I haven't completely
understood how it works:
1. The public key should be provided to the producer and the private key to the
consumer. In that case, I'm wondering why we have to provide both the public
and private key files to CryptoKeyReader, because ideally the producer
application will not have the private key, and vice versa. Can someone please
share any information on this?
2. I'm not able to understand the significance of `addEncryptionKey("my-app")`
in the producer builder, because I was able to send a message and consume it
without setting this key on the producer end. Can someone please help me
understand its significance?
3. We are supposed to regenerate the private and public keys often (at least
once a week, due to security policy). In that case, after regenerating both
files, the consumer will not be able to read older messages from the broker, as
they would have been encrypted with an old public key, or vice versa. Is there
a way to add multiple public and private keys so that we can gradually retire
the old keys?
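On question 3, rotation starts with generating a fresh key pair; a sketch with OpenSSL using the same curve the Pulsar encryption docs use (the file names are arbitrary):

```shell
# Generate a new ECDSA private key and extract its public half
openssl ecparam -name secp521r1 -genkey -out test_ecdsa_privkey.pem
openssl ec -in test_ecdsa_privkey.pem -pubout -out test_ecdsa_pubkey.pem
```

The producer side only ever needs the public file, and the consumer side the private one; a `CryptoKeyReader` implementation can simply fail for the half it doesn't have.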
----
2020-05-04 06:17:43 UTC - elixir: @elixir has joined the channel
----
2020-05-04 08:51:04 UTC - Yifan: Hi, it might be a stupid question, but I
couldn't find the answer on Google (DuckDuckGo). What are the default username
and password for the Grafana installation in a Pulsar Helm chart deployment?
----