2020-09-21 11:16:54 UTC - Seun: Hi People,
What could be responsible for this kind of error? Googling doesn't offer much
explanation.
`root@pulsar-proxy-0:/pulsar# ./bin/pulsar-admin namespaces public list`
`Expected a command, got public`
`Invalid command, please use `pulsar-admin --help` to check out how to use`
`root@pulsar-proxy-0:/pulsar# ./bin/pulsar-admin tenants list`
`null`
`Reason: java.util.concurrent.CompletionException:
org.apache.pulsar.client.admin.internal.http.AsyncHttpConnector$RetryException:
Could not complete the operation. Number of retries has been exhausted. Failed
reason: Connection refused: localhost/127.0.0.1:8080`
`root@pulsar-proxy-0:/pulsar#`
The environment is Google Cloud GKE.
Also, these jobs have been in a pending state for hours. Could that have any
effect on this error?
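The first failure is just argument order: `pulsar-admin namespaces list` takes the tenant as its argument, so `public` must come after `list`. The second failure (`Connection refused: localhost/127.0.0.1:8080`) means nothing is answering on the default admin service URL from inside that pod; if the cluster-initialization jobs are stuck pending, the brokers may never have come up, which would explain the refused connection. A hedged sketch (the broker host below is a placeholder for your setup):

```shell
# Correct argument order: the tenant comes after the "list" sub-command.
./bin/pulsar-admin namespaces list public

# If nothing listens on localhost:8080 in this pod, point the tool at a
# reachable admin endpoint instead ("pulsar-broker" is a placeholder host):
./bin/pulsar-admin --admin-url http://pulsar-broker:8080 namespaces list public
```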
----
2020-09-21 14:34:48 UTC - Brent Evans: I'm going to attempt just removing the
`pulsar-proxy` and using an ELB in place of it to the brokers, will this cause
any issues I might not be aware of?
----
2020-09-21 14:59:53 UTC - Addison Higham: All of your brokers will need to have
DNS names that can be hit externally
----
2020-09-21 15:51:29 UTC - el akroudi abdessamad: I'm still facing this issue. I
destroyed all my VMs and spun up new ones, but the problem is the same: I can't
start more than one bookie. I have no idea how to resolve this; I need your
help, please.
----
2020-09-21 16:11:20 UTC - Fernando Miguélez: Is it possible to create a
function that treats and returns GenericRecord, such as this one:
```java
public GenericRecord process(GenericRecord input, Context context)
        throws Exception {
    FilterOperation filter =
            (FilterOperation) context.getUserConfigMap().get(FILTER_KEY);
    if (filter.eval(input)) {
        return input;
    } else {
        return null;
    }
}
```
----
2020-09-21 16:14:02 UTC - Fernando Miguélez: I am receiving this error when the
function is started:
> 17:38:10.405
[dbus/test2/mirror-test-dummy-objects-2-test2-dummy-objects-0] ERROR
org.apache.pulsar.functions.sink.PulsarSink - Failed to create Producer while
doing user publish
>
org.apache.pulsar.client.api.PulsarClientException$InvalidConfigurationException:
AutoConsumeSchema is only used by consumers to detect schemas automatically
> at
org.apache.pulsar.client.api.PulsarClientException.unwrap(PulsarClientException.java:823)
~[java-instance.jar:?]
> at
org.apache.pulsar.client.impl.ProducerBuilderImpl.create(ProducerBuilderImpl.java:93)
~[org.apache.pulsar-pulsar-client-original-2.6.0.jar:2.6.0]
> at
org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkProcessorBase.createProducer(PulsarSink.java:107)
~[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:2.6.0]
> at
org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkAtMostOnceProcessor.<init>(PulsarSink.java:175)
[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:2.6.0]
> at
org.apache.pulsar.functions.sink.PulsarSink$PulsarSinkAtLeastOnceProcessor.<init>(PulsarSink.java:207)
[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:2.6.0]
> at
org.apache.pulsar.functions.sink.PulsarSink.open(PulsarSink.java:285)
[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:2.6.0]
> at
org.apache.pulsar.functions.instance.JavaInstanceRunnable.setupOutput(JavaInstanceRunnable.java:821)
[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:?]
> at
org.apache.pulsar.functions.instance.JavaInstanceRunnable.setup(JavaInstanceRunnable.java:224)
[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:?]
> at
org.apache.pulsar.functions.instance.JavaInstanceRunnable.run(JavaInstanceRunnable.java:246)
[org.apache.pulsar-pulsar-functions-instance-2.6.0.jar:?]
> at java.lang.Thread.run(Thread.java:748) [?:1.8.0_252]
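The key line is `AutoConsumeSchema is only used by consumers to detect schemas automatically`: in this client version, `GenericRecord` is a consumer-side view of the data, so when the runtime creates the output producer for a function declared as returning `GenericRecord`, it has no producible schema. One workaround is to type the function against a concrete class so the sink gets a real schema. A hedged sketch only, where `SensorEvent` and the `threshold` config key are made up for illustration (newer Pulsar releases may handle `GenericRecord` output differently):

```java
import org.apache.pulsar.functions.api.Context;
import org.apache.pulsar.functions.api.Function;

// Hypothetical concrete event class; any POJO with a producible schema
// (e.g. JSON or Avro) works where GenericRecord does not.
class SensorEvent {
    public String id;
    public double value;
}

public class ThresholdFilterFunction implements Function<SensorEvent, SensorEvent> {
    @Override
    public SensorEvent process(SensorEvent input, Context context) {
        // "threshold" is a made-up user-config key for illustration.
        double threshold = Double.parseDouble(
                String.valueOf(context.getUserConfigMap().getOrDefault("threshold", "0")));
        // Returning null drops the record: the sink produces no output message.
        return input.value >= threshold ? input : null;
    }
}
```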
----
2020-09-21 17:00:04 UTC - Addison Higham: @Brent Evans apologies for not
getting back to this earlier. If you are still seeing this issue with URL-based
broker discovery, then we should dig in more to figure out the underlying
problem, as it is quite possible it is something on the broker side.
Can you talk a bit more about the sort of load you are running when this
happens? Is it causing Pulsar to rebalance topics?
----
2020-09-21 17:00:48 UTC - Emil: @Emil has joined the channel
----
2020-09-21 19:40:30 UTC - Devin G. Bost: Thanks for clarifying this!
----
2020-09-21 20:12:04 UTC - Devin G. Bost: If a DLQ is set and batching is
enabled, if a message in the batch is nack'd, will the entire batch get sent to
the DLQ?
----
2020-09-21 20:16:02 UTC - Addison Higham: Hrm, I don't believe it should. DLQ
routing happens on the client after it sees that the redelivery count has gone
over the threshold, and I believe the redelivery count is tracked at the
message level, not the batch level.
----
2020-09-21 20:38:17 UTC - Devin G. Bost: I think that when a message is nack'd
with batching enabled, the entire batch is redelivered unless batch-index
acknowledgement is enabled (which it is not by default).
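A minimal, self-contained sketch of the granularity difference being discussed (a toy model, not Pulsar code): without batch-index acknowledgement, delivery state is tracked per batch, so a single nack brings the whole batch back.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of redelivery granularity; real broker bookkeeping is more involved.
class BatchAckDemo {
    static List<String> redeliverAfterNack(List<String> batch,
                                           int nackedIndex,
                                           boolean batchIndexAckEnabled) {
        if (batchIndexAckEnabled) {
            // Per-index tracking: only the nacked entry comes back.
            return List.of(batch.get(nackedIndex));
        }
        // Batch-level tracking: the whole batch is redelivered.
        return new ArrayList<>(batch);
    }

    public static void main(String[] args) {
        List<String> batch = List.of("m0", "m1", "m2");
        System.out.println(redeliverAfterNack(batch, 1, false)); // [m0, m1, m2]
        System.out.println(redeliverAfterNack(batch, 1, true));  // [m1]
    }
}
```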
----
2020-09-21 20:55:54 UTC - Sijie Guo: I don’t think it works for non-persistent
topics. @Penghui Li can confirm.
----
2020-09-21 20:57:19 UTC - Sijie Guo: /cc @Penghui Li in this conversation. I
think there are some improvements required for DLQ on
+1 : Devin G. Bost
----
2020-09-22 00:07:44 UTC - Penghui Li: @Devin G. Bost @Sijie Guo Yes, we can
improve the DLQ handling on the client side. An easy fix is to add some
metadata indicating which batch indexes have already been deleted.
----
2020-09-22 00:08:16 UTC - Devin G. Bost: Sounds good. Should I create a GitHub
issue for this?
----
2020-09-22 00:19:49 UTC - Devin G. Bost: Does anyone know how to set the retry
rate when using a retry letter topic?
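As far as I know there is no single global "retry rate" knob in the Java client: the delay before the next retry is chosen per message when you call `reconsumeLater`, and `maxRedeliverCount` in the dead-letter policy bounds how many retries happen before the message lands in the DLQ. A hedged sketch against the 2.6 Java client, where the topic, subscription name, and delay are placeholders:

```java
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.DeadLetterPolicy;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionType;

public class RetryTopicExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650") // placeholder
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("persistent://public/default/in") // placeholder
                .subscriptionName("my-sub")              // placeholder
                .subscriptionType(SubscriptionType.Shared)
                .enableRetry(true) // route failures through the retry letter topic
                .deadLetterPolicy(DeadLetterPolicy.builder()
                        .maxRedeliverCount(3) // after 3 retries, go to the DLQ
                        .build())
                .subscribe();

        Message<byte[]> msg = consumer.receive();
        try {
            // ... process msg ...
            consumer.acknowledge(msg);
        } catch (Exception e) {
            // The effective "retry rate" is this per-message delay:
            consumer.reconsumeLater(msg, 10, TimeUnit.SECONDS);
        }
        client.close();
    }
}
```

This needs a running cluster, so it is an API sketch rather than something runnable in isolation.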
----
2020-09-22 01:44:18 UTC - Addison Higham: I am sure that would be helpful,
@Devin G. Bost, if you don't see one already.
----
2020-09-22 03:42:18 UTC - Rahul Vashishth: @Addison Higham please suggest?
----
2020-09-22 03:52:21 UTC - Addison Higham: I am not entirely sure what you are
asking for. Messages won't be deleted by the retention policy if they are still
in a subscription; retention policies only apply to messages not in any
subscription. Additionally, deletion of messages isn't an immediate operation:
they are marked to be deleted, but the entire ledger needs to be deleted before
the data is actually removed. The closest metric to what you describe would be
`pulsar_ml_MarkDeleteRate`, though that may only be per namespace.
----
2020-09-22 05:03:13 UTC - Penghui Li: @Enrico The key_shared subscription can
work with non-persistent topics. Could you please open a GitHub issue and
provide the steps to reproduce? There are some unit tests that already cover
non-persistent topics for the key_shared subscription; you can check them at
<https://github.com/apache/pulsar/blob/f8b2a2334fb7d2dc5266242a6393c9cc434fba60/pulsar-broker/src/test/java/org/apache/pulsar/client/api/KeySharedSubscriptionTest.java#L75>
----
2020-09-22 05:37:18 UTC - Devin G. Bost: Thanks for the report.
----
2020-09-22 05:38:10 UTC - Devin G. Bost: 2.4.1 has a lot of random issues that
get resolved by restarting brokers.
----
2020-09-22 05:38:20 UTC - Devin G. Bost: I definitely recommend moving to 2.6.1
----
2020-09-22 06:26:49 UTC - Rahul Vashishth: In other words, I would need the
total count of messages per topic that were deleted before any subscription
could read them,
OR
the count of messages that were published on the topic but never read by any
subscription, perhaps because the TTL boundary was reached.
----
2020-09-22 08:22:26 UTC - el akroudi abdessamad: Any ideas on how to resolve
this issue, please? I'm trying to take this to production after the POC.
----