2019-11-12 09:28:26 UTC - Sijie Guo: I see. The current Debezium source writes a 
key/value schema to a topic, but the JDBC sink doesn't handle key/value schemas right now.

Can you create a github issue? We can triage to add this feature. @tuteng and 
@Penghui Li can help with this.
+1 : Penghui Li
----
2019-11-12 12:11:17 UTC - sundar: Thanks for all your help, the pulsar admin is 
now working.
----
2019-11-12 12:14:18 UTC - sundar: But we hit another problem... we created a 
GitHub issue for it; once it's resolved, we feel the setup will be complete.
<https://github.com/apache/pulsar/issues/5630>
Can you please check it out? Thanks in advance.
----
2019-11-12 15:15:37 UTC - Benjamin Egelund-Müller: Hello, does anyone have 
experience or a helpful tip on how to (reasonably efficiently) look up the ~100 
latest messages on a topic?
----
2019-11-12 16:17:52 UTC - Addison Higham: I am not aware of any plans, but I am 
sure there will be more interest in the future; the Pulsar community is 
growing pretty quickly!
----
2019-11-12 16:52:49 UTC - Pedro Cardoso: Is there a timeout for creating a 
pulsar function? I get an HTTP 500 response intermittently when deploying a 
pulsar function

```
bin/pulsar-admin functions create \
  --jar <path_to_executable.jar> \
  --name rollingsum \
  --classname RollingSum \
  --inputs non-persistent://public/default/transaction-input \
  --output non-persistent://public/default/transaction-output
HTTP 500 Internal Server Error

Reason: HTTP 500 Internal Server Error
```
----
2019-11-12 16:53:24 UTC - Pedro Cardoso: Is there a way to get more information 
on why the server failed?
----
2019-11-12 16:55:01 UTC - Matteo Merli: There are two possible approaches:
 1. If you know the time, you can create a reader at that timestamp (or seek a 
consumer); see the sketch below
 2. Keep a subscription with 100 unacked messages so that when you reconnect 
you can always replay them
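
A minimal Java sketch of approach 1, assuming the standard Pulsar Java client; the 
service URL, topic name, and 10-minute window are placeholders:

```
import java.util.concurrent.TimeUnit;

import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Reader;

public class ReadRecentMessages {
    public static void main(String[] args) throws Exception {
        try (PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")              // placeholder broker URL
                .build();
             Reader<byte[]> reader = client.newReader()
                     .topic("persistent://public/default/my-topic") // placeholder topic
                     .startMessageId(MessageId.earliest)
                     .create()) {

            // Approach 1: jump to a point in time (here: 10 minutes ago),
            // then read forward until the reader catches up with the tail.
            long tenMinutesAgo = System.currentTimeMillis() - TimeUnit.MINUTES.toMillis(10);
            reader.seek(tenMinutesAgo);

            while (reader.hasMessageAvailable()) {
                Message<byte[]> msg = reader.readNext(1, TimeUnit.SECONDS);
                if (msg == null) {
                    break; // no message arrived within the timeout
                }
                System.out.printf("%s -> %d bytes%n", msg.getMessageId(), msg.getData().length);
            }
        }
    }
}
```

Approach 2 needs no seeking: keep a subscription, leave the most recent messages 
unacked, and they are redelivered when the consumer reconnects.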
----
2019-11-12 17:02:23 UTC - Pedro Cardoso: Found the following in the pulsar 
broker logs (this is a minikube-based cluster):
```
16:53:46.360 [DL-io-0] ERROR 
org.apache.bookkeeper.common.allocator.impl.ByteBufAllocatorImpl - Unable to 
allocate memory
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 
byte(s) of direct memory (used: 117440519, max: 134217728)
```

The quick fix is just to increase bookie memory right? Or is there something 
else I'm missing?
----
2019-11-12 17:05:49 UTC - Benjamin Egelund-Müller: With approach 1., do you 
mean keeping track of recently written message IDs in an external system?
----
2019-11-12 17:11:29 UTC - Dave: @Dave has joined the channel
----
2019-11-12 17:17:42 UTC - Benjamin Egelund-Müller: Naively, I’d hope I could 
look up the latest message ID from the broker, subtract N from that number to 
get the message ID of the Nth latest row, then create a reader on that message 
ID. I’m getting the feeling things are not that simple – but why exactly not? 
(I’m still learning how Pulsar works internally)
----
2019-11-12 17:39:03 UTC - Alvaro Ruiz Ramirez: @Alvaro Ruiz Ramirez has joined 
the channel
----
2019-11-12 17:51:21 UTC - Devin G. Bost: Sorry for my delay.
We’re using functions, as well as Java consumers and producers. We also have Go 
consumer/producers.
We haven’t unloaded the topics or namespaces because we can’t afford the data loss.
----
2019-11-12 17:51:46 UTC - Devin G. Bost: How can I help with this issue? This 
issue impacts us enough that I’d like to help with the fix if possible.
----
2019-11-12 18:56:49 UTC - Ryan Samo: Is there a way to get the count of 
messages that exist in a topic?
----
2019-11-12 18:57:39 UTC - Jerry Peng: @Ryan Samo not an exact count but you can 
get the number of BK entries or batches of messages
----
2019-11-12 18:58:04 UTC - Jerry Peng: If batching is turned off then it's a 
one-to-one ratio with messages
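
For reference, batching is a producer-side setting; a minimal Java sketch of 
disabling it (the service URL and topic are placeholders):

```
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class NoBatchingProducer {
    public static void main(String[] args) throws Exception {
        try (PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")              // placeholder broker URL
                .build();
             Producer<byte[]> producer = client.newProducer()
                     .topic("persistent://public/default/my-topic") // placeholder topic
                     .enableBatching(false) // each message becomes its own BookKeeper entry
                     .create()) {
            producer.send("hello".getBytes());
        }
    }
}
```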
----
2019-11-12 18:58:28 UTC - Ryan Samo: Ok, just looking for a useful way to tie 
out from producer to consumer
----
2019-11-12 18:59:00 UTC - Ryan Samo: So you mean using the BK shell then, @Jerry 
Peng?
----
2019-11-12 18:59:24 UTC - Jerry Peng: ./bin/pulsar-admin topics stats-internal 
<TOPIC>
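
The same internal stats are also available programmatically through the admin 
client; a rough Java sketch, assuming a placeholder admin URL and topic (note 
that `numberOfEntries` counts BookKeeper entries, i.e. batches, not individual 
messages):

```
import org.apache.pulsar.client.admin.PulsarAdmin;
import org.apache.pulsar.common.policies.data.PersistentTopicInternalStats;

public class EntryCount {
    public static void main(String[] args) throws Exception {
        try (PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")            // placeholder admin URL
                .build()) {
            String topic = "persistent://public/default/my-topic";  // placeholder topic
            PersistentTopicInternalStats stats = admin.topics().getInternalStats(topic);
            // numberOfEntries = entries currently retained in the managed ledger;
            // entriesAddedCounter = entries added since the topic was last loaded.
            System.out.println("entries retained: " + stats.numberOfEntries);
            System.out.println("entries added since load: " + stats.entriesAddedCounter);
        }
    }
}
```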
----
2019-11-12 18:59:50 UTC - Ryan Samo: Ok thanks!
----
2019-11-12 19:01:41 UTC - Retardust: It's not really messages, actually. It's DB 
replication state, a journal log.
----
2019-11-12 19:07:28 UTC - Retardust: I have an incoming topic, a mapper Java app, 
and a downstream topic. I need to preserve ordering guarantees, and I'm wondering 
whether I lose them when acking
----
2019-11-12 20:04:35 UTC - Aafaq Ahmad: @Aafaq Ahmad has joined the channel
----
2019-11-12 21:25:09 UTC - Oleg Kozlov: Hi everyone, can someone explain the 
meaning of the proxy_to_broker_url field in the CommandConnect message, and 
whether it is required if we connect to our brokers via the Pulsar Proxy?
----
2019-11-12 21:28:12 UTC - Matteo Merli: Yes, it's required when the client goes 
through a proxy
----
2019-11-12 21:29:30 UTC - Matteo Merli: if there's a proxy, the Lookup will 
have the `ProxyThroughServiceUrl` field set to true
----
2019-11-12 21:30:37 UTC - Matteo Merli: at that point, the client will connect 
to the original service url (the proxy) and mark the connection to be proxied 
to the broker returned in the lookup
----
2019-11-12 21:35:06 UTC - Oleg Kozlov: Hi, thanks for the response
----
2019-11-12 21:35:23 UTC - Oleg Kozlov: We are implementing our own client, 
using a Proxy -> brokers connection, on Kubernetes
----
2019-11-12 21:35:53 UTC - Oleg Kozlov: Just to clarify - does the client have 
to do a Lookup before connecting to Proxy to figure out the broker url?
----
2019-11-12 21:40:02 UTC - Oleg Kozlov: Just to make sure we understand the flow:
1) Client connects to proxy
2) Sends CommandLookupTopic, and gets broker url in CommandLookupTopicResponse
3) Closes connection
4) Creates another connection with proxy_to_broker_url set to that broker URL ?
----
2019-11-12 21:45:38 UTC - Matteo Merli: > 3) Closes connection

Typically you keep the connections pooled, both for lookup and for creating 
producers
----
2019-11-12 21:46:36 UTC - Matteo Merli: Correct, that's more or less the 
sequence. The client doesn't know initially that it's going through the proxy
----
2019-11-12 21:49:54 UTC - Oleg Kozlov: Is there any way to make a specific 
connection from client to proxy be able to work for any topic on any of the 
brokers?
----
2019-11-12 21:50:12 UTC - Oleg Kozlov: not just topics owned by specific given 
broker?
----
2019-11-12 21:50:50 UTC - Matteo Merli: no, the proxy maintains a 1-1 
connection between client and a specific broker
----
2019-11-12 21:51:41 UTC - Matteo Merli: this is done on purpose so that the 
proxy is very simple and efficient. After the initial connect/connected 
handshake, it will just become a dumb TCP proxy
----
2019-11-12 21:52:49 UTC - Oleg Kozlov: Chris, thank you for the response, this 
helps. We ended up using the last option (terminating SSL on pulsar proxy and 
going plain text proxy <-> broker), it's working now.
----
2019-11-12 21:54:47 UTC - Oleg Kozlov: got it. Then it's not clear what the 
actual benefit of using the proxy is: if we have a Kubernetes load balancer in 
front anyway, what does it give us?
----
2019-11-12 21:58:24 UTC - Matteo Merli: That you don't have to expose each 
broker IP to the client
----
2019-11-12 21:58:43 UTC - Matteo Merli: without proxy, the client will need to 
have direct TCP connectivity to brokers
----
2019-11-12 21:59:12 UTC - Matteo Merli: you cannot mask brokers behind a single 
VIP load balancer
----
2019-11-12 22:00:04 UTC - Matteo Merli: the proxy, being completely stateless, 
can be put behind a VIP
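
To illustrate the point with the standard Java client: only the proxy's single 
endpoint is configured, and both lookups and data connections are tunnelled 
through it. The host name and topic below are placeholders:

```
import java.util.List;

import org.apache.pulsar.client.api.PulsarClient;

public class ViaProxy {
    public static void main(String[] args) throws Exception {
        // Only the proxy's VIP is configured; the client never needs direct
        // TCP connectivity to broker IPs.
        try (PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://pulsar-proxy.example.com:6650") // placeholder VIP
                .build()) {
            List<String> partitions = client
                    .getPartitionsForTopic("persistent://public/default/my-topic") // placeholder topic
                    .get(); // this lookup goes through the proxy
            System.out.println("partitions: " + partitions);
        }
    }
}
```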
----
2019-11-12 22:01:27 UTC - Oleg Kozlov: i see, thanks
----
2019-11-12 22:01:47 UTC - Oleg Kozlov: this is very helpful, we'll update our 
flow to include the lookup
----
2019-11-12 22:14:27 UTC - Matteo Merli: :+1:
----
2019-11-12 22:45:26 UTC - Matthew Simoneau: @Matthew Simoneau has joined the 
channel
----
2019-11-13 01:08:53 UTC - Dennis Yung: @Naveen Kumar it is actually in 
Confluent's own repository. You need to add the following:

```
        <repository>
            <id>confluent</id>
            <name>Confluent</name>
            <url>http://packages.confluent.io/maven/</url>
            <releases>
                <enabled>true</enabled>
                <checksumPolicy>fail</checksumPolicy>
            </releases>
            <snapshots>
                <enabled>true</enabled>
                <checksumPolicy>fail</checksumPolicy>
            </snapshots>
        </repository>
```

Also note that a Confluent schema registry is required for the Avro converter 
to work (you must supply a schema registry argument). The messages will be 
serialized as KeyValue<bytes, bytes> in Pulsar, with the bytes being 
serialized Avro records that require further deserialization.
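
A rough Java sketch of consuming such a topic, assuming the payloads are 
Confluent-framed Avro as described above; the topic, subscription, and 
schema-registry URL are placeholders:

```
import java.util.Map;

import io.confluent.kafka.serializers.KafkaAvroDeserializer;
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Schema;
import org.apache.pulsar.common.schema.KeyValue;

public class ReadDebeziumAvro {
    public static void main(String[] args) throws Exception {
        // Deserializer for the Confluent wire format (magic byte + schema id + Avro body).
        KafkaAvroDeserializer avro = new KafkaAvroDeserializer();
        avro.configure(Map.of("schema.registry.url", "http://localhost:8081"), false); // placeholder registry

        try (PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")                // placeholder broker URL
                .build();
             Consumer<KeyValue<byte[], byte[]>> consumer = client
                     .newConsumer(Schema.KeyValue(Schema.BYTES, Schema.BYTES))
                     .topic("persistent://public/default/my-source-topic") // placeholder topic
                     .subscriptionName("avro-reader")
                     .subscribe()) {

            Message<KeyValue<byte[], byte[]>> msg = consumer.receive();
            KeyValue<byte[], byte[]> kv = msg.getValue();
            // Both key and value still need the Confluent Avro deserialization step.
            Object key = avro.deserialize(msg.getTopicName(), kv.getKey());
            Object value = avro.deserialize(msg.getTopicName(), kv.getValue());
            System.out.println(key + " -> " + value);
            consumer.acknowledge(msg);
        }
    }
}
```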
----
2019-11-13 04:31:25 UTC - Raja CSP: @Raja CSP has joined the channel
----
2019-11-13 07:46:13 UTC - Endre Karlson: hi guys, how is Pulsar doing for 
exactly-once semantics, and does it differ between local and geo-replicated setups?
----
2019-11-13 08:13:12 UTC - Jasper Li: I have faced this issue before and 
increased brokers' memory to solve it.
----
