2020-05-26 09:32:07 UTC - Abhilash Mandaliya: Hello guys. It seems that I have 
found a bug. Can anyone help with this?

<https://github.com/apache/pulsar/issues/7043>
----
2020-05-26 10:14:16 UTC - Kirill Merkushev: Can this help? 
<https://lanwen.ru/posts/pulsar-functions-how-to-debug-with-testcontainers/> 
there are some examples using admin client, which is utilizing rest api under 
the hood
----
2020-05-26 11:22:09 UTC - Konstantinos Papalias: Interesting one @Patrik 
Kleindl, sorry for getting off topic, but would some of your Kafka Streams use 
cases work out of the box with KOP as long as your clients are v2.0.0 (or up to 
2.3.0, undocumented)? Since Kafka Streams is effectively a library that uses 
Kafka Clients under the hood :slightly_smiling_face:.
----
2020-05-26 11:25:06 UTC - Konstantinos Papalias: how did you run your 
standalone?
----
2020-05-26 11:35:52 UTC - Luke Stephenson: Thanks for the responses.  I've 
switched to a Failover subscription and it is working as I require.
slightly_smiling_face : Patrik Kleindl
ok_hand : Konstantinos Papalias
----
2020-05-26 11:39:38 UTC - Deepak Sah: Hi everyone, according to the docs a 
topic gets deleted after 60 seconds of its creation if it is inactive. When I 
try to list topics, that topic is not listed but when I try to create a topic 
with the same name I get an error saying that the topic with that name already 
exists. Is this the desired behaviour?
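
For context, the 60-second cleanup described in the docs maps to broker 
settings along these lines (a sketch of `broker.conf` keys; exact names and 
defaults depend on the Pulsar version):
```# delete topics that have no producers, consumers or backlog
brokerDeleteInactiveTopicsEnabled=true
# how often the broker scans for inactive topics, in seconds
brokerDeleteInactiveTopicsFrequencySeconds=60```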
----
2020-05-26 11:58:00 UTC - Patrik Kleindl: @Konstantinos Papalias I highly doubt 
it, as Kafka Streams also uses the Kafka Admin Client under the hood, e.g. to 
create intermediate topics.
As soon as KOP has caught up to a more recent version I might give it a try 
just for fun. I still consider Kafka Streams a highly undervalued part of Kafka 
as it allows a great deal of stream processing in a Java application without 
any other infrastructure.
100 : Konstantinos Papalias
raised_hands : Konstantinos Papalias
----
2020-05-26 12:13:41 UTC - Chris DiGiovanni: These are running on VMs, but note 
taken for k8s config.
----
2020-05-26 13:57:35 UTC - Josh H: @Josh H has joined the channel
----
2020-05-26 14:11:59 UTC - Raman Gupta: Anyone?
----
2020-05-26 14:17:21 UTC - Konstantinos Papalias: @Patrik Kleindl I'm with you 
on this one and 100% agree that the part I miss the most from the Kafka 
ecosystem is Kafka Streams; I cannot highlight enough how easy it is to achieve 
(true) stream processing without the need for yet another separate cluster / 
tech. I wasn't aware that it also uses the Kafka Admin Client, but it probably 
makes sense. Although this would only be needed if your topics are not 
co-partitioned based on the same key etc. I would be really keen to test Kafka 
Streams for some core functionality, with the above restrictions of course 
:slightly_smiling_face:
----
2020-05-26 14:26:36 UTC - Patrik Kleindl: I haven’t checked if you can override 
the AdminClient when starting Streams, that would probably allow some kind of 
workaround for Pulsar.
pray : Konstantinos Papalias
----
2020-05-26 14:29:46 UTC - Penghui Li: This is not normal behavior, could you 
please provide some details? I can’t reproduce it on the master branch.
----
2020-05-26 14:46:43 UTC - Raman Gupta: So is scaling bookkeeper down really 
this hard? I see that changing the persistence options in the Pulsar namespace 
does not change them for existing ledgers, and without that I can't scale the 
cluster down.
----
2020-05-26 14:56:20 UTC - Max Streese: @Max Streese has joined the channel
----
2020-05-26 15:30:43 UTC - Tanner Nilsson: Here's what I found just last week 
for creating python functions from the API:

For the equivalent of the pulsar-admin command
```bin/pulsar-admin functions create \
    --tenant public \
    --namespace default \
    --classname main.Function \
    --inputs persistent://public/default/input1,persistent://public/default/input2 \
    --output persistent://public/default/output \
    --name function-name \
    --py /path/to/function.py```
using curl:
```curl -X POST http://localhost:8080/admin/v3/functions/public/default/function-name \
    -H "Authorization: Bearer <token>" \
    -F functionConfig='{
        "tenant":"public",
        "namespace":"default",
        "className":"function.Function",
        "inputs":[
            "persistent://public/default/input1",
            "persistent://public/default/input2"],
        "output":"persistent://public/default/output",
        "runtime":"PYTHON"};type=application/json' \
    -F data=@/path/to/function.py```
using python:
```import requests
import json

function_config = {
    "tenant":"public",
    "namespace":"default",
    "className":"function.Function",
    "inputs":[
        "persistent://public/default/input1",
        "persistent://public/default/input2"],
    "output":"persistent://public/default/output",
    "runtime":"PYTHON"}

files = {"functionConfig": (None, json.dumps(function_config),
                            "application/json"),
         "data": ("function.py", open("/path/to/function.py", "rb"),
                  "application/octet-stream")}

response = requests.post(
    "http://localhost:8080/admin/v3/functions/public/default/function-name",
    headers={"Authorization": "Bearer <token>"},
    files=files
)```
hopefully that helps
+1 : Patrik Kleindl
----
2020-05-26 16:02:21 UTC - Matthew Follegot: @Matthew Follegot has joined the 
channel
----
2020-05-26 16:09:12 UTC - Tymm: I've tried:
```sudo bin/pulsar standalone
sudo bin/pulsar-daemon start standalone```
I got the same error from both
----
2020-05-26 16:22:57 UTC - Girish Ramnani: @Girish Ramnani has joined the channel
----
2020-05-26 16:25:18 UTC - Sijie Guo: If you are interested in learning the 
Pulsar story of Nutanix, you can check out the live stream “Lessons from 
managing a pulsar cluster” presented by Shivji from Nutanix: 
<https://www.youtube.com/watch?v=zAHxgG_U67Q&feature=youtu.be>
+1 : Deepak Sah, Gilles Barbier, Konstantinos Papalias
----
2020-05-26 16:33:44 UTC - Deepak Sah: Sure, will provide more details on this 
:thumbsup:
----
2020-05-26 16:49:07 UTC - Addison Higham: @Raman Gupta are you using pulsar 
functions? In a previous version of the helm chart, the 
`numFunctionPackageReplicas` was set to 1. User code for pulsar functions gets 
uploaded to bookkeeper, but if you don't change that setting, it won't have any 
replicas. That would be one problematic ledger.

In general though, I would suggest you familiarize yourself with the bookkeeper 
CLI (`bin/bookkeeper shell`). You can view data about your ledgers and see 
which ones are under-replicated.

I am not familiar with any way of changing a ledger's ensemble after the fact 
(but perhaps it is possible? not sure...) one other option:
- If you have storage offloading on, your ledgers with bad persistence 
policies will eventually be deleted
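
The kind of inspection I mean looks roughly like this (a sketch against a 
running cluster; sub-command names can vary slightly between bookkeeper 
versions, and the ledger id is a made-up example):
```# list ledgers currently flagged as under-replicated
bin/bookkeeper shell listunderreplicated

# dump metadata (ensemble, write/ack quorums) for one ledger
bin/bookkeeper shell ledgermetadata -ledgerid 12345```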
----
2020-05-26 16:50:05 UTC - Raman Gupta: I've identified the problematic ledgers, 
and they are Pulsar topics for which I've already modified the settings.
----
2020-05-26 17:05:46 UTC - Deonvdv: @Deonvdv has joined the channel
----
2020-05-26 17:09:23 UTC - Addison Higham: I am not aware of a CLI tool that 
lets you change ensemble size, but perhaps you *could* experiment with manually 
bumping the ensemble size in zookeeper? Assuming the ledger is closed, that 
likely isn't going to cause problems with writes, but it may then be flagged as 
under-replicated while your existing bookie is up
----
2020-05-26 17:15:32 UTC - Raman Gupta: Well I'm actually trying to scale 
*down*, not *up* -- the problem is that the recovery process doesn't complete 
because it thinks it needs the shut down bookie to meet the ledger ensemble 
requirements
----
2020-05-26 17:23:52 UTC - Deepa: thanks @Sijie Guo
----
2020-05-26 18:13:06 UTC - Addison Higham: how big is the ensemble size for that 
ledger?
----
2020-05-26 18:13:14 UTC - Raman Gupta: 3
----
2020-05-26 18:13:27 UTC - Addison Higham: and are you trying to go down to 2 
bookies?
----
2020-05-26 18:13:31 UTC - Raman Gupta: Yes
----
2020-05-26 18:14:16 UTC - Raman Gupta: I know that's a bad idea for durability 
but there are other considerations...
----
2020-05-26 18:59:25 UTC - Sijie Guo: &gt; I see that changing the persistence 
options in the Pulsar namespace does not change them for existing ledgers, and 
without that I can’t scale the cluster down.
changing persistence settings doesn’t change existing ledgers. it only applies 
to new ledgers.

So if you only have 3 bookies and already have ledgers with ensemble size 3, it 
is hard to go down to 2 at this moment.
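
For reference, those namespace-level persistence settings (new ledgers only) 
are changed with `set-persistence`, something like this (flag names as in 
pulsar-admin 2.x; check your version):
```bin/pulsar-admin namespaces set-persistence public/default \
    --bookkeeper-ensemble 2 \
    --bookkeeper-write-quorum 2 \
    --bookkeeper-ack-quorum 2```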
----
2020-05-26 18:59:55 UTC - Sijie Guo: We can add features in bookkeeper to 
support changing replication settings for existing ledgers if needed.
----
2020-05-26 19:06:50 UTC - Mukilesh: @Mukilesh has joined the channel
----
2020-05-26 19:09:41 UTC - Raman Gupta: Thanks @Sijie Guo. I've worked around 
the issue for now by copying the data from those topics into new topics, and 
deleting the old ones. I imagine people scaling down their clusters like this 
would be pretty unusual, but still, having this be simpler than it was would 
have been nice.
----
2020-05-26 19:11:24 UTC - Raman Gupta: I definitely expected the namespace 
changes to affect existing ledgers as well.
----
2020-05-26 19:13:18 UTC - Raman Gupta: Also in the reverse direction: if I 
increase my durability settings by increasing my ensemble size for a namespace, 
I would expect that setting to apply to my existing data, not just new data. If 
it does not (and it doesn't seem to), then that should be a big caveat in the 
docs.
----
2020-05-26 19:43:41 UTC - Raman Gupta: I have ledgers with ensemble size 1 
that are located on a bookie I want to decommission. I don't understand how to 
do this. The recovery process is unable to recover these ledgers with the 
bookie down, but decommissioning cannot start with the bookie up. Not sure of 
the path forward.
----
2020-05-26 19:56:20 UTC - Raman Gupta: Ok, seems like doing a manual `recover` 
command for that bookie worked, rather than leaving it to autorecover on stop.
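For anyone hitting this later, the command in question is roughly (the bookie 
address is a placeholder):
```# trigger recovery of ledgers stored on a specific, stopped bookie
bin/bookkeeper shell recover <bookie-host:port>```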
----
2020-05-26 19:58:34 UTC - Raman Gupta: Seems obvious in hindsight 
:slightly_smiling_face:
----
2020-05-26 21:28:06 UTC - Patrik Kleindl: Hi, I am trying to deploy a Pulsar 
Function to Pulsar running locally in Docker.
The deployment seems to go through, but the healthcheck fails continuously.
Any ideas what might cause this?
----
2020-05-27 02:35:02 UTC - Future Jiang: @Future Jiang has joined the channel
----
2020-05-27 02:58:31 UTC - Ken Huang: I'm using functions in k8s with 
geo-replication. When I create a function at cluster A, it will create 
statefulset resource in cluster A and B.
But cluster B will get errors when creating the function pod. Here are some 
messages:
```Unable to attach or mount volumes: unmounted volumes=[function-auth], 
unattached volumes=[default-token-85hfh function-auth]: timed out waiting for 
the condition```
----
2020-05-27 08:40:27 UTC - Kirill Merkushev: You could check functions logs the 
way described here: 
<https://lanwen.ru/posts/pulsar-functions-how-to-debug-with-testcontainers/#function-logs>
+1 : Patrik Kleindl
----
2020-05-27 08:41:35 UTC - Kirill Merkushev: basically
```/tmp/functions/tenant/ns/func-name/func-name-0.log```

----
2020-05-27 09:04:41 UTC - Patrik Kleindl: Thank you. Got a bit further; it 
seems the Python version in the Docker container versus the one used locally 
was the problem.
