2020-01-20 09:37:35 UTC - Julius.b: Would generally recommend to use localrun 
instead of create
----
2020-01-20 09:38:28 UTC - Julius.b: Python code inside a function:
```context.publish(publish_topic, str(dict1))
context.ack(msg_id, publish_topic)```
raises in context.ack():
```raise ValueError('Invalid topicname %s' % topic)
ValueError: Invalid topicname cleaned_data```
The message gets published but not acknowledged.
How should the topic name be declared?
I thought it would just be the name of the topic as a string.
I also tried "persistent://public/default/cleaned_data".
Thanks for the help
----
2020-01-20 09:50:21 UTC - Fernando: just looking at the source code, it seems it looks for a specific name pattern:
```def ack(self, msgid, topic):
    topic_consumer = None
    if topic in self.consumers:
      topic_consumer = self.consumers[topic]
    else:
      # if this topic is a partitioned topic
      m = re.search('(.+)-partition-(\d+)', topic)
      if not m:
        raise ValueError('Invalid topicname %s' % topic)
      elif m.group(1) in self.consumers:
        topic_consumer = self.consumers[m.group(1)]
      else:
        raise ValueError('Invalid topicname %s' % topic)
    topic_consumer.acknowledge(msgid)```
or it has to be in the map of consumers
----
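The lookup Fernando quotes can be exercised in isolation. Below is a minimal standalone sketch of that logic; the `consumers` dict and the `resolve_consumer_topic` name are illustrative helpers, not Pulsar API:

```python
import re

def resolve_consumer_topic(consumers, topic):
    # Mimic the quoted ack() lookup: the topic must be one of the
    # function's input topics (or a partition of one) present in the
    # consumers map; any other topic is rejected.
    if topic in consumers:
        return topic
    # partitioned topics arrive as "<base>-partition-<n>"
    m = re.search(r'(.+)-partition-(\d+)', topic)
    if m and m.group(1) in consumers:
        return m.group(1)
    raise ValueError('Invalid topicname %s' % topic)

consumers = {'persistent://public/default/python_input': 'consumer-stub'}
# An input topic (or one of its partitions) resolves...
print(resolve_consumer_topic(
    consumers, 'persistent://public/default/python_input-partition-3'))
# ...while a publish-only topic such as 'cleaned_data' raises ValueError.
```

This is why acking against the publish topic fails: only input topics ever enter the consumers map.
----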
2020-01-20 10:06:09 UTC - Julius.b: I'll look it up
----
2020-01-20 10:06:27 UTC - Fernando: and the topics that are in the consumers 
map are just the ones you put in your input topics configuration, which makes 
sense since you’re acking the incoming message
----
2020-01-20 10:07:10 UTC - Fernando: in other words your publish topic creates a 
producer not a consumer
----
2020-01-20 10:07:47 UTC - Fernando: so `context.ack(msg_id)`  instead
----
2020-01-20 10:08:23 UTC - Julius.b: the topic is required
----
2020-01-20 10:08:56 UTC - Julius.b: `context.ack(msg_id, input_topic)`, to be 
precise, should always work
----
2020-01-20 10:09:14 UTC - Fernando: ah true you want to ack an input topic not 
an output one
----
2020-01-20 10:10:15 UTC - Julius.b: btw what happens if the publish function 
doesn't get acknowledged? :smile:
----
2020-01-20 10:13:11 UTC - Fernando: I’m not sure I understand, you ack a 
message that you consume, not the one you produce
----
2020-01-20 10:48:04 UTC - Julius.b: Problem is still not solved... I need to 
pass the correct value to the topic parameter... The topics are correct and 
exist.
----
2020-01-20 10:48:49 UTC - Fernando: can you show me the code?
----
2020-01-20 10:58:39 UTC - Julius.b: ```class Pythonclean(Function):

    def __init__(self):
        self.input_topic = 'persistent://public/default/python_input'
        self.cleaned_data_topic = 'cleaned_data'

    def process(self, input, context):
        if "publish-topic" in context.get_user_config_map():
            input_topic = context.get_user_config_value("input-topic")
            publish_topic = context.get_user_config_value("publish-topic")
        else:
            input_topic = self.input_topic
            publish_topic = self.cleaned_data_topic
        try:
            dict1 = eval(input)
            assert type(dict1) is dict
        except NameError:
            input = input.replace(": NaN", ": None")
            try:
                dict1 = eval(input)
            except Exception as e:
                dict1 = {"id": 0, "Error": e}
                pass  # publish in false topic
        except Exception as e:
            dict1 = {"id": 0, "Error": e}
        try:
            assert len(dict1["webpage"]) > 0
        except Exception as e:
            return input

        msg_id = context.get_message_id()
        context.publish(publish_topic, str(dict1))
        context.ack(msg_id, input_topic)
        return```

----
2020-01-20 10:59:18 UTC - Julius.b: I am getting no errors out of this code... 
seems to be fine
----
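One side note on the function above: `eval` on a raw payload will execute arbitrary expressions. A safer sketch, assuming payloads are dict-shaped literals, is `ast.literal_eval`; the `parse_payload` helper below is illustrative, not part of the function above:

```python
import ast

def parse_payload(raw):
    # NaN is not a Python literal, so map it to None first,
    # just as the NameError branch of the function above does.
    raw = raw.replace(": NaN", ": None")
    try:
        value = ast.literal_eval(raw)
    except (ValueError, SyntaxError):
        return {"id": 0, "Error": "unparseable payload"}
    if not isinstance(value, dict):
        return {"id": 0, "Error": "payload is not a dict"}
    return value

print(parse_payload('{"webpage": "example.org", "score": NaN}'))
```

Unlike `eval`, `literal_eval` accepts only literals, so a malicious payload cannot run code inside the function.
----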
2020-01-20 10:59:56 UTC - Fernando: ok so is the problem solved?
----
2020-01-20 11:00:03 UTC - Julius.b: true
----
2020-01-20 11:00:08 UTC - Fernando: cool
----
2020-01-20 11:00:19 UTC - Julius.b: Thanks
----
2020-01-20 11:02:01 UTC - Fernando: no problem
----
2020-01-20 11:13:32 UTC - Swaroop Kumar: @Swaroop Kumar has joined the channel
----
2020-01-20 13:07:37 UTC - Amit Vyas: @Amit Vyas has joined the channel
----
2020-01-20 19:39:39 UTC - Endre Karlson: Anyone here deploying to Azure who can 
share their instance sizes?
----
2020-01-20 20:05:32 UTC - Naby: Hi @Matteo Merli, I was wondering if you have 
any comment on this problem I'm having. Thanks.
----
2020-01-20 20:16:28 UTC - Nick Ruhl: @Jerry Peng Hi Jerry. I hope all is well. 
I am looking to deploy the OpenMessaging benchmark to k8s and found your helm chart here:
```<https://github.com/openmessaging/openmessaging-benchmark/blob/master/deployment/kubernetes/helm/benchmark/Chart.yaml>```
I was able to get it deployed to k8s and wanted to see if you had some time to 
answer a couple questions to help me get this tool connected to my Pulsar 
cluster.
1. Is this code still maintained and current or is there another project where 
this is maintained?
2. Have you ever used open-messaging within a k8s cluster to benchmark a pulsar 
cluster deployed within another k8s cluster?
3. Do you have any other documentation, scripts or breadcrumbs to help get me 
started with this?
Thanks for your time.
----
2020-01-20 20:20:30 UTC - Jerry Peng: @Nick Ruhl
> Is this code still maintained and current or is there another project 
where this is maintained?
Yes we have recently used it to benchmark Pulsar running on Kubernetes at Splunk
> Have you ever used open-messaging within a k8s cluster to benchmark a 
pulsar cluster deployed within another k8s cluster?
No, though it should be possible with some modification. However, you should be 
mindful of cross cluster latency implications
> Do you have any other documentation, scripts or breadcrumbs to help get me 
started with this?
Unfortunately, the documentation is sparse for this.  All the docs I know of 
for this are here:
<https://github.com/openmessaging/openmessaging-benchmark/tree/master/deployment/kubernetes/helm>
----
2020-01-20 20:21:19 UTC - Jerry Peng: though feel free to ask questions in this 
channel about benchmarking Pulsar using the OpenMessaging benchmark.  Many 
people in this channel have experience using it.
----
2020-01-20 20:28:14 UTC - Nick Ruhl: @Jerry Peng thank you
----
2020-01-20 21:13:37 UTC - Addison Higham: hrm, having issues with the kubernetes 
function runtime and using `log-topic`: it doesn't seem like logs are making 
it to the log topic, but the log topic does get connected to by a producer (at 
least from the logs I see for the function's pod)
----
2020-01-20 21:16:27 UTC - Addison Higham: yeah, we are specifying a fully 
qualified topic
----
2020-01-20 22:40:24 UTC - Sijie Guo: 1. Are you able to see the command of 
the function pod running the Java instance? It would be great to see whether the 
topic is passed to the Java instance.
2. How do you log the messages in your function?

----
2020-01-20 22:52:01 UTC - Addison Higham: @Sijie Guo Will get that for you, but 
my investigation so far has revealed:
1. I do see an active connected producer to the log topic from the function's 
pod, and I also see the log topic in the function configuration that is logged 
on function startup
2. Looking at the code, it looks like we just dynamically add a log4j appender 
(the `LogAppender`) to ALL the loggers (and then remove it after each message 
and re-add it?)

----
2020-01-20 22:54:06 UTC - Addison Higham: oh and we just `context.getLogger` to 
get the logger
----
2020-01-20 22:57:56 UTC - Addison Higham: so, from the function, I am loading 
the log4j LoggerContext to inspect the loggers and appenders
```Logger logger = context.getLogger();

LoggerContext logCtx = LoggerContext.getContext(false);
Configuration config = logCtx.getConfiguration();
for (final LoggerConfig loggerConfig : config.getLoggers().values()) {
    loggerConfig.getAppenders().forEach((name, appender) -> {
        logger.info("from {} for {} have appender {} with state {}",
                logger.getName(), loggerConfig.toString(), name,
                appender.getState());
    });
}```
----
2020-01-20 22:58:38 UTC - Addison Higham: and I get this from the logs
```22:55:40.321 [restructure-dev/default/upcase-test-0] INFO  function-upcase-test - from function-upcase-test for root have appender Console with state STARTED
22:55:40.321 [restructure-dev/default/upcase-test-0] INFO  function-upcase-test - from function-upcase-test for root have appender restructure-dev/default/upcase-test with state STARTED
22:55:40.321 [restructure-dev/default/upcase-test-0] INFO  function-upcase-test - from function-upcase-test for org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper have appender Console with state STARTED
22:55:40.321 [restructure-dev/default/upcase-test-0] INFO  function-upcase-test - from function-upcase-test for org.apache.pulsar.functions.runtime.shaded.org.apache.bookkeeper have appender restructure-dev/default/upcase-test with state STARTED```
----
2020-01-20 23:02:07 UTC - Abraham: > Currently, debugging with localrun mode 
is only supported by Pulsar Functions written in Java.
----
2020-01-20 23:02:23 UTC - Abraham: from 
<http://pulsar.apache.org/docs/en/functions-debug/#debug-with-localrun-mode>
----
2020-01-20 23:02:42 UTC - Addison Higham: so, what I think is happening: the 
logger instance passed into `ContextImpl`, which is a newly created `Logger` 
from `LoggerFactory`, isn't getting discovered by the `LoggerContext`. AFAICT 
this may be by design, as `LoggerContext` seems to only keep track of loggers 
created by the log4j configuration files, not those created directly via a 
factory
----
2020-01-20 23:03:35 UTC - Abraham: I’m guessing a lot of the problems I’ve been 
running into are due to trying to use the python client
----
2020-01-20 23:10:39 UTC - Addison Higham: okay, if I manually grab the root 
logger and log to that logger, I do get messages on the log topic. So yes, it 
appears that the logger we create with `LoggerFactory` in 
`JavaInstanceRunnable` and pass to `ContextImpl` is not getting the pulsar 
`LogAppender` attached
----
2020-01-20 23:54:17 UTC - Addison Higham: okay, I think I have a fix, will get 
it tested soon-ish and upstreamed
----
2020-01-21 00:02:25 UTC - Eric Simon: @Eric Simon has joined the channel
----
2020-01-21 00:03:30 UTC - Tamer: We had a successful meetup on Pulsar last week

<https://www.youtube.com/watch?v=jLruEmh3ve0>
+1 : Ali Ahmed, Jerry Peng, David Kjerrumgaard, Jasper Li, Sijie Guo, Karthik 
Ramasamy, Zhenhao Li
----
2020-01-21 00:04:17 UTC - Tamer: This is a repo for demos used in the meetup 
(not recorded in the video); contributions are welcome

<https://github.com/bitspire/pulsar-meetup>
----
2020-01-21 00:04:45 UTC - Ali Ahmed: do you have a link to the slides ?
----
2020-01-21 00:12:36 UTC - Tamer: The link I shared was just a draft, will get 
you the final slides tonight.
----
2020-01-21 00:48:18 UTC - Sijie Guo: The documentation was trying to say that 
debugging using an IDE is only available for Java localrun, but you can still 
use `bin/pulsar-admin functions localrun` to run a function locally on your 
laptop.
+1 : Julius.b
----
2020-01-21 02:16:55 UTC - Sijie Guo: @Sijie Guo set the channel topic: - Pulsar 
2.5.0 released! 
<http://pulsar.apache.org/release-notes/#250-mdash-2019-12-06-a-id250a>
- Pulsar user survey: <https://bit.ly/2Qtrrnf>
- Pulsar Summit SF 2020: <https://pulsar-summit.org/>
party-parrot : Fernando, Vivek Prasad
+1 : Fernando
tada : Fernando, Sergii Zhevzhyk
----
2020-01-21 03:41:32 UTC - Tamer: 
<https://drive.google.com/file/d/1Yaqg3qdWtJu7jNp6GuppgGWnn1ACtTtz/view>
----
