2019-12-18 09:51:58 UTC - Basitkhan: @Basitkhan has joined the channel
----
2019-12-18 09:52:57 UTC - Basitkhan: :wave: Thanks for the invite, @Apache 
Pulsar Admin!
----
2019-12-18 10:00:49 UTC - Sijie Guo: the later one.
+1 : Jasper Li
----
2019-12-18 10:20:35 UTC - Gautam Lodhiya: @Gautam Lodhiya has joined the channel
----
2019-12-18 10:23:22 UTC - Gautam Lodhiya: Hi All,

Need some help with a throughput issue.

I am running Apache Pulsar standalone on my local machine as a Docker container for a queuing system and pushing 1000 jobs to one topic (let's say 'demo').

If I have 1 consumer listening to the 'demo' topic, processing each job and acknowledging it (within 100ms - 500ms), all the jobs get completed in around 80 secs.

But if I do the same 1000-job test with more consumers (2 or 4 consumers), the overall throughput remains the same, around 80 secs.

I am not sure whether I am missing some needed configuration, or whether I need multiple Pulsar brokers, or what I should do so that consumption throughput also increases when I increase the consumers (like around 40-45 secs in the case of 2 consumers).

Docker image: apachepulsar/pulsar
consumer options:
"subscriptionType": "Shared", "receiverQueueSize": 100, "ackTimeoutMillis": 1200000
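A minimal Java sketch of how these options map onto the client API (service URL, topic, and subscription name are placeholders; the processing loop is simplified):
```
import java.util.concurrent.TimeUnit;
import org.apache.pulsar.client.api.*;

public class SharedConsumerExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")        // standalone broker in the Docker container
                .build();

        Consumer<byte[]> consumer = client.newConsumer()
                .topic("demo")                                // placeholder topic name
                .subscriptionName("demo-sub")                 // placeholder subscription name
                .subscriptionType(SubscriptionType.Shared)    // "subscriptionType": "Shared"
                .receiverQueueSize(100)                       // "receiverQueueSize": 100
                .ackTimeout(1200000, TimeUnit.MILLISECONDS)   // "ackTimeoutMillis": 1200000
                .subscribe();

        while (true) {
            Message<byte[]> msg = consumer.receive();
            // ... process the job for 100-500 ms ...
            consumer.acknowledge(msg);
        }
    }
}
```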
+1 : Pradeep Mishra
----
2019-12-18 11:15:55 UTC - vikash: I want to check the number of open connections on the websocket
----
2019-12-18 11:17:23 UTC - vikash: I see UNACKED messages which have not yet been consumed
----
2019-12-18 11:17:58 UTC - vikash: and it has been showing for the past 12 hours
----
2019-12-18 11:18:05 UTC - vikash: how do I debug this type of issue?
----
2019-12-18 11:37:43 UTC - Subbu Ramanathan: Hi,

I'm seeing a memory issue when using the Pulsar source and sink with Flink.
I'm using the pulsar and pulsar-flink libraries v2.4.2 and Flink v1.8.2 on an 8-CPU, 16GB VM running CentOS 7.

I have 20 map transformations, each with its own source and sink, and parallelism set to 8.

If the source and sink are Kafka, then the top command shows me 4% memory usage.
When I use the Pulsar source+sink, the java process consumes 40% memory. This happens even if I have not streamed any data.

My heap size was set to 1024M and I don't see any OutOfMemory errors. I think the increase in memory usage is because Flink uses off-heap memory, which Flink sets to -XX:MaxDirectMemorySize=8388607T (essentially unlimited), and something with the Pulsar source/sink is causing it to consume a lot of it.

I also see this message in the logs: "Error: You are creating too many HashedWheelTimer instances. HashedWheelTimer is a shared resource." I believe this is linked to FLINK-9009.

Any advice on this? Should I be logging a bug for this? Are there any workarounds I could try?
----
2019-12-18 12:19:18 UTC - rmb: Hi all, I have another question about the documentation: <https://pulsar.apache.org/docs/en/next/client-libraries-node/> says that the default message routing mode is UseSinglePartition, but <https://pulsar.apache.org/docs/en/concepts-messaging/#partitioned-topics> says that RoundRobinPartition is the default. Which is correct?
----
2019-12-18 12:43:29 UTC - Fernando: According to the source code it's UseSinglePartition: 
<https://github.com/apache/pulsar/blob/3a2122b99f1b8856dde508e34f92c96d6b051702/pulsar-client-cpp/lib/ProducerConfigurationImpl.h#L55>
You can also set whatever you want when instantiating the producer.
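For example, a rough Java sketch of setting the routing mode explicitly when building the producer (service URL and topic are placeholders; the other client libraries expose a similar option):
```
import org.apache.pulsar.client.api.*;

public class RoutingModeExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")   // placeholder service URL
                .build();

        // Choose the routing mode explicitly instead of relying on the default.
        Producer<byte[]> producer = client.newProducer()
                .topic("persistent://public/default/my-partitioned-topic")  // placeholder topic
                .messageRoutingMode(MessageRoutingMode.RoundRobinPartition)
                .create();

        producer.send("hello".getBytes());
        producer.close();
        client.close();
    }
}
```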
----
2019-12-18 12:49:52 UTC - Roman Popenov: I keep getting the following error:
```helm install --generate-name --values pulsar/values-mini.yaml ./pulsar/
Error: unable to build kubernetes objects from release manifest: error 
validating "": error validating data: 
ValidationError(Deployment.spec.template.spec.containers[0]): unknown field 
"requests" in io.k8s.api.core.v1.Container```
----
2019-12-18 12:53:16 UTC - rmb: thanks!
----
2019-12-18 13:17:47 UTC - Roman Popenov: Is there a particular version of helm 
I should be using?
----
2019-12-18 13:27:15 UTC - Yong Zhang: Which version are you using? Helm 3? If you are using Helm 3, maybe you can try to remove the requests of the grafana section.
----
2019-12-18 13:29:12 UTC - Roman Popenov: ```helm version
version.BuildInfo{Version:"v3.0.1", 
GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa", GitTreeState:"clean", 
GoVersion:"go1.13.4"}```

----
2019-12-18 13:29:22 UTC - Roman Popenov: It is helm 3
----
2019-12-18 13:32:12 UTC - Yong Zhang: You can try to remove the requests of the grafana section in values.yaml
----
2019-12-18 13:32:27 UTC - Yong Zhang: I resolved it that way
----
2019-12-18 13:43:06 UTC - Roman Popenov: Can you guide me a bit on what exactly I should remove?
----
2019-12-18 13:44:16 UTC - Roman Popenov: Actually, I think you might be right, I just set everything pertaining to Grafana to `no`
----
2019-12-18 13:45:03 UTC - Roman Popenov: But that didn’t solve the issue
----
2019-12-18 13:57:24 UTC - Roman Popenov: Ok, I was able to make it work in the end, I must have commented out the wrong block
+1 : Yong Zhang
----
2019-12-18 15:04:36 UTC - Roman Popenov: It doesn't appear that there is a `cluster-metadata.yaml` file in the AWS Kubernetes deployment
----
2019-12-18 16:27:21 UTC - Roman Popenov: Is it needed?
----
2019-12-18 16:39:06 UTC - dbartz: Hi all, we have upgraded our cluster from 2.3.2 to 2.4.2. At first everything went well, but after some time (hours in our case) some consumers do not get the messages. Any insight into where I could investigate first?
----
2019-12-18 16:52:00 UTC - Joe Francis: `pulsar-admin non-persistent stats-internal <topic>` should help you find out. Also look at closed issues since the release.
----
2019-12-18 16:59:41 UTC - dbartz: Thanks @Joe Francis, the issue is with some subscriptions on a persistent topic. For example, we have a `skype` subscription where `bin/pulsar-admin topics stats-internal` returns the following data:
```{
  "markDeletePosition": "203962:30856",
  "readPosition": "203962:30857",
  "waitingReadOp": true,
  "pendingReadOps": 0,
  "messagesConsumedCounter": 580857,
  "cursorLedger": 203881,
  "cursorLedgerLastEntry": 6283,
  "individuallyDeletedMessages": "[]",
  "lastLedgerSwitchTimestamp": "2019-12-18T14:27:13.879Z",
  "state": "Open",
  "numberOfEntriesSinceFirstNotAckedMessage": 1,
  "totalNonContiguousDeletedMessagesRange": 0,
  "properties": {}
}```
----
2019-12-18 18:13:24 UTC - Joe Francis: 
<https://pulsar.apache.org/docs/v1.22.0-incubating/admin-api/partitioned-topics/#Internalstats-f1y6io>
----
2019-12-18 18:51:01 UTC - Mikhail Markov: @Mikhail Markov has joined the channel
----
2019-12-18 18:58:14 UTC - Mikhail Markov: Hello everyone!
I am receiving events over an HTTP API and want to route events to different topics using the Pulsar Client SDK. What is the best way to do that?
As I understand it, the client connects to the Pulsar server when I create a producer with a specific topic, but it is not a good idea to establish a connection on every event. :(
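For illustration, one common pattern (just a sketch, with placeholder names): keep a single PulsarClient for the whole process and cache one producer per topic. The client maintains a connection pool per broker, so reused producers do not open a new connection per event.
```
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import org.apache.pulsar.client.api.*;

public class TopicRouter {
    private final PulsarClient client;
    private final Map<String, Producer<byte[]>> producers = new ConcurrentHashMap<>();

    public TopicRouter(String serviceUrl) throws PulsarClientException {
        // One client per process; it owns the connection pool.
        this.client = PulsarClient.builder().serviceUrl(serviceUrl).build();
    }

    public void route(String topic, byte[] event) {
        // Create the producer for a topic once and reuse it for subsequent events.
        Producer<byte[]> producer = producers.computeIfAbsent(topic, t -> {
            try {
                return client.newProducer().topic(t).create();
            } catch (PulsarClientException e) {
                throw new RuntimeException(e);
            }
        });
        producer.sendAsync(event);  // async send; handle the returned future as needed
    }

    public void close() throws PulsarClientException {
        for (Producer<byte[]> p : producers.values()) {
            p.close();
        }
        client.close();
    }
}
```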
----
2019-12-18 19:32:18 UTC - Vladimir Shchur: This is the correct link:
<https://github.com/apache/pulsar/blob/master/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ProducerBuilderImpl.java#L291-L303>
so it is RoundRobin
----
2019-12-18 19:33:22 UTC - Vladimir Shchur: hm, probably the Node.js implementation is different
----
2019-12-18 20:36:01 UTC - Roman Popenov: I've run the helm install command in an AWS EKS cluster, and it seems like it ran successfully, but my pods are forever stuck in Init:0/1 state:
```>k get pods -n pulsar
NAME                                                READY   STATUS     RESTARTS   AGE
pulsar-1576701107-autorecovery-6b6d6c55c9-w5r5l     0/1     Init:0/1   0          2m32s
pulsar-1576701107-bastion-7ff6cfcf9-bzc7r           1/1     Running    0          2m32s
pulsar-1576701107-bookkeeper-0                      0/1     Pending    0          2m31s
pulsar-1576701107-broker-66c86d76c7-8kxsj           0/1     Pending    0          2m32s
pulsar-1576701107-broker-66c86d76c7-tcr8s           0/1     Pending    0          2m32s
pulsar-1576701107-broker-66c86d76c7-xc8b9           0/1     Pending    0          2m32s
pulsar-1576701107-proxy-7bb7c5b69c-llmx7            0/1     Init:0/1   0          2m32s
pulsar-1576701107-proxy-7bb7c5b69c-lr8hk            0/1     Pending    0          2m32s
pulsar-1576701107-proxy-7bb7c5b69c-m2h4c            0/1     Init:0/1   0          2m32s
pulsar-1576701107-pulsar-manager-7d45f7568f-dw9sm   1/1     Running    0          2m32s
pulsar-1576701107-zookeeper-0                       0/1     Pending    0          2m32s
pulsar-1576701107-zookeeper-metadata-wvcfk          0/1     Init:0/1   0          2m32s```

----
2019-12-18 20:36:08 UTC - Roman Popenov: Any idea why?
----
2019-12-18 20:37:49 UTC - David Kjerrumgaard: Typically that happens when the 
pods are having trouble allocating the requested resources. How big is your 
underlying compute pool?
----
2019-12-18 20:38:12 UTC - Roman Popenov: Two worker nodes
----
2019-12-18 20:38:38 UTC - David Kjerrumgaard: Cores? RAM?
----
2019-12-18 22:44:21 UTC - Roman Popenov: Yeah, I've upped some resources on the cluster and am allocating fewer resources to the Pulsar cluster, which was the problem
----
2019-12-19 04:52:26 UTC - shcho: @shcho has joined the channel
----
2019-12-19 04:58:22 UTC - Subbu Ramanathan: Adding sample code.

Usage for submitting the Flink job:
```./bin/flink run preloadjob-1.0-SNAPSHOT-jar-with-dependencies.jar --props /root/flink-1.8.2/config.properties --brokerType kafka --source oomsource --sink oomsink --parallellism 8 --numPipelines 20```
----
2019-12-19 06:25:20 UTC - vikash: Hello Guys,
----
2019-12-19 06:25:34 UTC - vikash: is there any REST API or command to delete a subscription?
----
2019-12-19 06:25:48 UTC - vikash: I am able to delete it using the Delete button in the Pulsar dashboard
----
2019-12-19 06:51:23 UTC - tuteng: Please try: 
<http://pulsar.apache.org/admin-rest-api/?version=2.4.2#operation/unsubscribeNamespace>
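A rough sketch with the Java admin client, if that is easier than raw REST (service URL, topic, and subscription name are placeholders):
```
import org.apache.pulsar.client.admin.PulsarAdmin;

public class DeleteSubscriptionExample {
    public static void main(String[] args) throws Exception {
        PulsarAdmin admin = PulsarAdmin.builder()
                .serviceHttpUrl("http://localhost:8080")  // placeholder admin (web service) URL
                .build();

        // Delete the subscription "my-sub" from the topic;
        // this fails while consumers are still connected to it.
        admin.topics().deleteSubscription("persistent://public/default/my-topic", "my-sub");

        admin.close();
    }
}
```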
----
2019-12-19 07:30:41 UTC - Vladimir Shchur: Hi! I'm running the helm chart on k8s; is there a way to lower the broker log level to debug?
----
