2019-08-09 09:18:35 UTC - Sijie Guo: I mean in the broker side?
----
2019-08-09 09:21:21 UTC - Diego Salvi: I've checked
----
2019-08-09 09:21:35 UTC - Diego Salvi: today I got some bookie error
----
2019-08-09 09:21:59 UTC - Diego Salvi: like... not enough non-faulty bookies
----
2019-08-09 09:22:06 UTC - Sijie Guo: ok
----
2019-08-09 09:22:20 UTC - Diego Salvi: It seems to be working now, but it seems 
strange to me
----
2019-08-09 09:22:23 UTC - Sijie Guo: that means you don't have enough available 
bookies
----
2019-08-09 09:22:43 UTC - Diego Salvi: I restarted all bookies yesterday just 
to be sure; I was still getting the same error
----
2019-08-09 09:22:58 UTC - Diego Salvi: I'll check again with new topics
----
2019-08-09 09:23:02 UTC - Diego Salvi: just to be sure
----
2019-08-09 09:23:23 UTC - Sijie Guo: ok
----
2019-08-09 09:23:50 UTC - Diego Salvi: Such a 30 sec timeout should then be 
related to missing working bookies?
----
2019-08-09 09:24:23 UTC - Diego Salvi: Thanks a lot for now, sijieg
----
2019-08-09 09:25:53 UTC - Sijie Guo: correct
----
2019-08-09 09:25:59 UTC - Sijie Guo: You are welcome
----
2019-08-09 10:44:20 UTC - Paul Flanagan: @Paul Flanagan has joined the channel
----
2019-08-09 10:56:30 UTC - Kim Christian Gaarder: Using the Java client, I am 
unable to continue a subscription from where it was last acknowledged, or from 
the very beginning of the topic if no such ack is recorded or the subscription 
is new. I would rather not have to use the admin client as well to support 
this. Is this a missing feature, or do any of you have advice on how to do 
this?
----
2019-08-09 11:01:15 UTC - Kim Christian Gaarder: This could be achieved by 
providing a default-subscription-initial-position option in the consumer 
builder. Or the same default-subscription-initial-position option could be set 
at the namespace or topic level, though I think it’s best to control this 
default per subscription.
----
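For context, the Java client's consumer builder does expose a per-consumer initial-position setting (`SubscriptionInitialPosition`). A minimal sketch, assuming a broker at pulsar://localhost:6650 and a topic named "my-topic" (both placeholders):

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.SubscriptionInitialPosition;

public class EarliestConsumerExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // If the subscription does not exist yet, its cursor is created at the
        // earliest available message instead of the default (latest). An
        // existing subscription resumes from its last acknowledged position.
        Consumer<byte[]> consumer = client.newConsumer()
                .topic("my-topic")
                .subscriptionName("my-subscription")
                .subscriptionInitialPosition(SubscriptionInitialPosition.Earliest)
                .subscribe();

        consumer.close();
        client.close();
    }
}
```

This only controls the position of a newly created subscription; the feature request below is about making such a default configurable at the namespace or topic level.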
2019-08-09 11:05:17 UTC - Dennis Sehalic: a colleague is trying out the Node.js 
WebSocket API and he's getting io.netty.handler.codec.TooLongFrameException: 
Adjusted frame length exceeds 5253120: 1195725860 - discarded when setting up 
the WebSocket. Running Pulsar 2.4.0 standalone; not sure how to solve that
----
2019-08-09 11:05:51 UTC - Dennis Sehalic: he's running the examples from 
<https://pulsar.apache.org/docs/en/client-libraries-websocket/#nodejs>
----
2019-08-09 11:07:19 UTC - Vinay Aggarwal: @Vinay Aggarwal has joined the channel
----
2019-08-09 11:11:55 UTC - Alexandre DUVAL: there is a consumer.unsubscribe method
----
2019-08-09 11:43:25 UTC - Kim Christian Gaarder: I have added a feature request 
for this: <https://github.com/apache/pulsar/issues/4928>
----
2019-08-09 12:45:53 UTC - Diego Salvi: Just 2 other questions:
a) does reader creation need a bookie ensemble to be successfully opened?
b) is there a concept of a "persistent" subscription too? I was wondering, 
reading logs like:
[<persistent://idc3745/ns/events>][reader-68f989f038] Creating non-durable 
subscription at msg id 276371:1:-1:0
or
Removing consumer 
Consumer{subscription=PersistentSubscription{topic=<persistent://idc2049/ns/api>,
 name=reader-ed5e5246d0}, consumerId=0, consumerName=d5658, 
address=/10.168.10.81:35746}
----
2019-08-09 12:46:42 UTC - Diego Salvi: (about a: should readers be cached and 
reused to avoid multiple BK operations?)
----
2019-08-09 12:47:42 UTC - Sijie Guo: &gt; a) does reader creation need a bookie 
ensemble to be successfully opened?

for a topic to be owned by a broker, it requires at least an ensemble of 
bookies.

&gt; b) is there a concept of a "persistent" subscription too?

a consumer uses a durable (persistent) subscription; a reader uses a 
non-durable subscription.
----
2019-08-09 12:47:49 UTC - msk: @msk has joined the channel
----
2019-08-09 12:53:28 UTC - Diego Salvi: then if I only use the reader API I will 
use only non-durable ones, and I have to manage myself the position from which 
to continue reading next time (no data stored elsewhere with the actual last 
read position), right?
----
2019-08-09 12:59:07 UTC - Sijie Guo: correct
----
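To illustrate managing the read position yourself with the reader API: a sketch that persists the last processed `MessageId` and resumes from it on the next run. The broker URL, topic name, and the `loadCheckpoint`/`saveCheckpoint` helpers are placeholders for whatever store you use:

```java
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.MessageId;
import org.apache.pulsar.client.api.PulsarClient;
import org.apache.pulsar.client.api.Reader;

public class ResumableReaderExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Load the previously stored position, or start from the beginning.
        byte[] stored = loadCheckpoint();            // e.g. from a DB or file
        MessageId start = (stored != null)
                ? MessageId.fromByteArray(stored)
                : MessageId.earliest;

        // The reader starts after the given message id, so storing the last
        // processed id resumes exactly where we left off.
        Reader<byte[]> reader = client.newReader()
                .topic("my-topic")
                .startMessageId(start)
                .create();

        while (reader.hasMessageAvailable()) {
            Message<byte[]> msg = reader.readNext();
            // ... process msg ...
            saveCheckpoint(msg.getMessageId().toByteArray()); // persist position
        }

        reader.close();
        client.close();
    }

    // Hypothetical checkpoint helpers; replace with real storage.
    static byte[] loadCheckpoint() { return null; }
    static void saveCheckpoint(byte[] id) { }
}
```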
2019-08-09 13:00:26 UTC - Alexandre DUVAL: but is there no global configuration 
on the Pulsar configuration side? Or on the consumer configuration when 
creating a consumer? @Sijie Guo
----
2019-08-09 13:04:22 UTC - Sijie Guo: ```
# How long to delete inactive subscriptions from last consuming
# When it is 0, inactive subscriptions are not deleted automatically
subscriptionExpirationTimeMinutes=0

# Enable subscription message redelivery tracker to send redelivery count to 
consumer (default is enabled)
subscriptionRedeliveryTrackerEnabled=true

# How frequently to proactively check and purge expired subscription
subscriptionExpiryCheckIntervalInMinutes=
```
----
2019-08-09 13:11:40 UTC - Alexandre DUVAL: @Sijie Guo this configuration is for 
whole broker?
----
2019-08-09 13:11:49 UTC - Sijie Guo: yes
----
2019-08-09 13:12:22 UTC - Alexandre DUVAL: is there a way to specify them on 
consumer creation, since it's only for some subscriptions?
----
2019-08-09 13:12:32 UTC - Alexandre DUVAL: @Sijie Guo
----
2019-08-09 13:13:23 UTC - Sijie Guo: no, unfortunately. It seems to be a good 
feature request for Pulsar.
----
2019-08-09 13:13:35 UTC - Alexandre DUVAL: will open an issue.
----
2019-08-09 13:13:54 UTC - Sijie Guo: :+1:
----
2019-08-09 13:16:32 UTC - Alexandre DUVAL: 
<https://github.com/apache/pulsar/issues/4929>
----
2019-08-09 13:37:39 UTC - Ryan Samo: Hey guys,
When using Pulsar functions, is there a way to make a function use a specific 
set of certs for TLS instead of inheriting the certs listed in the 
functions_worker.yml config? I’m just thinking about multi-tenancy and security, 
since everyone’s functions have the same access to topics as the functions 
worker certs allow.
----
2019-08-09 13:57:40 UTC - msk: Hi, I have some questions on setting up Pulsar 
topics. Where should I post them?
----
2019-08-09 13:58:24 UTC - Richard Sherman: Here is a pretty good place
----
2019-08-09 14:03:25 UTC - msk: I have a requirement where I have 1 producer, 1 
topic and N consumers. Message m1 sent by the producer to the topic must be 
consumed by all the consumers. Basically, all consumers should get a copy of m1. 
Which subscription mode should I use for this requirement?
----
2019-08-09 14:11:39 UTC - Richard Sherman: Each consumer will need its own 
subscription
----
2019-08-09 14:13:19 UTC - msk: should I use exclusive or shared mode?
----
2019-08-09 14:15:17 UTC - Richard Sherman: If you want to enforce that each 
consumer has its own subscription, then use exclusive mode. This will cause an 
error if a second consumer tries to connect to the same subscription.
----
2019-08-09 14:18:59 UTC - msk: that's my issue. If I use shared mode, the 
messages get delivered in round-robin fashion, which is not what I expect. So I 
am trying to figure out how I can achieve this.
----
2019-08-09 14:21:56 UTC - Richard Sherman: So each consumer must have a unique 
subscription name. If you can guarantee this, then the mode becomes irrelevant.
The issue with exclusive mode is that you get no redundancy or scalability.
Failover gives you redundancy but no scalability, and shared gives you both.
----
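The fan-out pattern described above can be sketched with the Java client: each consumer subscribes to the same topic under its own subscription name, so each subscription keeps an independent cursor and every consumer receives every message. The broker URL, topic, and subscription names are placeholders:

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.PulsarClient;

public class FanOutExample {
    public static void main(String[] args) throws Exception {
        PulsarClient client = PulsarClient.builder()
                .serviceUrl("pulsar://localhost:6650")
                .build();

        // Unique subscription name per consumer: each subscription gets its
        // own copy of every message published to the topic.
        Consumer<byte[]> consumerA = client.newConsumer()
                .topic("my-topic")
                .subscriptionName("subscription-a")
                .subscribe();

        Consumer<byte[]> consumerB = client.newConsumer()
                .topic("my-topic")
                .subscriptionName("subscription-b")
                .subscribe();

        // A message published to "my-topic" is now delivered to both
        // consumerA and consumerB independently.
        consumerA.close();
        consumerB.close();
        client.close();
    }
}
```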
2019-08-09 14:39:49 UTC - msk: Ok sounds good, thanks a lot.
----
2019-08-09 18:11:18 UTC - Aaron: Hi, I am developing a connector for Pulsar, and 
when I run the pulsar-admin sources create command it reports that the source 
was created successfully, but the standalone's output gives an error:
'java.io.IOException: Cannot run program "java": error=13, Permission denied'
Can anyone help with this?
----
2019-08-09 18:23:11 UTC - Giampaolo: @Giampaolo has joined the channel
----
2019-08-09 19:08:50 UTC - Ali Ahmed: @Aaron is the pulsar running on your local 
machine ?
----
2019-08-09 19:14:20 UTC - Axel Barfod: @Axel Barfod has joined the channel
----
2019-08-09 19:14:34 UTC - Axel Barfod: @Axel Barfod has joined the channel
----
2019-08-09 19:18:00 UTC - Jorge Miralles: @Jorge Miralles has joined the channel
----
2019-08-09 19:33:10 UTC - Tarek Shaar: Question regarding file handles. I 
understand Kafka has a limit on the number of topics, roughly 10k (since each 
topic requires two files and therefore two file handles). Does Pulsar require 
one or two file handles per topic?
----
2019-08-09 19:34:18 UTC - Ali Ahmed: @Tarek Shaar no, it does not. Production 
Pulsar clusters run millions of topics; the storage layer is not file-centric
----
2019-08-09 19:35:24 UTC - Tarek Shaar: But since Apache BookKeeper writes to 
files called ledgers, doesn't each ledger require a new file handle?
----
2019-08-09 19:36:25 UTC - Ali Ahmed: yes, but a topic is not mapped to a ledger
----
2019-08-09 19:36:46 UTC - Ali Ahmed: the way a Kafka partition is mapped to a file
----
2019-08-09 19:45:22 UTC - Addison Higham: you are likely to need to scale your 
BookKeeper (and ZK) cluster to reach a million topics regardless, but one of the 
details about BookKeeper is that it writes to the journal, which is shared, and 
then a thread pool asynchronously copies entries from the shared journal into 
ledger storage. I am not totally clear on the details of how that handles 
opening/closing files, but it does allow a bookie to have many more active 
ledgers
----
2019-08-09 19:50:15 UTC - Tarek Shaar: yes, the journal write is a synchronous 
fsync call (only once the call returns will the ack go back to the client), but 
the flush to the ledgers is asynchronous and happens in the background. I am 
not sure whether the second write, to the ledger, requires a file handle or 
not.
----
2019-08-09 20:46:59 UTC - Aaron: Fixed it. It was an error with my PATH
----
2019-08-09 21:20:51 UTC - Giampaolo: are there companies that provide support 
for Pulsar like Confluent does for Kafka?
----
2019-08-09 21:22:08 UTC - Jon Bock: Yes, Streamlio does.
----
2019-08-09 21:22:55 UTC - Jon Bock: It’s the company started by the creators of 
Pulsar.
----
2019-08-09 21:23:24 UTC - Chris Bartholomew: Also Kafkaesque: 
<https://kafkaesque.io/>
----
2019-08-09 21:39:53 UTC - Giampaolo: I’m totally new to Pulsar. In your opinion, 
is it a good fit for event sourcing?
----
2019-08-09 21:40:29 UTC - Giampaolo: let’s say I have 10M orders with 30~50 
events each
----
2019-08-09 21:41:31 UTC - Giampaolo: does it make sense to store events on 
Pulsar and create read-only views, e.g. using Elasticsearch?
----
2019-08-09 21:42:17 UTC - Giampaolo: or am I just saying something that does 
not make sense..
----
2019-08-09 21:45:04 UTC - Ali Ahmed: pulsar works well for infinitely long 
event streams
----
