2020-04-30 09:14:46 UTC - Raffaele: Just saw that ledgers are immutable, and
only contain a single topic.
----
2020-04-30 09:29:10 UTC - Raffaele: I might have misinterpreted this
explanation from Ivan: <https://youtu.be/FdluULVeH2Y?t=122>. From this video it
seemed that there was a single journal/log with multiple topics.
----
2020-04-30 09:34:20 UTC - Matej Šaravanja: @Matej Šaravanja has joined the
channel
----
2020-04-30 10:08:57 UTC - Erik Jansen: Hi, we are new to Pulsar and currently
executing a PoC. We've created a simple Node.js producer to test sending
messages to a topic. We would like to know how the batching capability works;
it doesn't seem to work (are we missing something in how it works?). This is
the code we are executing:
```const Pulsar = require('pulsar-client');

// (The code below runs inside an async function.)
const client = new Pulsar.Client({
  serviceUrl: 'pulsar://localhost:6650',
  operationTimeoutSeconds: 30,
});

// Create a producer with batching enabled
const producer = await client.createProducer({
  topic: 'persistent://public/default/my-topic',
  sendTimeoutMs: 30000,
  batchingEnabled: true,
  batchingMaxPublishDelayMs: 5000,
});

// Send messages without awaiting each one
for (let i = 0; i < 500000; i += 1) {
  const msg = `my-message-${i}`;
  producer.send({
    data: Buffer.from(msg),
  });
}

await producer.flush();```
And this is the log from Pulsar:
```2020-04-30 12:00:56.660 INFO ProducerImpl:472 | Producer -
[persistent://public/default/my-topic, standalone-1-3] ,
[batchMessageContainer = { BatchContainer [size = 0] [batchSizeInBytes_ = 0]
[maxAllowedMessageBatchSizeInBytes_ = 131072] [maxAllowedNumMessagesInBatch_ =
1000] [topicName = persistent://public/default/my-topic] [producerName_ =
standalone-1-3] [batchSizeInBytes_ = 0] [numberOfBatchesSent = 499914]
[averageBatchSize = 1.00017]}]
2020-04-30 12:00:56.660 INFO BatchMessageContainer:170 | [numberOfBatchesSent
= 499914] [averageBatchSize = 1.00017]```
We see a lot of batches. We expected many messages to be sent in a single
batch/roundtrip. Are we doing something wrong?
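A minimal sketch, assuming the `pulsar-client` Node.js package, of how the batching knobs are typically exercised; `batchingMaxMessages` and the closing calls are assumptions about the installed client version, and this is illustrative rather than a confirmed explanation of the log above:
```// Sketch only: assumes the pulsar-client Node.js package and that
// batchingMaxMessages is supported by the installed version.
const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({ serviceUrl: 'pulsar://localhost:6650' });

  const producer = await client.createProducer({
    topic: 'persistent://public/default/my-topic',
    batchingEnabled: true,
    batchingMaxPublishDelayMs: 10, // flush a batch after at most 10 ms
    batchingMaxMessages: 1000,     // or once 1000 messages accumulate (assumed option)
  });

  // Keep sends in flight concurrently so the client can group them into batches.
  const pending = [];
  for (let i = 0; i < 10000; i += 1) {
    pending.push(producer.send({ data: Buffer.from(`my-message-${i}`) }));
  }
  await Promise.all(pending);

  await producer.flush();
  await producer.close();
  await client.close();
})();```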
----
2020-04-30 10:48:43 UTC - Alexandre DUVAL: ```2020-04-30T10:47:02.755Z
INFO Query-20200430_104702_00001_dg83k-183 org.apache.zookeeper.ZooKeeper
Initiating client connection,
connectString=yo-zookeeper-c2-n1:2184,yo-zookeeper-c2-n2:2184,yo-zookeeper-c2-n3:2184
sessionTimeout=10000
watcher=org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase@19babd91
2020-04-30T10:47:02.756Z INFO Query-20200430_104702_00001_dg83k-183
org.apache.zookeeper.ClientCnxnSocket jute.maxbuffer value is 4194304 Bytes
2020-04-30T10:47:02.756Z INFO Query-20200430_104702_00001_dg83k-183
org.apache.zookeeper.ClientCnxn zookeeper.request.timeout value is 0. feature
enabled=
2020-04-30T10:47:02.757Z INFO
Query-20200430_104702_00001_dg83k-183-SendThread(yo-zookeeper-c2-n3:2184)
org.apache.zookeeper.ClientCnxn Opening socket connection to server
yo-zookeeper-c2-n3/192.168.10.6:2184. Will not attempt to authenticate using
SASL (unknown error)
2020-04-30T10:47:02.763Z INFO
Query-20200430_104702_00001_dg83k-183-SendThread(yo-zookeeper-c2-n3:2184)
org.apache.zookeeper.ClientCnxn Socket connection established, initiating
session, client: /192.168.10.17:55904, server:
yo-zookeeper-c2-n3/192.168.10.6:2184
2020-04-30T10:47:02.776Z INFO
Query-20200430_104702_00001_dg83k-183-SendThread(yo-zookeeper-c2-n3:2184)
org.apache.zookeeper.ClientCnxn Session establishment complete on server
yo-zookeeper-c2-n3/192.168.10.6:2184, sessionid = 0x308edaa37a20015, negotiated
timeout = 10000
2020-04-30T10:47:02.776Z INFO
Query-20200430_104702_00001_dg83k-183-EventThread
org.apache.bookkeeper.zookeeper.ZooKeeperWatcherBase ZooKeeper client is
connected now.
2020-04-30T10:47:02.778Z INFO Query-20200430_104702_00001_dg83k-183
org.apache.bookkeeper.meta.zk.ZKMetadataDriverBase Initialize zookeeper
metadata driver with external zookeeper client : ledgersRootPath = /ledgers.
2020-04-30T10:47:02.778Z WARN Query-20200430_104702_00001_dg83k-183
org.apache.bookkeeper.util.EventLoopUtil Could not use Netty Epoll event
loop: Could not initialize class io.netty.channel.epoll.EpollEventLoop
2020-04-30T10:47:02.779Z ERROR Query-20200430_104702_00001_dg83k-183
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl Failed
to initialize DNS Resolver
org.apache.bookkeeper.net.ScriptBasedMapping,
used default subnet resolver : java.lang.RuntimeException: No network topology
script is found when using script based DNS resolver.
2020-04-30T10:47:02.780Z INFO Query-20200430_104702_00001_dg83k-183
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl
Initialize rackaware ensemble placement policy @ <Bookie:192.168.10.17:0>
@ /default-rack :
org.apache.bookkeeper.client.TopologyAwareEnsemblePlacementPolicy$DefaultResolver.
2020-04-30T10:47:02.780Z INFO Query-20200430_104702_00001_dg83k-183
org.apache.bookkeeper.client.RackawareEnsemblePlacementPolicyImpl Not
weighted
2020-04-30T10:47:02.780Z INFO Query-20200430_104702_00001_dg83k-183
org.apache.bookkeeper.client.BookKeeper Weighted ledger placement is not enabled
2020-04-30T10:47:02.787Z INFO
BookKeeperClientScheduler-OrderedScheduler-0-0
org.apache.bookkeeper.net.NetworkTopologyImpl
Adding a new node: /default-rack/yo-bookkeeper-c1-n3:3181
2020-04-30T10:47:02.787Z INFO
BookKeeperClientScheduler-OrderedScheduler-0-0
org.apache.bookkeeper.net.NetworkTopologyImpl
Adding a new node: /default-rack/yo-bookkeeper-c1-n1:3181
2020-04-30T10:47:02.787Z INFO
BookKeeperClientScheduler-OrderedScheduler-0-0
org.apache.bookkeeper.net.NetworkTopologyImpl
Adding a new node: /default-rack/yo-bookkeeper-c1-n2:3181
2020-04-30T10:47:02.787Z ERROR Query-20200430_104702_00001_dg83k-183
org.apache.pulsar.sql.presto.PulsarSplitManager Failed to get splits
java.io.IOException: Failed to initialize ledger manager factory
at
org.apache.bookkeeper.client.BookKeeper.<init>(BookKeeper.java:520)
at
org.apache.bookkeeper.client.BookKeeper.<init>(BookKeeper.java:368)
at
org.apache.bookkeeper.mledger.impl.ManagedLedgerFactoryImpl$DefaultBkFactory.<init>(ManagedLedgerFactoryImpl.java:183)
at
org.apache.bookkeeper.mledger.impl.ManagedLedgerFactoryImpl.<init>(ManagedLedgerFactoryImpl.java:122)
at
org.apache.bookkeeper.mledger.impl.ManagedLedgerFactoryImpl.<init>(ManagedLedgerFactoryImpl.java:114)
at
org.apache.pulsar.sql.presto.PulsarConnectorCache.initManagedLedgerFactory(PulsarConnectorCache.java:108)
at
org.apache.pulsar.sql.presto.PulsarConnectorCache.<init>(PulsarConnectorCache.java:66)
at
org.apache.pulsar.sql.presto.PulsarConnectorCache.getConnectorCache(PulsarConnectorCache.java:83)
at
org.apache.pulsar.sql.presto.PulsarSplitManager.getSplitsNonPartitionedTopic(PulsarSplitManager.java:224)
at
org.apache.pulsar.sql.presto.PulsarSplitManager.getSplits(PulsarSplitManager.java:126)
at
com.facebook.presto.split.SplitManager.getSplits(SplitManager.java:64)
at
com.facebook.presto.sql.planner.DistributedExecutionPlanner$Visitor.visitTableScan(DistributedExecutionPlanner.java:146)
at
com.facebook.presto.sql.planner.DistributedExecutionPlanner$Visitor.visitTableScan(DistributedExecutionPlanner.java:122)
at
com.facebook.presto.sql.planner.plan.TableScanNode.accept(TableScanNode.java:136)
at
com.facebook.presto.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:108)
at
com.facebook.presto.sql.planner.DistributedExecutionPlanner.doPlan(DistributedExecutionPlanner.java:113)
at
com.facebook.presto.sql.planner.DistributedExecutionPlanner.plan(DistributedExecutionPlanner.java:85)
at
com.facebook.presto.execution.SqlQueryExecution.planDistribution(SqlQueryExecution.java:385)
at
com.facebook.presto.execution.SqlQueryExecution.start(SqlQueryExecution.java:287)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.bookkeeper.meta.exceptions.MetadataException: Failed to
initialized ledger manager factory
at
org.apache.bookkeeper.meta.zk.ZKMetadataDriverBase.getLedgerManagerFactory(ZKMetadataDriverBase.java:243)
at
org.apache.bookkeeper.client.BookKeeper.<init>(BookKeeper.java:518)
... 21 more
Caused by: java.io.IOException: Empty Ledger Root Path.
at
org.apache.bookkeeper.meta.AbstractZkLedgerManagerFactory.newLedgerManagerFactory(AbstractZkLedgerManagerFactory.java:158)
at
org.apache.bookkeeper.meta.zk.ZKMetadataDriverBase.getLedgerManagerFactory(ZKMetadataDriverBase.java:239)
... 22 more```
----
2020-04-30 10:48:43 UTC - Alexandre DUVAL: What do I miss? :confused:
----
2020-04-30 12:38:09 UTC - Alexandre DUVAL:
<https://github.com/apache/pulsar/issues/6852>
----
2020-04-30 12:47:58 UTC - Frank Xu: @Frank Xu has joined the channel
----
2020-04-30 16:24:29 UTC - Ebere Abanonu: Hi, are you deploying it? Kubernetes?
----
2020-04-30 16:40:53 UTC - Alexandre DUVAL: Yes. Nope.
----
2020-04-30 16:43:04 UTC - Ebere Abanonu: How are you deploying it? In my
experience, I got that exception because the zk server address was not right
----
2020-04-30 18:16:10 UTC - Raffaele: Let's see if I understood correctly: each
topic is made of multiple ledgers, and each ledger is made of fragments.
Ledgers are immutable and append-only.
Fragments from different ledgers (and also from different topics) are then
written sequentially to entry logs. This means that after a specific ledger
expires, there is a task that purges the entry logs, correct?
----
2020-04-30 19:29:12 UTC - Tolulope Awode: Hi, good day. I cannot connect to
Pulsar using TLS from the Node.js client. I followed these steps, and it was
timing out.
----
2020-04-30 19:29:23 UTC - Tolulope Awode: Please help
----
2020-05-01 05:28:29 UTC - Franck Schmidlin: Is there any sensible way to
implement the return-address pattern in Pulsar other than using a topic
dedicated to that one response message I care about? Some magic with partitions
and/or keys? :thinking_face:
On the other hand, is there any downside to creating loads of short-lived
topics, used for only a few messages?
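A minimal sketch of the dedicated-reply-topic variant of the return-address pattern, assuming the `pulsar-client` Node.js package; the `reply-to` property name and the reply topic name are illustrative choices, and whether per-request reply topics scale is exactly the open question above:
```// Sketch only: return address carried as a message property, assuming the
// pulsar-client Node.js package. The "reply-to" property name is arbitrary.
const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({ serviceUrl: 'pulsar://localhost:6650' });

  // Requester: publish the request with the reply topic attached as a property.
  const replyTopic = 'persistent://public/default/replies-request-42'; // hypothetical name
  const requestProducer = await client.createProducer({
    topic: 'persistent://public/default/requests',
  });
  await requestProducer.send({
    data: Buffer.from('do-something'),
    properties: { 'reply-to': replyTopic },
  });

  // Responder: read the reply-to property and publish the response there.
  const consumer = await client.subscribe({
    topic: 'persistent://public/default/requests',
    subscription: 'responder',
    subscriptionType: 'Shared',
  });
  const request = await consumer.receive();
  const replyProducer = await client.createProducer({
    topic: request.getProperties()['reply-to'],
  });
  await replyProducer.send({ data: Buffer.from('done') });
  consumer.acknowledge(request);

  await requestProducer.close();
  await replyProducer.close();
  await consumer.close();
  await client.close();
})();```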
----
2020-05-01 05:31:12 UTC - Matteo Merli: Yes, that is correct.
----
2020-05-01 05:33:45 UTC - Matteo Merli: > This means that after a specific
ledger expires, there is a task that purges the entry logs, correct?
Ledgers are deleted based on the Pulsar data retention settings (either
subscription-based or time-based retention).
Once ledgers are deleted, BK entry logs are compacted to free up space.
----
2020-05-01 07:06:53 UTC - Ruian: Hi, I am confused by the functions-worker
related configs in the Helm chart. Why is there a separate
`pulsar-functions-worker-config` ConfigMap that is referenced by the
`PF_kubernetesContainerFactory_changeConfigMap` ENV of the broker? I thought
that all the configs were already set with the
`PF_functionRuntimeFactoryConfigs_XXXXXX` ENVs in the broker.
----
2020-05-01 07:20:22 UTC - xue: Does Pulsar have a delayed message consumption
feature? I see the delayed message delivery feature.
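A minimal sketch of Pulsar's delayed message delivery, assuming the `pulsar-client` Node.js package exposes `deliverAfter` on `send()`; note that delayed delivery is only honoured for Shared subscriptions, so whether this covers "delayed consumption" depends on the use case:
```// Sketch only: delayed delivery, assuming the pulsar-client Node.js
// package supports deliverAfter. Delayed delivery only applies to
// Shared subscriptions.
const Pulsar = require('pulsar-client');

(async () => {
  const client = new Pulsar.Client({ serviceUrl: 'pulsar://localhost:6650' });

  const producer = await client.createProducer({
    topic: 'persistent://public/default/delayed-demo', // hypothetical topic
  });
  await producer.send({
    data: Buffer.from('visible-later'),
    deliverAfter: 60 * 1000, // deliver roughly 60 s after publish (assumed option)
  });

  const consumer = await client.subscribe({
    topic: 'persistent://public/default/delayed-demo',
    subscription: 'delayed-sub',
    subscriptionType: 'Shared', // required for delayed delivery
  });
  const msg = await consumer.receive(); // arrives only after the delay elapses
  consumer.acknowledge(msg);

  await producer.close();
  await consumer.close();
  await client.close();
})();```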
----
2020-05-01 07:24:54 UTC - Ruian: Or are the `PF_functionRuntimeFactoryConfigs_XXXXXX`
ENVs initial values that would be periodically updated by fetching the
`PF_kubernetesContainerFactory_changeConfigMap` ConfigMap?
----
2020-05-01 07:27:18 UTC - Ruian:
<https://github.com/apache/pulsar/blob/772b789010267829cf3cd921db6782e0dbe59ab2/pulsar-functions/runtime/src/main/java/org/apache/pulsar/functions/runtime/kubernetes/KubernetesRuntimeFactory.java#L353>
----
2020-05-01 07:31:51 UTC - Ruian: Is it possible that it just fetches the
ConfigMap to initialize the factory, without setting duplicated vars in the
broker envs?
----