Re: Pricing plan

2022-09-01 Thread Robin Moffatt
You can contact Confluent here: https://confluent.io/contact


-- 

Robin Moffatt | Principal Developer Advocate | ro...@confluent.io | @rmoff


On Thu, 1 Sept 2022 at 09:44, Uzair Ahmed Mughal 
wrote:

> Can you please then provide the pricing plan of Confluent?
> Regards, Uzair.
>
> On Thu, Sep 1, 2022 at 1:40 PM Robin Moffatt 
> wrote:
>
> > Apache Kafka is licensed under Apache 2.0 and free to use.
> >
> > There are a variety of companies that will sell you a self-hosted platform
> > built on Kafka, or a Cloud-hosted version of Kafka.
> > These include Confluent (disclaimer: I work for them), Red Hat, AWS, Aiven,
> > Instaclustr, Cloudera, and more.
> >
> >
> > --
> >
> > Robin Moffatt | Principal Developer Advocate | ro...@confluent.io | @rmoff
> >
> >
> > On Thu, 1 Sept 2022 at 08:47, Uzair Ahmed Mughal 
> > wrote:
> >
> > > Hello, we are looking for a pricing plan for Kafka in detail, can you
> > > please help us out?
> > > Thanks and regards
> > > Uzair Mughal
> > > IVACY
> > >
> >
>


Re: Pricing plan

2022-09-01 Thread Robin Moffatt
Apache Kafka is licensed under Apache 2.0 and free to use.

There are a variety of companies that will sell you a self-hosted platform
built on Kafka, or a Cloud-hosted version of Kafka.
These include Confluent (disclaimer: I work for them), Red Hat, AWS, Aiven,
Instaclustr, Cloudera, and more.


-- 

Robin Moffatt | Principal Developer Advocate | ro...@confluent.io | @rmoff


On Thu, 1 Sept 2022 at 08:47, Uzair Ahmed Mughal 
wrote:

> Hello, we are looking for a pricing plan for Kafka in detail, can you
> please help us out?
> Thanks and regards
> Uzair Mughal
> IVACY
>


Re: [ANNOUNCE] New Kafka PMC Member: A. Sophie Blee-Goldman

2022-08-02 Thread Robin Moffatt
Congrats Sophie, great news!


-- 

Robin Moffatt | Principal Developer Advocate | ro...@confluent.io | @rmoff


On Tue, 2 Aug 2022 at 00:42, Guozhang Wang  wrote:

> Hi everyone,
>
> I'd like to introduce our new Kafka PMC member, Sophie. She has been a
> committer since Oct. 2020 and has been contributing to the community
> consistently, especially around Kafka Streams and the Kafka Java consumer. She
> has also presented about Kafka Streams at Kafka Summit London this year. It
> is my pleasure to announce that Sophie agreed to join the Kafka PMC.
>
> Congratulations, Sophie!
>
> -- Guozhang Wang, on behalf of Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Chris Egerton

2022-07-25 Thread Robin Moffatt
Congrats Chris!


-- 

Robin Moffatt | Principal Developer Advocate | ro...@confluent.io | @rmoff


On Mon, 25 Jul 2022 at 17:26, Mickael Maison  wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Chris Egerton as a committer, and
> we are excited to announce that he accepted!
>
> Chris has been contributing to Kafka since 2017. He has made over 80
> commits mostly around Kafka Connect. His most notable contributions
> include KIP-507: Securing Internal Connect REST Endpoints and KIP-618:
> Exactly-Once Support for Source Connectors.
>
> He has been an active participant in discussions and reviews on the
> mailing lists and on GitHub.
>
> Thanks for all of your contributions Chris. Congratulations!
>
> -- Mickael, on behalf of the Apache Kafka PMC
>


[ANNOUNCE] Call for Speakers is open for Current 2022: The Next Generation of Kafka Summit

2022-05-24 Thread Robin Moffatt
Hi everyone,

We’re very excited to announce our Call for Speakers for Current 2022: The
Next Generation of Kafka Summit!

With the permission of the ASF, Current will include Kafka Summit as part
of the event.

We’re looking for talks about all aspects of event-driven design, streaming
technology, and real-time systems. Think about Apache Kafka® and similar
technologies, and work outwards from there. Whether it’s a data engineering
talk with real-time data, software engineering with message brokers, or
event-driven architectures—if there’s data in motion, then it’s going to be
relevant.

The talk tracks are as follows:

- Developing Real-Time Applications
- Streaming Technologies
- Fun and Geeky
- Architectures You’ve Always Wondered About
- People & Culture
- Data Development Life Cycle (including SDLC for data, data mesh,
governance, schemas)
- Case Studies
- Operations and Observability
- Pipelines Done Right
- Real-Time Analytics
- Event Streaming in Academia and Beyond

You can find the call for speakers at https://sessionize.com/current-2022/,
and a blog post detailing the process (and some tips, especially for new
speakers) at
https://www.confluent.io/blog/how-to-be-a-speaker-at-current-2022-the-next-kafka-summit/.
If you have any questions about submitting, I would be pleased to answer
them; you can contact me directly at ro...@confluent.io.

The call for speakers closes on June 26 at 23:59 CT.

Thanks,

Robin Moffatt
Program Committee Chair


[jira] [Created] (KAFKA-13520) Quickstart does not work at topic creation step

2021-12-08 Thread Robin Moffatt (Jira)
Robin Moffatt created KAFKA-13520:
-

 Summary: Quickstart does not work at topic creation step
 Key: KAFKA-13520
 URL: https://issues.apache.org/jira/browse/KAFKA-13520
 Project: Kafka
  Issue Type: Bug
  Components: website
Reporter: Robin Moffatt


Step 3 fails:

{code:java}
$ bin/kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092

Missing required argument "[partitions]"
{code}
It also needs `--replication-factor`.

 

The correct command is:
{code:java}
$ bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --partitions 1 --replication-factor 1 --topic quickstart-events
{code}





[jira] [Created] (KAFKA-13374) [Docs] - All reads from the leader of the partition even after KIP-392?

2021-10-13 Thread Robin Moffatt (Jira)
Robin Moffatt created KAFKA-13374:
-

 Summary: [Docs] - All reads from the leader of the partition even 
after KIP-392?
 Key: KAFKA-13374
 URL: https://issues.apache.org/jira/browse/KAFKA-13374
 Project: Kafka
  Issue Type: Bug
Reporter: Robin Moffatt


On `https://kafka.apache.org/documentation/#design_replicatedlog` it says

> All reads and writes go to the leader of the partition.



However, with KIP-392 I didn't think this was the case any more. If so, the doc
should be updated to clarify.
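
For context, KIP-392 allows consumers to fetch from the closest replica rather than only from the leader (writes still go to the leader). A minimal sketch of the settings involved, assuming the {{RackAwareReplicaSelector}} that ships with Kafka 2.4+ (rack names here are illustrative):
{code:java}
# broker: serve fetches from the replica closest to the consumer, based on rack
replica.selector.class=org.apache.kafka.common.replica.RackAwareReplicaSelector
broker.rack=us-east-1a

# consumer: declare which rack this client is in
client.rack=us-east-1a
{code}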





[jira] [Created] (KAFKA-10865) Improve trace-logging for Transformations (including Predicates)

2020-12-18 Thread Robin Moffatt (Jira)
Robin Moffatt created KAFKA-10865:
-

 Summary: Improve trace-logging for Transformations (including 
Predicates)
 Key: KAFKA-10865
 URL: https://issues.apache.org/jira/browse/KAFKA-10865
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Robin Moffatt


I've been spending [a bunch of time poking around 
SMTs|https://rmoff.net/categories/twelvedaysofsmt/] recently, and one common 
challenge I've had is being able to debug when things don't behave as I expect.
  
 I know that there is the {{TransformationChain}} logger, but this only gives 
(IIUC) the input record
{code:java}
[2020-12-17 09:38:58,057] TRACE [sink-simulator-day12-00|task-0] Applying 
transformation io.confluent.connect.transforms.Filter$Value to 
SinkRecord{kafkaOffset=10551, timestampType=CreateTime} 
ConnectRecord{topic='day12-sys01', kafkaPartition=0, 
key=2c2ceb9b-8b31-4ade-a757-886ebfb7a398, keySchema=Schema{STRING}, 
value=Struct{units=16,product=Founders Breakfast 
Stout,amount=30.41,txn_date=Sat Dec 12 18:21:18 GMT 2020,source=SYS01}, 
valueSchema=Schema{io.mdrogalis.Gen0:STRUCT}, timestamp=1608197938054, 
headers=ConnectHeaders(headers=)} 
(org.apache.kafka.connect.runtime.TransformationChain:47)
{code}
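(For reference, a minimal sketch of how to turn that existing logger up to TRACE, assuming the stock log4j setup in {{connect-log4j.properties}}:)
{code:java}
# emit the per-transformation TRACE line shown above
log4j.logger.org.apache.kafka.connect.runtime.TransformationChain=TRACE
{code}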
 
 I think it would be really useful to also have trace level logging that 
included:
 - the _output_ of *each* transform
 - the evaluation and result of any `predicate`s


I have been using 
{{com.github.jcustenborder.kafka.connect.simulator.SimulatorSinkConnector}} 
which is really useful for seeing the final record:
{code:java}
[2020-12-17 09:38:58,057] INFO [sink-simulator-day12-00|task-0] 
record.value=Struct{units=16,product=Founders Breakfast 
Stout,amount=30.41,txn_date=Sat Dec 12 18:21:18 GMT 2020,source=SYS01} 
(com.github.jcustenborder.kafka.connect.simulator.SimulatorSinkTask:50)
{code}
 
But this doesn't include things like the topic name (which is often changed by common SMTs).





Re: [VOTE] KIP-657: Add Customized Kafka Streams Logo

2020-08-19 Thread Robin Moffatt
I echo what Michael says here.

Another consideration is that logos are often shrunk (when used on slides)
and need to work at lower resolution (think: printing swag, stitching
socks, etc.), so whatever logo we come up with must not be too fiddly in its
level of detail - something that I think both of the currently proposed
options will fall foul of, IMHO.


On Wed, 19 Aug 2020 at 15:33, Michael Noll  wrote:

> Hi all!
>
> Great to see we are in the process of creating a cool logo for Kafka
> Streams.  First, I apologize for sharing feedback so late -- I just learned
> about it today. :-)
>
> Here's my *personal, subjective* opinion on the currently two logo
> candidates for Kafka Streams.
>
> TL;DR: Sorry, but I really don't like either of the proposed "otter" logos.
> Let me try to explain why.
>
>- The choice to use an animal, regardless of which specific animal,
>seems random and doesn't fit Kafka. (What's the purpose? To show that
>KStreams is 'cute'?) In comparison, the O’Reilly books always have an
>animal cover, that’s their style, and it is very recognizable.  Kafka
>however has its own, different style.  The Kafka logo has clear, simple
>lines to achieve an abstract and ‘techy’ look, which also alludes
> nicely to
>its architectural simplicity. Its logo is also a smart play on the
>Kafka-identifying letter “K” and alluding to it being a distributed
> system
>(the circles and links that make the K).
>- The proposed logos, however, make it appear as if KStreams is a
>third-party technology that was bolted onto Kafka. They certainly, for
> me,
>do not convey the message "Kafka Streams is an official part of Apache
>Kafka".
>- I, too, don't like the way the main Kafka logo is obscured (a concern
>already voiced in this thread). Also, the Kafka 'logo' embedded in the
>proposed KStreams logos is not the original one.
>- None of the proposed KStreams logos visually match the Kafka logo.
>They have a totally different style, font, line art, and color scheme.
>- Execution-wise, the main Kafka logo looks great at all sizes.  The
>style of the otter logos, in comparison, becomes undecipherable at
> smaller
>sizes.
>
> What I would suggest is to first agree on what the KStreams logo is
> supposed to convey to the reader.  Here's my personal take:
>
> Objective 1: First and foremost, the KStreams logo should make it clear and
> obvious that KStreams is an official and integral part of Apache Kafka.
> This applies to both what is depicted and how it is depicted (like font,
> line art, colors).
> Objective 2: The logo should allude to the role of KStreams in the Kafka
> project, which is the processing part.  That is, "doing something useful to
> the data in Kafka".
>
> The "circling arrow" aspect of the current otter logos does allude to
> "continuous processing", which is going in the direction of (2), but the
> logos do not meet (1) in my opinion.
>
> -Michael
>
>
>
>
> On Tue, Aug 18, 2020 at 10:34 PM Matthias J. Sax  wrote:
>
> > Adding the user mailing list -- I think we should accept votes on both
> > lists for this special case, as it's not a technical decision.
> >
> > @Boyang: as mentioned by Bruno, can we maybe add black/white options for
> > both proposals, too?
> >
> > I also agree that Design B is not ideal with regard to the Kafka logo.
> > Would it be possible to change Design B accordingly?
> >
> > I am not a font expert, but the fonts in both design are different and I
> > am wondering if there is an official Apache Kafka font that we should
> > reuse to make sure that the logos align -- I would expect that both
> > logos (including "Apache Kafka" and "Kafka Streams" names) will be used
> > next to each other and it would look awkward if the font differs.
> >
> >
> > -Matthias
> >
> > On 8/18/20 11:28 AM, Navinder Brar wrote:
> > > Hi,
> > > Thanks for the KIP, really like the idea. I am +1 (non-binding) on A,
> > > mainly because I felt like you have to tilt your head to realize the
> > > otter's head in B.
> > > Regards, Navinder
> > >
> > > On Tuesday, 18 August, 2020, 11:44:20 pm IST, Guozhang Wang <
> > wangg...@gmail.com> wrote:
> > >
> > >  I'm leaning towards design B primarily because it reminds me of the
> > Firefox
> > > logo which I like a lot. But I also share Adam's concern that it should
> > > better not obscure the Kafka logo --- so if we can tweak a bit to fix
> it
> > my
> > > vote goes to B, otherwise A :)
> > >
> > >
> > > Guozhang
> > >
> > > On Tue, Aug 18, 2020 at 9:48 AM Bruno Cadonna 
> > wrote:
> > >
> > >> Thanks for the KIP!
> > >>
> > >> I am +1 (non-binding) for A.
> > >>
> > >> I would also like to hear opinions whether the logo should be
> colorized
> > >> or just black and white.
> > >>
> > >> Best,
> > >> Bruno
> > >>
> > >>
> > >> On 15.08.20 16:05, Adam Bellemare wrote:
> > >>> I prefer Design B, but given that I missed the discussion thread, I think
> > >>> it would be 

[jira] [Created] (KAFKA-9252) Kafka Connect

2019-11-29 Thread Robin Moffatt (Jira)
Robin Moffatt created KAFKA-9252:


 Summary: Kafka Connect 
 Key: KAFKA-9252
 URL: https://issues.apache.org/jira/browse/KAFKA-9252
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Affects Versions: 2.3.1
Reporter: Robin Moffatt


If I mis-configure my *single* Kafka broker with 
`offsets.topic.replication.factor=3` (the default), Kafka Connect will start up 
absolutely fine (`Kafka Connect started` appears in the log file, and the `/connectors` 
endpoint returns HTTP 200). But if I try to create a connector, it (eventually) returns
{code:java}
{"error_code":500,"message":"Request timed out"}{code}
There's no error in the Kafka Connect worker log at INFO level. More details: 
[https://rmoff.net/2019/11/29/kafka-connect-request-timed-out/]

This could be improved: either check at startup that the Kafka consumer offsets 
topic is available and refuse to start if it isn't, or at least log why the 
connector failed to be created.
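
(For anyone hitting this on a single-broker dev setup, the workaround - separate from the startup check / logging improvement requested above - is a sketch like this in the broker's {{server.properties}}:)
{code:java}
# single-broker dev cluster: allow __consumer_offsets to be created with RF 1
offsets.topic.replication.factor=1
{code}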





[jira] [Created] (KAFKA-9024) org.apache.kafka.connect.transforms.ValueToKey throws NPE

2019-10-11 Thread Robin Moffatt (Jira)
Robin Moffatt created KAFKA-9024:


 Summary: org.apache.kafka.connect.transforms.ValueToKey throws NPE
 Key: KAFKA-9024
 URL: https://issues.apache.org/jira/browse/KAFKA-9024
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Robin Moffatt


If a field named in the SMT does not exist, an NPE is thrown. This is not helpful 
to users; it should be caught and reported back in a friendlier way.

For example, importing data from a database with this transform: 

 
{code:java}
transforms = [ksqlCreateKey, ksqlExtractString]
transforms.ksqlCreateKey.fields = [ID]
transforms.ksqlCreateKey.type = class org.apache.kafka.connect.transforms.ValueToKey
transforms.ksqlExtractString.field = ID
transforms.ksqlExtractString.type = class org.apache.kafka.connect.transforms.ExtractField$Key
{code}
If the field name is {{id}} rather than {{ID}}, the task fails:
{code:java}
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
   at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
   at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
   at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:50)
   at org.apache.kafka.connect.runtime.WorkerSourceTask.sendRecords(WorkerSourceTask.java:293)
   at org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:229)
   at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
   at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
   at org.apache.kafka.connect.transforms.ValueToKey.applyWithSchema(ValueToKey.java:85)
   at org.apache.kafka.connect.transforms.ValueToKey.apply(ValueToKey.java:65)
   at org.apache.kafka.connect.runtime.TransformationChain.lambda$apply$0(TransformationChain.java:50)
   at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
   at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
   ... 11 more
{code}
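By way of illustration, here is a sketch - not an actual patch, and the helper name is hypothetical - of the kind of guard that would turn the NPE into a self-explanatory error:
{code:java}
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.errors.DataException;

public class FieldLookupSketch {

    // Hypothetical helper: look up a configured field and fail with a descriptive
    // DataException (rather than an NPE) when the record's value doesn't contain it.
    static Object requireField(Struct value, String fieldName) {
        Field field = value.schema().field(fieldName); // returns null if the field doesn't exist
        if (field == null) {
            throw new DataException("Field '" + fieldName + "' does not exist in the value schema; "
                    + "available fields: " + value.schema().fields());
        }
        return value.get(field);
    }

    public static void main(String[] args) {
        Schema schema = SchemaBuilder.struct().field("id", Schema.STRING_SCHEMA).build();
        Struct value = new Struct(schema).put("id", "42");

        System.out.println(requireField(value, "id")); // prints 42
        System.out.println(requireField(value, "ID")); // throws DataException with a clear message
    }
}
{code}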
 





[jira] [Created] (KAFKA-9018) Kafka Connect - throw clearer exceptions on serialisation errors

2019-10-10 Thread Robin Moffatt (Jira)
Robin Moffatt created KAFKA-9018:


 Summary: Kafka Connect - throw clearer exceptions on serialisation 
errors
 Key: KAFKA-9018
 URL: https://issues.apache.org/jira/browse/KAFKA-9018
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Robin Moffatt


When Connect fails on a deserialisation error, it doesn't show whether it was the 
*key or value* that threw the error, nor does it give the user any 
indication of the *topic/partition/offset* of the message. Kafka Connect should 
be improved to return this information.
{code:java}
Caused by: org.apache.kafka.connect.errors.DataException: Failed to deserialize data for topic sample_topic to Avro:
 at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:110)
 at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
 at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
 at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
 ... 13 more
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!
{code}
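As a partial mitigation in the meantime, a sketch of the KIP-298 error-handling properties for a sink connector, which route failing records to a dead-letter queue and attach the topic/partition/offset (and exception details) as headers on the dead-lettered record - values here are illustrative:
{code:java}
errors.tolerance=all
errors.log.enable=true
errors.log.include.messages=true
errors.deadletterqueue.topic.name=dlq-sample_topic
errors.deadletterqueue.context.headers.enable=true
{code}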





[jira] [Created] (KAFKA-7497) Kafka Streams should support self-join on streams

2018-10-11 Thread Robin Moffatt (JIRA)
Robin Moffatt created KAFKA-7497:


 Summary: Kafka Streams should support self-join on streams
 Key: KAFKA-7497
 URL: https://issues.apache.org/jira/browse/KAFKA-7497
 Project: Kafka
  Issue Type: Bug
Reporter: Robin Moffatt


ref [https://github.com/confluentinc/ksql/issues/2030]

 

There are valid reasons to want to join a stream to itself, but Kafka Streams 
does not currently support this ({{Invalid topology: Topic foo has already been 
registered by another source.}}). Performing the join requires creating a 
second stream as a clone of the first and then joining the two. 
This is a clunky workaround and results in unnecessary duplication of data.
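
A sketch of that workaround (topic names, types, and the join window are illustrative):
{code:java}
import java.time.Duration;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.KStream;

public class SelfJoinWorkaroundSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KStream<String, String> original = builder.stream("foo");

        // Calling builder.stream("foo") a second time fails with
        // "Invalid topology: Topic foo has already been registered by another source."
        // so the data has to be duplicated into a clone topic first:
        original.to("foo-copy");
        KStream<String, String> copy = builder.stream("foo-copy");

        // The "self-join", expressed as a join between the stream and its clone
        original.join(copy,
                (left, right) -> left + "," + right,
                JoinWindows.of(Duration.ofMinutes(5)));

        // builder.build() would then be passed to new KafkaStreams(...) as usual
    }
}
{code}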





[jira] [Created] (KAFKA-7138) Kafka Connect - Make errors.deadletterqueue.topic.replication.factor default consistent

2018-07-06 Thread Robin Moffatt (JIRA)
Robin Moffatt created KAFKA-7138:


 Summary: Kafka Connect - Make 
errors.deadletterqueue.topic.replication.factor default consistent
 Key: KAFKA-7138
 URL: https://issues.apache.org/jira/browse/KAFKA-7138
 Project: Kafka
  Issue Type: Bug
Reporter: Robin Moffatt


{{errors.deadletterqueue.topic.replication.factor}} defaults to a replication factor of 3.

The standard out-of-the-box config files override the RF for 
{{offset.storage.replication.factor}} (and the {{config}} and {{status}} equivalents) to 1.

To make the experience consistent for users (especially new users running a 
single-node dev environment), the default RF in effect for 
{{errors.deadletterqueue.topic.replication.factor}} should also be 1.

This would make it easier for devs getting started on single-node setups.

For prod, people should be actively configuring this stuff anyway, and this 
change would get picked up as part of that.
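
In the meantime, a sketch of the explicit override for a single-node setup (connector-level properties; the topic name is illustrative):
{code:java}
errors.deadletterqueue.topic.name=dlq-my-connector
errors.deadletterqueue.topic.replication.factor=1
{code}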

 





[jira] [Created] (KAFKA-7052) ExtractField SMT throws NPE - needs clearer error message

2018-06-13 Thread Robin Moffatt (JIRA)
Robin Moffatt created KAFKA-7052:


 Summary: ExtractField SMT throws NPE - needs clearer error message
 Key: KAFKA-7052
 URL: https://issues.apache.org/jira/browse/KAFKA-7052
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Robin Moffatt


With the following Single Message Transform:
{code:java}
"transforms.ExtractId.type":"org.apache.kafka.connect.transforms.ExtractField$Key",
"transforms.ExtractId.field":"id"
{code}
Kafka Connect errors with:
{code:java}
java.lang.NullPointerException
at org.apache.kafka.connect.transforms.ExtractField.apply(ExtractField.java:61)
at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:38)
{code}
There should be a better error message here, identifying the reason for the NPE.

Version: Confluent Platform 4.1.1





[jira] [Created] (KAFKA-5699) Validate and Create connector endpoint should take the same format message body

2017-08-03 Thread Robin Moffatt (JIRA)
Robin Moffatt created KAFKA-5699:


 Summary: Validate and Create connector endpoint should take the 
same format message body
 Key: KAFKA-5699
 URL: https://issues.apache.org/jira/browse/KAFKA-5699
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Robin Moffatt
Priority: Minor


It's a fairly ugly UX to want to 'do the right thing' and validate a connector, 
but to have to do so with a different message body than the one used for a POST to 
/connectors. Can the format be standardised across these calls (and for a PUT to 
//config too)?
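
For illustration, a sketch of the two request-body shapes in question (field values are placeholders):
{code:java}
# POST /connectors wraps the config in a name/config envelope:
{ "name": "my-connector", "config": { "connector.class": "...", "topics": "..." } }

# PUT /connector-plugins/{plugin}/config/validate takes just the flat config map:
{ "connector.class": "...", "topics": "..." }
{code}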



