MQTT broker data validation against Schema Registry

2024-05-29 Thread Perez
Hi Team,

I have a requirement where I want the MQTT broker to validate records against
the schema registered in the Schema Registry and, if a record matches, write it
to the Kafka topic, otherwise park it in a separate Kafka topic.

So I was wondering whether I can do something like the following piece of code
with any of the MQTT brokers such as EMQ, Mosquitto or Waterstream.

from uuid import uuid4

from confluent_kafka import Producer
from confluent_kafka.serialization import StringSerializer, SerializationContext, MessageField
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.protobuf import ProtobufSerializer

import user_pb2  # generated protobuf classes

schema_registry_conf = {'url': args.schema_registry}
schema_registry_client = SchemaRegistryClient(schema_registry_conf)

string_serializer = StringSerializer('utf8')
protobuf_serializer = ProtobufSerializer(user_pb2.User,
                                         schema_registry_client,
                                         {'use.deprecated.format': False})

producer_conf = {'bootstrap.servers': args.bootstrap_servers}

producer = Producer(producer_conf)

producer.produce(topic=topic, partition=0,
                 key=string_serializer(str(uuid4())),
                 value=protobuf_serializer(user, SerializationContext(topic, MessageField.VALUE)),
                 on_delivery=delivery_report)

I didn't find any relevant information on this for MQTT brokers, so any hints
would help.
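
Independent of what the brokers support natively, the validate-and-route step
itself can live in a small bridge service that sits between the MQTT broker and
Kafka. A rough Java sketch of that routing logic is below; the Eclipse Paho
client, the generated UserProtos.User class, the topic names and addresses are
all assumptions for illustration, and for simplicity it validates by parsing
the payload against the expected (registered) message type rather than fetching
the schema from the registry at runtime:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttToKafkaBridge {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
        final KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);

        MqttClient mqtt = new MqttClient("tcp://localhost:1883", "sr-validating-bridge");
        mqtt.setCallback(new MqttCallback() {
            @Override
            public void messageArrived(String mqttTopic, MqttMessage msg) {
                byte[] payload = msg.getPayload();
                String target;
                try {
                    // Validate: the payload must parse as the expected User message
                    // (UserProtos.User is a hypothetical generated protobuf class).
                    UserProtos.User.parseFrom(payload);
                    target = "users";            // conforming records
                } catch (Exception e) {
                    target = "users-rejected";   // park non-conforming records
                }
                producer.send(new ProducerRecord<>(target, payload));
            }
            @Override public void connectionLost(Throwable cause) { }
            @Override public void deliveryComplete(IMqttDeliveryToken token) { }
        });
        mqtt.connect();
        mqtt.subscribe("devices/#");
    }
}
```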

Thanks


Re: Confluent Schema Registry - auto.register.schemas

2022-11-14 Thread karan alang
hello All -
checking to see if anyone has updates on this.

tia!

On Fri, Nov 11, 2022 at 1:55 PM karan alang  wrote:

> Hello All,
>
> ref stackoverflow :
> https://stackoverflow.com/questions/74396652/confluent-schema-registry-why-is-auto-register-schemas-not-applicable-at-schem
>
> ---
>
> I've installed the Schema Registry and want to set the
> auto.register.schema set to false - so only avro messages conforming to the
> registered schema can be published to Kafka Topic.
>
> From what I understand, the property - auto.register.schemas is a Kafka
> Producer property, and not a schema registry property.
>
> Here is the code I use to set the property auto.register.schemas
> ```
>
> Console Producer:
>
> kafka-avro-console-producer --bootstrap-server localhost:9092 --property 
> schema.registry.url=http://localhost:8081 --topic srtest-optionalfield 
> --property 
> value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f2","type":"string"}]}'
>  --property 
> value.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy
>  --property auto.register.schemas=false
>
> Java Kafka Avro Producer :
>
> Properties props = new Properties();
> props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://localhost:9092");
> props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
> props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, 
> KafkaAvroSerializer.class);
> props.put(ProducerConfig.CLIENT_ID_CONFIG, "Kafka Avro  Producer");
> props.put("schema.registry.url", "http://localhost:8081;);
> props.put(AbstractKafkaAvroSerDeConfig.AUTO_REGISTER_SCHEMAS, false);
>
> ```
>
> Does this mean that - if a Kafka producer passes the property
> auto.register.schemas=true, it will be able to add the schema to the Schema
> Registry ?
>
> This does not provide safeguard, since I want to ensure that producer is
> allowed to produce only messages that conform to schemas in Schema Registry
>
> How do I do this ? Is there a way for me to set the property -
> auto.register.schemas - at the Schema Registry level ?
>
> tia!
>


Confluent Schema Registry - auto.register.schemas

2022-11-11 Thread karan alang
Hello All,

ref stackoverflow :
https://stackoverflow.com/questions/74396652/confluent-schema-registry-why-is-auto-register-schemas-not-applicable-at-schem

---

I've installed the Schema Registry and want to set auto.register.schemas to
false, so that only Avro messages conforming to the registered schema can be
published to the Kafka topic.

From what I understand, the property - auto.register.schemas is a Kafka
Producer property, and not a schema registry property.

Here is the code I use to set the property auto.register.schemas
```

Console Producer:

kafka-avro-console-producer --bootstrap-server localhost:9092
--property schema.registry.url=http://localhost:8081 --topic
srtest-optionalfield --property
value.schema='{"type":"record","name":"myrecord","fields":[{"name":"f2","type":"string"}]}'
--property 
value.subject.name.strategy=io.confluent.kafka.serializers.subject.RecordNameStrategy
--property auto.register.schemas=false

Java Kafka Avro Producer :

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "http://localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
KafkaAvroSerializer.class);
props.put(ProducerConfig.CLIENT_ID_CONFIG, "Kafka Avro  Producer");
props.put("schema.registry.url", "http://localhost:8081;);
props.put(AbstractKafkaAvroSerDeConfig.AUTO_REGISTER_SCHEMAS, false);

```

Does this mean that - if a Kafka producer passes the property
auto.register.schemas=true, it will be able to add the schema to the Schema
Registry ?

This does not provide a safeguard, since I want to ensure that a producer is
allowed to produce only messages that conform to the schemas in the Schema Registry.

How do I do this ? Is there a way for me to set the property -
auto.register.schemas - at the Schema Registry level ?
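
For what it's worth, since auto.register.schemas is purely a client-side
serializer setting, the registry itself cannot force producers to use it; the
server-side complements are restricting who can write to the registry's REST
endpoint, and (in Confluent Server, a commercial feature, not Apache Kafka)
broker-side schema-ID validation on the topic. On the client side, a minimal
sketch of the relevant properties looks like this; string keys are used to
avoid tying the snippet to a particular serializer version, and
use.latest.version only exists in newer serializer releases:

```java
Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, KafkaAvroSerializer.class);
props.put("schema.registry.url", "http://localhost:8081");
// With auto-registration off the serializer only looks the schema up; if the
// exact schema is not already registered, the send fails with a serialization
// error ("Schema not found") instead of registering a new version.
props.put("auto.register.schemas", false);
// Optional, newer serializer versions only: serialize against the latest
// registered version rather than the schema derived from the object.
props.put("use.latest.version", true);
```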

tia!


Glue avro schema registry with flink and kafka for any object

2021-12-24 Thread Sergio Pérez Felipe
I am trying to register and serialize an object with Flink, Kafka, Glue and
Avro. I've seen this method, which I'm trying:

Schema schema = parser.parse(new File("path/to/avro/file"));
GlueSchemaRegistryAvroSerializationSchema<GenericRecord> test =
        GlueSchemaRegistryAvroSerializationSchema.forGeneric(schema, topic, configs);
FlinkKafkaProducer<GenericRecord> producer = new FlinkKafkaProducer<>(
        kafkaTopic,
        test,
        properties);

My problem is that this approach doesn't allow me to include an object other
than *GenericRecord*. The object I want to send is a different one and is very
big, so big that it is too complex to transform into a GenericRecord.

I can't find much documentation. How can I send an object other than
GenericRecord, or is there any way to include my object inside a GenericRecord?



Re: Confluent Schema Registry Compatibility config

2021-12-16 Thread Mayuresh Gharat
Hi Folks,

I was reading docs on Confluent Schema Registry about Compatibility :
https://docs.confluent.io/platform/current/schema-registry/avro.html#compatibility-types

I was confused with "BACKWARDS" vs "BACKWARDS_TRANSITIVE".

Suppose we have 3 schemas X, X-1, X-2 and we configure the schema registry with
compatibility = "BACKWARDS". When we registered schema X-1 it must have been
compared against schema X-2. When we register schema X it must be compared
against schema X-1. So, by transitivity, schema X would also be compatible with
X-2.

So I am wondering what is the difference between "BACKWARDS" vs
"BACKWARDS_TRANSITIVE"? Any example would be really helpful.


-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Confluent Schema Registry Compatibility config

2021-12-16 Thread Mayuresh Gharat
Hi Folks,

I was reading docs on Confluent Schema Registry about Compatibility :
https://docs.confluent.io/platform/current/schema-registry/avro.html#compatibility-types

I was confused with "BACKWARDS" vs "BACKWARDS_TRANSITIVE".

Suppose we have 3 schemas X, X-1, X-2 and we configure the schema registry with
compatibility = "BACKWARDS". When we registered schema X-1 it must have been
compared against schema X-2. When we register schema X it must be compared
against schema X-1. So, by transitivity, schema X would also be compatible with
X-2.

So I am wondering what is the difference between "BACKWARDS" vs
"BACKWARDS_TRANSITIVE"? Any example would be really helpful.

-- 
-Regards,
Mayuresh R. Gharat
(862) 250-7125


Non-canonical form in Schema Registry?

2021-05-14 Thread Dan Bradley
The canonical form for Avro schemas is to use a single "name" key whose 
value is the concatenation of the namespace, if any, with the record 
name: 
https://avro.apache.org/docs/current/spec.html#Transforming+into+Parsing+Canonical+Form


There is a common, non-canonical alternative out in the wild that uses a 
short "name" plus an additional "namespace" key to represent the same 
information.


I've found that when I submit a schema in canonical form to the schema 
registry and then retrieve the stored schema back from the registry 
using the appropriate subject, the version that the registry provides 
uses the non-canonical form. (This is via Python, but I think the 
transformation is happening on the server.) Is there a configuration 
option to make the registry return a canonical form instead? Or is this 
a bug?





Schema registry for Kafka topic

2021-03-15 Thread Mich Talebzadeh
Hi,


We have an in-house cluster of Kafka brokers and ZooKeepers.


Kafka version kafka_2.12-2.7.0

ZooKeeper version apache-zookeeper-3.6.2-bin


The topic is published to Google BigQuery.


We would like to use a schema registry as opposed to sending schema with
payload for each message that adds to the volume of traffic.


As I understand from the available literature, this schema registry acts as a
kind of API to decouple consumers from producers and provide a common interface
for topic metadata.


Currently I am looking at the open-source community version for creating and
maintaining this schema registry. We tried the WePay BigQuery sink connector
from Confluent (which I assumed would seamlessly allow one to publish a Kafka
topic (source JSON) to Google BigQuery), but we are still getting the error:


org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:614)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:329)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:232)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:201)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:185)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: com.wepay.kafka.connect.bigquery.exception.ConversionConnectException:
    Top-level Kafka Connect schema must be of type 'struct'


Appreciate any advice.


Thanks
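
For context on that last exception: the WePay BigQuery sink expects every
record value to arrive as a Kafka Connect Struct. With plain JSON and no
schema, the JsonConverter hands the connector a Map (or a primitive), which is
what triggers "Top-level Kafka Connect schema must be of type 'struct'".
Pairing the connector with a schema-aware converter, for example
value.converter=io.confluent.connect.avro.AvroConverter together with
value.converter.schema.registry.url (or JsonConverter with
value.converter.schemas.enable=true and enveloped schema/payload JSON), is the
usual way to get Struct values, and it also lines up with the schema-registry
approach described above.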





Mirror Maker for Avro Messages (with schema registry)

2021-03-03 Thread Adithya Tirumale
Hi,

We have a use case where we need to maintain a replicated/cloned kafka cluster 
in a lower environment mirroring the production kafka cluster. The messages in 
kafka are a mix of JSON and AVRO messages. The schemas for the avro messages 
are registered using schema registry.

Using Mirror maker works for JSON messages but doesn't work well with avro 
messages. Since the lower(cloned) environment has its own schema registry, the 
schema id for the messages might go out of sync between the production and the 
lower(cloned) environment.

To solve this problem, I can think of the below options :


  *   Add the _schemas topic to the list of topics to be synced by MirrorMaker.
This way, when the applications in the cloned environment look up the schema
for a schema id, they get the same thing as in production.
  *   Add a special message transformation to the mirror maker to basically 
translate the production schema ids to the cloned environment schema ids. For 
example if the schema id for a value for the foo-bar topic is 15 in production 
and 13 in the cloned environment, mirror maker would set schema id field to 13 
before writing the message to the cloned env.
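
A note on the first option: mirroring _schemas only keeps IDs consistent if the
cloned environment's registry actually uses the mirrored topic as its backing
store (its kafkastore.topic) and no schemas are ever registered directly
against the cloned registry; otherwise locally assigned IDs can still collide
with the mirrored ones.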

My Questions :

  *   Are the above 2 options acceptable or am I missing something here ?
  *   Are there any other options I'm missing ?

Thanks!





kafka schema registry - some queries and questions

2020-10-08 Thread Manoj.Agrawal2
 Hi All,

Wanted to understand a bit more on the schema registry
1. With Apache Kafka, can we use the Schema Registry?
2. With Amazon MSK, can we use the Schema Registry?

Thanks
Manoj A



Re: kafka schema registry - some queries and questions

2020-09-21 Thread Bruno Cadonna

Hi Pushkar,

This question is better suited for 
https://groups.google.com/g/confluent-platform since the Schema Registry 
is part of the Confluent Platform but not of Apache Kafka.


Best,
Bruno

On 21.09.20 16:58, Pushkar Deole wrote:

Hi All,

Wanted to understand a bit more on the schema registry provided by
confluent.
Following are the queries:
1. Is the schema registry provided by Confluent built on top of Apache Kafka?
2. If a managed Kafka service is used in the cloud, e.g. Aiven Kafka, is the
schema registry implementation different for different vendors, i.e. does Aiven
have its own implementation, or has Confluent open-sourced the schema registry
implementation?
3. Will the Confluent Avro client libraries work with the schema registry of
managed services from other vendors like Aiven?



kafka schema registry - some queries and questions

2020-09-21 Thread Pushkar Deole
Hi All,

Wanted to understand a bit more on the schema registry provided by
confluent.
Following are the queries:
1. Is the schema registry provided by Confluent built on top of Apache Kafka?
2. If a managed Kafka service is used in the cloud, e.g. Aiven Kafka, is the
schema registry implementation different for different vendors, i.e. does Aiven
have its own implementation, or has Confluent open-sourced the schema registry
implementation?
3. Will the Confluent Avro client libraries work with the schema registry of
managed services from other vendors like Aiven?


Confluent Kafka - Schema Registry on windows

2020-07-22 Thread Nag Y
I happened to see an example of how to run the Schema Registry on Windows using
"schema-registry-start.bat" on 5.0.1.

I didn't see the file in 5.5.0. Is the Schema Registry no longer supported on
Windows? It seems the only way to run the Schema Registry on Windows now is
through Docker. Can someone please confirm?


Namespace problem in schema registry

2020-07-14 Thread vishnu murali
Hi all,

I have a question about the namespace in the Schema Registry.

If the schema is generated automatically by the JDBC source connector, then the
schema doesn't have a namespace field or value.

But if we create the schema manually with a namespace, register it into the
Schema Registry and then try to run the connector, a "Schema not found"
exception occurs.

How can we handle this? I need the namespace for deserialization in Java.

Does anyone have a solution for this?
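
One option worth checking for this: Kafka Connect's built-in SetSchemaMetadata
transform can stamp a fully qualified name onto the value schema as it leaves
the JDBC source connector, e.g. transforms=SetValueSchema,
transforms.SetValueSchema.type=org.apache.kafka.connect.transforms.SetSchemaMetadata$Value,
transforms.SetValueSchema.schema.name=com.example.MyRecord (the record name
here is only an example). With the Avro converter, the namespace and name in
the registered schema should then match what the Java deserializer expects.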


Re: Avro schema registry error

2020-05-21 Thread AJ Chen
Thanks, Liam. Will run the registry alongside Kafka.
-aj


On Wed, May 20, 2020 at 8:58 PM Liam Clarke-Hutchinson <
liam.cla...@adscale.co.nz> wrote:

> Hi AJ,
>
> No, there is  no public schema registry, you will need to deploy and
> maintain your own.
>
> Cheers,
>
> Liam Clarke-Hutchinson
>
> On Thu, May 21, 2020 at 11:56 AM AJ Chen  wrote:
>
> > I use avro for kafka message. When producing avro message, it fails to
> > access schema registry,
> > ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed
> to
> > send HTTP request to endpoint:
> > http://localhost:8081/subjects/avro_emp-value/versions
> >
> > When using confluent schema registry, dose it require to explicitly start
> > the local registry? Is there a free public avro schema registry available
> > to use?
> >
> > Thanks,
> > aj
> >
>


Re: Avro schema registry error

2020-05-20 Thread Liam Clarke-Hutchinson
Hi AJ,

No, there is  no public schema registry, you will need to deploy and
maintain your own.

Cheers,

Liam Clarke-Hutchinson

On Thu, May 21, 2020 at 11:56 AM AJ Chen  wrote:

> I use avro for kafka message. When producing avro message, it fails to
> access schema registry,
> ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed to
> send HTTP request to endpoint:
> http://localhost:8081/subjects/avro_emp-value/versions
>
> When using confluent schema registry, dose it require to explicitly start
> the local registry? Is there a free public avro schema registry available
> to use?
>
> Thanks,
> aj
>


Avro schema registry error

2020-05-20 Thread AJ Chen
I use Avro for Kafka messages. When producing an Avro message, it fails to
access the schema registry:
ERROR io.confluent.kafka.schemaregistry.client.rest.RestService - Failed to
send HTTP request to endpoint:
http://localhost:8081/subjects/avro_emp-value/versions

When using the Confluent Schema Registry, does it require explicitly starting
the local registry? Is there a free public Avro schema registry available to
use?

Thanks,
aj


Re: Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-11 Thread wangl...@geekplus.com.cn
Hi robin, 

It seems I didn't make it clear.

Actually we still use the JDBC sink connector's logic, but we want to use the
JDBC sink functionality in our own distributed platform instead of in Kafka
Connect.

I want to reuse the code here:
https://github.com/confluentinc/kafka-connect-jdbc/

Receive a Kafka Avro record and add it to a JdbcSinkTask; the task will
automatically generate the SQL and execute it according to the schema registry.

It seems I can do it like this, but I am not able to transform a GenericRecord
into a SinkRecord.

JdbcSinkTask task = new JdbcSinkTask();
task.start(props2);
consumer.subscribe(Collections.singletonList("orders"));

while (true) {
    final ConsumerRecords<String, GenericRecord> records =
            consumer.poll(Duration.ofMillis(100));
    for (final ConsumerRecord<String, GenericRecord> record : records) {
        final String key = record.key();
        final GenericRecord value = record.value();
        // change the GenericRecord to a SinkRecord here
        task.put(Arrays.asList(sinkRecord));
    }
}
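
One way to bridge that gap, sketched roughly below under the assumption that
the consumer is switched to raw byte[] values and that the open-source
AvroConverter (from the kafka-connect-avro-converter artifact) is on the
classpath, is to let the converter produce the Connect schema and value that
SinkRecord needs; variable names follow the snippet above and the registry URL
is a placeholder:

```java
import java.util.Collections;
import io.confluent.connect.avro.AvroConverter;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.connect.data.SchemaAndValue;
import org.apache.kafka.connect.sink.SinkRecord;

// Configure the converter once, for record values (isKey = false).
AvroConverter converter = new AvroConverter();
converter.configure(
        Collections.singletonMap("schema.registry.url", "http://localhost:8081"),
        false);

// rawRecords is a ConsumerRecords<byte[], byte[]> polled with ByteArrayDeserializer.
for (final ConsumerRecord<byte[], byte[]> record : rawRecords) {
    SchemaAndValue sv = converter.toConnectData(record.topic(), record.value());
    SinkRecord sinkRecord = new SinkRecord(
            record.topic(), record.partition(),
            null, null,                 // key schema / key are not needed here
            sv.schema(), sv.value(),
            record.offset());
    task.put(Collections.singletonList(sinkRecord));
}
```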

Thanks,
Lei


wangl...@geekplus.com.cn

 
From: Robin Moffatt
Date: 2020-05-11 16:40
To: users
Subject: Re: Write to database directly by referencing schema registry, no jdbc 
sink connector
>  wirite  to target database. I want to use self-written  java code
instead of kafka jdbc sink connector.
 
Out of interest, why do you want to do this? Why not use the JDBC sink
connector (or a fork of it if you need to amend its functionality)?
 
 
 
-- 
 
Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
 
 
On Sat, 9 May 2020 at 03:38, wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:
 
>
> Using debezium to parse binlog, using avro serialization and send to kafka.
>
> Need to consume the avro serialized  binlog data and  wirite  to target
> database
> I want to use self-written  java code instead of kafka jdbc sink
> connector.
>
> How can i  reference the schema registry, convert a kafka message to
> corresponding table record and write to corresponding table?
> Is there any example code to do this ?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>


Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-11 Thread Robin Moffatt
>  wirite  to target database. I want to use self-written  java code
instead of kafka jdbc sink connector.

Out of interest, why do you want to do this? Why not use the JDBC sink
connector (or a fork of it if you need to amend its functionality)?



-- 

Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff


On Sat, 9 May 2020 at 03:38, wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:

>
> Using debezium to parse binlog, using avro serialization and send to kafka.
>
> Need to consume the avro serialized  binlog data and  wirite  to target
> database
> I want to use self-written  java code instead of kafka jdbc sink
> connector.
>
> How can i  reference the schema registry, convert a kafka message to
> corresponding table record and write to corresponding table?
> Is there any example code to do this ?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>


Re: Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-11 Thread wangl...@geekplus.com.cn

Hi Liam, 

I have consumed the avro record using the java code:
for (final ConsumerRecord<String, GenericRecord> record : records) {
    final String key = record.key();
    final GenericRecord value = record.value();
    System.out.println(record.value().getSchema());
    System.out.printf("key = %s, value = %s%n", key, value);
}
Next I need to write it to the database using the existing Kafka JDBC sink
connector API.

It seems I need to reuse the code here:
https://github.com/confluentinc/kafka-connect-jdbc/
Just create a new JdbcSinkTask and add the record to it; the task will then
automatically generate the SQL according to the record schema and execute it,
no matter what the table is.

But I have no idea how to achieve this.

Thanks,
Lei





wangl...@geekplus.com.cn

 
From: Liam Clarke-Hutchinson
Date: 2020-05-09 18:20
To: users
Subject: Re: Re: Write to database directly by referencing schema registry, no 
jdbc sink connector
Hi Lei,
 
This tutorial will introduce you to the Avro consumers.
https://docs.confluent.io/current/schema-registry/schema_registry_tutorial.html
 
In terms of going from Avro record to SQL, the JDBC sink generates SQL
based on the field names in the schema, and configured table names.
 
IIRC, the Avro consumer returns an Avro GenericRecord.Record[1], which has
a getSchema() method that returns the schema used to deserialise it,  so
you could access that to generate the SQL.
 
[1]:
https://avro.apache.org/docs/current/api/java/org/apache/avro/generic/GenericData.Record.html
 
Good luck,
 
Liam Clarke-Hutchinson
 
On Sat, 9 May 2020, 10:03 pm wangl...@geekplus.com.cn, <
wangl...@geekplus.com.cn> wrote:
 
>
> Thanks Liam,
>
> I want to achive the following  function using java code:
>
> For each avro serialized record received:
>  1  deserialized the record automatically  by referencing  schema
> registry
>  2  change the record to a sql statement needed to be executed and
> execute it
>
> Seems the kafka jdbc sink connector (
> https://docs.confluent.io/current/connect/kafka-connect-jdbc/sink-connector/index.html)
> can achieve this function.
>
> But i have no idea how to write with java code.
> Is there any code example to achieve this?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>
> From: Liam Clarke-Hutchinson
> Date: 2020-05-09 16:30
> To: users
> Subject: Re: Write to database directly by referencing schema registry, no
> jdbc sink connector
> Hi Lei,
>
> You could use the Kafka Avro consumer to deserialise records using the
> Schema Registry automatically.
>
> Then write to the DB as you see fit.
>
> Cheers,
>
> Liam Clarke-Hutchinson
>
> On Sat, 9 May 2020, 2:38 pm wangl...@geekplus.com.cn, <
> wangl...@geekplus.com.cn> wrote:
>
> >
> > Using debezium to parse binlog, using avro serialization and send to
> kafka.
> >
> > Need to consume the avro serialized  binlog data and  wirite  to target
> > database
> > I want to use self-written  java code instead of kafka jdbc sink
> > connector.
> >
> > How can i  reference the schema registry, convert a kafka message to
> > corresponding table record and write to corresponding table?
> > Is there any example code to do this ?
> >
> > Thanks,
> > Lei
> >
> >
> >
> > wangl...@geekplus.com.cn
> >
> >
>


Re: Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-09 Thread Liam Clarke-Hutchinson
Hi Lei,

This tutorial will introduce you to the Avro consumers.
https://docs.confluent.io/current/schema-registry/schema_registry_tutorial.html

In terms of going from Avro record to SQL, the JDBC sink generates SQL
based on the field names in the schema, and configured table names.

IIRC, the Avro consumer returns an Avro GenericRecord.Record[1], which has
a getSchema() method that returns the schema used to deserialise it,  so
you could access that to generate the SQL.

[1]:
https://avro.apache.org/docs/current/api/java/org/apache/avro/generic/GenericData.Record.html

Good luck,

Liam Clarke-Hutchinson

On Sat, 9 May 2020, 10:03 pm wangl...@geekplus.com.cn, <
wangl...@geekplus.com.cn> wrote:

>
> Thanks Liam,
>
> I want to achive the following  function using java code:
>
> For each avro serialized record received:
>  1  deserialized the record automatically  by referencing  schema
> registry
>  2  change the record to a sql statement needed to be executed and
> execute it
>
> Seems the kafka jdbc sink connector (
> https://docs.confluent.io/current/connect/kafka-connect-jdbc/sink-connector/index.html)
> can achieve this function.
>
> But i have no idea how to write with java code.
> Is there any code example to achieve this?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>
> From: Liam Clarke-Hutchinson
> Date: 2020-05-09 16:30
> To: users
> Subject: Re: Write to database directly by referencing schema registry, no
> jdbc sink connector
> Hi Lei,
>
> You could use the Kafka Avro consumer to deserialise records using the
> Schema Registry automatically.
>
> Then write to the DB as you see fit.
>
> Cheers,
>
> Liam Clarke-Hutchinson
>
> On Sat, 9 May 2020, 2:38 pm wangl...@geekplus.com.cn, <
> wangl...@geekplus.com.cn> wrote:
>
> >
> > Using debezium to parse binlog, using avro serialization and send to
> kafka.
> >
> > Need to consume the avro serialized  binlog data and  wirite  to target
> > database
> > I want to use self-written  java code instead of kafka jdbc sink
> > connector.
> >
> > How can i  reference the schema registry, convert a kafka message to
> > corresponding table record and write to corresponding table?
> > Is there any example code to do this ?
> >
> > Thanks,
> > Lei
> >
> >
> >
> > wangl...@geekplus.com.cn
> >
> >
>


Re: Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-09 Thread wangl...@geekplus.com.cn

Thanks Liam, 

I want to achieve the following using Java code:

For each Avro-serialized record received:
 1. Deserialize the record automatically by referencing the schema registry.
 2. Convert the record into the SQL statement that needs to be executed, and
execute it.

It seems the Kafka JDBC sink connector
(https://docs.confluent.io/current/connect/kafka-connect-jdbc/sink-connector/index.html)
can achieve this.

But I have no idea how to write it in Java code.
Is there any example code for this?

Thanks,
Lei



wangl...@geekplus.com.cn 

 
From: Liam Clarke-Hutchinson
Date: 2020-05-09 16:30
To: users
Subject: Re: Write to database directly by referencing schema registry, no jdbc 
sink connector
Hi Lei,
 
You could use the Kafka Avro consumer to deserialise records using the
Schema Registry automatically.
 
Then write to the DB as you see fit.
 
Cheers,
 
Liam Clarke-Hutchinson
 
On Sat, 9 May 2020, 2:38 pm wangl...@geekplus.com.cn, <
wangl...@geekplus.com.cn> wrote:
 
>
> Using debezium to parse binlog, using avro serialization and send to kafka.
>
> Need to consume the avro serialized  binlog data and  wirite  to target
> database
> I want to use self-written  java code instead of kafka jdbc sink
> connector.
>
> How can i  reference the schema registry, convert a kafka message to
> corresponding table record and write to corresponding table?
> Is there any example code to do this ?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>


Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-09 Thread Liam Clarke-Hutchinson
Hi Lei,

You could use the Kafka Avro consumer to deserialise records using the
Schema Registry automatically.

Then write to the DB as you see fit.

Cheers,

Liam Clarke-Hutchinson

On Sat, 9 May 2020, 2:38 pm wangl...@geekplus.com.cn, <
wangl...@geekplus.com.cn> wrote:

>
> Using debezium to parse binlog, using avro serialization and send to kafka.
>
> Need to consume the avro serialized  binlog data and  wirite  to target
> database
> I want to use self-written  java code instead of kafka jdbc sink
> connector.
>
> How can i  reference the schema registry, convert a kafka message to
> corresponding table record and write to corresponding table?
> Is there any example code to do this ?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>


Re: Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-08 Thread Chris Toomey
Write your own implementation of the JDBC sink connector and use the avro
serializer to convert the kafka record into a connect record that your
connector takes and writes to DB via JDBC.


On Fri, May 8, 2020 at 7:38 PM wangl...@geekplus.com.cn <
wangl...@geekplus.com.cn> wrote:

>
> Using debezium to parse binlog, using avro serialization and send to kafka.
>
> Need to consume the avro serialized  binlog data and  wirite  to target
> database
> I want to use self-written  java code instead of kafka jdbc sink
> connector.
>
> How can i  reference the schema registry, convert a kafka message to
> corresponding table record and write to corresponding table?
> Is there any example code to do this ?
>
> Thanks,
> Lei
>
>
>
> wangl...@geekplus.com.cn
>
>


Write to database directly by referencing schema registry, no jdbc sink connector

2020-05-08 Thread wangl...@geekplus.com.cn

Using Debezium to parse the binlog, using Avro serialization, and sending to Kafka.

I need to consume the Avro-serialized binlog data and write it to the target database.
I want to use self-written Java code instead of the Kafka JDBC sink connector.

How can I reference the schema registry, convert a Kafka message to the
corresponding table record, and write it to the corresponding table?
Is there any example code to do this?

Thanks,
Lei



wangl...@geekplus.com.cn



Re: How to change schema registry port

2019-12-25 Thread Kafka Shil
On Wed, Dec 25, 2019 at 5:51 PM Kafka Shil  wrote:

> Hi,
> I am using docker compose file to start schema-registry. I need to change
> default port to 8084 instead if 8081. I made the following changes in
> docker compose file.
>
> schema-registry:
> image: confluentinc/cp-schema-registry:5.3.1
> hostname: schema-registry
> container_name: schema-registry
> depends_on:
>   - zookeeper
>   - broker
> ports:
>   - "8084:8084"
> environment:
>   SCHEMA_REGISTRY_HOST_NAME: schema-registry
>   SCHEMA_REGISTRY_KAFKASTORE_CONNECTION_URL: 'zookeeper:2181'
>   SCHEMA_REGISTRY_LISTENERS: http://localhost:8084
>
> But If I execute "docker ps", I can see it still listens to 8081.
>
> 69511efd32d4confluentinc/cp-schema-registry:5.3.1
> "/etc/confluent/dock…"   9 minutes ago   Up 9 minutes
> 8081/tcp, 0.0.0.0:8084->8084/tcp schema-registry
>
> How do I change port to 8084?
>
> Thanks
>
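
For reference, the 8081/tcp entry in that docker ps output only reflects the
port the cp-schema-registry image declares via EXPOSE; it does not show what
the service is actually listening on. The listener itself is controlled by
SCHEMA_REGISTRY_LISTENERS, and binding it to localhost inside the container
makes the registry unreachable from outside, so something like
SCHEMA_REGISTRY_LISTENERS: http://0.0.0.0:8084 together with the existing
8084:8084 mapping is the usual way to move it to 8084 (based on the standard
cp-schema-registry image behaviour; adjust to your compose setup).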


Re: Ksql with schema registry

2019-10-07 Thread KhajaAsmath Mohammed
Thanks Robin, I was able to do it after adding the schema registry URL to the
properties file (ksql-server.properties).

Sent from my iPhone

> On Oct 7, 2019, at 9:12 AM, Robin Moffatt  wrote:
> 
> You can specify VALUE_FORMAT='AVRO' to use Avro serialisation and
> the Schema Registry. See docs for more details
> https://docs.confluent.io/current/ksql/docs/installation/server-config/avro-schema.html
> 
> 
> -- 
> 
> Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff
> 
> 
> On Sat, 5 Oct 2019 at 20:37, KhajaAsmath Mohammed 
> wrote:
> 
>> Hi,
>> 
>> What is the configuration that I need to follow to register schema
>> registry on ksql client installed in my machine that connects to cluster.
>> 
>> Creating stream with select should create schema automatically in schema
>> registry .
>> 
>> Thanks,
>> Asmath
>> 
>> Sent from my iPhone


Re: Ksql with schema registry

2019-10-07 Thread Robin Moffatt
You can specify VALUE_FORMAT='AVRO' to use Avro serialisation and
the Schema Registry. See docs for more details
https://docs.confluent.io/current/ksql/docs/installation/server-config/avro-schema.html


-- 

Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff


On Sat, 5 Oct 2019 at 20:37, KhajaAsmath Mohammed 
wrote:

> Hi,
>
> What is the configuration that I need to follow to register schema
> registry on ksql client installed in my machine that connects to cluster.
>
> Creating stream with select should create schema automatically in schema
> registry .
>
> Thanks,
> Asmath
>
> Sent from my iPhone


Ksql with schema registry

2019-10-05 Thread KhajaAsmath Mohammed
Hi,

What is the configuration I need in order to register the schema registry with
the KSQL client installed on my machine that connects to the cluster?

Creating a stream with SELECT should create the schema automatically in the
schema registry.

Thanks,
Asmath 

Sent from my iPhone

Kstreams and Schema registry question

2019-10-03 Thread KhajaAsmath Mohammed
Hi,

I have a KStream in our cluster. I am not sure how the schema is generated
for it. Does a KStream have a schema in the schema registry?

How do I move data from the KStream to a topic which has the same name? How do
I register the schema automatically for this?

This is how the schema looks in our registry. My assumption was that it was
created automatically. May I know how this was done?

{
  "type": "record",

*  "name": "KsqlDataSourceSchema",  "namespace":
"io.confluent.ksql.avro_schemas",*
  "fields": [
{
  "name": "IN_EMP_ID",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "IN_SYSTEM_ID",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "IN_DIRECTORY_TITLE",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "IN_OFFICE_LOCATION",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "IN_DIRECTORY_DEPARTMENT",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "EMP_VISA_TYPE_CODE",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "EMP_CITIZENSHIP_CODE",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "IN_APP_TITLE_CODE",
  "type": [
"null",
"string"
  ],
  "default": null
},
{
  "name": "IN_APP_TITLE_NAME",
  "type": [
"null",
"string"
  ],
  "default": null
}
  ]
}
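
For reference, a record named KsqlDataSourceSchema in the
io.confluent.ksql.avro_schemas namespace is the default name KSQL gives to
schemas it registers automatically when a stream or table is created with
VALUE_FORMAT='AVRO' (for example via CREATE STREAM ... AS SELECT), which fits
the assumption that this schema was created automatically rather than
registered by hand.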

Thanks,
Asmath


Re: Open Source Schema Registry

2018-10-23 Thread Peter Bukowinski
Have a look at https://github.com/confluentinc/schema-registry


> On Oct 23, 2018, at 9:28 AM, chinchu chinchu  wrote:
> 
> Hi folks,
> We are looking  to use  open source schema registry with apache kafka 1.0.1
> and avro. *Do we need to write a serialzer/deserialzier  similar to
> confluent's KafkAvroSerializer to achieve this ?*.Our schemas are large ,so
> one of the reason that  we are looking to use the registry is to decrease
> the  payload with every record. The consumers at any point in time will
> know what the schema is for each topic ,because they are all internal to my
> team so that is some thing that we can work around for now.
> 
> Thanks,
> Chinchu



Open Source Schema Registry

2018-10-23 Thread chinchu chinchu
Hi folks,
We are looking to use an open-source schema registry with Apache Kafka 1.0.1
and Avro. *Do we need to write a serializer/deserializer similar to Confluent's
KafkaAvroSerializer to achieve this?* Our schemas are large, so one of the
reasons we are looking at a registry is to decrease the payload sent with every
record. The consumers at any point in time will know what the schema is for
each topic, because they are all internal to my team, so that is something we
can work around for now.
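
For reference, the Confluent serializers do not require the rest of the
Confluent Platform: they work against a plain Apache Kafka broker as long as a
Schema Registry instance is running and the schema.registry.url property is
set, and they put only a small schema-id header (not the full schema) into each
record, which addresses the payload-size concern. A minimal producer-side
sketch, with broker and registry addresses as placeholders:

```java
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "io.confluent.kafka.serializers.KafkaAvroSerializer");
props.put("schema.registry.url", "http://localhost:8081");
// The serializer registers/looks up the schema and embeds only its id
// (a magic byte plus 4 bytes) in front of the Avro-encoded payload.
KafkaProducer<String, GenericRecord> producer = new KafkaProducer<>(props);
```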

Thanks,
Chinchu


Re: Building schema-registry module fails - kafka-connect-hdfs FAQ

2018-05-03 Thread Kanagha
Hi,

I found that the latest schema-registry module version isn't compatible with
our internal Kafka version 0.11, hence the above error message.

Investigating if the previous version, 4.1.0-SNAPSHOT would work with our
internal version as per this change -
https://github.com/confluentinc/schema-registry/commit/8cbcc97d20fdcd6b4f21bea8de04e12b716fe7e1#diff-600376dffeb79835ede4a0b285078036

From the git repo, I'm unsure what the appropriate git tag is for fetching the
4.1.0 version.

I would appreciate any advice on how to fetch earlier stable versions of these
modules required for building kafka-connect-hdfs. Is there a link where we can
find all the stable versions released for these modules?

Thanks!


Kanagha

On Thu, May 3, 2018 at 10:40 AM, Kanagha <er.kana...@gmail.com> wrote:

> Hi,
>
> I'm trying to build kafka-connect-hdfs separately by following this FAQ -
> https://github.com/confluentinc/kafka-connect-hdfs/wiki/FAQ
>
> While compiling schema-registry, I get the following error:
>
> [*INFO*] -
>
> [*ERROR*] COMPILATION ERROR :
>
> [*INFO*] -
>
> [*ERROR*] /Users/workspacename/kafka-connect/schema-registry/
> client/src/main/java/io/confluent/kafka/schemaregistry/client/
> security/basicauth/SaslBasicAuthCredentialProvider.java:[40,42] cannot
> find symbol
>
>   symbol:   method loadClientContext(java.util.Map<java.lang.String,java.
> lang.Object>)
>
>   location: class org.apache.kafka.common.security.JaasContext
>
> [*INFO*] 1 error
>
> We are using kafka 0.11 version and kafka connect binaries in our
> platform. Hence trying to integrate kafka-connect-hdfs into the existing
> platform.
>
> I see using kafka-connect-hdfs is supported outside of confluent platform
> by following this link - https://github.com/confluentinc/kafka-connect-
> hdfs/issues/175
>
> Appreciate any inputs/suggestions. Thanks!
>
>
>
>


Building schema-registry module fails - kafka-connect-hdfs FAQ

2018-05-03 Thread Kanagha
Hi,

I'm trying to build kafka-connect-hdfs separately by following this FAQ -
https://github.com/confluentinc/kafka-connect-hdfs/wiki/FAQ

While compiling schema-registry, I get the following error:

[INFO] -------------------------------------------------------------
[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /Users/workspacename/kafka-connect/schema-registry/client/src/main/java/io/confluent/kafka/schemaregistry/client/security/basicauth/SaslBasicAuthCredentialProvider.java:[40,42] cannot find symbol
  symbol:   method loadClientContext(java.util.Map<java.lang.String,java.lang.Object>)
  location: class org.apache.kafka.common.security.JaasContext
[INFO] 1 error

We are using kafka 0.11 version and kafka connect binaries in our platform.
Hence trying to integrate kafka-connect-hdfs into the existing platform.

I see using kafka-connect-hdfs is supported outside of confluent platform
by following this link -
https://github.com/confluentinc/kafka-connect-hdfs/issues/175

Appreciate any inputs/suggestions. Thanks!


Schema Registry

2017-12-26 Thread Nishanth S
Hi All,
What is the best way to add schemas to the schema registry? I am using the
Confluent platform and our data is in Avro. We use the Confluent Kafka Avro
serializers, and I read that using them would register the schema in the schema
registry automatically. We also use the same schemas for downstream processing
and *do not want to make use of the schema registry for the short term*.
Now, if I want to compare the schemas in the schema registry with the ones in
source control (from where the downstream process would refer to them), is
there a way to do that? How can I retrieve the schemas from the schema registry
if I do not upload them by a separate process and just use the serializers to
do the job?
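
One way to pull what the serializers registered, so it can be diffed against
the copies in source control, is the registry's REST API (GET /subjects,
GET /subjects/<subject>/versions/latest) or the Java client that ships with the
serializers. A small sketch, assuming the default TopicNameStrategy so the
value subject is "<topic>-value", with URL and topic as placeholders:

```java
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
import io.confluent.kafka.schemaregistry.client.SchemaMetadata;

public class SchemaDiffCheck {
    public static void main(String[] args) throws Exception {
        CachedSchemaRegistryClient client =
                new CachedSchemaRegistryClient("http://localhost:8081", 100);
        SchemaMetadata latest = client.getLatestSchemaMetadata("my-topic-value");
        // Print the registered schema; compare this against the .avsc in source control.
        System.out.println(latest.getSchema());
    }
}
```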

Thanks,
Nishanth


Re: Schema Registry Maven Artifacts Missing?

2017-08-01 Thread Debasish Ghosh
U can try this ..

"io.confluent"  % "kafka-avro-serializer"% "3.2.2"

regards.

On Wed, Aug 2, 2017 at 9:08 AM, Chaoran Yu <chaoran...@lightbend.com> wrote:

> Does anyone know what artifacts I need to include in my project in order
> to use Schema Registry?
>
> I looked at this SO link: https://stackoverflow.com/
> questions/37317567/how-to-use-the-avro-serializer-with-
> schema-registry-from-a-kafka-connect-sourcet <https://stackoverflow.com/
> questions/37317567/how-to-use-the-avro-serializer-with-
> schema-registry-from-a-kafka-connect-sourcet>
> But the artifacts referred to there can’t be found at Maven Central:
> https://mvnrepository.com/artifact/io.confluent.kafka <
> https://mvnrepository.com/artifact/io.confluent.kafka>
>
> Thanks in advance!
>
>


-- 
Debasish Ghosh
http://manning.com/ghosh2
http://manning.com/ghosh

Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg


Schema Registry Maven Artifacts Missing?

2017-08-01 Thread Chaoran Yu
Does anyone know what artifacts I need to include in my project in order to use 
Schema Registry?

I looked at this SO link:
https://stackoverflow.com/questions/37317567/how-to-use-the-avro-serializer-with-schema-registry-from-a-kafka-connect-sourcet
But the artifacts referred to there can’t be found at Maven Central:
https://mvnrepository.com/artifact/io.confluent.kafka

Thanks in advance!
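
For reference, the io.confluent artifacts (kafka-avro-serializer,
kafka-schema-registry-client and friends) are published to Confluent's own
Maven repository at https://packages.confluent.io/maven/ rather than to Maven
Central, so that repository has to be added to the build (a <repository> entry
in Maven, or an extra resolver in sbt/Gradle) before the dependency mentioned
in the reply above will resolve.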



Re: Schema Registry on DC/OS

2017-07-27 Thread Debasish Ghosh
I am using on the non-enterprise edition .. it works fine.

regards.

On Tue, Jul 25, 2017 at 5:16 PM, Hassaan Pasha <hpa...@an10.io> wrote:

> Is confluent-kafka supported by DCOS (non-enterprise edition) or does it
> only work with the enterprise edition?
>
> On Tue, Jul 25, 2017 at 4:37 PM, Affan Syed <as...@an10.io> wrote:
>
> > again, another person from confluent saying it is supported. Ask here
> asap
> > :)
> > - Affan
> >
> > -- Forwarded message --
> > From: Kaufman Ng <kauf...@confluent.io>
> > Date: Tue, Jul 25, 2017 at 9:20 AM
> > Subject: Re: Schema Registry on DC/OS
> > To: users@kafka.apache.org, dgh...@acm.org
> >
> >
> > Confluent Schema Registry is available in the DC/OS Universe, see here
> for
> > the package definitions
> > https://github.com/mesosphere/universe/tree/dcd777a7e429678f
> > d74fc7306945cdd27bda3b94/repo/packages/C/confluent-schema-registry/5
> >
> > The stackoverflow thread is quite out of date, as it mentions Confluent
> > Platform 2.0 (the latest version as of now is 3.2.2).
> >
> >
> > On Mon, Jul 24, 2017 at 2:16 PM, Debasish Ghosh <
> ghosh.debas...@gmail.com>
> > wrote:
> >
> > > Hi -
> > >
> > > Is it possible to run schema registry service on DC/OS ? I checked the
> > > Confluent Kafka package on Mesosphere Universe. It doesn't have any
> > support
> > > for running Schema Registry. Also this thread
> > > https://stackoverflow.com/questions/37322078/cant-start-
> > > confluent-2-0-apache-kafka-schema-registry-in-dc-os
> > > on SoF says the same thing. Just wondering if there has been any
> progress
> > > on this front ..
> > >
> > > regards.
> > >
> > > --
> > > Debasish Ghosh
> > > http://manning.com/ghosh2
> > > http://manning.com/ghosh
> > >
> > > Twttr: @debasishg
> > > Blog: http://debasishg.blogspot.com
> > > Code: http://github.com/debasishg
> > >
> >
> >
> >
> > --
> > Kaufman Ng
> > +1 646 961 8063
> > Solutions Architect | Confluent | www.confluent.io
> >
> >
>
>
> --
> Regards,
>
> *Hassaan Pasha*
> Mobile: 03347767442
>



-- 
Debasish Ghosh
http://manning.com/ghosh2
http://manning.com/ghosh

Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg


Re: Schema Registry on DC/OS

2017-07-27 Thread Hassaan Pasha
Is confluent-kafka supported by DCOS (non-enterprise edition) or does it
only work with the enterprise edition?

On Tue, Jul 25, 2017 at 4:37 PM, Affan Syed <as...@an10.io> wrote:

> again, another person from confluent saying it is supported. Ask here asap
> :)
> - Affan
>
> -- Forwarded message --
> From: Kaufman Ng <kauf...@confluent.io>
> Date: Tue, Jul 25, 2017 at 9:20 AM
> Subject: Re: Schema Registry on DC/OS
> To: users@kafka.apache.org, dgh...@acm.org
>
>
> Confluent Schema Registry is available in the DC/OS Universe, see here for
> the package definitions
> https://github.com/mesosphere/universe/tree/dcd777a7e429678f
> d74fc7306945cdd27bda3b94/repo/packages/C/confluent-schema-registry/5
>
> The stackoverflow thread is quite out of date, as it mentions Confluent
> Platform 2.0 (the latest version as of now is 3.2.2).
>
>
> On Mon, Jul 24, 2017 at 2:16 PM, Debasish Ghosh <ghosh.debas...@gmail.com>
> wrote:
>
> > Hi -
> >
> > Is it possible to run schema registry service on DC/OS ? I checked the
> > Confluent Kafka package on Mesosphere Universe. It doesn't have any
> support
> > for running Schema Registry. Also this thread
> > https://stackoverflow.com/questions/37322078/cant-start-
> > confluent-2-0-apache-kafka-schema-registry-in-dc-os
> > on SoF says the same thing. Just wondering if there has been any progress
> > on this front ..
> >
> > regards.
> >
> > --
> > Debasish Ghosh
> > http://manning.com/ghosh2
> > http://manning.com/ghosh
> >
> > Twttr: @debasishg
> > Blog: http://debasishg.blogspot.com
> > Code: http://github.com/debasishg
> >
>
>
>
> --
> Kaufman Ng
> +1 646 961 8063
> Solutions Architect | Confluent | www.confluent.io
>
>


-- 
Regards,

*Hassaan Pasha*
Mobile: 03347767442


Re: Schema Registry on DC/OS

2017-07-25 Thread Debasish Ghosh
The reason I was looking for the CLI was to inspect into the service and
get information like the host, port etc. But later I found I can get the
same using the task command of dcos ..

$ dcos marathon task list --json /schema-registry

So, it's ok .. I don't need the specific CLI ..

regards.

On Tue, Jul 25, 2017 at 7:52 PM, Kaufman Ng <kauf...@confluent.io> wrote:

> The Confluent Schema Registry is a RESTful service, so no CLI really.
> What's your use case?
>
> If you are not familiar with it, the quickstart docs is a good place to
> start: http://docs.confluent.io/current/schema-registry/
> docs/intro.html#quickstart
>
>
> On Tue, Jul 25, 2017 at 2:09 AM, Debasish Ghosh <ghosh.debas...@gmail.com>
> wrote:
>
>> Thanks a lot .. I found it from the community supported packages on the
>> DC/OS UI. Installed it and it runs ok. One question - is there any CLI for
>> confluent-schema-registry ? dcos package install
>> confluent-schema-registry --cli does not give anything ..
>>
>> regards.
>>
>> On Tue, Jul 25, 2017 at 9:50 AM, Kaufman Ng <kauf...@confluent.io> wrote:
>>
>>> Confluent Schema Registry is available in the DC/OS Universe, see here
>>> for the package definitions https://github.com
>>> /mesosphere/universe/tree/dcd777a7e429678fd74fc7306945cdd27b
>>> da3b94/repo/packages/C/confluent-schema-registry/5
>>>
>>> The stackoverflow thread is quite out of date, as it mentions Confluent
>>> Platform 2.0 (the latest version as of now is 3.2.2).
>>>
>>>
>>> On Mon, Jul 24, 2017 at 2:16 PM, Debasish Ghosh <
>>> ghosh.debas...@gmail.com> wrote:
>>>
>>>> Hi -
>>>>
>>>> Is it possible to run schema registry service on DC/OS ? I checked the
>>>> Confluent Kafka package on Mesosphere Universe. It doesn't have any
>>>> support
>>>> for running Schema Registry. Also this thread
>>>> https://stackoverflow.com/questions/37322078/cant-start-conf
>>>> luent-2-0-apache-kafka-schema-registry-in-dc-os
>>>> on SoF says the same thing. Just wondering if there has been any
>>>> progress
>>>> on this front ..
>>>>
>>>> regards.
>>>>
>>>> --
>>>> Debasish Ghosh
>>>> http://manning.com/ghosh2
>>>> http://manning.com/ghosh
>>>>
>>>> Twttr: @debasishg
>>>> Blog: http://debasishg.blogspot.com
>>>> Code: http://github.com/debasishg
>>>>
>>>
>>>
>>>
>>> --
>>> Kaufman Ng
>>> +1 646 961 8063 <(646)%20961-8063>
>>> Solutions Architect | Confluent | www.confluent.io
>>>
>>>
>>>
>>
>>
>> --
>> Debasish Ghosh
>> http://manning.com/ghosh2
>> http://manning.com/ghosh
>>
>> Twttr: @debasishg
>> Blog: http://debasishg.blogspot.com
>> Code: http://github.com/debasishg
>>
>
>
>
> --
> Kaufman Ng
> +1 646 961 8063
> Solutions Architect | Confluent | www.confluent.io
>
>
>


-- 
Debasish Ghosh
http://manning.com/ghosh2
http://manning.com/ghosh

Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg


Re: Schema Registry on DC/OS

2017-07-25 Thread Kaufman Ng
The Confluent Schema Registry is a RESTful service, so no CLI really.
What's your use case?

If you are not familiar with it, the quickstart docs is a good place to
start:
http://docs.confluent.io/current/schema-registry/docs/intro.html#quickstart


On Tue, Jul 25, 2017 at 2:09 AM, Debasish Ghosh <ghosh.debas...@gmail.com>
wrote:

> Thanks a lot .. I found it from the community supported packages on the
> DC/OS UI. Installed it and it runs ok. One question - is there any CLI for
> confluent-schema-registry ? dcos package install
> confluent-schema-registry --cli does not give anything ..
>
> regards.
>
> On Tue, Jul 25, 2017 at 9:50 AM, Kaufman Ng <kauf...@confluent.io> wrote:
>
>> Confluent Schema Registry is available in the DC/OS Universe, see here
>> for the package definitions https://github.com
>> /mesosphere/universe/tree/dcd777a7e429678fd74fc7306945cdd27b
>> da3b94/repo/packages/C/confluent-schema-registry/5
>>
>> The stackoverflow thread is quite out of date, as it mentions Confluent
>> Platform 2.0 (the latest version as of now is 3.2.2).
>>
>>
>> On Mon, Jul 24, 2017 at 2:16 PM, Debasish Ghosh <ghosh.debas...@gmail.com
>> > wrote:
>>
>>> Hi -
>>>
>>> Is it possible to run schema registry service on DC/OS ? I checked the
>>> Confluent Kafka package on Mesosphere Universe. It doesn't have any
>>> support
>>> for running Schema Registry. Also this thread
>>> https://stackoverflow.com/questions/37322078/cant-start-conf
>>> luent-2-0-apache-kafka-schema-registry-in-dc-os
>>> on SoF says the same thing. Just wondering if there has been any progress
>>> on this front ..
>>>
>>> regards.
>>>
>>> --
>>> Debasish Ghosh
>>> http://manning.com/ghosh2
>>> http://manning.com/ghosh
>>>
>>> Twttr: @debasishg
>>> Blog: http://debasishg.blogspot.com
>>> Code: http://github.com/debasishg
>>>
>>
>>
>>
>> --
>> Kaufman Ng
>> +1 646 961 8063 <(646)%20961-8063>
>> Solutions Architect | Confluent | www.confluent.io
>>
>>
>>
>
>
> --
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>



-- 
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io


Re: Schema Registry on DC/OS

2017-07-25 Thread Debasish Ghosh
Thanks a lot .. I found it from the community supported packages on the
DC/OS UI. Installed it and it runs ok. One question - is there any CLI for
confluent-schema-registry ? dcos package install confluent-schema-registry
--cli does not give anything ..

regards.

On Tue, Jul 25, 2017 at 9:50 AM, Kaufman Ng <kauf...@confluent.io> wrote:

> Confluent Schema Registry is available in the DC/OS Universe, see here for
> the package definitions https://github.com/mesosphere/universe/tree/
> dcd777a7e429678fd74fc7306945cdd27bda3b94/repo/packages/C/
> confluent-schema-registry/5
>
> The stackoverflow thread is quite out of date, as it mentions Confluent
> Platform 2.0 (the latest version as of now is 3.2.2).
>
>
> On Mon, Jul 24, 2017 at 2:16 PM, Debasish Ghosh <ghosh.debas...@gmail.com>
> wrote:
>
>> Hi -
>>
>> Is it possible to run schema registry service on DC/OS ? I checked the
>> Confluent Kafka package on Mesosphere Universe. It doesn't have any
>> support
>> for running Schema Registry. Also this thread
>> https://stackoverflow.com/questions/37322078/cant-start-conf
>> luent-2-0-apache-kafka-schema-registry-in-dc-os
>> on SoF says the same thing. Just wondering if there has been any progress
>> on this front ..
>>
>> regards.
>>
>> --
>> Debasish Ghosh
>> http://manning.com/ghosh2
>> http://manning.com/ghosh
>>
>> Twttr: @debasishg
>> Blog: http://debasishg.blogspot.com
>> Code: http://github.com/debasishg
>>
>
>
>
> --
> Kaufman Ng
> +1 646 961 8063
> Solutions Architect | Confluent | www.confluent.io
>
>
>


-- 
Debasish Ghosh
http://manning.com/ghosh2
http://manning.com/ghosh

Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg


Re: Schema Registry on DC/OS

2017-07-24 Thread Kaufman Ng
Confluent Schema Registry is available in the DC/OS Universe, see here for
the package definitions
https://github.com/mesosphere/universe/tree/dcd777a7e429678fd74fc7306945cdd27bda3b94/repo/packages/C/confluent-schema-registry/5

The stackoverflow thread is quite out of date, as it mentions Confluent
Platform 2.0 (the latest version as of now is 3.2.2).


On Mon, Jul 24, 2017 at 2:16 PM, Debasish Ghosh <ghosh.debas...@gmail.com>
wrote:

> Hi -
>
> Is it possible to run schema registry service on DC/OS ? I checked the
> Confluent Kafka package on Mesosphere Universe. It doesn't have any support
> for running Schema Registry. Also this thread
> https://stackoverflow.com/questions/37322078/cant-start-
> confluent-2-0-apache-kafka-schema-registry-in-dc-os
> on SoF says the same thing. Just wondering if there has been any progress
> on this front ..
>
> regards.
>
> --
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>



-- 
Kaufman Ng
+1 646 961 8063
Solutions Architect | Confluent | www.confluent.io


Schema Registry on DC/OS

2017-07-24 Thread Debasish Ghosh
Hi -

Is it possible to run schema registry service on DC/OS ? I checked the
Confluent Kafka package on Mesosphere Universe. It doesn't have any support
for running Schema Registry. Also this thread
https://stackoverflow.com/questions/37322078/cant-start-confluent-2-0-apache-kafka-schema-registry-in-dc-os
on SoF says the same thing. Just wondering if there has been any progress
on this front ..

regards.

-- 
Debasish Ghosh
http://manning.com/ghosh2
http://manning.com/ghosh

Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg


Re: Avro Serialization & Schema Registry ..

2017-07-19 Thread Michael Noll
In short, Avro serializers/deserializers provided by Confluent always
integrate with (and thus require) Confluent Schema Registry.  That's why
you must set the `schema.registry.url` configuration for them.

If you want to use Avro but without a schema registry, you'd need to work
with the Avro API directly.  You can also implement your own "no schema
registry" Avro serializers/deserializers for more convenience, of course.

Best wishes,
Michael



On Mon, Jul 17, 2017 at 8:51 PM, Debasish Ghosh <ghosh.debas...@gmail.com>
wrote:

> I am using the class io.confluent.kafka.serializers.KafkaAvroSerializer as
> one of the base abstractions for Avro serialization. From the stack trace I
> see that the instantiation of this class needs set up of
> KafkaAvroSerializerConfig which needs a value for the schema registry url
> ..
>
> regards.
>
> On Tue, Jul 18, 2017 at 12:02 AM, Richard L. Burton III <
> mrbur...@gmail.com>
> wrote:
>
> > For your first question, no you can use the avro API.
> >
> >
> >
> > On Mon, Jul 17, 2017 at 2:29 PM Debasish Ghosh <ghosh.debas...@gmail.com
> >
> > wrote:
> >
> >> Hi -
> >>
> >> I am using Avro Serialization in a Kafka Streams application through the
> >> following dependency ..
> >>
> >> "io.confluent"  % "kafka-avro-serializer" % "3.2.2"
> >>
> >> My question is : Is schema registry mandatory for using Avro
> Serialization
> >> ? Because when I run the application I get the following exception where
> >> it
> >> complains that there is no default value for "schema.registry.url". My
> >> current settings for StreamsConfig are the following ..
> >>
> >>   settings.put(StreamsConfig.APPLICATION_ID_CONFIG,
> >> "kstream-log-processing-avro")
> >>settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, config.brokers)
> >>settings.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG,
> >> Serdes.ByteArray.getClass.getName)
> >>settings.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG,
> >> classOf[SpecificAvroSerde[LogRecordAvro]])
> >>
> >> .. and the exception ..
> >>
> >> 23:49:34.054 TKD [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator
> -
> >> User provided listener
> >> org.apache.kafka.streams.processor.internals.StreamThread$1 for group
> >> kstream-log-processing-avro failed on partition assignment
> >> org.apache.kafka.streams.errors.StreamsException: Failed to configure
> >> value
> >> serde class com.lightbend.fdp.sample.kstream.serializers.
> >> SpecificAvroSerde
> >> at org.apache.kafka.streams.StreamsConfig.valueSerde(
> >> StreamsConfig.java:594)
> >> at
> >> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<
> >> init>(AbstractProcessorContext.java:58)
> >> at
> >> org.apache.kafka.streams.processor.internals.
> ProcessorContextImpl.<init>(
>> ProcessorContextImpl.java:41)
>> at
>> org.apache.kafka.streams.processor.internals.
>> StreamTask.<init>(StreamTask.java:137)
> >> at
> >> org.apache.kafka.streams.processor.internals.
> >> StreamThread.createStreamTask(StreamThread.java:864)
> >> at
> >> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.
> >> createTask(StreamThread.java:1237)
> >> at
> >> org.apache.kafka.streams.processor.internals.StreamThread$
> >> AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
> >> at
> >> org.apache.kafka.streams.processor.internals.
> StreamThread.addStreamTasks(
> >> StreamThread.java:967)
> >> at
> >> org.apache.kafka.streams.processor.internals.StreamThread.access$600(
> >> StreamThread.java:69)
> >> at
> >> org.apache.kafka.streams.processor.internals.StreamThread$1.
> >> onPartitionsAssigned(StreamThread.java:234)
> >> at
> >> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.
> >> onJoinComplete(ConsumerCoordinator.java:259)
> >> at
> >> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.
> >> joinGroupIfNeeded(AbstractCoordinator.java:352)
> >> at
> >> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.
> >> ensureActiveGroup(AbstractCoordinator.java:303)
> >> at
> >> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(
> >> ConsumerCoordinator.java:290

Re: Avro Serialization & Schema Registry ..

2017-07-17 Thread Debasish Ghosh
I am using the class io.confluent.kafka.serializers.KafkaAvroSerializer as
one of the base abstractions for Avro serialization. From the stack trace I
see that the instantiation of this class needs set up of
KafkaAvroSerializerConfig which needs a value for the schema registry url ..

regards.

On Tue, Jul 18, 2017 at 12:02 AM, Richard L. Burton III <mrbur...@gmail.com>
wrote:

> For your first question, no you can use the avro API.
>
>
>
> On Mon, Jul 17, 2017 at 2:29 PM Debasish Ghosh <ghosh.debas...@gmail.com>
> wrote:
>
>> Hi -
>>
>> I am using Avro Serialization in a Kafka Streams application through the
>> following dependency ..
>>
>> "io.confluent"  % "kafka-avro-serializer" % "3.2.2"
>>
>> My question is : Is schema registry mandatory for using Avro Serialization
>> ? Because when I run the application I get the following exception where
>> it
>> complains that there is no default value for "schema.registry.url". My
>> current settings for StreamsConfig are the following ..
>>
>>   settings.put(StreamsConfig.APPLICATION_ID_CONFIG,
>> "kstream-log-processing-avro")
>>settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, config.brokers)
>>settings.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG,
>> Serdes.ByteArray.getClass.getName)
>>settings.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG,
>> classOf[SpecificAvroSerde[LogRecordAvro]])
>>
>> .. and the exception ..
>>
>> 23:49:34.054 TKD [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator -
>> User provided listener
>> org.apache.kafka.streams.processor.internals.StreamThread$1 for group
>> kstream-log-processing-avro failed on partition assignment
>> org.apache.kafka.streams.errors.StreamsException: Failed to configure
>> value
>> serde class com.lightbend.fdp.sample.kstream.serializers.
>> SpecificAvroSerde
>> at org.apache.kafka.streams.StreamsConfig.valueSerde(
>> StreamsConfig.java:594)
>> at
>> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<
>> init>(AbstractProcessorContext.java:58)
>> at
>> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(
>> ProcessorContextImpl.java:41)
>> at
>> org.apache.kafka.streams.processor.internals.
>> StreamTask.<init>(StreamTask.java:137)
>> at
>> org.apache.kafka.streams.processor.internals.
>> StreamThread.createStreamTask(StreamThread.java:864)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.
>> createTask(StreamThread.java:1237)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread$
>> AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(
>> StreamThread.java:967)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread.access$600(
>> StreamThread.java:69)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread$1.
>> onPartitionsAssigned(StreamThread.java:234)
>> at
>> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.
>> onJoinComplete(ConsumerCoordinator.java:259)
>> at
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.
>> joinGroupIfNeeded(AbstractCoordinator.java:352)
>> at
>> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.
>> ensureActiveGroup(AbstractCoordinator.java:303)
>> at
>> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(
>> ConsumerCoordinator.java:290)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.
>> pollOnce(KafkaConsumer.java:1029)
>> at
>> org.apache.kafka.clients.consumer.KafkaConsumer.poll(
>> KafkaConsumer.java:995)
>> at
>> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(
>> StreamThread.java:592)
>> at
>> org.apache.kafka.streams.processor.internals.
>> StreamThread.run(StreamThread.java:361)
>> Caused by: io.confluent.common.config.ConfigException: Missing required
>> configuration "schema.registry.url" which has no default value.
>> at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:241)
>> at io.confluent.common.config.AbstractConfig.<init>(
>> AbstractConfig.java:76)
>> at
>> io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(
>> AbstractKafkaAvroSerDeConfig.java:51)
>> at
>> io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(
>> KafkaAvroSerializerConfig.java:33)
>> at
>> io.confluent.kafka.ser

Re: Avro Serialization & Schema Registry ..

2017-07-17 Thread Richard L. Burton III
For your first question: no, you can use the Avro API directly.


On Mon, Jul 17, 2017 at 2:29 PM Debasish Ghosh <ghosh.debas...@gmail.com>
wrote:

> Hi -
>
> I am using Avro Serialization in a Kafka Streams application through the
> following dependency ..
>
> "io.confluent"  % "kafka-avro-serializer" % "3.2.2"
>
> My question is : Is schema registry mandatory for using Avro Serialization
> ? Because when I run the application I get the following exception where it
> complains that there is no default value for "schema.registry.url". My
> current settings for StreamsConfig are the following ..
>
>   settings.put(StreamsConfig.APPLICATION_ID_CONFIG,
> "kstream-log-processing-avro")
>settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, config.brokers)
>settings.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG,
> Serdes.ByteArray.getClass.getName)
>settings.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG,
> classOf[SpecificAvroSerde[LogRecordAvro]])
>
> .. and the exception ..
>
> 23:49:34.054 TKD [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator -
> User provided listener
> org.apache.kafka.streams.processor.internals.StreamThread$1 for group
> kstream-log-processing-avro failed on partition assignment
> org.apache.kafka.streams.errors.StreamsException: Failed to configure value
> serde class com.lightbend.fdp.sample.kstream.serializers.SpecificAvroSerde
> at
> org.apache.kafka.streams.StreamsConfig.valueSerde(StreamsConfig.java:594)
> at
>
> org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:58)
> at
>
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:41)
> at
>
> org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:137)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
> at
>
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
> at
>
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
> at
>
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
> at
>
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
> Caused by: io.confluent.common.config.ConfigException: Missing required
> configuration "schema.registry.url" which has no default value.
> at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:241)
> at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:76)
> at
>
> io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:51)
> at
>
> io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:33)
> at
>
> io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:49)
> at
>
> com.lightbend.fdp.sample.kstream.serializers.SpecificAvroSerializer.configure(SpecificAvroSerializer.scala:21)
> at
>
> com.lightbend.fdp.sample.kstream.serializers.SpecificAvroSerde.configure(SpecificAvroSerde.scala:18)
> at
> org.apache.kafka.streams.StreamsConfig.valueSerde(StreamsConfig.java:591)
> ... 17 common frames omitted
>
> regards.
>
> --
> Debasish Ghosh
> http://manning.com/ghosh2
> http://manning.com/ghosh
>
> Twttr: @debasishg
> Blog: http://debasishg.blogspot.com
> Code: http://github.com/debasishg
>


Avro Serialization & Schema Registry ..

2017-07-17 Thread Debasish Ghosh
Hi -

I am using Avro Serialization in a Kafka Streams application through the
following dependency ..

"io.confluent"  % "kafka-avro-serializer" % "3.2.2"

My question is : Is schema registry mandatory for using Avro Serialization
? Because when I run the application I get the following exception where it
complains that there is no default value for "schema.registry.url". My
current settings for StreamsConfig are the following ..

  settings.put(StreamsConfig.APPLICATION_ID_CONFIG,
"kstream-log-processing-avro")
   settings.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, config.brokers)
   settings.put(StreamsConfig.KEY_SERDE_CLASS_CONFIG,
Serdes.ByteArray.getClass.getName)
   settings.put(StreamsConfig.VALUE_SERDE_CLASS_CONFIG,
classOf[SpecificAvroSerde[LogRecordAvro]])

.. and the exception ..

23:49:34.054 TKD [StreamThread-1] ERROR o.a.k.c.c.i.ConsumerCoordinator -
User provided listener
org.apache.kafka.streams.processor.internals.StreamThread$1 for group
kstream-log-processing-avro failed on partition assignment
org.apache.kafka.streams.errors.StreamsException: Failed to configure value
serde class com.lightbend.fdp.sample.kstream.serializers.SpecificAvroSerde
at org.apache.kafka.streams.StreamsConfig.valueSerde(StreamsConfig.java:594)
at
org.apache.kafka.streams.processor.internals.AbstractProcessorContext.<init>(AbstractProcessorContext.java:58)
at
org.apache.kafka.streams.processor.internals.ProcessorContextImpl.<init>(ProcessorContextImpl.java:41)
at
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:137)
at
org.apache.kafka.streams.processor.internals.StreamThread.createStreamTask(StreamThread.java:864)
at
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:1237)
at
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.retryWithBackoff(StreamThread.java:1210)
at
org.apache.kafka.streams.processor.internals.StreamThread.addStreamTasks(StreamThread.java:967)
at
org.apache.kafka.streams.processor.internals.StreamThread.access$600(StreamThread.java:69)
at
org.apache.kafka.streams.processor.internals.StreamThread$1.onPartitionsAssigned(StreamThread.java:234)
at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:259)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:352)
at
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:303)
at
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:290)
at
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1029)
at
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
at
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:592)
at
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:361)
Caused by: io.confluent.common.config.ConfigException: Missing required
configuration "schema.registry.url" which has no default value.
at io.confluent.common.config.ConfigDef.parse(ConfigDef.java:241)
at io.confluent.common.config.AbstractConfig.<init>(AbstractConfig.java:76)
at
io.confluent.kafka.serializers.AbstractKafkaAvroSerDeConfig.<init>(AbstractKafkaAvroSerDeConfig.java:51)
at
io.confluent.kafka.serializers.KafkaAvroSerializerConfig.<init>(KafkaAvroSerializerConfig.java:33)
at
io.confluent.kafka.serializers.KafkaAvroSerializer.configure(KafkaAvroSerializer.java:49)
at
com.lightbend.fdp.sample.kstream.serializers.SpecificAvroSerializer.configure(SpecificAvroSerializer.scala:21)
at
com.lightbend.fdp.sample.kstream.serializers.SpecificAvroSerde.configure(SpecificAvroSerde.scala:18)
at org.apache.kafka.streams.StreamsConfig.valueSerde(StreamsConfig.java:591)
... 17 common frames omitted

regards.

-- 
Debasish Ghosh
http://manning.com/ghosh2
http://manning.com/ghosh

Twttr: @debasishg
Blog: http://debasishg.blogspot.com
Code: http://github.com/debasishg


Mirroring Schema registry using mirror maker

2017-06-20 Thread Manoj Murumkar
Hi,

I am trying to mirror _schemas topic (essentially schema registry) using
mirror maker, which is not working. It only replicates newly created
schemas even though auto.offset.reset is set to earliest. Any ideas?

Producer Config:

bootstrap.servers=:9092
group.id=mirrormaker
exclude.internal.topics=true
client.id=mirror_maker_consumer
# Default
#enable.auto.commit=true
# Read from the beginning of the topic
auto.offset.reset=earliest

Consumer Config:

bootstrap.servers=:9092
batch.size=100
client.id=mirror_maker_consumer
max.in.flight.requests.per.connection=1
retries=1000
acks=-1

Command that I am using:

kafka-mirror-maker --consumer.config source_cluster.properties
--producer.config target_cluster.properties --whitelist '\_schemas.*'

Thanks,

Manoj


Kafka Schema Registry JAAS file

2017-02-19 Thread Stephane Maarek
Hi

It'd be great to document what the JAAS file may look like at:
http://docs.confluent.io/3.1.2/schema-registry/docs/security.html

I need to ask for principals from my IT which takes a while, so is this a
correct JAAS?

KafkaClient{
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/kafka/keytabs/kafka-schema-registry.keytab"
principal="kafka-schema-regis...@example.com";
}

Client{
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
storeKey=true
keyTab="/etc/kafka/keytabs/myzkclient.keytab"
principal="myzkcli...@example.com";
}

My guess is that the Client section needs to be the exact same for
schema-registry and kafka brokers because they both manipulate the same
znodes?

Regarding the KafkaClient, that’s where I’m a little bit lost. Schema
registry will authenticate to Kafka using SASL 9095, but then does it need
any ACLs or permissions? Or am I missing something?
And where do I set the serviceName in the JAAS file?

Thanks
Stephane


Schema registry

2017-02-15 Thread 陈江枫
Hi, I'm new to Kafka, and I would like to use the schema registry to manage
the schema of my topic.

The schema I've created:

curl -X POST -i -H "Content-Type: application/vnd.schemaregistry.v1+json"
--data '{ "schema": "{\"type\": \"record\",\"name\": \"Customer\",
\"fields\": [ { \"type\": \"int\", \"name\": \"id\" }, { \"type\":
\"string\", \"name\": \"name\" } ] }" }'
http://localhost:8081/subjects/Customer/versions
HTTP/1.1 200 OK
Date: Wed, 15 Feb 2017 07:03:49 GMT
Content-Type: application/vnd.schemaregistry.v1+json
Content-Length: 9
Server: Jetty(9.2.12.v20150709)

{"id":21}
My Customer class:

public class Customer {

private int id;

private String name;

public Customer(int ID, String name) {
this.id = ID;
this.name = name;
}

//getters and setters
}


My main code is like the following:

for (int i = 0; i < total; i++) {
Customer customer = new Customer(i,"customer"+i);
producer.send(new ProducerRecord<String, Customer>("test1",
customer)).get();
}

I got the following error:

org.apache.kafka.common.errors.SerializationException: Error
serializing Avro message
Caused by: java.lang.IllegalArgumentException: Unsupported Avro type.
Supported types are null, Boolean, Integer, Long, Float, Double,
String, byte[] and IndexedRecord
at 
io.confluent.kafka.serializers.AbstractKafkaAvroSerDe.getSchema(AbstractKafkaAvroSerDe.java:115)


I thought I followed the example of the book 
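
(For anyone hitting the same error: the serializer's message lists IndexedRecord among the supported types, and a plain POJO like the Customer class above is not one. One way around it, sketched below, is to build a GenericRecord matching the registered schema; generating a SpecificRecord class from the .avsc would work as well. The snippet reuses the loop from the message above and assumes a producer typed for GenericRecord values with KafkaAvroSerializer as the value serializer.)

```
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.ProducerRecord;

// Build records that implement IndexedRecord instead of sending the raw POJO.
Schema schema = new Schema.Parser().parse(
    "{\"type\":\"record\",\"name\":\"Customer\",\"fields\":["
        + "{\"type\":\"int\",\"name\":\"id\"},"
        + "{\"type\":\"string\",\"name\":\"name\"}]}");

for (int i = 0; i < total; i++) {
    GenericRecord customer = new GenericData.Record(schema);
    customer.put("id", i);
    customer.put("name", "customer" + i);
    producer.send(new ProducerRecord<String, GenericRecord>("test1", customer)).get();
}
```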


Re: Schema Registry in Staging Environment

2016-09-27 Thread Ewen Cheslack-Postava
Lawrence,

There are two common ways to approach registration of schemas. The first is
to just rely on auto-registration that the serializers do (I'm assuming
you're using the Java clients & serializers here, or an equivalent
implementation in another language). In this case you can generally just
allow the registration to happen as the updated application hits each
stage. If it gets rejected in staging, it won't ever make it to prod. If
you discover an issue in staging, the most common case is that it isn't a
problem with the schema but rather with the surrounding code, in which case
a subsequent deploy will generally be able to succeed. Note that unrelated
changes can continue even if you end up not deploying the change to
production since they will not be affected by the newly registered schema.

The second way is to build the registration into your deployment pipeline,
so you may perform the registration before ever deploying an instance of
the app. This generally requires more coordination between your deployment
and apps (since deployment needs to know what schemas exist and how to
register them in the appropriate environment), but allows you to catch
errors a bit earlier (and may allow you to restrict writes to the schema
registry to the machines performing deployment, which some shops may want
to do).
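
(A minimal sketch of that second approach using the Java client; the subject name, registry URL and schema here are placeholders, not anything from the thread, and method names may differ slightly between client versions. register() returns the schema id and throws if the schema is rejected by the subject's compatibility setting, so a deployment step can fail fast.)

```
import org.apache.avro.Schema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

public class PreRegisterSchema {
    public static void main(String[] args) throws Exception {
        // Register the value schema for a topic before deploying the app that uses it.
        CachedSchemaRegistryClient client =
            new CachedSchemaRegistryClient("http://schema-registry.staging:8081", 100);
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"PageView\",\"fields\":"
                + "[{\"name\":\"url\",\"type\":\"string\"}]}");
        int id = client.register("pageviews-value", schema);
        System.out.println("Schema registered (or already present) with id " + id);
    }
}
```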

One of the goals of the schema registry is to help decouple developers
within your organization, so it is absolutely common for developers to
simply create new schemas. In fact, they may just build the entire process
into their app. For example, while not released yet, we have a maven plugin
that you can use to integrate interactions with the schema registry into
your Maven/Java application development process:
https://github.com/confluentinc/schema-registry/tree/master/maven-plugin

-Ewen

On Mon, Sep 26, 2016 at 1:33 PM, Lawrence Weikum <lwei...@pandora.com>
wrote:

> Hello,
>
> Has anyone used Confluent’s Schema Registry?  If so, I’m curious to hear
> about best practices for using it in a staging environment.
>
> Do users typically copy schemas over to the staging environment from
> production?  Are developers allowed to create new schemas in the staging
> environment?
>
> Thanks!
>
> Lawrence Weikum
>
>


-- 
Thanks,
Ewen


Schema Registry in Staging Environment

2016-09-26 Thread Lawrence Weikum
Hello,

Has anyone used Confluent’s Schema Registry?  If so, I’m curious to hear about 
best practices for using it in a staging environment.

Do users typically copy schemas over to the staging environment from 
production?  Are developers allowed to create new schemas in the staging 
environment?

Thanks!

Lawrence Weikum



Re: Kafka Connect HdfsSink and the Schema Registry

2016-06-19 Thread Ewen Cheslack-Postava
Great, glad you sorted it out. If the namespace is being omitted
incorrectly from the request the connector is making, please file a bug
report -- I can't think of a reason we'd omit that, but it's certainly
possible it is a bug on our side.

-Ewen

On Wed, Jun 15, 2016 at 7:08 AM, Tauzell, Dave <dave.tauz...@surescripts.com
> wrote:

> Thanks Ewan,
>
> The second request was made by me directly.  I'm trying to add this
> functionality into my .Net application.  The library I'm using doesn't have
> an implementation of the AvroSeriazlizer that interacts with the schema
> registry.  I've now added in code to make to POST to
> /subjects/-value with the schema.   Something I noticed is that I
> was using schema like this:
>
> {
>   "subject": "AuditHdfsTest5-value",
>   "version": 1,
>   "id": 5,
>   "schema":
> "{\"type\":\"record\",\"name\":\"GenericAuditRecord\",\"namespace\":\"audit\",\"fields\":[{\"name\":\"xml\",\"type\":[\"string\",\"null\"]}]}"
> }
>
> When the connector got a message and did a lookup it didn't have the
> "namespace" field and the lookup failed.  I then posted a new version of
> the schema without the "namespace" field and it worked.
>
> -Dave
>
> Dave Tauzell | Senior Software Engineer | Surescripts
> O: 651.855.3042 | www.surescripts.com |   dave.tauz...@surescripts.com
> Connect with us: Twitter I LinkedIn I Facebook I YouTube
>
>
> -Original Message-
> From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
> Sent: Tuesday, June 14, 2016 6:59 PM
> To: users@kafka.apache.org
> Subject: Re: Kafka Connect HdfsSink and the Schema Registry
>
> On Tue, Jun 14, 2016 at 8:08 AM, Tauzell, Dave <
> dave.tauz...@surescripts.com
> > wrote:
>
> > I have been able to get my C# client to put avro records to a Kafka
> > topic and have the HdfsSink read and save them in files.  I am
> > confused about interaction with the registry.  The kafka message
> > contains a schema id an I see the connector look that up in the
> > registry.  Then it also looks up a subject which is -value.
> >
> > What is the relationship between the passed schema id and the subject
> > which is derived from the topic name?
> >
>
> The HDFS connector doesn't work directly with the schema registry, the
> AvroConverter does. I'm not sure what the second request you're seeing is
> -- normally it would only look up the schema ID in order to get the schema.
> Where are you seeing the second request, and can you include some logs? I
> can't think of any other requests the AvroConverter would be making just
> for deserialization.
>
> The subject names are generating in the serializer as -key and
> -value and this is just the standardized approach Confluent's
> serializers use. The ID will have been registered under that subject.
>
> -Ewen
>
>
> >
> > -Dave
> >
> > This e-mail and any files transmitted with it are confidential, may
> > contain sensitive information, and are intended solely for the use of
> > the individual or entity to whom they are addressed. If you have
> > received this e-mail in error, please notify the sender by reply
> > e-mail immediately and destroy all copies of the e-mail and any
> attachments.
> >
>
>
>
> --
> Thanks,
> Ewen
> This e-mail and any files transmitted with it are confidential, may
> contain sensitive information, and are intended solely for the use of the
> individual or entity to whom they are addressed. If you have received this
> e-mail in error, please notify the sender by reply e-mail immediately and
> destroy all copies of the e-mail and any attachments.
>



-- 
Thanks,
Ewen


RE: Kafka Connect HdfsSink and the Schema Registry

2016-06-15 Thread Tauzell, Dave
Thanks Ewen,

The second request was made by me directly.  I'm trying to add this 
functionality into my .Net application.  The library I'm using doesn't have an 
implementation of the AvroSerializer that interacts with the schema registry.
I've now added code to make a POST to /subjects/<topic>-value with the
schema.   Something I noticed is that I was using a schema like this:

{
  "subject": "AuditHdfsTest5-value",
  "version": 1,
  "id": 5,
  "schema": 
"{\"type\":\"record\",\"name\":\"GenericAuditRecord\",\"namespace\":\"audit\",\"fields\":[{\"name\":\"xml\",\"type\":[\"string\",\"null\"]}]}"
}

When the connector got a message and did a lookup it didn't have the 
"namespace" field and the lookup failed.  I then posted a new version of the 
schema without the "namespace" field and it worked.

-Dave

Dave Tauzell | Senior Software Engineer | Surescripts
O: 651.855.3042 | www.surescripts.com |   dave.tauz...@surescripts.com
Connect with us: Twitter I LinkedIn I Facebook I YouTube


-Original Message-
From: Ewen Cheslack-Postava [mailto:e...@confluent.io]
Sent: Tuesday, June 14, 2016 6:59 PM
To: users@kafka.apache.org
Subject: Re: Kafka Connect HdfsSink and the Schema Registry

On Tue, Jun 14, 2016 at 8:08 AM, Tauzell, Dave <dave.tauz...@surescripts.com
> wrote:

> I have been able to get my C# client to put avro records to a Kafka
> topic and have the HdfsSink read and save them in files.  I am
> confused about interaction with the registry.  The kafka message
> contains a schema id an I see the connector look that up in the
> registry.  Then it also looks up a subject which is -value.
>
> What is the relationship between the passed schema id and the subject
> which is derived from the topic name?
>

The HDFS connector doesn't work directly with the schema registry, the 
AvroConverter does. I'm not sure what the second request you're seeing is
-- normally it would only look up the schema ID in order to get the schema.
Where are you seeing the second request, and can you include some logs? I can't 
think of any other requests the AvroConverter would be making just for 
deserialization.

The subject names are generating in the serializer as -key and 
-value and this is just the standardized approach Confluent's 
serializers use. The ID will have been registered under that subject.

-Ewen


>
> -Dave
>
> This e-mail and any files transmitted with it are confidential, may
> contain sensitive information, and are intended solely for the use of
> the individual or entity to whom they are addressed. If you have
> received this e-mail in error, please notify the sender by reply
> e-mail immediately and destroy all copies of the e-mail and any attachments.
>



--
Thanks,
Ewen
This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this e-mail in error, 
please notify the sender by reply e-mail immediately and destroy all copies of 
the e-mail and any attachments.


Re: Kafka Connect HdfsSink and the Schema Registry

2016-06-14 Thread Ewen Cheslack-Postava
On Tue, Jun 14, 2016 at 8:08 AM, Tauzell, Dave <dave.tauz...@surescripts.com
> wrote:

> I have been able to get my C# client to put avro records to a Kafka topic
> and have the HdfsSink read and save them in files.  I am confused about
> interaction with the registry.  The kafka message contains a schema id an I
> see the connector look that up in the registry.  Then it also looks up a
> subject which is -value.
>
> What is the relationship between the passed schema id and the subject
> which is derived from the topic name?
>

The HDFS connector doesn't work directly with the schema registry, the
AvroConverter does. I'm not sure what the second request you're seeing is
-- normally it would only look up the schema ID in order to get the schema.
Where are you seeing the second request, and can you include some logs? I
can't think of any other requests the AvroConverter would be making just
for deserialization.

The subject names are generated by the serializer as <topic>-key and
<topic>-value, and this is just the standardized approach Confluent's
serializers use. The ID will have been registered under that subject.
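
(For reference, the schema id the converter looks up is carried in the message bytes themselves. A sketch of pulling it back out, assuming the usual Confluent wire format of one magic byte followed by a 4-byte big-endian schema id; the registry URL is a placeholder and client method names vary a little across versions.)

```
import java.nio.ByteBuffer;
import org.apache.avro.Schema;
import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;

public class SchemaIdPeek {
    // Extract the schema id embedded by the Confluent serializer and resolve it.
    public static Schema schemaFor(byte[] message, String registryUrl) throws Exception {
        ByteBuffer buf = ByteBuffer.wrap(message);
        if (buf.get() != 0) {
            throw new IllegalArgumentException("Not Confluent-serialized data");
        }
        int schemaId = buf.getInt();
        return new CachedSchemaRegistryClient(registryUrl, 100).getByID(schemaId);
    }
}
```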

-Ewen


>
> -Dave
>
> This e-mail and any files transmitted with it are confidential, may
> contain sensitive information, and are intended solely for the use of the
> individual or entity to whom they are addressed. If you have received this
> e-mail in error, please notify the sender by reply e-mail immediately and
> destroy all copies of the e-mail and any attachments.
>



-- 
Thanks,
Ewen


Kafka Connect HdfsSink and the Schema Registry

2016-06-14 Thread Tauzell, Dave
I have been able to get my C# client to put avro records to a Kafka topic and 
have the HdfsSink read and save them in files.  I am confused about interaction 
with the registry.  The Kafka message contains a schema id and I see the
connector look that up in the registry.  Then it also looks up a subject which
is <topic>-value.

What is the relationship between the passed schema id and the subject which is 
derived from the topic name?

-Dave

This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this e-mail in error, 
please notify the sender by reply e-mail immediately and destroy all copies of 
the e-mail and any attachments.


RE: Schema Registry Exception

2016-06-13 Thread Tauzell, Dave
I figured it out.  The value of the "schema" field is just a string so I needed to
escape my JSON:

{
"schema":"{\"type\": \"record\",\"name\": \"GenericAuditRecord\", 
\"fields\": [{ \"name\": \"xml\",\"type\": [\"string\", \"null\"] }]}"
}
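
(In case it helps the next person: since the "schema" field is just a string containing JSON, letting a JSON library do the escaping avoids hand-editing backslashes. A small sketch with Jackson, reusing the schema above:)

```
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ObjectNode;

public class BuildRegisterRequest {
    public static void main(String[] args) throws Exception {
        String avroSchema =
            "{\"type\":\"record\",\"name\":\"GenericAuditRecord\","
                + "\"fields\":[{\"name\":\"xml\",\"type\":[\"string\",\"null\"]}]}";
        ObjectMapper mapper = new ObjectMapper();
        ObjectNode request = mapper.createObjectNode();
        // Jackson escapes the nested schema JSON when serializing the string field.
        request.put("schema", avroSchema);
        System.out.println(mapper.writeValueAsString(request));
    }
}
```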

-Dave


-Original Message-
From: Dustin Cote [mailto:dus...@confluent.io]
Sent: Monday, June 13, 2016 11:36 AM
To: users@kafka.apache.org
Subject: Re: Schema Registry Exception

Hi Dave,

This looks like invalid JSON.  I see an extra comma in the fields stanza.
Maybe you can try with this (below)?

{
"schema": {
"type": "record",
"name": "GenericAuditRecord",
"fields": [{
"name": "xml",
"type": ["string", "null"]
}]
}
}



On Mon, Jun 13, 2016 at 8:38 AM, Tauzell, Dave <dave.tauz...@surescripts.com
> wrote:

> I'm getting the following exception when trying to register a schema:
>
>
> [2016-06-13 11:37:14,531] INFO 172.23.147.101 - -
> [13/Jun/2016:11:37:14 -0400] "POST
> /subjects/GenericAuditRecord/versions HTTP/1.1" 500 52  7
> (io.confluent.rest-utils.requests:77)
> [2016-06-13 11:38:31,292] ERROR Unhandled exception resulting in
> internal server error response
> (io.confluent.rest.exceptions.GenericExceptionMapper:37)
> com.fasterxml.jackson.databind.JsonMappingException: Can not
> deserialize instance of java.lang.String out of START_OBJECT token at
> [Source:
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnClos
> eableInputStream@726bacc9;
> line: 2, column: 5] (through reference chain:
> io.confluent.kafka.schemaregistry.client.rest.entities.requests.Regist
> erSchemaRequest["schema"])
>
> I am posting this:
>
> {
> "schema": {
> "type": "record",
> "name": "GenericAuditRecord",
> "fields": [
> {"name":
> "xml", "type": ["string", "null"]},
>  ]
> }
> }
>
> Any ideas as to what I am doing wrong?
>
> -Dave
>
> This e-mail and any files transmitted with it are confidential, may
> contain sensitive information, and are intended solely for the use of
> the individual or entity to whom they are addressed. If you have
> received this e-mail in error, please notify the sender by reply
> e-mail immediately and destroy all copies of the e-mail and any attachments.
>



--
Dustin Cote
confluent.io
This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this e-mail in error, 
please notify the sender by reply e-mail immediately and destroy all copies of 
the e-mail and any attachments.


Re: Schema Registry Exception

2016-06-13 Thread Dustin Cote
Hi Dave,

This looks like invalid JSON.  I see an extra comma in the fields stanza.
Maybe you can try with this (below)?

{
"schema": {
"type": "record",
"name": "GenericAuditRecord",
"fields": [{
"name": "xml",
"type": ["string", "null"]
}]
}
}



On Mon, Jun 13, 2016 at 8:38 AM, Tauzell, Dave  wrote:

> I'm getting the following exception when trying to register a schema:
>
>
> [2016-06-13 11:37:14,531] INFO 172.23.147.101 - - [13/Jun/2016:11:37:14
> -0400] "POST /subjects/GenericAuditRecord/versions HTTP/1.1" 500 52  7
> (io.confluent.rest-utils.requests:77)
> [2016-06-13 11:38:31,292] ERROR Unhandled exception resulting in internal
> server error response
> (io.confluent.rest.exceptions.GenericExceptionMapper:37)
> com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize
> instance of java.lang.String out of START_OBJECT token
> at [Source:
> org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@726bacc9;
> line: 2, column: 5] (through reference chain:
> io.confluent.kafka.schemaregistry.client.rest.entities.requests.RegisterSchemaRequest["schema"])
>
> I am posting this:
>
> {
> "schema": {
> "type": "record",
> "name": "GenericAuditRecord",
> "fields": [
> {"name":
> "xml", "type": ["string", "null"]},
>  ]
> }
> }
>
> Any ideas as to what I am doing wrong?
>
> -Dave
>
> This e-mail and any files transmitted with it are confidential, may
> contain sensitive information, and are intended solely for the use of the
> individual or entity to whom they are addressed. If you have received this
> e-mail in error, please notify the sender by reply e-mail immediately and
> destroy all copies of the e-mail and any attachments.
>



-- 
Dustin Cote
confluent.io


Schema Registry Exception

2016-06-13 Thread Tauzell, Dave
I'm getting the following exception when trying to register a schema:


[2016-06-13 11:37:14,531] INFO 172.23.147.101 - - [13/Jun/2016:11:37:14 -0400] 
"POST /subjects/GenericAuditRecord/versions HTTP/1.1" 500 52  7 
(io.confluent.rest-utils.requests:77)
[2016-06-13 11:38:31,292] ERROR Unhandled exception resulting in internal 
server error response (io.confluent.rest.exceptions.GenericExceptionMapper:37)
com.fasterxml.jackson.databind.JsonMappingException: Can not deserialize 
instance of java.lang.String out of START_OBJECT token
at [Source: 
org.glassfish.jersey.message.internal.ReaderInterceptorExecutor$UnCloseableInputStream@726bacc9;
 line: 2, column: 5] (through reference chain: 
io.confluent.kafka.schemaregistry.client.rest.entities.requests.RegisterSchemaRequest["schema"])

I am posting this:

{
"schema": {
"type": "record",
"name": "GenericAuditRecord",
"fields": [
{"name": "xml", 
"type": ["string", "null"]},
 ]
}
}

Any ideas as to what I am doing wrong?

-Dave

This e-mail and any files transmitted with it are confidential, may contain 
sensitive information, and are intended solely for the use of the individual or 
entity to whom they are addressed. If you have received this e-mail in error, 
please notify the sender by reply e-mail immediately and destroy all copies of 
the e-mail and any attachments.


Re: Schema registry question

2016-06-04 Thread Ewen Cheslack-Postava
In any case, to answer the question, I think there's just an omission in
the docs. Getting by subject + version (GET
/subjects/{subject}/versions/{version} -
http://docs.confluent.io/3.0.0/schema-registry/docs/api.html#get--subjects-%28string-%20subject%29-versions-%28versionId-%20version%29)
also includes the id field to get the unique ID. The endpoint POST
/subjects/{subject} is also correctly documented as returning the id, but of
course this also requires the full schema.

But also, yes, the expectation on the producer side is that it has the
schema it is going to generate data with. It simply passes the data using
that schema to the producer and the serializer transparently
registers/validates the schema and uses the resulting ID to efficiently
include the reference to the schema in the serialized data.

-Ewen

On Mon, May 23, 2016 at 9:13 AM, Avi Flax <avi.f...@parkassist.com> wrote:

> Mike, the schema registry you’re using is a part of Confluent’s platform,
> which builds upon but is not affiliated with Kafka itself. You might want
> to post your question to Confluent’s mailing list:
>
> https://groups.google.com/d/forum/confluent-platform
>
> HTH,
> Avi
>
> On 5/20/16, 21:40, "Mike Thomsen" <mikerthom...@gmail.com> wrote:
>
> >The only method I can seem to find that returns a globally unique ID for a
> >schema is POST /subjects/{subject}/versions
> >
> >Is that true or am I missing something?
> >
> >Was it designed that way because the expectation is that if a producer
> dies
> >it'll call that service again to ensure that the schema topic hasn't
> >rotated?
> >
> >Thanks,
> >
> >Mike
>
>


-- 
Thanks,
Ewen


Re: Schema registry question

2016-05-23 Thread Avi Flax
Mike, the schema registry you’re using is a part of Confluent’s platform, which 
builds upon but is not affiliated with Kafka itself. You might want to post 
your question to Confluent’s mailing list:

https://groups.google.com/d/forum/confluent-platform

HTH,
Avi

On 5/20/16, 21:40, "Mike Thomsen" <mikerthom...@gmail.com> wrote:

>The only method I can seem to find that returns a globally unique ID for a
>schema is POST /subjects/{subject}/versions
>
>Is that true or am I missing something?
>
>Was it designed that way because the expectation is that if a producer dies
>it'll call that service again to ensure that the schema topic hasn't
>rotated?
>
>Thanks,
>
>Mike



Schema registry question

2016-05-20 Thread Mike Thomsen
The only method I can seem to find that returns a globally unique ID for a
schema is POST /subjects/{subject}/versions

Is that true or am I missing something?

Was it designed that way because the expectation is that if a producer dies
it'll call that service again to ensure that the schema topic hasn't
rotated?

Thanks,

Mike


500 ms delay using new consumer and schema registry.

2015-11-28 Thread Gerard Klijs
Hi all,
I'm running a little test, with zookeeper, Kafka and the schema
registry all running locally. I'm using the new consumer, and the 2.0.0-snapshot
version of the registry, which has a decoder giving back instances of the
schema object.

It's all working fine, but I see a consistent delay of at most around 500 ms.
I'm just wondering if anyone knows what might be the cause. The delay is
measured from creating the record to receiving the object.

For anyone who wants to try the same thing: I ran into some problems until I
generated the Java class from a schema using Avro, instead of using Avro to
generate the schema from a class. That could just have been caused by the
default constructor not being available in the Java class I used.