Re: custom value.deserializer for storm-kafka-client-1.1.1?

2017-09-25 Thread Manish Sharma
Thanks Stig,
So I tried using

>> .setValue(EmailObjectDeserializer.class)

and the EmailObjectDeserializer class is implementing the interface
org.apache.storm.kafka.spout.SerializableDeserializer

>> public class EmailObjectDeserializer implements SerializableDeserializer {...}


I see the following compilation error..



[ERROR] COMPILATION ERROR :
[INFO] -------------------------------------------------------------
[ERROR] /xxx/comms/topology/SmtpInjectionTopology.java:[72,18] no suitable
method found for setValue(java.lang.Class<EmailObjectDeserializer>)
method
org.apache.storm.kafka.spout.KafkaSpoutConfig.Builder.setValue(org.apache.storm.kafka.spout.SerializableDeserializer<NV>)
is not applicable
  (cannot infer type-variable(s) NV
(argument mismatch; java.lang.Class<EmailObjectDeserializer> cannot be
converted to org.apache.storm.kafka.spout.SerializableDeserializer<NV>))
method
org.apache.storm.kafka.spout.KafkaSpoutConfig.Builder.setValue(java.lang.Class<?
extends org.apache.kafka.common.serialization.Deserializer<NV>>) is not
applicable
  (cannot infer type-variable(s) NV
(argument mismatch; java.lang.Class<EmailObjectDeserializer> cannot be
converted to java.lang.Class<? extends
org.apache.kafka.common.serialization.Deserializer<NV>>))


I tried implementing the org.apache.kafka.common.serialization.Deserializer
interface directly too and got the same error..
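
For reference, the end state I'm aiming for is roughly this (just a sketch;
the EmailObject payload type and the Jackson mapping are placeholders, not
the real code):

--snip--
import java.io.IOException;
import java.util.Map;

import org.apache.storm.kafka.spout.SerializableDeserializer;

import com.fasterxml.jackson.databind.ObjectMapper;

public class EmailObjectDeserializer implements SerializableDeserializer<EmailObject> {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    @Override
    public void configure(Map<String, ?> configs, boolean isKey) {
        // no per-instance configuration needed
    }

    @Override
    public EmailObject deserialize(String topic, byte[] data) {
        if (data == null) {
            return null;
        }
        try {
            return MAPPER.readValue(data, EmailObject.class);
        } catch (IOException e) {
            throw new RuntimeException("Could not deserialize EmailObject from topic " + topic, e);
        }
    }

    @Override
    public void close() {
        // nothing to release
    }
}
--snip--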

Thanks for your help. /Manish





On Sun, Sep 24, 2017 at 6:46 AM, Stig Rohde Døssing  wrote:

> Hi Manish,
>
> The setProp method will not work for setting deserializers until Storm
> 1.2.0. For 1.1.1 you will need to use setKey/setValue to set a different
> deserializer.
>
> e.g.
> KafkaSpoutConfig kafkaSpoutConfig = KafkaSpoutConfig
> .builder(property.getKafka_consumer_bootstrap_servers(), topics)
> .setValue(TestDeserializer.class)
> .build();
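>
> If the Class-based overload gives you trouble with type inference, the 1.1.1
> builder also has an overload that takes a deserializer instance; a sketch,
> assuming TestDeserializer implements SerializableDeserializer<String>:
>
> KafkaSpoutConfig<String, String> kafkaSpoutConfig = KafkaSpoutConfig
> .builder(property.getKafka_consumer_bootstrap_servers(), topics)
> .setValue(new TestDeserializer())
> .build();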
>
> Also when you upgrade to 1.2.0 please note that you can either do
> .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
> EmailObjectDeserializer.class)
>
> or
>
> .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "com.example.EmailObjectDeserializer")
>
> that is, you need to use the fully qualified class name of the
> deserializer class if you're setting it as a string.
>
> 2017-09-24 1:38 GMT+02:00 Manish Sharma :
>
>> Hello,
>> I am trying to use a custom ValueDeserializer when consuming from kafka,
>> I tried the following
>>
>>
>> --snip--
>> KafkaSpoutConfig kafkaSpoutConfig = KafkaSpoutConfig
>> .builder(property.getKafka_consumer_bootstrap_servers(), topics)
>> .setFirstPollOffsetStrategy(KafkaSpoutConfig.FirstPollOffsetStrategy.EARLIEST)
>> .setGroupId(property.getKafka_consumer_groupid())
>> .setProp(ConsumerConfig.CLIENT_ID_CONFIG, "StormKafkaConsumer")
>> .setProp(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
>> "EmailObjectDeserializer")
>> .build();
>> --snip--
>>
>>
>> It didn't take; in the logs I still see the spout executor instantiated with
>> the default "StringDeserializer" class.
>>
>>
>> --snip--
>> 6348 [Thread-18-SMTPInjectionKafkaSpout-executor[2 2]] INFO
>> o.a.k.c.c.ConsumerConfig - ConsumerConfig values:
>> auto.commit.interval.ms = 5000
>> auto.offset.reset = latest
>> bootstrap.servers = [..:9092]
>> check.crcs = true
>> client.id = StormKafkaConsumer
>> connections.max.idle.ms = 54
>> enable.auto.commit = false
>> exclude.internal.topics = true
>> fetch.max.bytes = 52428800
>> fetch.max.wait.ms = 500
>> fetch.min.bytes = 1
>> group.id = dev_worker
>> heartbeat.interval.ms = 3000
>> interceptor.classes = null
>> key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
>> max.partition.fetch.bytes = 1048576
>> max.poll.interval.ms = 30
>> max.poll.records = 100
>> metadata.max.age.ms = 30
>> metric.reporters = []
>> metrics.num.samples = 2
>> metrics.recording.level = INFO
>> metrics.sample.window.ms = 3
>> partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
>> receive.buffer.bytes = 65536
>> reconnect.backoff.ms = 50
>> request.timeout.ms = 305000
>> retry.backoff.ms = 100
>> sasl.jaas.config = null
>> sasl.kerberos.kinit.cmd = /usr/bin/kinit
>> sasl.kerberos.min.time.before.relogin = 6
>> sasl.kerberos.service.name = null
>> sasl.kerberos.ticket.renew.jitter = 0.05
>> sasl.kerberos.ticket.renew.window.factor = 0.8
>> sasl.mechanism = GSSAPI
>> security.protocol = PLAINTEXT
>> send.buffer.bytes = 131072
>> session.timeout.ms = 1
>> ssl.cipher.suites = null
>> ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
>> ssl.endpoint.identification.algorithm = null
>> ssl.key.password = null
>> ssl.keymanager.algorithm = SunX509
>> ssl.keystore.location = null
>> ssl.keystore.password = null
>> ssl.keystore.type = JKS
>> ssl.protocol = TLS
>> ssl.provider = null
>> ssl.secure.random.implementation = null
>> ssl.trustmanager.algorithm = PKIX
>> ssl.truststore.location = null
>> ssl.truststore.password = null
>> ssl.truststore.type = JKS
>> value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
>> <---
>> --snip--
>>
>>
>> Any thoughts on how to get the custom value deserializer to take effect?

Difficulties getting log viewer to work in 1.1.1

2017-09-25 Thread Stephen Powis
Hey!  I've been struggling to get logviewer to work in Storm 1.1.1 and was
wondering if anyone could lend a hand w/ the required configuration and help me
understand where I've gone wrong.

First of all, I have the logviewer daemon running on every supervisor host.

Second, I've modified the worker.xml A1 appender as follows:


> <RollingFile name="A1"
>     fileName="/static/path/log/storm/topologies/${sys:logfile.prefix}-${sys:worker.port}.log"
>     filePattern="/static/path/log/storm/topologies/archived/${sys:logfile.prefix}-${sys:worker.port}.%i.gz">
>     <PatternLayout>
>         <pattern>${pattern}</pattern>
>     </PatternLayout>
>     ...
> </RollingFile>
>

Deployed with each topology I have:
topology.worker.childopts  "-Dlogfile.prefix=topology-name-here"
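
For concreteness, this is roughly how I pass that option at topology submission
time (just a sketch of a standard Config/StormSubmitter setup; the class name
and topology name are placeholders):

import org.apache.storm.Config;
import org.apache.storm.StormSubmitter;
import org.apache.storm.topology.TopologyBuilder;

public class SubmitWithLogPrefix {
    public static void main(String[] args) throws Exception {
        Config conf = new Config();
        // per-topology worker JVM opts; -Dlogfile.prefix feeds ${sys:logfile.prefix} in worker.xml
        conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Dlogfile.prefix=topology-name-here");

        TopologyBuilder builder = new TopologyBuilder();
        // ... spouts and bolts go here ...

        StormSubmitter.submitTopology("topology-name-here", conf, builder.createTopology());
    }
}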

It seems as though logviewer / Storm UI is unable to locate my log files.  I'm
guessing this is because of how I have the log appender set up?  Is it possible
to have logviewer locate my log files w/o reverting to the standard logging
path?  Am I missing some configuration option somewhere?

Using the standard log layout of
"${sys:workers.artifacts}/${sys:storm.id}/${sys:worker.port}/${sys:logfile.name}"
is difficult for us because of how we aggregate our logs and because we need to
ensure that old logs from previous deploys are cleaned up appropriately.

Thanks!
Stephen