Hey Raja,
By default, the Kafka input operator and the conversion operator will run
in different containers. You can set the stream locality to THREAD_LOCAL or
CONTAINER_LOCAL. The input operator is I/O intensive and your conversion
operator could be CPU intensive, correct me if I am wrong.
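Something like this sketch in populateDAG() (I'm assuming the Malhar
KafkaSinglePortStringInputOperator here; MyConversionOperator and its input
port are just placeholders for your operator):

import org.apache.hadoop.conf.Configuration;

import com.datatorrent.api.DAG;
import com.datatorrent.api.DAG.Locality;
import com.datatorrent.api.StreamingApplication;
import com.datatorrent.contrib.kafka.KafkaSinglePortStringInputOperator;

public class KafkaConversionApp implements StreamingApplication
{
  @Override
  public void populateDAG(DAG dag, Configuration conf)
  {
    KafkaSinglePortStringInputOperator input =
        dag.addOperator("kafkaInput", new KafkaSinglePortStringInputOperator());
    MyConversionOperator convert =
        dag.addOperator("convert", new MyConversionOperator()); // placeholder operator

    // CONTAINER_LOCAL deploys both operators in one container (separate threads);
    // THREAD_LOCAL runs them in the same operator thread.
    dag.addStream("kafkaMessages", input.outputPort, convert.input)
        .setLocality(Locality.CONTAINER_LOCAL);
  }
}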
Ano
Hey Raja,
For 0.8, you have to implement the OffsetManager interface on your own.
updateOffsets() will be called in the application master every time it gets
updated offsets from each physical partition. The offsets that you see in
that method are committed offsets, so you can safely save these offsets.
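A rough sketch of what an implementation could look like, assuming the
OffsetManager interface from the Malhar contrib Kafka package
(loadInitialOffsets()/updateOffsets()); the in-memory map here just stands in
for whatever store (HDFS, ZooKeeper, a DB) you pick:

import java.util.HashMap;
import java.util.Map;

import com.datatorrent.contrib.kafka.KafkaPartition;
import com.datatorrent.contrib.kafka.OffsetManager;

public class SimpleOffsetManager implements OffsetManager
{
  private final Map<KafkaPartition, Long> committed = new HashMap<>();

  @Override
  public Map<KafkaPartition, Long> loadInitialOffsets()
  {
    // Called when the operator (re)starts: return the offsets to resume from.
    return new HashMap<>(committed);
  }

  @Override
  public void updateOffsets(Map<KafkaPartition, Long> offsetsOfPartitions)
  {
    // Called in the application master with the committed offsets of each
    // physical partition; persist them wherever you like.
    committed.putAll(offsetsOfPartitions);
  }
}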
> So Apex is not storing offset status at any location?
> Like how Storm stores them in ZooKeeper?
>
>
> Regards,
> Raja.
>
> From: "hsy...@gmail.com"
> Reply-To: "users@apex.apache.org"
> Date: Monday, June 6, 2016 at 6:42 PM
>
> To: "u
l] ?
>
> Will the application latency greatly decrease if I use HDFS for storage?
>
> Thank you very much.
>
> Regards,
> Raja.
>
> From: "hsy...@gmail.com"
> Reply-To: "users@apex.apache.org"
> Date: Monday, June 6, 2016 at 7:13 PM
>
> To: "users@apex.apache.org"
Hi Ananth,
Unlike files, Kafka is usually for streaming cases. Correct me if I'm
wrong, but your use case seems like batch processing. We didn't consider an
end offset in our Kafka input operator design, but it could be a useful
feature. Unfortunately there is no easy way, as far as I know, to extend the
existing operator.
Hey guys,
I have a problem using the JDBC output operator. It is the same as
http://stackoverflow.com/questions/37887390/datatorrent-jdbcoperator-not-workiing
After troubleshooting, I found there might be a validation issue here:
the pre-run validation for the DAG won't work in some special cases.
Here is an
I've never used SBT to build an Apex application, but I guess you can try two
things here:
use the sbt maven plugin
https://github.com/shivawu/sbt-maven-plugin
or use the sbt assembly plugin
https://github.com/sbt/sbt-assembly
In the second way, you need to translate the plugin configuration part in
pom.xml to
+1
On Tue, Jul 12, 2016 at 11:53 AM, David Yan wrote:
> Hi all,
>
> I would like to renew the discussion of retiring operators in Malhar.
>
> As stated before, the reason we would like to retire operators in
> Malhar is that some of them were written a long time ago, before Apache
> incubation
wrote:
>
> My vote is to do 2&3
>
> Thks
> Amol
>
>
> On Tue, Jul 12, 2016 at 12:14 PM, Kottapalli, Venkatesh <
> vkottapa...@directv.com> wrote:
>
>> +1 for deprecating the packages listed below.
>>
>> -Original Message-
>> From:
Akshay and Ankit,
Scala support is definitely on our roadmap.
Akshay,
The exception you see might be related to a Scala function that is compiled
to a Java anonymous class. There are two possible workarounds (I haven't tried
them, but they may work).
1. You can try moving your functional expression
BTW Akshay, if you are using an anonymous function as a field in the operator,
it's very likely that your function is stateless. If that's the case, you
can try marking it @transient (
http://www.scala-lang.org/api/rc2/scala/transient.html)
On Thu, Jul 14, 2016 at 11:06 PM, Akshay S Harale <
akshay.
Hey Michael,
You can throw a ShutdownException.
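For example, a minimal sketch (the operator name and the file check are made
up for illustration):

import com.datatorrent.api.DefaultInputPort;
import com.datatorrent.api.ShutdownException;
import com.datatorrent.common.util.BaseOperator;

public class FileGuardOperator extends BaseOperator
{
  public final transient DefaultInputPort<Object> input = new DefaultInputPort<Object>()
  {
    @Override
    public void process(Object tuple)
    {
      if (!new java.io.File("/tmp/required.marker").exists()) { // illustrative check
        // Propagates to the engine and shuts the application down.
        throw new ShutdownException();
      }
      // ... normal processing ...
    }
  };
}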
Siyuan
On Mon, Jul 18, 2016 at 10:06 AM, Silver, Michael <
michael.sil...@capitalone.com> wrote:
>
>
>
>
> Hello,
>
>
>
> I am looking for a solution to force shutdown or fail my application. I
> have an operator that checks that a file (which is n
Hey Vlad,
Thanks for bringing this up. Is there an easy way to detect unexpected use
of the emit method without hurting performance? Or at least, can we detect
this in debug mode?
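Just to make the idea concrete, here is a rough sketch of what such a check
could look like on the user side (this is not an existing Apex/Malhar feature,
and it assumes setup() runs on the same thread that later calls emit()):

import com.datatorrent.api.Context.OperatorContext;
import com.datatorrent.api.DefaultOutputPort;
import com.datatorrent.common.util.BaseOperator;

public abstract class ThreadCheckedOperator extends BaseOperator
{
  private transient Thread operatorThread;

  @Override
  public void setup(OperatorContext context)
  {
    // Assumption: the engine calls setup() on the operator's execution thread.
    operatorThread = Thread.currentThread();
  }

  protected <T> void checkedEmit(DefaultOutputPort<T> port, T tuple)
  {
    // Only active with -ea, so there is no cost in normal runs.
    assert Thread.currentThread() == operatorThread
        : "emit() called from non-operator thread " + Thread.currentThread().getName();
    port.emit(tuple);
  }
}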
Regards,
Siyuan
On Wed, Aug 10, 2016 at 11:27 AM, Vlad Rozov
wrote:
> The short answer is no, creating worker threa
Hey McCullough,
Which Malhar version are you using?
Regards,
Siyuan
On Wed, Aug 24, 2016 at 9:07 AM, McCullough, Alex <
alex.mccullo...@capitalone.com> wrote:
> Hey All,
>
>
>
> We are using the 0.8.1 kafka operator and the ZK connection string has a
> chroot on it. We get errors when launching and
Hey Alex,
Do you use the ONE_TO_ONE or ONE_TO_MANY partition strategy?
Regards,
Siyuan
On Wed, Aug 24, 2016 at 10:27 AM, McCullough, Alex <
alex.mccullo...@capitalone.com> wrote:
> Hey Siyuan,
>
>
>
> We are using 3.4.0
>
>
>
> Thanks,
>
> Alex
>
> *Fr
On Wed, Aug 24, 2016 at 10:40 AM, McCullough, Alex <
alex.mccullo...@capitalone.com> wrote:
> ONE_TO_ONE
>
>
>
>
>
>
>
> *From: *"hsy...@gmail.com"
> *Reply-To: *"users@apex.apache.org"
> *Date: *Wednesday, August 24, 2016 at 1:38 PM
>
>
Hey Alex,
Does the workaround work? I just want to follow up to see whether my
hypothesis about the root cause is correct. Thanks!
Regards,
Siyuan
On Wed, Aug 24, 2016 at 10:56 AM, hsy...@gmail.com wrote:
> Hey Alex,
>
> Yeah, I think there is a bug for multitenant kafka support in the code.
server to connect to?
>
>
>
> Thanks,
>
> Alex
>
>
>
> *From: *"hsy...@gmail.com"
> *Reply-To: *"users@apex.apache.org"
> *Date: *Thursday, August 25, 2016 at 12:57 PM
>
> *To: *"users@apex.apache.org"
> *Subject: *Re: Malha
Is your POJO a shared object? I think you need to create a new POJO every time.
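Something like this rough sketch (MyPojo, its setter and the port name are
hypothetical):

import com.datatorrent.api.DefaultOutputPort;

public class ArrayEmitter
{
  public final transient DefaultOutputPort<MyPojo> output = new DefaultOutputPort<>();

  void emitAll(int[] values)
  {
    for (int value : values) {
      MyPojo pojo = new MyPojo(); // a fresh instance per tuple, instead of
      pojo.setValue(value);       // mutating and re-emitting one shared object,
      output.emit(pojo);          // which makes downstream see only the last value
    }
  }
}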
Regards,
Siyuan
On Thu, Sep 29, 2016 at 3:03 PM, Jaikit Jilka wrote:
> Hello,
>
> I am trying to emit values from an array. I am emitting using a for loop.
> The number of records emitted is correct, but it is emitting onl
I think the problem is what people expect when we say "certified". To me,
if I see something is certified with Java 8, I would assume that I can use
the Java 8 API (new features: streams, lambdas, etc.) to write the operator code,
not just run the code with JRE 8 or compile existing code with JDK 8
an
noverlord.com/2014/04/01/jdk-compatibility.html
>
> Ram
>
>
> On Wed, Oct 5, 2016 at 10:22 AM, Thomas Weise wrote:
>
>> Source level can be 1.8, which allows you to use 1.8 features. Did you
>> keep target level at 1.7?
>>
>>
>> On Wed, Oct 5, 2016 at
Hey Jaspal,
Did you add any code to the existing KafkaSinglePortExactlyOnceOutputOperator
from Malhar? If so, please make sure the producer you use here
is org.apache.kafka.clients.producer.KafkaProducer instead of
kafka.javaapi.producer.Producer. That is the old API and it is not supported
by MapR Streams.
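For reference, a minimal sketch of the new-API producer (broker address,
topic, keys/values and serializers are placeholders):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class NewApiProducerExample
{
  public static void main(String[] args)
  {
    Properties props = new Properties();
    props.put("bootstrap.servers", "broker1:9092"); // placeholder broker list
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    // org.apache.kafka.clients.producer.KafkaProducer, not kafka.javaapi.producer.Producer
    try (KafkaProducer<String, String> producer = new KafkaProducer<String, String>(props)) {
      producer.send(new ProducerRecord<String, String>("my-topic", "key", "value"));
      producer.flush();
    }
  }
}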
.* or 0.9
Regards,
Siyuan
On Fri, Oct 7, 2016 at 10:38 AM, hsy...@gmail.com wrote:
> Hey Jaspal,
>
> Did you add any code to existing KafkaSinglePortExactlyOnceOutputOperator from
> malhar? If so please make sure the producer you use here
> is org.apache.kafka.clients.produce
>>>> On Fri, Oct 7, 2016 at 11:01 AM, Jaspal Singh <
>>>> jaspal.singh1...@gmail.com> wrote:
>>>>
>>>>> Hi Siyuan,
>>>>>
>>>>> I am using the same Kafka producer as you mentioned. But I am n
Jaspal,
I think you missed the kafkaOut :)
Regards,
Siyuan
On Fri, Oct 7, 2016 at 11:32 AM, Jaspal Singh
wrote:
> Siyuan,
>
> That's how we have given it in properties file:
>
> [image: Inline image 1]
>
>
> Thanks!!
>
> On Fri, Oct 7, 2016 at 1:27 PM, h
e-stream:error",
> tenant.getVolumeName(), gson.toJson(tenant)));
>
> }
> producer.flush();
> }
> }
>
>
>
> Thanks!!
>
> On Fri, Oct 7, 2016 at 1:34 PM, hsy...@gmail.com wrote:
>
>>
Hey Raja,
The setup for a secure Kafka input operator is not easy. You can follow these
steps (a client-side configuration sketch follows after them):
1. Follow the Kafka documentation to set up your brokers properly (
https://kafka.apache.org/090/documentation.html#security_overview)
2. You have to manually create the client JAAS file (
https://kafka.apache.org/0
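Here is that sketch: roughly the consumer properties a Kerberos/SASL 0.9
client needs. How you hand them to the Kafka input operator depends on the
operator version (for example through its consumer properties), and the JAAS
file path is a placeholder:

import java.util.Properties;

public class SecureKafkaClientConfig
{
  public static Properties saslConsumerProps()
  {
    // The JVM also has to point at the client JAAS file from step 2, e.g.
    //   -Djava.security.auth.login.config=/path/to/kafka_client_jaas.conf
    Properties props = new Properties();
    props.put("security.protocol", "SASL_PLAINTEXT"); // or SASL_SSL
    props.put("sasl.kerberos.service.name", "kafka"); // must match the broker principal
    return props;
  }
}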