Hi,
Take a look at TimeBasedDirectoryScanner in FileSplitterInput. This scanner
accepts a list of files/directories to scan, and it also accepts a regex to
filter on file names. I think you can pick up ideas on how to scan multiple
directories from there.
-Priyanka
On Thu, Jul 7, 2016 at 6:59 PM, Mukkamula,
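The idea Priyanka describes — walking several directories and keeping only the file names that match a regex — can be sketched standalone with plain java.nio. This is a hypothetical helper to illustrate the approach, not the Malhar scanner's actual API:

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.regex.Pattern;

// Illustrative multi-directory scan with a file-name regex filter.
public class MultiDirScan {
    public static List<String> scan(List<Path> dirs, String regex) throws IOException {
        Pattern p = Pattern.compile(regex);
        List<String> matched = new ArrayList<>();
        for (Path dir : dirs) {
            try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
                for (Path f : stream) {
                    // Keep regular files whose names match the filter pattern.
                    if (Files.isRegularFile(f) && p.matcher(f.getFileName().toString()).matches()) {
                        matched.add(f.getFileName().toString());
                    }
                }
            }
        }
        Collections.sort(matched); // directory iteration order is unspecified
        return matched;
    }

    public static void main(String[] args) throws IOException {
        Path d1 = Files.createTempDirectory("scan1");
        Path d2 = Files.createTempDirectory("scan2");
        Files.createFile(d1.resolve("a.csv"));
        Files.createFile(d1.resolve("b.txt"));
        Files.createFile(d2.resolve("c.csv"));
        System.out.println(scan(Arrays.asList(d1, d2), ".*\\.csv")); // only the .csv names
    }
}
```

The real TimeBasedDirectoryScanner additionally tracks modification times so already-processed files are not re-emitted; this sketch only shows the listing-plus-filter part.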
Hi,
The file will be available after the window is committed; you can override the
committed() callback and start your thread after super.committed() is called.
You might want to double-check that the file is actually finalized before
starting your thread.
For your use case, I would suggest using AbstractFileOutputO
Take a look at the attached example, which sends 3 strings to 3 different
instances.
Yes, the tuples going to one partition will be processed in the order they
are sent to that partition.
On Thu, Jul 7, 2016 at 3:57 PM Raja.Aravapalli
wrote:
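The guarantee stated above — tuples sent to one partition are processed in the order they are sent — can be modeled with a toy router: records are assigned a partition from a stable key hash, and each partition's list preserves arrival order. The partitionFor helper is illustrative, not the hashing Apex uses internally:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of key-based partitioning: equal keys always hash to the same
// partition, and each partition sees its tuples in send order.
public class PartitionOrder {
    public static int partitionFor(String key, int numPartitions) {
        // Mask off the sign bit so the index is never negative.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static Map<Integer, List<String>> route(List<String> tuples, int numPartitions) {
        Map<Integer, List<String>> partitions = new HashMap<>();
        for (String t : tuples) {
            String key = t.substring(0, 1); // first character acts as the key here
            partitions.computeIfAbsent(partitionFor(key, numPartitions), k -> new ArrayList<>())
                      .add(t);
        }
        return partitions;
    }

    public static void main(String[] args) {
        // All "a*" tuples land in one partition, all "b*" in another,
        // each list keeping the order in which the tuples were sent.
        System.out.println(route(Arrays.asList("a1", "b1", "a2", "b2", "a3"), 3));
    }
}
```

Note the ordering guarantee is per partition only; there is no ordering across partitions.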
Hi Sandesh,
Below is the way I am setting codec:
SsnCheckCodec ssnCodec = new SsnCheckCodec(3);
dag.setInputPortAttribute(ssnCheck.input, PortContext.STREAM_CODEC, ssnCodec);
Also, one quick question:
When an operator is running in multiple instances… will the records received
by each in
Where are you setting the StreamCodec?
Here is one method,
dag.setInputPortAttribute(campaignProcessor.input,
Context.PortContext.STREAM_CODEC, new MyCodec());
On Thu, Jul 7, 2016 at 1:57 PM Ashwin Chandra Putta <
ashwinchand...@gmail.com> wrote:
Hi David,
The application is failing in the DataTorrent UI:

Application Overview: smallWindow1 (restart)
id: 0009  user: hadoop  uptime: 00:00:06
State: FAILED (FAILED)
current wID: -  recovery wID: -
Performance: latency (ms): -  processed/s: -  emitted/s: -  total processed: -
Raja,
Is MyOperator the actual name of the operator within your DAG? If not, can
you replace it with the actual name of the operator and try again?
Regards,
Ashwin.
On Thu, Jul 7, 2016 at 1:28 PM, Raja.Aravapalli
wrote:
Also, to share some info on the partitioner I am using:
I am using the StatelessPartitioner, configured as below:
dt.operator.MyOperator.attr.PARTITIONER
com.datatorrent.common.partitioner.StatelessPartitioner:3
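For reference, if that attribute is set in an Apex properties file (Hadoop-configuration-style XML), it would typically look like the fragment below; MyOperator is the logical operator name from this DAG, and the :3 suffix is the partition count:

```xml
<property>
  <name>dt.operator.MyOperator.attr.PARTITIONER</name>
  <value>com.datatorrent.common.partitioner.StatelessPartitioner:3</value>
</property>
```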
Thanks.
Regards,
Raja.
From: "Raja.Aravapalli"
mailto:raja.aravapa...@target
Hello,
It is running fine now. Got expected results. Thank you
- Original Message -
From: "Jaikit Jilka"
To: "users"
Sent: Thursday, July 7, 2016 12:55:58 PM
Subject: Re: Jdbcoutputoperator implementation
Hello,
I am not able to open application master log.
- Original Message -
Hi,
I have an operator which is running in 3 instances, i.e. partitions… and I want
all the records with the same key (my key is of "String" type here) to be routed
to the same instance/partition.
But I am unable to achieve this with my codec below.
import com.datatorrent.lib.codec.KryoSerializabl
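For context, the routing side of a custom StreamCodec comes down to getPartition(tuple): Apex sends tuples whose getPartition values agree (after masking against the partition count) to the same physical partition. The standalone class below only mimics that contract as a sketch — it is not the real Malhar codec base class and handles no serialization; the class name is illustrative:

```java
// Sketch of the routing half of a stream codec: return a stable,
// non-negative hint derived from the key so equal keys always map to
// the same physical partition. Not the Malhar API — illustration only.
public class SsnRoutingSketch {
    public int getPartition(String tuple) {
        // String.hashCode() is deterministic, so equal keys give equal hints;
        // the mask keeps the value non-negative.
        return tuple.hashCode() & 0x7fffffff;
    }

    public static void main(String[] args) {
        SsnRoutingSketch codec = new SsnRoutingSketch();
        // Equal keys produce the same hint, hence the same partition.
        System.out.println(codec.getPartition("123-45-6789")
                == codec.getPartition("123-45-6789")); // prints true
    }
}
```

If records with the same key are still landing on different partitions, the usual culprit is a getPartition implementation that depends on something other than the key (e.g. object identity).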
Hello,
I am not able to open application master log.
- Original Message -
From: "Pradeep A. Dalvi"
To: "users"
Sent: Thursday, July 7, 2016 12:35:13 PM
Subject: Re: Jdbcoutputoperator implementation
Please share Application Master logs.
On Thu, Jul 7, 2016 at 11:43 AM, Jaikit Jilka w
Please share Application Master logs.
On Thu, Jul 7, 2016 at 11:43 AM, Jaikit Jilka wrote:
Hi Sushil,
Can you tell us more details on how it fails after upgrading to 3.4.0?
Any error messages? App master logs? Container logs?
David
On Thu, Jul 7, 2016 at 12:23 PM, Chaudhary, Sushil (CONT) <
sushil.chaudh...@capitalone.com> wrote:
> We already tried Apex Malhar 3.4.0 with
> Datatorre
We already tried Apex Malhar 3.4.0 with
DataTorrent RTS 3.4.0. As soon as we upgrade the Apex Malhar library to 3.4.0,
our DataTorrent application starts failing.
However, the application works fine if we use Malhar version 3.3.0-incubating.
Also, once I move to malhar.version 3.4.0, I nee
Hi,
Can you please let me know what happens when the requestFinalize() method is
called, as described below?
Once the output files are written to HDFS, I would like to initiate a thread
that reads the HDFS files and copies them to an FTP location. So I am trying to
understand when I can trigger the thread.
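The commit-then-copy pattern suggested earlier in this thread can be modeled standalone: let the base class finish its commit bookkeeping first (super.committed()), then hand the copy work to a background thread. All class names below are illustrative stand-ins, not the Malhar operator hierarchy:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Toy model of "start work only after commit": a base class records the
// committed window, and the subclass kicks off a background task (standing in
// for an HDFS-to-FTP copy) only after the base bookkeeping has run.
public class AfterCommitCopy {
    static class BaseOperator {
        protected volatile long committedWindowId = -1;
        public void committed(long windowId) {
            committedWindowId = windowId; // akin to super.committed() in a real operator
        }
    }

    static class FtpCopyOperator extends BaseOperator {
        final ExecutorService executor = Executors.newSingleThreadExecutor();
        volatile boolean copyStarted = false;

        @Override
        public void committed(long windowId) {
            super.committed(windowId); // let the base finish its commit work first
            executor.submit(() -> {
                // In a real operator: verify the output file is finalized,
                // then copy it from HDFS to the FTP location.
                copyStarted = true;
            });
        }
    }

    public static void main(String[] args) throws Exception {
        FtpCopyOperator op = new FtpCopyOperator();
        op.committed(42L);
        op.executor.shutdown();
        op.executor.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(op.copyStarted && op.committedWindowId == 42L); // prints true
    }
}
```

Doing the copy on a separate thread keeps the slow FTP transfer off the operator's window-processing path; the earlier advice to confirm the file is finalized before copying still applies.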
Hello,
I completed my application now. Till now I was launching this application in
Sandbox and it works perfectly fine. I am getting expected results. But now I
tried running it on live cluster and it is not working. It is not giving me any
error but all my operators are killed by application
Hi Yunhan,
This example I am already using for reading the data from multiple directories
in parallel. Here, each directory is given to a separate operator instance.
My requirement is that I would like to add multiple directories to a single operator.
Regards,
Surya Vamshi
From: Yunhan Wang [mailto:yun...@d