The completed filename will always have the epoch timestamp/counter appended
to it; this is what uniquely distinguishes the rolled files.
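A minimal sketch of where that counter lands with a stock HDFS sink (the
agent/sink names a1/k1 and the paths are illustrative, not taken from this
thread):

  a1.sinks.k1.type = hdfs
  a1.sinks.k1.hdfs.path = hdfs://namenode:8020/flume/events/%Y%m%d
  a1.sinks.k1.hdfs.filePrefix = events
  a1.sinks.k1.hdfs.fileSuffix = .txt
  # Rolled files are named <prefix>.<counter><suffix>,
  # e.g. events.1432920411283.txt -- the counter is always appended.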
Thanks,
Rufus
On Fri, May 29, 2015 at 10:46 AM, Guyle M. Taber wrote:
> Ok I figured this out by using the %{basename} placeholder.
>
> However I’m trying to figure out how to prevent the epoch suffix from being
> applied to every file as it’s written to HDFS. [...]
Ok I figured this out by using the %{basename} placeholder.
However I’m trying to figure out how to prevent the epoch suffix from being
applied to every file as it’s written to HDFS.
Example:
20150528133001.txt-.1432920411283
How do I prevent the epoch timestamp from being appended to every file?
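For reference, a sketch of the kind of setup that produces a name like the
one above (the source/sink names and the spool directory are assumptions, not
quoted from this thread): a spooling-directory source exposes the original
filename in the "basename" header, the sink prefix picks it up, and the sink
then appends "." plus its counter:

  a1.sources.r1.type = spooldir
  a1.sources.r1.spoolDir = /var/spool/flume
  # Put each file's name into the "basename" event header
  a1.sources.r1.basenameHeader = true
  # Reuse that header as the HDFS file prefix
  a1.sinks.k1.hdfs.filePrefix = %{basename}-
  # Result: 20150528133001.txt-.1432920411283  (prefix + "." + counter)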
I second that; Murphy's law...
New releases or patches can break the correct event format, manual mistakes
too, and so on.
> On 16 Oct 2014, at 17:54, Paul Chavez wrote:
>
> Human error is the most common reason in my experience. Whether it is a
> configuration error or fault in app development, I was just relaying a
> method [...]
Human error is the most common reason in my experience. Whether it is a
configuration error or a fault in app development, I was just relaying a
method to make your Flume infrastructure more resilient. Regarding corrupted
events, now that I think of it, those have always been within the event
payload [...]
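Paul's exact method isn't shown in this excerpt, but one common way to guard
against missing routing headers (an illustration, not necessarily his
approach) is a pair of static interceptors that fill in a default only when
the header is absent:

  a1.sources.r1.interceptors = i1 i2
  a1.sources.r1.interceptors.i1.type = static
  a1.sources.r1.interceptors.i1.key = tenant
  a1.sources.r1.interceptors.i1.value = unknown
  # preserveExisting (default true) leaves a header alone if it is present
  a1.sources.r1.interceptors.i1.preserveExisting = true
  a1.sources.r1.interceptors.i2.type = static
  a1.sources.r1.interceptors.i2.key = data_type
  a1.sources.r1.interceptors.i2.value = unknown
  a1.sources.r1.interceptors.i2.preserveExisting = true

Events missing a tenant or data_type header then land under an "unknown"
directory rather than producing a malformed HDFS path.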
On 15/10/2014 17:57, Gwen Shapira wrote:
> Yes, this is absolutely possible - but you need to make sure the Flume
> event has the matching keys in the event header (tenant, type, and
> timestamp).
> Do this either using interceptors or through a custom source.
Thanks, I'll try it (maybe next week).
On 15/10/2014 17:57, Paul Chavez wrote:
> Yes, that will work fine. From experience, I can say you should definitely
> account for the possibility of the 'tenant' and 'data_type' headers being
> corrupted or missing outright.
How come they are missing or corrupted?
If my app is the only source for these [...]
[...]er a year.
Hope that helps,
Paul Chavez
-----Original Message-----
From: Jean-Philippe Caruana [mailto:j...@target2sell.com]
Sent: Wednesday, October 15, 2014 7:03 AM
To: user@flume.apache.org
Subject: HDFS sink: "clever" routing
Hi,
I am new to Flume (and to HDFS), so I hope my question is not stupid. [...]
Yes, this is absolutely possible - but you need to make sure the Flume
event has the matching keys in the event header (tenant, type, and
timestamp).
Do this either using interceptors or through a custom source.
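A sketch of the interceptor route (the "tenant|type|payload" body format and
all names here are assumptions for illustration, not from this thread): a
timestamp interceptor stamps each event, and a regex_extractor pulls tenant
and type out of the event body into headers:

  a1.sources.r1.interceptors = ts ex
  # Adds a "timestamp" header (epoch millis) unless one is already set
  a1.sources.r1.interceptors.ts.type = timestamp
  a1.sources.r1.interceptors.ts.preserveExisting = true
  # Assumes bodies like "acme|click|{...json...}"
  a1.sources.r1.interceptors.ex.type = regex_extractor
  a1.sources.r1.interceptors.ex.regex = ^([^|]+)\\|([^|]+)\\|
  a1.sources.r1.interceptors.ex.serializers = s1 s2
  a1.sources.r1.interceptors.ex.serializers.s1.name = tenant
  a1.sources.r1.interceptors.ex.serializers.s2.name = type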
On Wed, Oct 15, 2014 at 7:02 AM, Jean-Philippe Caruana wrote:
> Hi,
>
> I am new to Flume (and to HDFS), so I hope my question is not stupid. [...]
Hi,
I am new to Flume (and to HDFS), so I hope my question is not stupid.
I have a multi-tenant application (about 100 different customers for now).
I have 16 different data types.
(In production, we have approx. 15 million messages/day through our
RabbitMQ)
I want to write all my events to HDFS, with the path depending on the
tenant, the data type, and the event timestamp.
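The shape of sink configuration Gwen's answer points at (a sketch; the /data
layout and the names are assumed from the question, not quoted from it):

  a1.sinks.k1.type = hdfs
  # tenant/type come from event headers, %Y/%m/%d from the timestamp header
  a1.sinks.k1.hdfs.path = hdfs://namenode:8020/data/%{tenant}/%{type}/%Y/%m/%d
  a1.sinks.k1.hdfs.useLocalTimeStamp = false

With useLocalTimeStamp left false, the date escapes resolve from each event's
"timestamp" header, which is why the interceptors sketched above matter.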