Congrats Bessenyei and Jeff!!
Regards,
Rufus
On Mon, Sep 19, 2016 at 4:43 PM, Mike Percy wrote:
> Hi Apache Flume community,
>
> I am very happy to announce that the Flume PMC has voted to add Bessenyei
> Balázs Donát and Jeff Holoman as committers in recognition of their
> contributions to Flume
Congrats Hari!! Wonderful news!!
Regards,
Rufus
On Wed, Oct 21, 2015 at 5:50 PM, Arvind Prabhakar wrote:
> Dear Flume Users and Developers,
>
> I have had the pleasure of serving as the PMC Chair of Apache Flume since
> its graduation three years ago. I sincerely thank you and the Flume PMC
Your assumption is correct, as duplicates in a failure scenario will occur.
Thanks,
Rufus
On Tue, Sep 8, 2015 at 4:10 AM, Aljoscha Krettek
wrote:
> Hi,
> as I understand it the HDFS sink uses the transaction system to verify
> that all the elements in a transaction are written. This is what I w
For your second question: when agent2's HDFS sink is throwing exceptions,
this won't be propagated to agent1. It won't even be propagated to agent2's
source. But the channel will start filling up and will reach max capacity,
and then agent2's source cannot take any more data, so it will throw
exceptions.
Do you want to build Flume from source? If not, you can just download the
binary version.
https://cwiki.apache.org/confluence/display/FLUME/Getting+Started this
should give you a fairly good idea of how to proceed.
Thanks,
Rufus
On Wed, Jul 22, 2015 at 12:47 PM, Sutanu Das wrote:
> Hello,
>
danand.
> Illinois, USA.
> 857-253-9553.
>
> On Wed, Jul 22, 2015 at 11:55 AM, Johny Rufus wrote:
>
>> Can you confirm one more thing, do you have some files in the spool
>> directory that are 0 bytes ?
>>
>> Thanks,
>> Rufus
>>
>> On Wed, Jul
. Will it be solved?
>
> Regards,
> Nikhil Gopishetti Sadanand.
> Illinois, USA.
> 857-253-9553.
>
> On Wed, Jul 22, 2015 at 10:35 AM, Johny Rufus wrote:
>
apache.org/jira/browse/FLUME-1934
>
> Thanks,
> Flume User.
>
>
> On Wed, Jul 22, 2015 at 10:16 AM, Johny Rufus wrote:
>
>> Are you renaming or deleting the file that has been placed in the
>> spooling directory ?
>>
>> Thanks,
>> Rufus
>&g
Are you renaming or deleting the file that has been placed in the spooling
directory ?
Thanks,
Rufus
On Wed, Jul 22, 2015 at 6:41 AM, Nikhil Gs
wrote:
> Hello Everyone,
>
> Facing a problem with flume spool.
> Below is my configuration,
>
> # Please paste flume.conf here. Example:
>
> # Sources
The spooling directory source, as of now, supports only reading from a flat
directory and won't read files from subdirectories.
You could write an external script that transfers all the files in all the
date directories to a common directory that the spooling source points to (if
this fits your use case
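A minimal sketch of such a mover script (directory names and layout are hypothetical, assuming dated subdirectories like 2015-07-22 under one parent):

```shell
# flatten_spool SRC SPOOL: move every regular file from the dated
# subdirectories of SRC into the flat SPOOL directory that the
# spooling directory source watches. The date directory name is
# prefixed to each file name to avoid collisions in SPOOL.
flatten_spool() {
  src=$1
  spool=$2
  for day in "$src"/*/; do
    [ -d "$day" ] || continue
    for f in "$day"*; do
      [ -f "$f" ] || continue
      mv "$f" "$spool/$(basename "$day")_$(basename "$f")"
    done
  done
}
```

Run it (e.g. from cron) only on files that writers have finished, since the spooling directory source requires files to be immutable once placed.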
When you restart the agent, all the events in the memory channel will be
lost (if the memory channel was full and contained 100k events, they will
all be lost).
File channel will not result in loss of events; on restart, the channel's
state will be restored (File channel always maintains the last two dat
You are running out of space in your memory channel queue.
Can you try reducing your transaction capacity to an experimental value
like 1000 and batchSize to 100, see how that works for you, and take it
from there?
Also make sure that the size of your memory channel can always account for
the
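As a sketch (agent/component names hypothetical, assuming an HDFS sink), the sizing relation capacity >= transactionCapacity >= batchSize might look like:

```properties
# channel can hold 100k events; each transaction moves at most 1000
agent.channels.c1.type = memory
agent.channels.c1.capacity = 100000
agent.channels.c1.transactionCapacity = 1000
# sink drains the channel 100 events per batch (100 <= 1000)
agent.sinks.k1.hdfs.batchSize = 100
```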
2015 at 8:00 PM, Johny Rufus wrote:
>
>> There is a typo in your property "cnannel1", hence the property is not
>> set
>>
>> a1.channels.cnannel1.kafka.consumer.timeout.ms = 100
>>
>> Thanks,
>> Rufus
>>
>> On Fri, Jul 3, 201
There is a typo in your property "cnannel1", hence the property is not set
a1.channels.cnannel1.kafka.consumer.timeout.ms = 100
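Assuming the channel is actually defined under the name channel1, the corrected line would be:

```properties
# channel name now matches the defined channel, so the property takes effect
a1.channels.channel1.kafka.consumer.timeout.ms = 100
```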
Thanks,
Rufus
On Fri, Jul 3, 2015 at 4:49 PM, Jun Ma wrote:
> Thanks for your reply. But from what I read, the magic things Flafka does
> is that you don't need t
Yes, when the channel is not available Syslog source loses data.
Thanks,
Rufus
On Thu, Jul 2, 2015 at 1:51 AM, Michael Morello
wrote:
> Seems related to https://issues.apache.org/jira/browse/FLUME-1103
> If I understand correctly data is lost and syslog source is not reliable ?
>
> 2015-07-02 1
Embedded Agent is designed to send events to other Flume agents (as opposed
to sending events to the final destination), and hence only Avro RPC Sink
is supported.
If you want to support other sinks, you should look at changing the
value of ALLOWED_SINKS in EmbeddedAgentConfiguration and take things
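For reference, an embedded agent is configured through a plain property map rather than a flume.conf file; a typical sketch (hostname/port hypothetical) looks like:

```properties
# embedded agent configuration map (keys per the Flume Developer Guide);
# only the embedded source and Avro sinks are allowed here
source.type = embedded
channel.type = memory
channel.capacity = 200
sinks = sink1
sink1.type = avro
sink1.hostname = collector1.example.com
sink1.port = 5564
```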
If the checkpointing interval is 30 seconds (the default) and
dualCheckpoints are enabled (in case the agent was interrupted while
writing a checkpoint), then replay should happen only from the last 30 secs
(worst case 60 secs). Not sure if this is happening in your case, or a
full replay is happ
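A file channel configured along those lines (paths hypothetical) would be:

```properties
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /var/flume/checkpoint
a1.channels.c1.dataDirs = /var/flume/data
# checkpoint every 30 seconds (the default)
a1.channels.c1.checkpointInterval = 30000
# keep a backup checkpoint so an interrupted write does not force a full replay
a1.channels.c1.useDualCheckpoints = true
a1.channels.c1.backupCheckpointDir = /var/flume/backup-checkpoint
```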
Based on the value of the hour in the event's timestamp header, Flume will
write the event to the active file in the corresponding partition. If the
file in that partition is still active (not yet renamed from the temporary
inUseSuffix extension), then the event goes to the active file. If the fil
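For example, an HDFS sink writing hourly partitions from the timestamp header might be configured as (names and path hypothetical):

```properties
a1.sinks.k1.type = hdfs
# %Y/%m/%d/%H are filled in from the event's timestamp header
a1.sinks.k1.hdfs.path = /flume/events/%Y-%m-%d/%H
# files carry this suffix while still open, and are renamed when rolled
a1.sinks.k1.hdfs.inUseSuffix = .tmp
```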
> To: "d...@flume.apache.org", "user@flume.apache.org"
> > Subject: [ANNOUNCE] New Flume committe
A transaction in Flume consists of one or more batches, so the minimum
requirement is that your channel's transaction capacity >= the batchSize of
the source/sink.
Since Flume supports "at least once" transaction semantics, all events part
of the current transaction are stored internally as part of a Take List
t
Looks more like a config issue. Can you verify the memory channel's
transaction capacity? By default it is 100, and the error reports the same
too.
Thanks,
Rufus
On Wed, Jun 17, 2015 at 2:55 PM, Quintana, Cesar (C) <
cesar.quint...@csaa.com> wrote:
> Hello,
>
>
>
> I have a Memory Channel and a
to start flume agent in this case rite ?
>
> Thanks,
> Nithesh
>
> On Wed, Jun 17, 2015 at 3:54 AM, Johny Rufus wrote:
>
>> An embedded agent only supports Embedded source/Avro sink combination, as
>> it is mainly used to send data generated by an application to a next l
Yes, each message in JMS is translated into one event, so you should be
able to see two messages when consuming from Kafka.
Can you use a logger sink instead of the Kafka sink and verify that you are
getting two messages from the JMS source?
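A throwaway logger-sink config for that check (component names hypothetical) could be:

```properties
# temporarily replace the Kafka sink with a logger sink;
# events are then written to the agent's log for inspection
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```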
Thanks,
Rufus
On Tue, Jun 16, 2015 at 6:16 AM, Guillermo Ortiz
An embedded agent only supports the embedded source/Avro sink combination,
as it is mainly used to send data generated by an application to a
next-level Flume agent running an Avro source.
For your case, you need to set up a Flume agent configured with an exec
source and a roll file sink (as the data is gen
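A minimal sketch of such an agent (command and paths hypothetical):

```properties
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# exec source runs the command and turns each output line into an event
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /var/log/app.log
a1.sources.r1.channels = c1
a1.channels.c1.type = memory
# file_roll sink writes events to rolling files in a local directory
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /var/flume/out
a1.sinks.k1.channel = c1
```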
The Apache Flume team is pleased to announce the release of Flume
version 1.6.0.
Flume is a distributed, reliable, and available service for efficiently
collecting, aggregating, and moving large amounts of log data.
This release can be downloaded from the Flume download page at:
http://flume.apac
The completed filename will always contain the epochTimestamp/counter added
to it (this is to uniquely distinguish the rolled files)
Thanks,
Rufus
On Fri, May 29, 2015 at 10:46 AM, Guyle M. Taber wrote:
> Ok I figured this out by using the %{basename} placeholder.
>
> However I’m trying to figu
The Roll File Sink does not support escape sequences in the directory
parameter, and it also does not support a filePrefix config.
Thanks,
Rufus
On Sun, May 24, 2015 at 6:00 AM, rafeeq shanavaz
wrote:
> Hi,
>
> I am using Flume Roll-File-Sink to write data in text file on local
> directory.
>
> I
Are you running your Hadoop cluster in Kerberos mode? If so, is your
Kerberos principal/keytab combination correct? You can try to log in to
the KDC server of your Hadoop cluster independently, using the
specified principal/keytab, to make sure the combination can log in to the
KDC/cluster and use
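On the Flume side, the relevant HDFS sink properties would look like this (principal and keytab path hypothetical):

```properties
# credentials the HDFS sink uses to authenticate against secure HDFS
a1.sinks.k1.hdfs.kerberosPrincipal = flume@EXAMPLE.COM
a1.sinks.k1.hdfs.kerberosKeytab = /etc/security/keytabs/flume.keytab
```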
+1 for dropping hadoop-1.
Should we also consider dropping the hbase-94 compatibility profile?
Thanks,
Rufus
On Wed, May 13, 2015 at 7:08 AM, wrote:
> +1
>
>
> Jean-François Guilmard
>
> -Original Message-
> From: Needham, Guy [mailto:guy
Congratulations Ashish!!
Thanks,
Rufus
On Fri, May 8, 2015 at 10:42 AM, Hari Shreedharan
wrote:
> On behalf of the Apache Flume PMC, I am excited to welcome Ashish Paliwal as
> a committer on the Apache Flume project. Ashish has actively contributed
> several patches to the Flume project, incl
Sink Groups are designed to operate such that, at any time, only one sink
out of the configured sinks in the group is chosen to process data out of
the channel. So this is the expected behavior. As you had noted, removing
the sink group will result in each sink processing the data out of the
channel.
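A sketch of such a group (names hypothetical); with either processor type, only one sink at a time drains the channel:

```properties
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
# load_balance picks one sink per drain attempt; failover always prefers
# the highest-priority healthy sink
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.selector = round_robin
```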