On Thursday, April 16, 2015 12:47 PM, Rahul Ravindran rahu...@yahoo.com
wrote:
Hi, Below is my flume config and I am attempting to get the Load Balancing sink
group to load-balance across multiple machines. I see only 2 threads created for
the entire sink group when using the load balancing sink group and see
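For reference, a minimal load-balancing sink group sketch (agent, sink, and host names below are all hypothetical) looks roughly like this in Flume 1.3:

```properties
# Hypothetical agent "agent1" with two Avro sinks balanced round-robin.
agent1.sinks = k1 k2
agent1.sinkgroups = g1
agent1.sinkgroups.g1.sinks = k1 k2
agent1.sinkgroups.g1.processor.type = load_balance
agent1.sinkgroups.g1.processor.selector = round_robin
agent1.sinkgroups.g1.processor.backoff = true

agent1.sinks.k1.type = avro
agent1.sinks.k1.channel = ch1
agent1.sinks.k1.hostname = collector-a.example.com
agent1.sinks.k1.port = 4141

agent1.sinks.k2.type = avro
agent1.sinks.k2.channel = ch1
agent1.sinks.k2.hostname = collector-b.example.com
agent1.sinks.k2.port = 4141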
Hi,
We are using CDH Flume 1.3 (which ships with CDH 4.2.1). We see this error in our
flume logs in our production system, and restarting Flume did not help. Looking
at the Flume code, it appears to be expecting the byte to be an OPERATION, but
it is not. Any ideas on what happened?
Thanks,
~Rahul.
From: Rahul Ravindran
rahu...@yahoo.com
Sent: Thursday, June 27, 2013 11:24 AM
Subject: Re: Flume error in FIleChannel
Looks like the file may have been corrupted. Can you verify whether you are out of
disk space, or see anything else that might have caused the data to be corrupted?
Hari
On Thu, Jun 27
Hi,
Is there a rough estimate on when 1.4 may ship? We were primarily
looking for https://issues.apache.org/jira/browse/FLUME-997 and are perhaps
looking to port that to 1.3.1, or to use 1.4 if it is likely to ship sometime
soon (by end of June).
~Rahul.
Pinging again since this has been happening a lot more frequently recently
From: Rahul Ravindran rahu...@yahoo.com
To: User-flume user@flume.apache.org
Sent: Tuesday, May 7, 2013 8:42 AM
Subject: IOException with HDFS-Sink:flushOrSync
Hi,
We have noticed
(but
it was not in CDH4.1.2)
Hari
--
Hari Shreedharan
On Monday, May 13, 2013 at 7:23 PM, Rahul Ravindran wrote:
We are using CDH 4.1.2 (Hadoop version 2.0.0). It looks like CDH 4.2.1 also uses
the same Hadoop version. Any suggestions on mitigations?
Sent from my phone. Excuse the terseness.
On May 13
...@cloudera.com
To: user@flume.apache.org user@flume.apache.org; Rahul Ravindran
rahu...@yahoo.com
Sent: Monday, May 6, 2013 9:57 PM
Subject: Re: Usage of use-fast-replay for FileChannel
Did you have an issue with the checkpoint such that the entire 6G of data was
replayed (look
Hi,
We have noticed this a few times now: we appear to get an IOException
from HDFS, and the channel stops draining until the Flume process is
restarted. Below are the logs: namenode-v01-00b is the active namenode
(namenode-v01-00a is standby). We are using Quorum Journal Manager
Hi,
For FileChannel, how much of a performance improvement in replay times was
observed with use-fast-replay? We currently have use-fast-replay set to false
and were replaying about 6 G of data. We noticed replay times of about one
hour. I looked at the code, and it appears that fast-replay
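For context, a FileChannel sketch with fast replay enabled (directory paths hypothetical) would look something like:

```properties
agent1.channels = ch1
agent1.channels.ch1.type = file
agent1.channels.ch1.checkpointDir = /var/lib/flume/checkpoint
agent1.channels.ch1.dataDirs = /var/lib/flume/data
# If I read the code correctly, fast replay only kicks in when no usable
# checkpoint exists; it then replays the data logs directly, trading
# memory for replay speed.
agent1.channels.ch1.use-fast-replay = true
```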
Hi,
Flume writes to HDFS (we use the Cloudera 4.1.2 release and Flume 1.3.1) using the
HDFS nameservice, which points to 2 namenodes (one of which is active and the
other standby). When the HDFS service is restarted, the namenode which comes
up first becomes active. If the active namenode was
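A minimal HDFS sink sketch using the HA nameservice rather than a specific namenode host (the nameservice name and path below are hypothetical):

```properties
agent1.sinks.k1.type = hdfs
agent1.sinks.k1.channel = ch1
# "nameservice1" must match dfs.nameservices in the client's hdfs-site.xml,
# so namenode failover is handled by the HDFS client, not by Flume.
agent1.sinks.k1.hdfs.path = hdfs://nameservice1/flume/events/%Y-%m-%d
agent1.sinks.k1.hdfs.fileType = DataStream
agent1.sinks.k1.hdfs.rollInterval = 300
```

Note that the %Y-%m-%d escapes require a timestamp header on each event (e.g. from a timestamp interceptor).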
I have attached the zipped log file at
https://issues.apache.org/jira/browse/FLUME-1928
From: Hari Shreedharan hshreedha...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Monday, February 25, 2013 1:30 PM
Subject: Re: File
Re-sending.
From: Rahul Ravindran rahu...@yahoo.com
To: User-flume user@flume.apache.org
Sent: Thursday, January 31, 2013 2:39 PM
Subject: Security between Avro-source and Avro-sink
Hi,
Is there a way to have secure communications between 2 Flume
Hi Brock,
I created a JIRA https://issues.apache.org/jira/browse/FLUME-1900 which has
the log file attached.
~Rahul.
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Saturday, February 2, 2013 4:05 PM
Hi,
Is there a way to have secure communications between 2 Flume machines (one
which has an Avro source that forwards data to an Avro sink)?
Thanks,
~Rahul.
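SSL support for the Avro source and sink is what FLUME-997 adds (from Flume 1.4 on). Assuming that version, a sketch of an encrypted hop (keystore paths, passwords, and host names are all hypothetical):

```properties
# Receiving agent: Avro source terminating SSL.
agent2.sources.r1.type = avro
agent2.sources.r1.bind = 0.0.0.0
agent2.sources.r1.port = 4141
agent2.sources.r1.ssl = true
agent2.sources.r1.keystore = /etc/flume/keystore.jks
agent2.sources.r1.keystore-password = secret

# Sending agent: Avro sink connecting over SSL.
agent1.sinks.k1.type = avro
agent1.sinks.k1.channel = ch1
agent1.sinks.k1.hostname = collector.example.com
agent1.sinks.k1.port = 4141
agent1.sinks.k1.ssl = true
agent1.sinks.k1.trust-all-certs = false
```

On 1.3.x, which predates FLUME-997, an external tunnel (e.g. stunnel or a VPN) between the hops would be the usual workaround.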
Hi,
Is there any additional management/monitoring capability, or anything else for
Flume, available via Cloudera Manager?
Thanks,
~Rahul.
Hi,
Is Flume 1.3 part of CDH4? Is Flume 1.3 available from any Debian repo for
installation? I have the link http://flume.apache.org/download.html, which
gives me the tar file; however, this does not install Flume's dependencies.
Thanks,
~Rahul.
Hi,
This is primarily to address a Flume upgrade scenario in the case of
incompatible changes in the future. I tried this with multiple processes of the
same version, and it appeared to work. Are there any concerns about running
multiple versions of Flume on the same box (each with
does come up.
Thanks for all the info.
~Rahul.
From: Mike Percy mpe...@apache.org
To: user@flume.apache.org
Cc: Rahul Ravindran rahu...@yahoo.com
Sent: Wednesday, November 21, 2012 2:24 PM
Subject: Re: Running multiple flume versions on the same box
On Mon, Nov 19, 2012 at 2:18 PM, Rahul Ravindran rahu...@yahoo.com wrote:
Are there other such libraries which will need to be downloaded? Is there a
well-defined location for the hadoop jar and any other jars that flume may
depend on?
is that hadoop-hdfs brings in a ton of other stuff which will not be
used on any box except the one running the HDFS sink.
Thanks,
~Rahul.
From: Hari Shreedharan hshreedha...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Monday
On Nov 19, 2012, at 4:27 PM, Rahul Ravindran rahu...@yahoo.com wrote:
That is unfortunate. Is it sufficient if I package just hadoop-common.jar, or
is the recommended way essentially to do an apt-get install flume-ng, which
will install the below:
# apt-cache depends flume-ng
flume-ng
HAProxy has a TCP mode where it round-robins TCP connections. Does it need to
understand anything specific about the wire protocol used by Flume?
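In TCP mode HAProxy should not need any knowledge of the Avro wire protocol; each client connection is simply handed to a backend. A hypothetical haproxy.cfg fragment (addresses and ports invented):

```
listen flume-avro
    bind *:4545
    mode tcp
    balance roundrobin
    server flume1 10.0.0.1:4141 check
    server flume2 10.0.0.2:4141 check
```

One caveat: an Avro sink holds a persistent connection, so balancing happens per connection, not per event; a long-lived sink will stick to one backend until it reconnects.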
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent
Resending, since I sent it during off-hours.
From: Rahul Ravindran rahu...@yahoo.com
To: user@flume.apache.org user@flume.apache.org
Sent: Tuesday, November 13, 2012 5:52 PM
Subject: Flume hops behind HAProxy
Hi,
Before I try it, I wanted to check
In the 1.3 snapshot documentation, I don't see anything about the spool
directory source. Is that ready?
Sent from my phone. Excuse the terseness.
On Nov 13, 2012, at 9:43 AM, Hari Shreedharan hshreedha...@cloudera.com wrote:
You can find the details of the components and how to wire them
, 2012 10:12 AM
Subject: Re: high level plugin architecture
Where are you seeing that? I see that documented in the 1.3.0 branch
under Spooling Directory Source
On Tue, Nov 13, 2012 at 11:57 AM, Rahul Ravindran rahu...@yahoo.com wrote:
In the 1.3 snapshot documentation, I don't see anything
to build trunk/1.3 branch or wait for 1.3
release).
Thanks
Hari
--
Hari Shreedharan
On Thursday, November 8, 2012 at 3:05 PM, Rahul Ravindran wrote:
Hello,
I wanted to perform a load test to get an idea of how we should scale
Flume for our deployment. I have pasted
file channel with this source, we will end up with double
writes to disk, correct? (one for the legacy log files, which will be ingested
by the Spool Directory source, and the other for the WAL)
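For reference, a spooling directory source feeding a file channel might be wired roughly like this in 1.3 (directory paths hypothetical):

```properties
agent1.sources = src1
agent1.channels = ch1
agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /var/log/app/spool
agent1.sources.src1.channels = ch1
agent1.channels.ch1.type = file
agent1.channels.ch1.checkpointDir = /var/lib/flume/checkpoint
agent1.channels.ch1.dataDirs = /var/lib/flume/data
```

With this wiring each event does hit disk twice: once in the spooled file and once in the file channel's data logs.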
From: Rahul Ravindran rahu...@yahoo.com
To: user@flume.apache.org
source on failure?
Thanks,
~Rahul.
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Wednesday, November 7, 2012 11:48 AM
Subject: Re: Guarantees of the memory channel for delivering to sink
Hi,
Yes if you
Apologies. I am new to Flume, and I am probably missing something fairly
obvious. I am attempting to test using a timestamp interceptor and a host
interceptor, but I see only a sequence of numbers at the remote end.
Below is the flume config:
agent1.channels.ch1.type = MEMORY
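The config fragment above is truncated; a minimal sketch of wiring timestamp and host interceptors onto a source (agent and interceptor names hypothetical) would be along these lines. Note that interceptors only add headers and never rewrite the event body, so a sequence-generator source will still show plain numbers at the remote end:

```properties
agent1.sources.src1.interceptors = ts hostint
agent1.sources.src1.interceptors.ts.type = timestamp
agent1.sources.src1.interceptors.hostint.type = host
agent1.sources.src1.interceptors.hostint.hostHeader = host
# The headers only become visible where something consumes them, e.g.
# in an HDFS sink path escape such as /flume/%{host}/%Y-%m-%d
```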
Hi,
I am very new to Flume and we are hoping to use it for our log aggregation
into HDFS. I have a few questions below:
FileChannel will double our disk IO, which will affect IO performance on
certain performance-sensitive machines. Hence, I was hoping to write a custom
Flume source which
?
From: Brock Noland br...@cloudera.com
To: user@flume.apache.org; Rahul Ravindran rahu...@yahoo.com
Sent: Tuesday, November 6, 2012 1:44 PM
Subject: Re: Guarantees of the memory channel for delivering to sink
But in your architecture you are going to write