Sorry. My mistake. It is loading JSON data properly after the temporary fix.
On Thu, May 8, 2014 at 6:24 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Hi Ashish,
>
> Thanks for the solution. I made the changes and I can see the JSON message
> now. There i
er tmp = jsonBuilder();
>> parser = XContentFactory.xContent(contentType).createParser(data);
>> parser.nextToken();
>> tmp.copyCurrentStructure(parser);
>> builder.field(fieldName, tmp); <<<< This is where we might have
>> an issue (
, 2014 at 12:56 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
>
> I tried using the latest Flume Elasticsearch sink jar built from
> 1.5-SNAPSHOT, but still no luck. I will try to see if it is an issue with
> the Elasticsearch API. When I loaded json data using h
; On Wed, Apr 9, 2014 at 6:17 PM, Deepak Subhramanian <
> deepak.subhraman...@gmail.com> wrote:
>
>> Thanks Otis. I will give it a try. Do I have to replace the Flume jars or
>> add the Flume jar to the plugins directory to override the current Flume
>> version?
>
Thanks Simon. I am also struggling with no luck. I tried using the latest
Flume Elasticsearch sink jar built from 1.5-SNAPSHOT, but still no luck. I
will try to see if it is an issue with the Elasticsearch API. When I loaded
JSON using Hive, it loaded properly. But we have to pass a property
es
<http://hostname1/dev>
We have to go through a firewall request every time we need to add an
additional port to a Flume agent.
--
Deepak Subhramanian
d it make
> sense to give that a try?
>
> Otis
> --
> Performance Monitoring * Log Analytics * Search Analytics
> Solr & Elasticsearch Support * http://sematext.com/
>
>
> On Mon, Apr 7, 2014 at 12:08 PM, Deepak Subhramanian <
> deepak.subhraman...@gmail.com> wr
urce": {
"@message": "org.elasticsearch.common.xcontent.XContentBuilder@58bf76d2",
"@timestamp": "2014-04-07T09:49:26.490Z",
"@fields": {
"timestamp": "1396864166490"
}
},
"sort": [
1396864166490,
1396864166490
]
}
org.elasticsearch.common.xcontent.XContentBuilder
--
Deepak Subhramanian
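The `@message` value above is the telltale default `Object.toString()` form, `ClassName@hexHash`. A minimal stand-in demo (no Elasticsearch classes; `FakeBuilder` is hypothetical and merely plays the role of `XContentBuilder`) shows where that string comes from when a builder object is stored as a plain value instead of its JSON contents:

```java
// Stand-in demo: why "@message" came out as
// "org.elasticsearch.common.xcontent.XContentBuilder@58bf76d2".
public class ToStringDemo {
    // Hypothetical class for illustration; no toString() override,
    // just like a builder serialized as a plain value.
    static class FakeBuilder {}

    public static void main(String[] args) {
        Object tmp = new FakeBuilder();
        // String conversion falls back to Object.toString():
        // "ClassName@hexHashCode" -- the exact shape seen in the index.
        System.out.println("stored value: " + tmp);
    }
}
```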
the easiest way to go about this? I saw from reading docs
> that creating a morphline interceptor could be the way to go but I did not
> fully understand how that
> works.
>
> Thanks a lot
>
>
--
Deepak Subhramanian
elements to strings and
>> try. else you may want to build your own udf to do it which looks more
>> elegant way rather than just typecasting it.
>>
>>
>> On Wed, Nov 13, 2013 at 5:18 PM, Deepak Subhramanian <
>> deepak.subhraman...@gmail.com> wrote:
>
"type":{"type":"map","values":"string"}},{"name":"body","type":"bytes"}]}');
describe flume_avro_test;
OK
headers map from deserializer
body array from deserializer
Thanks,
Deepak Subhramanian
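For reference, a table with that describe output (headers as a map, body surfacing as an array because old Hive maps Avro `bytes` to `array<tinyint>`) can be declared against Flume's avro_event output with Hive's Avro SerDe. A sketch; the table name and location are illustrative:

```sql
-- table name and LOCATION are illustrative
CREATE EXTERNAL TABLE flume_avro_test
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
STORED AS
  INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
  OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
LOCATION '/flume/avro'
TBLPROPERTIES ('avro.schema.literal'='{"type":"record","name":"Event",
  "fields":[{"name":"headers","type":{"type":"map","values":"string"}},
            {"name":"body","type":"bytes"}]}');
```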
Thanks David.
On Thu, Oct 24, 2013 at 2:47 PM, David Sinclair <
dsincl...@chariotsolutions.com> wrote:
> You could do that or add the TimestampInterceptor to your source as well.
>
>
> On Thu, Oct 24, 2013 at 5:16 AM, Deepak Subhramanian <
> deepak.subhraman...@gmail.com
present
> in the event for this to work. What source are you using?
>
>
> On Wed, Oct 23, 2013 at 11:25 AM, Deepak Subhramanian <
> deepak.subhraman...@gmail.com> wrote:
>
>> Hi ,
>>
>> I am trying to store my logs in folders named with date for my file_
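A minimal sketch of the interceptor setup David describes; the agent, source, and sink names here are assumed:

```properties
# the timestamp interceptor stamps each event's headers,
# so the sink can expand date escapes in its path
tier1.sources.src1.interceptors = ts
tier1.sources.src1.interceptors.ts.type = timestamp

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /flume/logs/%Y-%m-%d
```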
at java.io.FileOutputStream.<init>(Unknown Source)
at java.io.FileOutputStream.<init>(Unknown Source)
at org.apache.flume.sink.RollingFileSink.process(RollingFileSink.java:169)
Thanks,
Deepak Subhramanian
mCh issue.
>
>
> On Fri, Oct 18, 2013 at 8:47 AM, Deepak Subhramanian <
> deepak.subhraman...@gmail.com> wrote:
>
>> Hi
>> I am getting an error while load testing a Flume agent with HTTPSource
>> and Avro Sink.
>>
>> Is there an optimum configuration for
ht
at
org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:128)
at
org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)
at
org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:192)
--
Deepak Subhramanian
at 12:23 AM, Hari Shreedharan wrote:
> The Avro Sink is used for communication between Flume agents. To directly
> insert into HDFS you simply use an Avro Serializer with the HDFS sink.
>
>
> Thanks,
> Hari
>
> On Sunday, October 6, 2013 at 3:38 PM, Deepak Subhramanian
Hi Hari ,
I tried using an avro sink after HTTPSource and then an avro source and
hdfs sink and it seems to be working. Do we have to use an avro sink first,
or can we directly convert to avro using the HDFS sink?
Thanks, Deepak
On Sun, Oct 6, 2013 at 11:27 PM, Deepak Subhramanian
!org.apache.hadoop.io.LongWritableorg.apache.hadoop.io.TextK???2-%??-/??
A??,? ?xmldata
On Fri, Oct 4, 2013 at 10:43 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Thanks Hari.
>
> I specified the fileType. This is what I have. I will try again and let
> you know.
>
> tier1.sources = httpsrc1
>
pache.org/FlumeUserGuide.html#hdfs-sink
>
>
> Thanks,
> Hari
>
> On Friday, October 4, 2013 at 6:52 AM, Deepak Subhramanian wrote:
>
> I tried using the HDFS Sink to generate the avro file by using the
> serializer as avro_event. But it is not generating an avro file. But a
> sequence fil
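The sequence-file symptom matches the HDFS sink's default container: `hdfs.fileType` defaults to SequenceFile, which wraps whatever the serializer emits. A sketch of the usual avro_event setup, with agent and sink names assumed:

```properties
# assumed agent/sink names
tier1.sinks.hdfs1.type = hdfs
tier1.sinks.hdfs1.hdfs.path = /flume/avro
# DataStream writes the serializer output as-is (no SequenceFile wrapper)
tier1.sinks.hdfs1.hdfs.fileType = DataStream
tier1.sinks.hdfs1.serializer = avro_event
```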
?
"{ \"type\":\"record\", \"name\": \"Event\", \"fields\": [" +
" {\"name\": \"headers\", \"type\": { \"type\": \"map\", \"values\":
\"string\" } }, " +
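Assembled, that string literal builds the Event schema the avro_event serializer writes: headers as a string map, body as raw bytes (the same schema appears in the avro.schema.literal in the Hive thread):

```json
{ "type": "record", "name": "Event", "fields": [
  { "name": "headers", "type": { "type": "map", "values": "string" } },
  { "name": "body", "type": "bytes" }
] }
```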
arty-plugins
>
>
> Thanks,
> Hari
>
> On Thursday, October 3, 2013 at 6:26 AM, Deepak Subhramanian wrote:
>
> Thanks Dave. I got it extended. I struggled for some time since I put
> the jar in the wrong directory. It will be good to have the documentation
> updated wi
avro with some configuration details. I want to store the entire xml string
in an avro variable.
Thanks in advance for any inputs.
Deepak Subhramanian
> class. Additional parameters will be given to your handler on the configure
> method. The properties will be anything that starts with handler.*
>
> hope that helps,
>
> dave
>
>
> On Wed, Oct 2, 2013 at 4:25 PM, Deepak Subhramanian <
> deepak.subhraman...
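Dave's note above about handler.* properties translates to configuration along these lines; the handler class and the schemaPath property are hypothetical:

```properties
tier1.sources.httpsrc1.type = http
tier1.sources.httpsrc1.port = 5140
# custom HTTPSourceHandler implementation (class name hypothetical)
tier1.sources.httpsrc1.handler = com.example.XmlToAvroHandler
# anything under handler.* reaches the handler's configure() method
tier1.sources.httpsrc1.handler.schemaPath = /etc/flume/event.avsc
```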
Hi ,
Has anyone extended HTTPHandler in Flume? I am trying to add an extension
to receive XML documents and convert them to Avro. Appreciate any inputs.
Thanks,
Deepak Subhramanian
It started working. It looks like it is something related to the way CDH
updates the config file. It was not getting updated properly.
On Wed, Oct 2, 2013 at 12:26 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Hi, I am trying to use Flume HTTP Handler. But getting an er
filePrefix = access_log
--
Deepak Subhramanian