It’s a common case: the Event listed here is the generic avro_event
object used when serializing to HDFS.
We had someone simply change the Event from body[byte[]] to body[String] when
serializing, which has the unfortunate side effect of altering data if it
isn't UTF-8.
It did, however, solve the H
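For context, here is a minimal Python sketch (my own illustration, not from the thread) of why decoding arbitrary event bytes into a string is lossy; Java's `new String(bytes, UTF_8)` behaves like `errors="replace"` here:

```python
# Bytes that are not valid UTF-8, e.g. raw binary in a Flume event body.
raw = b"Hi\xff\xfe"

# Decoding to a string replaces the invalid bytes with U+FFFD...
decoded = raw.decode("utf-8", errors="replace")

# ...so re-encoding does not round-trip back to the original payload.
reencoded = decoded.encode("utf-8")
assert reencoded != raw  # the event body was silently altered
```

This is exactly the hazard of switching the body field from bytes to String: anything that wasn't valid UTF-8 is corrupted on the way through.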
Concat support is there, but only for string datatypes, not for tinyints.
I'm not sure it's such a common use case.
If you want to build it, you can contribute it back to Hive.
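Short of writing a UDF, the usual workaround is to cast the tinyint columns to string before concatenating; a query sketch (the table and column names here are hypothetical):

```sql
-- Hypothetical table/columns: cast the tinyints to string first,
-- since concat() in Hive of this era only accepts string arguments.
SELECT concat(cast(flag1 AS STRING), '-', cast(flag2 AS STRING)) AS combined
FROM my_table;
```

Later Hive versions may coerce the arguments implicitly, but the explicit cast works everywhere.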
On Thu, Nov 14, 2013 at 11:48 PM, Deepak Subhramanian <
deepak.subhraman...@gmail.com> wrote:
> Thanks Nitin. UDF is a good s
Thanks Nitin. UDF is a good solution. I was wondering if there was
built-in support in Hive, since it is the default format for the
Flume Avro sink.
Thanks, Deepak
On Wed, Nov 13, 2013 at 1:15 PM, Nitin Pawar wrote:
> sorry, hit send too soon ..
>
> correction rather than just changing your ta
On Thu, Nov 14, 2013 at 2:50 AM, Jan Van Besien wrote:
> On 11/13/2013 03:04 PM, Brock Noland wrote:
> > The file channel uses a WAL which sits on disk. Each time an event is
> > committed an fsync is called to ensure that data is durable. Without
> > this fsync there is no durability guarantee.
Hello,
We are planning to deploy Hadoop 2 on a cluster with the following machines:
- One machine for Management Server
- One machine for Namenode Server
- One machine for Resource Manager
- n machines for Datanodes
We are unsure where to run the Flume agents in the cluster.
Is r
On 11/13/2013 03:04 PM, Brock Noland wrote:
> The file channel uses a WAL which sits on disk. Each time an event is
> committed an fsync is called to ensure that data is durable. Without
> this fsync there is no durability guarantee. More details here:
> https://blogs.apache.org/flume/entry/apache
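To illustrate the point, here is a minimal sketch (my own, not Flume's actual file channel code) of the commit path being described: durability comes from fsync-ing the WAL before the commit returns.

```python
import os

def commit_event(log_path: str, event: bytes) -> None:
    """Append one length-prefixed event to a write-ahead log, durably.

    Minimal sketch, not Flume's implementation: the fsync is what
    guarantees the bytes survive a crash once this function returns.
    """
    with open(log_path, "ab") as log:
        log.write(len(event).to_bytes(4, "big"))  # simple record framing
        log.write(event)
        log.flush()             # flush Python's userspace buffer to the OS
        os.fsync(log.fileno())  # force the OS page cache onto disk
```

Without the `os.fsync`, a power failure could lose events that the channel had already reported as committed, which is the durability guarantee being discussed above.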