On Sun, Mar 10, 2013 at 7:29 PM, Dan Han dannahan2...@gmail.com wrote:
I would like to unsubscribe now as the email volume is huge. Thanks.
Best Wishes
Dan Han
--
Regards,
Ouch Whisper
010101010101
Hello,
I am trying to write MapReduce jobs to read data from JSON files and load
it into HBase tables.
Please suggest an efficient way to do this. I am trying to do it using the
Spring Data HBase Template to make it thread safe and enable table locking.
I use the Map methods to read and parse the JSON files.
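One way to approach the question above, sketched rather than prescribed: parse each JSON document in the map phase and flatten it to TSV, then bulk-load with HBase's ImportTsv tool instead of issuing per-row client writes. The Python below is a Hadoop-Streaming-style mapper; the field names (id, name, city) and the column mapping are illustrative assumptions, not taken from the thread.

```python
import json
import sys

def json_to_tsv_line(raw, rowkey_field="id"):
    """Flatten one JSON object into a TSV line: row key first, then columns.

    The fields "id", "name" and "city" are placeholders -- substitute the
    fields of your actual documents. Column order must match the
    -Dimporttsv.columns=HBASE_ROW_KEY,d:name,d:city mapping given to ImportTsv.
    """
    doc = json.loads(raw)
    return "\t".join([str(doc[rowkey_field]),
                      str(doc.get("name", "")),
                      str(doc.get("city", ""))])

# Hadoop Streaming feeds one JSON document per line on stdin, so the
# mapper body is just:
#   for line in sys.stdin:
#       if line.strip():
#           print(json_to_tsv_line(line))

print(json_to_tsv_line('{"id": 42, "name": "dan", "city": "nyc"}'))
```

The resulting TSV can then be loaded with `hbase org.apache.hadoop.hbase.mapreduce.ImportTsv`, which sidesteps per-row writes (and the locking concerns above) entirely.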
Hello,
Thank you for the replies.
I have not used Pig yet. I am looking into it. I wanted to implement both
approaches.
Are Pig scripts maintainable? The JSON structure that I will be receiving
will change quite often, almost 3 times a month.
I will be processing 24 million JSON documents.
Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Thu, Feb 7, 2013 at 6:25 PM, Panshul Whisper ouchwhis...@gmail.com
wrote:
Hello,
I was wondering if anyone is using Spring for Hadoop to execute MapReduce
jobs or to perform HBase operations on a Hadoop cluster using Spring Data
for Hadoop.
Please suggest a working example, as I am unable to find any working
sample and the Spring Data documentation is of no use for this.
is a field in Json objects ]
-Anoop-
From: Panshul Whisper [ouchwhis...@gmail.com]
Sent: Wednesday, January 16, 2013 6:36 PM
To: user@hbase.apache.org
Subject: Re: Hbase as mongodb
Hello Tariq,
Thank you for the reply
Hello,
Is it possible to use HBase to query JSON documents in the same way as we can
with MongoDB?
Suggestions please.
If we can, then a small example of how: not the query itself, but the process flow.
Thank you so much.
Regards,
Panshul.
into a sequence of bytes in HBase and query it. Could you
please elaborate on your problem a bit? It will help us answer your question
in a better manner.
Warm Regards,
Tariq
https://mtariq.jux.com/
cloudfront.blogspot.com
On Wed, Jan 16, 2013 at 4:03 PM, Panshul Whisper ouchwhis...@gmail.com
wrote
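Following the reply above, a minimal sketch of the process flow: serialize each JSON document to bytes under a row key, then filter on inner fields client-side after deserializing. An in-memory dict stands in for the HBase table here; real code would issue Put/Get/Scan calls through the Java client or a gateway, and the row keys and fields are illustrative.

```python
import json

# Stand-in for an HBase table: row key -> serialized document bytes.
table = {}

def put_doc(row_key, doc):
    # Store the whole JSON document as a byte sequence, as suggested above.
    table[row_key] = json.dumps(doc).encode("utf-8")

def get_doc(row_key):
    raw = table.get(row_key)
    return None if raw is None else json.loads(raw.decode("utf-8"))

def scan_where(field, value):
    # Unlike MongoDB, HBase has no built-in index on inner JSON fields:
    # a "query" means scanning rows and deserializing each document.
    return [key for key, raw in table.items()
            if json.loads(raw.decode("utf-8")).get(field) == value]

put_doc("user#1", {"name": "panshul", "city": "berlin"})
put_doc("user#2", {"name": "tariq", "city": "delhi"})
```

The scan is the cost of this layout: every query on an inner field touches whole documents, which is why server-side filters or secondary indexes usually enter the picture.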
on the HDFS file system (run a compaction first).
Next, you multiply that by 5, since you have 5x replication, so 5 x 150 GB = 750 GB.
On Jan 11, 2013, at 5:07 AM, Panshul Whisper wrote:
Hello,
I have a 5 node hadoop cluster and a fully distributed Hbase setup on
the
cluster with 130 GB of HDFS space
Hello,
Is it possible to add HBase nodes on the fly,
in the case of a fully distributed setup?
Thanks
Regards,
Ouch Whisper
01010101010
and start to use.
Thank you!
Sincerely,
Leonid Fedotov
On Jan 11, 2013, at 4:02 AM, Panshul Whisper wrote:
Hello,
Is it possible to add Hbase nodes on the fly?
In the case of fully distributed setup.
Thanks
Regards,
Ouch Whisper
01010101010
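On the question of adding nodes on the fly: yes, a region server can join a running fully distributed cluster without a restart. A rough sequence is sketched below for the Hadoop/HBase versions of that period; the hostname node6 and the $HADOOP_HOME/$HBASE_HOME paths are illustrative placeholders.

```shell
# On the master: record the new host so cluster-wide start/stop scripts see it
# ("node6" is a placeholder hostname).
echo "node6" >> "$HBASE_HOME/conf/regionservers"

# On the new node (with the same Hadoop/HBase configs distributed to it):
"$HADOOP_HOME/bin/hadoop-daemon.sh" start datanode
"$HBASE_HOME/bin/hbase-daemon.sh" start regionserver

# Then, from the HBase shell, let the balancer move regions onto it:
#   hbase> balance_switch true
#   hbase> balancer
```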
This is really helpful. Thanks so much for the ideas!
--
Regards,
Ouch Whisper
010101010101
Hello,
I have a 5 node Hadoop cluster and a fully distributed HBase setup on the
cluster, with 130 GB of HDFS space available. HDFS replication is set to 5.
I have a total of 115 GB of JSON files that need to be loaded into the
HBase database and then processed.
So is the available space sufficient?
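As a quick check of the space question, using the replication arithmetic from the reply earlier in the thread (raw data size times the replication factor, ignoring HBase storage overhead and compaction):

```python
# Figures taken from the message above.
raw_gb = 115          # total JSON to load
replication = 5       # HDFS replication factor
available_gb = 130    # HDFS space on the cluster

needed_gb = raw_gb * replication
print(needed_gb)                    # 575
print(needed_gb <= available_gb)    # False: 130 GB is nowhere near enough
```

Even before HBase's own overhead, 115 GB at 5x replication needs 575 GB of raw HDFS capacity, so either more disks or a lower replication factor is required.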