Filed as HBASE-3960 (https://issues.apache.org/jira/browse/HBASE-3960).
-Joey
On Tue, Jun 7, 2011 at 5:04 PM, Stack wrote:
Dunno. Implication is that there is a MR cluster out there just
sitting idle waiting on the occasional job of whimsy (if the data is
'big'). And, we'd have to make our load/dump at least as smart as the
? equivalent (fill in one from the list pig, hive, cascading, etc.)?
Then why not just fire up
Yes, shell could launch MR jobs; I bet a lot of people will find this
feature very useful.
-Jack
On Tue, Jun 7, 2011 at 11:32 AM, Joey Echeverria wrote:
I think Jack was suggesting that the shell would launch the MR jobs.
-Joey
On Jun 7, 2011 1:34 PM, "Stack" wrote:
On Tue, Jun 7, 2011 at 10:20 AM, Jack Levin wrote:
> That would be a real nice feature though, imagine going to the shell,
> and requesting a dump of your table.
>
But it would only work for mickey mouse tables, no? If your
input/output is of any substantial size, wouldn't you want to MR it?
St.
That would be a real nice feature though, imagine going to the shell,
and requesting a dump of your table.
> load into outfile/hdfs '/tmp/table_a' from table_a
Or something similar.
> load from outfile/hdfs -- could power bulk uploader
-Jack
On Mon, Jun 6, 2011 at 9:09 PM, Jack Levin wrote:
>
> Can you hook hive to hbase?
Yes, we have used hbase to hive and back before, but it's not very flexible,
especially going the hbase -> hive route. We would much prefer a bulk uploader
tool for modified tables via Hive map-reduce of TSV or CSV.
-Jack
You could hook up
http://hadoop.apache.org/common/docs/r0.20.1/api/org/apache/hadoop/mapreduce/lib/output/TextOutputFormat.html
to a map that emits TSV lines (use the TSV escaping lib du jour to make
sure tabs are properly escaped, or just search-and-replace tabs in the
source yourself if that won't corrupt it)
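The search-and-replace suggestion above can be sketched as a small helper; the names (`TsvEscape`, `toTsvLine`) are hypothetical and not part of any Hadoop or HBase API, but a map task could run each cell value through something like this before emitting the line to TextOutputFormat:

```java
// Hypothetical helper sketching the "escape or replace tabs yourself"
// approach; not part of any Hadoop API.
public class TsvEscape {
    // Escape the characters that would break a TSV row: backslash first,
    // then tab and newline, so each field stays in its own column.
    static String escape(String field) {
        return field.replace("\\", "\\\\")
                    .replace("\t", "\\t")
                    .replace("\n", "\\n");
    }

    // Join fields into one TSV line, escaping each one on the way.
    static String toTsvLine(String... fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append('\t');
            sb.append(escape(fields[i]));
        }
        return sb.toString();
    }
}
```

Escaping backslash before tab/newline matters: doing it in the other order would double-escape the replacement sequences.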
There is an export tool that exports tables into sequence files; the question
is, what do I do with those sequence files to convert them to text?
-Jack
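One low-tech answer, assuming the files came from the stock Export MapReduce tool: `hadoop fs -text` knows how to decode SequenceFiles, printing each record via its toString(). The path below is made up, and the output is whatever Result.toString() produces (keyvalues=... lines), not clean TSV, so it is more a quick inspection than a Hive-ready dump:

```shell
# Print an Export SequenceFile as text; paths are hypothetical.
hadoop fs -text /tmp/table_a_export/part-m-00000
```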
On Mon, Jun 6, 2011 at 4:56 PM, Bill Graham wrote:
You can do this in a few lines of Pig, check out the HBaseStorage class.
You'll need to know the names of your column families, but besides that it
could be done fairly generically.
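A sketch of what those few lines of Pig might look like, using HBaseStorage as suggested above; the table and column names are made up, and the option syntax is from the Pig 0.8-era API, so check it against your version:

```
-- Hypothetical table/columns; -loadKey pulls the row key in as the first field.
raw = LOAD 'hbase://table_a'
      USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
          'cf1:col1 cf1:col2', '-loadKey true')
      AS (rowkey:chararray, col1:chararray, col2:chararray);
STORE raw INTO '/tmp/table_a' USING PigStorage('\t');
```

PigStorage with a tab delimiter gives exactly the TSV layout Hive's default SerDe expects.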
On Mon, Jun 6, 2011 at 3:57 PM, Jack Levin wrote:
Hello, does anyone have any tools you could share that would take a
table and dump the contents in TSV text format? We want it in TSV
for quick Hive processing that we have in another datamining
cluster. We do not want to write custom map-reduce jobs for hbase
because we already have an exte