Hi All,
We want to use Hadoop archives (HAR) for an internal project, and I just
wanted to know how stable the HAR functionality is.
Is anyone using it in production? If so, did you face any problems with
HAR?
Thanks for any help!
regards,
R
Hi everyone,
Does anyone know if the new Corona tools (which Facebook just released as
open source) are compatible with Hadoop 1.0.x, or only 0.20.x?
Thanks.
Hello everybody,
We are using the CapacityScheduler and Hadoop 0.20.2 (CDH3u3) on a cluster
of 20 nodes, each with:
16 cores, 24 GB memory (AMD 4274), JVM HotSpot 1.7.0_05
We execute scientific jobs as Map/Reduce tasks (using Cascading 1.2). We
use the CapacityScheduler to avoid
I am looking for an example that reads a Snappy-compressed sequence file.
Could someone point me to it? What I have so far is this:
Configuration conf = new Configuration();
FileSystem fs = FileSystem.get(URI.create(uri), conf);
Path path = new Path(uri);
SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
Is it necessary to add the hadoop and hbase site XMLs to the classpath of
the Java client? Is there any other way to configure it, using a general
properties file with key=value pairs?
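For what it's worth, a key=value file can be bridged by hand: load it with java.util.Properties and then copy each pair onto the client's Configuration with conf.set(key, value). A stdlib-only sketch of the loading half follows (the file name and keys are made up for illustration; the conf.set step is left as a comment since it needs the Hadoop/HBase jars):

```java
import java.io.Reader;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ClientConfig {
    // Load key=value pairs from a plain properties file.
    // In a real client you would then copy each pair onto the
    // Hadoop/HBase Configuration, e.g. conf.set(key, value).
    static Properties load(Path file) throws Exception {
        Properties props = new Properties();
        try (Reader in = Files.newBufferedReader(file)) {
            props.load(in);
        }
        return props;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical file; contents mirror typical hbase-site.xml settings.
        Path file = Files.createTempFile("client", ".properties");
        Files.writeString(file,
            "hbase.zookeeper.quorum=zk1,zk2,zk3\n" +
            "fs.defaultFS=hdfs://namenode:8020\n");
        Properties props = load(file);
        System.out.println(props.getProperty("hbase.zookeeper.quorum"));
    }
}
```

This avoids shipping the site XMLs, at the cost of maintaining the key list yourself.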
Hi,
Can anyone point me to an article/blog detailing how Counters are
implemented in HBase?
We get very low throughput for a batch of Increments relative to a batch of
Puts, and I would like to investigate why by understanding the basic
workflow of updating a counter - does it use the block cache?
I was simply able to read it using the code below; I didn't have to
decompress. It looks like the reader automatically detects and decompresses
the file before returning it to the user.
On Mon, Nov 12, 2012 at 3:16 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
I am looking for an example that read snappy
I found the following:
http://cloudfront.blogspot.com/2012/06/hbase-counters-part-i.html
http://palominodb.com/blog/2012/08/24/distributed-counter-performance-hbase-part-1
Which HBase version are you using?
Cheers
On Mon, Nov 12, 2012 at 4:21 PM, Mesika, Asaf asaf.mes...@gmail.com wrote:
Hi Mohit,
You can input everything in a directory. See step 12 at this link.
http://raseshmori.wordpress.com/
On Mon, Nov 12, 2012 at 5:40 PM, Mohit Anchlia mohitanch...@gmail.com wrote:
Using Java dfs api is it possible to read all the files in a directory? Or
do I need to list all the
When submitting a job, ToolRunner or JobClient just distributes your jars
to HDFS so that the TaskTrackers can launch/re-run the tasks.
In your case, you should regenerate your dynamic classes in the
mapper/reducer's setup method, or the runtime classloader will miss them
all.
On Tue, Nov 13, 2012 at
Hi,
My Hadoop cluster logs at the INFO level, and it writes every
block-related message to the HDFS logs. I would like to keep the INFO
level but suppress the block-related messages. Is that possible? Can one
control Hadoop logging per component?
Thank you. Sincerely,
Mark
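Yes - Hadoop 1.x uses log4j, which is configured per logger name, so you can raise the threshold for just the chatty block-state logger while the root level stays at INFO. A sketch for conf/log4j.properties (the logger name is an assumption based on the stateChangeLog used in that era; check the logger names printed in your own log lines):

```
# Keep the default level at INFO for everything else
log4j.rootLogger=INFO,console

# Block state-change messages go through a dedicated logger;
# raising it to WARN suppresses the per-block INFO lines
log4j.logger.org.apache.hadoop.hdfs.StateChange=WARN
```

The change takes effect on daemon restart.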
I was actually looking for an example of doing it in Java code, but I
think I've found a way to do it by iterating over all the files using the
globStatus() method.
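For anyone searching the archives later, a minimal sketch of that approach against the Hadoop 1.x FileSystem API (the URI is hypothetical; needs the hadoop-core jar and a reachable cluster to actually run):

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReadAllFiles {
    public static void main(String[] args) throws Exception {
        String uri = "hdfs://namenode:8020/data/input"; // hypothetical path
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(URI.create(uri), conf);
        // Match every entry directly under the directory.
        FileStatus[] statuses = fs.globStatus(new Path(uri + "/*"));
        for (FileStatus status : statuses) {
            if (!status.isDir()) { // skip subdirectories
                System.out.println(status.getPath());
                // open and read each file here, e.g. fs.open(status.getPath())
            }
        }
    }
}
```

listStatus(Path) would work equally well here; globStatus just lets you filter by pattern in the same call.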
On Mon, Nov 12, 2012 at 5:50 PM, yinghua hu yinghua...@gmail.com wrote:
Hi, Mohit
You can input everything in a directory. See the
Try copying the configuration files from Hadoop and HBase into each
other's conf directories.
Regards,
Mohammad Tariq
On Tue, Nov 13, 2012 at 5:04 AM, Mohit Anchlia mohitanch...@gmail.com wrote:
Is it necessary to add hadoop and hbase site xmls in the classpath of the
java client? Is there any other way we can
There are two candidates:
1) You need to copy your Hadoop/HBase configuration files, such as
core-site.xml, hdfs-site.xml, or hbase-site.xml, from the etc or
conf subdirectory of the Hadoop/HBase installation directory into the Java
project directory. The Hadoop/HBase configuration will then be loaded
automatically.
Hi,
I am new to hadoop.
I am trying to use
for (StatisticsCollector.TimeWindow window :
    tracker.getStatistics().collector.DEFAULT_COLLECT_WINDOWS) {
  JobTrackerStatistics.TaskTrackerStat ttStat =
      tracker.getStatistics().getTaskTrackerStat(tt.getTrackerName());
  out.println("</td><td>" +