Can you tell us the hadoop release you're using ?
Seems there is an inconsistency in the protobuf library.
On Mon, Mar 3, 2014 at 8:01 AM, Margusja mar...@roo.ee wrote:
Hi
I don't even know what information to provide, but my container log is:
2014-03-03 17:36:05,311 FATAL [main]
Have you run the following command under the root of your workspace ?
mvn eclipse:eclipse
On Mar 3, 2014, at 9:18 PM, nagarjuna kanamarlapudi
nagarjuna.kanamarlap...@gmail.com wrote:
Hi,
I checked out the hadoop trunk from
http://svn.apache.org/repos/asf/hadoop/common/trunk.
I set up
Have you looked at
http://www.gedis-studio.com/online-call-detail-records-cdr-generator.html ?
On Sat, Mar 1, 2014 at 7:39 AM, John Lilley john.lil...@redpoint.net wrote:
I would like to explore Call Data Record (CDR aka Call Detail Record)
analysis, and to that end I'm looking for a large
27, 2014 at 10:16 PM, Ted Yu yuzhih...@gmail.com wrote:
You're using 0.94, right ?
RowLock has been dropped since 0.96.0
Can you tell us more about your use case ?
On Thu, Feb 27, 2014 at 9:56 PM, Shailesh Samudrala
shailesh2...@gmail.com wrote:
I'm running a sample code I wrote to test
You can start from here:
http://wiki.apache.org/hadoop/HowToContribute
See this prior response:
http://search-hadoop.com/m/FZpRqM7Jsc
Cheers
On Thu, Feb 27, 2014 at 9:05 PM, Avinash Kujur avin...@gmail.com wrote:
I am new to Hadoop. What are the issues I should start working with? I
need
You're using 0.94, right ?
RowLock has been dropped since 0.96.0
Can you tell us more about your use case ?
On Thu, Feb 27, 2014 at 9:56 PM, Shailesh Samudrala
shailesh2...@gmail.com wrote:
I'm running a sample code I wrote to test HBase lockRow() and unlockRow()
methods.
The sample code
Which hadoop release are you using ?
Cheers
On Thu, Feb 20, 2014 at 8:57 PM, ch huang justlo...@gmail.com wrote:
hi, maillist:
I see the following info in my HDFS log, and the block belongs to
the file written by Scribe. I do not know why.
Is there any limit in the HDFS system?
See https://hadoop.apache.org/mailing_lists.html#User
On Thu, Feb 13, 2014 at 9:39 AM, Scott Kahler skah...@adknowledge.com wrote:
Unsubscribe
What's the value for io.compression.codecs config parameter ?
Thanks
On Tue, Feb 11, 2014 at 10:11 PM, Li Li fancye...@gmail.com wrote:
I am running the wordcount example but encountered an exception:
I googled and learned that LZO compression's license is incompatible with
Apache's, so it's not built in.
</value>
</property>
On Thu, Feb 13, 2014 at 2:54 AM, Ted Yu yuzhih...@gmail.com wrote:
What's the value for io.compression.codecs config parameter ?
Thanks
On Tue, Feb 11, 2014 at 10:11 PM, Li Li fancye...@gmail.com wrote:
I am running the wordcount example but encountered an exception:
I
See https://issues.apache.org/jira/browse/HBASE-10303
And https://hbase.apache.org/book.html#snappy.compression
Cheers
On Feb 11, 2014, at 4:16 AM, Yves Weissig weis...@uni-mainz.de wrote:
Hi list,
I'm trying to enable the Hadoop native library and the snappy library
for compression in
You're welcome.
On Feb 9, 2014, at 6:27 AM, John Lilley john.lil...@redpoint.net wrote:
Thanks! I would have never found that.
john
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Monday, January 27, 2014 4:57 PM
To: common-u...@hadoop.apache.org
Subject: Re: HDFS read stats
For hadoop version, you can use the hadoop command:
echo "Usage: hadoop [--config confdir] COMMAND"
...
echo "  version              print the version"
On Sun, Feb 9, 2014 at 8:32 AM, Raj Hadoop hadoop...@yahoo.com wrote:
All,
Is there any way from the command prompt I can find which hive
For Hive, you can use:
bin/hive --version
Cheers
On Sun, Feb 9, 2014 at 8:48 AM, Raj Hadoop hadoop...@yahoo.com wrote:
Thanks Ted.
Also - I am looking for to find the Hive version
On Sunday, February 9, 2014 11:39 AM, Ted Yu yuzhih...@gmail.com
wrote:
For hadoop version, you can
There isn't a System.exit call in TestMRJobsWithHistoryService.java
What
did
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/surefire-reports/org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService.txt
say ?
Cheers
On Thu, Feb 6, 2014 at 4:41 PM,
, Time elapsed: 52.669 sec
On 7 February 2014 14:12, Ted Yu yuzhih...@gmail.com wrote:
The output was
from
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/target/surefire-reports/org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService-output.txt
Can you
TestMRJobsWithHistoryService.
Sloppy terminology I know, sorry if I wasn't very clear.
Regards
Chris
On 7 February 2014 11:53, Ted Yu yuzhih...@gmail.com wrote:
There isn't a System.exit call in TestMRJobsWithHistoryService.java
What
did
hadoop-mapreduce-project/hadoop-mapreduce-client
*From:* Ted Yu [mailto:yuzhih...@gmail.com]
*Sent:* Sunday, January 26, 2014 6:16 PM
*To:* common-u...@hadoop.apache.org
*Subject:* Re: HDFS read stats
Please take a look at DFSInputStream#ReadStatistics which contains four
metrics including local bytes read.
You can obtain ReadStatistics
fix, and I can find
FileSystem$Statistics class in 2.2.0 but it only seems to talk about
read/write ops and bytes, not the local-vs-remote bytes. What am I missing?
John
*From:* Ted Yu [mailto:yuzhih...@gmail.com]
*Sent:* Sunday, January 26, 2014 10:26 AM
*To:* common-u
these important
folders are missing in 2.2 and also in the release version.
Can I use the lib and libexec folders of 0.23 with 2.2?
On Thu, Jan 16, 2014 at 10:52 PM, Ted Yu yuzhih...@gmail.com wrote:
From 0.96 pom.xml:
<hadoop-two.version>2.2.0</hadoop-two.version>
Meaning, 0.96.1.1-hadoop2
See http://hadoop.apache.org/mailing_lists.html#User
On Thu, Jan 16, 2014 at 3:53 AM, Fernando Iwamoto - Plannej
fernando.iwam...@plannej.com.br wrote:
Please send email to user-unsubscr...@hadoop.apache.org
See http://hadoop.apache.org/mailing_lists.html#User
On Tue, Jan 14, 2014 at 6:42 PM, Aijas Mohammed
aijas.moham...@infotech-enterprises.com wrote:
Dear All,
Thanks for the support.
Thanks Regards,
Aijas Mohammed
Cycling recent bits:
http://search-hadoop.com/m/Q2G2R1TJruq1/Unable+to+load+native-hadoop+librarysubj=Re+Unable+to+load+native+hadoop+library
On Mon, Jan 13, 2014 at 9:12 AM, Michael sjp120...@gmail.com wrote:
How to remove the warning in the subject?
Putting common-user to bcc since this is an HBase related question.
Which version of HBase are you using ?
Can you say a bit more about the cluster you're connecting to ?
When your client hangs at the last line, can you get jstack and pastebin it
?
Thanks
On Mon, Jan 13, 2014 at 7:00 PM, Mark
using the latest HBase installed with Cloudera.
By the way, should I ask on a different mailing list?
Thank you,
Mark
On Mon, Jan 13, 2014 at 9:08 PM, Ted Yu yuzhih...@gmail.com wrote:
Putting common-user to bcc since this is an HBase related question.
Which version of HBase are you
Can you utilize the following API ?
public FileStatus[] listStatus(Path f, PathFilter filter)
Cheers
On Sat, Jan 11, 2014 at 3:52 PM, John Lilley john.lil...@redpoint.net wrote:
Is there an HDFS file system method for listing a directory contents
iteratively, or at least stopping at some
2.4.0 release. For 2.2.0, is there any way
to reach the individual task container logs?
John
*From:* Ted Yu [mailto:yuzhih...@gmail.com]
*Sent:* Saturday, January 04, 2014 10:47 AM
*To:* common-u...@hadoop.apache.org
*Subject:* Re: YARN log access
YARN-649 is targeted at 2.4.0 release
Can you pastebin the stack trace involving the NPE ?
Thanks
On Jan 4, 2014, at 9:25 AM, Manikandan Saravanan
manikan...@thesocialpeople.net wrote:
Hi,
I’m trying to run Nutch 2.2.1 on a Hadoop 2-node cluster. My Hadoop cluster
is running fine and I’ve successfully added the input and
Please send email to user-unsubscr...@hadoop.apache.org
On Sat, Jan 4, 2014 at 6:59 PM, Brent Nikolaus bnikol...@gmail.com wrote:
Please take a look at http://hbase.apache.org/book.html#snappy.compression
Cheers
On Wed, Jan 1, 2014 at 8:05 AM, Amit Sela am...@infolinks.com wrote:
Hi all,
I'm running on Hadoop 1.0.4 and I'd like to use Snappy for map output
compression.
I'm adding the configurations:
You can find subscribe mail Ids on this page:
http://hadoop.apache.org/mailing_lists.html
On Mon, Dec 30, 2013 at 12:10 AM, sunqp qipeng@gmail.com wrote:
You can find subscribe mail Ids on this page:
http://hadoop.apache.org/mailing_lists.html
On Dec 24, 2013, at 6:55 AM, 李立伟 li-li...@outlook.com wrote:
· Subscribe to List
Are your data nodes running as user 'hdfs', or 'mapred' ?
If the former, you need to increase file limit for 'hdfs' user.
Cheers
On Sat, Dec 21, 2013 at 8:30 AM, sam liu samliuhad...@gmail.com wrote:
Hi Experts,
We failed to run an MR job which accesses hive, as hdfs is unable to
create
In 0.94, see src/main/java/org/apache/hadoop/hbase/mapreduce/RowCounter.java
Cheers
On Wed, Dec 18, 2013 at 9:34 PM, Ranjini Rathinam ranjinibe...@gmail.com wrote:
Hi,
Need to write a mapreduce program to count the number of rows in a table.
Please suggest an example.
Thanks in
Have you set umask to 022 ?
See https://issues.apache.org/jira/browse/HDFS-2556
Cheers
On Tue, Dec 17, 2013 at 3:12 PM, Karim Awara karim.aw...@kaust.edu.sa wrote:
Hi,
I am running Junit test on hadoop 2.2.0 on eclipse on mac os x. Whenever I
run the test, I am faced with the following
.hdfs.server.common.Storage.getBuildVersion!
I feel I am missing some parameters?
--
Best Regards,
Karim Ahmed Awara
On Sun, Dec 15, 2013 at 5:08 AM, Ted Yu yuzhih...@gmail.com wrote:
You can use the following command to generate .project files for Eclipse
(at the root of your workspace):
mvn clean package -DskipTests
Kishore:
Some NoSQL from your initial post, such as mongodb, is not built on top of
hdfs.
See:
http://www.ikanow.com/blog/02/15/how-well-does-mongodb-integrate-with-hadoop/
Cheers
On Sun, Dec 15, 2013 at 5:42 AM, Peter Lin wool...@gmail.com wrote:
your question doesn't make any sense. Did
at 5:59 PM, Ted Yu yuzhih...@gmail.com wrote:
Can you show us the full stack trace ?
In Eclipse, was there any project shown with a red bang or red cross ?
Cheers
On Sun, Dec 15, 2013 at 2:26 AM, Karim Awara karim.aw...@kaust.edu.sa wrote:
It tells me java.lang.ExceptionInInitializerError
If you search under hadoop-hdfs-project/hadoop-hdfs/src/test, you would see
a lot of tests which use MiniDFSCluster
e.g.
cluster = new MiniDFSCluster.Builder(conf).numDataNodes(3).build();
hadoop-hdfs-project/hadoop-hdfs/src/test//java/org/apache/hadoop/hdfs/TestWriteRead.java
Cheers
On
a detailed source where it explains how to run Junit
through Eclipse for hadoop 2.2.x?
--
Best Regards,
Karim Ahmed Awara
On Sun, Dec 15, 2013 at 2:55 AM, Ted Yu yuzhih...@gmail.com wrote:
If you search under hadoop-hdfs-project/hadoop-hdfs/src/test, you would
see a lot of tests which
Siddharth :
Take a look at 2.1.2.5. ulimit and nproc under
http://hbase.apache.org/book.html#os
Cheers
On Wed, Nov 27, 2013 at 6:04 PM, Azuryy Yu azury...@gmail.com wrote:
Yes, you need to increase it; a simple way is to put it in your /etc/profile.
On Thu, Nov 28, 2013 at 9:59 AM,
Can you show us the classpath ?
Cheers
On Tue, Nov 26, 2013 at 2:40 AM, Srinivas Chamarthi
srinivas.chamar...@gmail.com wrote:
I have the following error while running 2.2.0 using Cygwin. Can anyone
help with the problem?
/cygdrive/c/hadoop-2.2.0/bin
$ ./hdfs namenode -format
Which platform did you perform the build on ?
I was able to build trunk on Mac.
I found the following dependency in dependency tree output:
[INFO] +-
org.apache.directory.server:apacheds-jdbm-partition:jar:2.0.0-M15:compile
[INFO] | \-
, Nov 18, 2013 at 10:51 AM, Azuryy Yu azury...@gmail.com wrote:
Ted,
I am on Linux.
On 2013-11-19 1:30 AM, Ted Yu yuzhih...@gmail.com wrote:
Which platform did you perform the build on ?
I was able to build trunk on Mac.
I found the following dependency in dependency tree output:
[INFO
Take a look at HBASE-3996
Cheers
On Nov 17, 2013, at 5:35 AM, samir das mohapatra samir.help...@gmail.com
wrote:
Dear hadoop/hbase developer
Did Anyone work with Hbase mapreduce with multiple table as input ?
Any url-link or example will help me alot.
Thanks in advance.
From the command line, can you run 'jmap -heap' ?
http://download.oracle.com/javase/1.5.0/docs/tooldocs/share/jmap.html
On Fri, Nov 15, 2013 at 10:50 AM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Hi guys,
I had JT OOME in hadoop version 1.2.1 and applied the patch based on the
fix
For #1, I see the following in output of javap:
public synchronized void seek(long) throws java.io.IOException;
which is described here:
http://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FSDataInputStream.html#seek(long)
On Tue, Nov 12, 2013 at 3:08 PM, John Lilley
You should package your class in a jar file.
Cheers
On Nov 11, 2013, at 1:05 AM, ch huang justlo...@gmail.com wrote:
here is my Java code. I compiled it and ran it in the test env, and it's OK,
but when I run it in the product env, I get this error info:
package com.hadoop.export;
import
I
compiled was 1.2.1 but it generated as 1.2.2 snapshot version jar. Is the
snapshot version because of changes in the source?
Shall I use that jar in a production environment? If yes, will that cause any
issue?
Please help.
Thanks,
On Oct 20, 2013 7:59 PM, Ted Yu yuzhih...@gmail.com wrote
If I read Lars' comment on the JIRA correctly, HBASE-8912's target was moved to
0.94.13
It is still open. Meaning, if there is no patch, the target may move to next
release.
Cheers
On Oct 17, 2013, at 2:25 AM, Boris Emelyanov emelya...@post.km.ru wrote:
Hello! I've just upgraded my hadoop
Karim:
If you want to debug unit tests, using Eclipse is a viable approach.
Here is what I did the past week debugging certain part of hadoop
(JobSubmitter in particular) through an HBase unit test.
Run 'mvn install -DskipTests' to install hadoop locally
Open the class you want to debug and place
Please send email to user-unsubscr...@hadoop.apache.org
Cheers
On Sun, Sep 22, 2013 at 8:49 AM, YaoYao keyao...@gmail.com wrote:
Please send email to user-unsubscr...@hadoop.apache.org
Cheers
On Sep 1, 2013, at 5:00 AM, Sandeesh nadellasande...@gmail.com wrote:
Unsubscribe
Please send email to:
user-subscr...@hadoop.apache.org
On Sat, Aug 31, 2013 at 12:36 PM, Surendra , Manchikanti
surendra.manchika...@gmail.com wrote:
-- Surendra Manchikanti
Pavan:
Did you use TableInputFormat or its variant ?
If so, take a look at TableSplit and how it is used in
TableInputFormatBase#getSplits().
Cheers
On Sun, Aug 25, 2013 at 2:36 PM, Jens Scheidtmann
jens.scheidtm...@gmail.com wrote:
Hi Pavan,
2. ) If my table is in the order of millions,
New features can be found here:
https://blogs.apache.org/pig/
I found the above URL through http://search-hadoop.com/m/ib1SlsHMtb1
Cheers
On Sat, Aug 24, 2013 at 8:21 AM, Viswanathan J
jayamviswanat...@gmail.com wrote:
Had sent mail to pig user group but no response.
On Aug 24, 2013 10:47
Please look at the example in 15.1.1 under
http://hbase.apache.org/book.html#tools
On Fri, Aug 23, 2013 at 1:41 PM, Botelho, Andrew andrew.bote...@emc.com wrote:
I am trying to use the function HBaseStorage() in my Pig code in order to
load an HBase table into Pig.
When I run my
http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/DeprecatedProperties.html
On Wed, Aug 21, 2013 at 6:00 PM, ch huang justlo...@gmail.com wrote:
thanks ,all
Can you check the config entry
for yarn.scheduler.capacity.resource-calculator ?
It should point
to org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator
bq. I was able to fix all issues
What other issues came up ?
Thanks
On Sun, Aug 11, 2013 at 2:07 PM, Rob Blah tmp5...@gmail.com
In Configuration class, you should be able to find addDeprecation() methods.
Below is the result of quick search where addDeprecation() is called.
Configuration.addDeprecation("topology.script.file.name",
Configuration.addDeprecation("topology.script.number.args",
If you look at pom.xml for 0.94, you should see hadoop-1.1 and hadoop-1.2
profiles.
Those hadoop releases (1.1.2 and 1.2.0, respectively) should work.
On Wed, Aug 7, 2013 at 12:13 PM, oc tsdb oc.t...@gmail.com wrote:
Hi,
I need to create a opentsdb cluster which needs hbase and hadoop.
I
For scheduling mechanism please take a look at oozie.
Cheers
On Jul 22, 2013, at 10:37 PM, Balamurali balamurali...@gmail.com wrote:
Hi,
I configured hadoop-1.0.3, hbase-0.92.1 and hive-0.10.0 .
Created a table in HBase. Inserted records. Processing the data using Hive.
I have to show a
See this thread also:
http://search-hadoop.com/m/3pgakkVpm71/Distributed+Cache+omkarsubj=Re+Distributed+Cache
On Fri, Jul 19, 2013 at 6:20 AM, Botelho, Andrew andrew.bote...@emc.com wrote:
I have been using Job.addCacheFile() to cache files in the distributed
cache. It has been working for me
You should use Job#addCacheFile()
Cheers
On Tue, Jul 9, 2013 at 3:02 PM, Botelho, Andrew andrew.bote...@emc.com wrote:
Hi,
I was wondering if I can still use the DistributedCache class in the
latest release of Hadoop (Version 2.0.5).
In my driver class, I use this code to
Take a look here: http://search-hadoop.com/m/FXOOOTJruq1
On Tue, Jul 2, 2013 at 3:25 PM, Chui-Hui Chiu cch...@tigers.lsu.edu wrote:
Hello,
I have a Hadoop 2.0.5 Alpha cluster. When I execute any Hadoop command, I
see the following message.
WARN util.NativeCodeLoader: Unable to load
Have you looked at http://hbase.apache.org/book.html#zookeeper ?
Thanks
On Wed, Jun 26, 2013 at 5:09 PM, ch huang justlo...@gmail.com wrote:
I changed ZooKeeper from 2181 to 2281; it causes the HBase region
server to auto-close a while after it starts.
Can anyone help?
Rams:
For hadoop related log directories, you can use ps command to see the
command line of namenode.
You would see the log dir in the command line, e.g.:
-Dhadoop.log.dir=/homes/zy/deploy/hadoop-common-2.0.5-SNAPSHOT/logs
Cheers
On Wed, Jun 5, 2013 at 8:38 AM, Jean-Marc Spaggiari
Looking at the tip of 0.94:
private boolean checkTable(HBaseAdmin admin) throws IOException {
HTableDescriptor tableDescriptor = getTableDescriptor();
if (this.presplitRegions > 0) {
// presplit requested
if (admin.tableExists(tableDescriptor.getName())) {
What's the output of:
protoc --version
You should be using 2.4.1
Cheers
On Wed, May 29, 2013 at 11:33 AM, John Lilley john.lil...@redpoint.net wrote:
Sorry if this is a dumb question, but I’m not sure where to start. I am
following BUILDING.txt instructions for source checked out today
I assume region server was running fine on hostname/IP:60020
BTW what HBase version are you using ?
Thanks
On Tue, May 14, 2013 at 8:49 PM, Manoj S manoj.sundara...@gmail.com wrote:
Hi,
I am trying to benchmark hbase with YCSB.
# ./bin/ycsb load hbase -p columnfamily=family -P
Can you tell us which HBase version you are using ?
Did you issue table creation command from HBase shell ?
Cheers
On Mon, May 6, 2013 at 11:04 AM, Rajeev Yadav rajeya...@gmail.com wrote:
Hi All,
I am a newbie to hadoop and hbase, so I need your help on this.
I am trying to create table in
I looked under my local maven repo and didn't see source code along
side hadoop-core-1.1.2.jar
Can you check out the 1.1.2 source code ?
Cheers
On Wed, May 1, 2013 at 6:58 AM, Oleg Ruchovets oruchov...@gmail.com wrote:
Hi
I have hadoop-core 1.1.2 and hadoop-test 1.1.2 as dependency in my
, 2013 at 5:23 PM, Ted Yu yuzhih...@gmail.com wrote:
I looked under my local maven repo and didn't see source code along
side hadoop-core-1.1.2.jar
Can you check out the 1.1.2 source code ?
Cheers
On Wed, May 1, 2013 at 6:58 AM, Oleg Ruchovets oruchov...@gmail.com wrote:
Hi
I have hadoop
bq. 'java -cp /usr/lib/hbase/hbase...
Instead of hard coding class path, can you try specifying `hbase classpath`
?
Cheers
On Mon, Apr 29, 2013 at 5:52 AM, Shahab Yunus shahab.yu...@gmail.com wrote:
Hello,
This might be something very obvious that I am missing but this has been
bugging me
In hadoop-yarn-project/hadoop-yarn/bin/yarn , you can find:
YARN_OPTS="$YARN_OPTS -Dhadoop.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
YARN_OPTS="$YARN_OPTS -Dyarn.root.logger=${YARN_ROOT_LOGGER:-INFO,console}"
Meaning you can override logger setting through YARN_ROOT_LOGGER
environment variable.
2.0.4-alpha is being released.
To my knowledge it passed the votes yesterday.
FYI
On Apr 20, 2013, at 5:10 AM, Hemanth Yamijala yhema...@thoughtworks.com wrote:
2.x.x provides NN high availability.
I think this question would be more appropriate for HBase user mailing list.
Moving hadoop user to bcc.
Please tell us the HBase version you are using.
Thanks
On Mon, Apr 15, 2013 at 6:51 PM, dylan dwld0...@gmail.com wrote:
Hi
I am a newer for hadoop, and set up hadoop with
Jay:
Harsh is correct.
Take a look at
http://mvnrepository.com/artifact/org.apache.hadoop/hadoop-common and you
will see what versions have artifacts in the repo.
On Sat, Apr 6, 2013 at 2:00 PM, Harsh J ha...@cloudera.com wrote:
I don't think we publish nightly or rolling jars anywhere on
FileSystem is an abstract class, what concrete class are you using
(DistributedFileSystem, etc) ? For FileSystem, I find the following for
create() method:
* but the implementation is thread-safe. The other option is to change
the
* value of umask in configuration to be 0, but it is not
This question is more related to mapreduce.
I put user@hbase in Bcc.
Cheers
On Sun, Mar 31, 2013 at 11:15 AM, tojaneyang xia_y...@dell.com wrote:
Hi Ted,
Do you have any suggestions for this?
I am using hadoop which is packaged within hbase -0.94.1. It is hadoop
1.0.3.
Thanks,
Xia
From http://msdn.microsoft.com/en-us/library/cc278097(v=sql.100).aspx :
The new technology employed is based on bitmap filters, also known as *Bloom
filters *(see *Bloom filter, *Wikipedia 2007,
http://en.wikipedia.org/wiki/Bloom_filter) ...
HBase uses bloom filters extensively. I can give
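The Bloom filter idea is small enough to sketch in plain Java. This is an illustrative toy (the bit count, hash count, and the double-hashing scheme are arbitrary choices for the example, not HBase's implementation):

```java
import java.util.BitSet;

// Toy Bloom filter: k hash positions over an m-bit array.
// False positives are possible; false negatives are not.
class ToyBloomFilter {
    private final BitSet bits;
    private final int m;   // number of bits
    private final int k;   // number of hash functions

    ToyBloomFilter(int m, int k) {
        this.bits = new BitSet(m);
        this.m = m;
        this.k = k;
    }

    // Derive the i-th hash position from hashCode via simple double hashing.
    private int hash(Object o, int i) {
        int h1 = o.hashCode();
        int h2 = h1 >>> 16;
        return Math.floorMod(h1 + i * (h2 | 1), m);
    }

    void add(Object o) {
        for (int i = 0; i < k; i++) bits.set(hash(o, i));
    }

    boolean mightContain(Object o) {
        for (int i = 0; i < k; i++) {
            if (!bits.get(hash(o, i))) return false; // definitely absent
        }
        return true; // possibly present
    }
}
```

A row key that was added is always reported as possibly present; a key that maps to an unset bit is definitely absent, which is how HBase can skip reading store files that cannot contain a row.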
From http://www.javapractices.com/topic/TopicAction.do?Id=10 :
consistency with equals is required for ensuring sorted collections (such
as TreeSet) are well-behaved.
On Wed, Mar 27, 2013 at 8:16 PM, Sai Sai saigr...@yahoo.in wrote:
IntPair class has these 2 methods, i understand that
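The consistency requirement can be demonstrated with a hypothetical IntPair whose compareTo looks at only one field while equals looks at both (a sketch for illustration, not the poster's actual class):

```java
import java.util.Objects;

// compareTo orders by 'first' only, while equals compares both fields:
// compareTo is NOT consistent with equals, so sorted collections misbehave.
class IntPair implements Comparable<IntPair> {
    final int first, second;

    IntPair(int first, int second) {
        this.first = first;
        this.second = second;
    }

    @Override public int compareTo(IntPair o) {
        return Integer.compare(first, o.first); // ignores 'second'
    }

    @Override public boolean equals(Object o) {
        return o instanceof IntPair
            && first == ((IntPair) o).first
            && second == ((IntPair) o).second;
    }

    @Override public int hashCode() {
        return Objects.hash(first, second);
    }
}
```

A TreeSet, which uses compareTo, treats (1, 2) and (1, 3) as duplicates and keeps only one, while a HashSet, which uses equals/hashCode, keeps both; the two collections disagree about the set's contents.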
See
http://stackoverflow.com/questions/1353309/java-static-vs-non-static-inner-class
I believe Josh Bloch covers this in his famous book.
On Wed, Mar 27, 2013 at 9:01 PM, Sai Sai saigr...@yahoo.in wrote:
In some examples/articles sometimes they use:
public static class MyMapper
and
Take a look at Effective Java 2nd edition:
Item 22: Favor static member classes over nonstatic
On Wed, Mar 27, 2013 at 9:05 PM, Ted Yu yuzhih...@gmail.com wrote:
See
http://stackoverflow.com/questions/1353309/java-static-vs-non-static-inner-class
I believe Josh Bloch covers this in his
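A minimal illustration of the difference (class names are made up for the example):

```java
// A static member class needs no enclosing instance; a non-static inner
// class carries a hidden reference to one.
class Outer {
    static class StaticMember {
        String where() { return "no outer instance needed"; }
    }

    class Inner {
        String where() { return "bound to an Outer instance"; }
    }
}
```

A StaticMember can be created directly with `new Outer.StaticMember()`, while an Inner requires `outer.new Inner()`. One common reason MapReduce examples declare `public static class MyMapper` is that the framework instantiates the class reflectively and cannot supply an enclosing driver instance.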
The answer to second question would be subjective.
Do you have specific use case in mind ?
Thanks
On Wed, Mar 20, 2013 at 9:07 AM, oualid ait wafli oualid.aitwa...@gmail.com
wrote:
Hi,
Which is the best HBase or Cassandra ?
Which are the criteria to compare those tools( HBase and
files and store them
any idea ?
thanks
2013/3/20 Ted Yu yuzhih...@gmail.com
The answer to second question would be subjective.
Do you have specific use case in mind ?
Thanks
On Wed, Mar 20, 2013 at 9:07 AM, oualid ait wafli
oualid.aitwa...@gmail.com wrote:
Hi,
Which is the best
From src/test/org/apache/hadoop/mapred/GenericMRLoadGenerator.java, looks
like it is used to generate IndirectSplit's:
public InputSplit[] getSplits(JobConf job, int numSplits)
throws IOException {
Path src = new Path(job.get("mapred.indirect.input.file", null));
FileSystem
Have you logged a JIRA ?
If not, open one and attach patch there.
Cheers
On Thu, Mar 7, 2013 at 1:53 PM, Алексей Бабутин
zorlaxpokemon...@gmail.com wrote:
Hi,
I have changed jetty6 to jetty7, but I don't know where to send the patch.
Where can I send it for review?
Cheers,
Alexey Babutin.
The following JIRAs are related to your research:
HADOOP-9331: Hadoop crypto codec framework and crypto codec implementations
https://issues.apache.org/jira/browse/hadoop-9331 and related sub-tasks
MAPREDUCE-5025: Key Distribution and Management for supporting crypto codec
in Map
: TaskStatus Exception using HFileOutputFormat
Using the below construct, do you still get exception ?
Correct, I am still getting this exception.
Sean
From: Ted Yu yuzhih...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Tuesday, February 5, 2013 7:50 PM
To: user
you for your help!!!
Sean
From: Ted Yu yuzhih...@gmail.com
Reply-To: user@hadoop.apache.org user@hadoop.apache.org
Date: Wednesday, February 6, 2013 2:25 PM
To: user@hadoop.apache.org user@hadoop.apache.org
Subject: Re: TaskStatus Exception using HFileOutputFormat
Thanks
: 1.0.3
HBase: 0.92.0
I guess you have used the above construct
Our code is as follows:
HTable table = new HTable(conf, configHBaseTable);
FileOutputFormat.setOutputPath(job, outputDir);
HFileOutputFormat.configureIncrementalLoad(job, table);
Thanks!
From: Ted Yu yuzhih
Where did you checkout the code from ?
You can get latest update in this JIRA:
HBASE-7290 Online snapshots
Cheers
On Fri, Feb 1, 2013 at 8:53 AM, YouPeng Yang yypvsxf19870...@gmail.com wrote:
Hi
I have got the latest source from Git.
when I perform mvn install -DskipTests.
it was
I found the following:
http://cloudfront.blogspot.com/2012/06/hbase-counters-part-i.html
http://palominodb.com/blog/2012/08/24/distributed-counter-performance-hbase-part-1
Which hbase version are you using ?
Cheers
On Mon, Nov 12, 2012 at 4:21 PM, Mesika, Asaf asaf.mes...@gmail.com wrote:
I think Bookkeeper should be included as well.
On Sat, Jan 28, 2012 at 7:59 AM, Ayad Al-Qershi alqer...@gmail.com wrote:
I'm compiling a list of all Hadoop ecosystem/sub projects ordered
alphabetically and I need your help if I missed something.
1. Ambari
2. Avro
3. Cascading
Same with Solr and Lily.
On Sat, Jan 28, 2012 at 8:09 AM, Ted Yu yuzhih...@gmail.com wrote:
I think Bookkeeper should be included as well.
On Sat, Jan 28, 2012 at 7:59 AM, Ayad Al-Qershi alqer...@gmail.com wrote:
I'm compiling a list of all Hadoop ecosystem/sub projects ordered
want to point out there's a dedicated forum for SHDP, if the
discussion becomes too Spring specific.
[1] http://forum.springsource.org/forumdisplay.php?80-NoSQL
On 12/30/2011 12:14 PM, Ted Yu wrote:
Hi, Costin:
I work on HBase.
I went over
http://static.springsource.org/spring
Which hadoop version are you using ?
If it is 0.20.2, mapred.reduce.parallel.copies is the number of copying
threads in ReduceTask
In the scenario you described, at least 2 concurrent connections to a single
node would be made.
I am not familiar with newer versions of hadoop.
On Tue, Jun 28,
Questions 2 and 3 can be answered relatively easily:
Remember, the output of the combiner is going to be consumed by the reducer.
So the output key/value classes of the combiner have to align with the input
key/value classes of the reducer.
On Mon, May 23, 2011 at 11:32 AM, Mike Spreitzer
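The type-alignment constraint can be sketched with plain Java generics; MiniReducer below is a simplified stand-in for Hadoop's Reducer interface, not the real API:

```java
import java.util.List;

// Simplified stand-in for Hadoop's Reducer<KEYIN,VALUEIN,KEYOUT,VALUEOUT>.
interface MiniReducer<KIN, VIN, KOUT, VOUT> {
    void reduce(KIN key, List<VIN> values, List<KOUT> outKeys, List<VOUT> outVals);
}

// Word-count style combiner: sums partial counts per word.
// Its output types (String, Integer) must match the reducer's input
// types, because the reducer consumes the combiner's output.
class SumCombiner implements MiniReducer<String, Integer, String, Integer> {
    public void reduce(String key, List<Integer> values,
                       List<String> outKeys, List<Integer> outVals) {
        int sum = 0;
        for (int v : values) sum += v;
        outKeys.add(key);
        outVals.add(sum);
    }
}
```

If the combiner emitted, say, (String, Long) while the reducer expected (String, Integer), the pipeline's types would no longer line up; in this sketch that mismatch is a compile error in the generic parameters.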
Cycling bits:
http://search-hadoop.com/m/O7sT4278lbG/but+it+seems+a+trade+off+with+the+number+of+files+that+have+to+be+shuffled+for+thesubj=RE+HDFS+block+size+v+s+mapred+min+split+size
On Fri, Mar 18, 2011 at 12:54 PM, Pedro Costa psdc1...@gmail.com wrote:
Hi
What's the purpose of the