e too many rows.
>
> On 2013/8/26 4:36, Pavan Sudheendra wrote:
>
> Another question: why does it indicate the number of mappers as 1? Can I
>> change it so that multiple mappers perform the computation?
>>
>
>
--
Thanks & Regards,
Anil Gupta
pache.hadoop.ipc.Server$Handler$1.run(Server.java:1693)
> at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1689)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGr
Hadoop is not a database, so why would you make that comparison?
HBase vs. a traditional RDBMS might sound OK.
On Feb 25, 2013 5:15 AM, "Oleg Ruchovets" wrote:
> Hi ,
> Can you please share the advantages of Hadoop vs. a traditional relational DB.
> A link to short presentations or benchmarks would be great.
>
some problem.
Can someone tell me the correct way to deserialize the output file of the
mapper? Or is there some problem with my code?
Here is the link to my initial stab at RecordReader:
https://dl.dropbox.com/u/64149128/ImmutableBytesWritable_Put_RecordReader.java
--
Thanks & Regards,
Anil Gupta
at 12:20 PM, Panshul Whisper
> wrote:
>
>> Hello,
>>
>> Can someone please suggest a Change Management System suitable for Hadoop
>> deployed projects?
>>
>> Thanking You,
>>
>> --
>> Regards,
>> Ouch Whisper
>> 010101010101
>>
>
>
--
Thanks & Regards,
Anil Gupta
at ApplicationMaster$SectionLeaderRunnable.run(ApplicationMaster.java:825)
at java.lang.Thread.run(Thread.java:736)
You might need to increase the heap size of the ApplicationMaster.
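For an MRv2 job, one way to do that is via mapred-site.xml; for a custom YARN application like the one in this thread, the -Xmx would instead go into the AM's container launch command. A sketch (the property name exists in Hadoop 2.x; the 1 GB value is an assumption, not from the thread):

```xml
<!-- mapred-site.xml: JVM options for the MapReduce ApplicationMaster.
     The -Xmx value is an assumed example. -->
<property>
  <name>yarn.app.mapreduce.am.command-opts</name>
  <value>-Xmx1024m</value>
</property>
```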
HTH,
Anil Gupta
On Mon, Jan 14, 2013 at 4:35 AM, Krishna Kishore Bonagiri <
writ
>>>> --
>>>> *From: * Panshul Whisper
>>>> *Date: *Mon, 14 Jan 2013 17:25:08 -0800
>>>> *To: *
>>>> *ReplyTo: * user@hadoop.apache.org
>>>> *Subject: *hadoop namenode recovery
>>>>
>>>> Hello,
>>>>
>>>> Is there a standard way to prevent a Namenode crash in a
>>>> Hadoop cluster?
>>>> Or what is the standard or best practice for overcoming the single
>>>> point of failure problem of Hadoop?
>>>>
>>>> I am not ready to take chances on a production server with the Hadoop 2.0
>>>> Alpha release, which claims to have solved the problem. Are there any other
>>>> things I can do to either prevent the failure or recover from it
>>>> in a very short time?
>>>>
>>>> Thanking You,
>>>>
>>>> --
>>>> Regards,
>>>> Ouch Whisper
>>>> 010101010101
>>>>
>>>
>>>
>>>
>>> --
>>> Regards,
>>> Ouch Whisper
>>> 010101010101
>>>
>>
>>
>>
>> --
>> Regards,
>> Ouch Whisper
>> 010101010101
>>
>
>
>
> --
> Regards,
> Ouch Whisper
> 010101010101
>
--
Thanks & Regards,
Anil Gupta
l number of rows and
> demanded # of maps.
>
> How are you passing -Dmapred.map.tasks=20 (no spaces) exactly? All
> generic options must go in before any other options do, so it should
> appear right after the word "teragen" in your command.
>
> On Wed, Dec 26, 2012 a
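The ordering described above can be sketched as follows (jar path, row count, and output directory are assumptions, not from the thread):

```shell
# Sketch: the generic -D option must come immediately after the program
# name "teragen", before the positional arguments (row count, output dir).
cmd='hadoop jar hadoop-examples.jar teragen -Dmapred.map.tasks=20 1000000000 /user/hdfs/teragen-out'
echo "$cmd"
```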
r
> default.
>
> Also, what version of Hadoop are you asking your question around? The
> property mapreduce.cluster.temp.dir does not exist/is not available in
> 1.x and is irrelevant in 2.x. It seems to be a legacy property that is
> no longer utilized.
>
> On Wed, Dec 19
Hi Yogesh,
As others have said, Hadoop vs Cassandra is not a fair comparison.
HBase vs Cassandra, however, is a fair comparison. You can have a look at this
comparison: http://bigdatanoob.blogspot.com/2012/11/hbase-vs-cassandra.html
HTH,
Anil Gupta
On Thu, Dec 6, 2012 at 11:27 AM, Colin McCabe
the HBase mailing list also since this is more about HBase.
Hope This Helps,
Anil Gupta
On Thu, Nov 29, 2012 at 8:51 PM, Lance Norskog wrote:
> Please! There are lots of blogs etc. about the two, but very few
> head-to-head comparisons for a real use case.
>
> ------
If you can share some details then the HBase community will try to
help you out.
Thanks,
Anil Gupta
On Wed, Nov 28, 2012 at 9:55 AM, jeff l wrote:
> Hi,
>
> I have quite a bit of experience with RDBMSs ( Oracle, Postgres, Mysql )
> and MongoDB but don't feel any are quite right for
integration with
the Hadoop ecosystem, so you can do a lot with HBase data using Hadoop
tools. HBase integrates with Hive for querying, but AFAIK it has some
limitations.
HTH,
Anil Gupta
On Sun, Nov 25, 2012 at 4:52 AM, Mahesh Balija
wrote:
> Hi Jeff,
>
> As HDFS paradigm
>
--
Thanks & Regards,
Anil Gupta
Hi visioner,
It won't work on pseudo-distributed because you need to run at least 2 NNs, and
if you run 2 NNs then you need to configure them separately on different ports.
It's a little tricky to do on one node. Multiple VMs is the cleanest solution.
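To illustrate the port separation, a hypothetical hdfs-site.xml fragment (the nameservice ID "mycluster" and all port numbers are assumptions) might look like:

```xml
<!-- Two NameNodes on one host must use distinct RPC and HTTP ports. -->
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>localhost:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>localhost:8021</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>localhost:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>localhost:50071</value>
</property>
```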
Best Regards,
Anil
On Oct 17, 2012, at 12:04 AM, Vis
ur.
>
> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode:
> java.io.IOException: Incompatible namespaceIDs
>
> Thank you !!!
>
>
> ****
> *Cheers !!!*
> *Siddharth Tiwari*
> Have a refreshing day !!!
> *"Every duty is holy, and devotion to duty is the highest form of worship
> of God.” *
> *"Maybe other people will try to limit me but I don't limit myself"*
>
--
Thanks & Regards,
Anil Gupta
>
--
Thanks & Regards,
Anil Gupta
If possible, try to run netstat as sudo.
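A minimal sketch of the kind of check being suggested (the port numbers are the ones discussed in the thread; 60010 is the default HBase master web UI, 60030 the region server web UI):

```shell
# Filter a netstat listing for the default HBase info ports. Running
# netstat under sudo lets it report which process owns each port.
check_hbase_ports() {
  grep -E ':(60010|60030)([^0-9]|$)'
}

# Typical use:
#   sudo netstat -tlnp | check_hbase_ports
```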
On Thu, Aug 30, 2012 at 11:21 AM, anil gupta wrote:
> Can you also try to run telnet and netstat for ports 60030 and 60010? I
> don't see ports 60030 and 60010 in the output of netstat. Did you configure
> some other ports for the HBase Master?
IT-1 hbasedemos]$ telnet 192.265.47.222 2181
> Trying 192.265.47.222...
> Connected to 192.265.47.222.
> Escape character is '^]'.
>
>
> Connection closed by foreign host.
> [rtit@RTIT-1 hbasedemos]$
>
>
>
>
> On Thu, Aug 30, 2012 at 8:58 PM, anil gu
d Hadoop Ecosystem. I am
> interested in joining this group to share and learn things.
>
> --
> Thanks and Regards
> Nagamallikarjuna
>
>
--
Thanks & Regards,
Anil Gupta
Hi,
AFAIK, these properties are being ignored by YARN:
- mapreduce.tasktracker.map.tasks.maximum,
- mapreduce.tasktracker.reduce.tasks.maximum
Thanks,
Anil Gupta
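For context, a hedged sketch of the YARN-side replacements (the property names exist in Hadoop 2.x; the values are assumptions): instead of fixed map/reduce slots per TaskTracker, YARN sizes containers by resources, so per-node concurrency falls out of total memory divided by per-task memory.

```xml
<!-- yarn-site.xml: total memory the NodeManager may hand out (assumed value) -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>8192</value>
</property>

<!-- mapred-site.xml: per-task container sizes (assumed values) -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>2048</value>
</property>
```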
On Thu, Aug 16, 2012 at 9:28 AM, mg wrote:
> Hi,
>
> I am currently trying to tune a CDH 4.0.1 (i~ hadoop 2.0.0-alpha
Hi Gaurav,
Did you turn off speculative execution?
Best Regards,
Anil
On Aug 16, 2012, at 7:13 AM, Gaurav Dasgupta wrote:
> Hi users,
>
> I am working on a CDH3 cluster of 12 nodes (Task Trackers running on all the
> 12 nodes and 1 node running the Job Tracker).
> In order to perform a Word
er servers.
> Is it advisable to run the zk process alongside the name node process for
> production?
>
> What factors do I need to look into to decide if this an option for us?
>
> Thanks.
> VV
--
Thanks & Regards,
Anil Gupta
ion to use (cdh3u3 or cdh4).
Personally, I have used both cdh3u2 and cdh4. Recently, I completed setting
up a fully distributed cluster of cdh4 with HA for the Namenode, Zookeeper, and
the HBase Master. HA for the Namenode is a big advantage with Hadoop-2.0.0.
HTH,
Anil Gupta
On Tue, Aug 14, 2012 at 6:
>
> tel: +1 408 400 3721
>
--
Thanks & Regards,
Anil Gupta
Hi Aji,
Adding onto what Mohammad Tariq said: if you use Hadoop 2.0.0-Alpha,
then the Namenode is not a single point of failure. However, Hadoop 2.0.0 is
not of production quality yet (it's in Alpha).
The Namenode used to be a single point of failure in releases prior to Hadoop
2.0.0.
HTH,
Anil Gupta
On
What's this, buddy??
On Wed, Aug 8, 2012 at 10:13 PM, mohamad hosein jafari <
smhjafar...@gmail.com> wrote:
>
>
--
Thanks & Regards,
Anil Gupta
e" as message body can be
blocked by filters. It seems like never-ending, annoying stuff.
Thanks,
Anil
On Wed, Aug 8, 2012 at 10:06 PM, Saniya Khalsa wrote:
>
>
--
Thanks & Regards,
Anil Gupta
To unsubscribe, you have to send a mail to
user-unsubscr...@hadoop.apache.org with the subject line "unsubscribe".
Please read the mailing list instructions before sending emails.
On Wed, Aug 8, 2012 at 8:48 PM, Jianjun Wu wrote:
>
> unsubscribe
>
--
Thanks & Regards,
Anil Gupta
, 2012 at 4:11 PM, Hennig, Ryan wrote:
>
>> Hello,
>>
>> ** **
>>
>> I’m thinking about building a hadoop cluster to analyze all the
>> unsubscribe mails that people mistakenly send to this address. How many PB
>> of storage will I need?
>>
>
This link might provide you some more information:
http://mail-archives.apache.org/mod_mbox/hadoop-common-user/201203.mbox/%3ccaau13zhcmeuthva9opn9memyve9shxsd1gsdznzows3qrqz...@mail.gmail.com%3E
HTH,
Anil
On Wed, Aug 8, 2012 at 12:56 AM, anil gupta wrote:
> Hi Prabhu,
>
> Did you
r.datanode.DataNode.secureMain(DataNode.java:1665)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1682)
>
> Please help me on this issue.
>
> Thanks,
> Prabhu.
>
--
Thanks & Regards,
Anil Gupta