many rows.
On 2013/8/26 4:36, Pavan Sudheendra wrote:
Another question: why does it indicate the number of mappers as 1? Can I
change it so that multiple mappers perform the computation?
--
Thanks Regards,
Anil Gupta
-Djava.security.krb5.realm=yourrealm
-Djava.security.krb5.kdc=yourkdc and also ensure your Mac is
configured for the cluster's kerberos (i.e. via a krb5.conf or so)?
On Mon, Jun 17, 2013 at 9:56 AM, anil gupta anilgupt...@gmail.com wrote:
Hi All,
I am trying to connect to a secure Hadoop/HBase cluster. I wrote
)*
I have googled this problem but I cannot find a solution for fixing the *
SCDynamicStore*-related problem in Eclipse on Mac. I want to run this code
from Eclipse. Please let me know if anyone knows the trick to resolve this
problem on Mac. This is a really annoying problem.
--
Thanks Regards,
Anil
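For anyone hitting the same SCDynamicStore error from Eclipse on a Mac, here is a minimal sketch of the workaround discussed above: set the krb5 realm/KDC system properties in code (or pass the equivalent -D flags in the Eclipse run configuration) before logging in. The realm, KDC host, principal, and keytab path below are placeholders, not values from this thread.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureClusterLogin {
  public static void main(String[] args) throws Exception {
    // Mac workaround: tell the JVM the realm and KDC explicitly; this is the
    // programmatic equivalent of the -Djava.security.krb5.* flags above.
    System.setProperty("java.security.krb5.realm", "YOURREALM.COM");    // placeholder
    System.setProperty("java.security.krb5.kdc", "kdc.yourrealm.com");  // placeholder

    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Placeholder principal and keytab; substitute your own.
    UserGroupInformation.loginUserFromKeytab(
        "client@YOURREALM.COM", "/path/to/client.keytab");
    System.out.println("Logged in as " + UserGroupInformation.getCurrentUser());
  }
}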
)
at javax.security.auth.Subject.doAs(Subject.java:396)
at
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1332)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1687)
Regards,
samir.
--
Thanks Regards,
Anil Gupta
Hadoop is not a database. So, why would you do a comparison?
HBase vs. a traditional RDBMS might sound OK.
On Feb 25, 2013 5:15 AM, Oleg Ruchovets oruchov...@gmail.com wrote:
Hi,
Can you please share Hadoop advantages vs. a traditional relational DB?
A link to short presentations or benchmarks
://dl.dropbox.com/u/64149128/ImmutableBytesWritable_Put_RecordReader.java
--
Thanks Regards,
Anil Gupta
, Panshul Whisper
ouchwhis...@gmail.com wrote:
Hello,
Can someone please suggest a Change Management System suitable for
Hadoop-deployed projects?
Thanking You,
--
Regards,
Ouch Whisper
010101010101
--
Thanks Regards,
Anil Gupta
the failure or recover from the failure
in a very short time.
Thanking You,
--
Regards,
Ouch Whisper
010101010101
--
Thanks Regards,
Anil Gupta
$SectionLeaderRunnable.run(ApplicationMaster.java:825)
at java.lang.Thread.run(Thread.java:736)
You might need to increase the heap size of the ApplicationMaster.
HTH,
Anil Gupta
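As a hedged sketch of one way to raise the ApplicationMaster heap when the job is a plain MapReduce-on-YARN job (the property names are the standard MR2 ones; the sizes are placeholders): if the ApplicationMaster here is a hand-written one, the -Xmx instead has to go into the container launch command that the client builds.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class AmHeapSketch {
  public static Job newJob() throws Exception {
    Configuration conf = new Configuration();
    // Placeholder sizes; tune to the cluster. The container must be large
    // enough to hold the -Xmx heap plus JVM overhead.
    conf.setInt("yarn.app.mapreduce.am.resource.mb", 2048);       // AM container size (MB)
    conf.set("yarn.app.mapreduce.am.command-opts", "-Xmx1536m");  // heap inside that container
    return Job.getInstance(conf, "am-heap-sketch");
  }
}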
On Mon, Jan 14, 2013 at 4:35 AM, Krishna Kishore Bonagiri
write2kish...@gmail.com wrote:
Hi,
I am
, what version of Hadoop are you asking your question around? The
property mapreduce.cluster.temp.dir does not exist/is not available in
1.x and is irrelevant in 2.x. It seems to be a legacy property that is
no longer utilized.
On Wed, Dec 19, 2012 at 12:15 AM, anil gupta anilgupt...@gmail.com
Hi Yogesh,
As others have said, Hadoop vs. Cassandra is not a fair comparison. However,
HBase vs. Cassandra is a fair comparison. You can have a look at this
comparison: http://bigdatanoob.blogspot.com/2012/11/hbase-vs-cassandra.html
HTH,
Anil Gupta
On Thu, Dec 6, 2012 at 11:27 AM, Colin McCabe
mailing list also since this is more about HBase.
Hope This Helps,
Anil Gupta
On Thu, Nov 29, 2012 at 8:51 PM, Lance Norskog goks...@gmail.com wrote:
Please! There are lots of blogs etc. about the two, but very few
head-to-head for a real use case.
--
*From: *anil
share some details, then the HBase community will try to
help you out.
Thanks,
Anil Gupta
On Wed, Nov 28, 2012 at 9:55 AM, jeff l jeff.pubm...@gmail.com wrote:
Hi,
I have quite a bit of experience with RDBMSs (Oracle, Postgres, MySQL)
and MongoDB, but I don't feel any are quite right for this problem
integration with the
Hadoop ecosystem, so you can do a lot of stuff with HBase data using Hadoop
tools. HBase has integration with Hive querying, but AFAIK it has some
limitations.
HTH,
Anil Gupta
On Sun, Nov 25, 2012 at 4:52 AM, Mahesh Balija
balijamahesh@gmail.com wrote:
Hi Jeff
classes.
--
Jay Vyas
http://jayunit100.blogspot.com
--
Thanks Regards,
Anil Gupta
!!!
****
*Cheers !!!*
*Siddharth Tiwari*
Have a refreshing day !!!
*“Every duty is holy, and devotion to duty is the highest form of worship
of God.” *
*Maybe other people will try to limit me but I don't limit myself*
--
Thanks Regards,
Anil Gupta
--
Thanks Regards,
Anil Gupta
at java.lang.Thread.run(Thread.java:662)
Aug 30, 2012 6:46:50 PM org.apache.zookeeper.ClientCnxn$EventThread run
INFO: EventThread shut down
Please suggest how I can resolve this.
Thank You,
Jilani
--
Thanks Regards,
Anil Gupta
,
I already disabled the Linux firewall using the iptables service.
Thank You,
Jilani
On Thu, Aug 30, 2012 at 8:35 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Jilani,
It seems like a firewall issue. You will need to open appropriate ports
or
disable the firewall on the machine you
are not running or, as Anil is suggesting, the
connectivity between machines needs fixing (it looks like everything binds to
localhost.. can you fix that?). Once your connectivity is fixed, then
try running HBase.
St.Ack
--
Thanks Regards,
Anil Gupta
the netstat command like this: sudo netstat
-alnp. sudo is used to run a command with root privileges.
~Anil
On Thu, Aug 30, 2012 at 3:54 PM, anil gupta anilgupt...@gmail.com wrote:
In addition to Stack's suggestion, use DNS names instead of IP addresses in
the configuration of Hadoop and HBase. It's
, 2012 at 8:58 PM, anil gupta anilgupt...@gmail.com wrote:
Then, it might be an issue with port binding. Try to telnet to the port
on which HBase listens, from localhost as well as from a remote machine.
Also, try to run the netstat command and see the bindings of the service.
~Anil
On Thu, Aug 30
If possible, try to run netstat as sudo.
On Thu, Aug 30, 2012 at 11:21 AM, anil gupta anilgupt...@gmail.com wrote:
Can you also try to run telnet and netstat for ports 60030 and 60010? I
don't see ports 60030 and 60010 in the output of netstat. Did you configure
some other ports for HBase
to join this group to share and learn things.
--
Thanks and Regards
Nagamallikarjuna
--
Thanks Regards,
Anil Gupta
zookeeper servers.
Is it advisable to run the ZK process alongside the NameNode process in
production?
What factors do I need to look into to decide if this is an option for us?
Thanks.
VV
--
Thanks Regards,
Anil Gupta
Hi,
AFAIK, these properties are being ignored by YARN:
- mapreduce.tasktracker.map.tasks.maximum,
- mapreduce.tasktracker.reduce.tasks.maximum
Thanks,
Anil Gupta
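As a rough illustration of what replaces those properties under YARN/MR2: there are no fixed map/reduce slots any more, so per-node concurrency falls out of the task container sizes versus the node capacity (yarn.nodemanager.resource.memory-mb in yarn-site.xml). A minimal client-side sketch, with placeholder sizes:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class Mr2TaskMemorySketch {
  public static Job newJob() throws Exception {
    Configuration conf = new Configuration();
    // Placeholder sizes: a node with yarn.nodemanager.resource.memory-mb = 8192
    // would run at most 8 of these 1 GB map containers at a time.
    conf.setInt("mapreduce.map.memory.mb", 1024);      // container size per map task
    conf.setInt("mapreduce.reduce.memory.mb", 2048);   // container size per reduce task
    conf.set("mapreduce.map.java.opts", "-Xmx800m");   // JVM heap inside the map container
    return Job.getInstance(conf, "mr2-task-memory-sketch");
  }
}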
On Thu, Aug 16, 2012 at 9:28 AM, mg userformailingli...@gmail.com wrote:
Hi,
I am currently trying to tune a CDH 4.0.1 (i
??
On Thu, Aug 9, 2012 at 10:40 AM, Mike Lyon mike.l...@gmail.com wrote:
How hard would it be to block **ALL** messages with unsubscribe in the
title?
--
Mike Lyon
408-621-4826
mike.l...@gmail.com
http://www.linkedin.com/in/mlyon
--
Thanks Regards,
Anil Gupta
Hi Folks,
I would appreciate it if someone could share their views on the problem below. I
am going to file a JIRA for the same. If someone thinks that I am
missing something (or my conf is incorrect), then please let me know.
Thanks,
Anil Gupta
On Wed, Aug 1, 2012 at 10:56 AM, anil gupta anilgupt
/MAPREDUCE-4508
Please let me know if anything else is required for the JIRA.
Thanks,
Anil Gupta
On Tue, Jul 31, 2012 at 11:26 AM, anil gupta anilgupt...@gmail.com wrote:
Hi Harsh and Others,
I was able to run the job when I log in as user hdfs. However, it fails
if I run it as root. I was suspecting
property i need to set for YARN?
--
Thanks Regards,
Anil Gupta
the problem with error code?
I strongly feel that there is a major bug in YARN when we try to run it
with less memory. I already identified one a couple of days ago (
http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/33110)
--
Thanks Regards,
Anil Gupta
Hi Harsh and Others,
I was able to run the job when I log in as user hdfs. However, it fails if
I run it as root. I was suspecting this as a problem before as well, and it
turned out to be true.
Thanks,
Anil gupta
On Mon, Jul 30, 2012 at 9:21 PM, abhiTowson cal
abhishek.dod...@gmail.com wrote:
Hi
, 2012 at 4:52 AM, anil gupta anilgupt...@gmail.com wrote:
Hi Harsh,
Thanks a lot for your response. I am going to try your suggestions and
let
you know the outcome.
I am running the cluster on a VMware hypervisor. I have 3 physical machines
with 16GB of RAM, and 4TB (2 HDs of 2TB each
-Reduce tasks are started by the cluster. I don't see any errors
anywhere in the application. Please help me in resolving this problem.
Thanks,
Anil Gupta
--
Thanks Regards,
Anil Gupta
or doesn't belong to this node at all.
Please let me know.
Thanks,
Anil Gupta
On Mon, Jul 30, 2012 at 7:30 PM, abhiTowson cal
abhishek.dod...@gmail.com wrote:
Hi Anil,
Adding these helped me resolve the issue:
yarn.resourcemanager.resource-tracker.address
Regards
Abhishek
On Mon, Jul 30
JAVA_HEAP_MAX=-Xmx1000m
Regards
Abhishek
On Mon, Jul 30, 2012 at 10:47 PM, anil gupta anilgupt...@gmail.com
wrote:
Hi Abhishek,
Did you mean that adding yarn.resourcemanager.resource-tracker.address
along with yarn.log-aggregation-enable in my configuration will resolve
the
problem in which
)
snapshot=0
12/07/29 13:36:02 INFO mapred.JobClient: Total committed heap
usage (bytes)=124715008
12/07/29 13:36:02 INFO mapred.JobClient:
org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
Regards
Abhishek
--
Harsh J
--
Thanks Regards,
Anil Gupta
.
Regards
Abhishek
On Sun, Jul 29, 2012 at 3:05 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Abhishek,
Once you have made sure that whatever Harsh said in the previous email is
present in the cluster, if the job still runs in Local Mode, then try
running the job with the hadoop --config option
-u hdfs hadoop classpath
sudo -u yarn hadoop classpath
sudo -u $installation_user hadoop classpath
~Anil
On Sun, Jul 29, 2012 at 12:20 PM, abhiTowson cal
abhishek.dod...@gmail.com wrote:
Hi Anil,
I am using CDH4 with YARN.
On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta anilgupt...@gmail.com
class path is also working fine.
Regards
Abhishek
Thanks for
On Sun, Jul 29, 2012 at 3:20 PM, abhiTowson cal
abhishek.dod...@gmail.com wrote:
Hi Anil,
I am using CDH4 with YARN.
On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta anilgupt...@gmail.com
wrote:
Are you using CDH4
name)
~Anil
On Sun, Jul 29, 2012 at 1:08 PM, abhiTowson cal
abhishek.dod...@gmail.com wrote:
Hi Anil,
Thanks for the reply. Same as in your case, my pi job is halted and there
is no progress.
Regards
Abhishek
On Sun, Jul 29, 2012 at 3:31 PM, anil gupta anilgupt...@gmail.com wrote:
Hi Abhishek
it, and what's it seems weird and scary to me.
-- Abe (Grandpa) Simpson
--
Thanks Regards,
Anil Gupta
:23 PM, Harsh J ha...@cloudera.com wrote:
Can you share your yarn-site.xml contents? Have you tweaked memory
sizes in there?
On Fri, Jul 27, 2012 at 11:53 PM, anil gupta anilgupt...@gmail.com
wrote:
Hi All,
I have a Hadoop 2.0 alpha (CDH4) Hadoop/HBase cluster running on
CentOS 6.0
this will be the right approach for
my cluster environment?
Also, on a side note, shouldn't the NodeManager throw an error on this kind
of memory problem? Should I file a JIRA for this? It just sat quietly over
there.
Thanks a lot,
Anil Gupta
On Fri, Jul 27, 2012 at 3:36 PM, Harsh J ha...@cloudera.com wrote
(BlockReceiver.java:577)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:494)
at
org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:183)
--
Thanks Regards,
Anil Gupta
Forgot to mention:
Hadoop version: Hadoop 2.0.0-cdh4.0.0
On Wed, Jun 13, 2012 at 12:16 PM, anil gupta anilgupt...@gmail.com wrote:
Hi All
I am using CDH4 for running an HBase cluster on CentOS 6.0. I have 5
nodes in my cluster (2 Admin Nodes and 3 DNs).
My ResourceManager is up and running
%29onaCluster-Step3
On 06/13/2012 03:16 PM, anil gupta wrote:
Hi All
I am using CDH4 for running an HBase cluster on CentOS 6.0. I have 5
nodes in my cluster (2 Admin Nodes and 3 DNs).
My ResourceManager is up and running and showing that all three DNs are
running the NodeManager. HDFS is also working
-site.xml:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
but no effect! Do you have any idea how I can solve my problem?
--
Thanks Regards,
Anil Gupta
it?
if I type hadoop -version, as a result I have:
java version 1.6.0_26
Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
Java HotSpot(TM) Server VM (build 20.1-b02, mixed mode)
and I downloaded hadoop 0.20.2
On Wed, Jun 6, 2012 at 6:41 PM, anil gupta anilgupt...@gmail.com wrote:
Babak
..
Appreciate your suggestions !!!
Thanks,
Srinivas Surasani
--
Thanks Regards,
Anil Gupta
@Amit: if the DN is getting its IP from DHCP, then the IP address might change
after a reboot.
Dynamic IPs in the cluster are not a good choice, IMO.
Best Regards,
Anil
On Apr 30, 2012, at 8:22 PM, Amith D K amit...@huawei.com wrote:
Hi Sumadhur,
As you mentioned, configuring the NN and JT IP
it is called a client-side property - it applies
per-job).
If HBase strongly recommends turning it off, HBase should also, by
default, turn it off for its own offered jobs?
On Sat, Mar 31, 2012 at 4:02 AM, anil gupta anilg...@buffalo.edu wrote:
Hi Doug,
Yes, that's why I had set
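Assuming the client-side property being discussed above is speculative execution (the setting the HBase MapReduce docs recommend turning off for jobs that write to HBase), here is a minimal per-job sketch; the job name is a placeholder and the same per-job pattern applies if a different property was meant.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class NoSpeculationJobSketch {
  public static Job newJob() throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "hbase-writing-job");  // placeholder name
    // Per-job (client-side) settings: duplicate speculative attempts can
    // double-write into HBase, so switch them off for this job only.
    job.setMapSpeculativeExecution(false);     // mapreduce.map.speculative
    job.setReduceSpeculativeExecution(false);  // mapreduce.reduce.speculative
    return job;
  }
}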
Goryunov a.goryu...@gmail.com
Hi Anil,
Yes, the second table is distributed, the first is not, and I have 3x better
results for the non-distributed table.
I use distributed Hadoop mode in all cases.
Thanks.
On Fri, Mar 30, 2012 at 3:26 AM, anil gupta anilg...@buffalo.edu wrote:
Hi Alexander
activity is a little low. Just wondering if
there's a better chat channel for Hadoop other than the official one
(#hadoop on freenode)?
In any case... I'm on there :) come say hi.
--
Jay Vyas
MMSB/UCHC
--
Todd Lipcon
Software Engineer, Cloudera
--
Thanks Regards,
Anil Gupta
Hi Alexander,
Is the data properly distributed over the cluster in distributed mode? If the
data is not, then you won't get good results in distributed mode.
Thanks,
Anil Gupta
On Thu, Mar 29, 2012 at 8:37 AM, Alexander Goryunov a.goryu...@gmail.com wrote:
Hello,
I'm running 3 data node cluster
Have a look at the NLineInputFormat class in Hadoop. That class will serve your
purpose (see the sketch below the quoted message).
Best Regards,
Anil
On Mar 20, 2012, at 11:07 PM, Jane Wayne jane.wayne2...@gmail.com wrote:
I have a matrix that I am performing operations on. It is 10,000 rows by
5,000 columns. The total size of the file
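A minimal sketch of the NLineInputFormat suggestion above, using the Hadoop 2 mapreduce API; the mapper body, input/output paths, and job name are placeholders, not anything from this thread.

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class OneRowPerMapper {

  // Placeholder mapper: each map() call gets one full line, i.e. one matrix row.
  public static class RowMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable offset, Text row, Context ctx)
        throws IOException, InterruptedException {
      // ... operate on the 5,000 column values in this row ...
      ctx.write(new Text(Long.toString(offset.get())), row);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "one-row-per-split");
    job.setJarByClass(OneRowPerMapper.class);
    job.setMapperClass(RowMapper.class);
    job.setNumReduceTasks(0);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);

    job.setInputFormatClass(NLineInputFormat.class);
    NLineInputFormat.setNumLinesPerSplit(job, 1);           // one line per split/mapper
    NLineInputFormat.addInputPath(job, new Path(args[0]));  // matrix file (placeholder arg)
    TextOutputFormat.setOutputPath(job, new Path(args[1])); // output dir (placeholder arg)

    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}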
, InputFormat and RecordReader
to achieve this? I would appreciate any example code :)
Best,
Deepak
--
Thanks Regards,
Anil Gupta
...
Thanks
--
Regards
Tousif
+918050227279
--
Thanks Regards,
Anil Gupta
balancer
Do I also need to run dfsadmin -refreshnodes?
--
Thanks Regards,
Anil Gupta
--
Thanks Regards,
Anil Gupta
This post might be helpful for you:
https://groups.google.com/a/cloudera.org/group/cdh-user/browse_thread/thread/4165f39d8b0bbc56
On Thu, Feb 9, 2012 at 11:42 AM, Anil Gupta anilgupt...@gmail.com wrote:
Hi,
I have dealt with this kind of problem earlier.
Check the logs of the datanode as well
outputs (1 slow hosts
and 0 dup hosts)
2012-02-03 16:43:10,050 INFO org.apache.hadoop.mapred.ReduceTask:
Penalized(slow) Hosts:
2012-02-03 16:43:10,050 INFO org.apache.hadoop.mapred.ReduceTask:
hadoopdata3 Will be considered after: 39 seconds.
--
Thanks Regards,
Anil Gupta
If you use VMware and create VMs, then you can do it.
Best Regards,
Anil
On Feb 1, 2012, at 8:22 PM, Arun Prakash ckarunprak...@gmail.com wrote:
I have a Windows machine. I am trying to install Hadoop with multiple data
nodes, like a cluster, on a single machine. Is it possible?
Best Regards
Arun
Do you have enough data to start more than one mapper?
If the entire data is less than a block size, then only 1 mapper will run.
Best Regards,
Anil
On Feb 1, 2012, at 4:21 PM, Mark Kerzner mark.kerz...@shmsoft.com wrote:
Hi,
I have a simple MR job, and I want each Mapper to get one line from my
Yes, if your block size is 64 MB. BTW, block size is configurable in Hadoop (see the sketch below the quoted message).
Best Regards,
Anil
On Feb 1, 2012, at 5:06 PM, Mark Kerzner mark.kerz...@shmsoft.com wrote:
Anil,
do you mean one block of HDFS, like 64MB?
Mark
On Wed, Feb 1, 2012 at 7:03 PM, Anil Gupta anilgupt...@gmail.com
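Related to the block-size point above, here is a small sketch of how the number of mappers can be influenced from the job side with FileInputFormat split-size settings; by default one split is roughly one HDFS block, so a file smaller than a block yields a single mapper. The 8 MB figure is just a placeholder.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

public class SplitSizeSketch {
  public static Job newJob(Path input) throws Exception {
    Job job = Job.getInstance(new Configuration(), "split-size-sketch");
    FileInputFormat.addInputPath(job, input);
    // Lowering the maximum split size forces more, smaller splits and hence
    // more mappers, even for a file that fits inside one HDFS block.
    FileInputFormat.setMaxInputSplitSize(job, 8L * 1024 * 1024);  // placeholder value
    return job;
  }
}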
?
(Also, slightly OT, but you need to fix this:)
Do not use IPs in your fs location. Do the following instead:
1. Append an entry to /etc/hosts, across all nodes:
192.168.1.99 nn-host.remote nn-host
2. Set fs.default.name to hdfs://nn-host.remote
On Tue, Jan 31, 2012 at 3:18 AM, anil gupta
Hi Hema,
I had set-up a Hadoop cluster in which the name has hyphen character and it
works fine. So, it don't think this problem is related to hyphen character.
The problem is related to your Hadoop classpath settings. So, check your
Hadoop classpath.
I don't have experience of running the