nd add
> "-Djava.security.krb5.realm=yourrealm
> -Djava.security.krb5.kdc=yourkdc" and also ensure your Mac is
> configured for the cluster's Kerberos (i.e. via a krb5.conf or so)?
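>
> For example, in Eclipse these can be passed as VM arguments under Run
> Configurations > Arguments (the realm and KDC values below are placeholders):
>
>     -Djava.security.krb5.realm=EXAMPLE.COM
>     -Djava.security.krb5.kdc=kdc.example.com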
>
> On Mon, Jun 17, 2013 at 9:56 AM, anil gupta wrote:
> > Hi All,
> >
I have googled this problem but cannot find a solution for the "*
SCDynamicStore*"-related problem in Eclipse on Mac. I want to run this code
from Eclipse. Please let me know if anyone knows the trick to resolve this
problem on Mac. It is a really annoying problem.
--
Thanks &
lmost no
> comments and appear to simply wrap other classes.
>
> --
> Jay Vyas
> http://jayunit100.blogspot.com
>
--
Thanks & Regards,
Anil Gupta
t run the netstat command like this: "sudo netstat
-alnp". "sudo" is used to run a command with root privileges.
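For illustration, a daemon bound only to loopback shows up in that output
roughly like this (the line below is made up):

    tcp   0   0 127.0.0.1:60000   0.0.0.0:*   LISTEN   1234/java

A daemon reachable from other machines should instead be listening on
0.0.0.0 or the host's own address.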
~Anil
On Thu, Aug 30, 2012 at 3:54 PM, anil gupta wrote:
> In addition to Stack's suggestion, use DNS names instead of IP addresses in
> the configuration of
> Then the HBase daemons are not running or, as Anil is suggesting, the
> connectivity between machines needs fixing (it looks like everything binds to
> localhost.. can you fix that?). Once your connectivity is fixed, then
> try running HBase.
>
> St.Ack
>
--
Thanks & Regards,
Anil Gupta
I already disabled the Linux firewall using the iptables service.
>
> Thank You,
> Jilani
>
>
> On Thu, Aug 30, 2012 at 8:35 PM, anil gupta wrote:
>
> > Hi Jilani,
> >
> > It seems like a firewall issue. You will need to open the appropriate
> > ports or disable the firewall.
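> > On RHEL/CentOS, for instance, a sketch of disabling it (this assumes the
> > iptables service mentioned above in this thread):
> >
> >     sudo service iptables stop     # stop the firewall now
> >     sudo chkconfig iptables off    # keep it off across reboots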
java:853)
> > 12/08/30 18:46:50 at com.evermind.server.http.ServletRequestDispatcher.invoke(ServletRequestDispatcher.java:810)
> > 12/08/30 18:46:50 at com.evermind.server.http.ServletRequestDispatcher.forwardInternal(ServletRequestDispatcher.java:322)
> > 12/08/30 18:46:50 at com.evermind.server.http.HttpRequestHandler.processRequest(HttpRequestHandler.java:790)
> > 12/08/30 18:46:50 at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:270)
> > 12/08/30 18:46:50 at com.evermind.server.http.HttpRequestHandler.run(HttpRequestHandler.java:112)
> > 12/08/30 18:46:50 at com.evermind.util.ReleasableResourcePooledExecutor$MyWorker.run(ReleasableResourcePooledExecutor.java:186)
> > 12/08/30 18:46:50 at java.lang.Thread.run(Thread.java:662)
> > Aug 30, 2012 6:46:50 PM org.apache.zookeeper.ClientCnxn$EventThread run
> > INFO: EventThread shut down
> >
> >
> > Please suggest how I can resolve this.
> >
> >
> > Thank You,
> > Jilani
> >
> >
>
--
Thanks & Regards,
Anil Gupta
s??
>
> On Thu, Aug 9, 2012 at 10:40 AM, Mike Lyon wrote:
> > How hard would it be to block **ALL** messages with "unsubscribe" in the
> > title?
> >
> > --
> > Mike Lyon
> > 408-621-4826
> > mike.l...@gmail.com
> >
> > http://www.linkedin.com/in/mlyon
>
--
Thanks & Regards,
Anil Gupta
<https://issues.apache.org/jira/browse/MAPREDUCE-4508>
Please let me know if anything else is required for the JIRA.
Thanks,
Anil Gupta
On Tue, Jul 31, 2012 at 11:26 AM, anil gupta wrote:
> Hi Harsh and Others,
>
> > I was able to run the job when I log in as user "hdfs". However, it fails
> > if I run it as "root".
Hi Folks,
I would appreciate it if someone could share their views on the problem
below. I am going to file a JIRA for it. If someone thinks that I am missing
something (or my conf is incorrect), please let me know.
Thanks,
Anil Gupta
On Wed, Aug 1, 2012 at 10:56 AM, anil gupta wrote
property
is not having any impact on YARN; YARN is running 8 map
tasks simultaneously on one NodeManager. The entries in question:

    <property>
      <name>mapreduce.tasktracker.map.tasks.maximum</name>
      <value>1</value>
    </property>
    <property>
      <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
      <value>1</value>
    </property>

Is there some other property I need to set for YARN?
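(For reference, YARN has no per-node slot counts; the number of concurrent
containers per node falls out of memory settings instead. A sketch with
illustrative values:

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>8192</value>
    </property>
    <property>
      <name>mapreduce.map.memory.mb</name>
      <value>1024</value>
    </property>

With these, roughly eight 1024 MB map containers fit on one NodeManager.)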
--
Thanks & Regards,
Anil Gupta
Hi Harsh and Others,
I was able to run the job when I log in as user "hdfs". However, it fails if
I run it as "root". I suspected this was the problem earlier as well, and it
turned out to be true.
Thanks,
Anil gupta
On Mon, Jul 30, 2012 at 9:21 PM, abhiTowson cal
wrote:
>
he problem with error code?
I strongly feel that there is a major bug in YARN when we try to run it
with less memory. I have already identified one a couple of days ago (
http://comments.gmane.org/gmane.comp.jakarta.lucene.hadoop.user/33110)
--
Thanks & Regards,
Anil Gupta
; ]; then
> echo "Error: JAVA_HOME is not set."
> exit 1
> fi
>
> JAVA=$JAVA_HOME/bin/java
> JAVA_HEAP_MAX=-Xmx1000m
>
> Regards
> Abhishek
>
>
> On Mon, Jul 30, 2012 at 10:47 PM, anil gupta
> wrote:
> > Hi Abhishek,
> >
> > Did yo
doesn't belong to this node at all.
Please let me know.
Thanks,
Anil Gupta
On Mon, Jul 30, 2012 at 7:30 PM, abhiTowson cal
wrote:
> hi anil,
>
> Adding these helped resolve the issue for me:
> yarn.resourcemanager.resource-tracker.address
>
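> For example, in yarn-site.xml (the host below is a placeholder; 8031 is the
> usual default port for this address):
>
>     <property>
>       <name>yarn.resourcemanager.resource-tracker.address</name>
>       <value>resourcemanager-host:8031</value>
>     </property>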
> Regards
> Abhishek
>
>
> > 12/07/27 09:38:27 INFO mapreduce.Job: Running job: job_1343365114818_0002
> >
> > No Map-Reduce tasks are started by the cluster. I don't see any errors
> > anywhere in the application. Please help me resolve this problem.
> >
> > Thanks,
> > Anil Gupta
> >
>
--
Thanks & Regards,
Anil Gupta
>
> On Sat, Jul 28, 2012 at 4:52 AM, anil gupta wrote:
> > Hi Harsh,
> >
> > Thanks a lot for your response. I am going to try your suggestions and let
> > you know the outcome.
> > I am running the cluster on a VMware hypervisor. I have 3 physical machines
name)
~Anil
On Sun, Jul 29, 2012 at 1:08 PM, abhiTowson cal
wrote:
> Hi anil,
>
> Thanks for the reply. Same as in your case, my pi job is halted and there
> is no progress.
>
> Regards
> Abhishek
>
> On Sun, Jul 29, 2012 at 3:31 PM, anil gupta wrote:
> > Hi Abhishek
h is also working fine.
>
> Regards
> Abhishek
>
> Thanks for
>
> On Sun, Jul 29, 2012 at 3:20 PM, abhiTowson cal
> wrote:
> > Hi Anil,
> > I am using CDH4 with YARN.
> >
> > On Sun, Jul 29, 2012 at 3:17 PM, Anil Gupta
> wrote:
>
t; and the user used during
installation by running the following:
sudo -u hdfs hadoop classpath
sudo -u yarn hadoop classpath
sudo -u $installation_user hadoop classpath
~Anil
On Sun, Jul 29, 2012 at 12:20 PM, abhiTowson cal
wrote:
> Hi Anil,
> I am using CDH4 with YARN.
>
> On Sun, Jul 29,
ards
> Abhishek
>
> On Sun, Jul 29, 2012 at 3:05 PM, anil gupta wrote:
>> Hi Abhishek,
>>
>> Once you have made sure that whatever Harsh said in the previous email is
>> present in the cluster and the job still runs in Local Mode, then try
>> running the job wi
Input split bytes=82
> > 12/07/29 13:36:02 INFO mapred.JobClient: Spilled Records=0
> > 12/07/29 13:36:02 INFO mapred.JobClient: CPU time spent (ms)=0
> > 12/07/29 13:36:02 INFO mapred.JobClient: Physical memory (bytes)
> snapshot=0
> > 12/07/29 13:36:02 INFO mapred.JobClient: Virtual memory (bytes)
> snapshot=0
> > 12/07/29 13:36:02 INFO mapred.JobClient: Total committed heap
> > usage (bytes)=124715008
> > 12/07/29 13:36:02 INFO mapred.JobClient:
> > org.apache.hadoop.mapreduce.lib.input.FileInputFormatCounter
> >
> > Regards
> > Abhishek
>
>
>
> --
> Harsh J
>
--
Thanks & Regards,
Anil Gupta
will be the right approach for
my cluster environment?
Also, on a side note, shouldn't the NodeManager throw an error on this kind
of memory problem? Should I file a JIRA for this? It just sat there quietly.
Thanks a lot,
Anil Gupta
On Fri, Jul 27, 2012 at 3:36 PM, Harsh J wrote:
>
memory, in MB, that can be allocated for containers:

    <property>
      <name>yarn.nodemanager.resource.memory-mb</name>
      <value>1200</value>
    </property>
On Fri, Jul 27, 2012 at 2:23 PM, Harsh J wrote:
> Can you share your yarn-site.xml contents? Have you tweaked memory
> sizes in there?
>
> On Fri, Jul
ey.com keithwiley.com
> music.keithwiley.com
>
> "I used to be with it, but then they changed what it was. Now, what I'm
> with
> isn't it, and what's it seems weird and scary to me."
> -- Abe (Grandpa) Simpson
>
>
>
>
--
Thanks & Regards,
Anil Gupta
p.hdfs.server.datanode.DataNode:
> DatanodeRegistration(DN01:50010,
> storageID=DS-798921853-DN01-50010-1328651609047, infoPort=50075,
> ipcPort=50020):DataXceiver
> java.io.EOFException: while trying to read 65557 bytes
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readToBuf(BlockReceiver.java:290)
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.readNextPacket(BlockReceiver.java:334)
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:398)
> at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:577)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:494)
> at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:183)
>
--
Thanks & Regards,
Anil Gupta
n+a+Cluster#DeployingMapReducev2%28YARN%29onaCluster-Step3>
>
>
>
>
> On 06/13/2012 03:16 PM, anil gupta wrote:
>
>> Hi All
>>
>> I am using CDH4 for running an HBase cluster on CentOS 6.0. I have 5
>> nodes in my cluster (2 admin nodes and 3 DNs).
>> My reso
Forgot to mention:
Hadoop version: Hadoop 2.0.0-cdh4.0.0
On Wed, Jun 13, 2012 at 12:16 PM, anil gupta wrote:
> Hi All
>
> I am using CDH4 for running an HBase cluster on CentOS 6.0. I have 5
> nodes in my cluster (2 admin nodes and 3 DNs).
> My resourcemanager is up and running and s
I type hadoop -version and as a result I get:
>
> java version "1.6.0_26"
> Java(TM) SE Runtime Environment (build 1.6.0_26-b03)
> Java HotSpot(TM) Server VM (build 20.1-b02, mixed mode)
>
> and I downloaded hadoop 0.20.2
> On Wed, Jun 6, 2012 at 6:41 PM, anil gupta wrot
>
> in conf/hdfs-site.xml:
>
>     <property>
>       <name>dfs.replication</name>
>       <value>1</value>
>     </property>
>
> in conf/mapred-site.xml:
>
>     <property>
>       <name>mapred.job.tracker</name>
>       <value>localhost:9001</value>
>     </property>
>
> but no effect! Do you have any idea how I can solve my problem?
>
--
Thanks & Regards,
Anil Gupta
way of doing it or is sqoop a good
> > > candidate for this type of scenario?
> > >
> > > Currently the same process is done by generating tsv files on the mysql
> > > server, dumping them onto a staging server, and from there we generate
> > > hdfs put statements.
> > >
> > > Appreciate your suggestions !!!
> > >
> > >
> > > Thanks,
> > > Srinivas Surasani
> >
>
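For reference, a minimal Sqoop import of a MySQL table straight into HDFS
looks roughly like this (host, database, table, and paths are placeholders):

    sqoop import \
      --connect jdbc:mysql://mysql-host/mydb \
      --username myuser -P \
      --table orders \
      --target-dir /user/srinivas/orders

This would avoid the intermediate tsv-and-staging-server hop entirely.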
--
Thanks & Regards,
Anil Gupta
@amit: if the DN is getting its IP from DHCP then the IP address might change
after a reboot.
Dynamic IPs in a cluster are not a good choice, IMO.
Best Regards,
Anil
On Apr 30, 2012, at 8:22 PM, Amith D K wrote:
> Hi sumadhur,
>
> As you mentioned, configuring the NN and JT IPs would be enough.
why it is called a client-side property - it applies
> per-job).
>
> If HBase strongly recommends turning it off, HBase should also, by
> default, turn it off for its own offered jobs?
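>
> (The setting under discussion appears to be speculative execution; if so, in
> MR1-era terms the per-job switch would look like:
>
>     <property>
>       <name>mapred.map.tasks.speculative.execution</name>
>       <value>false</value>
>     </property>
>
> though that identification is an assumption, not stated in this excerpt.)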
>
> On Sat, Mar 31, 2012 at 4:02 AM, anil gupta wrote:
> > Hi Doug,
> >
> >
Goryunov
> Hi Anil,
>
> Yes, the second table is distributed, the first is not, and I get 3x better
> results for the non-distributed table.
>
> I use distributed hadoop mode for all cases.
>
> Thanks.
>
>
>
> On Fri, Mar 30, 2012 at 3:26 AM, anil gupta wrote:
Hi Alexander,
Is the data properly distributed over the cluster in Distributed Mode? If
the data is not, you won't get good results in distributed mode.
Thanks,
Anil Gupta
On Thu, Mar 29, 2012 at 8:37 AM, Alexander Goryunov wrote:
> Hello,
>
> I'm running a 3-data-node cluster (8
.
> >>
> >> --
> >> Jay Vyas
> >> MMSB/UCHC
> >
> >
> >
> > --
> > Todd Lipcon
> > Software Engineer, Cloudera
>
--
Thanks & Regards,
Anil Gupta
Have a look at the NLineInputFormat class in Hadoop. That class will serve
your purpose.
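A sketch of wiring it up with the new MapReduce API (the lines-per-split
value and job name are illustrative):

    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;

    // Each map task receives exactly N lines of input; N = 1 here,
    // so every row of the matrix goes to its own mapper.
    Job job = new Job(conf, "matrix-ops");
    job.setInputFormatClass(NLineInputFormat.class);
    NLineInputFormat.setNumLinesPerSplit(job, 1);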
Best Regards,
Anil
On Mar 20, 2012, at 11:07 PM, Jane Wayne wrote:
> I have a matrix that I am performing operations on. It is 10,000 rows by
> 5,000 columns. The total size of the file is just under 30 MB. My
construct InputSplit, InputFormat and RecordReader
> to achieve this? I would appreciate any example code :)
>
> Best,
> Deepak
>
--
Thanks & Regards,
Anil Gupta
This could
> > > be a sign that the server has too many connections (30 is the default).
> > > Consider inspecting your ZK server logs for that error and then make
> sure
> > > you are reusing HBaseConfiguration as often as you can. See HTable's
> > > javadoc for more information.
> > >
> > >
> > > Can someone please help me...
> > >
> > > Thanks
> > >
> >
> >
> >
>
>
> --
> Regards
> Tousif
> +918050227279
>
--
Thanks & Regards,
Anil Gupta
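For the connection-reuse advice quoted above, a minimal sketch (old-style
HBase client API; the table names are made up):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;

    // Create the Configuration once and share it. Each fresh
    // HBaseConfiguration tends to open its own ZooKeeper connection,
    // which is what exhausts the server's 30-connection default.
    Configuration conf = HBaseConfiguration.create();

    HTable users = new HTable(conf, "users");
    HTable events = new HTable(conf, "events");
    try {
        // ... reads and writes against both tables ...
    } finally {
        users.close();
        events.close();
    }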
> >>> Is this the right procedure to add nodes? I took the steps from the hadoop wiki
> >>> FAQ:
> >>>>
> >>>> http://wiki.apache.org/hadoop/FAQ
> >>>>
> >>>> 1. Update conf/slaves
> >>>> 2. on the slave nodes start datanode and tasktracker
> >>>> 3. hadoop balancer
> >>>>
> >>>> Do I also need to run dfsadmin -refreshnodes?
> >>>
> >
> >
> >
>
--
Thanks & Regards,
Anil Gupta
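A sketch of the steps quoted above as commands, assuming a Hadoop 1.x-style
layout and a hypothetical host name:

    # 1. On the master, add the new host to the slaves file
    echo "new-node.example.com" >> $HADOOP_HOME/conf/slaves

    # 2. On the new slave, start the daemons
    $HADOOP_HOME/bin/hadoop-daemon.sh start datanode
    $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker

    # 3. Rebalance blocks onto the new node (optional, can take a while)
    $HADOOP_HOME/bin/hadoop balancer

    # -refreshNodes is only needed if you maintain dfs.hosts include/exclude
    # files; with a plain slaves file it is not required
    $HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes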
ill(BufferedInputStream.java:235)
>>> at java.io.BufferedInputStream.read(BufferedInputStream.java:254)
>>> at java.io.DataInputStream.readInt(DataInputStream.java:387)
>>> at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:501)
>>> at org.apache.hadoop.ipc.Client$Connection.run(Client.java:446)
>>>
>>>
>>> Can you please tell me what could be the reason behind this, or point me
>>> to some references?
>>>
>>> Regards,
>>> Guruprasad
>>>
>>>
>>>
>>> --
>>> Robin Müller-Bady | Sales Consultant
>>> Phone: +49 211 74839 701 | Mobile: +49 172 8438346
>>> Oracle STCC Fusion Middleware
>>> ORACLE Deutschland B.V. & Co. KG | Hamborner Strasse 51 | 40472 Düsseldorf
>>>
>>
>>
>
--
Thanks & Regards,
Anil Gupta
This post might be helpful for you:
https://groups.google.com/a/cloudera.org/group/cdh-user/browse_thread/thread/4165f39d8b0bbc56
On Thu, Feb 9, 2012 at 11:42 AM, Anil Gupta wrote:
> Hi,
> I have dealt with this kind of problem earlier.
> Check the logs of datanode as well as
Hi,
I have dealt with this kind of problem earlier.
Check the logs of the datanode as well as the namenode.
In order to test the connectivity:
ssh into the slave from the master and ssh into the master from that same slave.
Leave the ssh session open for as long as you can.
In my case when I did the above exper
0_0 Scheduled 0 outputs (1 slow hosts
> and 0 dup hosts)
> 2012-02-03 16:43:10,050 INFO org.apache.hadoop.mapred.ReduceTask:
> Penalized(slow) Hosts:
> 2012-02-03 16:43:10,050 INFO org.apache.hadoop.mapred.ReduceTask:
> hadoopdata3 Will be considered after: 39 seconds.
>
--
Thanks & Regards,
Anil Gupta
If you use VMware and create VMs, then you can do it.
Best Regards,
Anil
On Feb 1, 2012, at 8:22 PM, Arun Prakash wrote:
> I have a Windows machine; I am trying to install hadoop with multiple data
> nodes, like a cluster, on a single machine. Is it possible?
>
>
> Best Regards
> Arun Prakash C.K
>
> Ke
Yes, if your block size is 64 MB. Btw, the block size is configurable in Hadoop.
Best Regards,
Anil
On Feb 1, 2012, at 5:06 PM, Mark Kerzner wrote:
> Anil,
>
> do you mean one block of HDFS, like 64MB?
>
> Mark
>
> On Wed, Feb 1, 2012 at 7:03 PM, Anil Gupta wrote:
>
>
Do you have enough data to start more than one mapper?
If the entire input is smaller than one block then only 1 mapper will run
(e.g., a 30 MB file with a 64 MB block size yields a single split, hence a
single mapper).
Best Regards,
Anil
On Feb 1, 2012, at 4:21 PM, Mark Kerzner wrote:
> Hi,
>
> I have a simple MR job, and I want each Mapper to get one line from my
> input file (which co
ut?
>
> (Also, slightly OT, but you need to fix this:)
>
> Do not use IPs in your fs location. Do the following instead:
>
> 1. Append an entry to /etc/hosts, across all nodes:
>
> 192.168.1.99 nn-host.remote nn-host
>
> 2. Set fs.default.name to "hdfs://nn-host.remote
Hi Hema,
I had set up a Hadoop cluster in which the name has a hyphen character and it
works fine, so I don't think this problem is related to the hyphen character.
The problem is related to your Hadoop classpath settings, so check your
Hadoop classpath.
I don't have experience of running the jo