Re: New to Hadoop-SSH communication

2013-05-01 Thread kishore alajangi
It might be that you are running the copy while logged into the slave machine. Exit
the slave session and run the command from the master itself.
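
A quick way to confirm which host you are actually on and that the copy lands on the slave (a minimal sketch; the "master"/"slave" hostnames and the hduser account are the ones used in the thread below):

   # on the machine you believe is the master:
   hostname        # if this prints the slave's hostname, type "exit" to leave that session first
   # then copy from the master and verify the file on the slave:
   scp -r /usr/local/somefile hduser@slave:/usr/local/
   ssh hduser@slave 'ls -l /usr/local/somefile'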

Thanks,
Kishore.




On Wed, May 1, 2013 at 3:00 AM, Automation Me  wrote:

> Thank you Tariq.
>
> I am using the same username on both machines, and when I try to copy
> a file from master to slave just to make sure SSH is working fine, the file
> gets copied onto the master itself, not onto the slave machine.
>
>   scp -r /usr/local/somefile hduser@slave:/usr/local/somefile
>
> Any suggestions...
>
>
> Thanks
> Annt
>
>
>
> On Tue, Apr 30, 2013 at 5:14 PM, Mohammad Tariq wrote:
>
>> SSH is actually *user@some_machine* to *user@some_other_machine*. Either
>> use the same username on both machines, or add the IPs along with the proper
>> hostnames in the /etc/hosts file.
>>
>> HTH
>>
>> Warm Regards,
>> Tariq
>> https://mtariq.jux.com/
>> cloudfront.blogspot.com
>>
>>
>> On Wed, May 1, 2013 at 2:39 AM, Automation Me wrote:
>>
>>> Hello,
>>>
>>> I am new to Hadoop and trying to install a multi-node cluster on Ubuntu
>>> VMs. I am not able to communicate between the two VMs using SSH.
>>>
>>> My host file:
>>>
>>> 127.0.1.1 Master
>>> 127.0.1.2 Slave
>>>
>>> I made the following changes on the two VMs:
>>>
>>> 1. Updated the /etc/hosts file on both VMs.
>>>
>>> On the Master VM
>>> I ran ssh-keygen and am trying to copy the key to the Slave:
>>>
>>> ssh-keygen -t rsa -P ""
>>>cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
>>>ssh-copy-id -i $HOME/.ssh/id_rsa.pub slave@ubuntu.
>>>
>>> When I log in to master and slave and check:
>>>
>>> master@ubuntu>Hostname it says UBUNTU
>>> slave@ubuntu>Hostname it says UBUNTU
>>>
>>>
>>> Could you assist me on this?
>>>
>>> Thanks
>>> Annt
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>
>


Re: Configuring SSH - is it required? for a pseudo-distributed mode?

2013-05-16 Thread kishore alajangi
When you start the Hadoop processes, each process will ask for a password to
start. To avoid this we configure passwordless SSH, whether you use a single
node or multiple nodes. If you are willing to enter the password for each
process, it is not mandatory, even when you use multiple systems.
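
For a single-node (pseudo-distributed) setup, the usual passwordless-SSH steps look roughly like this (a minimal sketch for the hduser account mentioned in the tutorial below; the key type and paths are the common defaults):

   # generate a key with an empty passphrase and authorize it for localhost
   ssh-keygen -t rsa -P "" -f $HOME/.ssh/id_rsa
   cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
   chmod 600 $HOME/.ssh/authorized_keys
   # this should now log in without prompting for a password
   ssh localhost hostname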

Thanks,
Kishore.


On Thu, May 16, 2013 at 8:24 PM, Raj Hadoop  wrote:

>  Hi,
>
> I have a dedicated user on the Linux server for Hadoop. I am installing it in
> pseudo-distributed mode on this box. I want to test my programs on this
> machine. But I see that the installation steps mention that SSH needs to be
> configured. If it is a single node, I don't require it, right? Please advise.
>
> I was looking at this site
>
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
> It mentioned this:
> "
> Hadoop requires SSH access to manage its nodes, i.e. remote machines plus
> your local machine if you want to use Hadoop on it (which is what we want
> to do in this short tutorial). For our single-node setup of Hadoop, we
> therefore need to configure SSH access to localhost for the hduser user
> we created in the previous section.
> "
>
> Thanks,
> Raj
>
>


Re: Can I move block data directly?

2013-07-07 Thread kishore alajangi
Run start-balancer.sh rather than moving block files by hand.
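
For reference, a minimal sketch of how the balancer is usually invoked (the -threshold value is illustrative; it is the allowed deviation in per-node disk usage, in percent):

   # run from any node with the Hadoop client configured
   start-balancer.sh -threshold 5
   # progress is written to the balancer log; it can be stopped with:
   stop-balancer.sh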


On Mon, Jul 8, 2013 at 9:10 AM, Azuryy Yu  wrote:

> Hi Dear all,
>
> There are some unbalanced data nodes in my cluster; some nodes have reached
> more than 95% disk usage.
>
> So can I move some block data from one node to another node directly?
>
> such as: from n1 to n2:
>
> 1) scp /data//blk_*   n2:/data/subdir11/
> 2) rm -rf data//blk_*
> 3) hadoop-daemon.sh stop datanode (on n1)
> 4) hadoop-daemon.sh start datanode (on n1)
> 5) hadoop-daemon.sh stop datanode (on n2)
> 6) hadoop-daemon.sh start datanode (on n2)
>
> Am I right? Thanks for any input.
>
>
>


Re: Hadoop Jobtracker OOME

2013-09-15 Thread kishore alajangi
Increase the memory value in the mapred-site.xml file via the
mapred.child.java.opts property.
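
For reference, a minimal sketch of that property in mapred-site.xml (the -Xmx value is only an example; note that this sets the heap of the map/reduce child tasks, while the JobTracker's own heap is controlled by HADOOP_HEAPSIZE in hadoop-env.sh):

   <property>
     <name>mapred.child.java.opts</name>
     <value>-Xmx1024m</value>
   </property>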

Thanks,
Kishore.


On Mon, Sep 16, 2013 at 12:05 PM, Viswanathan J
wrote:

> Appreciate the response.
> On Sep 16, 2013 1:26 AM, "Viswanathan J" 
> wrote:
>
>> Hi Guys,
>>
>> Currently we are running a small Hadoop (1.2.1) cluster with 13 nodes;
>> today we are getting an OutOfMemory error in the jobtracker:
>>
>> java.io.IOException: Call to nn:8020 failed on local exception:
>> java.io.IOException: Couldn't set up IO streams
>> at org.apache.hadoop.ipc.Client.wrapException(Client.java:1150)
>> at org.apache.hadoop.ipc.Client.call(Client.java:1118)
>> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
>> at $Proxy7.renewLease(Unknown Source)
>> at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>> at
>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>> at java.lang.reflect.Method.invoke(Method.java:597)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
>> at
>> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
>> at $Proxy7.renewLease(Unknown Source)
>> at org.apache.hadoop.hdfs.DFSClient.renewLease(DFSClient.java:379)
>> at
>> org.apache.hadoop.hdfs.LeaseRenewer.renew(LeaseRenewer.java:378)
>> at org.apache.hadoop.hdfs.LeaseRenewer.run(LeaseRenewer.java:400)
>> at
>> org.apache.hadoop.hdfs.LeaseRenewer.access$600(LeaseRenewer.java:69)
>> at
>> org.apache.hadoop.hdfs.LeaseRenewer$1.run(LeaseRenewer.java:273)
>> at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.io.IOException: Couldn't set up IO streams
>> at
>> org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:642)
>> at
>> org.apache.hadoop.ipc.Client$Connection.access$2200(Client.java:205)
>> at org.apache.hadoop.ipc.Client.getConnection(Client.java:1249)
>> at org.apache.hadoop.ipc.Client.call(Client.java:1093)
>> ... 14 more
>> Caused by: java.lang.OutOfMemoryError: Java heap space
>>
>>
>> -
>> 2013-09-15 13:13:31,198 ERROR org.apache.hadoop.mapred.JobTracker: Job
>> initialization failed:
>> java.lang.OutOfMemoryError: Java heap space
>> at
>> com.sun.org.apache.xml.internal.serializer.ToUnknownStream.characters(ToUnknownStream.java:341)
>> at
>> com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:240)
>> at
>> com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
>> at
>> com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
>> at
>> com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:226)
>> at
>> com.sun.org.apache.xalan.internal.xsltc.trax.DOM2TO.parse(DOM2TO.java:132)
>>
>> Please help to resolve this issue asap.
>>
>> What are the best mapred/hadoop core configurations to resolve this?
>>
>> --
>> Regards,
>> Viswa.J
>>
>


Hadoop setup

2013-12-11 Thread kishore alajangi
Hi Experts,

Today I have a task to build a 4-node Hadoop cluster on physical hardware.
Can anybody suggest the hardware specifications, OS, and Hadoop version?

-- 
Thanks,
Kishore.


Re: Hadoop setup

2013-12-14 Thread kishore alajangi
What difference does choosing YARN make to the hardware selection, and is it
necessary?

On 12/14/13, Adam Kawa  wrote:
> In general, it is very open question and there are many possibilities
> depending on your workload (e.g. CPU-bound, IO-bound etc).
>
> If it is your first Hadoop cluster, and you do not know too much about what
> types of jobs you will be running, I would recommend just to collect any
> available machines that you have in your data-center (they should not be
> garage machines, though). Personally, I try to avoid buying hardware if I
> am not sure what to buy :)
>
> If you type "hadoop hardware recommendations" in Google, you will get many
> interesting links:
> e.g.
> http://blog.cloudera.com/blog/2013/08/how-to-select-the-right-hardware-for-your-new-hadoop-cluster/
> http://my.safaribooksonline.com/book/databases/hadoop/9781449327279/4dot-planning-a-hadoop-cluster/id2760689
> http://www.youtube.com/watch?v=UQJnJvwcsA8
> http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.0.6.0/bk_cluster-planning-guide/content/ch_hardware-recommendations.html
>
>
> 2013/12/12 kishore alajangi 
>
>> Hi Experts,
>>
>> Today I have a task to build a 4-node Hadoop cluster on physical hardware.
>> Can anybody suggest the hardware specifications, OS, and Hadoop version?
>>
>> --
>> Thanks,
>> Kishore.
>>
>


-- 
Thanks,
Kishore.


Re: DataNode not starting in slave machine

2013-12-25 Thread kishore alajangi
Replace hdfs:// with file:/// in the fs.default.name property.
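
For reference, this is the usual shape of that property block in core-site.xml (note that the property name must be spelled fs.default.name; the hdfs:// value shown is the one from the configuration quoted below, which the suggestion above would change):

   <property>
     <name>fs.default.name</name>
     <value>hdfs://master:9000</value>
   </property>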


On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <
vishnu.viswanat...@gmail.com> wrote:

> Hi,
>
> I am getting this error while starting the datanode in my slave system.
>
> I read the JIRA HDFS-2515;
> it says it is because Hadoop is using the wrong conf file.
>
> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
> hadoop-metrics2.properties
> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
> MetricsSystem,sub=Stats registered.
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period
> at 10 second(s).
> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
> started
> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
> registered.
> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
> exists!
> 13/12/24 15:57:15 ERROR datanode.DataNode:
> java.lang.IllegalArgumentException: Does not contain a valid host:port
> authority: file:///
> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
> at
> org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
> at
> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>
> But how do I check which conf file Hadoop is using, or how do I set it?
>
> These are my configurations:
>
> core-site.xml
> --
> <configuration>
> <property>
> <name>fs.defualt.name</name>
> <value>hdfs://master:9000</value>
> </property>
>
> <property>
> <name>hadoop.tmp.dir</name>
> <value>/home/vishnu/hadoop-tmp</value>
> </property>
> </configuration>
>
> hdfs-site.xml
> --
> <configuration>
> <property>
> <name>dfs.replication</name>
> <value>2</value>
> </property>
> </configuration>
>
> mapred-site.xml
> --
> <configuration>
> <property>
> <name>mapred.job.tracker</name>
> <value>master:9001</value>
> </property>
> </configuration>
>
> any help,
>
>


-- 
Thanks,
Kishore.


Re: DataNode not starting in slave machine

2013-12-25 Thread kishore alajangi
Change the mapred.job.tracker property to http://master:9101 in mapred-site.xml.
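
A minimal sketch of that property block in mapred-site.xml (the value shown is the one suggested above; many setups use a plain host:port value such as master:9001 instead):

   <property>
     <name>mapred.job.tracker</name>
     <value>http://master:9101</value>
   </property>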


On Wed, Dec 25, 2013 at 7:41 PM, Vishnu Viswanath <
vishnu.viswanat...@gmail.com> wrote:

> Made that change. Still the same error.
>
> And why should fs.default.name be set to file:///? I am not running in
> pseudo-distributed mode. I have two systems: one is the master and the
> other is the slave.
>
> Vishnu Viswanath
>
> On 25-Dec-2013, at 19:35, kishore alajangi 
> wrote:
>
> Replace hdfs:// with file:/// in the fs.default.name property.
>
>
> On Wed, Dec 25, 2013 at 7:01 PM, Vishnu Viswanath <
> vishnu.viswanat...@gmail.com> wrote:
>
>> Hi,
>>
>> I am getting this error while starting the datanode in my slave system.
>>
>> I read the JIRA HDFS-2515<https://issues.apache.org/jira/browse/HDFS-2515>,
>> it says it is because Hadoop is using the wrong conf file.
>>
>> 13/12/24 15:57:14 INFO impl.MetricsConfig: loaded properties from
>> hadoop-metrics2.properties
>> 13/12/24 15:57:14 INFO impl.MetricsSourceAdapter: MBean for source
>> MetricsSystem,sub=Stats registered.
>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: Scheduled snapshot period
>> at 10 second(s).
>> 13/12/24 15:57:14 INFO impl.MetricsSystemImpl: DataNode metrics system
>> started
>> 13/12/24 15:57:15 INFO impl.MetricsSourceAdapter: MBean for source ugi
>> registered.
>> 13/12/24 15:57:15 WARN impl.MetricsSystemImpl: Source name ugi already
>> exists!
>> 13/12/24 15:57:15 ERROR datanode.DataNode:
>> java.lang.IllegalArgumentException: Does not contain a valid host:port
>> authority: file:///
>> at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:164)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:212)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:244)
>> at
>> org.apache.hadoop.hdfs.server.namenode.NameNode.getServiceAddress(NameNode.java:236)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:359)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:321)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:1712)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:1651)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:1669)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:1795)
>> at
>> org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:1812)
>>
>> But how do I check which conf file Hadoop is using, or how do I set it?
>>
>> These are my configurations:
>>
>> core-site.xml
>> --
>> <configuration>
>> <property>
>> <name>fs.defualt.name</name>
>> <value>hdfs://master:9000</value>
>> </property>
>>
>> <property>
>> <name>hadoop.tmp.dir</name>
>> <value>/home/vishnu/hadoop-tmp</value>
>> </property>
>> </configuration>
>>
>> hdfs-site.xml
>> --
>> <configuration>
>> <property>
>> <name>dfs.replication</name>
>> <value>2</value>
>> </property>
>> </configuration>
>>
>> mapred-site.xml
>> --
>> <configuration>
>> <property>
>> <name>mapred.job.tracker</name>
>> <value>master:9001</value>
>> </property>
>> </configuration>
>>
>> any help,
>>
>>
>
>
> --
> Thanks,
> Kishore.
>
>


-- 
Thanks,
Kishore.


column renaming issue in hive

2014-04-18 Thread kishore alajangi
Hi Experts,

After I changed the column names in a Hive table, queries on the new column
names return all NULL values, but "select * from table" returns the actual
values. What could be the problem? Please explain what I should do now, and
help me.
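
For reference, a column rename in Hive normally looks like this (the table and column names here are made up for illustration):

   -- rename column old_name to new_name, keeping its type
   ALTER TABLE my_table CHANGE COLUMN old_name new_name STRING;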

-- 
Thanks,
Kishore.


analyzing s3 data

2014-04-21 Thread kishore alajangi
Hi Experts,

We are running a four-node cluster installed with CDH 4.5 and CM 4.8. We
have large files in zip format in S3, and we want to analyze those files in
Hive every hour. Which is the best way to do that? Please help me
with examples or with any reference links.
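
One common pattern, sketched here under the assumption that the data is first stored in a format Hive can read directly (for example gzipped text rather than zip, which Hive does not read natively); the bucket, paths, and schema are placeholders:

   -- external table over an S3 prefix, partitioned by hour
   CREATE EXTERNAL TABLE logs (line STRING)
   PARTITIONED BY (dt STRING, hr STRING)
   ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
   LOCATION 's3n://my-bucket/logs/';

   -- register each hourly prefix as it arrives
   ALTER TABLE logs ADD PARTITION (dt='2014-04-21', hr='00')
   LOCATION 's3n://my-bucket/logs/2014-04-21/00/';
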
-- 
Thanks,
Kishore.


Hive table creation to support multiple delimiters

2014-04-30 Thread kishore alajangi
Hi All,

My input data, shown below, is |-delimited, and I want to extract appid,
appname, bundleid, etc. Please help me create the Hive table.

|0|{\x22appid\x22:\x228\x22,\x22appname\x22:\x22CONVX-0008\x22,\x22bundleid\x22:\x22com.zeptolab.timetravel.free.google\x22}|14|
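
One possible approach, sketched under the assumption that every record has the same |-delimited layout and that the \x22 sequences in the JSON field stand for double quotes (the table and column names are made up):

   -- split each record on '|'; the JSON blob lands in one string column
   CREATE TABLE raw_events (empty STRING, id STRING, payload STRING, tail STRING)
   ROW FORMAT DELIMITED FIELDS TERMINATED BY '|';

   -- decode the \x22 escapes back to quotes, then pull fields out of the JSON
   SELECT get_json_object(regexp_replace(payload, '\\\\x22', '"'), '$.appid')    AS appid,
          get_json_object(regexp_replace(payload, '\\\\x22', '"'), '$.appname')  AS appname,
          get_json_object(regexp_replace(payload, '\\\\x22', '"'), '$.bundleid') AS bundleid
   FROM raw_events;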


-- 
Thanks,
Kishore.


Re: Data node with multiple disks

2014-05-13 Thread kishore alajangi
Set the replication factor to 1.
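
A minimal sketch of the relevant hdfs-site.xml properties (the directory paths are the ones from the question; the data-directory property is dfs.data.dir in Hadoop 1.x and dfs.datanode.data.dir in Hadoop 2.x):

   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>
   <property>
     <name>dfs.data.dir</name>
     <value>/vol1/hadoop/data,/vol2/hadoop/data,/vol3/hadoop/data</value>
   </property>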


On Tue, May 13, 2014 at 11:04 AM, SF Hadoop  wrote:

> Your question is unclear. Please restate and describe what you are
> attempting to do.
>
> Thanks.
>
>
> On Monday, May 12, 2014, Marcos Sousa  wrote:
>
>> Hi,
>>
>> I have 20 servers, each with 10 400GB SATA HDs. I'd like to use them as
>> my datanodes:
>>
>> /vol1/hadoop/data
>> /vol2/hadoop/data
>> /vol3/hadoop/data
>> /volN/hadoop/data
>>
>> How do I use those distinct disks without replicating?
>>
>> Best regards,
>>
>> --
>> Marcos Sousa
>>
>


-- 
Thanks,
Kishore.


Yarn Running Application Logs

2015-12-10 Thread kishore alajangi
Hi,

I want to collect the logs of a running YARN application. When I try

# yarn logs -applicationId 

it gives the error

"application has not completed logs are only available after an application
completes yarn"

but I can see the logs through the ResourceManager web UI.

Can anybody help me with how to collect the logs into a file?

-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


java.io.EOFException

2015-12-26 Thread kishore alajangi
Hi,

My Hadoop test run "hadoop jar
/usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100" gets
killed, and when I check the logs it shows:

WARN [ResponseProcessor for block
BP-437460642-10.0.0.1-1391018641114:blk_1084609656_11045296]
org.apache.hadoop.hdfs.DFSClient: DFSOutputStream ResponseProcessor
exception  for block
BP-437460642-10.0.0.1-1391018641114:blk_1084609656_11045296
java.io.EOFException: Premature EOF: no length prefix available

Does anybody have any idea about this? Please help me.

-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


Kerberos Issue

2016-01-21 Thread kishore alajangi
Hi,

I am unable to browse HDFS in the browser; it gives the error

Authentication failed when trying to open /webhdfs/v1/?op=LISTSTATUS:
Unauthorized.

My cluster is Kerberos-enabled, and
I am able to browse HDFS from the command line. What could be the reason?
I would appreciate any suggestions.

-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


MapTasks Reallocation Issue

2016-01-21 Thread kishore alajangi
Hi,

I am running a MapReduce job in YARN; 85 out of 3065 map tasks are running. I
ran another MapReduce job, which finished in 3 minutes, but the first job is
now running only 41 tasks in lieu of 85, even after the second job completed.
What went wrong? I would appreciate it if anybody could explain.

-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


Re: Kerberos Issue

2016-01-21 Thread kishore alajangi
Hi zheng,

Thanks for your quick response. I set the
"network.negotiate-auth.trusted-uris"
value to my hostname, but the issue is still the same. Kindly help me.

On Thu, Jan 21, 2016 at 4:29 PM, Zheng, Kai  wrote:

> To access resources protected by a Kerberized system like Hadoop with
> security through a web browser, your web browser must first be configured to
> support Kerberos HTTP SPNEGO. You can google a how-to for your particular
> browser (Firefox or otherwise).
>
>
>
> Regards,
>
> Kai
>
>
>
> *From:* kishore alajangi [mailto:alajangikish...@gmail.com]
> *Sent:* Thursday, January 21, 2016 6:49 PM
> *To:* cdh-u...@cloudera.org; user@hadoop.apache.org
> *Subject:* Kerberos Issue
>
>
>
> Hi,
>
> I am unable to browse the hdfs in the browser, its giving error
>
> Authentication failed when trying to open /webhdfs/v1/?op=LISTSTATUS:
> Unauthorized.
>
> My cluster is kerberos enabled,
>
> I am able to browse hdfs in command line, what could be the reason,
> appreciate for the suggestions.
>
>
> --
>
> Sincere Regards,
> A.Kishore Kumar,
>
> Ph: +91 9246274575
>



-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


Re: Kerberos Issue

2016-01-21 Thread kishore alajangi
Zheng,

It is working now on the machine where Hadoop and the KDC are running, after I
configured Kerberos SPNEGO, but how do I browse from the browser on my own machine?

On Thu, Jan 21, 2016 at 7:12 PM, kishore alajangi  wrote:

> Hi zheng,
>
> Thanks for your quick response, I configured
> "network.negotiate-auth.trusted-uris"
>  value to my hostname, but still the issue is same, kindly help me.
>
> On Thu, Jan 21, 2016 at 4:29 PM, Zheng, Kai  wrote:
>
>> To access resources protected by a Kerberized system like Hadoop with
>> security through web browser, your web browser must be configured to
>> support Kerberos HTTP SPNEGO first. You can do a google about how-to
>> according to your browser (Firefox or else).
>>
>>
>>
>> Regards,
>>
>> Kai
>>
>>
>>
>> *From:* kishore alajangi [mailto:alajangikish...@gmail.com]
>> *Sent:* Thursday, January 21, 2016 6:49 PM
>> *To:* cdh-u...@cloudera.org; user@hadoop.apache.org
>> *Subject:* Kerberos Issue
>>
>>
>>
>> Hi,
>>
>> I am unable to browse the hdfs in the browser, its giving error
>>
>> Authentication failed when trying to open /webhdfs/v1/?op=LISTSTATUS:
>> Unauthorized.
>>
>> My cluster is kerberos enabled,
>>
>> I am able to browse hdfs in command line, what could be the reason,
>> appreciate for the suggestions.
>>
>>
>> --
>>
>> Sincere Regards,
>> A.Kishore Kumar,
>>
>> Ph: +91 9246274575
>>
>
>
>
> --
> Sincere Regards,
> A.Kishore Kumar,
> Ph: +91 9246274575
>



-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


Re: Kerberos Issue

2016-01-21 Thread kishore alajangi
Should I install Kerberos on my machine to do this?
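
For reference, per Kai's note below, a minimal check on the machine running the browser (the principal and realm are placeholders):

   # obtain a ticket, then confirm it is in the credential cache
   kinit kishore@EXAMPLE.COM
   klist
   # with a valid ticket and SPNEGO enabled in the browser, the NameNode web UI should open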

On Thu, Jan 21, 2016 at 7:25 PM, Zheng, Kai  wrote:

> Glad you almost got it. It's not required that you use the browser on the KDC
> host, but it is required that you can run kinit. Please run kinit and klist to
> ensure the ticket is ready, then open your browser on the same host.
>
>
>
> http://people.redhat.com/mikeb/negotiate/
>
>
>
> Regards,
>
> Kai
>
>
>
> *From:* kishore alajangi [mailto:alajangikish...@gmail.com]
> *Sent:* Thursday, January 21, 2016 9:51 PM
> *To:* Zheng, Kai 
> *Cc:* cdh-u...@cloudera.org; user@hadoop.apache.org
> *Subject:* Re: Kerberos Issue
>
>
>
> Zheng,
>
> It is working now on the machine where hadoop and kdc running after i
> configured kerberos spnego, but how to browse on my machine browser ?
>
>
>
> On Thu, Jan 21, 2016 at 7:12 PM, kishore alajangi <
> alajangikish...@gmail.com> wrote:
>
> Hi zheng,
>
> Thanks for your quick response, I configured
> "network.negotiate-auth.trusted-uris"
>
>  value to my hostname, but still the issue is same, kindly help me.
>
>
>
> On Thu, Jan 21, 2016 at 4:29 PM, Zheng, Kai  wrote:
>
> To access resources protected by a Kerberized system like Hadoop with
> security through web browser, your web browser must be configured to
> support Kerberos HTTP SPNEGO first. You can do a google about how-to
> according to your browser (Firefox or else).
>
>
>
> Regards,
>
> Kai
>
>
>
> *From:* kishore alajangi [mailto:alajangikish...@gmail.com]
> *Sent:* Thursday, January 21, 2016 6:49 PM
> *To:* cdh-u...@cloudera.org; user@hadoop.apache.org
> *Subject:* Kerberos Issue
>
>
>
> Hi,
>
> I am unable to browse the hdfs in the browser, its giving error
>
> Authentication failed when trying to open /webhdfs/v1/?op=LISTSTATUS:
> Unauthorized.
>
> My cluster is kerberos enabled,
>
> I am able to browse hdfs in command line, what could be the reason,
> appreciate for the suggestions.
>
>
> --
>
> Sincere Regards,
> A.Kishore Kumar,
>
> Ph: +91 9246274575
>
>
>
>
> --
>
> Sincere Regards,
> A.Kishore Kumar,
>
> Ph: +91 9246274575
>
>
>
>
> --
>
> Sincere Regards,
> A.Kishore Kumar,
>
> Ph: +91 9246274575
>



-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


getmerge output size

2016-01-21 Thread kishore alajangi
Hi,

Is the getmerge output file size equal to the total size of all the files in
the source directory?
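
A quick way to check (the paths below are placeholders):

   # total size of the source directory, in bytes
   hadoop fs -du -s /user/hduser/input
   # merge to a local file and compare with the local file size
   hadoop fs -getmerge /user/hduser/input merged.txt
   ls -l merged.txt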

-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575


Reg: Yarn resourcemanager logs alert

2016-06-09 Thread kishore alajangi
Hi Experts,

My requirement is to get an email alert from the YARN ResourceManager logs
when a specific pattern occurs in the log. Which is the best way to do this?
Please help me.
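
One simple approach is a small cron'd shell script around grep and mailx (a sketch only; the log path, pattern, and address are placeholders, and it assumes mailx is installed):

   #!/bin/sh
   # alert if the pattern appears in the RM log (this simple version re-checks the whole log each run)
   LOG=/var/log/hadoop-yarn/yarn-yarn-resourcemanager-*.log
   PATTERN="ERROR"
   MATCHES=$(grep "$PATTERN" $LOG 2>/dev/null | tail -n 20)
   if [ -n "$MATCHES" ]; then
     echo "$MATCHES" | mailx -s "ResourceManager log alert: $PATTERN" admin@example.com
   fi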

-- 
Sincere Regards,
KishoreKumar.


ResourceManager API

2016-06-09 Thread kishore alajangi
Hi Experts,

Is there a way to get the logs for a running job from the ResourceManager API?
Please help me.

-- 
Sincere Regards,
A.Kishore Kumar,


Config file for hive via connecting jdbc

2016-06-22 Thread kishore alajangi
Hi Experts,

We are connecting to Hive with Beeline via the JDBC connector. Which file do
we need to use to set the "mapreduce.map.memory.mb" value? I think the
hive-site.xml file is used for the Hive CLI. Kindly help me.
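
One alternative, assuming the property is not restricted on the server, is to set it per session from Beeline itself (the value is only an example):

   -- inside a beeline session connected over JDBC
   set mapreduce.map.memory.mb=2048;
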
-- 
Thanks,
KishoreKumar.


Re:

2016-09-19 Thread kishore alajangi
Check with the -R option:

hadoop fs -ls -R 
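
For example (the directory is a placeholder):

   # list the directory tree recursively
   hadoop fs -ls -R /user/hduser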

On Mon, Sep 19, 2016 at 1:55 PM, Vinodh Nagaraj 
wrote:

> Hi All,
>
> When I execute *hdfs dfs -ls*, it shows all the directories. I have created
> one directory in Hadoop; the remaining files were created at the OS level.
>
> I am executing from Hadoop home/bin.
>
>
> Thanks,
>
>


-- 
Sincere Regards,
A.Kishore Kumar,
Ph: +91 9246274575