Can somebody help me please?
Thanks
On Sun, Feb 23, 2014 at 3:27 AM, orahad bigdata wrote:
> Hi Experts,
>
> I'm facing an issue with Hiveserver2 and Ldap Integration, I have followed
> all the mentioned steps for the integration, In addition I'm able to do
> 'g
Hi Experts,
I'm facing an issue with HiveServer2 and LDAP integration. I have
followed all the documented integration steps. In addition, the command
'getent passwd someuser' works fine, and I can even log in on the
client machine through LDAP authentication, but it's not working wi
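For reference, LDAP authentication for HiveServer2 is normally switched on in hive-site.xml along these lines (the LDAP URL and base DN below are placeholders, not values from this thread):

```xml
<!-- hive-site.xml: switch HiveServer2 authentication from the default to LDAP -->
<property>
  <name>hive.server2.authentication</name>
  <value>LDAP</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.url</name>
  <!-- placeholder: point at your directory server -->
  <value>ldap://ldap.example.com:389</value>
</property>
<property>
  <name>hive.server2.authentication.ldap.baseDN</name>
  <!-- placeholder base DN; users are looked up under this subtree -->
  <value>ou=people,dc=example,dc=com</value>
</property>
```

Note that 'getent passwd' working only proves the OS (nsswitch/SSSD) side; HiveServer2 binds to LDAP directly and needs its own configuration.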
Hi All,
While installing Pivotal HD we are facing an issue during the scan-hosts step.
Scanning Hosts...
[RESULT] The following hosts do not meet GPHD prerequisites: [
hadoop3.test.net hadoop1.test.net ] Details...
Host: hadoop3.test.net
Status: [FAILED]
[ERROR] Host is not reachable from the GPHD
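"Host is not reachable" at this stage usually comes down to name resolution or SSH between the admin node and the cluster nodes; every node typically needs consistent /etc/hosts entries, for example (addresses are illustrative, hostnames taken from the error above):

```
# /etc/hosts -- every node should resolve every other node the same way
192.168.1.11  hadoop1.test.net  hadoop1
192.168.1.13  hadoop3.test.net  hadoop3
```

Checking that the admin host can resolve and SSH to each listed host without a password is the usual first step before re-running the scan.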
Hi,
Well, in my case I can see the files in HDFS with 'supergroup'
ownership, but this group does not exist at the OS level.
I think if we do not provide a value for the parameter below in the
XML file, the default group will be "supergroup" even though it does
not exist at the OS level.
dfs.permissions.supergroup
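A minimal sketch of overriding it in hdfs-site.xml (the group name 'hdfsadmin' is just an example; note that in Hadoop 2.x this property was renamed dfs.permissions.superusergroup):

```xml
<!-- hdfs-site.xml: override the default super-user group ("supergroup") -->
<property>
  <name>dfs.permissions.supergroup</name>
  <!-- example value; use a group that actually exists at the OS level -->
  <value>hdfsadmin</value>
</property>
```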
Thanks
On
> 192.168.126.129 master.bigmix.com master loghost
> 192.168.126.130 clone1.bigmix.com clone1
> 192.168.126.133 clone2.bigmix.com clone2
>
> Yusaku
>
>
> On Thu, Sep 12, 2013 at 11:55 AM, orahad bigdata
> wrote:
>
>> Hi,
>>
>> Thanks for your reply.
>>
>> @C
ts
> Data Analyst
> New York Presbyterian Hospital
>
> - Original Message -
> From: Chris Embree [mailto:cemb...@gmail.com]
> Sent: Wednesday, September 11, 2013 02:40 PM
> To: user@hadoop.apache.org
> Subject: Re: Hadoop Metrics Issue in ganglia.
>
> Did you try g
Hi All,
Can somebody help me please?
Thanks
On 9/11/13, orahad bigdata wrote:
> Hi All,
>
> I'm facing an issue while showing Hadoop metrics in ganglia, Though I
> have installed ganglia on my master/slaves nodes and I'm able to see
> all the default metrics on ganglia
Hi All,
I'm facing an issue showing Hadoop metrics in Ganglia. Although I have
installed Ganglia on my master/slave nodes and I'm able to see all the
default metrics in the Ganglia UI from all the nodes, I'm not able to
see the Hadoop metrics in the metrics section.
Versions:
Hadoop 1.1.1
Ganglia 3
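For Hadoop 1.x, the Hadoop-side metrics are pushed to Ganglia via hadoop-metrics2.properties in the Hadoop conf directory; a minimal sketch assuming Ganglia 3.1+ (the receiver address below is a placeholder for your gmond host:port):

```properties
# hadoop-metrics2.properties: publish daemon metrics to a Ganglia 3.1+ gmond
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
# placeholder: host:port where gmond is listening
namenode.sink.ganglia.servers=gmond.example.com:8649
datanode.sink.ganglia.servers=gmond.example.com:8649
```

The Hadoop daemons have to be restarted after editing this file; installing Ganglia alone only gets you the default system metrics, not Hadoop's.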
Hi,
Many thanks to everyone.
The issue got resolved after changing the client version.
Regards
On Fri, Aug 30, 2013 at 1:12 PM, Francis.Hu wrote:
> Did you start up your ZKFC service on both of your name nodes ?
>
> Thanks,
> Francis.Hu
>
> -----Original Message-----
> From: orahad b
you should restart
> your DN once and check your NN weburl.
>
> Regards
> Jitendra
>
> On 8/31/13, orahad bigdata wrote:
> > here is my conf files.
> >
> > ---core-site.xml---
> >
> >
> > fs.defaultFS
> > hdfs://orahad
r configurations.
>
> On Fri, Aug 30, 2013 at 11:32 AM, orahad bigdata
> wrote:
> > Thanks Jing,
> >
> > I'm using same configuration files at datanode side.
> >
> > dfs.nameservices -> orahadoop (hdfs-site.xml)
> >
> > fs.defaultFS ->
d for HA. If your DN's configuration still uses the old URL
> (e.g., one of your NN's host+port) for "fs.defaultFS", DN will only
> connect to that NN.
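In an HA setup the client-side default filesystem should name the logical nameservice, not a single NameNode; a sketch using the 'orahadoop' nameservice id mentioned in this thread:

```xml
<!-- core-site.xml on every node, DataNodes included -->
<property>
  <name>fs.defaultFS</name>
  <!-- the logical nameservice id, not host:port of one NN -->
  <value>hdfs://orahadoop</value>
</property>
```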
>
> On Fri, Aug 30, 2013 at 10:56 AM, orahad bigdata
> wrote:
>> Hi All,
>>
>> I'm usin
Hi All,
I'm using Hadoop 2.0.5 HA with QJM. After starting the cluster I did
some manual switchovers between the NameNodes. Afterwards, when I
opened the web UI for both NameNodes, I saw a strange situation where
my DN was connected to the standby NN but not sending heartbeats to
the primary NameNode.
Please guide.
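For reference, a DataNode only tracks both NameNodes when the HA nameservice is fully described in its hdfs-site.xml; a minimal sketch (the nameservice id 'orahadoop' and the clone1/clone2 hostnames are taken from this thread, NN ids and the 8020 port are the usual defaults):

```xml
<!-- hdfs-site.xml: describe the HA nameservice so DNs register with both NNs -->
<property>
  <name>dfs.nameservices</name>
  <value>orahadoop</value>
</property>
<property>
  <name>dfs.ha.namenodes.orahadoop</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.orahadoop.nn1</name>
  <value>clone1.bigmix.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.orahadoop.nn2</name>
  <value>clone2.bigmix.com:8020</value>
</property>
```

If a DN instead has the old single-NN URL in fs.defaultFS (as noted earlier in the thread), it will heartbeat only to that one NameNode.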
Than
atus; Host Details : local host is: "clone2/XX.XX.XX.XX";
destination host is: "clone1":8020;
So is there any alternative to resolve this issue?
Thanks
On 8/30/13, Harsh J wrote:
> On the actual issue though: Do you also have auto-failover configured?
>
> On Fri,
Hi,
I'm facing an error while starting Hadoop in an HA (2.0.5) cluster;
both NameNodes started in standby mode and are not changing state.
When I tried to do a health check through "hdfs haadmin -checkhealth
" it gave me the error below.
Failed on local exception:
com.google.protobuf.InvalidProt
"df -k /tmp/hadoop-hadoop/dfs/name/"
> as user "hadoop"?
>
> On Wed, Aug 28, 2013 at 12:53 AM, orahad bigdata
> wrote:
> > Hi All,
> >
> > I'm new to Hadoop administration. Can someone please help me?
> >
> > Hadoop-version :- 2.0.5 a
Hi All,
I'm new to Hadoop administration. Can someone please help me?
Hadoop version: 2.0.5 alpha, using QJM.
I'm getting the error messages below while starting HDFS with 'start-dfs.sh':
2013-01-23 03:25:43,208 INFO
org.apache.hadoop.hdfs.server.namenode.FSImage: Image file of size 121