RE: Incorrect configuration issue

2015-07-14 Thread Brahma Reddy Battula
You need to configure a host name that actually resolves (your current value is hdfs://kyahadmaster:54310).

Configure it like the following and then start the cluster:


<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>


For more details, check the following link

https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
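
If you want to double-check what the client is actually picking up, here is a
minimal sketch (an illustration only: it assumes the Hadoop client jars and
your conf directory are on the classpath, and the class name CheckDefaultFs is
made up) that prints the effective fs.defaultFS and fails fast if the
configured host does not resolve:

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class CheckDefaultFs {
  public static void main(String[] args) throws Exception {
    // new Configuration() loads core-site.xml from the classpath, if present.
    Configuration conf = new Configuration();
    String defaultFs = conf.get("fs.defaultFS", "file:///");
    System.out.println("fs.defaultFS = " + defaultFs);
    if (URI.create(defaultFs).getAuthority() == null) {
      // This is exactly the condition behind "file:/// has no authority".
      System.out.println("URI has no host:port authority -- check core-site.xml");
    } else {
      // FileSystem.get() forces the client to resolve the configured host,
      // so an unresolvable name fails here rather than at daemon startup.
      System.out.println("connected to " + FileSystem.get(conf).getUri());
    }
  }
}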






Thanks & Regards

 Brahma Reddy Battula





From: khalid yatim [yatimkha...@gmail.com]
Sent: Monday, July 13, 2015 11:16 PM
To: user@hadoop.apache.org
Subject: Re: Incorrect configuration issue


Hello,

I'm experiencing some difficulties getting a single-node Hadoop (2.6.0) install
working.

My conf files seem to be OK, but I'm getting these errors:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address or 
dfs.namenode.rpc-address is not configured.

In hadoop-user-namenode-machine.log, I'm getting
Invalid URI for NameNode address (check fs.defaultFS): file:/// has no 
authority.

Here is the content of my core-site.xml file

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/app</value>
    <description>Temporary Directory.</description>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://kyahadmaster:54310</value>
    <description>Use HDFS as file storage engine</description>
  </property>
</configuration>


2015-07-13 17:26 GMT+00:00 khalid yatim <yatimkha...@gmail.com>:
Hello,

I'm experiencing some difficulties getting a single-node Hadoop (2.6.0) install
working.

My conf files seem to be OK, but I'm getting these errors:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address or 
dfs.namenode.rpc-address is not configured.

In hadoop-user-namenode-machine.log, I'm getting
Invalid URI for NameNode address (check fs.defaultFS): file:/// has no 
authority.


How can I configure the logs to get more explicit information about what's
going wrong?


I'm new here!

Thank you.

--
YATIM Khalid
06 76 19 87 95

ENSIAS Engineer
Class of 2007



--
YATIM Khalid
06 76 19 87 95

ENSIAS Engineer
Class of 2007


Re: Incorrect configuration issue

2015-07-14 Thread khalid yatim
I found the issue: I just had to set the HADOOP_PREFIX variable in my
environment. Now it works fine!
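
If it helps anyone else hitting this: my understanding is that without
HADOOP_PREFIX set, the scripts could not locate etc/hadoop, so core-site.xml
was never on the classpath and fs.defaultFS silently fell back to file:///.
A small sketch like the following (the class name WhichConfig is made up; it
assumes the Hadoop client jars are on the classpath) shows which
core-site.xml, if any, the client actually loads:

import java.net.URL;
import org.apache.hadoop.conf.Configuration;

public class WhichConfig {
  public static void main(String[] args) {
    System.out.println("HADOOP_PREFIX = " + System.getenv("HADOOP_PREFIX"));
    Configuration conf = new Configuration();
    // getResource() reports where on the classpath a resource was found;
    // null means only the built-in defaults are in effect.
    URL src = conf.getResource("core-site.xml");
    System.out.println("core-site.xml loaded from: " + src);
    System.out.println("fs.defaultFS = " + conf.get("fs.defaultFS", "file:///"));
  }
}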

Thank you.

2015-07-14 7:45 GMT+01:00 Brahma Reddy Battula <brahmareddy.batt...@huawei.com>:

  You need to configure a host name that actually resolves (your current value is hdfs://kyahadmaster:54310).

 Configure it like the following and then start the cluster:

 <configuration>
   <property>
     <name>fs.defaultFS</name>
     <value>hdfs://localhost:9000</value>
   </property>
 </configuration>



 For more details, check the following link


 https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html





  Thanks & Regards

  Brahma Reddy Battula






-- 
YATIM Khalid
06 76 19 87 95

ENSIAS Engineer
Class of 2007


Re: Hadoop or RDBMS

2015-07-14 Thread James Peterzon | 123dm
The customer already works with Hadoop and doesn't want an RDBMS on the side.
So maybe Elasticsearch for Hadoop or a similar solution might be a good fit here
to guarantee performance for real-time queries.

From: Sean Busbey <bus...@cloudera.com>
Reply-To: user@hadoop.apache.org
Date: Monday, July 13, 2015 16:27
To: user <user@hadoop.apache.org>
Subject: Re: Hadoop or RDBMS

Given the relatively modest dataset size, this sounds like a straightforward
use case for a traditional RDBMS.

Are there other criteria leading you to evaluate things built on Hadoop? Are
you expecting several orders of magnitude of growth in the record count?

On Mon, Jul 13, 2015 at 5:46 AM, James Peterzon | 123dm <ja...@123dm.nl>
wrote:
 Hi there,
 
 We have built an (online) selection tool where marketers can select their
 target groups for marketing purposes, e.g. direct mail or telemarketing.
 Now we have been asked to build a similar selection tool based on a Hadoop
 database. This database contains about 35 million records (companies) with
 different fields to select on (number of employees, activity code, geographical
 codes, legal form code, turnover figures, year of establishment, and so on).
 
 Performance is very important for this online app. If one makes a selection
 with different criteria, the number of selected records should appear on
 screen in (milli)seconds.
 
 We are not sure if Hadoop would be a good choice; for fast results we need a
 well-indexed relational database, in our opinion…
 
 Can anybody advise me?
 
 Thanks!
 
 Best regards,
 
 James Peterzon



-- 
Sean




Re: Hadoop or RDBMS

2015-07-14 Thread James Peterzon | 123dm
Thanks, this is clear to me.

On 13-07-15 20:53, Roman Shaposhnik <ro...@shaposhnik.org> wrote:

On Mon, Jul 13, 2015 at 7:27 AM, Sean Busbey <bus...@cloudera.com> wrote:
 Given the relatively modest dataset size, this sounds like a straightforward
 use case for a traditional RDBMS.

To pile on top of Sean's reply, I'd say that given the current estimates a
traditional RDBMS such as Postgres could fit the bill. If potential
scalability needs have to be taken into account, you could look at MPP-type
solutions that scale out from Postgres. Ping me off-list if you're interested
in that route.

To get us back on topic for user@hadoop, I'd say that a Hadoop ecosystem
strategy for your use case has to be warranted by more than the size of the
data. In fact, I'd say that Hadoop and/or Spark would only make sense for you
if you want to take advantage of the various analytical frameworks available
for them.

Just my 2c.

Thanks,
Roman.