Re: Hadoop or RDBMS

2015-07-14 Thread James Peterzon | 123dm
The customer already works with Hadoop and doesn't want an RDBMS on the side.
So maybe Elasticsearch for Hadoop or a similar solution would be a good fit
to guarantee performance for real-time queries.

From:  Sean Busbey 
Reply-To:  
Date:  Monday, 13 July 2015 16:27
To:  user 
Subject:  Re: Hadoop or RDBMS

Given the relatively modest dataset size, this sounds like a
straightforward use case for a traditional RDBMS.

Are there other criteria that are leading you to evaluate things built on
Hadoop? Are you expecting several orders of magnitude of growth in the
record count?

On Mon, Jul 13, 2015 at 5:46 AM, James Peterzon | 123dm 
wrote:
> Hi there,
> 
> We have built an (online) selection tool where marketers can select their
> target groups for marketing purposes, e.g. direct mail or telemarketing.
> Now we have been asked to build a similar selection tool on top of a Hadoop
> database. This database contains about 35 million records (companies) with
> different fields to select on (number of employees, activity code, geographical
> codes, legal form code, turnover figures, year of establishment, and so on).
> 
> Performance is very important for this online app. If one makes a selection
> with different criteria, the number of selected records should appear on
> screen within (milli)seconds.
> 
> We are not sure Hadoop will be a good choice; in our opinion, fast results
> require a well-indexed relational database…
> 
> Can anybody advise me?
> 
> Thanks!
> 
> Best regards,
> 
> James Peterzon



-- 
Sean




Re: Hadoop or RDBMS

2015-07-14 Thread James Peterzon | 123dm
Thanks, this is clear to me.

On 13-07-15 20:53, Roman Shaposhnik  wrote:

>On Mon, Jul 13, 2015 at 7:27 AM, Sean Busbey  wrote:
>> Given the relatively modest dataset size, this sounds like a
>> straightforward use case for a traditional RDBMS.
>
>To pile on top of Sean's reply: given the current estimates, a traditional
>RDBMS such as Postgres could fit the bill. If potential scalability has to
>be taken into account, you could look at MPP-type solutions to scale out
>from Postgres. Ping me off-list if you're interested in that route.
>
>To get us back on topic for user@hadoop, I'd say that a Hadoop ecosystem
>strategy for your use case has to be warranted by more than the size of
>the data. In fact, Hadoop and/or Spark would only make sense if you want
>to take advantage of the various analytical frameworks available for them.
>
>Just my 2c.
>
>Thanks,
>Roman.
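To make the indexed-RDBMS suggestion above concrete, here is a minimal sketch using Python's built-in sqlite3 module as a stand-in for a production RDBMS such as Postgres. The table and column names (companies, employees, activity_code, region_code) are invented for illustration; the point is that a composite index over the selection columns keeps a multi-criteria count fast:

```python
import sqlite3

# In-memory stand-in for the company table; schema is illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE companies (
        id INTEGER PRIMARY KEY,
        employees INTEGER,
        activity_code TEXT,
        region_code TEXT
    )
""")

# Load a small synthetic sample (a real table would hold ~35 million rows).
rows = [(i, i % 500, f"A{i % 20}", f"R{i % 10}") for i in range(10_000)]
conn.executemany("INSERT INTO companies VALUES (?, ?, ?, ?)", rows)

# A composite index over the selection criteria is what makes
# multi-criteria counts fast in a relational database.
conn.execute(
    "CREATE INDEX idx_sel ON companies (activity_code, region_code, employees)"
)

# Count matching records for one marketer's selection.
(count,) = conn.execute(
    "SELECT COUNT(*) FROM companies "
    "WHERE activity_code = ? AND region_code = ? AND employees >= ?",
    ("A3", "R3", 100),
).fetchone()
print(count)  # prints 400 for this synthetic dataset
```

On real data the same idea applies: index the columns marketers filter on, and a COUNT(*) over those predicates can be answered largely from the index rather than by scanning the table.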




Re: Incorrect configuration issue

2015-07-14 Thread khalid yatim
I found the issue: I just had to set the HADOOP_PREFIX variable in my
environment. Now it works fine!

Thank you.
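One further note on the core-site.xml quoted below: the fs.default.name key is deprecated in Hadoop 2.x, and the log message even points at its replacement, fs.defaultFS. A minimal updated property, assuming the same host and port, would look like:

```xml
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://kyahadmaster:54310</value>
  <description>Use HDFS as the file storage engine</description>
</property>
```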

2015-07-14 7:45 GMT+01:00 Brahma Reddy Battula <
brahmareddy.batt...@huawei.com>:

>  You need to configure a resolvable host name (hdfs://*kyahadmaster*:54310).
>
> Configure it like the following and start the cluster:
>
> <configuration>
>   <property>
>     <name>fs.defaultFS</name>
>     <value>hdfs://localhost:9000</value>
>   </property>
> </configuration>
>
>
>
> *For more details, check the following link*
>
>
> https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html
>
>
>
>
>
>  Thanks & Regards
>
>  Brahma Reddy Battula
>
>
>
>
>--
> *From:* khalid yatim [yatimkha...@gmail.com]
> *Sent:* Monday, July 13, 2015 11:16 PM
> *To:* user@hadoop.apache.org
> *Subject:* Re: Incorrect configuration issue
>
>
> Hello,
>
> I'm experiencing some difficulties getting a single-node Hadoop (2.6.0)
> install working.
>
> My conf files seem to be OK, but I'm getting these errors:
>
>  Incorrect configuration: namenode address
> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not
> configured.
>
>  In hadoop--namenode-.log, I'm getting
> Invalid URI for NameNode address (check fs.defaultFS): file:/// has no
> authority.
>
>  Here is the content of my core-site.xml file
>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/app/hadoop/app</value>
>     <description>Temporary Directory.</description>
>   </property>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://kyahadmaster:54310</value>
>     <description>Use HDFS as file storage engine</description>
>   </property>
> </configuration>
>
>
> 2015-07-13 17:26 GMT+00:00 khalid yatim :
>
>> Hello,
>>
>> I'm experiencing some difficulties getting a single-node Hadoop (2.6.0)
>> install working.
>>
>> My conf files seem to be OK, but I'm getting these errors:
>>
>>  Incorrect configuration: namenode address
>> dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not
>> configured.
>>
>>  In hadoop--namenode-.log, I'm getting
>> Invalid URI for NameNode address (check fs.defaultFS): file:/// has no
>> authority.
>>
>>
>> How can I configure logging to get more explicit information about
>> what's going wrong?
>>
>>
>>  I'm new here!
>>
>>  Thank you.
>>
>>  --
>> YATIM Khalid
>> 06 76 19 87 95
>>
>> INGENIEUR ENSIASTE
>> Promotion 2007
>>
>
>
>
> --
> YATIM Khalid
> 06 76 19 87 95
>
> INGENIEUR ENSIASTE
> Promotion 2007
>



-- 
YATIM Khalid
06 76 19 87 95

INGENIEUR ENSIASTE
Promotion 2007