From a quick perusal of the posted log, it looks like hbase is staying up?
 Is it having problems, Dmitriy, other than slowness after you made changes
like xceivers and upped tickTime?  I'll take a closer look later.
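
(For reference: the tickTime mentioned above is the ZooKeeper tick. When
HBase manages its own ZooKeeper it can be set in hbase-site.xml via the
pass-through property below; the property name assumes the 0.20-era
convention and the value is only an example.)

<property>
        <name>hbase.zookeeper.property.tickTime</name>
        <value>6000</value>
</property>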

St.Ack

On Wed, Jan 13, 2010 at 4:35 AM, Dmitriy Lyfar <[email protected]> wrote:

> Sorry, I forgot to insert the link to the DEBUG regionserver logs:
> http://pastebin.com/m70f01f36
>
> 2010/1/13 Dmitriy Lyfar <[email protected]>
>
> > Hi Stack,
> >
> > Thank you for your help. I set xceivers in the HDFS XML config like:
> >
> > <property>
> >         <name>dfs.datanode.max.xcievers</name>
> >         <value>8192</value>
> > </property>
> >
> > And ulimit is 32K for sure. I turned off the DEBUG logging level for hbase,
> > and here is the log for one of the regionservers after I inserted 200K
> > records (each row is 25KB).
> > The speed is still the same (about 1K rows per second).
> > Random ints play the role of row keys now (i.e. a uniform random
> > distribution on (0, 100 * 1000)).
> > What do you think, is 5GB for hbase and 2GB for HDFS enough?
> >
> >
> >> What are your tasktrackers doing?   Are they doing the hbase loading?  You
> >> might try turning down how many tasks run concurrently on each
> >> tasktracker.  The running tasktrackers may be sucking resources from hdfs
> >> (and thus, by association, from hbase): i.e. mapred.map.tasks and
> >> mapred.reduce.tasks (Pardon me if this advice has been given previously
> >> and you've already acted on it).
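> >
> > (For reference, the per-tasktracker concurrency caps are
> > mapred.tasktracker.map.tasks.maximum and
> > mapred.tasktracker.reduce.tasks.maximum in mapred-site.xml; a minimal
> > sketch, assuming Hadoop 0.20-era property names, with example values:)
> >
> > <property>
> >         <name>mapred.tasktracker.map.tasks.maximum</name>
> >         <value>2</value>
> > </property>
> > <property>
> >         <name>mapred.tasktracker.reduce.tasks.maximum</name>
> >         <value>2</value>
> > </property>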
> >
> >
> > The tasktrackers are not used now (I planned them for future use in
> > statistical analysis), so I turned them off for the last tests. The data
> > uploader is several clients which run simultaneously on the name node, and
> > each of them inserts 100K records.
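> >
> > For concreteness, here is a minimal sketch of what one such uploader
> > client could look like, assuming the 0.20-era HBase client API; the table
> > name, column family, and write-buffer batching (setAutoFlush/flushCommits)
> > are assumptions on my part, not necessarily what the real clients do:
> >
> > import java.util.Random;
> >
> > import org.apache.hadoop.hbase.HBaseConfiguration;
> > import org.apache.hadoop.hbase.client.HTable;
> > import org.apache.hadoop.hbase.client.Put;
> > import org.apache.hadoop.hbase.util.Bytes;
> >
> > public class Uploader {
> >     public static void main(String[] args) throws Exception {
> >         HTable table = new HTable(new HBaseConfiguration(), "test_table");
> >         // Buffer puts client-side instead of doing one RPC per row.
> >         table.setAutoFlush(false);
> >         table.setWriteBufferSize(12 * 1024 * 1024);
> >         Random rnd = new Random();
> >         byte[] value = new byte[25 * 1024];  // 25KB payload per row
> >         for (int i = 0; i < 100 * 1000; i++) {
> >             // Row keys are uniform random ints on (0, 100 * 1000).
> >             Put put = new Put(Bytes.toBytes(rnd.nextInt(100 * 1000)));
> >             put.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), value);
> >             table.put(put);
> >         }
> >         table.flushCommits();  // flush any puts still in the client buffer
> >         table.close();
> >     }
> > }
> >
> > If the real clients autoflush each put, turning autoflush off and batching
> > like this is often the first thing to try for raw insert throughput.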
> >
> > --
> > Regards, Lyfar Dmitriy
> >
>
>
>
> --
> Regards, Lyfar Dmitriy
> mailto: [email protected]
> jabber: [email protected]
>
