Re: Introducing libhbase (C APIs for Apache HBase)

2014-03-18 Thread haosdent
Hi, Aditya. Thank you for your great work. I am very excited about these
issues. Does libhbase depend on Thrift?


On Tue, Mar 18, 2014 at 9:25 AM, Aditya  wrote:

> Hi,
>
> Pursuant to the JIRAs
> HBASE-10168,
> HBASE-9977 and
> HBASE-9835, I am happy
> to announce that the first draft of a JNI-based implementation
> of C APIs for HBase is now available for your review.
>
> The source and instructions to build and use it are available at MapR's
> GitHub repository. Slides from my presentation on the same
> can be downloaded from the meetup site.
>
> I will put the patches on the respective JIRAs shortly.
>
> Regards,
> aditya...
>
> 
>



-- 
Best Regards,
Haosdent Huang


Re: Introducing libhbase (C APIs for Apache HBase)

2014-03-18 Thread Aditya
No, it does not. It uses a modified version of the AsyncHBase
library over JNI.




Re: Introducing libhbase (C APIs for Apache HBase)

2014-03-18 Thread haosdent
Cool. So does it have better performance?



-- 
Best Regards,
Haosdent Huang


Re: Introducing libhbase (C APIs for Apache HBase)

2014-03-18 Thread Aditya
There is some penalty in transitioning from native code to JNI and back, but I
would expect it to perform noticeably better than a Thrift-based client. I will
publish performance (comparison) figures soon.




[jira] [Created] (HBASE-10782) Hadoop2 MR tests fail occasionally because mapreduce.jobhistory.address is not set in job conf

2014-03-18 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-10782:
---

 Summary: Hadoop2 MR tests fail occasionally because 
mapreduce.jobhistory.address is not set in job conf
 Key: HBASE-10782
 URL: https://issues.apache.org/jira/browse/HBASE-10782
 Project: HBase
  Issue Type: Test
Reporter: Liu Shaohui
Priority: Minor


Hadoop2 MR tests fail occasionally with output like this:
{code}
---
Test set: org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan1
---
Tests run: 5, Failures: 0, Errors: 5, Skipped: 0, Time elapsed: 347.57 sec <<< 
FAILURE!
testScanEmptyToAPP(org.apache.hadoop.hbase.mapreduce.TestTableInputFormatScan1) 
 Time elapsed: 50.047 sec  <<< ERROR!
java.io.IOException: java.net.ConnectException: Call From 
liushaohui-OptiPlex-990/127.0.0.1 to 0.0.0.0:10020 failed on connection 
exception: java.net.ConnectException: Connection refused; For more details see: 
 http://wiki.apache.org/hadoop/ConnectionRefused
at 
org.apache.hadoop.mapred.ClientServiceDelegate.invoke(ClientServiceDelegate.java:334)
at 
org.apache.hadoop.mapred.ClientServiceDelegate.getJobStatus(ClientServiceDelegate.java:419)
at org.apache.hadoop.mapred.YARNRunner.getJobStatus(YARNRunner.java:524)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:314)
at org.apache.hadoop.mapreduce.Job$1.run(Job.java:311)
at java.security.AccessController.doPrivileged(Native Method)
 ...
{code}
The reason: while an MR job is running, the job client pulls the job status 
from the AppMaster. When the job completes, the AppMaster exits. At this 
point, if the job client has not yet received the job-completed event from the 
AppMaster, it tries to get the job report from the history server.

But in HBaseTestingUtility#startMiniMapReduceCluster, the config 
mapreduce.jobhistory.address is not copied to the TestUtil's config.
 
CRUNCH-249 reported the same problem.
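
A fix along these lines might look like the following minimal sketch (this 
assumes HBaseTestingUtility internals; the conf and mrCluster field names are 
illustrative, not the actual patch):
{code}
// After the MiniMRCluster starts, copy the history server address it chose
// into the utility's own configuration so that job clients can reach it.
JobConf jobConf = mrCluster.createJobConf();
conf.set("mapreduce.jobhistory.address",
    jobConf.get("mapreduce.jobhistory.address"));
{code}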



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Introducing libhbase (C APIs for Apache HBase)

2014-03-18 Thread haosdent
> Will publish performance (comparison) figures soon.

Great job!



-- 
Best Regards,
Haosdent Huang


[jira] [Created] (HBASE-10783) Backport HBASE-10476 to 0.94

2014-03-18 Thread haosdent (JIRA)
haosdent created HBASE-10783:


 Summary: Backport HBASE-10476 to 0.94
 Key: HBASE-10783
 URL: https://issues.apache.org/jira/browse/HBASE-10783
 Project: HBase
  Issue Type: Bug
Reporter: haosdent
Assignee: haosdent






--
This message was sent by Atlassian JIRA
(v6.2#6252)


Is there a better way to handle too much log

2014-03-18 Thread haosdent
Sometimes a call to Log.xxx cannot return when the disk partition holding the
log path is full, and HBase hangs because of this. So I wonder whether there is
a better way to handle excessive logging. For example, through a configuration
item in hbase-site.xml, we could delete old logs periodically, or delete old
logs when the disk doesn't have enough space.

I think it is unacceptable for HBase to hang when disk space runs out. Looking
forward to your ideas. Thanks in advance.

-- 
Best Regards,
Haosdent Huang


Re: Is there a better way to handle too much log

2014-03-18 Thread Ted Yu
Can you utilize 
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
 ?

And have a cron job clean up old logs?

Cheers



Re: Is there a better way to handle too much log

2014-03-18 Thread haosdent
Thanks for your reply. DailyRollingFileAppender and a cron job could work in
the normal scenario. But sometimes the log grows too fast, or disk space may be
used by other applications. Is there a way to make logging more "smart" and
choose a policy according to the current disk space?





-- 
Best Regards,
Haosdent Huang


Re: Is there a better way to handle too much log

2014-03-18 Thread Ted Yu
If the log grows so fast that disk space is about to be exhausted, verbosity 
should be lowered.

Have you turned on DEBUG logging?

Cheers



Re: Is there a better way to handle too much log

2014-03-18 Thread Jean-Marc Spaggiari
Hey, there are some workarounds like what Ted described, but I think it's
still an issue if we block all operations because we are not able to
write to the logs.




Re: Is there a better way to handle too much log

2014-03-18 Thread haosdent
Yep, I use the INFO level. Let me think about this some more. If I find a
better way, I will open an issue and record it there. Thanks for your great
help, @tedyu.





-- 
Best Regards,
Haosdent Huang


[GitHub] hbase pull request: Trunk

2014-03-18 Thread carp84
GitHub user carp84 opened a pull request:

https://github.com/apache/hbase/pull/9

Trunk

Hi Anoop, this is Yu Li. Could you give me pull access to your trunk 
code for HBASE-10713? Thanks. :-)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/anoopsjohn/hbase trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hbase/pull/9.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #9


commit 05a71880d47bf6621864cff55210334471ef63ff
Author: anoopsjohn 
Date:   2014-03-16T18:35:53Z

HBASE-10713

commit 4379549009db0fe46480b040e239392e876ee689
Author: anoopsjohn 
Date:   2014-03-17T09:19:42Z

HBASE-10713

commit dfb7f12ca658a432f2d6a8da75fd189b1db8bc62
Author: anoopsjohn 
Date:   2014-03-17T10:43:00Z

HBASE-10713




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Resolved] (HBASE-10779) Doc hadoop1 deprecated in 0.98 and NOT supported in hbase 1.0

2014-03-18 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10779?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-10779.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Thanks for the review, Jon.  I made the changes you suggested and committed.  
Yes, hbase 1.0 is not tested on hadoop 2.3.  We can change that when we come 
close to release (we'll probably ship against hadoop 2.4).

> Doc hadoop1 deprecated in 0.98 and NOT supported in hbase 1.0
> -
>
> Key: HBASE-10779
> URL: https://issues.apache.org/jira/browse/HBASE-10779
> Project: HBase
>  Issue Type: Sub-task
>Reporter: stack
>Assignee: stack
> Fix For: 0.99.0
>
> Attachments: 10779.txt, configuration.html
>
>
> Do first two bullet items from parent issue adding doc to our hadoop support 
> matrix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10784) [89-fb] Avoid the unnecessary memory copy for RowCol and DeleteColumn Bloom filters

2014-03-18 Thread Liyin Tang (JIRA)
Liyin Tang created HBASE-10784:
--

 Summary: [89-fb] Avoid the unnecessary memory copy for RowCol and 
DeleteColumn Bloom filters
 Key: HBASE-10784
 URL: https://issues.apache.org/jira/browse/HBASE-10784
 Project: HBase
  Issue Type: Improvement
Reporter: Liyin Tang


For adding to and querying the rowcol and deleteColumn Bloom filters, there are 
multiple unnecessary memory copy operations. This jira is to address that 
concern and avoid creating these dummy bloom keys as much as possible.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Splitting table by column family

2014-03-18 Thread Mikhail Antonov
Hey guys,

I have one question regarding splitting a table into regions, which I
couldn't find a complete answer for.

In various sources I found that a region split divides the table
by rowkey, and that a single row is always guaranteed to be contained
within a single region.

However, I've also heard that for huge (wide) rows, there
is a possibility to split them by column families, i.e. a huge
single row could actually have its column families located in different
regions; hence, a single row could span multiple regions.

Having looked at the regionserver sources around SplitTransaction, I
couldn't find anything related to splitting a row by column families.

Could you please advise me whether it's actually possible or not?

-- 
Thanks,
Michael Antonov


Re: Splitting table by column family

2014-03-18 Thread Ted Yu
bq. I've heard also information that for the huge (wide) rows ...

That is not true.
Where did you get such information?




Re: [VOTE] The 1st HBase 0.98.1 release candidate (RC0) is available

2014-03-18 Thread Ted Yu
+1

- checked documentation and tarball

- Ran unit test suite which passed

- Ran in local and distributed mode
- checked the UI pages


On Mon, Mar 17, 2014 at 4:46 PM, Andrew Purtell  wrote:

> Adding VOTE tag to subject
>
> and a clarification (pardon the typo):
>
> This vote will run for 14 days given that a few RCs have stacked up this
> month. Please try out the candidate and vote +1/-1 by midnight Pacific Time
> (00:00 -0800 GMT) on March 31 on whether or not we should release this as
> 0.98.1.
>
>
> On Mon, Mar 17, 2014 at 4:42 PM, Andrew Purtell 
> wrote:
>
> > The 1st HBase 0.98.1 release candidate (RC0) is available for download at
> > http://people.apache.org/~apurtell/0.98.1RC0/ and Maven artifacts are
> > also available in the temporary repository
> > https://repository.apache.org/content/repositories/orgapachehbase-1007.
> >
> > Signed with my code signing key D5365CCD.
> >
> > The issues resolved in this release can be found here:
> >
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12310753&version=12325664
> >
> >
> > This vote will run for 14 days given that a few RCs have stacked up this
> > month. Please try out the candidate and vote +1/-1 by midnight Pacific
> Time
> > (00:00 -0800 GMT) on February 31 on whether or not we should release this
> > as 0.98.1.
> >
> > --
> > Best regards,
> >
> >- Andy
> >
> > Problems worthy of attack prove their worth by hitting back. - Piet Hein
> > (via Tom White)
> >
>
>
>
> --
> Best regards,
>
>- Andy
>
> Problems worthy of attack prove their worth by hitting back. - Piet Hein
> (via Tom White)
>


Re: Splitting table by column family

2014-03-18 Thread Mikhail Antonov
Thanks for the quick reply, Ted!

I believe I saw it on Stack Overflow, but I can't quickly find the link
now. I primarily asked because I thought it might be some low-level
implementation detail, hence not widely mentioned or discussed. So,
including the latest releases (I looked through the release notes of 0.98 and
0.96 briefly and didn't find anything relevant), the right answer still is
that a single row is always located in one region?

-Mikhail





-- 
Thanks,
Michael Antonov


Re: Splitting table by column family

2014-03-18 Thread Ted Yu
bq. single row is always located in the one region

Yes.
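
To make the point concrete, here is a small illustration with the 0.94-era Java 
admin API (the table, family, and split-key values are made up): splits are 
expressed purely as rowkey boundaries, and column families never enter into it.
{code}
HTableDescriptor desc = new HTableDescriptor("t1");
desc.addFamily(new HColumnDescriptor("cf1"));
desc.addFamily(new HColumnDescriptor("cf2"));
// Three regions: (-inf,"m"), ["m","t"), ["t",+inf).
// Every region holds BOTH column families for its rowkey range.
admin.createTable(desc,
    new byte[][] { Bytes.toBytes("m"), Bytes.toBytes("t") });
{code}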




Re: Splitting table by column family

2014-03-18 Thread Mikhail Antonov
I see - thank you!

-Mikhail




-- 
Thanks,
Michael Antonov


Re: Is there a better way to handle too much log

2014-03-18 Thread Enis Söztutar
DRFA already deletes old logs, so you do not necessarily have to have a cron
job.

You can use RollingFileAppender to limit the max file size, and the number of
log files to keep around.

Check out conf/log4j.properties.
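For example, a minimal RollingFileAppender setup in conf/log4j.properties might
look like this (the appender name "RFA" and the size/count values are only
illustrative; tune them to your disk budget):
{code}
log4j.rootLogger=INFO,RFA
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hbase.log.dir}/${hbase.log.file}
# Cap each file at 256MB and keep at most 20 rolled files, bounding
# total log disk usage to roughly 5GB.
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
{code}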
Enis




[jira] [Created] (HBASE-10785) Meta's own location should be cached

2014-03-18 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-10785:
-

 Summary: Meta's own location should be cached
 Key: HBASE-10785
 URL: https://issues.apache.org/jira/browse/HBASE-10785
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar


With the ROOT table gone, we no longer cache the location of the meta table (in 
MetaCache) in 96+. I've checked the 94 code, and there we cache meta, but not 
root.

However, not caching meta's own location means that we do a zookeeper request 
every time we want to look up a region's location from meta. This means that 
there is a significant spike in zk requests whenever a region server goes 
down. 

This affects trunk, 0.98 and 0.96, as well as the hbase-10070 branch. I 
discovered the issue in hbase-10070 because the integration test (HBASE-10572) 
results in 150K requests to zk in 10 min. 

A thread dump from one of the runs has 100+ threads from the client in this 
stack trace: 
{code}
"TimeBoundedMultiThreadedReaderThread_20" prio=10 
tid=0x7f852c2f2000 nid=0x57b6 in Object.wait() [0x7f85059e7000]
   java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at 
org.apache.zookeeper.ClientCnxn.submitRequest(ClientCnxn.java:1309)
- locked <0xea71aa78> (a 
org.apache.zookeeper.ClientCnxn$Packet)
at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1149)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.getData(RecoverableZooKeeper.java:337)
at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.getData(ZKUtil.java:684)
at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.blockUntilAvailable(ZKUtil.java:1853)
at 
org.apache.hadoop.hbase.zookeeper.MetaRegionTracker.blockUntilAvailable(MetaRegionTracker.java:186)
at 
org.apache.hadoop.hbase.client.ZooKeeperRegistry.getMetaRegionLocation(ZooKeeperRegistry.java:60)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1126)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1112)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1220)
at 
org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1129)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.getRegionLocations(RpcRetryingCallerWithReadReplicas.java:321)
at 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.call(RpcRetryingCallerWithReadReplicas.java:257)
- locked <0xe9bcf238> (a 
org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas)
at org.apache.hadoop.hbase.client.HTable.get(HTable.java:818)
at 
org.apache.hadoop.hbase.util.MultiThreadedReader$HBaseReaderThread.queryKey(MultiThreadedReader.java:288)
at 
org.apache.hadoop.hbase.util.MultiThreadedReader$HBaseReaderThread.readKey(MultiThreadedReader.java:249)
at 
org.apache.hadoop.hbase.util.MultiThreadedReader$HBaseReaderThread.runReader(MultiThreadedReader.java:192)
at 
org.apache.hadoop.hbase.util.MultiThreadedReader$HBaseReaderThread.run(MultiThreadedReader.java:150)
{code}
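
A fix in the direction this issue suggests might look like the following 
minimal sketch (the field and registry names are illustrative, not HBase's 
actual implementation):
{code}
// Consult a cached meta location first; fall back to ZooKeeper only on a
// cache miss, and clear the cache when a meta RPC fails.
private volatile HRegionLocation cachedMetaLocation;

HRegionLocation locateMeta() throws IOException {
  HRegionLocation loc = cachedMetaLocation;
  if (loc != null) {
    return loc;                             // served from cache, no ZK call
  }
  loc = registry.getMetaRegionLocation();   // the expensive ZK lookup
  cachedMetaLocation = loc;
  return loc;
}

void clearMetaLocationCache() {             // invoked on meta RPC failure
  cachedMetaLocation = null;
}
{code}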



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10786) If snapshot verification fails with 'Regions moved', the message should contain the name of region causing the failure

2014-03-18 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10786:
--

 Summary: If snapshot verification fails with 'Regions moved', the 
message should contain the name of region causing the failure
 Key: HBASE-10786
 URL: https://issues.apache.org/jira/browse/HBASE-10786
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor


I was trying to find the cause of the test failure in 
https://builds.apache.org/job/PreCommit-HBASE-Build/9036//testReport/org.apache.hadoop.hbase.snapshot/TestSecureExportSnapshot/testExportRetry/
 :
{code}
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: 
org.apache.hadoop.hbase.snapshot.HBaseSnapshotException: Snapshot { 
ss=emptySnaptb0-1395177346656 table=testtb-1395177346656 type=FLUSH } had an 
error.  Procedure emptySnaptb0-1395177346656 { waiting=[] done=[] }
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:342)
at 
org.apache.hadoop.hbase.master.HMaster.isSnapshotDone(HMaster.java:3007)
at 
org.apache.hadoop.hbase.protobuf.generated.MasterProtos$MasterService$2.callBlockingMethod(MasterProtos.java:40494)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2020)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:98)
at 
org.apache.hadoop.hbase.ipc.FifoRpcScheduler$1.run(FifoRpcScheduler.java:73)
at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
at java.util.concurrent.FutureTask.run(FutureTask.java:138)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Caused by: org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException via 
Failed taking snapshot { ss=emptySnaptb0-1395177346656 
table=testtb-1395177346656 type=FLUSH } due to exception:Regions moved during 
the snapshot '{ ss=emptySnaptb0-1395177346656 table=testtb-1395177346656 
type=FLUSH }'. expected=9 
snapshotted=8:org.apache.hadoop.hbase.snapshot.CorruptedSnapshotException: 
Regions moved during the snapshot '{ ss=emptySnaptb0-1395177346656 
table=testtb-1395177346656 type=FLUSH }'. expected=9 snapshotted=8
at 
org.apache.hadoop.hbase.errorhandling.ForeignExceptionDispatcher.rethrowException(ForeignExceptionDispatcher.java:83)
at 
org.apache.hadoop.hbase.master.snapshot.TakeSnapshotHandler.rethrowExceptionIfFailed(TakeSnapshotHandler.java:320)
at 
org.apache.hadoop.hbase.master.snapshot.SnapshotManager.isSnapshotDone(SnapshotManager.java:332)
... 11 more
{code}
However, it is not clear which region caused the verification to fail.
I searched for logs from the balancer but found none.

The exception message should include the name of the region that caused the 
verification to fail.
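
One possible shape for the improved message (a sketch only; the variable names 
are made up, not the actual code in TakeSnapshotHandler):
{code}
Set<String> missing = new HashSet<String>(expectedRegionNames);
missing.removeAll(snapshottedRegionNames);
throw new CorruptedSnapshotException("Regions moved during the snapshot '"
    + snapshotDesc + "'. expected=" + expectedRegionNames.size()
    + " snapshotted=" + snapshottedRegionNames.size()
    + " missing=" + missing);  // name the regions that were not snapshotted
{code}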



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-10787) TestHCM#testConnection* take too long

2014-03-18 Thread Ted Yu (JIRA)
Ted Yu created HBASE-10787:
--

 Summary: TestHCM#testConnection* take too long
 Key: HBASE-10787
 URL: https://issues.apache.org/jira/browse/HBASE-10787
 Project: HBase
  Issue Type: Task
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Attachments: 10787-v1.txt

TestHCM#testConnectionClose takes more than 5 minutes on Apache Jenkins.
The test can be shortened when the retry count is lowered.
On my Mac,
without patch:
{code}
Running org.apache.hadoop.hbase.client.TestHCM
2014-03-18 15:46:57.695 java[71368:1203] Unable to load realm info from 
SCDynamicStore
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 242.2 sec
{code}
with patch:
{code}
Running org.apache.hadoop.hbase.client.TestHCM
2014-03-18 15:40:44.013 java[71184:1203] Unable to load realm info from 
SCDynamicStore
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 100.465 sec
{code}
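
The change amounts to something like the following in the test setup (a sketch; 
the exact retry value is illustrative):
{code}
// Lower the client retry count so connection-failure paths give up sooner.
Configuration conf = TEST_UTIL.getConfiguration();
conf.setInt(HConstants.HBASE_CLIENT_RETRIES_NUMBER, 3);  // default is far higher
{code}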



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Is there a better way to handle too much log

2014-03-18 Thread Ted Yu
Here is a related post:
http://stackoverflow.com/questions/13864899/log4j-dailyrollingfileappender-are-rolled-files-deleted-after-some-amount-of




[jira] [Resolved] (HBASE-10783) Backport HBASE-10476 to 0.94

2014-03-18 Thread haosdent (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

haosdent resolved HBASE-10783.
--

Resolution: Won't Fix

> Backport HBASE-10476 to 0.94
> 
>
> Key: HBASE-10783
> URL: https://issues.apache.org/jira/browse/HBASE-10783
> Project: HBase
>  Issue Type: Bug
>Reporter: haosdent
>Assignee: haosdent
> Attachments: HBASE-10783-94.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: Is there a better way to handle too much log

2014-03-18 Thread haosdent
Cool, I didn't know about "the max file size, and number of log files" options
before. Thank you very much.





-- 
Best Regards,
Haosdent Huang


PreCommit-HBASE-Build failed for too many times

2014-03-18 Thread liushaohui

hi:

PreCommit-HBASE-Build has failed too many times since build 9025.

https://builds.apache.org/job/PreCommit-HBASE-Build/

After comparing the tests in the successful build 9024 with later builds, I 
found that the test TestHCM is missing from the failed builds.

Maybe TestHCM hung in the background. Can someone help check it on 
the Jenkins machine?

I ran TestHCM several times and couldn't reproduce the failure or the hang.

- Shaohui Liu



Re: PreCommit-HBASE-Build failed for too many times

2014-03-18 Thread Ted Yu
Please take a look at HBASE-10787. 

Cheers 



Re: PreCommit-HBASE-Build failed for too many times

2014-03-18 Thread liushaohui

Thanks, Ted.

Sorry for missing this issue.

-Shaohui Liu





[jira] [Created] (HBASE-10788) Add 99th percentile of latency in PE

2014-03-18 Thread Liu Shaohui (JIRA)
Liu Shaohui created HBASE-10788:
---

 Summary: Add 99th percentile of latency in PE
 Key: HBASE-10788
 URL: https://issues.apache.org/jira/browse/HBASE-10788
 Project: HBase
  Issue Type: Improvement
Reporter: Liu Shaohui
Assignee: Liu Shaohui
Priority: Minor


In a production environment, the 99th percentile of latency is more important 
than the average. The 99th percentile is helpful for measuring the influence of 
GC and of slow reads/writes in HDFS.
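
A minimal way to compute it from collected per-operation latencies (an 
illustrative sketch, not PE's actual code):
{code}
import java.util.Arrays;

final class LatencyStats {
  /** Returns the sample at or above the given percentile, e.g. pct = 99.0. */
  static long percentile(long[] latenciesMs, double pct) {
    long[] sorted = latenciesMs.clone();
    Arrays.sort(sorted);
    int idx = (int) Math.ceil(pct / 100.0 * sorted.length) - 1;
    return sorted[Math.max(idx, 0)];
  }
}
{code}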



--
This message was sent by Atlassian JIRA
(v6.2#6252)


Re: PreCommit-HBASE-Build failed for too many times

2014-03-18 Thread Stack
On Tue, Mar 18, 2014 at 8:15 PM, Ted Yu  wrote:

> Please take a look at HBASE-10787.
>

The question was about TestHCM being a zombie in precommits, not about how
long it runs, which seems to be what HBASE-10787 addresses.  Is there
something I am missing?
Thanks,
St.Ack


Re: PreCommit-HBASE-Build failed for too many times

2014-03-18 Thread haosdent
From the Hudson console:

[INFO] HBase - Server  FAILURE
[1:03:34.814s]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12-TRUNK-HBASE-2:test
(secondPartTestsExecution) on project hbase-server: Failure or timeout
-> [Help 1]


It looks like TestHCM took too long and the build failed, as described in HBASE-10787.







-- 
Best Regards,
Haosdent Huang