Fwd: Call for Presentations, Community Over Code Asia 2023

2023-06-05 Thread Duo Zhang
FYI. The CFP deadline has been extended to 6.18; please submit your presentations :)

A Chinese-language introduction to the conference is available at: https://www.bagevent.com/event/8409854



-- Forwarded message -
From: Rich Bowen 
Date: Tue, Jun 6, 2023 00:09
Subject: Call for Presentations, Community Over Code Asia 2023
To: rbo...@apache.org 


You are receiving this message because you are subscribed to one or more
developer mailing lists at the Apache Software Foundation.

The call for presentations is now open at
"https://apachecon.com/acasia2023/cfp.html", and will be closed by
Sunday, Jun 18th, 2023 11:59 PM GMT.

The event will be held in Beijing, China, August 18-20, 2023.

We are looking for presentations about anything relating to Apache
Software Foundation projects, open-source governance, community, and
software development.
In particular, this year we are building content tracks around the
following specific topics/projects:

AI / Machine learning
API / Microservice
Community
CloudNative
Data Storage & Computing
DataOps
Data Lake & Data Warehouse
OLAP & Data Analysis
Performance Engineering
Incubator
IoT/IIoT
Messaging
RPC
Streaming
Workflow / Data Processing
Web Server / Tomcat

If your proposed presentation falls into one of these categories,
please select that topic in the CFP entry form. Or select Others if
it’s related to another topic or project area.

Looking forward to hearing from you!

Willem Jiang, and the Community Over Code planners


Fwd: [ANNOUNCE] Jira Account Cleanup for Non-ASF Users

2023-04-27 Thread Duo Zhang
FYI.

-- Forwarded message -
From: Chris Lambertus 
Date: Thu, Apr 27, 2023 04:10
Subject: [ANNOUNCE] Jira Account Cleanup for Non-ASF Users
To: 



Hi folks,

Effective immediately, Infra will be performing general cleanup operations on
our Jira user base. Due to licensing changes required by Atlassian, our active
license count must be reduced to around 20,000 active users, including the
approximately 8000 ASF LDAP users. Our Jira instance currently has 250,044
active users, the vast majority of whom are external (non-LDAP).

Infra is taking a multi-pronged approach to reducing our active license count:

1. All accounts which have no activity (never reported, created, commented on,
or were assigned to a ticket) have been removed.

2. All accounts with no known activity for the last two years will be marked
"inactive."

3. All accounts with minimal activity (reported a small number of issues, or
made a small number of comments) will be marked "inactive."

Any account marked "inactive" will be unable to log in to Jira, and the owner
must contact Infra to have their account re-activated. [1]

All data associated with a de-activated account remains in place; we are not
removing any issues/comments/metadata/etc.

Important note: All of the steps above only target external users. We are not
applying them to LDAP users.

If you have an external (non-LDAP) Jira account matching any of the above
criteria, your external account MAY be marked as inactive. If this happens,
please use your LDAP account to log in to Jira and open an Infra Jira ticket
requesting that we merge your LDAP and external accounts.

[1] We will provide a form de-activated external users can use to re-activate
their account, should they choose to do so. The form will be available from
the Jira banner when it is ready.

-Chris
ASF Infra


Fwd: TAC supporting Berlin Buzzwords

2023-03-28 Thread Nick Dimiduk
FYI

-- Forwarded message -
From: Gavin McDonald 
Date: Fri, Mar 24, 2023 at 10:57
Subject: TAC supporting Berlin Buzzwords
To: 


PMCs,

Please forward to your dev and user lists.

Hi All,

The ASF Travel Assistance Committee is supporting taking up to six (6) people
to attend Berlin Buzzwords in June this year.

This includes Conference passes, and travel & accommodation as needed.

Please see our website at https://tac.apache.org for more information and
how to apply.

Applications close on 15th April.

Good luck to those that apply.

Gavin McDonald (VP TAC)


Fwd: Infra 2022 Survey

2022-12-02 Thread Nick Dimiduk
ASF Infra is soliciting feedback. Committers, make your voices heard.

Thanks,
Nick

-- Forwarded message -
From: Chris Thistlethwaite 
Date: Thu, Dec 1, 2022 at 22:49
Subject: Infra 2022 Survey
To: 


We're going to start using regular surveys to better understand the
heart of the problems that our projects and podlings face.

Our plan with surveys is twofold.

First, to get a finger on the pulse of projects and a "2022 year in
review" baseline. Since this is the first time we're trying this, it's
likely that future surveys will contain different questions.

Second, we'll be sending out another survey on a regular basis,
something like every six months or so, just to keep a feedback loop
going.

Surveys will be anonymous and results will be posted on the Infra blog.
Depending on timing, they will also be discussed in the roundtables if
the data is pertinent.

The first one is posted: https://infra.apache.org/surveys/survey-1.html

This survey will be open until the end of the month (12/30/2022). It
isn't restricted to PMCs only; we'd really like feedback from any
committer.

All the surveys will be short, likely less than 10 questions. The data
will be shared on the blog once the survey has been closed and the
results analyzed.

Thank you,
Chris T.
#asfinfra


Fwd: [ANNOUNCE] Changes to Jira Account Creation (issues.a.o/jira)

2022-10-21 Thread Duo Zhang
Because of spam users, the Infra team plans to shut down self-registration
of Jira accounts and suggests that ASF projects make use of GitHub Issues
for tracking customer-facing questions/bugs.

What should we do?

-- Forwarded message -
From: fluxo 
Date: Sat, Oct 22, 2022 09:02
Subject: [ANNOUNCE] Changes to Jira Account Creation (issues.a.o/jira)
To: 


Hello PMC members,

As I'm sure most of you are aware, the spam issues on Jira are getting
worse. We are seeing spam user creation of over 10,000 accounts per
year, and receive many requests per month from project members for
help addressing spam complaints. Infra is taking steps to disable
public Jira signups.

Infra has developed a self-service tool by which folks on a PMC can
request a Jira account for non-ASF contributors:


https://selfserve.apache.org/


Click "Create a Jira user account" to go to:


https://selfserve.apache.org/jira-acct.html


You need to enter a username for the new Jira account. We will reject
the request if there is an existing account with that username. If
this person may ultimately become a committer, Infra recommends that
they choose a username that they can also use for their LDAP username.

Next, the tool asks you to enter their Display Name. This is the
"public name" which will appear on all their Jira posts and comments.

Last, the tool asks you to enter the user's email address. We expect
the PMC to exercise due diligence in making sure the contributor's
email works. If it does not, they will not get the password reset
mail.


Infra knows this process change places an increasing burden on PMC
members for managing contributors, and makes it harder for people to
contribute bug reports. We suggest projects consider using GitHub
Issues for customer-facing questions/bug reports/etc., while
maintaining development issues on Jira. You can enable GitHub Issues
for your repository via
https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features#Git.asf.yamlfeatures-Repositoryfeatures


Infra has targeted 6 November for the date we switch off public
signups for issues.apache.org/jira. Please let us know if this will
place any significant burden on your teams. We are following an
aggressive timeline because of the serious impact spam users have on
the safety and stability of our infrastructure.

As always, if you have any questions or comments about this, please let us know!

-Chris (fluxo)

--
@fluxo
Chris Lambertus
ASF Infrastructure


Fwd: [NOTICE] Dependabot Updates enabled for all projects

2022-04-05 Thread Wei-Chiu Chuang
-- Forwarded message -
From: Chris Lambertus 
Date: Tue, Apr 5, 2022 at 12:38 PM
Subject: [NOTICE] Dependabot Updates enabled for all projects
To: 



Hi folks,

Infra is pleased to announce that GitHub’s Dependabot service has been
approved for use by ASF Legal and Infra, and is now enabled for all repos.
Dependabot will create PRs in your repo with recommended security updates
for your project. It is entirely up to the project to accept or reject
these PRs.

Dependabot Alerts can also be configured per-project, but currently the
notifications go to Org Admins only. If your project wishes to receive
Dependabot Alerts via email, please open an Infra Jira ticket so that we
can add your committer team to the alerts.

-Chris
ASF Infra


Fwd: H Node renaming

2021-10-19 Thread Wei-Chiu Chuang
It looks like a number of HBase Jenkins jobs are still running on the H
nodes today, so I'm forwarding this to the hbase dev community just in case.

-- Forwarded message -
From: Gavin McDonald 
Date: Tue, Sep 28, 2021 at 1:13 PM
Subject: H Node renaming
To: 


Sent to builds@; adding here also.
Hi All,

I have written a short proposal to rename all the H nodes, both
within Jenkins, and also to add ASF DNS for Infra use.

Let me know if anyone has any questions or concerns.

Thanks

https://cwiki.apache.org/confluence/display/INFRA/H+node+renaming+Proposal



-- 

*Gavin McDonald*
Systems Administrator
ASF Infrastructure Team


Fwd: [RESULT] [VOTE] First release candidate for HBase 2.4.6 (RC0) is available

2021-09-23 Thread Andrew Purtell
It seems I may have sent this one to myself.

-- Forwarded message -
From: Andrew Purtell 
Date: Mon, Sep 13, 2021 at 4:38 PM
Subject: [RESULT] [VOTE] First release candidate for HBase 2.4.6 (RC0) is
available
To: Andrew Purtell 


With 5 binding +1s, including my own, one non-binding +1, one binding
+0, and no other votes, this vote passes.

Thanks to all who voted on the release candidate.


-- 
Best regards,
Andrew

Words like orphans lost among the crosstalk, meaning torn from truth's
decrepit hands
   - A23, Crosstalk


Fwd: Error while running github feature from .asf.yaml in hbase-connectors!

2021-08-24 Thread Peter Somogyi
Hi,

I merged a commit to hbase-connectors from GitHub and got this automated
email from Infra.
The .asf.yaml files look identical in hbase [1] and hbase-connectors [2]
repositories.
Do you have any idea what could be the problem?

- Peter

[1] https://github.com/apache/hbase/blob/master/.asf.yaml
[2] https://github.com/apache/hbase-connectors/blob/master/.asf.yaml


-- Forwarded message -
From: Apache Infrastructure 
Date: Tue, Aug 24, 2021 at 11:00 AM
Subject: Error while running github feature from .asf.yaml in
hbase-connectors!
To: , 



An error occurred while running github feature in .asf.yaml!:
dump_all() got an unexpected keyword argument 'sort_keys'


Fwd: Anyone could Help To Review These 2 HBase BulkLoad Related Patches

2020-06-08 Thread Chang Wu
Hi,
I have 2 patches which fix some issues I found in the BulkLoad feature of HBase.
They have been refined and currently everything looks fine.
It would be much appreciated if anyone could spare some time to take a further
look at these 2 patches.
HBASE-24403 FsDelegationToken Should Cache Token After Acquired A New One :
Issue: https://issues.apache.org/jira/browse/HBASE-24403
Patch: https://github.com/apache/hbase/pull/1743

HBASE-24420 Avoid Meaningless Retry Attempts in Unrecoverable Failure
Issue: https://issues.apache.org/jira/browse/HBASE-24420
Patch: https://github.com/apache/hbase/pull/1764

Thanks,
Chang Wu


Fwd: Anyone could Help To Review These 2 HBase BulkLoad Related Patches

2020-06-02 Thread Chang Wu
Hi,
I have 2 patches which fix some issues I found in the BulkLoad feature of HBase.
They were reviewed at the very beginning but there has been no response since then.
It would be much appreciated if anyone could spare some time to take a further
look at these 2 patches.
HBASE-24403 FsDelegationToken Should Cache Token After Acquired A New One :
Issue: https://issues.apache.org/jira/browse/HBASE-24403
Patch: https://github.com/apache/hbase/pull/1743

HBASE-24420 Avoid Meaningless Retry Attempts in Unrecoverable Failure
Issue: https://issues.apache.org/jira/browse/HBASE-24420
Patch: https://github.com/apache/hbase/pull/1764

Thanks,
Chang Wu


Fwd: [DISCUSS] EOL Hadoop branch-2.8

2020-02-25 Thread Wei-Chiu Chuang
FYI.
Also filed HBASE-23893 to keep track of the relevant work in HBase.

I'd like to gather feedback from the HBase community. Once branch-2.8
is EOL, what's the plan for Hadoop 2.x support?
IMHO Hadoop 2.9 would be the next to go EOL. The last 2.9 release was 15 months
ago and it doesn't look like anyone is picking it up.

I don't think HBase is certified to run on Hadoop 2.10 yet.
But in my opinion, I'd like to see Hadoop reduce its 2.x footprint
sooner rather than later.

-- Forwarded message -
From: Wei-Chiu Chuang 
Date: Mon, Feb 17, 2020 at 9:44 AM
Subject: [DISCUSS] EOL Hadoop branch-2.8
To: Hadoop Common , Hdfs-dev <
hdfs-...@hadoop.apache.org>, yarn-dev ,
mapreduce-dev 


The last Hadoop 2.8.x release, 2.8.5, was GA on September 15th 2018.

It's been 17 months since the release and the community has by and large
moved up to 2.9/2.10/3.x.

With Hadoop 3.3.0 over the horizon, is it time to start the EOL discussion
and reduce the number of active branches?


Re: Fwd: AWS promotional credits for open source projects

2020-01-24 Thread Josh Elser

Yeah, that'd be nice.

I was thinking about doing some higher-level integration tests with an
"S3 clone" (like MinIO), but never got around to it.


On 1/23/20 7:30 PM, Wei-Chiu Chuang wrote:

Would this be of interest to the HBOSS project?

-- Forwarded message -
From: Wei-Chiu Chuang 
Date: Thu, Jan 23, 2020 at 4:28 PM
Subject: AWS promotional credits for open source projects
To: Hadoop Common , Hdfs-dev <
hdfs-...@hadoop.apache.org>, 


https://aws.amazon.com/blogs/opensource/aws-promotional-credits-open-source-projects/

Anyone interested? I think it can be a useful resource, especially for the
S3 cloud connector.



Fwd: AWS promotional credits for open source projects

2020-01-23 Thread Wei-Chiu Chuang
Would this be of interest to the HBOSS project?

-- Forwarded message -
From: Wei-Chiu Chuang 
Date: Thu, Jan 23, 2020 at 4:28 PM
Subject: AWS promotional credits for open source projects
To: Hadoop Common , Hdfs-dev <
hdfs-...@hadoop.apache.org>, 


https://aws.amazon.com/blogs/opensource/aws-promotional-credits-open-source-projects/

Anyone interested? I think it can be a useful resource, especially for the
S3 cloud connector.


Fwd: [NOTICE] Introducing .asf.yaml for enhanced automation of git repository services

2019-09-04 Thread Misty Linville
Per-branch website / docs staging and publishing if the project wants it.
:) Other useful stuff below too.

-- Forwarded message -
From: Daniel Gruno 
Date: Wed, Sep 4, 2019 at 5:33 PM
Subject: [NOTICE] Introducing .asf.yaml for enhanced automation of git
repository services
To: 


Hello, fellow Apache committers and enthusiasts!

Today, the Apache Infrastructure Team is launching new self-serve
features to help augment the productivity of Apache projects through a
series of simple configurations for enabling automation of various
service offerings.

We call this new initiative '.asf.yaml', and as the name implies, it
consists of a new file that projects can add to their git repositories
to control various aspects that were previously done through JIRA
tickets and manual operations by the Infrastructure staff.

For detailed information about these features and how to enable them,
please visit our documentation page at: https://s.apache.org/asfyaml

At launch time, .asf.yaml has three features enabled that projects can
make use of: web site staging, web site publishing, and github meta-data
settings:

web site staging:
   Much like with the Apache CMS system, projects using git can now get
   their web sites staged for previews at a specific (HTTPS-enabled)
   staging domain. Staging supports multi-tenancy, which allows for
   multiple staging web sites from different site repository branches.
   For more information on multi-tenancy, see the canonical documentation
   linked above.

web site publishing:
   Projects can now automatically set up publishing for their own main
   web site ($project.apache.org) and change at will, without the need to
   wait for assistance from the Infrastructure team. New podlings and
   projects can also get web sites bootstrapped and ready immediately.

github meta-data settings:
   Projects can now, via .asf.yaml, specify their repositories' meta-data
   on github, such as the description, homepage URL, and which topics to
   add to a repository's front page.

In the coming months, we will extend this feature with many new,
exciting features such as automated buildbot integration for web site
generation (think CMS but via git) and enhanced JIRA integration
automation.

We hope projects will appreciate and make use of these new features. If
you or your project have any questions, please feel free to reach out to
us at: us...@infra.apache.org - but please read through the
documentation first :)

With regards, and on behalf of the Apache Infrastructure Team,
Daniel.


Fwd: Re: Mob reference tags go missing

2019-08-18 Thread meirav.malka
Hi jms,
How are you? Hope everything is ok :)
We have encountered a strange behavior in MOB. Some of our values contain only
the reference cells; instead of leading us to the MOB data we get a hex value +
MOB file name suffix. Have you seen such behavior? We suspect the MOB compaction
doesn't update the reference. Will appreciate your thoughts on the issue.
Thanks
Sent from my Samsung Galaxy smartphone.

 Original message 
From: Omri Cohen 
Date: 8/17/19 16:42 (GMT+02:00)
To: hbase-u...@hadoop.apache.org
Subject: Re: Mob reference tags go missing

Hello,
We recently encountered a problem in our production hbase cluster (CDH
deployment, version 5.13.1). We have a cluster with a high MOB percentage
(> 50% of objects are MOB, the threshold is the default 102400 bytes). The
cluster has been active for about a year. Recently, our clients started to
receive the mob reference instead of the mob data when trying to access certain
rows (in most tables about 10% of the rows are bad, in one table it is about
50% of rows).
When we investigated the HFiles, we saw that the "bad" cells are missing two
tags that exist in the "good" rows; it looks something like this:

K: {rowkey of a "good" row} {Column family, column qualifier}... vlen=76/seqid=... V: \x00\x04\xFB.{MOB file name} T[0]:  T[1]: {table name}
K: {rowkey of a "bad" row} {Column family, column qualifier}... vlen=76/seqid=... V: \x00\x13\x1A{MOB file name}

The "good" row returns the expected data when queried, while the "bad" row
returned the mob file reference instead of the data. This happened when we
queried from the REST server, the native Java client, and the hbase shell.
When we examined the mob files, we saw that all the data was there.
  *   Has anyone encountered a similar situation?
  *   Is it possible to manually add the missing mob reference tags to the "bad" rows?
No changes were made to the table recently. We never encountered this issue
before. We would appreciate any help on this issue.

From: Omri Cohen
Sent: Saturday, August 17, 2019 4:28 PM
To: hbase-u...@hadoop.apache.org
Subject: Mob reference tags go missing

Hello,
We recently encountered a problem in our production hbase cluster. We have a
cluster with a high MOB percentage (> 50% of objects are MOB, the threshold is
the default 102400 bytes). The cluster has been active for about a year.
Recently, our clients started to receive the mob reference instead of the mob
data when trying to access certain rows (in most tables about 10% of the rows
are bad, in one table it is about 50% of rows). When we investigated the HFiles,
we saw that the "bad" cells are missing two tags that exist in the "good" rows,
it looks something like that:
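
[For readers unfamiliar with the MOB threshold mentioned above: it is a per-column-family setting, and cells larger than it are written to separate MOB files while the regular HFile keeps only a small reference cell. A minimal sketch of declaring such a family with the HBase 2.x client API; the table and family names are made up for illustration, and the cluster in this thread runs an older CDH 5.13 release whose admin API differs slightly.]

import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptor;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;
import org.apache.hadoop.hbase.util.Bytes;

public class MobFamilySketch {
  public static void main(String[] args) throws Exception {
    try (Connection conn = ConnectionFactory.createConnection();
         Admin admin = conn.getAdmin()) {
      // Cells in family "cf" larger than 102400 bytes (the default threshold)
      // are stored in MOB files; the regular HFile keeps only a reference cell.
      ColumnFamilyDescriptor cf = ColumnFamilyDescriptorBuilder
          .newBuilder(Bytes.toBytes("cf"))            // hypothetical family name
          .setMobEnabled(true)
          .setMobThreshold(102400L)
          .build();
      admin.createTable(TableDescriptorBuilder
          .newBuilder(TableName.valueOf("mob_demo"))  // hypothetical table name
          .setColumnFamily(cf)
          .build());
    }
  }
}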

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-11 Thread Peter Somogyi
Thanks Josh for acting on this!

On Tue, Jun 11, 2019 at 3:15 AM Sean Busbey  wrote:

> We used to have a build step that compressed our logs for us. I don't think
> Jenkins can read the test results if we do the xml files from surefire, so
> I'm not sure how much space we can save. That's where I'd start though.
>
> On Mon, Jun 10, 2019, 19:46 张铎(Duo Zhang)  wrote:
>
> > Does surefire have some options to truncate the test output if it is too
> > large? Or jenkins has some options to truncate or compress a file when
> > archiving?
> >
> > > Josh Elser  wrote on Tue, Jun 11, 2019 at 8:40 AM:
> >
> > > Just a cursory glance at some build artifacts showed just test output
> > > which sometimes extended into the multiple megabytes.
> > >
> > > So everyone else knows, I just chatted with ChrisL in Slack and he
> > > confirmed that our disk utilization is down already (after
> HBASE-22563).
> > > He thanked us for the quick response.
> > >
> > > We should keep pulling on this thread now that we're looking at it :)
> > >
> > > On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:
> > > > Oh, it is the build artifacts, not the jars...
> > > >
> > > > Most of our build artifacts are build logs, but maybe the problem is
> > that
> > > > some of the logs are very large if the test hangs...
> > > >
> > > > 张铎(Duo Zhang)  wrote on Tue, Jun 11, 2019 at 8:16 AM:
> > > >
> > > >> For flakey we just need the commit id in the console output then we
> > can
> > > >> build the artifacts locally. +1 on removing artifacts caching.
> > > >>
> > > >> Josh Elser  wrote on Tue, Jun 11, 2019 at 7:50 AM:
> > > >>
> > > >>> Sure, Misty. No arguments here.
> > > >>>
> > > >>> I think that might be a bigger untangling. Maybe Peter or Busbey
> know
> > > >>> better about how these could be de-coupled (e.g. I think flakies
> > > >>> actually look back at old artifacts), but I'm not sure off the top
> of
> > > my
> > > >>> head. I was just going for a quick fix to keep Infra from doing
> > > >>> something super-destructive.
> > > >>>
> > > >>> For context, I've dropped them a note in Slack to make sure what
> I'm
> > > >>> doing is having a positive effect.
> > > >>>
> > > >>> On 6/10/19 7:34 PM, Misty Linville wrote:
> > >  Keeping artifacts and keeping build logs are two separate things.
> I
> > > >>> don’t
> > >  see a need to keep any artifacts past the most recent green and
> most
> > > >>> recent
> > >  red builds. Alternately if we need the artifacts let’s have
> Jenkins
> > > put
> > >  them somewhere rather than keeping them there. You can get back to
> > > >>> whatever
> > >  hash you need within git to reproduce a build problem.
> > > 
> > >  On Mon, Jun 10, 2019 at 2:26 PM Josh Elser 
> > wrote:
> > > 
> > > > https://issues.apache.org/jira/browse/HBASE-22563 for a quick
> > > bandaid
> > > >>> (I
> > > > hope).
> > > >
> > > > On 6/10/19 4:31 PM, Josh Elser wrote:
> > > >> Eyes on.
> > > >>
> > > >> Looking at master, we already have the linked configuration, set
> > to
> > > >> retain 30 builds.
> > > >>
> > > >> We have some extra branches which we can lop off (branch-1.2,
> > > >> branch-2.0, maybe some feature branches too). A quick fix might
> be
> > > to
> > > >> just pull back that 30 to 10.
> > > >>
> > > >> Largely figuring out how this stuff works now, give me a shout
> in
> > > >>> Slack
> > > >> if anyone else has cycles.
> > > >>
> > > >> On 6/10/19 2:34 PM, Peter Somogyi wrote:
> > > >>> Hi,
> > > >>>
> > > >>> HBase jobs are using more than 400GB based on this list.
> > > >>> Could someone take a look at the job configurations today?
> > > >>> Otherwise, I
> > > >>> will look into it tomorrow morning.
> > > >>>
> > > >>> Thanks,
> > > >>> Peter
> > > >>>
> > > >>> -- Forwarded message -
> > > >>> From: Chris Lambertus 
> > > >>> Date: Mon, Jun 10, 2019 at 7:57 PM
> > > >>> Subject: ACTION REQUIRED: disk space on jenkins master nearly
> > full
> > > >>> To: 
> > > >>> Cc: , 
> > > >>>
> > > >>>
> > > >>> Hello,
> > > >>>
> > > >>> The jenkins master is nearly full.
> > > >>>
> > > >>> The workspaces listed below need significant size reduction
> > within
> > > 24
> > > >>> hours
> > > >>> or Infra will need to perform some manual pruning of old builds
> > to
> > > >>> keep the
> > > >>> jenkins system running. The Mesos “Packaging” job also needs to
> > be
> > > >>> corrected to include the project name (mesos-packaging) please.
> > > >>>
> > > >>> It appears that the typical ‘Discard Old Builds’ checkbox in
> the
> > > job
> > > >>> configuration may not be working for multibranch pipeline jobs.
> > > >>> Please
> > > >>> refer to these articles for information on discarding builds in
> > > >>> multibranch
> > > >>> jobs:
> > > >>>
> > > >>>
> > > >
> > > >>>
> > >
> >
> 

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Sean Busbey
We used to have a build step that compressed our logs for us. I don't think
Jenkins can read the test results if we compress the XML files from surefire,
so I'm not sure how much space we can save. That's where I'd start, though.

On Mon, Jun 10, 2019, 19:46 张铎(Duo Zhang)  wrote:

> Does surefire have some options to truncate the test output if it is too
> large? Or jenkins has some options to truncate or compress a file when
> archiving?
>
> Josh Elser  wrote on Tue, Jun 11, 2019 at 8:40 AM:
>
> > Just a cursory glance at some build artifacts showed just test output
> > which sometimes extended into the multiple megabytes.
> >
> > So everyone else knows, I just chatted with ChrisL in Slack and he
> > confirmed that our disk utilization is down already (after HBASE-22563).
> > He thanked us for the quick response.
> >
> > We should keep pulling on this thread now that we're looking at it :)
> >
> > On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:
> > > Oh, it is the build artifacts, not the jars...
> > >
> > > Most of our build artifacts are build logs, but maybe the problem is
> that
> > > some of the logs are very large if the test hangs...
> > >
> > > 张铎(Duo Zhang)  wrote on Tue, Jun 11, 2019 at 8:16 AM:
> > >
> > >> For flakey we just need the commit id in the console output then we
> can
> > >> build the artifacts locally. +1 on removing artifacts caching.
> > >>
> > >> Josh Elser  wrote on Tue, Jun 11, 2019 at 7:50 AM:
> > >>
> > >>> Sure, Misty. No arguments here.
> > >>>
> > >>> I think that might be a bigger untangling. Maybe Peter or Busbey know
> > >>> better about how these could be de-coupled (e.g. I think flakies
> > >>> actually look back at old artifacts), but I'm not sure off the top of
> > my
> > >>> head. I was just going for a quick fix to keep Infra from doing
> > >>> something super-destructive.
> > >>>
> > >>> For context, I've dropped them a note in Slack to make sure what I'm
> > >>> doing is having a positive effect.
> > >>>
> > >>> On 6/10/19 7:34 PM, Misty Linville wrote:
> >  Keeping artifacts and keeping build logs are two separate things. I
> > >>> don’t
> >  see a need to keep any artifacts past the most recent green and most
> > >>> recent
> >  red builds. Alternately if we need the artifacts let’s have Jenkins
> > put
> >  them somewhere rather than keeping them there. You can get back to
> > >>> whatever
> >  hash you need within git to reproduce a build problem.
> > 
> >  On Mon, Jun 10, 2019 at 2:26 PM Josh Elser 
> wrote:
> > 
> > > https://issues.apache.org/jira/browse/HBASE-22563 for a quick
> > bandaid
> > >>> (I
> > > hope).
> > >
> > > On 6/10/19 4:31 PM, Josh Elser wrote:
> > >> Eyes on.
> > >>
> > >> Looking at master, we already have the linked configuration, set
> to
> > >> retain 30 builds.
> > >>
> > >> We have some extra branches which we can lop off (branch-1.2,
> > >> branch-2.0, maybe some feature branches too). A quick fix might be
> > to
> > >> just pull back that 30 to 10.
> > >>
> > >> Largely figuring out how this stuff works now, give me a shout in
> > >>> Slack
> > >> if anyone else has cycles.
> > >>
> > >> On 6/10/19 2:34 PM, Peter Somogyi wrote:
> > >>> Hi,
> > >>>
> > >>> HBase jobs are using more than 400GB based on this list.
> > >>> Could someone take a look at the job configurations today?
> > >>> Otherwise, I
> > >>> will look into it tomorrow morning.
> > >>>
> > >>> Thanks,
> > >>> Peter
> > >>>
> > >>> -- Forwarded message -
> > >>> From: Chris Lambertus 
> > >>> Date: Mon, Jun 10, 2019 at 7:57 PM
> > >>> Subject: ACTION REQUIRED: disk space on jenkins master nearly
> full
> > >>> To: 
> > >>> Cc: , 
> > >>>
> > >>>
> > >>> Hello,
> > >>>
> > >>> The jenkins master is nearly full.
> > >>>
> > >>> The workspaces listed below need significant size reduction
> within
> > 24
> > >>> hours
> > >>> or Infra will need to perform some manual pruning of old builds
> to
> > >>> keep the
> > >>> jenkins system running. The Mesos “Packaging” job also needs to
> be
> > >>> corrected to include the project name (mesos-packaging) please.
> > >>>
> > >>> It appears that the typical ‘Discard Old Builds’ checkbox in the
> > job
> > >>> configuration may not be working for multibranch pipeline jobs.
> > >>> Please
> > >>> refer to these articles for information on discarding builds in
> > >>> multibranch
> > >>> jobs:
> > >>>
> > >>>
> > >
> > >>>
> >
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> > >>>
> > >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> > >>>
> > >
> > >>>
> >
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> > >>>
> > 

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Duo Zhang
Does surefire have an option to truncate the test output if it is too
large? Or does Jenkins have an option to truncate or compress a file when
archiving?

Josh Elser  wrote on Tue, Jun 11, 2019 at 8:40 AM:

> Just a cursory glance at some build artifacts showed just test output
> which sometimes extended into the multiple megabytes.
>
> So everyone else knows, I just chatted with ChrisL in Slack and he
> confirmed that our disk utilization is down already (after HBASE-22563).
> He thanked us for the quick response.
>
> We should keep pulling on this thread now that we're looking at it :)
>
> On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:
> > Oh, it is the build artifacts, not the jars...
> >
> > Most of our build artifacts are build logs, but maybe the problem is that
> > some of the logs are very large if the test hangs...
> >
> > 张铎(Duo Zhang)  wrote on Tue, Jun 11, 2019 at 8:16 AM:
> >
> >> For flakey we just need the commit id in the console output then we can
> >> build the artifacts locally. +1 on removing artifacts caching.
> >>
> >> Josh Elser  wrote on Tue, Jun 11, 2019 at 7:50 AM:
> >>
> >>> Sure, Misty. No arguments here.
> >>>
> >>> I think that might be a bigger untangling. Maybe Peter or Busbey know
> >>> better about how these could be de-coupled (e.g. I think flakies
> >>> actually look back at old artifacts), but I'm not sure off the top of
> my
> >>> head. I was just going for a quick fix to keep Infra from doing
> >>> something super-destructive.
> >>>
> >>> For context, I've dropped them a note in Slack to make sure what I'm
> >>> doing is having a positive effect.
> >>>
> >>> On 6/10/19 7:34 PM, Misty Linville wrote:
>  Keeping artifacts and keeping build logs are two separate things. I
> >>> don’t
>  see a need to keep any artifacts past the most recent green and most
> >>> recent
>  red builds. Alternately if we need the artifacts let’s have Jenkins
> put
>  them somewhere rather than keeping them there. You can get back to
> >>> whatever
>  hash you need within git to reproduce a build problem.
> 
>  On Mon, Jun 10, 2019 at 2:26 PM Josh Elser  wrote:
> 
> > https://issues.apache.org/jira/browse/HBASE-22563 for a quick
> bandaid
> >>> (I
> > hope).
> >
> > On 6/10/19 4:31 PM, Josh Elser wrote:
> >> Eyes on.
> >>
> >> Looking at master, we already have the linked configuration, set to
> >> retain 30 builds.
> >>
> >> We have some extra branches which we can lop off (branch-1.2,
> >> branch-2.0, maybe some feature branches too). A quick fix might be
> to
> >> just pull back that 30 to 10.
> >>
> >> Largely figuring out how this stuff works now, give me a shout in
> >>> Slack
> >> if anyone else has cycles.
> >>
> >> On 6/10/19 2:34 PM, Peter Somogyi wrote:
> >>> Hi,
> >>>
> >>> HBase jobs are using more than 400GB based on this list.
> >>> Could someone take a look at the job configurations today?
> >>> Otherwise, I
> >>> will look into it tomorrow morning.
> >>>
> >>> Thanks,
> >>> Peter
> >>>
> >>> -- Forwarded message -
> >>> From: Chris Lambertus 
> >>> Date: Mon, Jun 10, 2019 at 7:57 PM
> >>> Subject: ACTION REQUIRED: disk space on jenkins master nearly full
> >>> To: 
> >>> Cc: , 
> >>>
> >>>
> >>> Hello,
> >>>
> >>> The jenkins master is nearly full.
> >>>
> >>> The workspaces listed below need significant size reduction within
> 24
> >>> hours
> >>> or Infra will need to perform some manual pruning of old builds to
> >>> keep the
> >>> jenkins system running. The Mesos “Packaging” job also needs to be
> >>> corrected to include the project name (mesos-packaging) please.
> >>>
> >>> It appears that the typical ‘Discard Old Builds’ checkbox in the
> job
> >>> configuration may not be working for multibranch pipeline jobs.
> >>> Please
> >>> refer to these articles for information on discarding builds in
> >>> multibranch
> >>> jobs:
> >>>
> >>>
> >
> >>>
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>>
> >>> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>>
> >
> >>>
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>>
> >>>
> >>>
> >>>
> >>> NB: I have not fully vetted the above information, I just notice
> that
> >>> many
> >>> of these jobs have ‘Discard old builds’ checked, but it is clearly
> >>> not
> >>> working.
> >>>
> >>>
> >>> If you are unable to reduce your disk usage beyond what is listed,
> > please
> >>> let me know what the reasons are and we’ll see if we can find a
> > solution.
> >>> If you believe you’ve configured your job properly and the space
> >>> usage
> > is
> >>> 

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Josh Elser
Just a cursory glance at some build artifacts showed just test output 
which sometimes extended into the multiple megabytes.


So everyone else knows, I just chatted with ChrisL in Slack and he 
confirmed that our disk utilization is down already (after HBASE-22563). 
He thanked us for the quick response.


We should keep pulling on this thread now that we're looking at it :)

On 6/10/19 8:36 PM, 张铎(Duo Zhang) wrote:

Oh, it is the build artifacts, not the jars...

Most of our build artifacts are build logs, but maybe the problem is that
some of the logs are very large if the test hangs...

张铎(Duo Zhang)  wrote on Tue, Jun 11, 2019 at 8:16 AM:


For flakey we just need the commit id in the console output then we can
build the artifacts locally. +1 on removing artifacts caching.

Josh Elser  wrote on Tue, Jun 11, 2019 at 7:50 AM:


Sure, Misty. No arguments here.

I think that might be a bigger untangling. Maybe Peter or Busbey know
better about how these could be de-coupled (e.g. I think flakies
actually look back at old artifacts), but I'm not sure off the top of my
head. I was just going for a quick fix to keep Infra from doing
something super-destructive.

For context, I've dropped them a note in Slack to make sure what I'm
doing is having a positive effect.

On 6/10/19 7:34 PM, Misty Linville wrote:

Keeping artifacts and keeping build logs are two separate things. I

don’t

see a need to keep any artifacts past the most recent green and most

recent

red builds. Alternately if we need the artifacts let’s have Jenkins put
them somewhere rather than keeping them there. You can get back to

whatever

hash you need within git to reproduce a build problem.

On Mon, Jun 10, 2019 at 2:26 PM Josh Elser  wrote:


https://issues.apache.org/jira/browse/HBASE-22563 for a quick bandaid

(I

hope).

On 6/10/19 4:31 PM, Josh Elser wrote:

Eyes on.

Looking at master, we already have the linked configuration, set to
retain 30 builds.

We have some extra branches which we can lop off (branch-1.2,
branch-2.0, maybe some feature branches too). A quick fix might be to
just pull back that 30 to 10.

Largely figuring out how this stuff works now, give me a shout in

Slack

if anyone else has cycles.

On 6/10/19 2:34 PM, Peter Somogyi wrote:

Hi,

HBase jobs are using more than 400GB based on this list.
Could someone take a look at the job configurations today?

Otherwise, I

will look into it tomorrow morning.

Thanks,
Peter

-- Forwarded message -
From: Chris Lambertus 
Date: Mon, Jun 10, 2019 at 7:57 PM
Subject: ACTION REQUIRED: disk space on jenkins master nearly full
To: 
Cc: , 


Hello,

The jenkins master is nearly full.

The workspaces listed below need significant size reduction within 24
hours
or Infra will need to perform some manual pruning of old builds to
keep the
jenkins system running. The Mesos “Packaging” job also needs to be
corrected to include the project name (mesos-packaging) please.

It appears that the typical ‘Discard Old Builds’ checkbox in the job
configuration may not be working for multibranch pipeline jobs.

Please

refer to these articles for information on discarding builds in
multibranch
jobs:





https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-


https://issues.jenkins-ci.org/browse/JENKINS-35642




https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489





NB: I have not fully vetted the above information, I just notice that
many
of these jobs have ‘Discard old builds’ checked, but it is clearly

not

working.


If you are unable to reduce your disk usage beyond what is listed,

please

let me know what the reasons are and we’ll see if we can find a

solution.

If you believe you’ve configured your job properly and the space

usage

is

more than you expect, please comment here and we’ll take a look at

what

might be going on.

I cut this list off arbitrarily at 40GB workspaces and larger. There

are

many which are between 20 and 30GB which also need to be addressed,

but

these are the current top contributors to the disk space situation.


594G    Packaging
425G    pulsar-website-build
274G    pulsar-master
195G    hadoop-multibranch
173G    HBase Nightly
138G    HBase-Flaky-Tests
119G    netbeans-release
108G    Any23-trunk
101G    netbeans-linux-experiment
96G Jackrabbit-Oak-Windows
94G HBase-Find-Flaky-Tests
88G PreCommit-ZOOKEEPER-github-pr-build
74G netbeans-windows
71G stanbol-0.12
68G Sling
63G Atlas-master-NoTests
48G FlexJS Framework (maven)
45G HBase-PreCommit-GitHub-PR
42G pulsar-pull-request
40G Atlas-1.0-NoTests



Thanks,
Chris
ASF Infra













Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Duo Zhang
Oh, it is the build artifacts, not the jars...

Most of our build artifacts are build logs, but maybe the problem is that
some of the logs are very large if the test hangs...

张铎(Duo Zhang)  wrote on Tue, Jun 11, 2019 at 8:16 AM:

> For flakey we just need the commit id in the console output then we can
> build the artifacts locally. +1 on removing artifacts caching.
>
> Josh Elser  wrote on Tue, Jun 11, 2019 at 7:50 AM:
>
>> Sure, Misty. No arguments here.
>>
>> I think that might be a bigger untangling. Maybe Peter or Busbey know
>> better about how these could be de-coupled (e.g. I think flakies
>> actually look back at old artifacts), but I'm not sure off the top of my
>> head. I was just going for a quick fix to keep Infra from doing
>> something super-destructive.
>>
>> For context, I've dropped them a note in Slack to make sure what I'm
>> doing is having a positive effect.
>>
>> On 6/10/19 7:34 PM, Misty Linville wrote:
>> > Keeping artifacts and keeping build logs are two separate things. I
>> don’t
>> > see a need to keep any artifacts past the most recent green and most
>> recent
>> > red builds. Alternately if we need the artifacts let’s have Jenkins put
>> > them somewhere rather than keeping them there. You can get back to
>> whatever
>> > hash you need within git to reproduce a build problem.
>> >
>> > On Mon, Jun 10, 2019 at 2:26 PM Josh Elser  wrote:
>> >
>> >> https://issues.apache.org/jira/browse/HBASE-22563 for a quick bandaid
>> (I
>> >> hope).
>> >>
>> >> On 6/10/19 4:31 PM, Josh Elser wrote:
>> >>> Eyes on.
>> >>>
>> >>> Looking at master, we already have the linked configuration, set to
>> >>> retain 30 builds.
>> >>>
>> >>> We have some extra branches which we can lop off (branch-1.2,
>> >>> branch-2.0, maybe some feature branches too). A quick fix might be to
>> >>> just pull back that 30 to 10.
>> >>>
>> >>> Largely figuring out how this stuff works now, give me a shout in
>> Slack
>> >>> if anyone else has cycles.
>> >>>
>> >>> On 6/10/19 2:34 PM, Peter Somogyi wrote:
>>  Hi,
>> 
>>  HBase jobs are using more than 400GB based on this list.
>>  Could someone take a look at the job configurations today?
>> Otherwise, I
>>  will look into it tomorrow morning.
>> 
>>  Thanks,
>>  Peter
>> 
>>  -- Forwarded message -
>>  From: Chris Lambertus 
>>  Date: Mon, Jun 10, 2019 at 7:57 PM
>>  Subject: ACTION REQUIRED: disk space on jenkins master nearly full
>>  To: 
>>  Cc: , 
>> 
>> 
>>  Hello,
>> 
>>  The jenkins master is nearly full.
>> 
>>  The workspaces listed below need significant size reduction within 24
>>  hours
>>  or Infra will need to perform some manual pruning of old builds to
>>  keep the
>>  jenkins system running. The Mesos “Packaging” job also needs to be
>>  corrected to include the project name (mesos-packaging) please.
>> 
>>  It appears that the typical ‘Discard Old Builds’ checkbox in the job
>>  configuration may not be working for multibranch pipeline jobs.
>> Please
>>  refer to these articles for information on discarding builds in
>>  multibranch
>>  jobs:
>> 
>> 
>> >>
>> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
>> 
>>  https://issues.jenkins-ci.org/browse/JENKINS-35642
>> 
>> >>
>> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
>> 
>> 
>> 
>> 
>>  NB: I have not fully vetted the above information, I just notice that
>>  many
>>  of these jobs have ‘Discard old builds’ checked, but it is clearly
>> not
>>  working.
>> 
>> 
>>  If you are unable to reduce your disk usage beyond what is listed,
>> >> please
>>  let me know what the reasons are and we’ll see if we can find a
>> >> solution.
>>  If you believe you’ve configured your job properly and the space
>> usage
>> >> is
>>  more than you expect, please comment here and we’ll take a look at
>> what
>>  might be going on.
>> 
>>  I cut this list off arbitrarily at 40GB workspaces and larger. There
>> are
>>  many which are between 20 and 30GB which also need to be addressed,
>> but
>>  these are the current top contributors to the disk space situation.
>> 
>> 
>>  594G    Packaging
>>  425G    pulsar-website-build
>>  274G    pulsar-master
>>  195G    hadoop-multibranch
>>  173G    HBase Nightly
>>  138G    HBase-Flaky-Tests
>>  119G    netbeans-release
>>  108G    Any23-trunk
>>  101G    netbeans-linux-experiment
>>  96G Jackrabbit-Oak-Windows
>>  94G HBase-Find-Flaky-Tests
>>  88G PreCommit-ZOOKEEPER-github-pr-build
>>  74G netbeans-windows
>>  71G stanbol-0.12
>>  68G Sling
>>  63G Atlas-master-NoTests

Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Misty Linville
Keeping artifacts and keeping build logs are two separate things. I don’t
see a need to keep any artifacts past the most recent green and most recent
red builds. Alternatively, if we need the artifacts, let’s have Jenkins put
them somewhere rather than keeping them there. You can get back to whatever
hash you need within git to reproduce a build problem.

On Mon, Jun 10, 2019 at 2:26 PM Josh Elser  wrote:

> https://issues.apache.org/jira/browse/HBASE-22563 for a quick bandaid (I
> hope).
>
> On 6/10/19 4:31 PM, Josh Elser wrote:
> > Eyes on.
> >
> > Looking at master, we already have the linked configuration, set to
> > retain 30 builds.
> >
> > We have some extra branches which we can lop off (branch-1.2,
> > branch-2.0, maybe some feature branches too). A quick fix might be to
> > just pull back that 30 to 10.
> >
> > Largely figuring out how this stuff works now, give me a shout in Slack
> > if anyone else has cycles.
> >
> > On 6/10/19 2:34 PM, Peter Somogyi wrote:
> >> Hi,
> >>
> >> HBase jobs are using more than 400GB based on this list.
> >> Could someone take a look at the job configurations today? Otherwise, I
> >> will look into it tomorrow morning.
> >>
> >> Thanks,
> >> Peter
> >>
> >> -- Forwarded message -
> >> From: Chris Lambertus 
> >> Date: Mon, Jun 10, 2019 at 7:57 PM
> >> Subject: ACTION REQUIRED: disk space on jenkins master nearly full
> >> To: 
> >> Cc: , 
> >>
> >>
> >> Hello,
> >>
> >> The jenkins master is nearly full.
> >>
> >> The workspaces listed below need significant size reduction within 24
> >> hours
> >> or Infra will need to perform some manual pruning of old builds to
> >> keep the
> >> jenkins system running. The Mesos “Packaging” job also needs to be
> >> corrected to include the project name (mesos-packaging) please.
> >>
> >> It appears that the typical ‘Discard Old Builds’ checkbox in the job
> >> configuration may not be working for multibranch pipeline jobs. Please
> >> refer to these articles for information on discarding builds in
> >> multibranch
> >> jobs:
> >>
> >>
> https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
> >>
> >> https://issues.jenkins-ci.org/browse/JENKINS-35642
> >>
> https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489
> >>
> >>
> >>
> >>
> >> NB: I have not fully vetted the above information, I just notice that
> >> many
> >> of these jobs have ‘Discard old builds’ checked, but it is clearly not
> >> working.
> >>
> >>
> >> If you are unable to reduce your disk usage beyond what is listed,
> please
> >> let me know what the reasons are and we’ll see if we can find a
> solution.
> >> If you believe you’ve configured your job properly and the space usage
> is
> >> more than you expect, please comment here and we’ll take a look at what
> >> might be going on.
> >>
> >> I cut this list off arbitrarily at 40GB workspaces and larger. There are
> >> many which are between 20 and 30GB which also need to be addressed, but
> >> these are the current top contributors to the disk space situation.
> >>
> >>
> >> 594G    Packaging
> >> 425G    pulsar-website-build
> >> 274G    pulsar-master
> >> 195G    hadoop-multibranch
> >> 173G    HBase Nightly
> >> 138G    HBase-Flaky-Tests
> >> 119G    netbeans-release
> >> 108G    Any23-trunk
> >> 101G    netbeans-linux-experiment
> >> 96G Jackrabbit-Oak-Windows
> >> 94G HBase-Find-Flaky-Tests
> >> 88G PreCommit-ZOOKEEPER-github-pr-build
> >> 74G netbeans-windows
> >> 71G stanbol-0.12
> >> 68G Sling
> >> 63G Atlas-master-NoTests
> >> 48G FlexJS Framework (maven)
> >> 45G HBase-PreCommit-GitHub-PR
> >> 42G pulsar-pull-request
> >> 40G Atlas-1.0-NoTests
> >>
> >>
> >>
> >> Thanks,
> >> Chris
> >> ASF Infra
> >>
>


Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Josh Elser
https://issues.apache.org/jira/browse/HBASE-22563 for a quick bandaid (I 
hope).


On 6/10/19 4:31 PM, Josh Elser wrote:

Eyes on.

Looking at master, we already have the linked configuration, set to 
retain 30 builds.


We have some extra branches which we can lop off (branch-1.2, 
branch-2.0, maybe some feature branches too). A quick fix might be to 
just pull back that 30 to 10.


Largely figuring out how this stuff works now, give me a shout in Slack 
if anyone else has cycles.


On 6/10/19 2:34 PM, Peter Somogyi wrote:

Hi,

HBase jobs are using more than 400GB based on this list.
Could someone take a look at the job configurations today? Otherwise, I
will look into it tomorrow morning.

Thanks,
Peter

-- Forwarded message -
From: Chris Lambertus 
Date: Mon, Jun 10, 2019 at 7:57 PM
Subject: ACTION REQUIRED: disk space on jenkins master nearly full
To: 
Cc: , 


Hello,

The jenkins master is nearly full.

The workspaces listed below need significant size reduction within 24 
hours
or Infra will need to perform some manual pruning of old builds to 
keep the

jenkins system running. The Mesos “Packaging” job also needs to be
corrected to include the project name (mesos-packaging) please.

It appears that the typical ‘Discard Old Builds’ checkbox in the job
configuration may not be working for multibranch pipeline jobs. Please
refer to these articles for information on discarding builds in 
multibranch

jobs:

https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job- 


https://issues.jenkins-ci.org/browse/JENKINS-35642
https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489





NB: I have not fully vetted the above information, I just notice that 
many

of these jobs have ‘Discard old builds’ checked, but it is clearly not
working.


If you are unable to reduce your disk usage beyond what is listed, please
let me know what the reasons are and we’ll see if we can find a solution.
If you believe you’ve configured your job properly and the space usage is
more than you expect, please comment here and we’ll take a look at what
might be going on.

I cut this list off arbitrarily at 40GB workspaces and larger. There are
many which are between 20 and 30GB which also need to be addressed, but
these are the current top contributors to the disk space situation.


594G    Packaging
425G    pulsar-website-build
274G    pulsar-master
195G    hadoop-multibranch
173G    HBase Nightly
138G    HBase-Flaky-Tests
119G    netbeans-release
108G    Any23-trunk
101G    netbeans-linux-experiment
96G Jackrabbit-Oak-Windows
94G HBase-Find-Flaky-Tests
88G PreCommit-ZOOKEEPER-github-pr-build
74G netbeans-windows
71G stanbol-0.12
68G Sling
63G Atlas-master-NoTests
48G FlexJS Framework (maven)
45G HBase-PreCommit-GitHub-PR
42G pulsar-pull-request
40G Atlas-1.0-NoTests



Thanks,
Chris
ASF Infra



Re: Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Josh Elser

Eyes on.

Looking at master, we already have the linked configuration, set to 
retain 30 builds.


We have some extra branches which we can lop off (branch-1.2, 
branch-2.0, maybe some feature branches too). A quick fix might be to 
just pull back that 30 to 10.


Largely figuring out how this stuff works now, give me a shout in Slack 
if anyone else has cycles.


On 6/10/19 2:34 PM, Peter Somogyi wrote:

Hi,

HBase jobs are using more than 400GB based on this list.
Could someone take a look at the job configurations today? Otherwise, I
will look into it tomorrow morning.

Thanks,
Peter

-- Forwarded message -
From: Chris Lambertus 
Date: Mon, Jun 10, 2019 at 7:57 PM
Subject: ACTION REQUIRED: disk space on jenkins master nearly full
To: 
Cc: , 


Hello,

The jenkins master is nearly full.

The workspaces listed below need significant size reduction within 24 hours
or Infra will need to perform some manual pruning of old builds to keep the
jenkins system running. The Mesos “Packaging” job also needs to be
corrected to include the project name (mesos-packaging) please.

It appears that the typical ‘Discard Old Builds’ checkbox in the job
configuration may not be working for multibranch pipeline jobs. Please
refer to these articles for information on discarding builds in multibranch
jobs:

https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
https://issues.jenkins-ci.org/browse/JENKINS-35642
https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489



NB: I have not fully vetted the above information, I just notice that many
of these jobs have ‘Discard old builds’ checked, but it is clearly not
working.


If you are unable to reduce your disk usage beyond what is listed, please
let me know what the reasons are and we’ll see if we can find a solution.
If you believe you’ve configured your job properly and the space usage is
more than you expect, please comment here and we’ll take a look at what
might be going on.

I cut this list off arbitrarily at 40GB workspaces and larger. There are
many which are between 20 and 30GB which also need to be addressed, but
these are the current top contributors to the disk space situation.


594G    Packaging
425G    pulsar-website-build
274G    pulsar-master
195G    hadoop-multibranch
173G    HBase Nightly
138G    HBase-Flaky-Tests
119G    netbeans-release
108G    Any23-trunk
101G    netbeans-linux-experiment
96G Jackrabbit-Oak-Windows
94G HBase-Find-Flaky-Tests
88G PreCommit-ZOOKEEPER-github-pr-build
74G netbeans-windows
71G stanbol-0.12
68G Sling
63G Atlas-master-NoTests
48G FlexJS Framework (maven)
45G HBase-PreCommit-GitHub-PR
42G pulsar-pull-request
40G Atlas-1.0-NoTests



Thanks,
Chris
ASF Infra



Fwd: ACTION REQUIRED: disk space on jenkins master nearly full

2019-06-10 Thread Peter Somogyi
Hi,

HBase jobs are using more than 400GB based on this list.
Could someone take a look at the job configurations today? Otherwise, I
will look into it tomorrow morning.

Thanks,
Peter

-- Forwarded message -
From: Chris Lambertus 
Date: Mon, Jun 10, 2019 at 7:57 PM
Subject: ACTION REQUIRED: disk space on jenkins master nearly full
To: 
Cc: , 


Hello,

The jenkins master is nearly full.

The workspaces listed below need significant size reduction within 24 hours
or Infra will need to perform some manual pruning of old builds to keep the
jenkins system running. The Mesos “Packaging” job also needs to be
corrected to include the project name (mesos-packaging) please.

It appears that the typical ‘Discard Old Builds’ checkbox in the job
configuration may not be working for multibranch pipeline jobs. Please
refer to these articles for information on discarding builds in multibranch
jobs:

https://support.cloudbees.com/hc/en-us/articles/115000237071-How-do-I-set-discard-old-builds-for-a-Multi-Branch-Pipeline-Job-
https://issues.jenkins-ci.org/browse/JENKINS-35642
https://issues.jenkins-ci.org/browse/JENKINS-34738?focusedCommentId=263489&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-263489



NB: I have not fully vetted the above information, I just notice that many
of these jobs have ‘Discard old builds’ checked, but it is clearly not
working.


If you are unable to reduce your disk usage beyond what is listed, please
let me know what the reasons are and we’ll see if we can find a solution.
If you believe you’ve configured your job properly and the space usage is
more than you expect, please comment here and we’ll take a look at what
might be going on.

I cut this list off arbitrarily at 40GB workspaces and larger. There are
many which are between 20 and 30GB which also need to be addressed, but
these are the current top contributors to the disk space situation.


594G    Packaging
425G    pulsar-website-build
274G    pulsar-master
195G    hadoop-multibranch
173G    HBase Nightly
138G    HBase-Flaky-Tests
119G    netbeans-release
108G    Any23-trunk
101G    netbeans-linux-experiment
96G Jackrabbit-Oak-Windows
94G HBase-Find-Flaky-Tests
88G PreCommit-ZOOKEEPER-github-pr-build
74G netbeans-windows
71G stanbol-0.12
68G Sling
63G Atlas-master-NoTests
48G FlexJS Framework (maven)
45G HBase-PreCommit-GitHub-PR
42G pulsar-pull-request
40G Atlas-1.0-NoTests



Thanks,
Chris
ASF Infra


Fwd: Access request to dev and users slack channel

2019-04-16 Thread Evelina Dumitrescu
Hello,

I am interested in starting to make contributions and want to request access
to the HBase Slack channels for the email address
evelina.dumitrescu@gmail.com.

Thank you,
Evelina


Fwd: Jute buffer size increasing.

2019-03-10 Thread Asim Zafir
Hi Stack,
We are seeing excessive Region Server exits along with ZK connection teardowns
(Len error, jute buffer threshold being reached).
I want to see what is contributing to the jute buffer reaching its upper
bound. So far, after investigating the code and studying the protocol itself,
it appears to be a function of the number of watches that get set on the
znodes. To bring stability to the ZK service, we had to increase jute.buffer
from 1 MB to 20 MB, then 32 MB, and now it is set to 128 MB. In order to
understand more, I dug a little deeper to see how many ZooKeeper watch objects
are on the ZooKeeper JVM instance. I did a jmap -histo:live on the ZooKeeper
pid and got the following output (please see below). I am not sure what [C and
[B are here; they don't appear to refer to any class, and I don't see this on
the dev instance of ZooKeeper. Could it be due to a suspected memory leak or
another issue?
Please guide me through this, as I can't find a resource that goes deep enough
to give me any hint as to what may be happening on my end. Also, is it safe for
ZK sizes to increase that much? What is the impact of increasing the jute
buffer on HBase? I will greatly appreciate your feedback and help on this.

 num     #instances        #bytes  class name
----------------------------------------------
   1:        220810     140582448  [C
   2:        109370      34857168  [B
   3:        103842       7476624  org.apache.zookeeper.data.StatPersisted
   4:        220703       5296872  java.lang.String
   5:         28682       3783712
   6:         28682       3681168
   7:        111000       3552000  java.util.HashMap$Entry
   8:        107569       3442208  java.util.concurrent.ConcurrentHashMap$HashEntry
   9:        103842       3322944  org.apache.zookeeper.server.DataNode
  10:          2655       3179640
  11:          2313       2017056
  12:          2655       1842456
  13:           318       1241568  [Ljava.util.concurrent.ConcurrentHashMap$HashEntry;
  14:          7526       1221504  [Ljava.util.HashMap$Entry;
  15:          1820        812976
  16:          8228        394944  java.util.HashMap
  17:          2903        348432  java.lang.Class
  18:          4077        229688  [S
  19:          4138        221848  [[I
  20:           231        125664
  21:          7796        124736  java.util.HashSet
  22:          6771        108336  java.util.HashMap$KeySet
  23:          1263         62968  [Ljava.lang.Object;
  24:           746         59680  java.lang.reflect.Method
  25:          3570         57120  java.lang.Object
  26:           502         36144  org.apache.zookeeper.server.Request
  27:           649         25960  java.lang.ref.SoftReference
  28:           501         24048  org.apache.zookeeper.txn.TxnHeader
  29:           188         21704  [I
  30:           861         20664  java.lang.Long
  31:           276         19872  java.lang.reflect.Constructor
  32:           559         17888  java.util.concurrent.locks.ReentrantLock$NonfairSync
  33:           422         16880  java.util.LinkedHashMap$Entry
  34:           502         16064  org.apache.zookeeper.server.quorum.QuorumPacket
  35:           455         14560  java.util.Hashtable$Entry
  36:           495         14368  [Ljava.lang.String;
  37:           318         12720  java.util.concurrent.ConcurrentHashMap$Segment
  38:             3         12336  [Ljava.nio.ByteBuffer;
  39:           514         12336  javax.management.ObjectName$Property
  40:           505         12120  java.util.LinkedList$Node
  41:           501         12024  org.apache.zookeeper.server.quorum.Leader$Proposal
  42:           619         11920  [Ljava.lang.Class;
  43:            74         11840  org.apache.zookeeper.server.NIOServerCnxn
  44:           145         11672  [Ljava.util.Hashtable$Entry;
  45:           729         11664  java.lang.Integer
  46:           346         11072  java.lang.ref.WeakReference
  47:           449         10776  org.apache.zookeeper.txn.SetDataTxn
  48:           156          9984  com.cloudera.cmf.event.shaded.org.apache.avro.Schema$Props
  49:           266          8512  java.util.Vector
  50:            75          8400  sun.nio.ch.SocketChannelImpl
  51:           175          8400  java.nio.HeapByteBuffer
  52:           247          8320  [Ljavax.management.ObjectName$Property;
  53:           303          7272  com.cloudera.cmf.event.EventCode
  54:           300          7200  java.util.ArrayList
  55:           136          6528  java.util.Hashtable
  56:           156          6240  java.util.WeakHashMap$Entry
  57:           194          6208  com.sun.jmx.mbeanserver.ConvertingMethod
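
As a footnote to the thread above, a minimal sketch of how the jute limit is
usually raised on the client side. jute.maxbuffer is a plain Java system
property read by the ZooKeeper libraries; the 128 MB figure below only mirrors
the value mentioned in the mail and is not a recommendation, and the same limit
has to be raised on the ZooKeeper servers as well or oversized requests and
responses will still fail with the Len error. The host name is made up.

  import org.apache.zookeeper.ZooKeeper;

  // Minimal sketch: set jute.maxbuffer before any ZooKeeper client classes load.
  // Equivalent to passing -Djute.maxbuffer=134217728 on the JVM command line.
  public class JuteBufferExample {
    public static void main(String[] args) throws Exception {
      System.setProperty("jute.maxbuffer", String.valueOf(128 * 1024 * 1024)); // 128 MB
      ZooKeeper zk = new ZooKeeper("zkhost:2181", 30000, event -> { }); // hypothetical quorum
      // A znode with a very large child list (for example many replication or WAL
      // znodes) is a typical source of responses that exceed the default 1 MB limit.
      System.out.println(zk.getChildren("/hbase", false).size());
      zk.close();
    }
  }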


Re: Fwd:

2018-05-03 Thread Xi Yang
Wow, This explanation is really detailed. That helps me much! I totally
understand the read process now.
Thanks a million.

Thanks,
Alex

2018-05-02 22:33 GMT-07:00 ramkrishna vasudevan <
ramkrishna.s.vasude...@gmail.com>:

> Regarding the read flow this is what happens
>
> 1)  Create a region level scanner
> 2) the region level scanner can comprise of more than one store scanner
> (each store scanner works on one column family).
> 3) Every store scanner wil comprise of memstore scanner and a set of hfile
> scanners (based on number of store files).
> 4) The scan tries to read data in lexographical order.
>  For eg, for simplicty take you have row1 to row5 and there is only one
> column family 'f1' and one column 'c1'. Assume row1 was already written and
> it is flushed to a store file. Row2 to row5 are in the memstore .
> When the scanner starts it will form a heap with all these memstore scanner
> and store file (hfile) scanners. Internally since row1 is smaller
> lexographically the row1 from the store file is retrieved first. This row1
> for the first time will be in HDFS (and not in block cache). The remaining
> rows are fetched from memstore scanners. there is no block cache concept at
> the memstore level. Memstore is just a simple Key value map.
>
> When the same scan is issued the next time we go through the above steps
> but to fetch row1, the store file scanner that has row1, fetches the block
> cache that has row1 (instead of HDFS) and returns the value from block
> cache and the remaining rows are again fetched from memstore scanners from
> the underlying memstore.
>
> Hope this helps.
>
> REgards
> Ram
>
> On Thu, May 3, 2018 at 9:17 AM, Xi Yang  wrote:
>
> > Hi Tim,
> >
> > Thanks for confirm the question.  That question confused me for a long
> > time. Really appreciate.
> >
> >
> > About another question, I still don't know whether ModelA is correct or
> > Model B is correct. Still confused
> >
> >
> > Thanks,
> > Alex
> >
> > 2018-05-02 13:53 GMT-07:00 Tim Robertson :
> >
> > > Thanks Alex,
> > >
> > > Yes, looking at that code I believe you are correct - the memStore
> > scanner
> > > is appended after the block scanners.
> > > The block scanners may or may not see hits in the block cache when they
> > > read. If they don't get a hit, they'll open the block from the
> underlying
> > > HFile(s).
> > >
> > >
> > >
> > > On Wed, May 2, 2018 at 10:41 PM, Xi Yang 
> wrote:
> > >
> > > > Hi Tim,
> > > >
> > > > Thank you for detailed explanation. Yes, that really helps me! I
> really
> > > > appreciate it!
> > > >
> > > >
> > > > But I still confused about the sequence:
> > > >
> > > > I've read these codes in *HStore.getScanners* :
> > > >
> > > >
> > > > *// TODO this used to get the store files in descending order,*
> > > > *// but now we get them in ascending order, which I think is*
> > > > *// actually more correct, since memstore get put at the end.*
> > > > *List sfScanners =
> > > > StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,*
> > > > *  cacheBlocks, usePread, isCompaction, false, matcher, readPt);*
> > > > *List scanners = new
> > ArrayList<>(sfScanners.size() +
> > > > 1);*
> > > > *scanners.addAll(sfScanners);*
> > > > *// Then the memstore scanners*
> > > > *scanners.addAll(memStoreScanners);*
> > > >
> > > >
> > > > Is it mean this step:
> > > >
> > > >
> > > > *2) It looks in the memstore to see if there are any writes still in
> > > > memoryready to flush down to the HFiles that needs merged with the
> data
> > > > read in 1) *
> > > >
> > > > is behind the following step?
> > > >
> > > > *c) the data is read from the opened block *
> > > >
> > > >
> > > >
> > > >
> > > > Here are explanation of the images I drew before, so that we don't
> need
> > > the
> > > > images:
> > > >
> > > > When a read request come in
> > > > Model A
> > > >
> > > >1. get Scanners (including StoreScanner and MemStoreScanner).
> > > >MemStoreScanner is the last one
> > > >2. Begin with the first StoreScanner
> > > >3. Try to get the block from BlockCache of the StoreScanner
> > > >4. Try to get the block from HFile of the StoreScanner
> > > >5. Go to the next StoreScanner
> > > >6. Loop #2 - #5 until all StoreScanner been used
> > > >7. Try to get the block from memStore
> > > >
> > > >
> > > > Model B
> > > >
> > > >1. Try to get the block from BlockCache, if failed then go to #2
> > > >2. get Scanners (including StoreScanner and MemStoreScanner).
> > > >MemStoreScanner is the last on
> > > >3. Begin with the first StoreScanner
> > > >4. Try to get the block from HFile of the StoreScanner
> > > >5. Go to the next StoreScanner
> > > >6. Loop #4 - #5 until all StoreScanner been used
> > > >7. Try to get the block from memStore
> > > >
> > > >
> > > >
> > > > Thanks,
> > > > Alex
> > > >
> > > >
> > > > 

Re: Fwd:

2018-05-02 Thread ramkrishna vasudevan
Regarding the read flow, this is what happens:

1) Create a region-level scanner.
2) The region-level scanner can comprise more than one store scanner
(each store scanner works on one column family).
3) Every store scanner will comprise a memstore scanner and a set of hfile
scanners (based on the number of store files).
4) The scan tries to read data in lexicographical order.
 For example, for simplicity say you have row1 to row5 and there is only one
column family 'f1' and one column 'c1'. Assume row1 was already written and
has been flushed to a store file. Row2 to row5 are in the memstore.
When the scanner starts it will form a heap with all these memstore scanners
and store file (hfile) scanners. Internally, since row1 is lexicographically
the smallest, row1 from the store file is retrieved first. The first time, this
row1 comes from HDFS (and not from the block cache). The remaining rows are
fetched from the memstore scanners; there is no block cache concept at the
memstore level. The memstore is just a simple key-value map.

When the same scan is issued the next time we go through the above steps,
but to fetch row1 the store file scanner that has row1 fetches the block from
the block cache (instead of from HDFS) and returns the value from the block
cache, and the remaining rows are again fetched from the memstore scanners
over the underlying memstore.

Hope this helps.

Regards
Ram
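
A tiny, self-contained sketch of the heap merge described above. This is an
illustration only, not actual HBase code: the real StoreScanner/KeyValueHeap
classes also handle versions, deletes and seeks, but the ordering idea, every
scanner (memstore or hfile) exposing its rows in sorted order and the heap
always handing out the smallest head, is the same.

  import java.util.Arrays;
  import java.util.Iterator;
  import java.util.List;
  import java.util.PriorityQueue;

  // Illustration of the scanner heap from the explanation above.
  public class ScannerHeapSketch {
    public static void main(String[] args) {
      List<Iterator<String>> scanners = Arrays.asList(
          Arrays.asList("row1").iterator(),                        // flushed store file
          Arrays.asList("row2", "row3", "row4", "row5").iterator() // memstore
      );
      // Heap entry = { current key, the scanner it came from }.
      PriorityQueue<Object[]> heap =
          new PriorityQueue<>((a, b) -> ((String) a[0]).compareTo((String) b[0]));
      for (Iterator<String> s : scanners) {
        if (s.hasNext()) {
          heap.add(new Object[] { s.next(), s });
        }
      }
      while (!heap.isEmpty()) {
        Object[] top = heap.poll();
        System.out.println(top[0]); // prints row1 .. row5 in lexicographical order
        @SuppressWarnings("unchecked")
        Iterator<String> source = (Iterator<String>) top[1];
        if (source.hasNext()) {
          heap.add(new Object[] { source.next(), source });
        }
      }
    }
  }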

On Thu, May 3, 2018 at 9:17 AM, Xi Yang  wrote:

> Hi Tim,
>
> Thanks for confirm the question.  That question confused me for a long
> time. Really appreciate.
>
>
> About another question, I still don't know whether ModelA is correct or
> Model B is correct. Still confused
>
>
> Thanks,
> Alex
>
> 2018-05-02 13:53 GMT-07:00 Tim Robertson :
>
> > Thanks Alex,
> >
> > Yes, looking at that code I believe you are correct - the memStore
> scanner
> > is appended after the block scanners.
> > The block scanners may or may not see hits in the block cache when they
> > read. If they don't get a hit, they'll open the block from the underlying
> > HFile(s).
> >
> >
> >
> > On Wed, May 2, 2018 at 10:41 PM, Xi Yang  wrote:
> >
> > > Hi Tim,
> > >
> > > Thank you for detailed explanation. Yes, that really helps me! I really
> > > appreciate it!
> > >
> > >
> > > But I still confused about the sequence:
> > >
> > > I've read these codes in *HStore.getScanners* :
> > >
> > >
> > > *// TODO this used to get the store files in descending order,*
> > > *// but now we get them in ascending order, which I think is*
> > > *// actually more correct, since memstore get put at the end.*
> > > *List sfScanners =
> > > StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,*
> > > *  cacheBlocks, usePread, isCompaction, false, matcher, readPt);*
> > > *List scanners = new
> ArrayList<>(sfScanners.size() +
> > > 1);*
> > > *scanners.addAll(sfScanners);*
> > > *// Then the memstore scanners*
> > > *scanners.addAll(memStoreScanners);*
> > >
> > >
> > > Is it mean this step:
> > >
> > >
> > > *2) It looks in the memstore to see if there are any writes still in
> > > memoryready to flush down to the HFiles that needs merged with the data
> > > read in 1) *
> > >
> > > is behind the following step?
> > >
> > > *c) the data is read from the opened block *
> > >
> > >
> > >
> > >
> > > Here are explanation of the images I drew before, so that we don't need
> > the
> > > images:
> > >
> > > When a read request come in
> > > Model A
> > >
> > >1. get Scanners (including StoreScanner and MemStoreScanner).
> > >MemStoreScanner is the last one
> > >2. Begin with the first StoreScanner
> > >3. Try to get the block from BlockCache of the StoreScanner
> > >4. Try to get the block from HFile of the StoreScanner
> > >5. Go to the next StoreScanner
> > >6. Loop #2 - #5 until all StoreScanner been used
> > >7. Try to get the block from memStore
> > >
> > >
> > > Model B
> > >
> > >1. Try to get the block from BlockCache, if failed then go to #2
> > >2. get Scanners (including StoreScanner and MemStoreScanner).
> > >MemStoreScanner is the last on
> > >3. Begin with the first StoreScanner
> > >4. Try to get the block from HFile of the StoreScanner
> > >5. Go to the next StoreScanner
> > >6. Loop #4 - #5 until all StoreScanner been used
> > >7. Try to get the block from memStore
> > >
> > >
> > >
> > > Thanks,
> > > Alex
> > >
> > >
> > > 2018-05-02 1:04 GMT-07:00 Tim Robertson :
> > >
> > > > Hi Alex,
> > > >
> > > > I'm not sure I fully follow your question without the images but I'll
> > try
> > > > and help.
> > > >
> > > > When a read request comes in, my understanding of the order of
> > execution
> > > is
> > > > as follows (perhaps someone can verify this):
> > > >
> > > > 1) It looks in the block cache for the cells (this is a read only
> cache
> > > > containing recently read data)
> > > > 2) 

Re: Fwd:

2018-05-02 Thread Xi Yang
Hi Tim,

Thanks for confirming the question. That question had confused me for a long
time. I really appreciate it.


About the other question, I still don't know whether Model A or Model B is
correct. Still confused.


Thanks,
Alex

2018-05-02 13:53 GMT-07:00 Tim Robertson :

> Thanks Alex,
>
> Yes, looking at that code I believe you are correct - the memStore scanner
> is appended after the block scanners.
> The block scanners may or may not see hits in the block cache when they
> read. If they don't get a hit, they'll open the block from the underlying
> HFile(s).
>
>
>
> On Wed, May 2, 2018 at 10:41 PM, Xi Yang  wrote:
>
> > Hi Tim,
> >
> > Thank you for detailed explanation. Yes, that really helps me! I really
> > appreciate it!
> >
> >
> > But I still confused about the sequence:
> >
> > I've read these codes in *HStore.getScanners* :
> >
> >
> > *// TODO this used to get the store files in descending order,*
> > *// but now we get them in ascending order, which I think is*
> > *// actually more correct, since memstore get put at the end.*
> > *List sfScanners =
> > StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,*
> > *  cacheBlocks, usePread, isCompaction, false, matcher, readPt);*
> > *List scanners = new ArrayList<>(sfScanners.size() +
> > 1);*
> > *scanners.addAll(sfScanners);*
> > *// Then the memstore scanners*
> > *scanners.addAll(memStoreScanners);*
> >
> >
> > Is it mean this step:
> >
> >
> > *2) It looks in the memstore to see if there are any writes still in
> > memoryready to flush down to the HFiles that needs merged with the data
> > read in 1) *
> >
> > is behind the following step?
> >
> > *c) the data is read from the opened block *
> >
> >
> >
> >
> > Here are explanation of the images I drew before, so that we don't need
> the
> > images:
> >
> > When a read request come in
> > Model A
> >
> >1. get Scanners (including StoreScanner and MemStoreScanner).
> >MemStoreScanner is the last one
> >2. Begin with the first StoreScanner
> >3. Try to get the block from BlockCache of the StoreScanner
> >4. Try to get the block from HFile of the StoreScanner
> >5. Go to the next StoreScanner
> >6. Loop #2 - #5 until all StoreScanner been used
> >7. Try to get the block from memStore
> >
> >
> > Model B
> >
> >1. Try to get the block from BlockCache, if failed then go to #2
> >2. get Scanners (including StoreScanner and MemStoreScanner).
> >MemStoreScanner is the last on
> >3. Begin with the first StoreScanner
> >4. Try to get the block from HFile of the StoreScanner
> >5. Go to the next StoreScanner
> >6. Loop #4 - #5 until all StoreScanner been used
> >7. Try to get the block from memStore
> >
> >
> >
> > Thanks,
> > Alex
> >
> >
> > 2018-05-02 1:04 GMT-07:00 Tim Robertson :
> >
> > > Hi Alex,
> > >
> > > I'm not sure I fully follow your question without the images but I'll
> try
> > > and help.
> > >
> > > When a read request comes in, my understanding of the order of
> execution
> > is
> > > as follows (perhaps someone can verify this):
> > >
> > > 1) It looks in the block cache for the cells (this is a read only cache
> > > containing recently read data)
> > > 2) It looks in the memstore to see if there are any writes still in
> > memory
> > > ready to flush down to the HFiles that needs merged with the data read
> in
> > > 1)
> > > 3) Only if not found it starts locating the data from HFiles (note,
> there
> > > can be multiple files per region until major compaction runs which
> merges
> > > into 1 per column family, discarding stale data where possible)
> > >   a) It uses bloom filters and the block cache indexes to locate the
> > target
> > > blocks (these are part of the HFiles, but read into memory when the
> > region
> > > servers start)
> > >   b) those target blocks are then opened and occupy space on the block
> > > cache on the region server (possibly evicting other blocks)
> > >   c) the data is read from the opened block
> > >
> > > Does that help at all?
> > >
> > > Thanks,
> > > Tim
> > >
> > >
> > >
> > > On Wed, May 2, 2018 at 9:49 AM, Xi Yang 
> wrote:
> > >
> > > > OK, I got it. I've understood the Q2 by your help, thanks!
> > > >
> > > >
> > > >
> > > > Seems like I have to use some other way to draw my images, Here is
> the
> > > > updated version Q1:
> > > >
> > > >
> > > > Q1
> > > >
> > > > I found that HFileScannerImpl.getCachedBlock(...) get block from
> > > > BlockCache. This CachedBlock is used by StoreFileScanner. Is that
> mean
> > > the
> > > > read model like:
> > > >
> > > > *Model A*
> > > >
> > > > When a read request come
> > > >
> > > >1. Read 1st Store:
> > > >a. read BlockCache
> > > >b. read HFile
> > > >2. Read 2nd Store:
> > > >a. read BlockCache
> > > >b. read HFile
> > > >3. ..
> > > >4. Read Memstore
> 

Re: Fwd:

2018-05-02 Thread Tim Robertson
Thanks Alex,

Yes, looking at that code I believe you are correct - the memStore scanner
is appended after the block scanners.
The block scanners may or may not see hits in the block cache when they
read. If they don't get a hit, they'll open the block from the underlying
HFile(s).



On Wed, May 2, 2018 at 10:41 PM, Xi Yang  wrote:

> Hi Tim,
>
> Thank you for detailed explanation. Yes, that really helps me! I really
> appreciate it!
>
>
> But I still confused about the sequence:
>
> I've read these codes in *HStore.getScanners* :
>
>
> *// TODO this used to get the store files in descending order,*
> *// but now we get them in ascending order, which I think is*
> *// actually more correct, since memstore get put at the end.*
> *List sfScanners =
> StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,*
> *  cacheBlocks, usePread, isCompaction, false, matcher, readPt);*
> *List scanners = new ArrayList<>(sfScanners.size() +
> 1);*
> *scanners.addAll(sfScanners);*
> *// Then the memstore scanners*
> *scanners.addAll(memStoreScanners);*
>
>
> Is it mean this step:
>
>
> *2) It looks in the memstore to see if there are any writes still in
> memoryready to flush down to the HFiles that needs merged with the data
> read in 1) *
>
> is behind the following step?
>
> *c) the data is read from the opened block *
>
>
>
>
> Here are explanation of the images I drew before, so that we don't need the
> images:
>
> When a read request come in
> Model A
>
>1. get Scanners (including StoreScanner and MemStoreScanner).
>MemStoreScanner is the last one
>2. Begin with the first StoreScanner
>3. Try to get the block from BlockCache of the StoreScanner
>4. Try to get the block from HFile of the StoreScanner
>5. Go to the next StoreScanner
>6. Loop #2 - #5 until all StoreScanner been used
>7. Try to get the block from memStore
>
>
> Model B
>
>1. Try to get the block from BlockCache, if failed then go to #2
>2. get Scanners (including StoreScanner and MemStoreScanner).
>MemStoreScanner is the last on
>3. Begin with the first StoreScanner
>4. Try to get the block from HFile of the StoreScanner
>5. Go to the next StoreScanner
>6. Loop #4 - #5 until all StoreScanner been used
>7. Try to get the block from memStore
>
>
>
> Thanks,
> Alex
>
>
> 2018-05-02 1:04 GMT-07:00 Tim Robertson :
>
> > Hi Alex,
> >
> > I'm not sure I fully follow your question without the images but I'll try
> > and help.
> >
> > When a read request comes in, my understanding of the order of execution
> is
> > as follows (perhaps someone can verify this):
> >
> > 1) It looks in the block cache for the cells (this is a read only cache
> > containing recently read data)
> > 2) It looks in the memstore to see if there are any writes still in
> memory
> > ready to flush down to the HFiles that needs merged with the data read in
> > 1)
> > 3) Only if not found it starts locating the data from HFiles (note, there
> > can be multiple files per region until major compaction runs which merges
> > into 1 per column family, discarding stale data where possible)
> >   a) It uses bloom filters and the block cache indexes to locate the
> target
> > blocks (these are part of the HFiles, but read into memory when the
> region
> > servers start)
> >   b) those target blocks are then opened and occupy space on the block
> > cache on the region server (possibly evicting other blocks)
> >   c) the data is read from the opened block
> >
> > Does that help at all?
> >
> > Thanks,
> > Tim
> >
> >
> >
> > On Wed, May 2, 2018 at 9:49 AM, Xi Yang  wrote:
> >
> > > OK, I got it. I've understood the Q2 by your help, thanks!
> > >
> > >
> > >
> > > Seems like I have to use some other way to draw my images, Here is the
> > > updated version Q1:
> > >
> > >
> > > Q1
> > >
> > > I found that HFileScannerImpl.getCachedBlock(...) get block from
> > > BlockCache. This CachedBlock is used by StoreFileScanner. Is that mean
> > the
> > > read model like:
> > >
> > > *Model A*
> > >
> > > When a read request come
> > >
> > >1. Read 1st Store:
> > >a. read BlockCache
> > >b. read HFile
> > >2. Read 2nd Store:
> > >a. read BlockCache
> > >b. read HFile
> > >3. ..
> > >4. Read Memstore
> > >
> > >
> > >
> > > Or there is only one BlockCache and all the read request will go
> through
> > it
> > > first, like:
> > >
> > > *Model B:*
> > >
> > > When a read request come
> > >
> > >1. Read BlockCache
> > >2. Read 1st Store -> read HFIle
> > >3. Read 2nd Store -> read HFile
> > >4. 
> > >5. Read Memstore
> > >
> > >
> > > ​​
> > >
> > > Thanks,
> > > Alex
> > >
> > >
> > >
> > > 2018-05-01 20:04 GMT-07:00 Josh Elser :
> > >
> > > > FYI, the mailing list strips images.
> > > >
> > > > There is only one BlockCache per RS. Not sure if that answers your 

Re: Fwd:

2018-05-02 Thread Xi Yang
Hi Tim,

Thank you for the detailed explanation. Yes, that really helps me! I really
appreciate it!


But I am still confused about the sequence.

I've read this code in HStore.getScanners:


  // TODO this used to get the store files in descending order,
  // but now we get them in ascending order, which I think is
  // actually more correct, since memstore get put at the end.
  List<StoreFileScanner> sfScanners =
      StoreFileScanner.getScannersForStoreFiles(storeFilesToScan,
        cacheBlocks, usePread, isCompaction, false, matcher, readPt);
  List<KeyValueScanner> scanners = new ArrayList<>(sfScanners.size() + 1);
  scanners.addAll(sfScanners);
  // Then the memstore scanners
  scanners.addAll(memStoreScanners);


Does it mean that this step:


2) It looks in the memstore to see if there are any writes still in
memory ready to flush down to the HFiles that need to be merged with the data
read in 1)

comes after the following step?

c) the data is read from the opened block




Here is an explanation of the images I drew before, so that we don't need the
images:

When a read request comes in:
Model A

   1. Get the scanners (including StoreScanners and the MemStoreScanner).
   The MemStoreScanner is the last one.
   2. Begin with the first StoreScanner.
   3. Try to get the block from the BlockCache of the StoreScanner.
   4. Try to get the block from the HFile of the StoreScanner.
   5. Go to the next StoreScanner.
   6. Loop #2 - #5 until all StoreScanners have been used.
   7. Try to get the block from the memstore.


Model B

   1. Try to get the block from the BlockCache; if that fails, go to #2.
   2. Get the scanners (including StoreScanners and the MemStoreScanner).
   The MemStoreScanner is the last one.
   3. Begin with the first StoreScanner.
   4. Try to get the block from the HFile of the StoreScanner.
   5. Go to the next StoreScanner.
   6. Loop #4 - #5 until all StoreScanners have been used.
   7. Try to get the block from the memstore.



Thanks,
Alex


2018-05-02 1:04 GMT-07:00 Tim Robertson :

> Hi Alex,
>
> I'm not sure I fully follow your question without the images but I'll try
> and help.
>
> When a read request comes in, my understanding of the order of execution is
> as follows (perhaps someone can verify this):
>
> 1) It looks in the block cache for the cells (this is a read only cache
> containing recently read data)
> 2) It looks in the memstore to see if there are any writes still in memory
> ready to flush down to the HFiles that needs merged with the data read in
> 1)
> 3) Only if not found it starts locating the data from HFiles (note, there
> can be multiple files per region until major compaction runs which merges
> into 1 per column family, discarding stale data where possible)
>   a) It uses bloom filters and the block cache indexes to locate the target
> blocks (these are part of the HFiles, but read into memory when the region
> servers start)
>   b) those target blocks are then opened and occupy space on the block
> cache on the region server (possibly evicting other blocks)
>   c) the data is read from the opened block
>
> Does that help at all?
>
> Thanks,
> Tim
>
>
>
> On Wed, May 2, 2018 at 9:49 AM, Xi Yang  wrote:
>
> > OK, I got it. I've understood the Q2 by your help, thanks!
> >
> >
> >
> > Seems like I have to use some other way to draw my images, Here is the
> > updated version Q1:
> >
> >
> > Q1
> >
> > I found that HFileScannerImpl.getCachedBlock(...) get block from
> > BlockCache. This CachedBlock is used by StoreFileScanner. Is that mean
> the
> > read model like:
> >
> > *Model A*
> >
> > When a read request come
> >
> >1. Read 1st Store:
> >a. read BlockCache
> >b. read HFile
> >2. Read 2nd Store:
> >a. read BlockCache
> >b. read HFile
> >3. ..
> >4. Read Memstore
> >
> >
> >
> > Or there is only one BlockCache and all the read request will go through
> it
> > first, like:
> >
> > *Model B:*
> >
> > When a read request come
> >
> >1. Read BlockCache
> >2. Read 1st Store -> read HFIle
> >3. Read 2nd Store -> read HFile
> >4. 
> >5. Read Memstore
> >
> >
> > ​​
> >
> > Thanks,
> > Alex
> >
> >
> >
> > 2018-05-01 20:04 GMT-07:00 Josh Elser :
> >
> > > FYI, the mailing list strips images.
> > >
> > > There is only one BlockCache per RS. Not sure if that answers your Q1
> in
> > > entirety though.
> > >
> > > Q2. The "Block" in "BlockCache" are the blocks that make up the HBase
> > > HFiles in HDFS. Data in the Memstore does not yet exist in HFiles on
> > HDFS.
> > > Additionally, Memstore is already in memory; no need to have a
> different
> > > cache to accomplish the same thing :)
> > >
> > > On 5/1/18 9:25 PM, Xi Yang wrote:
> > >
> > >> Sorry to bother you guys. May I ask 2 questions about HBase?
> > >>
> > >> Q1
> > >>
> > >> I found that |HFileScannerImpl.getCachedBlock(...)| get block from
> > >> BlockCache. This CachedBlock is used by |StoreFileScanner|. Is that
> mean
> > >> the read model like:
> > >>
> > >> 

Re: Fwd:

2018-05-02 Thread Tim Robertson
Hi Alex,

I'm not sure I fully follow your question without the images but I'll try
and help.

When a read request comes in, my understanding of the order of execution is
as follows (perhaps someone can verify this):

1) It looks in the block cache for the cells (this is a read only cache
containing recently read data)
2) It looks in the memstore to see if there are any writes still in memory
ready to flush down to the HFiles that needs merged with the data read in 1)
3) Only if not found it starts locating the data from HFiles (note, there
can be multiple files per region until major compaction runs which merges
into 1 per column family, discarding stale data where possible)
  a) It uses bloom filters and the block cache indexes to locate the target
blocks (these are part of the HFiles, but read into memory when the region
servers start)
  b) those target blocks are then opened and occupy space on the block
cache on the region server (possibly evicting other blocks)
  c) the data is read from the opened block

Does that help at all?

Thanks,
Tim
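
A rough cache-aside sketch of steps 3a-3c above (the block cache consulted
first, the HFile on HDFS only read on a miss, and the loaded block then cached
for the next reader). The class and method names here (BlockCacheSketch,
readBlockFromHFile) are made up for illustration; this is not the real
HFileReaderImpl/getCachedBlock code, which also deals with eviction, on/off-heap
buckets and index blocks.

  import java.util.Map;
  import java.util.concurrent.ConcurrentHashMap;

  // Illustration only: a per-region-server cache keyed by (hfile name, block offset).
  public class BlockCacheSketch {
    private final Map<String, byte[]> blockCache = new ConcurrentHashMap<>();

    byte[] readBlock(String hfileName, long offset) {
      String key = hfileName + "#" + offset;
      byte[] block = blockCache.get(key);              // steps 3a/3b: try the cache first
      if (block == null) {
        block = readBlockFromHFile(hfileName, offset); // miss: go to the HFile on HDFS
        blockCache.put(key, block);                    // keep it for later reads
      }
      return block;                                    // step 3c: decode the cells from the block
    }

    private byte[] readBlockFromHFile(String hfileName, long offset) {
      // Stand-in for the actual HDFS read; hypothetical helper.
      return new byte[0];
    }
  }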



On Wed, May 2, 2018 at 9:49 AM, Xi Yang  wrote:

> OK, I got it. I've understood the Q2 by your help, thanks!
>
>
>
> Seems like I have to use some other way to draw my images, Here is the
> updated version Q1:
>
>
> Q1
>
> I found that HFileScannerImpl.getCachedBlock(...) get block from
> BlockCache. This CachedBlock is used by StoreFileScanner. Is that mean the
> read model like:
>
> *Model A*
>
> When a read request come
>
>1. Read 1st Store:
>a. read BlockCache
>b. read HFile
>2. Read 2nd Store:
>a. read BlockCache
>b. read HFile
>3. ..
>4. Read Memstore
>
>
>
> Or there is only one BlockCache and all the read request will go through it
> first, like:
>
> *Model B:*
>
> When a read request come
>
>1. Read BlockCache
>2. Read 1st Store -> read HFIle
>3. Read 2nd Store -> read HFile
>4. 
>5. Read Memstore
>
>
> ​​
>
> Thanks,
> Alex
>
>
>
> 2018-05-01 20:04 GMT-07:00 Josh Elser :
>
> > FYI, the mailing list strips images.
> >
> > There is only one BlockCache per RS. Not sure if that answers your Q1 in
> > entirety though.
> >
> > Q2. The "Block" in "BlockCache" are the blocks that make up the HBase
> > HFiles in HDFS. Data in the Memstore does not yet exist in HFiles on
> HDFS.
> > Additionally, Memstore is already in memory; no need to have a different
> > cache to accomplish the same thing :)
> >
> > On 5/1/18 9:25 PM, Xi Yang wrote:
> >
> >> Sorry to bother you guys. May I ask 2 questions about HBase?
> >>
> >> Q1
> >>
> >> I found that |HFileScannerImpl.getCachedBlock(...)| get block from
> >> BlockCache. This CachedBlock is used by |StoreFileScanner|. Is that mean
> >> the read model like:
> >>
> >> *Model A*
> >>
> >> Or there is only one BlockCache and all the read request will go through
> >> it first, like:
> >>
> >> *Model B:*
> >>
> >> ​
> >> Q2
> >> If the data been read from Memstore, will it be put in BlockCache to
> >> accelerate the read process next time?
> >>
> >> ​
> >> Thanks,
> >> Alex
> >>
> >> ​
> >>
> >
>


Re: Fwd:

2018-05-02 Thread Xi Yang
OK, I got it. I've understood Q2 with your help, thanks!



Seems like I have to use some other way to present my images. Here is the
updated version of Q1:


Q1

I found that HFileScannerImpl.getCachedBlock(...) gets a block from the
BlockCache. This cached block is used by StoreFileScanner. Does that mean the
read model is like:

*Model A*

When a read request comes:

   1. Read the 1st Store:
   a. read the BlockCache
   b. read the HFile
   2. Read the 2nd Store:
   a. read the BlockCache
   b. read the HFile
   3. ..
   4. Read the Memstore



Or is there only one BlockCache that every read request goes through
first, like:

*Model B:*

When a read request comes:

   1. Read the BlockCache
   2. Read the 1st Store -> read the HFile
   3. Read the 2nd Store -> read the HFile
   4. 
   5. Read the Memstore

Thanks,
Alex



2018-05-01 20:04 GMT-07:00 Josh Elser :

> FYI, the mailing list strips images.
>
> There is only one BlockCache per RS. Not sure if that answers your Q1 in
> entirety though.
>
> Q2. The "Block" in "BlockCache" are the blocks that make up the HBase
> HFiles in HDFS. Data in the Memstore does not yet exist in HFiles on HDFS.
> Additionally, Memstore is already in memory; no need to have a different
> cache to accomplish the same thing :)
>
> On 5/1/18 9:25 PM, Xi Yang wrote:
>
>> Sorry to bother you guys. May I ask 2 questions about HBase?
>>
>> Q1
>>
>> I found that |HFileScannerImpl.getCachedBlock(...)| get block from
>> BlockCache. This CachedBlock is used by |StoreFileScanner|. Is that mean
>> the read model like:
>>
>> *Model A*
>>
>> Or there is only one BlockCache and all the read request will go through
>> it first, like:
>>
>> *Model B:*
>>
>> ​
>> Q2
>> If the data been read from Memstore, will it be put in BlockCache to
>> accelerate the read process next time?
>>
>> ​
>> Thanks,
>> Alex
>>
>> ​
>>
>


Re: Fwd:

2018-05-01 Thread Josh Elser

FYI, the mailing list strips images.

There is only one BlockCache per RS. Not sure if that answers your Q1 in 
entirety though.


Q2. The "Block" in "BlockCache" are the blocks that make up the HBase 
HFiles in HDFS. Data in the Memstore does not yet exist in HFiles on 
HDFS. Additionally, Memstore is already in memory; no need to have a 
different cache to accomplish the same thing :)


On 5/1/18 9:25 PM, Xi Yang wrote:

Sorry to bother you guys. May I ask 2 questions about HBase?

Q1

I found that |HFileScannerImpl.getCachedBlock(...)| get block from 
BlockCache. This CachedBlock is used by |StoreFileScanner|. Is that mean 
the read model like:


*Model A*

Or there is only one BlockCache and all the read request will go through 
it first, like:


*Model B:*

​
Q2
If the data been read from Memstore, will it be put in BlockCache to 
accelerate the read process next time?


​
Thanks,
Alex

​


Fwd:

2018-05-01 Thread Xi Yang
Sorry to bother you guys. May I ask 2 questions about HBase?

Q1

I found that HFileScannerImpl.getCachedBlock(...) gets a block from the
BlockCache. This cached block is used by StoreFileScanner. Does that mean the
read model is like:

*Model A*

Or is there only one BlockCache that every read request goes through
first, like:

*Model B:*

Q2
If the data has been read from the Memstore, will it be put in the BlockCache
to accelerate the read process next time?

Thanks,
Alex


Fwd: [REPORT] HBase - January 2018

2018-01-09 Thread Misty Stanley-Jones
FYI, this is the quarterly report about HBase project health, as submitted
to the Apache board today. If you have any questions, feel free to ask here
or bring them to the PMC or release management team. Happy New Year!

-- Forwarded message --
From: Misty Stanley-Jones 
Date: Tue, Jan 9, 2018 at 10:58 AM
Subject: [REPORT] HBase - January 2018
To: bo...@apache.org
Cc: priv...@hbase.apache.org


Please vote +/-1 on this report by Jan 9. These stats cover HBase project
activities from Oct-Dec 2017. Happy new year!
---

HBase is a distributed column-oriented database built on top of Hadoop
Common and Hadoop HDFS.

hbase-thirdparty is a set of internal artifacts used by the project to
mitigate the impact of our dependency choices on the wider ecosystem.

ISSUES FOR THE BOARD’S ATTENTION

None at this time.

RELEASES

HBase had three releases during this reporting period, including one alpha
release working toward 2.0.0.

- HBase 1.1.13 was released on Sun Dec 10 2017. This was the final release
in the 1.1 branch. Thanks to Nick Dimiduk for managing releases on this
branch for us.
- HBase 1.4.0 was released on Sun Dec 17 2017. This is the first release of
our new 1.4 code line, the latest in the 1.x series of minor releases.
HBase 1.4.0 incorporates 660 bug fixes and improvements and several new
features. See the 1.4.0 release announcement in the archives of our dev
list for more details. We expect to make releases from this line on roughly
a monthly cadence.
- HBase 2.0.0-alpha-4 was released on Mon Nov 06 2017.

In addition, hbase-thirdparty had one release.

- hbase-thirdparty-2.0.0 was released on Mon Dec 25 2017

ACTIVITY

Development toward a HBase 2.0.0 release is going well, with the fourth
alpha released in November and the first beta around the corner. A release
candidate went up for vote for that beta, but the vote did not pass due to
stability concerns, and we are continuing work to stabilize before the
first beta. Thanks to Michael Stack for driving the 2.0.0 effort. Work
toward the Beta has continued to engage lots of discussions on the dev@
mailing list and to increase interest in the project overall.

Work on the HBase 1.4.x line is going well, and the HBase 1.1.x line is now
at an end.

The HBase project has added five new committers during this reporting
period, with a total of 71 committers.

- Lars Francke was added as a committer on October 16.
- Rahul Gidwani and Jan Hentschel were added as committers on October 25.
- Zheng Hu was added as a committer on October 21.
- Yi Liang was added as a committer on December 20.

Thanks to PMC members, committers, and active community members for a
successful 2017 for the HBase project!

STATS

The dev@ mailing list saw a slight decrease in membership, but a marked
increase in activity. This was due to several substantial discussions about
project health, work on the upcoming 2.0.0 Beta, and discussion of new
features and design changes.

The user@ mailing list saw a slight decrease in membership and a small
decrease in the number of discussions. This is probably due to holidays.

The builds@ mailing list saw a decrease in traffic, due to significant work
on stabilizing build and test infrastructure over the past quarter.

71 committers
38 PMC members
755 JIRA tickets created
730 JIRA tickets closed/resolved


Fwd: precommit stalling across projects

2017-11-26 Thread Ted Yu
For those of you waiting for QA bot to come back, see the following thread.

Allen advised waiting before logging INFRA ticket.

FYI

-- Forwarded message --
From: Allen Wittenauer 
Date: Sun, Nov 26, 2017 at 6:28 PM
Subject: Re: precommit stalling across projects
To: d...@yetus.apache.org



> On Nov 26, 2017, at 6:20 PM, Allen Wittenauer 
wrote:
>
>
>   Given the data provided and without looking real hard, this usually
means someone screwed up the hadoopqa account and locked it out of JIRA.

Yup, it’s locked.  Tried from the UI.  Hit the reset link.  Maybe
someone we know will get it.

>  I’m tempted to create a yetusqa account, give ownership to it to the
yetus PMC, and then put that into (at least) the precommit-hadoop-build job.

Looks like builds.apache.org is undergoing maintenance so no point
in spending much time on this at the moment.  I’ll check later/tomorrow.


Fwd: Ycsb and Hbase - read+write simulation.

2017-08-09 Thread Meirav Malka
-- Forwarded message --
From: "meirav.malka" 
Date: Aug 9, 2017 10:56
Subject: Ycsb and Hbase - read+write simulation.
To: "Hbase User Group" 
Cc:

Hi everyone,

Does anyone know of a YCSB option that will allow the following read
request distribution:


1. 90% of the material read was inserted in the last 24 hours.
2. 97% in the last 2 hours.

The only options I see for request distribution are "latest", "zipfian" and
"uniform".

Is there any way to produce this distribution?

Thanks!


Sent from my Samsung Galaxy smartphone.


Fwd: [JENKINS] [IMPORTANT] - Jenkins Migration and Upgrade (And JDK7 deprecation)

2017-06-27 Thread Josh Elser
tl;dr Infra is upgrading Jenkins in 2 weeks and Java7 Maven jobs 
may/may-not work after this. See explanation below from [1]:



Users with jobs configured with the "Maven project" type may not be able 
to use Java 7 for their Maven jobs. The correct behavior is not 
guaranteed so proceed at your own risk. The Maven Project uses Jenkins 
Remoting to establish "interceptors" within the Maven executable. 
Because of this, Maven uses Remoting and other Jenkins core classes, and 
this behavior may break an update.



[1] https://jenkins.io/blog/2017/04/10/jenkins-has-upgraded-to-java-8/

 Forwarded Message 
Subject: [JENKINS]  [IMPORTANT] - Jenkins Migration and Upgrade (And 
JDK7 deprecation)

Date: Tue, 27 Jun 2017 17:03:13 +1000
From: Gavin McDonald 
Reply-To: bui...@apache.org, bui...@apache.org
To: bui...@apache.org
CC: ASF Operations 

ASF Jenkins Master Migration and Upgrade on :-


Melbourne (Australia - Victoria): Sunday, 16 July 2017 at 10:00:00 am AEST (UTC+10 hours)
New York (USA - New York): Saturday, 15 July 2017 at 8:00:00 pm EDT (UTC-4 hours)

Corresponding UTC (GMT): Sunday, 16 July 2017 at 00:00:00

Hi All,

A few things are going to happen in just over 2 weeks.

1. Migration of Jenkins to a new host. A Jenkins Master module and yaml 
have been puppetized and ready to go.
What we need to do to migrate the Master away from its current host 
is turn off the old service. Perform a final rsync of data and 
perform the migration tasks.

As we intend to preserve history for jobs this will take some time.
At the same time as doing this migration to a new host, all slave 
connections will be updated (see below.)
I have no current estimate of downtime, but it will run into 
several hours. We do plan to run this migration on a Sunday at the 
lowest part of Jenkins usual usage.


2. Upgrade of Jenkins - Jenkins project released a new LTS release, 
version 2.60.1. This is a major release and breaks Jenkins in terms 
of Maven jobs for JDK 7 in the same way that it happened for Maven and 
JDK 6 a few months back.


The infra team (mainly myself) got quite some feedback on not 
supplying advance notice of this breakage. That upgrade however was 
necessary due to security fixes that required our upgrade.  This email 
serves as advance warning of the upcoming upgrade of Jenkins, the 
downtime due to the migration of the service to a new host; and notice 
of the breakage to JDK 7 that the upgrade brings.


Please familiarise yourself with the Jenkins LTS upgrade notes at [1].
In particular please note:-

“…2.60.1 is the first Jenkins LTS release that requires Java 8 to 
run. If you're using the Maven Project type, please note that it needs 
to use a JDK capable of running Jenkins, i.e. JDK 8 or up. If you 
configure an older JDK in a Maven Project, Jenkins will attempt to find 
a newer JDK and use that automatically. If your SSH Slaves fail to start 
and you have the plugin install the JRE to run them, make sure to update 
SSH Slaves Plugin to at least version 1.17 (1.20 recommended).

Changes since 2.60:
Fix for NullPointerException while initiating some SSH connections
(regression in 2.59). (issue 44120)

Notable changes since 2.46.3:
Jenkins (master and agents) now requires Java 8 to run. (issue 27624,
issue 42709, pull 2802, announcement blog post)


…”

There are over 30 other enhancements/fixes since 2.46.2 which we 
currently run so please do take a note of those.


Recap: In just over 2 weeks, downtime for a migration AND upgrade is 
planned.
Please do not rely on Jenkins at all for that weekend if you use it in 
your release workflow.


Please do take this notice back to your dev lists.
Any questions or concerns please email back to bui...@apache.org 
 only.

Thanks

Gav…

[1] - https://jenkins.io/changelog-stable/


Fwd: [JENKINS] [IMPORTANT] - Jenkins Migration and Upgrade (And JDK7 deprecation)

2017-06-27 Thread Mike Drob
Gavin of the infra team just sent this missive over.

It looks like JDK 7 will no longer be available. Do we need to do anything
about how this affects our branch-1, or are we sufficiently confident that JDK8
with compatibility mode targeting 1.7 will be enough for our needs?

Mike

-- Forwarded message --
From: Gavin McDonald 
Date: Tue, Jun 27, 2017 at 2:03 AM
Subject: [JENKINS] [IMPORTANT] - Jenkins Migration and Upgrade (And JDK7
deprecation)
To: bui...@apache.org
Cc: ASF Operations 


ASF Jenkins Master Migration and Upgrade on :-


Melbourne (Australia - Victoria): Sunday, 16 July 2017 at 10:00:00 am AEST (UTC+10 hours)
New York (USA - New York): Saturday, 15 July 2017 at 8:00:00 pm EDT (UTC-4 hours)
Corresponding UTC (GMT): Sunday, 16 July 2017 at 00:00:00


Hi All,

A few things are going to happen in just over 2 weeks.

1. Migration of Jenkins to a new host. A Jenkins Master module and yaml
have been puppetized and ready to go.
What we need to do to migrate the Master away from its current host is
turn off the old service. Perform a final
rsync of data and perform the migration tasks.

As we intend to preserve history for jobs this will take some time.
At the same time as doing this migration to a new host, all slave
connections will be updated (see below.)
I have no current estimate of downtime, but it will run into several
hours. We do plan to run this migration on a
Sunday at the lowest part of Jenkins usual usage.

2. Upgrade of Jenkins - Jenkins project released a new LTS release, version
2.60.1. This is a major release and breaks
Jenkins in terms of Maven jobs for JDK 7 in the same way that it
happened for Maven and JDK 6 a few months back.

The infra team (mainly myself) got quite some feedback on not supplying
advance notice of this breakage. That upgrade
however was necessary due to security fixes that required our upgrade.
This email serves as advance warning of the
upcoming upgrade of Jenkins, the downtime due to the migration of the
service to a new host; and notice of the breakage
to JDK 7 that the upgrade brings.

Please familiarise yourself with the Jenkins LTS upgrade notes at [1].
In particular please note:-

“…2.60.1 is the first Jenkins LTS release that requires Java 8 to run.
If you're using the Maven Project type, please note that it needs to use a
JDK capable of running Jenkins, i.e. JDK 8 or up. If you configure an older
JDK in a Maven Project, Jenkins will attempt to find a newer JDK and use
that automatically. If your SSH Slaves fail to start and you have the
plugin install the JRE to run them, make sure to update SSH Slaves Plugin
to at least version 1.17 (1.20 recommended).
Changes since 2.60:
Fix for NullPointerException while initiating some SSH connections
(regression in 2.59). (issue 44120 )
Notable changes since 2.46.3:
Jenkins (master and agents) now requires Java 8 to run. (issue 27624
<https://issues.jenkins-ci.org/browse/JENKINS-27624>, issue 42709
<https://issues.jenkins-ci.org/browse/JENKINS-42709>, pull 2802
<https://github.com/jenkinsci/jenkins/pull/2802>, announcement blog post
<https://jenkins.io/blog/2017/04/10/jenkins-has-upgraded-to-java-8/>)

…”

There are over 30 other enhancements/fixes since 2.46.2 which we currently
run so please do take a note of those.

Recap: In just over 2 weeks, downtime for a migration AND upgrade is
planned.

Please do not rely on Jenkins at all for that weekend if you use it in your
release workflow.

Please do take this notice back to your dev lists.

Any questions or concerns please email back to bui...@apache.org  only.

Thanks

Gav…

[1] - https://jenkins.io/changelog-stable/


Fwd: Encryption of exisiting data in Stripe Compaction

2017-06-20 Thread ramkrishna vasudevan
Hi all

An interesting case with stripe compaction and encryption. Does anyone have a
suggestion for Karthick's case? The initial mail was targeted at issues@, so I
am forwarding it to dev@ and user@.


Regards
Ram

-- Forwarded message --
From: ramkrishna vasudevan 
Date: Tue, Jun 20, 2017 at 4:51 PM
Subject: Re: Encryption of exisiting data in Stripe Compaction
To: Karthick Ram 


I am not aware of any other mechanism. I just noticed that you had fwded
the message to issues@ and not to dev@ and users@. Let me forward it to
those mailing address. Thanks Karthick.

Regards
Ram

On Mon, Jun 19, 2017 at 1:07 PM, Karthick Ram 
wrote:

> Hi,
> Yes we are doing exactly the same. We altered the table with
> exploringcompaction and triggered a major compaction. But when it comes to
> key rotation, which we do very often, we have to manually alter the table
> and rollback to previous compaction policy. Currently we have a cron job
> for this. Is there any other way to automate this?
>
> Regards
> Karthick R
>
> On Thu, Jun 15, 2017 at 9:55 AM, ramkrishna vasudevan <
> ramkrishna.s.vasude...@gmail.com> wrote:
>
>> Hi
>> Very interesting case. Ya Stripe compaction does not need to under go a
>> major compaction if it already running under stripe compaction (reading the
>> docs I get this).
>> Since you have enable encryption at a later point of time you face this
>> issue I believe. The naive workaround I can think of is that do a alter
>> table with default compaction and it will do a major compaction and once
>> that is done again move back to Stripe compaction?  Will that work?
>>
>> I would like to hear opinion of others who have experience with Strip
>> compaction.
>>
>> Regards
>> Ram
>>
>> On Wed, Jun 14, 2017 at 10:25 AM, Karthick Ram 
>> wrote:
>>
>>> We have a table which has time series data with Stripe Compaction
>>> enabled.
>>> After encryption has been enabled for this table the newer entries are
>>> encrypted and inserted. However to encrypt the existing data in the
>>> table,
>>> a major compaction has to run. Since, stripe compaction doesn't allow a
>>> major compaction to run, we are unable to encrypt the previous data.
>>> Please
>>> suggest some ways to rectify this problem.
>>>
>>> Regards,
>>> Karthick R
>>>
>>
>>
>
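
For anyone wanting to script the workaround described in this thread instead of
keeping a cron job, a rough sketch against the HBase 2.x Java Admin API.
hbase.hstore.engine.class and the two engine class names are what the reference
guide uses to switch a table between stripe and default compaction, but treat
the whole thing as an untested sketch: majorCompact() is asynchronous, so real
code must poll getCompactionState() until the compaction finishes before
switching back, and the table name below is made up.

  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Admin;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.TableDescriptor;
  import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

  public class ReEncryptStripeTable {
    private static final String ENGINE = "hbase.hstore.engine.class";

    public static void main(String[] args) throws Exception {
      TableName table = TableName.valueOf("timeseries"); // hypothetical table name
      try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
           Admin admin = conn.getAdmin()) {
        TableDescriptor stripeDescriptor = admin.getDescriptor(table);
        // 1. Fall back to the default store engine so a major compaction is allowed.
        admin.modifyTable(TableDescriptorBuilder.newBuilder(stripeDescriptor)
            .setValue(ENGINE, "org.apache.hadoop.hbase.regionserver.DefaultStoreEngine")
            .build());
        // 2. Rewrite (and therefore re-encrypt with the current key) every store file.
        admin.majorCompact(table);
        // ... poll admin.getCompactionState(table) until it returns NONE ...
        // 3. Restore the original stripe-compaction descriptor.
        admin.modifyTable(stripeDescriptor);
      }
    }
  }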


Fwd: Removal of maven eclipse plug-in support from Apache Yetus

2017-05-30 Thread Sean Busbey
Just a heads up, I believe we rely on this functionality at the moment
as well. Presuming our runs of the plugin are currently in good health
we can move this into our hbase-specific personality.


-- Forwarded message --
From: Allen Wittenauer 
Date: Tue, May 30, 2017 at 10:33 AM
Subject: Removal of maven eclipse plug-in support from Apache Yetus
To: Hadoop Common 



This is just a heads up.

The Apache Yetus community is debating removing the maven
eclipse plug-in testing support from precommit. (Given that Apache
Hadoop is currently rigged up to always run Yetus' master for testing
purposes, this means Hadoop will see the removal immediately
post-commit.) The plug-in itself is deprecated and always throws
warnings/errors during execution.  Additionally, Eclipse has added
import support as part of Neon.

If you feel strongly either way, feel free to hop onto YETUS-509.

Thanks.
-
To unsubscribe, e-mail: common-dev-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-dev-h...@hadoop.apache.org



-- 
busbey


Re: Fwd: Successful: HBase Generate Website

2017-03-31 Thread Nick Dimiduk
Thanks Misty! You are a- automation-mazing!

On Fri, Mar 31, 2017 at 3:06 PM Misty Stanley-Jones 
wrote:

> And emails will now only go out if the job fails, so you won't see these
> anymore at all.
>
> On Fri, Mar 31, 2017, at 03:03 PM, Misty Stanley-Jones wrote:
> > FYI, the linked Jenkins job now automatically updates the site! No more
> > need to manually push. Merry Christmas! :)
> >
> > - Original message -
> > From: Apache Jenkins Server 
> > To: dev@hbase.apache.org
> > Subject: Successful: HBase Generate Website
> > Date: Fri, 31 Mar 2017 21:32:17 + (UTC)
> >
> > Build status: Successful
> >
> > If successful, the website and docs have been generated and the site has
> > been updated automatically.
> > If failed, see
> > https://builds.apache.org/job/hbase_generate_website/561/console
> >
> > YOU DO NOT NEED TO DO THE FOLLOWING ANYMORE! It is here for
> > informational purposes and shows what the Jenkins job does to push the
> > site.
> >
> >   git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
> >   cd hbase-site
> >   wget -O-
> >
> https://builds.apache.org/job/hbase_generate_website/561/artifact/website.patch.zip
> >   | funzip > 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7.patch
> >   git fetch
> >   git checkout -b asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7
> >   origin/asf-site
> >   git am --whitespace=fix 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7.patch
> >   git push origin
> >   asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7:asf-site
> >   git commit --allow-empty -m "INFRA-10751 Empty commit"
> >   git push origin asf-site
> >   git checkout asf-site
> >   git branch -D asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7
> >
> >
> >
>


Re: Fwd: Successful: HBase Generate Website

2017-03-31 Thread Misty Stanley-Jones
And emails will now only go out if the job fails, so you won't see these
anymore at all.

On Fri, Mar 31, 2017, at 03:03 PM, Misty Stanley-Jones wrote:
> FYI, the linked Jenkins job now automatically updates the site! No more
> need to manually push. Merry Christmas! :)
> 
> - Original message -
> From: Apache Jenkins Server 
> To: dev@hbase.apache.org
> Subject: Successful: HBase Generate Website
> Date: Fri, 31 Mar 2017 21:32:17 + (UTC)
> 
> Build status: Successful
> 
> If successful, the website and docs have been generated and the site has
> been updated automatically.
> If failed, see
> https://builds.apache.org/job/hbase_generate_website/561/console
> 
> YOU DO NOT NEED TO DO THE FOLLOWING ANYMORE! It is here for
> informational purposes and shows what the Jenkins job does to push the
> site.
> 
>   git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
>   cd hbase-site
>   wget -O-
>   
> https://builds.apache.org/job/hbase_generate_website/561/artifact/website.patch.zip
>   | funzip > 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7.patch
>   git fetch
>   git checkout -b asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7
>   origin/asf-site
>   git am --whitespace=fix 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7.patch
>   git push origin
>   asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7:asf-site
>   git commit --allow-empty -m "INFRA-10751 Empty commit"
>   git push origin asf-site
>   git checkout asf-site
>   git branch -D asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7
> 
> 
> 


Fwd: Successful: HBase Generate Website

2017-03-31 Thread Misty Stanley-Jones
FYI, the linked Jenkins job now automatically updates the site! No more
need to manually push. Merry Christmas! :)

- Original message -
From: Apache Jenkins Server 
To: dev@hbase.apache.org
Subject: Successful: HBase Generate Website
Date: Fri, 31 Mar 2017 21:32:17 + (UTC)

Build status: Successful

If successful, the website and docs have been generated and the site has
been updated automatically.
If failed, see
https://builds.apache.org/job/hbase_generate_website/561/console

YOU DO NOT NEED TO DO THE FOLLOWING ANYMORE! It is here for
informational purposes and shows what the Jenkins job does to push the
site.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
  cd hbase-site
  wget -O-
  
https://builds.apache.org/job/hbase_generate_website/561/artifact/website.patch.zip
  | funzip > 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7.patch
  git fetch
  git checkout -b asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7
  origin/asf-site
  git am --whitespace=fix 1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7.patch
  git push origin
  asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7:asf-site
  git commit --allow-empty -m "INFRA-10751 Empty commit"
  git push origin asf-site
  git checkout asf-site
  git branch -D asf-site-1c4d9c8965952cbd17f0afdacbb0c0ac1e5bd1d7





Fwd: Hbase Row key lock

2016-08-16 Thread Manjeet Singh
Hi All

Can anyone help me understand how, and in which version, HBase supports row
key locks?
I have seen an article about row key locks, but it was about the 0.94 version;
it said that if a row key does not exist and an update request comes in for
that row key, HBase holds the lock for 60 seconds.

Currently I am using HBase version 1.2.2.

Thanks
Manjeet





-- 
luv all
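
For context on the question above: the explicit client-side row lock API
(lockRow/RowLock) was deprecated around 0.94 and removed from later client
versions, so on 1.2.x the usual substitute is the server-side atomic operations
such as checkAndPut, checkAndDelete and increment, which take the row lock
internally for the duration of the mutation. A minimal sketch against the 1.x
client API; the table, family and qualifier names are made up.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class CheckAndPutExample {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table table = conn.getTable(TableName.valueOf("demo"))) { // hypothetical table
        byte[] row = Bytes.toBytes("rowkey-1");
        byte[] family = Bytes.toBytes("d");
        byte[] qualifier = Bytes.toBytes("state");
        Put put = new Put(row).addColumn(family, qualifier, Bytes.toBytes("NEW"));
        // Atomically write the cell only if it does not exist yet. The region server
        // holds the row lock for the duration of the check-and-put, so concurrent
        // writers cannot interleave; returns false if another client won the race.
        boolean created = table.checkAndPut(row, family, qualifier, null, put);
        System.out.println("created = " + created);
      }
    }
  }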


Fwd: [GSoC Mentors] Adding Mentors Deadline

2016-03-22 Thread Talat Uyarer
I found an explanation on the GSoC mailing list. Someone asked the same
question to the GSoC team.


-- Forwarded message --
From: 'Stephanie Taylor' via Google Summer of Code Mentors List

Date: Fri, Mar 18, 2016 at 1:33 AM
Subject: Re: [GSoC Mentors] Adding Mentors Deadline
To: Maybellin Burgos 
Cc: Google Summer of Code Mentors List



Hi May,
>
>
> When is the deadline to add new mentors?


Mentors can continue to be added until the end of the program (though
they should be pretty much added by the time you select students in
mid April).
>
>
> Also, will the "remove member" option for mentors be disabled after we've 
> assigned mentors to the final proposals?

No, the remove member will remain in case the OA has a good
reason/need to remove a mentor during the program.

Best,
Stephanie

>
>
> Best,
> May
>
> --




Fwd: Google Summer of Code 2016 is coming

2016-02-11 Thread Nick Dimiduk
Does anyone have an interest in participating this year? We had a fruitful
summer last year over on Phoenix.

-- Forwarded message --
From: Ulrich Stärk 
Date: Wed, Feb 10, 2016 at 12:16 PM
Subject: Google Summer of Code 2016 is coming
To: ment...@community.apache.org


Hello PMCs (incubator Mentors, please forward this email to your podlings),

Google Summer of Code [1] is a program sponsored by Google allowing
students to spend their summer
working on open source software. Students will receive stipends for
developing open source software
full-time for three months. Projects will provide mentoring and project
ideas, and in return have
the chance to get new code developed and - most importantly - to identify
and bring in new committers.

The ASF will apply as a participating organization meaning individual
projects don't have to apply
separately.

If you want to participate with your project, we ask you to do the following
things as soon as
possible, but no later than 2016-02-19:

1. understand what it means to be a mentor [2].

2. record your project ideas.

Just create issues in JIRA, label them with gsoc2016, and they will show up
at [3]. Please be as
specific as possible when describing your idea. Include the programming
language, the tools and
skills required, but try not to scare potential students away. They are
supposed to learn what's
required before the program starts.

Use labels, e.g. for the programming language (java, c, c++, erlang,
python, brainfuck, ...) or
technology area (cloud, xml, web, foo, bar, ...) and record them at [5].

Please use the COMDEV JIRA project for recording your ideas if your project
doesn't use JIRA (e.g.
httpd, ooo). Contact d...@community.apache.org if you need assistance.

[4] contains some additional information (will be updated for 2016 shortly).

3. subscribe to ment...@community.apache.org; restricted to potential
mentors, meant to be used as a
private list (general discussions go on the public d...@community.apache.org
list as much as possible,
please). Use a recognized address when subscribing (@apache.org or one of
your alias addresses on
record).

Note that the ASF isn't accepted as a participating organization yet,
nevertheless you *have to*
start recording your ideas now or we will not get accepted.

Over the years we were able to complete hundreds of projects successfully.
Some of our prior
students are active contributors now! Let's make this year a success again!

Cheers,

Uli

P.S.: Except for the private parts (label spreadsheet mostly), this email
is free to be shared
publicly if you want to.

[1] https://summerofcode.withgoogle.com/
[2] http://community.apache.org/guide-to-being-a-mentor.html
[3] http://s.apache.org/gsoc2016ideas
[4] http://community.apache.org/gsoc.html
[5] http://s.apache.org/gsoclabels


Fwd: MODERATE for u...@hbase.apache.org

2015-12-21 Thread Stack
(Here I am forwarding a message that tripped the spam filter (probably
because of the footer) and that I mistakenly judged spam... forwarding
manually).
St.Ack

-- Forwarded message --
From: Harpreet-Saini Singh 
To: "u...@hbase.apache.org" 
Cc:
Date: Mon, 21 Dec 2015 13:19:15 +
Subject: Socket timeout exception while using regex filter in scan statement

Hi Team,



I keep getting this error in the shell while using a regex string with a
filter query.

Ex:

scan 'table',COLUMNS=>'j:PRICE', FILTER => "ValueFilter( =,
'regexstring:111025' )"



ERROR: Call id=122, waitTime=60001, operationTimeout=6 expired.



I found that work had been going on for this issue and it has been resolved.

JIRA : https://issues.apache.org/jira/browse/HBASE-14180

Can you please help me with the solution? The JIRA status is Resolved, but I
can't find any outcome in the comments.



I am using HBase 1.0 with CDH 5.4.5.



Thanks and Regards

Harpreet Singh
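
A note not part of the original message: a regex ValueFilter forces the server
to examine every cell in the scanned range, so individual scanner calls can
legitimately exceed the default 60-second client timeout. A minimal sketch of
one mitigation, assuming the HBase 1.x Java client; the property names are
standard client settings, and the timeout and caching values below are
illustrative assumptions, not recommendations:

  import java.io.IOException;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.ResultScanner;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
  import org.apache.hadoop.hbase.filter.RegexStringComparator;
  import org.apache.hadoop.hbase.filter.ValueFilter;
  import org.apache.hadoop.hbase.util.Bytes;

  public class SlowScanSketch {
    public static void main(String[] args) throws IOException {
      Configuration conf = HBaseConfiguration.create();
      // Give each scanner RPC up to 5 minutes instead of the 60s default.
      conf.setInt("hbase.client.scanner.timeout.period", 300000);
      conf.setInt("hbase.rpc.timeout", 300000);
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table table = conn.getTable(TableName.valueOf("table"))) {
        Scan scan = new Scan();
        scan.addColumn(Bytes.toBytes("j"), Bytes.toBytes("PRICE"));
        scan.setFilter(new ValueFilter(CompareOp.EQUAL,
            new RegexStringComparator("111025")));
        // Fewer rows per next() call keeps each RPC short even when the
        // filter discards most of the table.
        scan.setCaching(100);
        try (ResultScanner scanner = table.getScanner(scan)) {
          for (Result r : scanner) {
            System.out.println(r);
          }
        }
      }
    }
  }

Reducing the scan caching keeps each scanner RPC short, while the larger
timeouts simply give slow calls more room to finish.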




Fwd: Successful: HBase Generate Website

2015-12-07 Thread Misty Stanley-Jones
You may notice that the instructions in this email have changed slightly.
We now have a second repository,
https://git-wip-us.apache.org/repos/asf/hbase-site.git, with a single
branch, asf-site. The Jenkins job now checks out the hbase repo as normal,
then runs mvn clean site site:stage. Next, it checks out the hbase-site
repo, removes automatically generated content, replaces that content with
the newly-built content from the hbase build, and creates a patch.

The eventual goal is to be able to have Jenkins push the updated site and
docs for us. Apache Infra are working on this last step, but until they get
it working, follow the instructions in these Jenkins emails to update the
website.

Anyone who is an Apache committer can examine the Jenkins job:
https://builds.apache.org/view/H-L/view/HBase/job/hbase_generate_website/

Thanks for your patience!

-- Forwarded message --
From: Apache Jenkins Server 
Date: Mon, Dec 7, 2015 at 8:48 PM
Subject: Successful: HBase Generate Website
To: dev@hbase.apache.org, mi...@apache.org


Build status: Successful

If successful, the website and docs have been generated. If failed, skip to
the bottom of this email.

Use the following commands to download the patch and apply it to a clean
branch based on origin/asf-site. If you prefer to keep the hbase-site repo
around permanently, you can skip the clone step.

  git clone https://git-wip-us.apache.org/repos/asf/hbase-site.git
  cd hbase-site
  wget -O- https://builds.apache.org/job/hbase_generate_website/58/artifact/website.patch.zip | funzip > 1f999c1e2bba62fda0fb426a168afa338b31c251.patch
  git fetch
  git checkout -b asf-site-1f999c1e2bba62fda0fb426a168afa338b31c251 origin/asf-site
  git am 1f999c1e2bba62fda0fb426a168afa338b31c251.patch

At this point, you can preview the changes by opening index.html or any of
the other HTML pages in your local
asf-site-1f999c1e2bba62fda0fb426a168afa338b31c251 branch, and you can
review the differences by running:

  git diff origin/asf-site

When you are satisfied, publish your changes to origin/asf-site using this
command:

  git push origin asf-site-1f999c1e2bba62fda0fb426a168afa338b31c251:asf-site

Changes take a couple of minutes to be propagated. You can then remove your
asf-site-1f999c1e2bba62fda0fb426a168afa338b31c251 branch:

  git checkout asf-site && git branch -d asf-site-1f999c1e2bba62fda0fb426a168afa338b31c251



If failed, see
https://builds.apache.org/job/hbase_generate_website/58/console


Fwd: [NOTICE] people.apache.org web space is moving to home.apache.org

2015-12-03 Thread Andrew Purtell
Please note that the infrastructure team is making a significant change.
people.apache.org will be going away to be replaced with home.apache.org,
but only for hosting public web content, and only accessible (by
committers/members) via sftp.

Some of us, like myself, have been hosting release candidate binaries on
people.apache.org. Any of us doing that will need to switch to publishing
release candidates on dist.apache.org instead.

We have also in the past used people.apache.org to host temporary maven
repositories. I checked root poms for our active branches. Only 0.94 will
be affected when people.apache.org goes away. That may produce build
failures, so if we make another 0.94 release we should include a fix for
this.


-- Forwarded message --
From: Daniel Gruno 
Date: Wed, Nov 25, 2015 at 4:20 AM
Subject: [NOTICE] people.apache.org web space is moving to home.apache.org
To: committ...@apache.org


Hi folks,
as the subject says, people.apache.org is being decommissioned soon, and
personal web space is being moved to a new home, aptly named
home.apache.org ( https://home.apache.org/ )

IMPORTANT:
If you have things on people.apache.org that you would like to retain,
please make a copy of it and move it to home.apache.org. (note, you will
have to make a folder called 'public_html' there, for items to show up
under https://home.apache.org/~yourID/ ).

We will _NOT_ be moving your data for you. There is simply too much old
junk data on minotaur (the current people.apache.org machine) for it to
make sense to rsync it across, so we have made the decision that moving
data is up to each individual committer.

The new host, home.apache.org, will ONLY be for web space, you will not
have shell access to the machine (but you can copy data to it using SFTP
and your SSH key). Access to modify LDAP records (for project chairs)
will be moved to a separate host when the time comes.

There will be a 3 month grace period to move your data across. After
this time span (March 1st, 2016), minotaur will no longer serve up
personal web space, and visits to people.apache.org will be redirected
to home.apache.org.

With regards,
Daniel on behalf of the Apache Infrastructure Team.

PS: All replies to this should go to infrastruct...@apache.org



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Fwd: Successful: HBase Generate Website

2015-11-02 Thread Misty Stanley-Jones
I set up a Jenkins job to do the heavy lifting of building and publishing
the website. All you need to do is paste the commands from the email from
Jenkins. You don't even need to worry about keeping the SVN repo up to
date. Hope this is helpful!

Right now, this will run daily. We can tune it down if it's too often.

-- Forwarded message --
From: Apache Jenkins Server 
Date: Tue, Nov 3, 2015 at 3:48 PM
Subject: Successful: HBase Generate Website
To: dev@hbase.apache.org


Successful

If successful, AND you have SVN commit access, paste the following into a
command prompt to publish the website:

wget https://builds.apache.org/job/HBase%20Generate%20Website/9/artifact/trunk.tar.gz
tar xzvf trunk.tar.gz
cd trunk
svn commit -F ./commit_msg.txt

If failed, see
https://builds.apache.org/job/HBase%20Generate%20Website/9/console


Fwd: Hbase Fully distribution mode - Cannot resolve regionserver hostname

2015-07-17 Thread Dima Spivak
+user@, dev@ to bcc

Pubudu,

I think you'll get more help on an issue like this on the users list.

-Dima

-- Forwarded message --
From: Ted Yu yuzhih...@gmail.com
Date: Fri, Jul 17, 2015 at 5:40 AM
Subject: Re: Hbase Fully distribution mode - Cannot resolve regionserver
hostname
To: dev@hbase.apache.org dev@hbase.apache.org


Have you looked at
HBASE-12954 Ability impaired using HBase on multihomed hosts

Cheers

On Fri, Jul 17, 2015 at 3:32 AM, Pubudu Gunatilaka pubudu...@gmail.com
wrote:

 Hi Devs,

  I am trying to run HBase in fully distributed mode. So first I started the
  master node. Then I started a regionserver. But I am getting the following
 error.

 2015-07-17 05:12:02,260 WARN  [pod-35:16020.activeMasterManager]
 master.AssignmentManager: Failed assignment of hbase:meta,,1.1588230740 to
 pod-36,16020,1437109916288, trying to assign elsewhere instead; try=1 of 10
 java.net.UnknownHostException: unknown host: pod-36
 at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.<init>(RpcClientImpl.java:296)
 at org.apache.hadoop.hbase.ipc.RpcClientImpl.createConnection(RpcClientImpl.java:129)
 at org.apache.hadoop.hbase.ipc.RpcClientImpl.getConnection(RpcClientImpl.java:1278)
 at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1152)
 at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:216)
 at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:300)
 at org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$BlockingStub.openRegion(AdminProtos.java:21711)
 at org.apache.hadoop.hbase.master.ServerManager.sendRegionOpen(ServerManager.java:712)
 at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:2101)
 at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1567)
 at org.apache.hadoop.hbase.master.AssignmentManager.assign(AssignmentManager.java:1545)
 at org.apache.hadoop.hbase.master.AssignmentManager.assignMeta(AssignmentManager.java:2630)
 at org.apache.hadoop.hbase.master.HMaster.assignMeta(HMaster.java:820)
 at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:685)
 at org.apache.hadoop.hbase.master.HMaster.access$500(HMaster.java:165)
 at org.apache.hadoop.hbase.master.HMaster$1.run(HMaster.java:1428)
 at java.lang.Thread.run(Thread.java:745)


 This error occurs because the master node cannot resolve the hostname of the
 regionserver. My requirement is to automate the HBase installation with 1
 master node and 4 regionservers, but at the moment I have no way of updating
 the master's /etc/hosts file. Can I solve the problem from the HBase
 configuration side?

 If HBase could communicate using IP addresses, or use the hostname that the
 regionserver already sends to the master, this issue could be solved without
 updating the /etc/hosts file. A similar approach can be found in Hadoop:
 once a datanode connects to the namenode, the namenode can communicate with
 the datanode without updating the /etc/hosts file.

 Any help on this is appreciated.

 Thank you!

 --

 *Pubudu Gunatilaka*



Fwd: [jira] [Created] (REEF-216) Protocol Buffers 2.5 no longer available for download

2015-03-22 Thread Ted Yu
-- Forwarded message --
From: Chris Douglas cdoug...@apache.org
Date: Sun, Mar 22, 2015 at 4:31 PM
Subject: Fwd: [jira] [Created] (REEF-216) Protocol Buffers 2.5 no longer
available for download
To: common-...@hadoop.apache.org common-...@hadoop.apache.org


The Hadoop build instructions[1] also no longer apply. -C

[1]
https://git-wip-us.apache.org/repos/asf?p=hadoop.git;a=blob;f=BUILDING.txt


-- Forwarded message --
From: Markus Weimer (JIRA) j...@apache.org
Date: Fri, Mar 20, 2015 at 5:33 PM
Subject: [jira] [Created] (REEF-216) Protocol Buffers 2.5 no longer
available for download
To: d...@reef.incubator.apache.org


Markus Weimer created REEF-216:
--

 Summary: Protocol Buffers 2.5 no longer available for download
 Key: REEF-216
 URL: https://issues.apache.org/jira/browse/REEF-216
 Project: REEF
  Issue Type: Bug
  Components: REEF
Affects Versions: 0.10, 0.11
Reporter: Markus Weimer


Google recently switched off Google Code. They transferred the
Protocol Buffers project to
[GitHub|https://github.com/google/protobuf], and binaries are
available from [Google's developer
page|https://developers.google.com/protocol-buffers/docs/downloads].
However, only the most recent version is available. We use version 2.5
to be compatible with Hadoop. That version isn't available for
download. Hence, our [compile
instructions|https://cwiki.apache.org/confluence/display/REEF/Compiling+REEF
]
no longer apply.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Fwd: data base design question

2015-02-12 Thread Dima Spivak
Forwarding to users@, moving dev@ to bcc. People on the user list might be
more helpful here, Jignesh.

Cheers,
  Dima

-- Forwarded message --
From: *Jignesh Patel* jigneshmpa...@gmail.com
Date: Thursday, February 12, 2015
Subject: data base design question
To: dev@hbase.apache.org dev@hbase.apache.org


I have a requirement where I have to define two entities: orders and
results.
Each order can have multiple results, so for the DB design I have two
options:

Option 1: Create an embedded entity for the results and store it as a list
object inside the order table as one of the column fields.
Option 2: Create separate tables for orders and results, and build a
secondary index in Solr where, for a given order ID, multiple result IDs are
mapped.

Is there a better alternative to the above options for one-to-many
relationships?
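
Not part of the original message, but a sketch of a common HBase take on
option 1: keep each order as a single wide row and store every result in its
own column of a results family (rather than one serialized list), so a single
Get returns the order with all of its results. Assumes the HBase 1.x Java
client; the table name 'orders', family 'r' and the IDs are hypothetical:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.TableName;
  import org.apache.hadoop.hbase.client.Connection;
  import org.apache.hadoop.hbase.client.ConnectionFactory;
  import org.apache.hadoop.hbase.client.Get;
  import org.apache.hadoop.hbase.client.Put;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Table;
  import org.apache.hadoop.hbase.util.Bytes;

  public class OrderResultsSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      try (Connection conn = ConnectionFactory.createConnection(conf);
           Table orders = conn.getTable(TableName.valueOf("orders"))) {
        byte[] orderId = Bytes.toBytes("order-0001");
        Put put = new Put(orderId);
        // One column per result: qualifier = result id, value = serialized result.
        put.addColumn(Bytes.toBytes("r"), Bytes.toBytes("result-1"), Bytes.toBytes("..."));
        put.addColumn(Bytes.toBytes("r"), Bytes.toBytes("result-2"), Bytes.toBytes("..."));
        orders.put(put);

        // One Get brings back the order together with all of its results.
        Result row = orders.get(new Get(orderId).addFamily(Bytes.toBytes("r")));
        for (byte[] resultId : row.getFamilyMap(Bytes.toBytes("r")).keySet()) {
          System.out.println(Bytes.toString(resultId));
        }
      }
    }
  }

The trade-off against option 2 is the usual one: the wide row keeps reads and
writes for one order atomic and local, while separate tables plus a Solr index
suit cases where results must be queried independently of their order.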


Fwd: how to do parallel scanning in map reduce using hbase as input?

2014-07-21 Thread Li Li
Can anyone help? I now have about 1.1 billion nodes and it takes 2
hours to finish a map reduce job.

-- Forwarded message --
From: Li Li fancye...@gmail.com
Date: Thu, Jun 26, 2014 at 3:34 PM
Subject: how to do parallel scanning in map reduce using hbase as input?
To: u...@hbase.apache.org


My table has about 700 million rows and about 80 regions. Each task
tracker is configured to run 4 mappers and 4 reducers at the same time.
The Hadoop/HBase cluster has 5 nodes, so at any given time it has 20
mappers running. It takes more than an hour to finish the mapper stage.
The HBase cluster's load is very low, about 2,000 requests per second.
I think one mapper per region is too few. How can I run more than
one mapper per region so that the job can take full advantage of the
computing resources?
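
Not part of the original thread, but one way to get more than one mapper per
region is to hand the job several Scans, each covering a sub-range of the
table, via the multi-scan variant of TableMapReduceUtil.initTableMapperJob
(which uses MultiTableInputFormat). A rough sketch, assuming the HBase
0.98/1.x MapReduce API; the table name, split point and mapper are
hypothetical, and in practice the split points would be computed from the
region boundaries (for example one extra split per region at its key-range
midpoint):

  import java.io.IOException;
  import java.util.ArrayList;
  import java.util.List;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.client.Result;
  import org.apache.hadoop.hbase.client.Scan;
  import org.apache.hadoop.hbase.io.ImmutableBytesWritable;
  import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
  import org.apache.hadoop.hbase.mapreduce.TableMapper;
  import org.apache.hadoop.hbase.util.Bytes;
  import org.apache.hadoop.io.NullWritable;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.output.NullOutputFormat;

  public class MultiScanJobSketch {

    public static class RowCountMapper
        extends TableMapper<ImmutableBytesWritable, NullWritable> {
      @Override
      protected void map(ImmutableBytesWritable key, Result value, Context ctx)
          throws IOException, InterruptedException {
        ctx.write(key, NullWritable.get());
      }
    }

    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      Job job = Job.getInstance(conf, "multi-scan example");
      job.setJarByClass(MultiScanJobSketch.class);

      // Hypothetical split point. To really get two mappers per region you
      // would compute one split point inside each region and build one Scan
      // per half-region.
      byte[][] splitPoints = { Bytes.toBytes("m") };
      byte[] table = Bytes.toBytes("mytable");

      List<Scan> scans = new ArrayList<Scan>();
      for (int i = 0; i <= splitPoints.length; i++) {
        Scan scan = new Scan();
        if (i > 0) {
          scan.setStartRow(splitPoints[i - 1]);
        }
        if (i < splitPoints.length) {
          scan.setStopRow(splitPoints[i]);
        }
        scan.setCaching(500);
        scan.setCacheBlocks(false); // recommended for MapReduce scans
        // Tell MultiTableInputFormat which table this scan targets.
        scan.setAttribute(Scan.SCAN_ATTRIBUTES_TABLE_NAME, table);
        scans.add(scan);
      }

      // Each scan range produces its own input split(s), so more, finer-grained
      // ranges mean more map tasks.
      TableMapReduceUtil.initTableMapperJob(scans, RowCountMapper.class,
          ImmutableBytesWritable.class, NullWritable.class, job);
      job.setNumReduceTasks(0);
      job.setOutputFormatClass(NullOutputFormat.class);
      System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
  }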


Fwd: Disk space leak when using HBase and HDFS ShortCircuit

2014-06-25 Thread Andrew Purtell
Forwarded

-- Forwarded message --
From: Vladimir Rodionov vrodio...@carrieriq.com
Date: Wed, Jun 25, 2014 at 12:03 PM
Subject: RE: Disk space leak when using HBase and HDFS ShortCircuit
To: u...@hbase.apache.org u...@hbase.apache.org


 Apparently those file descriptors were stored by the HDFS
 ShortCircuit cache.

As far as I understand, this is an issue with the HDFS short-circuit-read
implementation, not HBase. HBase uses the HDFS API to access
files. Did you ask this question on the HDFS dev list? This looks like a very
serious bug.

Best regards,
Vladimir Rodionov
Principal Platform Engineer
Carrier IQ, www.carrieriq.com
e-mail: vrodio...@carrieriq.com


From: Giuseppe Reina [g.re...@gmail.com]
Sent: Wednesday, June 25, 2014 2:54 AM
To: u...@hbase.apache.org
Subject: Disk space leak when using HBase and HDFS ShortCircuit

Hi all,
   we have been experiencing the same problem with 2 of our clusters. We
are currently using HDP 2.1, which comes with HBase 0.98.

The problem manifested as a huge difference (hundreds of GB) between the
output of df and du for the HDFS data directories.
Eventually, other systems complained about the lack of space before shutting
down. We identified the problem and discovered that all the RegionServers
were holding lots of open file descriptors to deleted files, which
prevented the OS from freeing the occupied disk space (hence the difference
between df and du). The deleted files were pointing to the local HDFS
blocks of old HFiles deleted from HDFS during compaction and/or split
operations. Apparently those file descriptors were held by the HDFS
short-circuit cache.

My question is: isn't the short-circuit feature supposed to get notified
somehow when a file is deleted on HDFS, so it can remove the open fds
from the cache? This creates huge leaks whenever HBase is heavily loaded,
and we had to restart the RegionServers periodically until we identified
the problem. We solved it by first disabling short-circuit reads in HDFS
and then re-enabling them with a reduced cache size, so that the cache
eviction policies trigger more often (this leads to some performance
loss).


p.s. I am aware of the dfs.client.read.shortcircuit.streams.cache.expiry.ms
parameter, but for some reason the default value (5 mins) does not work
out-of-the-box on HDP 2.1; moreover, the problem persists for high
timeouts and big cache sizes.

Kind Regards




-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)
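
For reference, not part of the original thread: the two knobs the workaround
touches are the short-circuit stream cache size and its expiry. They belong in
the RegionServers' hbase-site.xml; the sketch below only illustrates the key
names through the Hadoop Configuration API, and the values (64 entries, 60
seconds) are assumptions rather than recommendations:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;

  public class ShortCircuitCacheSketch {
    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // Keep short-circuit reads enabled...
      conf.setBoolean("dfs.client.read.shortcircuit", true);
      // ...but shrink the open-descriptor cache and expire entries sooner, so
      // descriptors for blocks of deleted HFiles are released earlier.
      conf.setInt("dfs.client.read.shortcircuit.streams.cache.size", 64);
      conf.setLong("dfs.client.read.shortcircuit.streams.cache.expiry.ms", 60000L);
      System.out.println(conf.get("dfs.client.read.shortcircuit.streams.cache.size"));
    }
  }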


Re: ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for

2014-04-03 Thread Stack
Any votes out there for this one?
Thanks,
St.Ack


On Mon, Mar 24, 2014 at 4:41 PM, Stack st...@duboce.net wrote:

 The third release candidate for hbase-0.96.2 is available here:

  http://people.apache.org/~stack/hbase-0.96.2RC2/

 and up in a staging memory repository here:

  https://repository.apache.org/content/repositories/orgapachehbase-1012

 This RC has two fixes beyond RC0 and RC1:

  HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based on
 first/last key) into 0.96 Stack
  HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares type
 wrongly Ramkrishna S Vasudevan

 Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday, April
 1st.

 Thanks,
 St.Ack





 On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:

 Here is RC1.  Its the same as RC0 only it is properly named and I svn
 add'd a bit of missing doc.  You can download it here:

  http://people.apache.org/~stack/hbase-0.96.2RC1/

 It is up in a staging maven repository here:

  https://repository.apache.org/content/repositories/orgapachehbase-1009

 See below for list of fixes since 0.96.1.

 Shall we release this as hbase-0.96.2?  Please vote by Monday, March 31st.

 Thanks,
 St.Ack




 -- Forwarded message --
 From: Stack st...@duboce.net
 Date: Wed, Mar 19, 2014 at 12:43 PM
 Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is available
 for download
 To: HBase Dev List dev@hbase.apache.org


 hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is a
 bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
 distinct contributors).

 You can download the release candidate here:

   http://people.apache.org/~stack/hbase-0.96.2RC0/

 It is staged in an apache maven repository at this location:

   https://repository.apache.org/content/repositories/orgapachehbase-1008/

 Shall we release this candidate as hbase-0.96.2?  Lets close the vote in
 a week on March 26th.

 Yours,
 St.Ack

 1. http://goo.gl/bQk42q

 Here is the list of CHANGEs:


 HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam
 John
 HBASE-10384 Failed to increment serveral columns in one Increment Jimmy
 Xiang
 HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
 HBASE-10370 Compaction in out-of-date Store causes region split failure
 Liu Shaohui
 HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
 Jeffrey Zhong
 HBASE-10556 Possible data loss due to non-handled
 DroppedSnapshotException for user-triggered flush from client/shell
 Honghua Feng
 HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
 contains row for table '-ROOT' or '.META.' Jeffrey Zhong
 HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94*
 to hbase0.96+ Jeffrey Zhong
 HBASE-10575 ReplicationSource thread can't be terminated if it runs into
 the loop to contact peer's zk ensemble and fails continuously Honghua Feng
 HBASE-10436 restore regionserver lists removed from hbase 0.96+ jmx
 Jonathan Hsieh
 HBASE-10449 Wrong execution pool configuration in HConnectionManager
 Nicolas Liochon
 HBASE-10442 prepareDelete() isn't called before doPreMutationHook for a
 row deletion case Jeffrey Zhong
 HBASE-10598 Written data can not be read out because
 MemStore#timeRangeTracker might be updated concurrently cuijianwei
 HBASE-10679 Both clients get wrong scan results if the first scanner
 expires and the second scanner is created with the same scannerId on the
 same region Honghua Feng
 HBASE-10514 Forward port HBASE-10466, possible data loss when failed
 flushes stack
 HBASE-10749 CellComparator.compareStatic() compares type wrongly
 ramkrishna.s.vasudevan
 HBASE-9151 HBCK cannot fix when meta server znode deleted, this can
 happen if all region servers stopped and there are no logs to split.
 rajeshbabu
 HBASE-8803 region_mover.rb should move multiple regions at a time
 Jean-Marc Spaggiari
 HBASE-10043 HBASE-10033 Fix Potential Resouce Leak in
 MultiTableInputFormatBase Elliott Clark
 HBASE-10195 mvn site build failed with OutOfMemoryError Jeffrey Zhong
 HBASE-10196 Enhance HBCK to understand the case after online region
 merge chunhui shen
 HBASE-10137 GeneralBulkAssigner with retain assignment plan can be used
 in EnableTableHandler to bulk assign the regions rajeshbabu
 HBASE-10157 Provide CP hook post log replay Anoop Sam John
 HBASE-10155 HRegion isRecovering state is wrongly coming in postOpen
 hook Anoop Sam John
 HBASE-10146 Bump HTrace version to 2.04 Elliott Clark
 HBASE-10124 HBASE-10033 Make Sub Classes Static When Possible Elliott
 Clark
 HBASE-10084 [WINDOWS] bin\hbase.cmd should allow whitespaces in
 java.library.path and classpath Enis Soztutar
 HBASE-10186 region_mover.rb broken because ServerName constructor is
 changed to private Samir Ahmic
 HBASE-10098 [WINDOWS] pass in native library directory from hadoop for
 unit tests Enis Soztutar
 HBASE-10110 HBASE-10033 Fix Potential Resource Leak in StoreFlusher
 

Re: ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for

2014-04-03 Thread Ted Yu
http://people.apache.org/ is not accessible at this moment (due to an LDAP
issue).

Will give it a spin once access is restored.



On Thu, Apr 3, 2014 at 7:48 AM, Stack st...@duboce.net wrote:

 Any votes out there for this one?
 Thanks,
 St.Ack


 On Mon, Mar 24, 2014 at 4:41 PM, Stack st...@duboce.net wrote:

  The third release candidate for hbase-0.96.2 is available here:
 
   http://people.apache.org/~stack/hbase-0.96.2RC2/
 
  and up in a staging memory repository here:
 
   https://repository.apache.org/content/repositories/orgapachehbase-1012
 
  This RC has two fixes beyond RC0 and RC1:
 
   HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based on
  first/last key) into 0.96 Stack
   HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares type
  wrongly Ramkrishna S Vasudevan
 
  Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday, April
  1st.
 
  Thanks,
  St.Ack
 
 
 
 
 
  On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:
 
  Here is RC1.  Its the same as RC0 only it is properly named and I svn
  add'd a bit of missing doc.  You can download it here:
 
   http://people.apache.org/~stack/hbase-0.96.2RC1/
 
  It is up in a staging maven repository here:
 
   https://repository.apache.org/content/repositories/orgapachehbase-1009
 
  See below for list of fixes since 0.96.1.
 
  Shall we release this as hbase-0.96.2?  Please vote by Monday, March
 31st.
 
  Thanks,
  St.Ack
 
 
 
 
  -- Forwarded message --
  From: Stack st...@duboce.net
  Date: Wed, Mar 19, 2014 at 12:43 PM
  Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is available
  for download
  To: HBase Dev List dev@hbase.apache.org
 
 
  hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is
 a
  bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
  distinct contributors).
 
  You can download the release candidate here:
 
http://people.apache.org/~stack/hbase-0.96.2RC0/
 
  It is staged in an apache maven repository at this location:
 
 
 https://repository.apache.org/content/repositories/orgapachehbase-1008/
 
  Shall we release this candidate as hbase-0.96.2?  Lets close the vote in
  a week on March 26th.
 
  Yours,
  St.Ack
 
  1. http://goo.gl/bQk42q
 
  Here is the list of CHANGEs:
 
 
  HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam
  John
  HBASE-10384 Failed to increment serveral columns in one Increment Jimmy
  Xiang
  HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
  HBASE-10370 Compaction in out-of-date Store causes region split failure
  Liu Shaohui
  HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
  Jeffrey Zhong
  HBASE-10556 Possible data loss due to non-handled
  DroppedSnapshotException for user-triggered flush from client/shell
  Honghua Feng
  HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
  contains row for table '-ROOT' or '.META.' Jeffrey Zhong
  HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94*
  to hbase0.96+ Jeffrey Zhong
  HBASE-10575 ReplicationSource thread can't be terminated if it runs into
  the loop to contact peer's zk ensemble and fails continuously Honghua
 Feng
  HBASE-10436 restore regionserver lists removed from hbase 0.96+ jmx
  Jonathan Hsieh
  HBASE-10449 Wrong execution pool configuration in HConnectionManager
  Nicolas Liochon
  HBASE-10442 prepareDelete() isn't called before doPreMutationHook for a
  row deletion case Jeffrey Zhong
  HBASE-10598 Written data can not be read out because
  MemStore#timeRangeTracker might be updated concurrently cuijianwei
  HBASE-10679 Both clients get wrong scan results if the first scanner
  expires and the second scanner is created with the same scannerId on the
  same region Honghua Feng
  HBASE-10514 Forward port HBASE-10466, possible data loss when failed
  flushes stack
  HBASE-10749 CellComparator.compareStatic() compares type wrongly
  ramkrishna.s.vasudevan
  HBASE-9151 HBCK cannot fix when meta server znode deleted, this can
  happen if all region servers stopped and there are no logs to split.
  rajeshbabu
  HBASE-8803 region_mover.rb should move multiple regions at a time
  Jean-Marc Spaggiari
  HBASE-10043 HBASE-10033 Fix Potential Resouce Leak in
  MultiTableInputFormatBase Elliott Clark
  HBASE-10195 mvn site build failed with OutOfMemoryError Jeffrey Zhong
  HBASE-10196 Enhance HBCK to understand the case after online region
  merge chunhui shen
  HBASE-10137 GeneralBulkAssigner with retain assignment plan can be used
  in EnableTableHandler to bulk assign the regions rajeshbabu
  HBASE-10157 Provide CP hook post log replay Anoop Sam John
  HBASE-10155 HRegion isRecovering state is wrongly coming in postOpen
  hook Anoop Sam John
  HBASE-10146 Bump HTrace version to 2.04 Elliott Clark
  HBASE-10124 HBASE-10033 Make Sub Classes Static When Possible Elliott
  Clark
  HBASE-10084 [WINDOWS] bin\hbase.cmd should allow whitespaces 

Re: ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for

2014-04-03 Thread Elliott Clark
+1 looks good.
Checked sig, and played on a small cluster.
Passed IT tests on hadoop 2


On Thu, Apr 3, 2014 at 7:55 AM, Anoop John anoop.hb...@gmail.com wrote:

 I will check the RC and vote soon Stack. b4 this weekend. :)

 Anoop


 On Thursday, April 3, 2014, Stack st...@duboce.net wrote:
  Any votes out there for this one?
  Thanks,
  St.Ack
 
 
  On Mon, Mar 24, 2014 at 4:41 PM, Stack st...@duboce.net wrote:
 
  The third release candidate for hbase-0.96.2 is available here:
 
   http://people.apache.org/~stack/hbase-0.96.2RC2/
 
  and up in a staging memory repository here:
 
   https://repository.apache.org/content/repositories/orgapachehbase-1012
 
  This RC has two fixes beyond RC0 and RC1:
 
   HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based on
  first/last key) into 0.96 Stack
   HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares
 type
  wrongly Ramkrishna S Vasudevan
 
  Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday, April
  1st.
 
  Thanks,
  St.Ack
 
 
 
 
 
  On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:
 
  Here is RC1.  Its the same as RC0 only it is properly named and I svn
  add'd a bit of missing doc.  You can download it here:
 
   http://people.apache.org/~stack/hbase-0.96.2RC1/
 
  It is up in a staging maven repository here:
 
 
 https://repository.apache.org/content/repositories/orgapachehbase-1009
 
  See below for list of fixes since 0.96.1.
 
  Shall we release this as hbase-0.96.2?  Please vote by Monday, March
 31st.
 
  Thanks,
  St.Ack
 
 
 
 
  -- Forwarded message --
  From: Stack st...@duboce.net
  Date: Wed, Mar 19, 2014 at 12:43 PM
  Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is
 available
  for download
  To: HBase Dev List dev@hbase.apache.org
 
 
  hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is
 a
  bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
  distinct contributors).
 
  You can download the release candidate here:
 
http://people.apache.org/~stack/hbase-0.96.2RC0/
 
  It is staged in an apache maven repository at this location:
 
 
 https://repository.apache.org/content/repositories/orgapachehbase-1008/
 
  Shall we release this candidate as hbase-0.96.2?  Lets close the vote
 in
  a week on March 26th.
 
  Yours,
  St.Ack
 
  1. http://goo.gl/bQk42q
 
  Here is the list of CHANGEs:
 
 
  HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam
  John
  HBASE-10384 Failed to increment serveral columns in one Increment
 Jimmy
  Xiang
  HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
  HBASE-10370 Compaction in out-of-date Store causes region split
 failure
  Liu Shaohui
  HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
  Jeffrey Zhong
  HBASE-10556 Possible data loss due to non-handled
  DroppedSnapshotException for user-triggered flush from client/shell
  Honghua Feng
  HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
  contains row for table '-ROOT' or '.META.' Jeffrey Zhong
  HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94*
  to hbase0.96+ Jeffrey Zhong
 



Re: ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for

2014-04-03 Thread Stack
Thanks Elliott.


On Thu, Apr 3, 2014 at 12:02 PM, Elliott Clark ecl...@apache.org wrote:

 +1 looks good.
 Checked sig, and played on a small cluster.
 Passed IT tests on hadoop 2


 On Thu, Apr 3, 2014 at 7:55 AM, Anoop John anoop.hb...@gmail.com wrote:

  I will check the RC and vote soon Stack. b4 this weekend. :)
 
  Anoop
 
 
  On Thursday, April 3, 2014, Stack st...@duboce.net wrote:
   Any votes out there for this one?
   Thanks,
   St.Ack
  
  
   On Mon, Mar 24, 2014 at 4:41 PM, Stack st...@duboce.net wrote:
  
   The third release candidate for hbase-0.96.2 is available here:
  
http://people.apache.org/~stack/hbase-0.96.2RC2/
  
   and up in a staging memory repository here:
  
  
 https://repository.apache.org/content/repositories/orgapachehbase-1012
  
   This RC has two fixes beyond RC0 and RC1:
  
HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based
 on
   first/last key) into 0.96 Stack
HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares
  type
   wrongly Ramkrishna S Vasudevan
  
   Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday,
 April
   1st.
  
   Thanks,
   St.Ack
  
  
  
  
  
   On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:
  
   Here is RC1.  Its the same as RC0 only it is properly named and I svn
   add'd a bit of missing doc.  You can download it here:
  
http://people.apache.org/~stack/hbase-0.96.2RC1/
  
   It is up in a staging maven repository here:
  
  
  https://repository.apache.org/content/repositories/orgapachehbase-1009
  
   See below for list of fixes since 0.96.1.
  
   Shall we release this as hbase-0.96.2?  Please vote by Monday, March
  31st.
  
   Thanks,
   St.Ack
  
  
  
  
   -- Forwarded message --
   From: Stack st...@duboce.net
   Date: Wed, Mar 19, 2014 at 12:43 PM
   Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is
  available
   for download
   To: HBase Dev List dev@hbase.apache.org
  
  
   hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It
 is
  a
   bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from
 41
   distinct contributors).
  
   You can download the release candidate here:
  
 http://people.apache.org/~stack/hbase-0.96.2RC0/
  
   It is staged in an apache maven repository at this location:
  
  
  https://repository.apache.org/content/repositories/orgapachehbase-1008/
  
   Shall we release this candidate as hbase-0.96.2?  Lets close the vote
  in
   a week on March 26th.
  
   Yours,
   St.Ack
  
   1. http://goo.gl/bQk42q
  
   Here is the list of CHANGEs:
  
  
   HBASE-10161 [AccessController] Tolerate regions in recovery Anoop
 Sam
   John
   HBASE-10384 Failed to increment serveral columns in one Increment
  Jimmy
   Xiang
   HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
   HBASE-10370 Compaction in out-of-date Store causes region split
  failure
   Liu Shaohui
   HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
   Jeffrey Zhong
   HBASE-10556 Possible data loss due to non-handled
   DroppedSnapshotException for user-triggered flush from client/shell
   Honghua Feng
   HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL
 table
   contains row for table '-ROOT' or '.META.' Jeffrey Zhong
   HBASE-10581 ACL znode are left without PBed during upgrading
 hbase0.94*
   to hbase0.96+ Jeffrey Zhong
  
 



Re: ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for

2014-04-03 Thread Ted Yu
+1

- checked documentation and tarball

- ran unit test suite against hadoop-2 profile which passed (see bottom of
email)

- ran in local and distributed mode

- checked the UI pages

-
[INFO] Building HBase - Assembly 0.96.2
[INFO]

[INFO]
[INFO] --- maven-remote-resources-plugin:1.4:process (default) @
hbase-assembly ---
[INFO]
[INFO] --- maven-dependency-plugin:2.8:build-classpath
(create-hbase-generated-classpath) @ hbase-assembly ---
[INFO] Wrote classpath file
'/homes/hortonzy/96.2RC2/hbase-0.96.2/target/cached_classpath.txt'.
[INFO]

[INFO] Reactor Summary:
[INFO]
[INFO] HBase . SUCCESS [1.586s]
[INFO] HBase - Common  SUCCESS [35.548s]
[INFO] HBase - Protocol .. SUCCESS [0.288s]
[INFO] HBase - Client  SUCCESS [35.744s]
[INFO] HBase - Hadoop Compatibility .. SUCCESS [5.062s]
[INFO] HBase - Hadoop Two Compatibility .. SUCCESS [1.253s]
[INFO] HBase - Prefix Tree ... SUCCESS [2.403s]
[INFO] HBase - Server  SUCCESS
[1:00:26.939s]
[INFO] HBase - Testing Util .. SUCCESS [1.785s]
[INFO] HBase - Thrift  SUCCESS
[1:49.282s]
[INFO] HBase - Shell . SUCCESS
[1:32.461s]
[INFO] HBase - Integration Tests . SUCCESS [0.882s]
[INFO] HBase - Examples .. SUCCESS [0.896s]
[INFO] HBase - Assembly .. SUCCESS [0.829s]


On Thu, Apr 3, 2014 at 7:48 AM, Stack st...@duboce.net wrote:

 Any votes out there for this one?
 Thanks,
 St.Ack


 On Mon, Mar 24, 2014 at 4:41 PM, Stack st...@duboce.net wrote:

  The third release candidate for hbase-0.96.2 is available here:
 
   http://people.apache.org/~stack/hbase-0.96.2RC2/
 
  and up in a staging memory repository here:
 
   https://repository.apache.org/content/repositories/orgapachehbase-1012
 
  This RC has two fixes beyond RC0 and RC1:
 
   HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based on
  first/last key) into 0.96 Stack
   HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares type
  wrongly Ramkrishna S Vasudevan
 
  Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday, April
  1st.
 
  Thanks,
  St.Ack
 
 
 
 
 
  On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:
 
  Here is RC1.  Its the same as RC0 only it is properly named and I svn
  add'd a bit of missing doc.  You can download it here:
 
   http://people.apache.org/~stack/hbase-0.96.2RC1/
 
  It is up in a staging maven repository here:
 
   https://repository.apache.org/content/repositories/orgapachehbase-1009
 
  See below for list of fixes since 0.96.1.
 
  Shall we release this as hbase-0.96.2?  Please vote by Monday, March
 31st.
 
  Thanks,
  St.Ack
 
 
 
 
  -- Forwarded message --
  From: Stack st...@duboce.net
  Date: Wed, Mar 19, 2014 at 12:43 PM
  Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is available
  for download
  To: HBase Dev List dev@hbase.apache.org
 
 
  hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is
 a
  bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
  distinct contributors).
 
  You can download the release candidate here:
 
http://people.apache.org/~stack/hbase-0.96.2RC0/
 
  It is staged in an apache maven repository at this location:
 
 
 https://repository.apache.org/content/repositories/orgapachehbase-1008/
 
  Shall we release this candidate as hbase-0.96.2?  Lets close the vote in
  a week on March 26th.
 
  Yours,
  St.Ack
 
  1. http://goo.gl/bQk42q
 
  Here is the list of CHANGEs:
 
 
  HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam
  John
  HBASE-10384 Failed to increment serveral columns in one Increment Jimmy
  Xiang
  HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
  HBASE-10370 Compaction in out-of-date Store causes region split failure
  Liu Shaohui
  HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
  Jeffrey Zhong
  HBASE-10556 Possible data loss due to non-handled
  DroppedSnapshotException for user-triggered flush from client/shell
  Honghua Feng
  HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
  contains row for table '-ROOT' or '.META.' Jeffrey Zhong
  HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94*
  to hbase0.96+ Jeffrey Zhong
  HBASE-10575 ReplicationSource thread can't be terminated if it runs into
  the loop to contact peer's zk ensemble and fails continuously Honghua
 Feng
  HBASE-10436 restore regionserver lists removed from hbase 

Re: ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for

2014-04-03 Thread Stack
Thanks lads for testing.  Let me push it out since we have three +1s.  I +1 it too,
after messing with it for a while here on my little test cluster.

St.Ack


On Thu, Apr 3, 2014 at 4:15 PM, Ted Yu yuzhih...@gmail.com wrote:

 +1

 - checked documentation and tarball

 - ran unit test suite against hadoop-2 profile which passed (see bottom of
 email)

 - ran in local and distributed mode

 - checked the UI pages

 -
 [INFO] Building HBase - Assembly 0.96.2
 [INFO]
 
 [INFO]
 [INFO] --- maven-remote-resources-plugin:1.4:process (default) @
 hbase-assembly ---
 [INFO]
 [INFO] --- maven-dependency-plugin:2.8:build-classpath
 (create-hbase-generated-classpath) @ hbase-assembly ---
 [INFO] Wrote classpath file
 '/homes/hortonzy/96.2RC2/hbase-0.96.2/target/cached_classpath.txt'.
 [INFO]
 
 [INFO] Reactor Summary:
 [INFO]
 [INFO] HBase . SUCCESS [1.586s]
 [INFO] HBase - Common  SUCCESS
 [35.548s]
 [INFO] HBase - Protocol .. SUCCESS [0.288s]
 [INFO] HBase - Client  SUCCESS
 [35.744s]
 [INFO] HBase - Hadoop Compatibility .. SUCCESS [5.062s]
 [INFO] HBase - Hadoop Two Compatibility .. SUCCESS [1.253s]
 [INFO] HBase - Prefix Tree ... SUCCESS [2.403s]
 [INFO] HBase - Server  SUCCESS
 [1:00:26.939s]
 [INFO] HBase - Testing Util .. SUCCESS [1.785s]
 [INFO] HBase - Thrift  SUCCESS
 [1:49.282s]
 [INFO] HBase - Shell . SUCCESS
 [1:32.461s]
 [INFO] HBase - Integration Tests . SUCCESS [0.882s]
 [INFO] HBase - Examples .. SUCCESS [0.896s]
 [INFO] HBase - Assembly .. SUCCESS [0.829s]


 On Thu, Apr 3, 2014 at 7:48 AM, Stack st...@duboce.net wrote:

  Any votes out there for this one?
  Thanks,
  St.Ack
 
 
  On Mon, Mar 24, 2014 at 4:41 PM, Stack st...@duboce.net wrote:
 
   The third release candidate for hbase-0.96.2 is available here:
  
http://people.apache.org/~stack/hbase-0.96.2RC2/
  
   and up in a staging memory repository here:
  
  
 https://repository.apache.org/content/repositories/orgapachehbase-1012
  
   This RC has two fixes beyond RC0 and RC1:
  
HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based on
   first/last key) into 0.96 Stack
HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares
 type
   wrongly Ramkrishna S Vasudevan
  
   Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday,
 April
   1st.
  
   Thanks,
   St.Ack
  
  
  
  
  
   On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:
  
   Here is RC1.  Its the same as RC0 only it is properly named and I svn
   add'd a bit of missing doc.  You can download it here:
  
http://people.apache.org/~stack/hbase-0.96.2RC1/
  
   It is up in a staging maven repository here:
  
  
 https://repository.apache.org/content/repositories/orgapachehbase-1009
  
   See below for list of fixes since 0.96.1.
  
   Shall we release this as hbase-0.96.2?  Please vote by Monday, March
  31st.
  
   Thanks,
   St.Ack
  
  
  
  
   -- Forwarded message --
   From: Stack st...@duboce.net
   Date: Wed, Mar 19, 2014 at 12:43 PM
   Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is
 available
   for download
   To: HBase Dev List dev@hbase.apache.org
  
  
   hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It
 is
  a
   bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from
 41
   distinct contributors).
  
   You can download the release candidate here:
  
 http://people.apache.org/~stack/hbase-0.96.2RC0/
  
   It is staged in an apache maven repository at this location:
  
  
  https://repository.apache.org/content/repositories/orgapachehbase-1008/
  
   Shall we release this candidate as hbase-0.96.2?  Lets close the vote
 in
   a week on March 26th.
  
   Yours,
   St.Ack
  
   1. http://goo.gl/bQk42q
  
   Here is the list of CHANGEs:
  
  
   HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam
   John
   HBASE-10384 Failed to increment serveral columns in one Increment
 Jimmy
   Xiang
   HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
   HBASE-10370 Compaction in out-of-date Store causes region split
 failure
   Liu Shaohui
   HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
   Jeffrey Zhong
   HBASE-10556 Possible data loss due to non-handled
   DroppedSnapshotException for user-triggered flush from client/shell
   Honghua Feng
   HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL
 table
   contains 

Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for download

2014-03-24 Thread Stack
Sinking this RC because it is missing HBASE-10819, Backport HBASE-8063 (Filter
HFiles based on first/last key) into 0.96.

Let me put up a new RC in a few.

St.Ack


On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:

 Here is RC1.  Its the same as RC0 only it is properly named and I svn
 add'd a bit of missing doc.  You can download it here:

  http://people.apache.org/~stack/hbase-0.96.2RC1/

 It is up in a staging maven repository here:

  https://repository.apache.org/content/repositories/orgapachehbase-1009

 See below for list of fixes since 0.96.1.

 Shall we release this as hbase-0.96.2?  Please vote by Monday, March 31st.

 Thanks,
 St.Ack




 -- Forwarded message --
 From: Stack st...@duboce.net
 Date: Wed, Mar 19, 2014 at 12:43 PM
 Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is available
 for download
 To: HBase Dev List dev@hbase.apache.org


 hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is a
 bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
 distinct contributors).

 You can download the release candidate here:

   http://people.apache.org/~stack/hbase-0.96.2RC0/

 It is staged in an apache maven repository at this location:

   https://repository.apache.org/content/repositories/orgapachehbase-1008/

 Shall we release this candidate as hbase-0.96.2?  Lets close the vote in a
 week on March 26th.

 Yours,
 St.Ack

 1. http://goo.gl/bQk42q

 Here is the list of CHANGEs:


 HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam John
 HBASE-10384 Failed to increment serveral columns in one Increment Jimmy
 Xiang
 HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
 HBASE-10370 Compaction in out-of-date Store causes region split failure
 Liu Shaohui
 HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
 Jeffrey Zhong
 HBASE-10556 Possible data loss due to non-handled DroppedSnapshotException
 for user-triggered flush from client/shell Honghua Feng
 HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
 contains row for table '-ROOT' or '.META.' Jeffrey Zhong
 HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94* to
 hbase0.96+ Jeffrey Zhong
 HBASE-10575 ReplicationSource thread can't be terminated if it runs into
 the loop to contact peer's zk ensemble and fails continuously Honghua Feng
 HBASE-10436 restore regionserver lists removed from hbase 0.96+ jmx
 Jonathan Hsieh
 HBASE-10449 Wrong execution pool configuration in HConnectionManager
 Nicolas Liochon
 HBASE-10442 prepareDelete() isn't called before doPreMutationHook for a
 row deletion case Jeffrey Zhong
 HBASE-10598 Written data can not be read out because
 MemStore#timeRangeTracker might be updated concurrently cuijianwei
 HBASE-10679 Both clients get wrong scan results if the first scanner
 expires and the second scanner is created with the same scannerId on the
 same region Honghua Feng
 HBASE-10514 Forward port HBASE-10466, possible data loss when failed
 flushes stack
 HBASE-10749 CellComparator.compareStatic() compares type wrongly
 ramkrishna.s.vasudevan
 HBASE-9151 HBCK cannot fix when meta server znode deleted, this can happen
 if all region servers stopped and there are no logs to split. rajeshbabu
 HBASE-8803 region_mover.rb should move multiple regions at a time
 Jean-Marc Spaggiari
 HBASE-10043 HBASE-10033 Fix Potential Resouce Leak in
 MultiTableInputFormatBase Elliott Clark
 HBASE-10195 mvn site build failed with OutOfMemoryError Jeffrey Zhong
 HBASE-10196 Enhance HBCK to understand the case after online region merge
 chunhui shen
 HBASE-10137 GeneralBulkAssigner with retain assignment plan can be used in
 EnableTableHandler to bulk assign the regions rajeshbabu
 HBASE-10157 Provide CP hook post log replay Anoop Sam John
 HBASE-10155 HRegion isRecovering state is wrongly coming in postOpen hook
 Anoop Sam John
 HBASE-10146 Bump HTrace version to 2.04 Elliott Clark
 HBASE-10124 HBASE-10033 Make Sub Classes Static When Possible Elliott
 Clark
 HBASE-10084 [WINDOWS] bin\hbase.cmd should allow whitespaces in
 java.library.path and classpath Enis Soztutar
 HBASE-10186 region_mover.rb broken because ServerName constructor is
 changed to private Samir Ahmic
 HBASE-10098 [WINDOWS] pass in native library directory from hadoop for
 unit tests Enis Soztutar
 HBASE-10110 HBASE-10033 Fix Potential Resource Leak in StoreFlusher
 Elliott Clark
 HBASE-10264 [MapReduce]: CompactionTool in mapred mode is missing classes
 in its classpath Himanshu Vashishtha
 HBASE-10260 Canary Doesn't pick up Configuration properly. Elliott Clark
 HBASE-10232 Remove native profile from hbase-shell Elliott Clark
 HBASE-10221 Region from coprocessor invocations can be null on failure
 Andrew Purtell
 HBASE-10220 Put all test service principals into the superusers list
 Andrew Purtell
 HBASE-10219 HTTPS support for HBase in RegionServerListTmpl.jamon Ted Yu
 HBASE-10218 Port HBASE-10142 

ANNOUNCE: The third hbase-0.96.2 release candidate (WAS -- Re: ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for dow

2014-03-24 Thread Stack
The third release candidate for hbase-0.96.2 is available here:

 http://people.apache.org/~stack/hbase-0.96.2RC2/

and up in a staging maven repository here:

 https://repository.apache.org/content/repositories/orgapachehbase-1012

This RC has two fixes beyond RC0 and RC1:

 HBASE-10819 (HBASE-10819) Backport HBASE-8063 (Filter HFiles based on
first/last key) into 0.96 Stack
 HBASE-10802 CellComparator.compareStaticIgnoreMvccVersion compares type
wrongly Ramkrishna S Vasudevan

Shall we release this RC as hbase-0.96.2?  Vote closes on Tuesday, April
1st.

Thanks,
St.Ack





On Sun, Mar 23, 2014 at 10:45 AM, Stack st...@duboce.net wrote:

 Here is RC1.  Its the same as RC0 only it is properly named and I svn
 add'd a bit of missing doc.  You can download it here:

  http://people.apache.org/~stack/hbase-0.96.2RC1/

 It is up in a staging maven repository here:

  https://repository.apache.org/content/repositories/orgapachehbase-1009

 See below for list of fixes since 0.96.1.

 Shall we release this as hbase-0.96.2?  Please vote by Monday, March 31st.

 Thanks,
 St.Ack




 -- Forwarded message --
 From: Stack st...@duboce.net
 Date: Wed, Mar 19, 2014 at 12:43 PM
 Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is available
 for download
 To: HBase Dev List dev@hbase.apache.org


 hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is a
 bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
 distinct contributors).

 You can download the release candidate here:

   http://people.apache.org/~stack/hbase-0.96.2RC0/

 It is staged in an apache maven repository at this location:

   https://repository.apache.org/content/repositories/orgapachehbase-1008/

 Shall we release this candidate as hbase-0.96.2?  Lets close the vote in a
 week on March 26th.

 Yours,
 St.Ack

 1. http://goo.gl/bQk42q

 Here is the list of CHANGEs:


 HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam John
 HBASE-10384 Failed to increment serveral columns in one Increment Jimmy
 Xiang
 HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
 HBASE-10370 Compaction in out-of-date Store causes region split failure
 Liu Shaohui
 HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
 Jeffrey Zhong
 HBASE-10556 Possible data loss due to non-handled DroppedSnapshotException
 for user-triggered flush from client/shell Honghua Feng
 HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
 contains row for table '-ROOT' or '.META.' Jeffrey Zhong
 HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94* to
 hbase0.96+ Jeffrey Zhong
 HBASE-10575 ReplicationSource thread can't be terminated if it runs into
 the loop to contact peer's zk ensemble and fails continuously Honghua Feng
 HBASE-10436 restore regionserver lists removed from hbase 0.96+ jmx
 Jonathan Hsieh
 HBASE-10449 Wrong execution pool configuration in HConnectionManager
 Nicolas Liochon
 HBASE-10442 prepareDelete() isn't called before doPreMutationHook for a
 row deletion case Jeffrey Zhong
 HBASE-10598 Written data can not be read out because
 MemStore#timeRangeTracker might be updated concurrently cuijianwei
 HBASE-10679 Both clients get wrong scan results if the first scanner
 expires and the second scanner is created with the same scannerId on the
 same region Honghua Feng
 HBASE-10514 Forward port HBASE-10466, possible data loss when failed
 flushes stack
 HBASE-10749 CellComparator.compareStatic() compares type wrongly
 ramkrishna.s.vasudevan
 HBASE-9151 HBCK cannot fix when meta server znode deleted, this can happen
 if all region servers stopped and there are no logs to split. rajeshbabu
 HBASE-8803 region_mover.rb should move multiple regions at a time
 Jean-Marc Spaggiari
 HBASE-10043 HBASE-10033 Fix Potential Resource Leak in
 MultiTableInputFormatBase Elliott Clark
 HBASE-10195 mvn site build failed with OutOfMemoryError Jeffrey Zhong
 HBASE-10196 Enhance HBCK to understand the case after online region merge
 chunhui shen
 HBASE-10137 GeneralBulkAssigner with retain assignment plan can be used in
 EnableTableHandler to bulk assign the regions rajeshbabu
 HBASE-10157 Provide CP hook post log replay Anoop Sam John
 HBASE-10155 HRegion isRecovering state is wrongly coming in postOpen hook
 Anoop Sam John
 HBASE-10146 Bump HTrace version to 2.04 Elliott Clark
 HBASE-10124 HBASE-10033 Make Sub Classes Static When Possible Elliott
 Clark
 HBASE-10084 [WINDOWS] bin\hbase.cmd should allow whitespaces in
 java.library.path and classpath Enis Soztutar
 HBASE-10186 region_mover.rb broken because ServerName constructor is
 changed to private Samir Ahmic
 HBASE-10098 [WINDOWS] pass in native library directory from hadoop for
 unit tests Enis Soztutar
 HBASE-10110 HBASE-10033 Fix Potential Resource Leak in StoreFlusher
 Elliott Clark
 HBASE-10264 [MapReduce]: CompactionTool in mapred mode is missing classes
 in its classpath Himanshu Vashishtha
 

ANNOUNCE: The second hbase-0.96.2 release candidate (WAS -- Fwd: ANNOUNCE: The first hbase-0.96.2 release candidate is available for download

2014-03-23 Thread Stack
Here is RC1.  It's the same as RC0, only it is properly named and I svn add'd
a bit of missing doc.  You can download it here:

 http://people.apache.org/~stack/hbase-0.96.2RC1/

It is up in a staging maven repository here:

 https://repository.apache.org/content/repositories/orgapachehbase-1009

See below for list of fixes since 0.96.1.

Shall we release this as hbase-0.96.2?  Please vote by Monday, March 31st.

Thanks,
St.Ack




-- Forwarded message --
From: Stack st...@duboce.net
Date: Wed, Mar 19, 2014 at 12:43 PM
Subject: ANNOUNCE: The first hbase-0.96.2 release candidate is available
for download
To: HBase Dev List dev@hbase.apache.org


hbase-0.96.2RC0 is the first release candidate for hbase-0.96.2.  It is a
bug fix release that includes 129 fixes [1] since hbase-0.96.1 (from 41
distinct contributors).

You can download the release candidate here:

  http://people.apache.org/~stack/hbase-0.96.2RC0/

It is staged in an apache maven repository at this location:

  https://repository.apache.org/content/repositories/orgapachehbase-1008/

Shall we release this candidate as hbase-0.96.2?  Lets close the vote in a
week on March 26th.

Yours,
St.Ack

1. http://goo.gl/bQk42q

Here is the list of CHANGEs:


HBASE-10161 [AccessController] Tolerate regions in recovery Anoop Sam John
HBASE-10384 Failed to increment several columns in one Increment Jimmy
Xiang
HBASE-10313 Duplicate servlet-api jars in hbase 0.96.0 stack
HBASE-10370 Compaction in out-of-date Store causes region split failure
Liu Shaohui
HBASE-10366 0.94 filterRow() may be skipped in 0.96(or onwards) code
Jeffrey Zhong
HBASE-10556 Possible data loss due to non-handled DroppedSnapshotException
for user-triggered flush from client/shell Honghua Feng
HBASE-10582 0.94-0.96 Upgrade: ACL can't be repopulated when ACL table
contains row for table '-ROOT' or '.META.' Jeffrey Zhong
HBASE-10581 ACL znode are left without PBed during upgrading hbase0.94* to
hbase0.96+ Jeffrey Zhong
HBASE-10575 ReplicationSource thread can't be terminated if it runs into
the loop to contact peer's zk ensemble and fails continuously Honghua Feng
HBASE-10436 restore regionserver lists removed from hbase 0.96+ jmx
Jonathan Hsieh
HBASE-10449 Wrong execution pool configuration in HConnectionManager
Nicolas Liochon
HBASE-10442 prepareDelete() isn't called before doPreMutationHook for a row
deletion case Jeffrey Zhong
HBASE-10598 Written data can not be read out because
MemStore#timeRangeTracker might be updated concurrently cuijianwei
HBASE-10679 Both clients get wrong scan results if the first scanner
expires and the second scanner is created with the same scannerId on the
same region Honghua Feng
HBASE-10514 Forward port HBASE-10466, possible data loss when failed
flushes stack
HBASE-10749 CellComparator.compareStatic() compares type wrongly
ramkrishna.s.vasudevan
HBASE-9151 HBCK cannot fix when meta server znode deleted, this can happen
if all region servers stopped and there are no logs to split. rajeshbabu
HBASE-8803 region_mover.rb should move multiple regions at a time
Jean-Marc Spaggiari
HBASE-10043 HBASE-10033 Fix Potential Resource Leak in
MultiTableInputFormatBase Elliott Clark
HBASE-10195 mvn site build failed with OutOfMemoryError Jeffrey Zhong
HBASE-10196 Enhance HBCK to understand the case after online region merge
chunhui shen
HBASE-10137 GeneralBulkAssigner with retain assignment plan can be used in
EnableTableHandler to bulk assign the regions rajeshbabu
HBASE-10157 Provide CP hook post log replay Anoop Sam John
HBASE-10155 HRegion isRecovering state is wrongly coming in postOpen hook
Anoop Sam John
HBASE-10146 Bump HTrace version to 2.04 Elliott Clark
HBASE-10124 HBASE-10033 Make Sub Classes Static When Possible Elliott Clark
HBASE-10084 [WINDOWS] bin\hbase.cmd should allow whitespaces in
java.library.path and classpath Enis Soztutar
HBASE-10186 region_mover.rb broken because ServerName constructor is
changed to private Samir Ahmic
HBASE-10098 [WINDOWS] pass in native library directory from hadoop for unit
tests Enis Soztutar
HBASE-10110 HBASE-10033 Fix Potential Resource Leak in StoreFlusher
Elliott Clark
HBASE-10264 [MapReduce]: CompactionTool in mapred mode is missing classes
in its classpath Himanshu Vashishtha
HBASE-10260 Canary Doesn't pick up Configuration properly. Elliott Clark
HBASE-10232 Remove native profile from hbase-shell Elliott Clark
HBASE-10221 Region from coprocessor invocations can be null on failure
Andrew Purtell
HBASE-10220 Put all test service principals into the superusers list
Andrew Purtell
HBASE-10219 HTTPS support for HBase in RegionServerListTmpl.jamon Ted Yu
HBASE-10218 Port HBASE-10142 'TestLogRolling#testLogRollOnDatanodeDeath
test failure' to 0.96 branch Ted Yu
HBASE-10332 Missing .regioninfo file during daughter open processing
Matteo Bertozzi
HBASE-10315 Canary shouldn't exit with 3 if there is no master running.
Elliott Clark
HBASE-10310 ZNodeCleaner session expired for /hbase/master Samir Ahmic
HBASE-10375 

Fwd: Where are we in ZOOKEEPER-1416

2014-01-17 Thread Andrew Purtell
What is going on with this thread over on dev@zookeeper? Bringing it to the
attention of people over here.


-- Forwarded message --
From: Ted Dunning ted.dunn...@gmail.com
Date: Fri, Jan 17, 2014 at 2:41 PM
Subject: Re: Where are we in ZOOKEEPER-1416
To: d...@zookeeper.apache.org d...@zookeeper.apache.org


My reference here is to the comments a ways up thread.  Kishore and I
clearly agree completely that idempotency and dealing with the state as it
is right now are the keys to correct design.


On Fri, Jan 17, 2014 at 2:14 PM, Ted Dunning ted.dunn...@gmail.com wrote:


 That comment indicates a lack of understanding of ZK, not a bug in ZK.

 You don't lose state transitions if you read new state at the same time
 you set the new watch.

 Likewise, it is simply a product of bad design to have a problem with
 asynchronous notification.  Changes on other machines *are* asynchronous so
 anybody who can't handle that is inherently denying reality.  If you want
 to inject the notifications into a sequential view of an event stream, that
 is trivial to do.

 Systems that depend on transition notification are generally not as robust
 as systems that depend on current state.  Building a cluster manager works
 better if the master is notified that a change has happened, but then
 simply deals with the situation as it stands.

 As an analog, imagine that you have a system that shows a number x and a
 second system that is supposed to show an echo of that number.

 Design A is notified of changes to x in the form of deltas.  If there is
 ever an error in handling events, the echo will be off forever.  The error
 that causes the delta to be dropped could be notification or a coding error
 or a misunderstanding of how parallel systems work.  For instance, the
 InterruptedException might not be handled right.

 Design B is notified of changes to x and whenever a change happens, the
 second system simply goes and reads the new state.  Errors will be quickly
 corrected.

 It sounds like the original poster is trying to build something like
 Design A when they should be building Design B.
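
 As a concrete illustration of the Design B pattern described above, here is
 a minimal sketch using the stock ZooKeeper Java client; the class, the znode
 path handling, and the error handling are illustrative assumptions rather
 than HBase or ZooKeeper project code:

 import org.apache.zookeeper.WatchedEvent;
 import org.apache.zookeeper.Watcher;
 import org.apache.zookeeper.ZooKeeper;
 import org.apache.zookeeper.data.Stat;

 // Sketch of state-based handling of ZK notifications (Design B):
 // on every notification, re-read the current state and re-set the watch.
 public class CurrentStateWatcher implements Watcher {
   private final ZooKeeper zk;
   private final String path;

   public CurrentStateWatcher(ZooKeeper zk, String path) throws Exception {
     this.zk = zk;
     this.path = path;
     readAndRewatch();
   }

   @Override
   public void process(WatchedEvent event) {
     try {
       // Do not try to infer what changed from the event; just re-read.
       readAndRewatch();
     } catch (Exception e) {
       // Real code would handle connection loss / session expiry and retry.
       e.printStackTrace();
     }
   }

   private void readAndRewatch() throws Exception {
     Stat stat = new Stat();
     // The watch is re-registered in the same call that reads the data,
     // so no transition is silently lost even if several writes were
     // coalesced into a single notification.
     byte[] data = zk.getData(path, this, stat);
     apply(data, stat.getVersion());
   }

   private void apply(byte[] data, int version) {
     // Reconcile local state against the znode's current contents here.
   }
 }

 Because the watch is set by the same getData() call that reads the state, a
 burst of writes can collapse into one notification without anything being
 lost; the reconciliation in apply() only ever works from the current znode
 contents, which is exactly the Design B behavior.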





 On Fri, Jan 17, 2014 at 12:34 PM, Ted Yu yuzhih...@gmail.com wrote:

 HBASE-5487 is also related.

 The discussion there is very long. Below is an excerpt from Honghua:

 too many tricky scenarios/bugs due to ZK watch being one-time (which can
 result in missed state transitions) and the notification/processing being
 asynchronous (which can lead to delayed/non-up-to-date state in master
 memory).

 Cheers


 On Fri, Jan 17, 2014 at 11:25 AM, Ted Yu yuzhih...@gmail.com wrote:

  Hi, Flavio:
  HBASE-8365 is one such case.
 
  Let me search around for other related discussion.
 
 
  On Fri, Jan 17, 2014 at 11:17 AM, Flavio Junqueira 
 fpjunque...@yahoo.comwrote:
 
  Hi Ted,
 
  Can you provide more detail on how the precise deltas could make it more
  robust?
 
  -Flavio
 
  -Original Message-
  From: Ted Yu yuzhih...@gmail.com
  Sent: 17/01/2014 17:25
  To: d...@zookeeper.apache.org d...@zookeeper.apache.org
  Subject: Re: Where are we in ZOOKEEPER-1416
 
  Having the ability to know exact deltas would help make HBase region
  assignment more robust.
 
  Cheers
 
 
 
  On Fri, Jan 17, 2014 at 9:13 AM, kishore g g.kish...@gmail.com
 wrote:
 
   I agree with you, I like the side effect and in fact I would prefer to
   have one notification for all changes under a parent node.
  
   However, Hao is probably asking for the ability to know exact deltas.
  
  
   On Fri, Jan 17, 2014 at 8:15 AM, FPJ fpjunque...@yahoo.com wrote:
  
We don't need to have a mapping between every change and a notification.
If there are 2+ changes between notifications, you'll be able to observe it
by reading the ZK state. In fact, one nice side-effect is that we reduce the
number of notifications when there are many concurrent changes.

The only situation I can see it being necessary is the one in which we need
to know precisely the changes and we haven't cached a previous version of
the state.
   
-Flavio
   
 -Original Message-
 From: kishore g [mailto:g.kish...@gmail.com]
 Sent: 17 January 2014 16:06
 To: d...@zookeeper.apache.org
 Subject: Re: Where are we in ZOOKEEPER-1416

 I think Hao is pointing out that there is no way to see every change
 (delta) that happened to a znode. Consider 2 changes A, B in quick
 succession. When the client gets notified of A and, before setting the
 watch, the change B has occurred on the server side. This means the client
 cannot know the delta A. The client can only read the state after change B
 is applied.

 Implementing the concept of a persistent watcher guarantees that the client
 is notified after every change.

 This is a nice-to-have feature but I don't understand the requirement in
 HBase where this is needed. Hao, can you shed more light on how this
 would 

Fwd: [jira] [Resolved] (HADOOP-10199) Precommit Admin build is not running because no previous successful build is available

2014-01-02 Thread Ted Yu
FYI

In case you're wondering why
https://builds.apache.org/job/PreCommit-HBASE-Build/ is empty.

-- Forwarded message --
From: Todd Lipcon (JIRA) j...@apache.org
Date: Thu, Jan 2, 2014 at 11:51 AM
Subject: [jira] [Resolved] (HADOOP-10199) Precommit Admin build is not
running because no previous successful build is available
To: common-...@hadoop.apache.org



 [
https://issues.apache.org/jira/browse/HADOOP-10199?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel]

Todd Lipcon resolved HADOOP-10199.
--

  Resolution: Fixed
Hadoop Flags: Reviewed

 Precommit Admin build is not running because no previous successful build
is available

--

 Key: HADOOP-10199
 URL: https://issues.apache.org/jira/browse/HADOOP-10199
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brock Noland
Assignee: Brock Noland
Priority: Blocker
 Attachments: HADOOP-10199.patch


 It seems at some point the builds started failing for an unknown reason
and eventually the last successful was rolled off. At that point the
precommit builds started failing because they pull an artifact from the
last successful build.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)


Fwd: CFP NoSQL FOSDEM - Hadoop Community

2013-11-04 Thread Ted Yu
-- Forwarded message --
From: laura.czajkow...@gmail.com laura.czajkow...@gmail.com
Date: Mon, Nov 4, 2013 at 1:29 PM
Subject: CFP NoSQL FOSDEM - Hadoop Community
To: u...@hadoop.apache.org


Hi all,

We're pleased to announce the call for participation for the NoSQL devroom,
returning after a great last year.

NoSQL is an encompassing term that covers a multitude of different and
interesting database solutions.  As the interest in NoSQL continues to
grow, we are looking for talks on any open source NoSQL database or related
topic.

Speaking slots are 25 or 50 minutes. To propose a talk please go to:
http://bit.ly/nosql-devroom-2013
https://penta.fosdem.org/submission/FOSDEM14

As FOSDEM is a friendly open source conference, please refrain from
slagging matches about each other’s projects. Keep it respectful, keep it
non-commercial, and remember that all decks are subject to approval.

http://bit.ly/nosql-devroom-2013

If you do not want to give a talk yourself but have ideas for NoSQL topics,
send them to the mailing list at nosql-devr...@lists.fosdem.org. Know
someone who might be interested in the devroom? Please forward them this
email on our behalf. Want to help out but don’t know how? Contact us!

The devroom is scheduled for Sunday, February 2nd and has approx 80 seats. The
call for proposals is open until Dec 13th and speakers will be notified by
December 20th. The final schedule will then be announced by January 10th.

Any changes will be announced on the mailing list:
https://lists.fosdem.org/listinfo/nosql-devroom



*Original announcement went on:
http://www.lczajkowski.com/2013/10/10/cfp-for-nosql-devroom-at-fosdem/*


Laura


Fwd: FW: Coverity Scan (MAPREDUCE-5032)

2013-08-27 Thread Ted Yu
FYI

-- Forwarded message --
From: Jon Jarboe jjar...@coverity.com
Date: Mon, Aug 26, 2013 at 8:21 AM
Subject: FW: Coverity Scan (MAPREDUCE-5032)
To: common-...@hadoop.apache.org common-...@hadoop.apache.org


I've been working with DataStax on their use of Coverity with Cassandra,
and decided to give the Hadoop 1.2.1 source tarball a run through our
analyzer.  I found some interesting issues, and noticed that some of them
are integer overflow defects that align with the open MAPREDUCE-5032 issue.
 Other issues range from concurrency problems to cross-site scripting to
resource leaks, but I haven't tried to match those up to existing JIRA
issues.

Email is not the best forum for investigating these issues, so I'd be happy
to post them on Coverity's Scan server for your review.  If you're not
familiar with Coverity Scan, it is our free cloud-based service for OSS
projects (https://scan.coverity.com).  I realize that false positives can
be a concern, and I'd like to point out that Coverity is specifically
designed to minimize false positives.

If somebody is interested in looking through the results, please let me
know.  To get an initial analysis into Scan, please let me know whether the
1.2.1 source is a good place to start.  I can analyze a different
rev/branch if that's more interesting.  If you see value, we can always set
up additional branches.

Best regards, and thanks for your time.

Jon Jarboe | Senior Technical Manager
Coverity | 185 Berry Street | Suite 6500, Lobby 3 | San Francisco, CA  94107
O: +1 214-531-3496 | M: +1 214-531-3496 | E: jjar...@coverity.com
Web: www.coverity.com | Twitter: @Coverity

The Leader in Development Testing


Fwd: [jira] [Commented] (HBASE-8943) Split Thrift2's ThriftServer into separate classes for easier testing and modularization

2013-07-16 Thread Lars George
Did this get committed with the wrong JIRA number?

Begin forwarded message:

 From: Hudson (JIRA) j...@apache.org
 Subject: [jira] [Commented] (HBASE-8943) Split Thrift2's ThriftServer into 
 separate classes for easier testing and modularization
 Date: July 16, 2013 2:44:55 AM GMT+02:00
 To: iss...@hbase.apache.org
 
 
[ 
 https://issues.apache.org/jira/browse/HBASE-8943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13709294#comment-13709294
  ] 
 
 Hudson commented on HBASE-8943:
 ---
 
 FAILURE: Integrated in hbase-0.95-on-hadoop2 #180 (See 
 [https://builds.apache.org/job/hbase-0.95-on-hadoop2/180/])
 HBASE-8943 TestRegionMergeTransactionOnCluster#testWholesomeMerge may fail 
 due to race in opening region (stack: rev 1503471)
 * 
 /hbase/branches/0.95/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/DispatchMergingRegionHandler.java
 
 
 Split Thrift2's ThriftServer into separate classes for easier testing and 
 modularization
 
 
Key: HBASE-8943
URL: https://issues.apache.org/jira/browse/HBASE-8943
Project: HBase
 Issue Type: Sub-task
 Components: Thrift
   Reporter: Lars George
   Assignee: Lars George
 Labels: thrift2
 
 Currently the ThriftServer class in Thrift 2 sets up and starts the actual 
 server. Better follow a similar pattern to Thrift 1 where there is some 
 factory setting up the server, and a separate start section. That way it is 
 easier to test if the setup of the server is picking up everything it needs.
 
 --
 This message is automatically generated by JIRA.
 If you think it was sent incorrectly, please contact your JIRA administrators
 For more information on JIRA, see: http://www.atlassian.com/software/jira



Questions regarding Fwd: [jira] [Commented] (HBASE-8819) Port HBASE-5428 to Thrift 2

2013-07-08 Thread Lars George
Hi!

I am wondering how to figure out what two extra warnings the patch added. I am
rusty on Jenkins; I tried to click on the links below, i.e. the testReport and
hbase-server ones, but cannot find the two extra warnings it refers to. How
do you go about fixing this QA issue?

Thanks,
Lars


Begin forwarded message:

 From: Hadoop QA (JIRA) j...@apache.org
 Subject: [jira] [Commented] (HBASE-8819) Port HBASE-5428 to Thrift 2
 Date: July 8, 2013 6:55:52 PM GMT+02:00
 To: iss...@hbase.apache.org
 
 
[ 
 https://issues.apache.org/jira/browse/HBASE-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702118#comment-13702118
  ] 
 
 Hadoop QA commented on HBASE-8819:
 --
 
 {color:red}-1 overall{color}.  Here are the results of testing the latest 
 attachment 
  http://issues.apache.org/jira/secure/attachment/12591221/HBASE-8819.patch
  against trunk revision .
 
{color:green}+1 @author{color}.  The patch does not contain any @author 
 tags.
 
{color:green}+1 tests included{color}.  The patch appears to include 3 new 
 or modified tests.
 
{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
 1.0 profile.
 
{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
 2.0 profile.
 
{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
 2 warning messages.
 
{color:green}+1 javac{color}.  The applied patch does not increase the 
 total number of javac compiler warnings.
 
{color:green}+1 findbugs{color}.  The patch does not introduce any new 
 Findbugs (version 1.3.9) warnings.
 
{color:green}+1 release audit{color}.  The applied patch does not increase 
 the total number of release audit warnings.
 
{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
 longer than 100
 
  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.
 
{color:green}+1 core tests{color}.  The patch passed unit tests in .
 
 Test results: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//testReport/
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
 Findbugs warnings: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
 Console output: 
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//console
 
 This message is automatically generated.
 
 Port HBASE-5428 to Thrift 2
 ---
 
Key: HBASE-8819
URL: https://issues.apache.org/jira/browse/HBASE-8819
Project: HBase
 Issue Type: Sub-task
 Components: Thrift
   Reporter: Lars George
   Assignee: Lars George
 Labels: thrift2
Fix For: 0.98.0, 0.95.2
 
Attachments: HBASE-8819.patch
 
 
 HBASE-5428 adds loading filters at start up. Needs to be added in Thrift 2 
 as well.
 
 --
 This message is automatically generated by JIRA.
 If you think it was sent incorrectly, please contact your JIRA administrators
 For more information on JIRA, see: http://www.atlassian.com/software/jira



Re: Questions regarding Fwd: [jira] [Commented] (HBASE-8819) Port HBASE-5428 to Thrift 2

2013-07-08 Thread Ted Yu
I submitted a patch to HBASE-8864 which should fix the 2 javadoc warnings.

On Mon, Jul 8, 2013 at 10:43 AM, Lars George lars.geo...@gmail.com wrote:

 Hi!

 I am wondering how to figure what two extra warning the patch added. I am
 rusty on Jenkins, I tried to click on the links below, i.e. the testReport
 and hbase-server ones, but  cannot find those extra two warning it refers
 to. How do you go about fixing this QA issue?

 Thanks,
 Lars


 Begin forwarded message:

  From: Hadoop QA (JIRA) j...@apache.org
  Subject: [jira] [Commented] (HBASE-8819) Port HBASE-5428 to Thrift 2
  Date: July 8, 2013 6:55:52 PM GMT+02:00
  To: iss...@hbase.apache.org
 
 
 [
 https://issues.apache.org/jira/browse/HBASE-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702118#comment-13702118]
 
  Hadoop QA commented on HBASE-8819:
  --
 
  {color:red}-1 overall{color}.  Here are the results of testing the
 latest attachment
 
 http://issues.apache.org/jira/secure/attachment/12591221/HBASE-8819.patch
   against trunk revision .
 
 {color:green}+1 @author{color}.  The patch does not contain any
 @author tags.
 
 {color:green}+1 tests included{color}.  The patch appears to include
 3 new or modified tests.
 
 {color:green}+1 hadoop1.0{color}.  The patch compiles against the
 hadoop 1.0 profile.
 
 {color:green}+1 hadoop2.0{color}.  The patch compiles against the
 hadoop 2.0 profile.
 
 {color:red}-1 javadoc{color}.  The javadoc tool appears to have
 generated 2 warning messages.
 
 {color:green}+1 javac{color}.  The applied patch does not increase
 the total number of javac compiler warnings.
 
 {color:green}+1 findbugs{color}.  The patch does not introduce any
 new Findbugs (version 1.3.9) warnings.
 
 {color:green}+1 release audit{color}.  The applied patch does not
 increase the total number of release audit warnings.
 
 {color:green}+1 lineLengths{color}.  The patch does not introduce
 lines longer than 100
 
   {color:green}+1 site{color}.  The mvn site goal succeeds with this
 patch.
 
 {color:green}+1 core tests{color}.  The patch passed unit tests in .
 
  Test results:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//testReport/
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
  Console output:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//console
 
  This message is automatically generated.
 
  Port HBASE-5428 to Thrift 2
  ---
 
 Key: HBASE-8819
 URL: https://issues.apache.org/jira/browse/HBASE-8819
 Project: HBase
  Issue Type: Sub-task
  Components: Thrift
Reporter: Lars George
Assignee: Lars George
  Labels: thrift2
 Fix For: 0.98.0, 0.95.2
 
 Attachments: HBASE-8819.patch
 
 
  HBASE-5428 adds loading filters at start up. Needs to be added in
 Thrift 2 as well.
 
  --
  This message is automatically generated by JIRA.
  If you think it was sent incorrectly, please contact your JIRA
 administrators
  For more information on JIRA, see:
 http://www.atlassian.com/software/jira




Re: Questions regarding Fwd: [jira] [Commented] (HBASE-8819) Port HBASE-5428 to Thrift 2

2013-07-08 Thread Sergey Shelukhin
https://builds.apache.org/job/PreCommit-HBASE-Build/6239/artifact/trunk/patchprocess/patchJavadocWarnings.txt

search for WARN, ignore the Unsafe ones.

1 warning
[WARNING] Javadoc Warnings
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Put.java:73:
warning - @param argument ts is not a parameter name.


1 warning
[WARNING] Javadoc Warnings
[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/replication/regionserver/ReplicationSink.java:123:
warning - Tag @link: can't find
replicateEntries(org.apache.hadoop.hbase.regionserver.wal.HLog.Entry[])
in org.apache.hadoop.hbase.replication.regionserver.ReplicationSink
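
Both warnings above are ordinary Javadoc tag mismatches. A hypothetical,
self-contained illustration of the two kinds of warning follows (this is a
made-up class, not the actual Put.java or ReplicationSink code):

/**
 * Hypothetical illustration of the two javadoc warning types above.
 */
public class JavadocWarningExamples {

  /**
   * Writing "@param ts" here while the parameter is actually named
   * "timestamp" would produce:
   *   warning - @param argument ts is not a parameter name.
   * The tag below matches the real name, so this version is clean.
   *
   * @param timestamp cell timestamp to apply
   */
  public void setTimestamp(long timestamp) {
    // no-op; only the Javadoc matters for this example
  }

  /**
   * A {@link #setTimestamp(long)} reference must point at a real member with
   * a matching signature; referencing a renamed or removed method produces
   * the "Tag @link: can't find ..." warning seen in ReplicationSink.
   */
  public void touch() {
    setTimestamp(System.currentTimeMillis());
  }
}

Fixing the @param name (or the dangling {@link} target) is all the -1 javadoc
check is asking for.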


On Mon, Jul 8, 2013 at 10:43 AM, Lars George lars.geo...@gmail.com wrote:

 Hi!

 I am wondering how to figure what two extra warning the patch added. I am
 rusty on Jenkins, I tried to click on the links below, i.e. the testReport
 and hbase-server ones, but  cannot find those extra two warning it refers
 to. How do you go about fixing this QA issue?

 Thanks,
 Lars


 Begin forwarded message:

  From: Hadoop QA (JIRA) j...@apache.org
  Subject: [jira] [Commented] (HBASE-8819) Port HBASE-5428 to Thrift 2
  Date: July 8, 2013 6:55:52 PM GMT+02:00
  To: iss...@hbase.apache.org
 
 
 [
 https://issues.apache.org/jira/browse/HBASE-8819?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13702118#comment-13702118]
 
  Hadoop QA commented on HBASE-8819:
  --
 
  {color:red}-1 overall{color}.  Here are the results of testing the
 latest attachment
 
 http://issues.apache.org/jira/secure/attachment/12591221/HBASE-8819.patch
   against trunk revision .
 
 {color:green}+1 @author{color}.  The patch does not contain any
 @author tags.
 
 {color:green}+1 tests included{color}.  The patch appears to include
 3 new or modified tests.
 
 {color:green}+1 hadoop1.0{color}.  The patch compiles against the
 hadoop 1.0 profile.
 
 {color:green}+1 hadoop2.0{color}.  The patch compiles against the
 hadoop 2.0 profile.
 
 {color:red}-1 javadoc{color}.  The javadoc tool appears to have
 generated 2 warning messages.
 
 {color:green}+1 javac{color}.  The applied patch does not increase
 the total number of javac compiler warnings.
 
 {color:green}+1 findbugs{color}.  The patch does not introduce any
 new Findbugs (version 1.3.9) warnings.
 
 {color:green}+1 release audit{color}.  The applied patch does not
 increase the total number of release audit warnings.
 
 {color:green}+1 lineLengths{color}.  The patch does not introduce
 lines longer than 100
 
   {color:green}+1 site{color}.  The mvn site goal succeeds with this
 patch.
 
 {color:green}+1 core tests{color}.  The patch passed unit tests in .
 
  Test results:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//testReport/
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
  Findbugs warnings:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
  Console output:
 https://builds.apache.org/job/PreCommit-HBASE-Build/6239//console
 
  This message is automatically generated.
 
  Port HBASE-5428 to Thrift 2
  ---
 
 Key: HBASE-8819
 URL: https://issues.apache.org/jira/browse/HBASE-8819
 Project: HBase
  Issue Type: Sub-task
  Components: Thrift
Reporter: Lars George
Assignee: Lars George
  Labels: thrift2
 Fix For: 0.98.0, 0.95.2
 
 Attachments: HBASE-8819.patch
 
 
  HBASE-5428 adds loading filters at start up. Needs to be added in
 Thrift 2 as well.
 
  --
  This message is automatically generated by JIRA.
  If you think it was sent incorrectly, please contact your JIRA
 

Fwd: DesignLounge @ HadoopSummit

2013-06-12 Thread Devaraj Das
FYI

Begin forwarded message:

*From:* Eric Baldeschwieler eri...@hortonworks.com
*Date:* June 11, 2013, 10:46:25 AM PDT
*To:* common-...@hadoop.apache.org common-...@hadoop.apache.org
*Subject:* *DesignLounge @ HadoopSummit*
*Reply-To:* common-...@hadoop.apache.org

Hi Folks,

We thought we'd try something new at Hadoop Summit this year to build upon
two pieces of feedback I've heard a lot this year:

1. Apache project developers would like to take advantage of the Hadoop
Summit to meet with their peers to work on specific technical details of
their projects.
2. They want to do this during the summit, not before it starts or at
night. I've been told BoFs and other such traditional formats have not
historically worked for them, because they end up being about educating
users about their projects, not actually working with their peers on how to
make their projects better.
So we are creating a space in the summit - marked in the event guide as
DesignLounge - concurrent with the presentation tracks where Apache Project
contributors can meet with their peers to plan the future of their project
or work through various technical issues near and dear to their hearts.

We're going to provide white boards and message boards and let folks take
it from there in an unconference style.  We think there will be room for
about 4 groups to meet at once.  Interested? Let me know what you think.
 Send me any ideas for how we can make this work best for you.

The room will be 231A and B at the Hadoop Summit and will run from 10:30am
to 5:00pm on Day 1 (26th June), and we can also run from 10:30am to 5:00pm
on Day 2 (27th June) if we have a lot of topics that folk want to cover.

Some of the early topics some folks told me they hope can be covered:

- Hadoop Core security proposals.  There are a couple of detailed proposals
  circulating.  Let's get together and hash out the differences.
- Accumulo 1.6 features
- The Hive vectorization project.  Discussion of the design and how to phase
  it in incrementally with minimum complexity.
- Finishing Yarn - what things need to get done NOW to make Yarn more
  effective
If you are a project lead for one of the Apache projects, look at the
schedule below and suggest a few slots when you think it would be best for
your project to meet.  I'll try to work out a schedule where no more than 2
projects are using the lounge at once.

Day 1, 26th June: 10:30am - 12:30pm, 1:45pm - 3:30pm, 3:45pm - 5:00pm

Day 2, 27th June: 10:30am - 12:30pm, 1:45pm - 3:30pm, 3:45pm - 5:00pm

It will be up to you, the hadoop contributors, from there.

Look forward to seeing you all at the summit,

E14

PS Please forward to the other -dev lists.  This event is for folks on the
-dev lists.


Fwd: How to collect the real-time transaction request logs from HBase Master/Region Servers?

2013-06-05 Thread Joarder KAMAL
Many apologies for forwarding this email again.

Could you let me know how I can pull/export the real-time raw
logs (number of requests and their details in particular regions) which
appear in the HBase Web UI as shown below? I looked at pp. 277-283 of
Lars George's book and other sources but didn't get a clue :(

Any idea??

I want to perform real-time data stream mining with those logs.


Regards,
Joarder Kamal


-- Forwarded message --
From: Joarder KAMAL joard...@gmail.com
Date: 4 June 2013 16:09
Subject: How to collect the real-time transaction request logs from HBase
Master/Region Servers?
To: dev@hbase.apache.org


Dear All,

I am a newbie in HBase/Hadoop and recently have a small-scale setup in a
research cloud:
--
1 Master Server (Also Hadoop Name Node)
3 Region Server (Also Hadoop Data Node)
1 Ganglia Monitoring Server
1 YCSB Workload Generation Server
--
HBase Version: 0.94.7, r1471806
Hadoop Version: 1.0.4, r1393290
Ganglia Version: gmond/gmetad - 3.6.0, gweb - 3.5.8
YCSB Version: 0.1.4
--

I have only one table in HBase - 'usertable' with a single column family
'cf1' holding 1,000,000 key-value records. The row keys are in
monotonically increasing order and currently I have 6 regions distributed
in the 3 region servers each holding 2 of the regions.
*Objective:* create region hotspots for some research experiments

*Observation:*
After running a workload consisting of a total of 10,000,000 operations (50%
read, 50% write), I've observed the statistics below in the Web UI of the
master server, which suggest potential hotspots in the 3rd (not sure why!)
and 6th regions (possibly they were receiving a large number of write
requests).

Table Regions (Name | Region Server | Start Key | End Key | Requests):

usertable,,1369584948241.3061b90ff519c1bce5b3d867690a2b4a. | hdb1-02:60030 | | user2035146605813492656 | 127946
usertable,user2035146605813492656,1369584948241.00f8a51bab6d98ebd7c4db582579c3e7. | hdb1-03:60030 | user2035146605813492656 | user30679275375621809 | 126700
usertable,user30679275375621809,1369584813037.d704a50802ec39982884e394d4ef05b7. | hdb1-04:60030 | user30679275375621809 | user5136356049533495298 | *284828*
usertable,user5136356049533495298,1369584928780.999b987d646462e21b8916a737619b39. | hdb1-02:60030 | user5136356049533495298 | user617761656465008158 | 133108
usertable,user617761656465008158,1369584928780.9cfe288f48f987de7f93b800dcd4c964. | hdb1-04:60030 | user617761656465008158 | user7218407885253116621 | 119008
usertable,user7218407885253116621,1369584832152.e3a9c4d35c91f06c18ed346886ff3306. | hdb1-03:60030 | user7218407885253116621 | | *363234*

*Questions:*

   1. Can the HBase developer community guide me on how to collect the *raw
   logs* (directly from the master/region servers) for the above table,
   which I've retrieved from the master server?
   2. And how is the master server getting these logs from the region
   servers? As far as I understand the architecture, the client communicates
   directly with the region servers to read/write the data, bypassing the
   master server (unless it is the first time or the region server is not
   responding).
   3. How frequently does the master collect these logs? Is it real-time
   (within a 1 sec interval)?
   4. Which HBase metrics will be most helpful for noticing region hotspots
   in Ganglia?


I want to know which transaction request (read/write) is going to which
region server from the raw log dumps, like

No:12345  Type:Write  Query  Region06
and so on ...
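
One way to get close to this without scraping the web UI is to poll the same
per-region counters the master aggregates from region-server heartbeats, via
the client API. A rough sketch against the 0.94-era API is below; the
RegionLoad accessor names (getReadRequestsCount/getWriteRequestsCount) are an
assumption and may differ across versions, and these are cumulative counters
refreshed roughly every few seconds by the heartbeat, not a per-request log:

import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.ClusterStatus;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HServerLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: dump per-region request counters from the master's cluster status.
// Accessor names are assumptions for HBase 0.94 and may need adjusting.
public class RegionRequestDump {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      ClusterStatus status = admin.getClusterStatus();
      for (ServerName server : status.getServers()) {
        HServerLoad load = status.getLoad(server);
        for (Map.Entry<byte[], HServerLoad.RegionLoad> e :
            load.getRegionsLoad().entrySet()) {
          HServerLoad.RegionLoad rl = e.getValue();
          System.out.println(server.getHostname()
              + " region=" + Bytes.toStringBinary(e.getKey())
              + " read=" + rl.getReadRequestsCount()
              + " write=" + rl.getWriteRequestsCount());
        }
      }
    } finally {
      admin.close();
    }
  }
}

Per-request detail (which row, which operation) is not in these counters; that
would require RPC-level logging on the region servers themselves.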


Many thanks again...


Regards,
Joarder Kamal


Fwd: Questions about versions and timestamp

2013-03-20 Thread Benyi Wang
Hi,

Please forgive me if my questions have already been asked and answered many
times, because I could not google any of them.

If I do the following commands in hbase shell,

hbase(main):048:0> create "test_ts_ver", "data"
0 row(s) in 1.0550 seconds

hbase(main):049:0> describe "test_ts_ver"
DESCRIPTION                                          ENABLED
 {NAME => 'test_ts_ver', FAMILIES => [{NAME => 'data true
 ', BLOOMFILTER => 'NONE', REPLICATION_SCOPE => '0',
 VERSIONS => '3', COMPRESSION => 'NONE', MIN_VERSIO
 NS => '0', TTL => '2147483647', BLOCKSIZE => '65536
 ', IN_MEMORY => 'false', BLOCKCACHE => 'true'}]}
1 row(s) in 0.0940 seconds

hbase(main):052:0> put "test_ts_ver", "row_1", "data:name", "benyi_w", 100
0 row(s) in 0.0040 seconds

hbase(main):053:0> put "test_ts_ver", "row_1", "data:name", "benyi_1", 110
0 row(s) in 0.0050 seconds

hbase(main):054:0> put "test_ts_ver", "row_1", "data:name", "benyi_2", 120
0 row(s) in 0.0040 seconds

hbase(main):055:0> put "test_ts_ver", "row_1", "data:name", "benyi_3", 130
0 row(s) in 0.0040 seconds

hbase(main):056:0> put "test_ts_ver", "row_1", "data:name", "benyi_4", 140
0 row(s) in 0.0040 seconds

hbase(main):057:0> get "test_ts_ver", "row_1", { TIMERANGE => [0, 200] }
COLUMN                     CELL
 data:name                 timestamp=140, value=benyi_4
1 row(s) in 0.0140 seconds

hbase(main):058:0> get "test_ts_ver", "row_1", { TIMERANGE => [0, 200],
VERSIONS => 5 }
COLUMN                     CELL
 data:name                 timestamp=140, value=benyi_4
 data:name                 timestamp=130, value=benyi_3
 data:name                 timestamp=120, value=benyi_2
3 row(s) in 0.0050 seconds

So far so good. But if I try to get timestamp=100 or 110, I still can get
them

hbase(main):059:0> get "test_ts_ver", "row_1", { TIMESTAMP => 100 }
COLUMN                     CELL
 data:name                 timestamp=100, value=benyi_w
1 row(s) in 0.0120 seconds

hbase(main):060:0> get "test_ts_ver", "row_1", { TIMESTAMP => 110 }
COLUMN                     CELL
 data:name                 timestamp=110, value=benyi_1
1 row(s) in 0.0060 seconds

My questions:

1. When will all those old versions be removed?
2. Will compact or major_compact remove those old versions?
3. Is there a section/chapter talking about this behavior in the HBase
Reference Guide?
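
A quick way to see the behavior for yourself: excess versions generally linger
in the memstore and HFiles and are only physically dropped once a compaction
rewrites the store, so forcing a major compaction and re-running the timestamp
get shows the difference. A minimal sketch against the 0.94 client API,
reusing the table and column from the shell session above (the 30-second sleep
is a crude stand-in for waiting on the asynchronous compaction):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: show that versions beyond VERSIONS=3 survive until a (major)
// compaction rewrites the store files.
public class OldVersionCheck {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTable table = new HTable(conf, "test_ts_ver");
    HBaseAdmin admin = new HBaseAdmin(conf);
    try {
      Get get = new Get(Bytes.toBytes("row_1"));
      get.setTimeStamp(100L);  // the 4th-newest version from the shell session
      System.out.println("before compaction: " + table.get(get));

      admin.flush("test_ts_ver");         // persist the memstore to an HFile
      admin.majorCompact("test_ts_ver");  // asynchronous; give it time
      Thread.sleep(30000);

      Result after = table.get(get);
      // Expect an empty Result once the compaction has rewritten the store.
      System.out.println("after compaction, empty? " + after.isEmpty());
    } finally {
      table.close();
      admin.close();
    }
  }
}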

Thanks.

Ben


Fwd: unable to generate assembly using trunk

2012-12-04 Thread Andrew Purtell
assembly:assembly worked for me.

-- Forwarded message --
From: *Ted Yu*
Date: Wednesday, December 5, 2012
Subject: unable to generate assembly using trunk
To: dev@hbase.apache.org


Hi,
Using this command: mvn package assembly:single -DskipTests
I got:

INFO] HBase . FAILURE [20.459s]
[INFO] HBase - Common  SKIPPED
[INFO] HBase - Protocol .. SKIPPED
[INFO] HBase - Client  SKIPPED
[INFO] HBase - Hadoop Compatibility .. SKIPPED
[INFO] HBase - Hadoop One Compatibility .. SKIPPED
[INFO] HBase - Server  SKIPPED
[INFO] HBase - Hadoop Two Compatibility .. SKIPPED
[INFO] HBase - Integration Tests . SKIPPED
[INFO] HBase - Examples .. SKIPPED
[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 21.037s
[INFO] Finished at: Tue Dec 04 16:23:04 PST 2012
[INFO] Final Memory: 19M/81M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-assembly-plugin:2.3:single (default-cli) on
project hbase: Failed to create assembly: Unable to resolve dependencies
for assembly 'all': Failed to resolve dependencies for assembly: Missing:
[ERROR] --
[ERROR] 1) org.apache.hbase:hbase-client:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=org.apache.hbase
-DartifactId=hbase-client -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the
file there:
[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hbase
-DartifactId=hbase-client -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hbase:hbase:pom:0.95-SNAPSHOT
[ERROR] 2) org.apache.hbase:hbase-client:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] 2) org.apache.hbase:hbase-hadoop1-compat:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=org.apache.hbase
-DartifactId=hbase-hadoop1-compat -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the
file there:
[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hbase
-DartifactId=hbase-hadoop1-compat -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hbase:hbase:pom:0.95-SNAPSHOT
[ERROR] 2) org.apache.hbase:hbase-hadoop1-compat:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] 3) org.apache.hbase:hbase-protocol:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=org.apache.hbase
-DartifactId=hbase-protocol -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the
file there:
[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hbase
-DartifactId=hbase-protocol -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hbase:hbase:pom:0.95-SNAPSHOT
[ERROR] 2) org.apache.hbase:hbase-protocol:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] 4) org.apache.hbase:hbase-hadoop-compat:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] Try downloading the file manually from the project website.
[ERROR]
[ERROR] Then, install it using the command:
[ERROR] mvn install:install-file -DgroupId=org.apache.hbase
-DartifactId=hbase-hadoop-compat -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file
[ERROR]
[ERROR] Alternatively, if you host your own repository you can deploy the
file there:
[ERROR] mvn deploy:deploy-file -DgroupId=org.apache.hbase
-DartifactId=hbase-hadoop-compat -Dversion=0.95-SNAPSHOT -Dpackaging=jar
-Dfile=/path/to/file -Durl=[url] -DrepositoryId=[id]
[ERROR]
[ERROR] Path to dependency:
[ERROR] 1) org.apache.hbase:hbase:pom:0.95-SNAPSHOT
[ERROR] 2) org.apache.hbase:hbase-hadoop-compat:jar:0.95-SNAPSHOT
[ERROR]
[ERROR] --
[ERROR] 4 required artifacts are missing.
[ERROR]
[ERROR] for artifact:
[ERROR] org.apache.hbase:hbase:pom:0.95-SNAPSHOT
[ERROR]

FYI



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet Hein
(via Tom White)


Fwd: sty wrong with Get operation of hbase client

2012-08-26 Thread lin weijian
I think this is a bug in Result.getValue(byte[], byte[]). I debugged and
traced the code and found that the Result is right, but getValue is wrong.

Result r = htable.get(get);

byte[] res = r.getValue(Bytes.toBytes(f), Bytes.toBytes(ts));

In function getColumnLatest() called by getValue, the value kvs is like this:

[com.sohu.www:http//f:fi/1345888370605/Put/vlen=4, 
com.sohu.www:http//f:ts/1345888370605/Put/vlen=8, 
com.sohu.www:http//ft:1st/1345888370605/Put/vlen=1, 
com.sohu.www:http//mk:_injmrk_/1345888370605/Put/vlen=1, 
com.sohu.www:http//s:s/1345888370605/Put/vlen=4]

but getColumnLatest(f, ts) is null;

Does the binarySearch or the KeyValue.COMPARATOR have a bug?
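
To narrow down whether the lookup or the comparator is at fault, it may help
to dump the raw KeyValues from the Result and compare them byte-for-byte with
the family/qualifier being searched for. A small sketch against the 0.92
client API (the class and method names below are just for illustration):

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

// Sketch: print every KeyValue in the Result so the stored family/qualifier
// bytes can be compared with what getValue() is searching for.
public class DumpResult {
  static void dump(Result r, byte[] family, byte[] qualifier) {
    for (KeyValue kv : r.raw()) {
      System.out.println(Bytes.toStringBinary(kv.getFamily()) + ":"
          + Bytes.toStringBinary(kv.getQualifier())
          + " famMatch=" + Bytes.equals(kv.getFamily(), family)
          + " qualMatch=" + Bytes.equals(kv.getQualifier(), qualifier));
    }
    KeyValue latest = r.getColumnLatest(family, qualifier);
    System.out.println("getColumnLatest -> " + latest);
  }
}

If the family and qualifier bytes match but getColumnLatest still returns
null, the binary-search/comparator suspicion looks justified; if they do not
match, a stray character or encoding difference in the qualifier is the more
likely culprit.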

Below is the forwarded message:

 From: lin weijian linweiji...@gmail.com
 Subject: sty wrong with Get operation of hbase client
 Date: August 25, 2012, 10:03:26 PM GMT+0800
 To: dev@hbase.apache.org
 
 Hi, 
  I use hbase client 0.92.1 to get the row, but when the Get adds all the
 columns and qualifiers, one field (f:ts) always returns nothing. If the Get
 does not add column mk or ft, it works right. Is it a bug?
 
 
 The schema as follow:
 
  <table name="webpage">
    <family name="p" maxVersions="1"/> <!-- This can also have params
    like compression, bloom filters -->
    <family name="f" maxVersions="1"/>
    <family name="s" maxVersions="1"/>
    <family name="il" maxVersions="1"/>
    <family name="ol" maxVersions="1"/>
    <family name="h" maxVersions="1"/>
    <family name="mtdt" maxVersions="1"/>
    <family name="mk" maxVersions="1"/>
    <family name="ft" maxVersions="1"/>
  </table>
  <class table="webpage" keyClass="java.lang.String"
         name="org.apache.nutch.storage.WebPage">

  <!-- fetch fields -->
  <field name="baseUrl" family="f" qualifier="bas"/>
  <field name="status" family="f" qualifier="st"/>
  <field name="prevFetchTime" family="f" qualifier="pts"/>
  <field name="fetchTime" family="f" qualifier="ts"/>
  <field name="fetchInterval" family="f" qualifier="fi"/>
  <field name="retriesSinceFetch" family="f" qualifier="rsf"/>
  <field name="reprUrl" family="f" qualifier="rpr"/>
  <field name="content" family="f" qualifier="cnt"/>
  <field name="contentType" family="f" qualifier="typ"/>
  <field name="protocolStatus" family="f" qualifier="prot"/>
  <field name="modifiedTime" family="f" qualifier="mod"/>
  <field name="pageType" family="f" qualifier="ptyp"/>
  <field name="level" family="f" qualifier="l"/>
  <field name="lastFetchInterval" family="f" qualifier="lfi"/>
  <field name="newsTime" family="f" qualifier="nts"/>
  <field name="findTime" family="f" qualifier="fts"/>

  <field name="title" family="p" qualifier="t"/>
  <field name="text" family="p" qualifier="c"/>
  <field name="parseStatus" family="p" qualifier="st"/>
  <field name="signature" family="p" qualifier="sig"/>
  <field name="prevSignature" family="p" qualifier="psig"/>

  <!-- score fields -->
  <field name="score" family="s" qualifier="s"/>
  <field name="headers" family="h"/>
  <field name="inlinks" family="il"/>
  <field name="outlinks" family="ol"/>
  <field name="metadata" family="mtdt"/>
  <field name="markers" family="mk"/>

  <field name="features" family="ft"/>
  </class>
 
  {"name": "WebPage",
   "type": "record",
   "namespace": "org.apache.nutch.storage",
   "fields": [
     {"name": "baseUrl", "type": "string"},
     {"name": "status", "type": "int"},
     {"name": "fetchTime", "type": "long"},
     {"name": "prevFetchTime", "type": "long"},
     {"name": "fetchInterval", "type": "int"},
     {"name": "retriesSinceFetch", "type": "int"},
     {"name": "modifiedTime", "type": "long"},
     {"name": "protocolStatus", "type": {
       "name": "ProtocolStatus",
       "type": "record",
       "namespace": "org.apache.nutch.storage",
       "fields": [
         {"name": "code", "type": "int"},
         {"name": "args", "type": {"type": "array", "items": "string"}},
         {"name": "lastModified", "type": "long"}
       ]
     }},
     {"name": "content", "type": "bytes"},
     {"name": "contentType", "type": "string"},
     {"name": "prevSignature", "type": "bytes"},
     {"name": "signature", "type": "bytes"},
     {"name": "title", "type": "string"},
     {"name": "text", "type": "string"},
     {"name": "parseStatus", "type": {
       "name": "ParseStatus",
       "type": "record",
       "namespace": "org.apache.nutch.storage",
       "fields": [
         {"name": "majorCode", "type": "int"},
         {"name": "minorCode", "type": "int"},
         {"name": "args", "type": {"type": "array", "items": "string"}}
       ]
     }},
     {"name": "score", "type": "float"},
     {"name": "reprUrl", "type": "string"},
     {"name": "headers", "type": {"type": "map", "values": "string"}},
     {"name": "outlinks", "type": {"type": "map", "values": "string"}},
     {"name": "inlinks", "type": {"type": "map", "values": "string"}},
     {"name": "markers", "type": {"type": "map", "values": "string"}},
     {"name": "metadata", "type":

Fwd: [jira] [Commented] (HBASE-5728) Methods Missing in HTableInterface

2012-08-10 Thread Jimmy Xiang
Hi Bing,

Are you working on this issue?

Based on comments, at least the following methods should be added to
HTableInterface:

  public HConnection getConnection();

  public byte[][] getStartKeys() throws IOException;
  public byte[][] getEndKeys() throws IOException;
  public Pair<byte[][], byte[][]> getStartEndKeys() throws IOException;

  public void setAutoFlush(boolean autoFlush);
  public void setAutoFlush(boolean autoFlush, boolean clearBufferOnFail);

  public long getWriteBufferSize();
  public void setWriteBufferSize(long writeBufferSize) throws IOException;

Thanks,
Jimmy


-- Forwarded message --
From: Lars Hofhansl (JIRA) j...@apache.org
Date: Tue, Jul 31, 2012 at 12:01 PM
Subject: [jira] [Commented] (HBASE-5728) Methods Missing in HTableInterface
To: iss...@hbase.apache.org



[ 
https://issues.apache.org/jira/browse/HBASE-5728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13426024#comment-13426024
]

Lars Hofhansl commented on HBASE-5728:
--

These:
{code}
public Map<HRegionInfo, HServerAddress> getRegionsInfo() throws IOException;
public HRegionLocation getRegionLocation(String row) throws IOException;
public HRegionLocation getRegionLocation(byte[] row) throws IOException;

public void prewarmRegionCache(Map<HRegionInfo, HServerAddress> regionMap);
public void clearRegionCache();

public long getWriteBufferSize();
public void setWriteBufferSize(long writeBufferSize) throws IOException;
public ArrayList<Put> getWriteBuffer();
{code}


Would leak implementation stuff into the interface.
I think HBASE-4054 specifically mentions that {code}public
Map<HRegionInfo, HServerAddress> getRegionsInfo() throws
IOException;{code} is needed. Hmm...


 Methods Missing in HTableInterface
 --

 Key: HBASE-5728
 URL: https://issues.apache.org/jira/browse/HBASE-5728
 Project: HBase
  Issue Type: Improvement
  Components: client
Reporter: Bing Li

 Dear all,
 I found some methods existed in HTable were not in HTableInterface.
setAutoFlush
setWriteBufferSize
...
 In most cases, I manipulate HBase through HTableInterface from HTablePool. If 
 I need to use the above methods, how to do that?
 I am considering writing my own table pool if no proper ways. Is it fine?
 Thanks so much!
 Best regards,
 Bing

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA
administrators:
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
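
Regarding Bing's question above (how to reach these methods when all you hold
is an HTableInterface from HTablePool): until they land on the interface, one
stop-gap is to unwrap to the concrete HTable. A hedged sketch; it assumes the
object really is an HTable, which a pooled wrapper may not be:

import java.io.IOException;

import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.HTableInterface;

// Sketch of a stop-gap: reach the HTable-only methods from an
// HTableInterface reference. Assumes the underlying implementation is HTable.
public final class TableTuning {
  private TableTuning() {}

  public static void enableClientBuffering(HTableInterface t, long bufferBytes)
      throws IOException {
    if (t instanceof HTable) {
      HTable ht = (HTable) t;
      ht.setAutoFlush(false);             // buffer puts client-side
      ht.setWriteBufferSize(bufferBytes); // e.g. 2 * 1024 * 1024
    } else {
      throw new IOException("Underlying table is not an HTable: " + t.getClass());
    }
  }
}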


Fwd: HBase setting up issue

2012-07-06 Thread Varun kumar
Hi Ted,

My mvn version

$ mvn --version
Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
Maven home: /usr/local/apache-maven-3.0.4
Java version: 1.6.0_26, vendor: Sun Microsystems Inc.
Java home: /usr/lib/jvm/java-6-sun-1.6.0.26/jre
Default locale: en_US, platform encoding: UTF-8
OS name: linux, version: 3.0.0-12-generic, arch: amd64, family: unix



step 2
$ svn checkout http://svn.apache.org/repos/asf/hbase/trunk hbase

step 3
Under the workspace
mvn clean package -DskipTests

Build was successful


step 4

mvn eclipse:eclipse

ERROR] Failed to execute goal on project hbase-server: Could not resolve
dependencies for project org.apache.hbase:hbase-server:jar:0.95-SNAPSHOT:
Failure to find org.apache.hbase:hbase-common:jar:0.95-SNAPSHOT in
http://repository-netty.forge.cloudbees.com/snapshot/ was cached in the
local repository, resolution will not be reattempted until the update
interval of cloudbees netty has elapsed or updates are forced - [Help 1]


I also used the Maven Eclipse plugin and ran the Maven generate-sources
command from Eclipse, and I get the same error.

Could you please help me with this?

At least could you provide a brief outline of the checkout and
compilation process? It would really help me out!
I initially followed the procedure on
http://michaelmorello.blogspot.com/2011/09/hbase-subversion-eclipse-windows.html

I am working under a professor at UTD trying to learn HBase. I have been
installing, uninstalling, and getting stuck at the initial step! :(

Any help would be highly appreciated.



-- 
_
Regards,
Varun





-- 
_
Regards,
Varun


Fwd: porting multi() to zookeeper 3.3

2012-06-28 Thread Andrew Purtell
ZooKeeper dev consensus is: fwiw during the summit meetup we took a
poll and the consensus was
that 3.4 should now be considered stable

So I don't see an issue with depending on 3.4.x. At some point you
have to make progress.

-- Forwarded message --
From: Patrick Hunt ph...@apache.org
Date: Thu, Jun 28, 2012 at 9:18 AM
Subject: Re: porting multi() to zookeeper 3.3
To: d...@zookeeper.apache.org
Cc: Jesse Yates jesse.k.ya...@gmail.com


On Thu, Jun 28, 2012 at 6:54 AM, Ted Yu yuzhih...@gmail.com wrote:
 See Jesse's comment:

 https://issues.apache.org/jira/browse/HBASE-2611?focusedCommentId=13402863page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13402863

Sounds like a reasonable concern. In their shoes I'd probably stick
with established/stable features as well.

fwiw during the summit meetup we took a poll and the consensus was
that 3.4 should now be considered stable. Mahadev and I were planning
to propose the next release (3.4.4) as such. So it shouldn't be long
to wait if you are interested in using some of the new features.

Patrick

 On Thu, Jun 28, 2012 at 12:07 AM, Patrick Hunt ph...@apache.org wrote:

 On Wed, Jun 27, 2012 at 10:11 PM, Ted Yu yuzhih...@gmail.com wrote:
  For Ted's question, developers and ops at the company I mentioned would
 be
  able to give their answer.

 We have a general policy of not adding new features to fix releases.

 What testing have you done with 3.4.3 that concerns you? Are there
 specific outstanding bugs?

 Patrick



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)


Re: Fwd: porting multi() to zookeeper 3.3

2012-06-28 Thread lars hofhansl
I'll repeat my +1 from a while back :)




 From: Andrew Purtell apurt...@apache.org
To: dev@hbase.apache.org 
Sent: Thursday, June 28, 2012 10:48 AM
Subject: Fwd: porting multi() to zookeeper 3.3
 
ZooKeeper dev consensus is: fwiw during the summit meetup we took a
poll and the consensus was
that 3.4 should now be considered stable

So I don't see an issue with depending on 3.4.x. At some point you
have to make progress.

-- Forwarded message --
From: Patrick Hunt ph...@apache.org
Date: Thu, Jun 28, 2012 at 9:18 AM
Subject: Re: porting multi() to zookeeper 3.3
To: d...@zookeeper.apache.org
Cc: Jesse Yates jesse.k.ya...@gmail.com


On Thu, Jun 28, 2012 at 6:54 AM, Ted Yu yuzhih...@gmail.com wrote:
 See Jesse's comment:

 https://issues.apache.org/jira/browse/HBASE-2611?focusedCommentId=13402863page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13402863

Sounds like a reasonable concern. In their shoes I'd probably stick
with established/stable features as well.

fwiw during the summit meetup we took a poll and the consensus was
that 3.4 should now be considered stable. Mahadev and I were planning
to propose the next release (3.4.4) as such. So it shouldn't be long
to wait if you are interested in using some of the new features.

Patrick

 On Thu, Jun 28, 2012 at 12:07 AM, Patrick Hunt ph...@apache.org wrote:

 On Wed, Jun 27, 2012 at 10:11 PM, Ted Yu yuzhih...@gmail.com wrote:
  For Ted's question, developers and ops at the company I mentioned would
 be
  able to give their answer.

 We have a general policy of not adding new features to fix releases.

 What testing have you done with 3.4.3 that concerns you? Are there
 specific outstanding bugs?

 Patrick



-- 
Best regards,

   - Andy

Problems worthy of attack prove their worth by hitting back. - Piet
Hein (via Tom White)

Fwd: Build failed in Jenkins: HBase-TRUNK-security #99

2012-02-03 Thread Ted Yu
Hi,
I tried to build HBase-TRUNK-security several times today and couldn't
reduce the number of failed tests to 2.
I can reproduce the following test failure on MacBook:
Tests in error:
  org.apache.hadoop.hbase.master.TestZKBasedOpenCloseRegion: Shutting down

In the past two days I checked in some patches related to thrift, but maybe
the cause of the new test failures is something else.

Cheers

-- Forwarded message --
From: Apache Jenkins Server jenk...@builds.apache.org
Date: Fri, Feb 3, 2012 at 7:46 AM
Subject: Build failed in Jenkins: HBase-TRUNK-security #99
To: dev@hbase.apache.org

Results :

Tests in error:
 testPreviousOffset[1](org.apache.hadoop.hbase.io.hfile.TestHFileBlock)
 testConcurrentReading[1](org.apache.hadoop.hbase.io.hfile.TestHFileBlock):
unable to create new native thread
 org.apache.hadoop.hbase.TestInfoServers: Shutting down
 org.apache.hadoop.hbase.master.TestZKBasedOpenCloseRegion: Shutting down

Tests run: 874, Failures: 0, Errors: 4, Skipped: 10

[INFO]

[INFO] BUILD FAILURE
[INFO]

[INFO] Total time: 47:10.325s
[INFO] Finished at: Fri Feb 03 15:44:23 UTC 2012
[INFO] Final Memory: 47M/448M
[INFO]

[ERROR] Failed to execute goal
org.apache.maven.plugins:maven-surefire-plugin:2.12-TRUNK-HBASE-2:test
(secondPartTestsExecution) on project hbase: There are test failures.
[ERROR]
[ERROR] Please refer to 
https://builds.apache.org/job/HBase-TRUNK-security/ws/trunk/target/surefire-reports
for the individual test results.
[ERROR] - [Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute
goal org.apache.maven.plugins:maven-surefire-plugin:2.12-TRUNK-HBASE-2:test
(secondPartTestsExecution) on project hbase: There are test failures.

Please refer to 
https://builds.apache.org/job/HBase-TRUNK-security/ws/trunk/target/surefire-reports
for the individual test results.
   at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:213)
   at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at
org.apache.maven.lifecycle.internal.MojoExecutor.executeForkedExecutions(MojoExecutor.java:365)
   at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:199)
   at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at
org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:84)
   at
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:59)
   at
org.apache.maven.lifecycle.internal.LifecycleStarter.singleThreadedBuild(LifecycleStarter.java:183)
   at
org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:161)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:319)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:156)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:537)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:196)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:141)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
   at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
   at java.lang.reflect.Method.invoke(Method.java:597)
   at
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:290)
   at
org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:230)
   at
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:409)
   at
org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:352)
Caused by: org.apache.maven.plugin.MojoFailureException: There are test
failures.

Please refer to 
https://builds.apache.org/job/HBase-TRUNK-security/ws/trunk/target/surefire-reports
for the individual test results.
   at
org.apache.maven.plugin.surefire.SurefireHelper.reportExecution(SurefireHelper.java:87)
   at
org.apache.maven.plugin.surefire.SurefirePlugin.writeSummary(SurefirePlugin.java:651)
   at
org.apache.maven.plugin.surefire.SurefirePlugin.handleSummary(SurefirePlugin.java:625)
   at
org.apache.maven.plugin.surefire.AbstractSurefireMojo.executeAfterPreconditionsChecked(AbstractSurefireMojo.java:137)
   at
org.apache.maven.plugin.surefire.AbstractSurefireMojo.execute(AbstractSurefireMojo.java:98)
   at

Fwd: [VOTE] Release ZooKeeper 3.4.0 (candidate 2)

2011-11-22 Thread Andrew Purtell
Releasing soon. 


Begin forwarded message:

 From: Mahadev Konar maha...@hortonworks.com
 Date: November 22, 2011 7:38:36 PM PST
 To: d...@zookeeper.apache.org
 Subject: Re: [VOTE] Release ZooKeeper 3.4.0 (candidate 2)
 Reply-To: d...@zookeeper.apache.org
 
 With 4 +1's (binding) and one +1 non-binding, the vote passes. I'll do
 the needful on the release now.
 
 Thanks for everyone's hard work on this one.
 
 thanks
 mahadev
 
 On Tue, Nov 22, 2011 at 11:07 AM, Mahadev Konar maha...@hortonworks.com 
 wrote:
 Thanks for taking it for a spin Roman.
 
 I tried out the RC. Also did a backwards compatibility test with 3.3.*
 and made sure the old clients can talk to new servers and also the
 other way around.
 
 +1 from me.
 
 thanks
 mahadev
 
 On Sat, Nov 19, 2011 at 6:59 PM, Roman Shaposhnik r...@apache.org wrote:
 On Tue, Nov 15, 2011 at 6:28 PM, Mahadev Konar maha...@hortonworks.com 
 wrote:
 *** Please download, test and VOTE before the
 *** vote closes 5pm PT on Saturday, Nov22***
 
 +1. I pulled 3.4.0 into Bigtop and rebuilt the entire stack over here:

 http://bigtop01.cloudera.org:8080/view/RCs/job/Bigtop-trunk-rc-zookeeper-3.4.0/
 
 Folks interested in testing *clients* of ZK can install everything in the 
 usual
 manner from packages ([X] stands for centos5, centos6, fedora15, sles11, 
 lucid):
   
 http://bigtop01.cloudera.org:8080/view/RCs/job/Bigtop-trunk-rc-zookeeper-3.4.0/label=[X]/lastSuccessfulBuild/artifact/output/bigtop.repo
 
 I also executed HBase running with this RC:
   http://bigtop01.cloudera.org:8080/view/RCs/job/Bigtop-rc-smoketest/11/
 
 Thanks,
 Roman.
 
 


Fwd: warning from dev@hbase.apache.org

2011-11-15 Thread Akash Ashok
Any idea? Got a warning for no reason :)

Cheers,
Akash A
-- Forwarded message --
From: dev-h...@hbase.apache.org
Date: Wed, Nov 16, 2011 at 3:40 AM
Subject: warning from dev@hbase.apache.org
To: thehellma...@gmail.com


Hi! This is the ezmlm program. I'm managing the
dev@hbase.apache.org mailing list.

I'm working for my owner, who can be reached
at dev-ow...@hbase.apache.org.


Messages to you from the dev mailing list seem to
have been bouncing. I've attached a copy of the first bounce
message I received.

If this message bounces too, I will send you a probe. If the probe bounces,
I will remove your address from the dev mailing list,
without further notice.


I've kept a list of which messages from the dev mailing list have
bounced from your address.

Copies of these messages may be in the archive.
To retrieve a set of messages 123-145 (a maximum of 100 per request),
send a short message to:
  dev-get.123_...@hbase.apache.org

To receive a subject and author list for the last 100 or so messages,
send a short message to:
  dev-in...@hbase.apache.org

Here are the message numbers:

  25130

--- Enclosed is a copy of the bounce message I received.

Return-Path: 
Received: (qmail 53091 invoked for bounce); 4 Nov 2011 03:22:09 -
Date: 4 Nov 2011 03:22:09 -
From: mailer-dae...@apache.org
To: dev-return-251...@hbase.apache.org
Subject: failure notice

Hi. This is the qmail-send program at apache.org.
I'm afraid I wasn't able to deliver your message to the following addresses.
This is a permanent error; I've given up. Sorry it didn't work out.

thehellma...@gmail.com:
74.125.127.27 failed after I sent the message.
Remote host said: 550 5.7.1 Unauthenticated email is not accepted from this
domain. c10si2741359ibj.54


Re: Fwd: warning from dev@hbase.apache.org

2011-11-15 Thread Mayuresh
I got that too! I don't know why!
On Nov 16, 2011 8:16 AM, Akash Ashok thehellma...@gmail.com wrote:

 Any idea? Got a warning for no reason :)

 Cheers,
 Akash A
 -- Forwarded message --
 From: dev-h...@hbase.apache.org
 Date: Wed, Nov 16, 2011 at 3:40 AM
 Subject: warning from dev@hbase.apache.org
 To: thehellma...@gmail.com


 Hi! This is the ezmlm program. I'm managing the
 dev@hbase.apache.org mailing list.

 I'm working for my owner, who can be reached
 at dev-ow...@hbase.apache.org.


 Messages to you from the dev mailing list seem to
 have been bouncing. I've attached a copy of the first bounce
 message I received.

 If this message bounces too, I will send you a probe. If the probe bounces,
 I will remove your address from the dev mailing list,
 without further notice.


 I've kept a list of which messages from the dev mailing list have
 bounced from your address.

 Copies of these messages may be in the archive.
 To retrieve a set of messages 123-145 (a maximum of 100 per request),
 send a short message to:
  dev-get.123_...@hbase.apache.org

 To receive a subject and author list for the last 100 or so messages,
 send a short message to:
  dev-in...@hbase.apache.org

 Here are the message numbers:

  25130

 --- Enclosed is a copy of the bounce message I received.

 Return-Path: 
 Received: (qmail 53091 invoked for bounce); 4 Nov 2011 03:22:09 -
 Date: 4 Nov 2011 03:22:09 -
 From: mailer-dae...@apache.org
 To: dev-return-251...@hbase.apache.org
 Subject: failure notice

 Hi. This is the qmail-send program at apache.org.
 I'm afraid I wasn't able to deliver your message to the following
 addresses.
 This is a permanent error; I've given up. Sorry it didn't work out.

 thehellma...@gmail.com:
 74.125.127.27 failed after I sent the message.
 Remote host said: 550 5.7.1 Unauthenticated email is not accepted from this
 domain. c10si2741359ibj.54



Interesting note on hbase client from asynchbase list -- Fwd: standard hbase client, asynchbase client, netty and direct memory buffers

2011-10-22 Thread Stack
Below is an interesting finding on our hbase client by Jonathan Payne.
He posted it to the asynchbase list. I'm forwarding it here (with his
permission).

St.Ack


-- Forwarded message --
From: Jonathan Payne jpa...@flipboard.com
Date: Fri, Oct 21, 2011 at 6:30 PM
Subject: standard hbase client, asynchbase client, netty and direct
memory buffers
To: AsyncHBase asynchb...@googlegroups.com


I thought I'd take a moment to explain what I discovered trying to
track down serious problems with the regular (non-async) hbase client
and Java's nio implementation.
We were having issues running out of direct memory and here's a stack
trace which says it all:
        java.nio.Buffer.<init>(Buffer.java:172)
        java.nio.ByteBuffer.<init>(ByteBuffer.java:259)
        java.nio.ByteBuffer.<init>(ByteBuffer.java:267)
        java.nio.MappedByteBuffer.<init>(MappedByteBuffer.java:64)
        java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:97)
        java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:288)
        sun.nio.ch.Util.getTemporaryDirectBuffer(Util.java:155)
        sun.nio.ch.IOUtil.write(IOUtil.java:37)
        sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:334)
        
org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:55)
        
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
        
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:146)
        
org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:107)
        java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
        java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
        java.io.DataOutputStream.flush(DataOutputStream.java:106)
        
org.apache.hadoop.hbase.ipc.HBaseClient$Connection.sendParam(HBaseClient.java:518)
        org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:751)
        org.apache.hadoop.hbase.ipc.HBaseRPC$Invoker.invoke(HBaseRPC.java:257)
        $Proxy11.getProtocolVersion(Unknown Source:Unknown line)
Here you can see that an HBaseClient request is flushing a stream
which has a socket channel at the other end of it. HBase has decided
not to use direct memory for its byte buffers which I thought was
smart since they are difficult to manage. Unfortunately, behind the
scenes the JDK is noticing the lack of direct memory buffer in the
socket channel write call, and it is allocating a direct memory buffer
on your behalf! The size of that direct memory buffer depends on the
amount of data you want to write at that time, so if you are writing
1M of data, the JDK will allocate 1M of direct memory.
The same is done on the reading side as well. If you perform channel
I/O with a non-direct memory buffer, the JDK will allocate a direct
memory buffer for you. In the reading case it allocates a size that
equals the amount of room you have in the (non-direct) buffer you
passed in to the read call. WTF!? That can be a very large value.
To make matters worse, the JDK caches these direct memory buffers in
thread local storage and it caches not one, but three of these
arbitrarily sized buffers. (Look in
sun.nio.ch.Util.getTemporaryDirectBuffer and let me know if I have
interpreted the code incorrectly.) So if you have a large number of
threads talking to hbase you can find yourself overflowing with direct
memory buffers that you have not allocated and didn't even know about.
This issue is what caused us to check out the asynchbase client, which
happily didn't have any of these problems. The reason is that
asynchbase uses netty and netty knows the proper way of using direct
memory buffers for I/O. The correct way is to use direct memory
buffers in manageable sizes, 16k to 64k or something like that, for
the purpose of invoking a read or write system call. Netty has
algorithms for calculating the best size given a particular socket
connection, based on the amount of data it seems to be able to read at
once, etc. Netty reads the data from the OS using direct memory and
copies that data into Java byte buffers.
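
For context, a minimal sketch (not HBase or asynchbase code) of the behaviour
described above: handing a large heap ByteBuffer to SocketChannel.write() in
one shot lets the JDK mirror all of it into a hidden, thread-cached direct
buffer, whereas writing bounded slices keeps that hidden allocation small,
which is roughly the discipline netty follows with its own direct buffers.
Class and method names are made up for illustration.

import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.SocketChannel;

public class ChunkedChannelWrite {

  // Whole heap buffer in one call: the JDK may allocate a temporary direct
  // buffer as large as heapBuf.remaining() behind the scenes.
  static void writeAllAtOnce(SocketChannel ch, ByteBuffer heapBuf) throws IOException {
    while (heapBuf.hasRemaining()) {
      ch.write(heapBuf);
    }
  }

  // Bounded slices: the temporary direct buffer never needs to exceed chunkSize.
  static void writeInChunks(SocketChannel ch, ByteBuffer heapBuf, int chunkSize)
      throws IOException {
    ByteBuffer remaining = heapBuf.duplicate();
    while (remaining.hasRemaining()) {
      ByteBuffer chunk = remaining.duplicate();
      chunk.limit(Math.min(remaining.position() + chunkSize, remaining.limit()));
      while (chunk.hasRemaining()) {
        ch.write(chunk);
      }
      remaining.position(chunk.position());
    }
  }
}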
Now you might be wondering why you don't just pass a regular Java byte
array into the read/write calls, to avoid the copy from direct memory
to java heap memory, and here's the story about that. Let's assume
you're doing a file or socket read. There are two cases:

If the amount being read is < 8k, it uses a native char array on the C
stack for the read system call, and then copies the result into your
Java buffer.
If the amount being read is > 8k, the JDK calls malloc, does the read
system call with that buffer, copies the result into your Java buffer,
and then calls free.

The reason for this is that the compacting Java garbage collector
might move your Java buffer while you're blocked in the read system
call and clearly that will not do. But if you are not aware of the
malloc/free being called every time you perform a read larger than 8k,
you might be surprised by the 

Fwd: Build failed in Jenkins: HBase-TRUNK #2116

2011-08-15 Thread Ted Yu
From:
https://builds.apache.org/view/G-L/view/HBase/job/HBase-TRUNK/lastCompletedBuild/testReport/org.apache.hadoop.hbase.master/TestDistributedLogSplitting/testOrphanLogCreation/

Caused by: java.io.IOException: Too many open files
at sun.nio.ch.IOUtil.initPipe(Native Method)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:49)

FYI


Re: Fwd: mvn error?

2011-08-04 Thread Eric Charles

Hi,

'mvn site -DskipTests=true' works fine here with maven 3.

What does 'mvn clean site -DskipTests=true -U' give?

I would also 'svn revert' my local hbase repo (to be sure pom has not 
been locally modified).


...and even remove some local maven jars from $HOME/.m2/ (e.g.
$HOME/.m2/org/apache/maven); I helped a colleague some time ago who had
unbelievable maven errors, and they came from bad jars in his local
maven repository.


Hope this helps.


On 04/08/11 04:17, Ted Yu wrote:

Collective wisdom is needed here.

-- Forwarded message --
From: Doug Meil doug.m...@explorysmedical.com
Date: Wed, Aug 3, 2011 at 7:07 PM
Subject: Re: mvn error?
To: Ted Yu yuzhih...@gmail.com



I don't believe this…   I'm getting the same error even with mvn 2.


INFO] [site:site {execution: default-site}]
[INFO] Unable to load parent project from a relative path: Could not find
the model file '/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml'.
for project unknown
[INFO] Parent project loaded from repository.
[INFO] Unable to load parent project from a relative path: Could not find
the model file '/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml'.
for project unknown
[INFO] Parent project loaded from repository.
[INFO] Skipped About report, file index.html already exists for the
English version.
[INFO] Generating Project Team report.



From: Ted Yu yuzhih...@gmail.com
Date: Wed, 3 Aug 2011 21:59:45 -0400

To: Doug Meil doug.m...@explorysmedical.com
Subject: Re: mvn error?

At our company mvn 3 gave us some headache.
Please use mvn 2.

Regards

On Wed, Aug 3, 2011 at 6:58 PM, Doug Meil doug.m...@explorysmedical.com wrote:



doug-meils-macbook-pro:hbase doug.meil$ mvn -version
Apache Maven 3.0.2 (r1056850; 2011-01-08 19:58:10-0500)
Java version: 1.6.0_24, vendor: Apple Inc.
Java home: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
Default locale: en_US, platform encoding: MacRoman
OS name: mac os x, version: 10.6.7, arch: x86_64, family: mac


From: Ted Yu yuzhih...@gmail.com
Date: Wed, 3 Aug 2011 21:51:22 -0400
To: Doug Meil doug.m...@explorysmedical.com
Subject: Re: mvn error?

Your command didn't work for me:

[INFO]

[ERROR] BUILD FAILURE
[INFO]

[INFO] Invalid task '##skipTests': you must specify a valid lifecycle
phase, or a goal in the format plugin:goal or
pluginGroupId:pluginArtifactId:pluginVersion:goal

'mvn site' completed successfully.

tyumac:trunk tyu$ mvn -version
Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
Java version: 1.6.0_26

What version of mvn are you using ?

On Wed, Aug 3, 2011 at 6:32 PM, Doug Meil doug.m...@explorysmedical.com wrote:



Hey Ted, sorry to bug you but I just got an error I haven't had before…

When executing from this directory…

  /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/hbase

… this command  'mvn –DskipTests site

… I got this error…   for some reason it's trying to find a pom.xml in the
parent directory, not in the local directory. So the error is correct
in the sense that the pom.xml isn't there, but I'm at a loss as to explain
why it's not looking locally.  Did I honk my system up or did something
change?


[INFO] Unable to load parent project from a relative path: 1 problem was
encountered while building the effective model
[FATAL] Non-readable POM
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml:
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml (No such file or
directory) @
  for project  at
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml for project  at
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml
[INFO] Parent project loaded from repository.
[INFO] Unable to load parent project from a relative path: 1 problem was
encountered while building the effective model
[FATAL] Non-readable POM
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml:
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml (No such file or
directory) @
  for project  at
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml for project  at
/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml
[INFO] Parent project loaded from repository.
[INFO]



Doug Meil
Chief Software Architect, Explorys
doug.m...@explorys.com









--
Eric Charles
http://about.echarles.net


Fwd: mvn error?

2011-08-03 Thread Ted Yu
Collective wisdom is needed here.

-- Forwarded message --
From: Doug Meil doug.m...@explorysmedical.com
Date: Wed, Aug 3, 2011 at 7:07 PM
Subject: Re: mvn error?
To: Ted Yu yuzhih...@gmail.com



I don't believe this…   I'm getting the same error even with mvn 2.


INFO] [site:site {execution: default-site}]
[INFO] Unable to load parent project from a relative path: Could not find
the model file '/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml'.
for project unknown
[INFO] Parent project loaded from repository.
[INFO] Unable to load parent project from a relative path: Could not find
the model file '/Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml'.
for project unknown
[INFO] Parent project loaded from repository.
[INFO] Skipped About report, file index.html already exists for the
English version.
[INFO] Generating Project Team report.



From: Ted Yu yuzhih...@gmail.com
Date: Wed, 3 Aug 2011 21:59:45 -0400

To: Doug Meil doug.m...@explorysmedical.com
Subject: Re: mvn error?

At our company mvn 3 gave us some headache.
Please use mvn 2.

Regards

On Wed, Aug 3, 2011 at 6:58 PM, Doug Meil doug.m...@explorysmedical.com wrote:


 doug-meils-macbook-pro:hbase doug.meil$ mvn -version
 Apache Maven 3.0.2 (r1056850; 2011-01-08 19:58:10-0500)
 Java version: 1.6.0_24, vendor: Apple Inc.
 Java home: /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents/Home
 Default locale: en_US, platform encoding: MacRoman
 OS name: mac os x, version: 10.6.7, arch: x86_64, family: mac


 From: Ted Yu yuzhih...@gmail.com
 Date: Wed, 3 Aug 2011 21:51:22 -0400
 To: Doug Meil doug.m...@explorysmedical.com
 Subject: Re: mvn error?

 Your command didn't work for me:

 [INFO]
 
 [ERROR] BUILD FAILURE
 [INFO]
 
 [INFO] Invalid task '##skipTests': you must specify a valid lifecycle
 phase, or a goal in the format plugin:goal or
 pluginGroupId:pluginArtifactId:pluginVersion:goal

 'mvn site' completed successfully.

 tyumac:trunk tyu$ mvn -version
 Apache Maven 2.2.1 (r801777; 2009-08-06 12:16:01-0700)
 Java version: 1.6.0_26

 What version of mvn are you using ?

 On Wed, Aug 3, 2011 at 6:32 PM, Doug Meil 
  doug.m...@explorysmedical.com wrote:


 Hey Ted, sorry to bug you but I just got an error I haven't had before…

 When executing from this directory…

  /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/hbase

 … this command  'mvn –DskipTests site

 … I got this error…   for some reason it's trying to find a pom.xml in the
 parent directory, not in the local directory. So the error is correct
 in the sense that the pom.xml isn't there, but I'm at a loss as to explain
 why it's not looking locally.  Did I honk my system up or did something
 change?


 [INFO] Unable to load parent project from a relative path: 1 problem was
 encountered while building the effective model
 [FATAL] Non-readable POM
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml:
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml (No such file or
 directory) @
  for project  at
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml for project  at
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml
 [INFO] Parent project loaded from repository.
 [INFO] Unable to load parent project from a relative path: 1 problem was
 encountered while building the effective model
 [FATAL] Non-readable POM
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml:
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml (No such file or
 directory) @
  for project  at
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml for project  at
 /Users/local/EXPLORYS/doug.meil/Documents/Hbasewrk/pom.xml
 [INFO] Parent project loaded from repository.
 [INFO]
 


 Doug Meil
 Chief Software Architect, Explorys
 doug.m...@explorys.com





Fwd: Build failed in Jenkins: HBase-TRUNK #2021

2011-07-12 Thread Ted Yu
Only 3 tests failed this time:
https://builds.apache.org/view/G-L/view/HBase/job/HBase-TRUNK/2021/

-- Forwarded message --
From: Apache Jenkins Server jenk...@builds.apache.org
Date: Tue, Jul 12, 2011 at 3:52 PM
Subject: Build failed in Jenkins: HBase-TRUNK #2021
To: dev@hbase.apache.org


See https://builds.apache.org/job/HBase-TRUNK/2021/changes

Changes:

[tedyu] HBASE-4003  Cleanup Calls Conservatively On Timeout - revert

--


Fwd: hive hbase and hadoop versions

2011-04-11 Thread Ted Yu
See John's comment below.

On Mon, Apr 11, 2011 at 11:14 AM, John Sichi jsi...@fb.com wrote:

 Until HBase has a well-defined separation between client and server,
 including protocol compatibility across versions, the situation is going to
 remain sticky.

 I think I heard that 0.89 and 0.90 should be protocol compatible, but I
 haven't confirmed that.  If it's true, then you should be able to just use
 Hive 0.7 as is (with the 0.89 jars) against an HBase 0.90 cluster.

 If that's not true, then follow the procedure described in the wiki page to
 rebuild Hive from source after editing ivy/libraries.properties to change
 the hbase.version property.

 I'm not sure about the Hadoop append part; maybe someone else knows the
 answer regarding the Hive/HBase dependencies there.

 JVS

 On Apr 10, 2011, at 9:49 AM, hi...@gmx.de
  wrote:

  Hello,
 
  I am using the hive/hbase integration (which is a great feature) in a
 standalone, single-machine environment. For production I set up a hadoop cluster
 and now hbase. Here
  http://wiki.apache.org/hadoop/Hive/HBaseIntegration
  I read that I should use hbase 0.89.
  And here
  http://hbase.apache.org/book/notsoquick.html
  it's written that you must use the sync-supporting Hadoop jar.
  Will the following setup work?
  I chose hbase 0.90 (I couldn't find 0.89). Because of the sync support
 I would take the hadoop-core-0.20-append.jar from the hbase 0.90 release and
 replace the hadoop jars in my hadoop installations with it. The hbase 0.90
 jar I will replace with the 0.89 jar from the hive 0.7.0 release. Will this
 version setup run into compatibility trouble? And what about
 stability and version updates in the future...
 
  Thanks
  labtrax
 
  --
  Recommend GMX DSL to your friends and acquaintances and we
  will reward you with up to 50 euros! https://freundschaftswerbung.gmx.de




ReSend: Fwd: ANN: hbase 0.90.2 Release Candidate 0 available for download

2011-03-28 Thread Stack
Resending Ted's vote.  He got a failure trying to post the list.


-- Forwarded message --
From: Ted Yu yuzhih...@gmail.com
Date: Mon, Mar 28, 2011 at 10:27 AM
Subject: Re: ANN: hbase 0.90.2 Release Candidate 0 available for download
To: dev@hbase.apache.org
Cc: Stack st...@duboce.net


I have completed our flow over 200GB of data four times successfully.
Performance was stable.

+1

On Sun, Mar 27, 2011 at 3:52 PM, Stack st...@duboce.net wrote:

 The first hbase 0.90.2 release candidate is available for download:

  http://people.apache.org/~stack/hbase-0.90.2-candidate-0/

 Its also available in Apache's Maven Staging Repository [1]

 About 60 issues have been resolved since 0.90.1. About half were
 deemed Blockers/Critical fixes.

 Release notes are available here: http://su.pr/1880F7

 Should we release this candidate as hbase 0.90.2?  Please vote +1/-1 by
 next Friday, April 1st.

 Yours,
 The HBase Team

 1. Look for hbase-0.90.2-SNAPSHOT in
 https://repository.apache.org/index.html#nexus-search;quick~hbase


Fwd: negotiated timeout

2011-03-24 Thread Ted Yu
Seeking more comment.

-- Forwarded message --
From: Patrick Hunt ph...@apache.org
Date: Thu, Mar 24, 2011 at 4:15 PM
Subject: Re: negotiated timeout
To: Ted Yu yuzhih...@gmail.com
Cc: d...@zookeeper.apache.org, Mahadev Konar maha...@apache.org,
zookeeper-...@hadoop.apache.org


Ted, you'll need to ask the hbase guys about this if you are not
running a dedicated zk cluster. I'm not sure how they manage embedded
zk.

However a quick search of the HBASE code results in:

./src/main/java/org/apache/hadoop/hbase/zookeeper/HQuorumPeer.java:

   // Set the max session timeout from the provided client-side timeout
   properties.setProperty("maxSessionTimeout",
       conf.get("zookeeper.session.timeout", "180000"));

Patrick
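
For reference, a minimal sketch (not ZooKeeper source) of the clamping
behaviour being described, assuming the stock defaults of
minSessionTimeout = 2 * tickTime and maxSessionTimeout = 20 * tickTime;
the class, method and sample values are made up for illustration.

public class SessionTimeoutNegotiation {

  // The server bounds the client's requested timeout by its min/max settings.
  static int negotiate(int requestedMs, int tickTimeMs,
                       Integer minSessionTimeoutMs, Integer maxSessionTimeoutMs) {
    int min = minSessionTimeoutMs != null ? minSessionTimeoutMs : 2 * tickTimeMs;
    int max = maxSessionTimeoutMs != null ? maxSessionTimeoutMs : 20 * tickTimeMs;
    return Math.max(min, Math.min(requestedMs, max));
  }

  public static void main(String[] args) {
    // With tickTime = 3000 ms and no explicit maxSessionTimeout, a client
    // asking for e.g. 490000 ms is capped at 20 * 3000 = 60000 ms.
    System.out.println(negotiate(490000, 3000, null, null)); // 60000
  }
}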

On Thu, Mar 24, 2011 at 4:00 PM, Ted Yu yuzhih...@gmail.com wrote:
 Patrick:
 Do you want me to look at maxSessionTimeout ?
 Since hbase manages zookeeper, I am not sure I can control this parameter
 directly.

 On Thu, Mar 24, 2011 at 3:50 PM, Patrick Hunt ph...@apache.org wrote:



http://zookeeper.apache.org/doc/r3.3.3/zookeeperAdmin.html#sc_advancedConfiguration

 On Thu, Mar 24, 2011 at 3:43 PM, Mahadev Konar maha...@apache.org
wrote:
  Hi Ted,
   The session timeout can be changed by the server depending on min/max
  bounds set on the servers. Are your servers configured to have a max
  timeout of 60 seconds? Usually the default is 20 * tickTime. Looks
  like your tickTime is 3 seconds?
 
  thanks
  mahadev
 
 
 
  On Thu, Mar 24, 2011 at 3:20 PM, Ted Yu yuzhih...@gmail.com wrote:
  Hi,
  hbase 0.90.1 uses zookeeper 3.3.2
  I specified:
  <property>
    <name>zookeeper.session.timeout</name>
    <value>490000</value>
  </property>
 
  In zookeeper log I see:
  2011-03-24 19:58:09,499 INFO
org.apache.zookeeper.server.NIOServerCnxn:
  Client attempting to establish new session at /10.202.50.111:50325
  2011-03-24 19:58:09,499 INFO
org.apache.zookeeper.server.NIOServerCnxn:
  Established session 0x12ebb99d686a012 with negotiated timeout 60000 for
  client /10.202.50.112:62386
  2011-03-24 19:58:09,499 INFO
org.apache.zookeeper.server.NIOServerCnxn:
  Client attempting to establish new session at /10.202.50.112:62387
  2011-03-24 19:58:09,499 INFO
  org.apache.zookeeper.server.PrepRequestProcessor: Got user-level
  KeeperException when processing sessionid:0x12ebb99d686a012
type:create
  cxid:0x1 zxid:0xfffe txntype:unknown reqpath:n/a Error
  Path:/hbase Error:KeeperErrorCode = NodeExists for /hbase
  2011-03-24 19:58:09,499 INFO
org.apache.zookeeper.server.NIOServerCnxn:
  Established session 0x12ebb99d686a013 with negotiated timeout 60000 for
  client /10.202.50.111:50324
 
  Can someone tell me how the negotiated timeout of 60000 was computed?
 
  Thanks
 
 




Fwd: Review Request: Improvements to Hbck and better error reporting

2011-03-23 Thread Marc Limotte
Hey.  Are we still using ReviewBoard for Hbase?  I put up a patch for hbck,
but the auto-generated email from ReviewBoard was blocked as spam.

Re:

https://issues.apache.org/jira/browse/HBASE-3695
http://review.cloudera.org/r/1661/

Marc


-- Forwarded message --
From: Marc Limotte mslimo...@gmail.com
Date: Wed, Mar 23, 2011 at 4:16 PM
Subject: Review Request: Improvements to Hbck and better error reporting
To: Marc Limotte mslimo...@gmail.com, jirapos...@review.cloudera.org,
dev@hbase.apache.org


   This is an automatically generated e-mail. To reply, visit:
http://review.cloudera.org/r/1661/
  Review request for hbase.
By Marc Limotte.
Description

https://issues.apache.org/jira/browse/HBASE-3695

  Diffs

   - src/main/java/org/apache/hadoop/hbase/util/HBaseFsck.java (55423af)
   - src/main/java/org/apache/hadoop/hbase/util/HBaseFsckRepair.java
   (b624d28)
   - src/test/java/org/apache/hadoop/hbase/HBaseTestingUtility.java
   (186027c)
   - src/test/java/org/apache/hadoop/hbase/util/TestHBaseFsck.java (a055082)

View Diff http://review.cloudera.org/r/1661/diff/


Fwd: File formats in Hadoop

2011-03-22 Thread Weishung Chung
-- Forwarded message --
From: Weishung Chung weish...@gmail.com
Date: Tue, Mar 22, 2011 at 11:31 AM
Subject: Re: File formats in Hadoop
To: Vivek Krishna vivekris...@gmail.com
Cc: u...@hbase.apache.org, common-u...@hadoop.apache.org,
qwertyman...@gmail.com, Doug Cutting cutt...@apache.org


I also found this informative article:
http://cloudepr.blogspot.com/2009/09/hfile-block-indexed-file-format-to.html


Would the key value pairs be, e.g. for column family1 with one qualifier1
with 2 versions:

key1 : rowkey1+column family1:qualifier1+timestamp1
value1: corresponding cell value1
key2 :  rowkey1+column family1:qualifier1+timestamp2
value2: corresponding cell value 2
key3:  rowkey2+column family1:qualifier1+timestamp1
value3: corresponding cell value 3
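
For reference, a minimal sketch of the layout sketched above using the HBase
client API of that era: each versioned cell is one KeyValue whose key is
row + family + qualifier + timestamp. The row, family and qualifier names are
made up for illustration.

import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class KeyValueLayout {
  public static void main(String[] args) {
    byte[] row = Bytes.toBytes("rowkey1");
    byte[] fam = Bytes.toBytes("family1");
    byte[] qual = Bytes.toBytes("qualifier1");

    // Two versions of the same cell are two distinct key/value entries,
    // so reading a whole row means reading several KeyValues.
    KeyValue v1 = new KeyValue(row, fam, qual, 1L, Bytes.toBytes("cell value 1"));
    KeyValue v2 = new KeyValue(row, fam, qual, 2L, Bytes.toBytes("cell value 2"));

    System.out.println(v1);
    System.out.println(v2);
  }
}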
On Tue, Mar 22, 2011 at 10:58 AM, Vivek Krishna vivekris...@gmail.com wrote:

 http://nosql.mypopescu.com/post/3220921756/hbase-internals-hfile-explained
 might help.

 Viv




 On Tue, Mar 22, 2011 at 11:43 AM, Weishung Chung weish...@gmail.com wrote:

 My fellow superb hbase experts,

 Looking at the HFile specs and have some questions.
 How is a particular table cell in an HBase table represented in the
 HFile? Does the key of the key value pair represent the rowkey+column
 family:qualifier+timestamp and the value represent the corresponding cell
 value? If so, to read a row, multiple key/value pair reads have to be
 done?

 Thank you :)


 On Tue, Mar 22, 2011 at 9:09 AM, Weishung Chung weish...@gmail.com
 wrote:

  Thank you, I will definitely take a look. Also, the TFile spec below
 helps me understand it more. What exciting work!
 
 
 
 https://issues.apache.org/jira/secure/attachment/12396286/TFile+Specification+20081217.pdf
 
  
 https://issues.apache.org/jira/secure/attachment/12396286/TFile+Specification+20081217.pdf
 
  On Mon, Mar 21, 2011 at 11:41 AM, Doug Cutting cutt...@apache.org
 wrote:
 
  On 03/19/2011 09:01 AM, Weishung Chung wrote:
   I am browsing through the hadoop.io package and was wondering what
  other
   file formats are available in hadoop other than SequenceFile and
 TFile?
   Is all data written through hadoop including those from hbase saved
 in
  the
   above formats? It seems like SequenceFile is in key value pair
 format.
 
  Avro includes a file format that works with Hadoop.
 
 
 
 http://avro.apache.org/docs/current/api/java/org/apache/avro/mapred/package-summary.html
 
  Doug
 
 
 





Fwd: HRegion.RegionScanner.nextInternal()

2010-11-25 Thread Lars George
Does hbase-dev still get forwarded? Did you see the below message?

-- Forwarded message --
From: Lars George lars.geo...@gmail.com
Date: Tue, Nov 23, 2010 at 4:25 PM
Subject: HRegion.RegionScanner.nextInternal()
To: hbase-...@hadoop.apache.org

Hi,

I am officially confused:

          byte [] nextRow;
          do {
            this.storeHeap.next(results, limit - results.size());
            if (limit > 0 && results.size() == limit) {
              if (this.filter != null && filter.hasFilterRow()) throw
                  new IncompatibleFilterException(
                      "Filter with filterRow(List<KeyValue>) incompatible with scan with limit!");
              return true; // we are expecting more yes, but also
                           // limited to how many we can return.
            }
          } while (Bytes.equals(currentRow, nextRow = peekRow()));

This is from the nextInternal() call. Questions:

a) Why is that check for the filter and limit both being set inside the loop?

b) if limit is the batch size (which for a Get is -1, not 1 as I
would have thought) then what does that limit - results.size()
achieve?

I mean, this loop gets all columns for a given row, so batch/limit
should not be handled here, right? What if limit were set to 1 by
the client? Then even if the Get had 3 columns to retrieve it would
not be able to since this limit makes it bail out. So there would be
multiple calls to nextInternal() to complete what could be done in one
loop?

Eh?

Lars
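
For context, a minimal sketch of the client-side knob that ends up as the
limit parameter discussed above: Scan.setBatch() caps how many columns of a
wide row come back per next() call, while a plain Get leaves it unset (hence
the -1). The column-family name is made up for illustration.

import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.util.Bytes;

public class BatchedScan {
  public static void main(String[] args) {
    Scan scan = new Scan();
    scan.addFamily(Bytes.toBytes("family1"));
    // At most 3 KeyValues of a row are returned per call to the scanner.
    scan.setBatch(3);
    System.out.println("batch = " + scan.getBatch());
  }
}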


Fwd: Client developer mailing list

2010-09-01 Thread Jeff Hammerbacher
Hey,

I'm not sure if this sort of mailing list makes sense for HBase, but I like
the idea. There are a lot of ideas floating around in the HBase client
space, and they may make sense to discuss separately. No worries if you guys
don't think it's a good idea.

Later,
Jeff

-- Forwarded message --
From: Jeremy Hanna jeremy.hanna1...@gmail.com
Date: Mon, Aug 30, 2010 at 12:05 PM
Subject: Client developer mailing list
To: u...@cassandra.apache.org
Cc: d...@cassandra.apache.org


There has been a new mailing list created for those who are working on
Cassandra clients above thrift and/or avro.  You can subscribe by sending an
email to client-dev-subscr...@cassandra.apache.org or using the link at the
bottom of http://cassandra.apache.org

The list is meant to give client authors a discussion forum as well as a
place to interact with core cassandra developers about the roadmap and
upcoming features.

Thanks to Cliff Moon (@moonpolysoft) for starting a discussion about client
quality at the Cassandra Summit.

