Incubator wiki edit privileges

2015-07-16 Thread Joey Echeverria
I'm interested in adding my name to the YetusProposal[1] as an
Additional Interested Contributor.

Can someone please add edit privileges to my account, JoeyEcheverria?

Thanks!

-Joey

-
To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org
For additional commands, e-mail: general-h...@incubator.apache.org



Re: [VOTE] accept NiFi into the incubator

2014-11-21 Thread Joey Echeverria
+1 (non-binding)

Great proposal!
On Fri, Nov 21, 2014 at 16:22 Alan D. Cabrera l...@toolazydogs.com wrote:

 +1 - binding

 On Nov 21, 2014, at 10:55 AM, Benson Margulies bimargul...@gmail.com
 wrote:

  http://wiki.apache.org/incubator/NiFiProposal has elicited a cheerful
 and
  positive conversation, so I offer this vote.
 
  Vote will be open for the usual 72 hours ...
 
  Here is my [+1]






Re: [VOTE] accumulo graduation

2012-03-04 Thread Joey Echeverria
+1 (non-binding)

On Sat, Mar 3, 2012 at 5:24 PM, Billie J Rinaldi
billie.j.rina...@ugov.gov wrote:
 Apache Accumulo began incubation in September 2011.  Since then, our 
 community has been active and we have voted in one new committer and seen 10 
 new contributors.  We have made one release.  The Accumulo community held a 
 vote for graduation that passed with 9 +1 votes, 2 from IPMC members 
 (http://s.apache.org/hTk).

 Please vote on recommending the graduation of Apache Accumulo with the 
 following resolution to the ASF Board.

 [  ] +1 Recommend to the ASF board that Apache Accumulo is ready to graduate 
 to TLP
 [  ] +0
 [  ] -1 Do not recommend that Apache Accumulo graduate yet because ...

 This vote will remain open for 72 hours.

 Billie

 

    X. Establish the Apache Accumulo Project

       WHEREAS, the Board of Directors deems it to be in the best
       interests of the Foundation and consistent with the
       Foundation's purpose to establish a Project Management
       Committee charged with the creation and maintenance of
       open-source software related to a robust, scalable distributed
       key/value store with cell-based access control and customizable
       server-side processing for distribution at no charge to the
       public.

       NOW, THEREFORE, BE IT RESOLVED, that a Project Management
       Committee (PMC), to be known as the Apache Accumulo Project,
       be and hereby is established pursuant to Bylaws of the
       Foundation; and be it further

       RESOLVED, that the Apache Accumulo Project be and hereby is
       responsible for the creation and maintenance of software
       related to a robust, scalable distributed key/value store with
       cell-based access control and customizable server-side processing
       and be it further

       RESOLVED, that the office of Vice President, Apache Accumulo be
       and hereby is created, the person holding such office to
       serve at the direction of the Board of Directors as the chair
       of the Apache Accumulo Project, and to have primary responsibility
       for management of the projects within the scope of
       responsibility of the Apache Accumulo Project; and be it further

       RESOLVED, that the persons listed immediately below be and
       hereby are appointed to serve as the initial members of the
       Apache Accumulo Project:

         * Aaron Cordova             acord...@apache.org
         * Adam Fuchs                afu...@apache.org
         * Billie Rinaldi            bil...@apache.org
         * Chris Waring              cawar...@apache.org
         * Eric Newton               e...@apache.org
         * Keith Turner              ktur...@apache.org
         * David Medinets            medi...@apache.org
         * John Vines                vi...@apache.org

       NOW, THEREFORE, BE IT FURTHER RESOLVED, that Billie Rinaldi
       be appointed to the office of Vice President, Apache Accumulo, to
       serve in accordance with and subject to the direction of the
       Board of Directors and the Bylaws of the Foundation until
       death, resignation, retirement, removal or disqualification,
       or until a successor is appointed; and be it further

       RESOLVED, that the initial Apache Accumulo PMC be and hereby is
       tasked with the creation of a set of bylaws intended to
       encourage open development and increased participation in the
       Apache Accumulo Project; and be it further

       RESOLVED, that the Apache Accumulo Project be and hereby
       is tasked with the migration and rationalization of the Apache
       Incubator Accumulo podling; and be it further

       RESOLVED, that all responsibilities pertaining to the Apache
       Incubator Accumulo podling encumbered upon the Apache Incubator
       Project are hereafter discharged.





-- 
Joseph Echeverria
Cloudera, Inc.
443.305.9434




Re: [VOTE] DeltaSpike to join the Incubator

2011-12-04 Thread Joey Echeverria
+1 (non-binding)

On Sun, Dec 4, 2011 at 5:13 PM, Mark Struberg strub...@yahoo.de wrote:
 +1 (binding)


 LieGrue,
 strub



 - Original Message -
 From: Gerhard Petracek gpetra...@apache.org
 To: general@incubator.apache.org
 Cc:
 Sent: Sunday, December 4, 2011 11:11 PM
 Subject: [VOTE] DeltaSpike to join the Incubator

 Hello,

 Please vote on the acceptance of DeltaSpike into the Apache Incubator.

 The proposal is available at [1] and its content is also included below for
 your convenience.

 Please vote:

 [ ] +1 Accept DeltaSpike for incubation
 [ ] +0 Don't care
 [ ] -1  Don't accept DeltaSpike for incubation because...

 The vote is open for 72 hours.

 Thanks,
 Gerhard

 [1] http://wiki.apache.org/incubator/DeltaSpikeProposal

 

 Apache DeltaSpike Proposal
 ==



 Abstract
 

 Apache DeltaSpike is a collection of JSR-299 (CDI) Extensions for building
 applications on the Java SE and EE platforms.

 Proposal
 

 Apache DeltaSpike will consist of a number of portable CDI extensions that
 provide
 useful features for Java application developers. The goal of  Apache
 DeltaSpike is to create a de-facto standard of extensions that is
 developed and
 maintained by the Java community, and to act as an  incubator for
 features that may eventually become part of the various  Java SE and
 EE-related specifications.

 Background
 

 One of the most exciting inclusions of the Java EE 6 specification is
 JSR-299, Contexts and Dependency Injection (CDI) for Java. CDI builds on
 other Java EE specifications by defining a contextual component model and
 typesafe dependency injection framework for managed beans. It also
 defines an SPI that allows developers to write portable “extensions” that
 can be used to modify the behaviour of the Java EE platform, by
 offering additional features not provided by the platform by default.
 Apache DeltaSpike builds on this portable extensions SPI by providing
 baseline utilities and CDI Extensions which form the base of almost all
 CDI applications.

 Rationale
 

 There presently exist a number of open source projects that provide
 extensions for CDI, such as Apache MyFaces CODI, JBoss Seam3 and
 CDISource. Apache DeltaSpike seeks to unify these efforts by creating an
 “industry standard” set of extensions, combining the best core features of
 these projects. The project also aims to provide a rich, JBoss
 Arquillian-based (license: ALv2) test environment to ensure that
 DeltaSpike runs portably in all important CDI environments.

 Initial Goals
 

 The initial goals of the Apache DeltaSpike project are to:
     * Set up the governance structure of the project
     * Receive code donations from contributing members
     * Ensure all donated code is appropriately licensed under the Apache
 License
     * Merge and rename code to reflect new project name
     * Merge code where feature overlap exists
     * Merge or produce documentation for all modules
     * Provide simple examples demonstrating feature usage
     * Produce release/s based on a schedule created by the PMC
     * Attract contributions from the greater Java EE community and other
 Java EE development groups

 Current Status
 

 The  initial codebase for Apache DeltaSpike will be populated with mature
 code donations from project members, including JBoss Seam3, Apache MyFaces
 CODI and CDISource.

 Meritocracy
 

 All
 contributors have a well established history in the open source
 community and are well aware of the meritocracy principles of the Apache
 Software Foundation.
 Currently the Seam3 project is fortunate to receive the majority of its
 code
 contributions from its large community of users.  Many of the modules
 that are contained in the Seam project are led by volunteers from the
 community, who have both direct commit access, and discretion over the
 direction of their modules.
 Apache MyFaces CODI is a subproject of Apache MyFaces and thus all
 contributors are already familiar with the meritocracy principles.
 The CDISource project has adopted the principles of meritocracy by the
 founding developers having control of different modules depending on
 their contribution to those modules.

 Community
 

 The JBoss Seam, Apache MyFaces CODI and CDISource projects already have
 well-established communities, consisting of many active users and
 contributors. One of the primary goals of the Apache DeltaSpike project
 is to unify these communities by creating a project that is a “single
 source of truth” for CDI Extensions.
 By doing this, we hope
 to make the whole greater than the sum of its parts,  i.e. to
 attract a much stronger community than that which currently  exists
 across the separate projects.  To this end, it is a goal of this
 project to attract contributors from the Java EE community in addition
 to those from the three projects already 

Re: [VOTE] accept DirectMemory as new Apache Incubator podling

2011-10-02 Thread Joey Echeverria
+1 (non-binding)



On Oct 2, 2011, at 3:36, Simone Tripodi simonetrip...@apache.org wrote:

 Hi all guys,
 
 I'm now calling a formal VOTE on the DirectMemory proposal located here:
 
 http://wiki.apache.org/incubator/DirectMemoryProposal
 
 Proposal text copied at the bottom of this email.
 
 The VOTE closes on Tuesday, October 4, at 7:30 AM CET.
 
 Please VOTE:
 
 [ ] +1 Accept DirectMemory into the Apache Incubator
 [ ] +0 Don't care
 [ ] -1  Don't Accept DirectMemory into the Apache Incubator because...
 
 Thanks in advance for participating!
 
 All the best, have a nice day,
 Simo
 
 P.S. Here's my +1
 
 http://people.apache.org/~simonetripodi/
 http://www.99soft.org/
 
 = DirectMemory =
 
 == Abstract ==
 The following proposal is about Apache !DirectMemory, a Java
 !OpenSource multi-layered cache implementation featuring off-heap
 memory storage (à la Terracotta !BigMemory) to enable caching of Java
 objects without degrading JVM performance.
 
 == Proposal ==
 !DirectMemory's main purpose is to act as a second-level cache
 (after a heap-based one) able to store large amounts of data without
 filling up the Java heap, thus avoiding long garbage collection
 cycles. Although serialization has a runtime cost, store/retrieve
 operations are in the sub-millisecond range, which is acceptable in
 most usage scenarios, even as a first-level cache; most of all,
 off-heap storage outperforms heap storage when the number of entries
 grows beyond a certain point. !DirectMemory implements cache eviction
 based on a simple LFU (Least Frequently Used) algorithm and on item
 expiration. Included in the box is a small set of utility classes to
 easily handle off-heap memory buffers.
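The off-heap storage idea described above can be sketched in plain Java using a direct `ByteBuffer`, which allocates memory outside the garbage-collected heap. This is an illustrative sketch only, not the DirectMemory API; the class and method names are invented:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustrative sketch (NOT the DirectMemory API): storing serialized values
// off-heap in a direct ByteBuffer so they do not add to GC pressure.
public class OffHeapSketch {
    // Direct buffers live outside the Java heap; the GC never scans their contents.
    private static final ByteBuffer STORE = ByteBuffer.allocateDirect(1024);

    // "Serialize" a string into the off-heap buffer; return its starting offset.
    public static int put(String value) {
        byte[] bytes = value.getBytes(StandardCharsets.UTF_8);
        int offset = STORE.position();
        STORE.put(bytes);
        return offset;
    }

    // Read length bytes back from the given offset and deserialize.
    public static String get(int offset, int length) {
        byte[] bytes = new byte[length];
        ByteBuffer view = STORE.duplicate(); // independent position, shared memory
        view.position(offset);
        view.get(bytes);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        int off = put("hello off-heap");
        System.out.println(get(off, 14));
    }
}
```

A real implementation would add the serialization of arbitrary objects, free-space management, and the LFU/expiration eviction mentioned in the proposal.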
 
 == Background ==
 !DirectMemory was born in 2010 thanks to Raffaele P. Guidi's
 initial effort on
 [[https://github.com/raffaeleguidi/!DirectMemory/|GitHub]] and is already
 licensed under the Apache License 2.0.
 
 == Rationale ==
 The rationale behind !DirectMemory is bringing off-heap caching to the
 open source world, empowering FOSS developers and products with a tool
 that enables breaking the heap barrier and bypassing the JVM garbage
 collection mechanism, which can be useful in scenarios where RAM needs
 exceed the usual limits (more than 8, 12, or 24 GB), and to ease usage
 of off-heap memory in general.
 
 = Current Status =
 
 == Meritocracy ==
 As a majority of the initial project members are existing ASF
 committers, we recognize the desirability of running the project as a
 meritocracy.  We are eager to engage other members of the community
 and operate to the standard of meritocracy that Apache emphasizes; we
 believe this is the most effective method of growing our community and
 enabling widespread adoption.
 
 == Core Developers ==
 In alphabetical order:
 
 * Christian Grobmeier grobmeier at apache dot org
 * Maurizio Cucchiara mcucchiara at apache dot org
 * Olivier Lamy olamy at apache dot org
 * Raffaele P. Guidi raffaele dot p dot guidi at gmail dot com
 * Simone Gianni simoneg at apache dot org
 * Simone Tripodi simonetripodi at apache dot org
 * Tommaso Teofili tommaso at apache dot org
 
 == Alignment ==
 The purpose of the project is to develop and maintain the !DirectMemory
 implementation so that it can be used by other Apache projects.
 
 = Known Risks =
 == Orphaned Products ==
 !DirectMemory does not have any reported production usage yet, but it
 is gaining traction with developers and being evaluated by potential
 users; thus the risk of it being orphaned is minimal.
 
 == Inexperience with Open Source ==
 All of the committers have experience working in one or more open
 source projects inside and outside ASF.
 
 == Homogeneous Developers ==
 The initial committers are geographically distributed across Europe,
 with no one company being associated with a majority of the
 developers.  Many of these initial developers are experienced Apache
 committers already and all are experienced with working in distributed
 development communities.
 
 == Reliance on Salaried Developers ==
 To the best of our knowledge, none of the initial committers are being
 paid to develop code for this project.
 
 == Relationships with Other Apache Products ==
 !DirectMemory fits naturally in the ASF because it could be
 successfully employed together with a large number of ASF products,
 ranging from JCS (as a new cache region between the heap and indexed
 file ones) to ORM systems like Cayenne (e.g. replacing the current
 OSCache-based implementation), Apache JDO and JPA implementations,
 Java-based databases (e.g. Derby), and systems managing large
 amounts of data, from Hadoop to Cassandra.
 
 == An Excessive Fascination with the Apache Brand ==
 While the Apache Software Foundation would be a good home for the
 !DirectMemory project, the project already has some traction and could
 live on its own; however, we see reciprocal benefits for both the ASF
 and the project in adopting the brand to better 

Re: [VOTE] S4 to join the Incubator

2011-09-20 Thread Joey Echeverria
+1 (non-binding)

On Tue, Sep 20, 2011 at 4:56 PM, Patrick Hunt ph...@apache.org wrote:
 It's been nearly a week since the S4 proposal was submitted for
 discussion.  A few questions were asked, and the proposal was clarified
 in response.  Sufficient mentors have volunteered.  I thus feel we are
 now ready for a vote.

 The latest proposal can be found at the end of this email and at:

  http://wiki.apache.org/incubator/S4Proposal

 The discussion regarding the proposal can be found at:

  http://s.apache.org/RMU

 Please cast your votes:

 [  ] +1 Accept S4 for incubation
 [  ] +0 Indifferent to S4 incubation
 [  ] -1 Reject S4 for incubation

 This vote will close 72 hours from now.

 Thanks,

 Patrick

 --
 = S4 Proposal =

 == Abstract ==

 S4 (Simple Scalable Streaming System) is a general-purpose,
 distributed, scalable, partially fault-tolerant, pluggable platform
 that allows programmers to easily develop applications for processing
 continuous, unbounded streams of data.

 == Proposal ==

 S4 is a software platform written in Java. Clients that send and
 receive events can be written in any programming language. S4 also
 includes a collection of modules called Processing Elements (or PEs
 for short) that implement basic functionality and can be used by
 application developers. In S4, keyed data events are routed with
 affinity to Processing Elements (PEs), which consume the events and do
 one or both of the following: (1) ''emit'' one or more events which
 may be consumed by other PEs, (2) ''publish'' results. The
 architecture resembles the Actors model, providing semantics of
 encapsulation and location transparency, thus allowing applications to
 be massively concurrent while exposing a simple programming  interface
 to application developers.
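The keyed routing to Processing Elements described above can be modeled in a few lines of plain Java. This is an illustrative model only, not the actual S4 API; the class and method names are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of keyed event routing (NOT the real S4 API):
// each distinct key gets its own Processing Element (PE) instance, so
// per-key state is encapsulated in one object, as in the Actors model.
public class KeyedRouterSketch {

    // A toy PE that counts the events routed to its key and "publishes" the count.
    static class CounterPE {
        private long count = 0;
        long process(String event) {
            return ++count; // consume the event, update encapsulated state
        }
    }

    private final Map<String, CounterPE> pesByKey = new HashMap<>();

    // Route an event to the PE owning its key, creating the PE on first use.
    public long route(String key, String event) {
        return pesByKey.computeIfAbsent(key, k -> new CounterPE()).process(event);
    }

    public static void main(String[] args) {
        KeyedRouterSketch router = new KeyedRouterSketch();
        router.route("user-1", "click");
        System.out.println(router.route("user-1", "click")); // second event for user-1
    }
}
```

In the real platform the routing happens across a cluster rather than a local map, and PEs can emit further events to downstream PEs, forming the arbitrary processing graphs the proposal describes.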

 To drive adoption and increase the number of contributors to the
 project, we may need to prioritize the focus based on feedback from
 the community. We believe that one of the top priorities and driving
 design principle for the S4 project is to provide a simple API that
 hides most of the complexity associated with distributed systems and
 concurrency. The project grew out of the need to provide a flexible
 platform for application developers and scientists that can be used
 for quick experimentation and production.

 S4 differs from existing Apache projects in a number of fundamental
 ways. Flume is an Incubator project that focuses on log processing,
 performing lightweight processing in a distributed fashion and
 accumulating log data in a centralized repository for batch
 processing. S4 instead performs all stream processing in a distributed
 fashion and enables applications to form arbitrary graphs to process
 streams of events. We see Flume as a complementary project. We also
 expect S4 to complement Hadoop processing and in some cases to
 supersede it. Kafka is another Incubator project that focuses on
 processing large amounts of stream data. The design of Kafka, however,
 follows the pub-sub paradigm, which focuses on delivering messages
 containing arbitrary data from source processes (publishers) to
 consumer processes (subscribers). Compared to S4, Kafka is an
 intermediate step between data generation and processing, while S4 is
 itself a platform for processing streams of events.

 S4 overall addresses a need of existing applications to process
 streams of events beyond moving data to a centralized repository for
 batch processing. It complements the features of existing Apache
 projects, such as Hadoop, Flume, and Kafka, by providing a flexible
 platform for distributed event processing.

 == Background ==

 S4 was initially developed at Yahoo! Labs starting in 2008 to process
 user feedback in the context of search advertising. The project was
 licensed under the Apache License version 2.0 in October 2010. The
 project documentation is currently available at http://s4.io .

 == Rationale ==

 Stream computing has been growing steadily over the last 20 years.
 However, recently there has been an explosion in real-time data
 sources including the Web, sensor networks, financial securities
 analysis and trading, traffic monitoring, natural language processing
 of news and social data, and much more.

 While Hadoop has evolved into a standard open source solution for
 batch processing of massive data sets, there is no equivalent
 community-supported open source platform for processing data streams
 in real time. While various research projects have evolved into
 proprietary commercial products, S4 has the potential to fill the gap.
 Many projects that require a scalable stream processing architecture
 currently use Hadoop by segmenting the input stream into data batches.
 This solution is not efficient, results in high latency, and
 introduces unnecessary complexity.

 The S4 design is primarily driven by large scale applications for data
 mining and machine learning in a production environment. We think that
 the S4 design is 

Re: [VOTE] Kalumet to join Incubator

2011-09-13 Thread Joey Echeverria
+1 (non-binding)



On Sep 13, 2011, at 3:18, Olivier Lamy ol...@apache.org wrote:

 Hello Folks,
 
 Please vote on the acceptance of Kalumet into the Apache incubator.
 
 The proposal is available at: http://wiki.apache.org/incubator/KalumetProposal
 (for your convenience, a snapshot is also copied below)
 
 The vote options (please cast your vote) :
 
 [ ] +1 Accept Kalumet for incubation
 [ ] +0 Don't care
 [ ] -1 Reject for the following reason:
 
 The vote is open for 72 hours.
 
 Here my (binding) +1 .
 
 Thanks,
 -- 
 Olivier Lamy
 Talend : http://talend.com
 http://twitter.com/olamy | http://linkedin.com/in/olamy
 
 
 = Kalumet - Complete Environment Deployer Toolbox =
 
 == Abstract ==
 
 Kalumet is a complete environment manager and deployer covering J2EE
 environments (application servers, applications, etc.), software, and
 resources.
 It's a perfect complement to continuous integration (managed by Maven
 and Continuum or Jenkins, for instance) by adding continuous
 deployment.
 The whole factory chain is covered, and the software administrator
 manages all environments in a secure and safe way, whatever the
 underlying application servers (Apache Geronimo, Apache Tomcat, Apache
 TomEE, RedHat JBoss, Oracle WebLogic or IBM WebSphere) or software
 (printout systems, Apache HTTPd, operating systems, etc.) are.
 
 Kalumet provides two kinds of components:
 
 * Apache Kalumet agents are installed locally on the target server boxes.
 
 * The Apache Kalumet console controls and manages the agents, allowing
 the software administrator to update all environments from a central
 multi-user web tool.
 
 == Background ==
 
 Currently, Apache Kalumet is named BuildProcess AutoDeploy
 (http://buildprocess.sourceforge.net). Development began 4 years
 ago and several releases have already been provided.
 
 == Rationale ==
 
 Software environment administration is a high-cost task involving a
 high level of human action, especially when mixing J2EE environments,
 different operating systems, software, etc. It suffers from:
 
 * a different set of scripts or actions/procedures depending on the
 application server used (Geronimo, Tomcat, JBoss, WebLogic, WebSphere,
 ...) or other middleware (portals or ESBs like ServiceMix, HTTP
 servers like Apache HTTPd, etc.);
 
 * a high level of risk due to human actions (for example, an
 administrator can forget to deploy a JDBC DataSource, or forget to
 change an application configuration file);
 
 * migrating an application from one environment to another requires
 tedious actions (for example, migrating applications and all linked
 resources from a testing environment to a production one; this action
 is named promotion);
 
 * the upgrade process can be long (depending on the number of
 applications and their complexity);
 
 * most resources are stored on the application server box, not in a
 central repository.
 
 Apache Kalumet secures the environment deployment and covers the whole
 environment scope, including J2EE parts (EAR/WAR archives with
 classloader policy, JDBC DataSources, JMS Connection Factories, JMS
 Queues/Topics, etc.) and resources (operating systems, software, etc.),
 in a unique way. It's heavily extensible, which means that you can
 create new plugins for dedicated resources.
 
 == Initial Goals ==
 
 When we began AutoDeploy, the first goal was to provide JMX plugins
 for several J2EE application servers. But we quickly saw that
 multi-application-server support was only a part of the software
 administration needs.
 
 That's why we extended AutoDeploy to provide agents and a central
 console hosting all environment knowledge (artifact versions,
 resources, etc.). Using the console, several administrators can use a
 central tool to manage all environments in a collaborative, unique and
 secure way.
 
 The target is now to provide a complete tool to fully administer
 data center servers, software and middleware.
 
 = Current Status =
 
 Currently, BuildProcess AutoDeploy provides two branches:
 
 * the 0.5 branch (with 0.5.6 as the latest release) is the current
 stable branch. This branch is built every night using Apache Continuum
 (http://continuum.nanthrax.net).
 
 * the 0.6 branch is in progress and is the target branch to become
 Apache Kalumet.
 
 == Community ==
 
 Currently, the AutoDeploy community contains two committers, 5
 contributors and around 50 users.
 BuildProcess AutoDeploy is used in production and test at several companies:
 
 * Fimasys France (http://www.fimasys.com)
 
 * AMP-AXA Australia (https://www.amp.com.au/wps/portal/au)
 
 * Vodacom South Africa (http://www.vodacom.com)
 
 * Mayo Clinic USA (http://www.mayoclinic.com)
 
 * NSW Attorney General Australia (http://www.lawlink.nsw.gov.au)
 
 == Core Developers ==
 
 The core developers for the AutoDeploy/Apache Kalumet project are:
 
 * Jean-Baptiste Onofré (founder in 2004).
 
 * Mike Duffy, WebSphere Consultant, has contributed since 2005.
 
 == Open Source ==
 
 Since the beginning, 

Re: [VOTE] Accumulo to join the Incubator

2011-09-09 Thread Joey Echeverria
+1 (non-binding)

On Fri, Sep 9, 2011 at 11:22 AM, Doug Cutting cutt...@apache.org wrote:
 It's been a week since the Accumulo proposal was submitted for
 discussion.  A few questions were asked, and the proposal was clarified
 in response.  Sufficient mentors have volunteered.  I thus feel we are
 now ready for a vote.

 The latest proposal can be found at the end of this email and at:

  http://wiki.apache.org/incubator/AccumuloProposal

 The discussion regarding the proposal can be found at:

  http://s.apache.org/oi

 Please cast your votes:

 [  ] +1 Accept Accumulo for incubation
 [  ] +0 Indifferent to Accumulo incubation
 [  ] -1 Reject Accumulo for incubation

 This vote will close 72 hours from now.

 Thanks,

 Doug

 ---

 = Accumulo Proposal =

 == Abstract ==
 Accumulo is a distributed key/value store that provides expressive,
 cell-level access labels.

 == Proposal ==
 Accumulo is a sorted, distributed key/value store based on Google's
 BigTable design.  It is built on top of Apache Hadoop, Zookeeper, and
 Thrift.  It features a few novel improvements on the BigTable design in
 the form of cell-level access labels and a server-side programming
 mechanism that can modify key/value pairs at various points in the data
 management process.
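The cell-level access labels mentioned above can be illustrated with a simplified visibility check. This is a sketch only, far simpler than Accumulo's real ColumnVisibility parser (which supports parentheses and nested expressions); the class and method names are invented:

```java
import java.util.Set;

// Simplified sketch of cell-level visibility evaluation (NOT Accumulo's
// real ColumnVisibility parser). A label like "admin|audit" is visible if
// the scan's authorizations contain at least one of the OR'd terms;
// "admin&audit" requires all terms to be present.
public class VisibilitySketch {

    public static boolean isVisible(String label, Set<String> authorizations) {
        if (label.contains("|")) {
            for (String term : label.split("\\|")) {
                if (authorizations.contains(term)) return true; // any term suffices
            }
            return false;
        }
        for (String term : label.split("&")) {
            if (!authorizations.contains(term)) return false;   // all terms required
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> auths = Set.of("audit");
        System.out.println(isVisible("admin|audit", auths)); // visible: true
        System.out.println(isVisible("admin&audit", auths)); // visible: false
    }
}
```

In Accumulo itself the label is carried as an extra portion of each key, and the server filters cells against the authorizations passed with every query, so access control happens per cell rather than per table or row.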

 == Background ==
 Google published the design of BigTable in 2006.  Several other open
 source projects have implemented aspects of this design including HBase,
 CloudStore, and Cassandra.  Accumulo began its development in 2008.

 == Rationale ==
 There is a need for a flexible, high performance distributed key/value
 store that provides expressive, fine-grained access labels.  The
 communities we expect to be most interested in such a project are
 government, health care, and other industries where privacy is a
 concern.  We have made much progress in developing this project over the
 past 3 years and believe both the project and the interested communities
 would benefit from this work being openly available and having open
 development.

 == Current Status ==

 === Meritocracy ===
 We intend to strongly encourage the community to help with and
 contribute to the code.  We will actively seek potential committers and
 help them become familiar with the codebase.

 === Community ===
 A strong government community has developed around Accumulo and training
 classes have been ongoing for about a year.  Hundreds of developers use
 Accumulo.

 === Core Developers ===
 The developers are mainly employed by the National Security Agency, but
 we anticipate interest developing among other companies.

 === Alignment ===
 Accumulo is built on top of Hadoop, Zookeeper, and Thrift.  It builds
 with Maven.  Due to the strong relationship with these Apache projects,
 the incubator is a good match for Accumulo.

 == Known Risks ==
 === Orphaned Products ===
 There is only a small risk of being orphaned.  The community is
 committed to improving the codebase of the project due to its fulfilling
 needs not addressed by any other software.

 === Inexperience with Open Source ===
 The codebase has been treated internally as an open source project since
 its beginning, and the initial Apache committers have been involved with
 the code for multiple years.  While our experience with public open
 source is limited, we do not anticipate difficulty in operating under
 Apache's development process.

 === Homogeneous Developers ===
 The committers have multiple employers and it is expected that
 committers from different companies will be recruited.

 === Reliance on Salaried Developers ===
 The initial committers are all paid by their employers to work on
 Accumulo and we expect such employment to continue.  Some of the initial
 committers would continue as volunteers even if no longer employed to do so.

 === Relationships with Other Apache Products ===
 Accumulo uses Hadoop, Zookeeper, Thrift, Maven, log4j, commons-lang,
 -net, -io, -jci, -collections, -configuration, -logging, and -codec.

 === Relationship to HBase ===
 Accumulo and HBase are both based on the design of Google's BigTable, so
 there is a danger that potential users will have difficulty
 distinguishing the two.  Some of the key areas in which Accumulo differs
 from HBase are discussed below.  It may be possible to incorporate the
 desired features of Accumulo into HBase.  However, the amount of work
 required would slow development of HBase and Accumulo considerably.  We
 believe this warrants a podling for Accumulo at the current time.  We
 expect active cross-pollination will occur between HBase and podling
 Accumulo and it is possible that the codebases and projects will
 ultimately converge.

  Access Labels 
 Accumulo has an additional portion of its key that sorts after the
 column qualifier and before the timestamp.  It is called column
 visibility and enables expressive cell-level access control.
 Authorizations are passed with each query to control what data is
 returned to