Re: [PROPOSAL] Deltacloud Project
Hi; +1; I am happy to help during the incubation process. Thanks, --Gurkan

From: David Lutterkort lut...@redhat.com
To: general@incubator.apache.org
Sent: Thu, May 6, 2010 10:41:19 PM
Subject: [PROPOSAL] Deltacloud Project

Hi,

I would like to propose the Deltacloud API[1] for addition to the Apache Incubator. I have added the initial proposal to the Wiki[2]; it is also included below for convenience.

A few additional people have expressed interest in becoming initial committers; I am waiting for their express consent to list them as committers, and will add them to the Wiki as I get it.

We look forward to any and all feedback and questions on the proposal. We already have two mentors, but would very much welcome additional volunteers to help steer Deltacloud through the incubation process.

David

[1] http://deltacloud.org/
[2] http://wiki.apache.org/incubator/DeltacloudProposal

Deltacloud, a cross-cloud web service API
=========================================

Abstract
--------

Deltacloud defines a web service API for interacting with cloud service providers, and with resources in those clouds, in a unified manner. In addition, it consists of a number of implementations of this API for the most popular clouds.
Proposal
--------

* Define a REST-based API for managing and manipulating cloud resources in a manner that isolates the API client as much as possible from the particulars of specific cloud APIs
* Provide an open API definition for cloud providers for their IaaS clouds, and a basis on which PaaS providers can layer their offerings
* Provide image management and directory capabilities as part of the API
* The current implementation allows instance lifecycle management (create, start, stop, destroy, reboot) and querying of related resources, such as available images, instance sizes, and allowed instance actions, for a number of public and private clouds
* Currently supported are Amazon EC2, Eucalyptus, Rackspace, RimuHosting, GoGrid, OpenNebula, and RHEV-M
* Future enhancements should broaden the scope of the API to include networking, firewalling, authentication, accounting, and image management

Background
----------

An important issue for cloud users is avoiding lock-in to a specific cloud. By providing a cross-cloud API for infrastructure-as-a-service (IaaS) clouds, Deltacloud addresses this concern and strives to provide the best possible API for writing cloud-management applications that can target multiple clouds.

There are currently no efforts to define a truly open-source cloud API, one with a proper upstream that is independent of any specific cloud provider. Deltacloud strives to create a community around building an open-source cloud API in a manner that fully allows for tried-and-true open-source mechanisms such as user-driven innovation. By providing a web-service API, Deltacloud is language agnostic, and one of its subordinate goals is to provide a practical vocabulary for talking about IaaS cloud resources and operations on them.

Rationale
---------

IaaS clouds provide numerous advantages to their users, for example, making the provisioning of new servers more agile.
If users directly use the 'native' cloud APIs, they risk locking themselves in to the API of a specific cloud provider. There is therefore a strong need for an API that can be used across a wide range of public and private clouds, and that can serve as the basis for developing cloud-management applications; in contrast to several existing language-specific efforts in this direction, Deltacloud is conceived as a web service. This will allow the project to attract a broad community of users of the API, as well as cloud providers interested in offering a truly open-source API with a proper upstream community. We strongly believe that the best way to drive such an API effort is to develop the API and open-source implementations of it side by side.

Initial Goals
-------------

Deltacloud is an existing open-source project; initially started by Red Hat, it has attracted a number of outside contributors. We see moving the project to the ASF as the next step in broadening the community and putting the project on solid footing, since the ASF governance model is well suited to Deltacloud's goals. The ASF is a great location for Deltacloud to build a community, and the project will benefit from ASL licensing.

Current Status
--------------

Deltacloud API is licensed under the LGPL:

* Deltacloud website (http://deltacloud.org); two projects are hosted there: the API under consideration here, and the Aggregator (not part of this proposal, though also open source)
* Deltacloud git repository (http://git.fedorahosted.org/git/?p=deltacloud/core.git;a=summary)
* Deltacloud mailing lists
  - users (https://fedorahosted.org/mailman/listinfo/deltacloud-users)
  - developers
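The instance lifecycle operations listed in the proposal map naturally onto REST calls. A minimal sketch of how a client might address them follows; the endpoint layout (`/api/instances`, `POST /api/instances/<id>/<action>`) reflects Deltacloud's general conventions but is an assumption for illustration, not text from the proposal, and nothing is sent over the wire:

```python
# Illustrative sketch only: maps the proposal's lifecycle operations
# (create, start, stop, destroy, reboot) onto HTTP method/URL pairs in a
# Deltacloud-like REST layout. Paths and parameters are assumptions.

POST_ACTIONS = {"start", "stop", "reboot"}

def instance_request(base_url, action, instance_id=None, image_id=None):
    """Return (method, url, body) for a Deltacloud-style lifecycle call."""
    api = base_url.rstrip("/") + "/api/instances"
    if action == "list":
        return ("GET", api, None)
    if action == "create":
        if image_id is None:
            raise ValueError("create requires an image_id")
        # Instances are created by POSTing form parameters to the collection.
        return ("POST", api, {"image_id": image_id})
    if instance_id is None:
        raise ValueError(f"{action} requires an instance_id")
    if action in POST_ACTIONS:
        # State transitions are POSTs to an action sub-resource.
        return ("POST", f"{api}/{instance_id}/{action}", None)
    if action == "destroy":
        return ("DELETE", f"{api}/{instance_id}", None)
    raise ValueError(f"unknown action: {action}")

print(instance_request("http://localhost:3001", "stop", instance_id="i-123"))
# ('POST', 'http://localhost:3001/api/instances/i-123/stop', None)
```

The point of such a layer is exactly the isolation the proposal describes: a client programs against these few verbs, and a per-cloud driver behind the server translates them to EC2, Rackspace, RHEV-M, and so on.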
Re: [libcloud] [VOTE] Release Apache Libcloud 0.3.1
+1. One minor nit: the Changes file still talks about 0.3.0.

...ant

On Fri, May 7, 2010 at 12:09 AM, Paul Querna p...@querna.org wrote:

Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/

Please test and cast your votes; +/- 1

[ ] Release Apache Libcloud 0.3.1

Vote closes on Monday, May 10, 2010 at 1pm PST.

This release fixes several issues related to the license blocks, NOTICE file, and test cases that were noticed in the scrubbed 0.3.0 release. It is based upon this tag: https://svn.apache.org/repos/asf/incubator/libcloud/tags/0.3.1

Thanks, Paul

- To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org For additional commands, e-mail: general-h...@incubator.apache.org
Re: [VOTE] Release Apache Empire-db 2.0.6-incubating (rc4)
+1

...ant

On Wed, May 5, 2010 at 8:17 PM, Martijn Dashorst martijn.dasho...@gmail.com wrote:

+1 (again). Could someone from the IPMC take a look at the release? The team is eager to release it, but lacks 2 binding +1 votes.

Martijn

On Wed, May 5, 2010 at 4:22 PM, Rainer Döbele doeb...@esteam.de wrote:

+1 Rainer

Francis De Brabandere wrote: re: [VOTE] Release Apache Empire-db 2.0.6-incubating (rc4)

Hi,

We have just prepared the fourth 2.0.6-incubating release candidate and are now looking for approval from the IPMC to publish the release.

PMC vote thread: http://mail-archives.apache.org/mod_mbox/incubator-empire-db-dev/201004.mbox/browser

We already have one binding vote. These are the major changes from our previous 2.0.5-incubating release:

- The code generator allows generation of data-model code files for existing databases
- Maven plugin for the DB code generator
- New example to demonstrate interoperability between Empire-db and Spring
- Provided jars are now OSGi compatible

Changelog: http://svn.apache.org/viewvc/incubator/empire-db/tags/apache-empire-db-2.0.6-incubating-rc4/CHANGELOG.txt?view=co

Subversion tag: https://svn.apache.org/repos/asf/incubator/empire-db/tags/apache-empire-db-2.0.6-incubating-rc4

Maven staging repository: https://repository.apache.org/content/repositories/orgapacheempire-db-020/

Distribution files are located here: http://people.apache.org/~francisdb/empire-db/

Rat report for the tag is available here: http://people.apache.org/~francisdb/empire-db/rat.txt

Vote open for 72 hours.

[ ] +1
[ ] +0
[ ] -1

-- http://www.somatik.be Microsoft gives you windows, Linux gives you the whole house.
Open source communities survey
Hello,

I am currently a student in the Master 2 program in innovation engineering in Grenoble (France), conducting a study on the communities that develop around open-source software, in collaboration with the INRIA institute (http://www.inria.fr/). Could you help me by answering the following questions?

How does Apache assess its communities? What criteria are taken into account? When a community wants to join your foundation, what criteria must it meet, and why? How does Apache select the communities it hosts?

Thank you for your help.

Best regards, Emmanuel Neckebroeck
Re: [VOTE] Release Apache Libcloud 0.3.1
On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/

Where is the KEYS file?

The directory structures of the bz2 and zip archives are different: the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in each archive. The bz2 archive needs to be corrected.

The XML files under fixtures should have AL headers. As far as I can tell, adding such comments does not affect the test cases.

-1

Minor problem: there is no mention of the dependency on zope.
Re: [VOTE] Accept Whirr for Incubation
+1 Carl.

On Wed, May 5, 2010 at 2:06 PM, Tom White tomwh...@apache.org wrote:

We've added three mentors since starting the proposal thread, so I would like to start the vote to accept Whirr into the Apache Incubator. The proposal is included below and is also at: http://wiki.apache.org/incubator/WhirrProposal

Please cast your votes:

[ ] +1 Accept Whirr for incubation
[ ] +0 Don't care
[ ] -1 Reject for the following reason:

Thanks, Tom

= Whirr, a library of cloud services =

== Abstract ==

Whirr will be a set of libraries for running cloud services.

== Proposal ==

Whirr will provide code for running a variety of software services on cloud infrastructure. It will provide bindings in several languages (e.g. Python and Java) for popular cloud providers, to make it easy to start and stop services like Hadoop clusters. The project will not be limited to a particular set of services; rather, it is expected that a range of services will be developed, as determined by the project contributors. Possible services include Hadoop, HBase, ZooKeeper, and Cassandra.

== Background ==

The ability to run services on cloud providers is very useful, particularly for proofs of concept, testing, and ad hoc production work. Bringing up clusters in the cloud is non-trivial, since careful choreography is required. (Designing an interface that is convenient as well as secure is also a challenge in a cloud context.) Making services that run on a variety of cloud providers is harder still, even with the availability of libraries like libcloud and jclouds, since each platform's quirks and extra features must be considered (and either worked around or, where appropriate, taken advantage of). Whirr will facilitate sharing of best practices, both for a particular service (such as Hadoop configuration on a particular provider) and for common cloud operations (such as installation of dependencies across cloud providers).
It will provide a space to share good configurations, and it will encode service-specific knowledge.

== Rationale ==

There are already scripts in the Hadoop project that allow users to run Hadoop clusters on Amazon EC2 and other cloud providers. While users have found these scripts useful, their current home as a Hadoop Common contrib project has the following limitations:

* Tying the scripts' release cycle to Hadoop's makes it difficult to distribute updates to the scripts, which are changing fast (new features and bugfixes).
* The scripts support multiple versions of Hadoop, so it makes more sense to distribute them separately from Hadoop itself.
* They are general: people want to contribute code for non-Hadoop services like Cassandra (for example: http://github.com/johanoskarsson/cassandra-ec2).
* Having a uniform approach to running services in the cloud, hosted in one project, makes launching sets of complementary services easier for the user. Today, the scripts and libraries hosted within each project (e.g. in Hadoop, HBase, Cassandra) have slightly different conventions and semantics, and are likely to diverge over time. Building a community around cloud infrastructure services will help enforce a common approach to running services in the cloud.

== Initial Goals ==

* Provide a new home for the existing Hadoop cloud scripts.
* Add more services (e.g. HBase).
* Develop Java libraries for Hadoop clusters.
* Add new cloud providers by taking advantage of libcloud and jclouds.
* (Future) Run on users' own hardware, so they can use the same interface to control services running locally or in the cloud.

== Current Status ==

=== Meritocracy ===

The Hadoop scripts were originally created by Tom White, and have had a substantial number of contributions from members of the Hadoop community. By becoming its own project, Whirr would make its significant contributors committers, allowing the project to grow.
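The "careful choreography" mentioned in the Background section is the crux of what a project like this encodes: service roles must come up in dependency order. A minimal, provider-neutral sketch of that idea follows; Whirr's real API did not exist at the time of this proposal, so all names and structure here are hypothetical, and the dependency graph is assumed to be acyclic:

```python
# Hypothetical sketch of the kind of service choreography Whirr proposes:
# a service is a set of roles with launch dependencies, and the launcher
# resolves a start order. Names and structure are illustrative only.

from collections import OrderedDict

class ServiceRecipe:
    def __init__(self, name):
        self.name = name
        self.roles = OrderedDict()   # role -> roles it depends on

    def add_role(self, role, depends_on=()):
        self.roles[role] = list(depends_on)
        return self

    def launch_order(self):
        """Topologically sort roles so dependencies start first
        (assumes the dependency graph is acyclic)."""
        order, seen = [], set()
        def visit(role):
            if role in seen:
                return
            seen.add(role)
            for dep in self.roles.get(role, []):
                visit(dep)
            order.append(role)
        for role in self.roles:
            visit(role)
        return order

# Hadoop's roles, roughly as the EC2 contrib scripts bring them up.
hadoop = (ServiceRecipe("hadoop")
          .add_role("namenode")
          .add_role("jobtracker", depends_on=["namenode"])
          .add_role("datanode", depends_on=["namenode"])
          .add_role("tasktracker", depends_on=["jobtracker", "datanode"]))

print(hadoop.launch_order())
# ['namenode', 'jobtracker', 'datanode', 'tasktracker']
```

Keeping recipes like this in one project, rather than one set of ad hoc scripts per service, is precisely the uniformity argument the Rationale makes.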
=== Community ===

The community interested in cloud service infrastructure is currently spread across many smaller projects, and one of the main goals of this project is to build a vibrant community to share best practices and build common infrastructure. For example, this project would provide a home to facilitate collaboration between the groups of Hadoop and HBase developers who are building cloud services.

=== Core developers ===

Tom White wrote most of the original code and is familiar with open source and Apache-style development, being a Hadoop committer and an ASF member. A number of contributors have provided patches to these scripts over time. Andrew Purtell, who created the HBase cloud scripts, is an HBase committer. Johan Oskarsson (Hadoop and Cassandra committer) ported the scripts to Cassandra.

=== Alignment ===

Whirr complements libcloud, currently in the Incubator. Libcloud provides multi-cloud provider support, while Whirr will provide multi-service support in the cloud. Whirr will build cloud components for several Apache projects, such
Re: [PROPOSAL] Deltacloud Project
On 05/07/2010 02:33 AM, Gurkan Erdogdu wrote: Hi; +1; I am happy to help during incubation process; Thanks; --Gurkan

Gurkan,

Is this an offer to help mentor the project, or just to help out as needed? Reason for asking is whether you would like to be listed on the proposal, or are just voicing support, which is greatly appreciated.

regards, Carl.
Re: [PROPOSAL] Deltacloud Project
Carl, if you need additional mentors, count me in.

thanks, dims

On Fri, May 7, 2010 at 9:18 AM, Carl Trieloff cctriel...@redhat.com wrote: [Carl's reply to Gurkan quoted, as above]

-- Davanum Srinivas :: http://davanum.wordpress.com
Re: [VOTE] Accept Whirr for Incubation
+1 for Incubation

On May 5, 2010, at 11:06 AM, Tom White wrote: [Whirr proposal quoted in full, as above]
Zeta proposal - Next steps
Hi,

We gathered three mentors through our call for mentors here on the list and through private contact with Apache members. These are:

- Christian Grobmeier (not an ASF member)
- Julien Vermillard (ASF member)
- Craig L Russell (ASF member)

Is there a need for more mentors (e.g. an especially experienced one)? If not, what would be the next step? Is it calling for votes here on the list? If yes, who is in charge of this, one of the project members or a mentor?

Thanks in advance for your support,

regards, Toby

-- Tobias Schlitt tob...@schlitt.info GPG Key: 0xC462BC14 a passion for php http://schlitt.info/opensource eZ Components are Zeta Components now! http://bit.ly/9S7zbn
Re: Zeta proposal - Next steps
On 05/07/2010 04:35 PM, Tobias Schlitt wrote: Hi, we gathered 3 mentors by our call for mentors here on the list and private contact to Apache members. These are: - Christian Grobmeier (not ASF member)

Sorry, according to http://incubator.apache.org/incubation/Incubation_Policy.html#Mentor he should be in the Incubator PMC.

Cheers, Jean-Frederic
Re: [PROPOSAL] Deltacloud Project
On 05/07/2010 10:21 AM, Davanum Srinivas wrote: Carl, if you need additional mentors, count me in. thanks, dims

Dims,

That would be great.

Carl.
Re: Open source communities survey
Hi,

On Fri, May 7, 2010 at 10:57 AM, Emmanuel NECKEBROECK emmanuel.neckebro...@gmail.com wrote: ...How Apache assess its communities? what are the criteria that they take in account? When a community wants to take part in your foundation, what are the criteria that it has to have? Why? How does Apache select the hosted community?...

http://incubator.apache.org/incubation/Process_Description.html and the rest of the incubator.apache.org website should answer most of those questions.

-Bertrand
Re: [PROPOSAL] Deltacloud Project
The project looks very interesting to me, so I would like to contribute too.

Deepal

[earlier thread quoted, as above]

-- If we knew what it was we were doing, it would not be called research, would it? - Albert Einstein http://blogs.deepal.org http://deepal.org
Re: Zeta proposal - Next steps
Hi Jean-Frederic,

On 05/07/2010 05:05 PM, jean-frederic clere wrote: Sorry according to http://incubator.apache.org/incubation/Incubation_Policy.html#Mentor he should be in the Incubator PMC.

Ah, sorry, I misread that paragraph. I thought only one mentor must be a member of the IPMC.

Craig is already a member of the IPMC and is mentoring several projects, so he is the wise guy in our mentor team. Julien has applied to become a member of the IPMC, so we will need to wait for that to happen; he is also an ASF member. Christian is not an ASF member, but has also applied to become an IPMC member.

So we need to wait and see how their applications proceed. Is there anything we, as the proposing project, can do in the meantime? Anything else to prepare?

Thanks for your advice,

regards, Toby

-- Tobias Schlitt tob...@schlitt.info GPG Key: 0xC462BC14 a passion for php http://schlitt.info/opensource eZ Components are Zeta Components now! http://bit.ly/9S7zbn
Re: [PROPOSAL] Deltacloud Project
On Fri, 2010-05-07 at 10:21 -0400, Davanum Srinivas wrote: if you need additional mentors, count me in.

Great. I added you to the proposal.

David
Re: [PROPOSAL] Deltacloud Project
On Fri, 2010-05-07 at 11:51 -0400, Deepal jayasinghe wrote: Project look very interesting to me, so I would like to contribute too.

Just added you to the proposal.

David
Re: [PROPOSAL] Deltacloud Project
On Thu, 2010-05-06 at 22:15 -0700, Matt Hogstrom wrote: +1 You can add me as a committer as well ...

Done.

David
Re: [VOTE][PROPOSAL] Amber incubator
+1 (not yet binding?)

thanks, david jencks

On May 4, 2010, at 3:48 PM, Simone Gianni wrote:

I would like to present for a vote the following proposal, to be sponsored by the Shindig PMC, for a new Amber podling. The goal is to build a community around delivering an OAuth v1.0, v1.0a, and upcoming v2.0 API and implementation.

The proposal is available on the wiki at http://wiki.apache.org/incubator/AmberProposal and included below:

[ ] +1 to accept Amber into the Incubator
[ ] 0 don't care
[ ] -1 object, and reason why

Thanks, Simone Gianni

--- Proposal text from the wiki ---

= Amber =

== Abstract ==

The following proposal is about Apache Amber, a Java development framework aimed mainly at building OAuth-aware applications. After a brief explanation of the OAuth protocol, the proposal describes how Apache Amber solves issues related to implementing applications that adhere to that specification.

== Proposal ==

Amber will have no or negligible dependencies, and will provide both an API specification for, and an unconditionally compliant implementation of, the OAuth v1.0, v1.0a, and v2.0 specifications. The API specification will be provided as a separate JAR file, allowing re-use by other developers, and permits configuration:

* by XML
* by the Java JAR Services ServiceLoader mechanism
* programmatically

The API component specifies that an implementation must provide default classes for Provider, Consumer, and Token objects, making Amber easy to integrate with existing infrastructure and making OAuth client interactions possible with virtually no additional configuration. The API is flexible enough to allow programmatic customisation or replacement of much of the implementation, including the default HTTP transport. Amber will provide both client and server functionality, enabling developers to deploy robust OAuth services with minimal effort.
== Background ==

Roughly, OAuth is a mechanism that allows users to share their private resources (photos, videos, contacts) stored on one site with another site, without giving away their username and password credentials. From the user's point of view, OAuth can improve their experience across different applications, with enhanced privacy and security control, in a simple and standard way for both desktop and web applications. The protocol was initially developed by the oauth.net community and is now undergoing the IETF standardization process.

The main idea behind OAuth is the token concept. Each token grants access to a site, for a specific resource (or a group of resources), and for a precise time interval. The user is only required to authenticate with the Provider of their original account, after which that entity provides a re-usable token to the Consumer, who can use it to access resources at the Provider on the user's behalf. Moreover, the total transparency to the user, who is completely unaware of using the protocol, is one of the most valuable characteristics of the specification.

The Apache Amber community aims not just to create a simple low-level library, but rather to provide a complete OAuth framework that is easy to use from Java code, on top of which users can build new-generation killer applications.

There are currently three implementation efforts going on in the ASF for OAuth v1. A stable implementation of OAuth v1 is present in Apache Shindig, but it is not actively developed and not shared with other projects. A Lab with Simone Tripodi as its PI is working on an implementation of an OAuth library that could be used by other products. Zhihong Zhang wrote an OAuth plugin for JMeter. At the same time, on the IETF OAuth v2 mailing list, other people have expressed interest in a Java API and implementation, among them two Apache committers and one active contributor.
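The signing mechanics behind the token concept are simple enough to sketch with a standard library alone. Under OAuth 1.0a's HMAC-SHA1 method, the client builds a signature base string from the request and signs it with the concatenated consumer and token secrets. The sketch below is in Python rather than Java (Amber's language) for brevity; the keys and parameter values are illustrative, loosely modelled on the worked example in the OAuth 1.0 specification:

```python
# Sketch of OAuth 1.0a request signing with the HMAC-SHA1 method.
# All secrets and parameter values below are made up for illustration.

import base64
import hashlib
import hmac
from urllib.parse import quote

def oauth_signature(method, url, params, consumer_secret, token_secret=""):
    # 1. Percent-encode keys and values with OAuth's strict encoding
    #    (only unreserved characters A-Z a-z 0-9 - . _ ~ are left alone),
    #    then sort the pairs and join them into a single parameter string.
    enc = lambda s: quote(str(s), safe="")
    pairs = sorted((enc(k), enc(v)) for k, v in params.items())
    param_string = "&".join(f"{k}={v}" for k, v in pairs)
    # 2. The signature base string is METHOD & encoded-URL & encoded-params.
    base_string = "&".join([method.upper(), enc(url), enc(param_string)])
    # 3. The signing key concatenates the two secrets; token_secret is
    #    empty before the token exchange has happened.
    key = f"{enc(consumer_secret)}&{enc(token_secret)}"
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

sig = oauth_signature(
    "GET", "http://photos.example.net/photos",
    {"oauth_consumer_key": "dpf43f3p2l4k3l03",
     "oauth_token": "nnch734d00sl2jdk",
     "oauth_nonce": "kllo9940pd9333jh",
     "oauth_timestamp": "1191242096",
     "oauth_signature_method": "HMAC-SHA1",
     "oauth_version": "1.0",
     "file": "vacation.jpg", "size": "original"},
    consumer_secret="kd94hf93k423kf44", token_secret="pfkkdhi9sl3r4s00")
print(sig)
```

A framework like the one proposed wraps exactly this kind of detail (encoding rules, parameter sorting, key construction) so that application code never touches it.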
Outside the ASF there are several known Java OAuth 1.0/1.0a libraries:

* The oauth.net reference implementation, by John Kristian, Praveen Alavilli, and Dirk Balfanz.
* OAuth SignPost, a simple OAuth message-signing client for Java and Apache HttpComponents, by Matthias Kaeppler.
* OAuth Scribe, a simple OAuth client, by Pablo Fernandez.
* asmx-oauth (on Google Code), a complete open-source OAuth 1.0 Consumer and Service Provider implementation provided by Asemantics Srl (Simone Tripodi was involved).

== Rationale ==

The key role played by the OAuth specification within the overall Open Stack technologies, together with its high degree of adoption and maturity, strongly suggests an Apache-led incubator project for a suitable reference implementation. Furthermore, the OAuth specification is currently gaining value due to its standardization process within the IETF, as an actual Internet draft. Having Apache Amber in the Apache Incubator could be an opportunity to strengthen the Apache projects that already reference other
Re: [PROPOSAL] Deltacloud Project
+1. I too would like to contribute to this project. Thanks, Senaka. On Fri, May 7, 2010 at 1:11 AM, David Lutterkort lut...@redhat.com wrote: Hi, I would like to propose the Deltacloud API[1] for addition to the Apache Incubator. I have added the initial proposal to the Wiki[2]; it is also included below for convenience. There are a few additional people who have expressed interest in becoming initial committers; I am waiting for their express consent to list them as committers, and will add them to the Wiki as I get it. We are looking forward to any and all feedback and/or questions on the proposal. We already have two mentors, but would very much welcome additional volunteers to help steer Deltacloud through the incubation process. David

[1] http://deltacloud.org/
[2] http://wiki.apache.org/incubator/DeltacloudProposal

Deltacloud, a cross-cloud web service API

Abstract

Deltacloud defines a web service API for interacting with cloud service providers and resources in those clouds in a unified manner. In addition, it consists of a number of implementations of this API for the most popular clouds.
Proposal

* Define a REST-based API for managing and manipulating cloud resources in a manner that isolates the API client as much as possible from the particulars of specific cloud APIs
* Provide an open API definition for cloud providers for their IaaS clouds, and a basis on which PaaS providers can layer their offerings
* Provide image management and directory capabilities as part of the API
* The current implementation allows instance lifecycle management (create, start, stop, destroy, reboot) and querying of related resources like available images, instance sizes, and allowed instance actions for a number of public and private clouds
* Currently supported are Amazon EC2, Eucalyptus, Rackspace, RimuHosting, GoGrid, OpenNebula, and RHEV-M
* Future enhancements should broaden the scope of the API to include networking, firewalling, authentication, accounting, and image management

Background

An important issue for cloud users is avoiding lock-in to a specific cloud. By providing a cross-cloud API for infrastructure-as-a-service (IaaS) clouds, Deltacloud addresses this concern and strives to provide the best possible API for writing cloud-management applications that can target multiple clouds. There are also currently no efforts to define a truly open-source cloud API, one with a proper upstream that is independent of any specific cloud provider. The Deltacloud API strives to create a community around building an open-source cloud API in a manner that fully allows for tried-and-true open source mechanisms such as user-driven innovation. By providing a web-service API, Deltacloud is language agnostic, and one of its subordinate goals is to provide a practical vocabulary for talking about IaaS cloud resources and operations on them.

Rationale

IaaS clouds provide numerous advantages to their users, for example, making provisioning new servers more agile.
If users directly use the 'native' cloud APIs, they risk locking themselves in to the API of a specific cloud provider. There is therefore a strong need for an API that can be used across a wide range of public and private clouds, and that can serve as the basis for developing cloud management applications; in contrast to several existing language-specific efforts in this direction, Deltacloud is conceived as a web service. This will allow the project to attract a broad community of users of the API and of cloud providers interested in offering a truly open-source API with a proper upstream community. We strongly believe that the best way to drive such an API effort is by developing the API and open-source implementations of it side by side.

Initial Goals

Deltacloud is an existing open source project; initially started by Red Hat, it has attracted a number of outside contributors. We look at moving this project to the ASF as the next step to broaden the community and put the project on a solid footing, since the ASF governance model is well suited to the Deltacloud project's goals. The ASF is a great location for Deltacloud to build a community, and the project will benefit from ASL licensing.

Current Status

Deltacloud API is currently licensed under the LGPL.

* Deltacloud website (http://deltacloud.org) - there are two projects hosted there: the API under consideration here and the Aggregator (not part of this proposal, though also open source)
* Deltacloud git repository (http://git.fedorahosted.org/git/?p=deltacloud/core.git;a=summary)
* Deltacloud mailing lists
  - users (https://fedorahosted.org/mailman/listinfo/deltacloud-users)
  - developers (https://fedorahosted.org/mailman/listinfo/deltacloud-devel)
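[Editorial note: the uniform instance-lifecycle surface the proposal describes can be made concrete with a tiny sketch. The paths, parameter names, and function below are invented for illustration and are not Deltacloud's actual routes; the point is only that a client maps cloud-agnostic actions to one REST surface instead of each provider's native API.]

```python
# Hypothetical illustration of a cross-cloud instance-lifecycle API.
# All routes and names are invented for this sketch.
BASE = "/api/instances"

def lifecycle_request(action, instance_id=None, image_id=None):
    """Map a cloud-agnostic lifecycle action to an HTTP method and path,
    so the client never speaks a provider's native API directly."""
    if action == "create":
        return ("POST", BASE, {"image_id": image_id})
    if action == "destroy":
        return ("DELETE", f"{BASE}/{instance_id}", None)
    if action in ("start", "stop", "reboot"):
        return ("POST", f"{BASE}/{instance_id}/{action}", None)
    raise ValueError(f"unsupported action: {action}")
```

A server implementing this surface would translate each request into the corresponding EC2, Rackspace, or RHEV-M call behind the scenes.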
Re: [PROPOSAL] Deltacloud Project
On Fri, 2010-05-07 at 22:14 +0530, Senaka Fernando wrote: +1. I too would like to contribute to this project. Thanks. Just added you to initial committers on the proposal. David - To unsubscribe, e-mail: general-unsubscr...@incubator.apache.org For additional commands, e-mail: general-h...@incubator.apache.org
Re: Zeta proposal - Next steps
Sorry, according to http://incubator.apache.org/incubation/Incubation_Policy.html#Mentor he should be in the Incubator PMC. Ah, sorry, I misread that paragraph. I thought only one mentor must be a member of the IPMC. Christian is not an ASF member, but has also applied to become an IPMC member. I have no feedback so far from the IPMC. If this doesn't change I cannot help with mentoring - but I will enjoy helping on the mailing list and such.
Re: [VOTE][PROPOSAL] Amber incubator
+1 (non-binding) On Fri, May 7, 2010 at 9:42 AM, David Jencks david_jen...@yahoo.com wrote: +1 (not yet binding?) thanks david jencks On May 4, 2010, at 3:48 PM, Simone Gianni wrote: I would like to present for a vote the following proposal, to be sponsored by the Shindig PMC, for a new Amber podling. The goal is to build a community around delivering an OAuth v1.0, v1.0a and upcoming v2.0 API and implementation. The proposal is available on the wiki at http://wiki.apache.org/incubator/AmberProposal and is included below:

[ ] +1 to accept Amber into the Incubator
[ ] 0 don't care
[ ] -1 object, and reason why

Thanks, Simone Gianni

--- Proposal text from the wiki ---

= Amber =

== Abstract ==

The following proposal is about Apache Amber, a Java development framework mainly aimed at building OAuth-aware applications. After a brief explanation of the OAuth protocol, the proposal describes how Apache Amber solves issues related to the implementation of applications that adhere to that specification.

== Proposal ==

Amber will have no or negligible dependencies and will provide both an API specification for, and an unconditionally compliant implementation of, the OAuth v1.0, v1.0a and v2.0 specifications. The API specification will be provided as a separate JAR file, allowing re-use by other developers, and permits configuration:

* by XML
* by the Java JAR Services ServiceLoader mechanism
* programmatically

The API component specifies that an implementation must provide default classes for Provider, Consumer and Token objects, making Amber easy to integrate with existing infrastructure and OAuth client interactions possible with virtually no additional configuration. The API is flexible enough to allow programmatic customisation or replacement of much of the implementation, including the default HTTP transport. Amber will provide both client and server functionality, enabling developers to deploy robust OAuth services with minimal effort.
== Background ==

Roughly, OAuth is a mechanism that allows users to share their private resources, like photos, videos, or contacts, stored on one site with another site without giving away their username and password credentials. Hence, from the user's point of view, OAuth can improve their experience across different applications, with enhanced privacy and security control, in a simple and standard way, from both desktop and web applications. The protocol was initially developed by the oauth.net community and is now undergoing the IETF standardization process.

The main idea behind OAuth is the token concept. Each token grants access to a site, for a specific resource (or a group of resources), and for a precise time interval. The user is only required to authenticate with the Provider of their original account, after which that entity provides a re-usable token to the Consumer, who can use it to access resources at the Provider on the user's behalf. Moreover, this transparency to the user, who is completely unaware of using the protocol, is one of the most valuable characteristics of the specification.

The Apache Amber community aims not just to create a simple low-level library, but rather to provide a complete OAuth framework that is easy to use from Java code, on top of which users can build new-generation killer applications.

There are currently three implementation efforts going on in the ASF for OAuth v1. A stable implementation of OAuth v1 is present in Apache Shindig, but it is not actively developed and not shared with other projects. A Lab with Simone Tripodi as its PI is working on an OAuth library that could be used by other products. Zhihong Zhang wrote an OAuth plugin for JMeter. At the same time, on the IETF OAuth v2 mailing list, other people have expressed interest in a Java API and implementation, among them two Apache committers and one active contributor.
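[Editorial note: the signing step at the heart of the OAuth 1.0/1.0a protocol described above can be sketched in a few lines. This is not Amber's API (which did not yet exist at proposal time); it is a minimal Python illustration of the HMAC-SHA1 signature base string construction from the spec, simplified to ignore query-string parameters, duplicate parameter names, and URL normalization.]

```python
import base64
import hashlib
import hmac
import urllib.parse

def _enc(s):
    # RFC 3986 percent-encoding; the unreserved set (letters, digits,
    # "-", ".", "_", "~") is left unescaped, as OAuth requires
    return urllib.parse.quote(s, safe="")

def signature_base_string(method, url, params):
    # Normalize parameters: percent-encode, sort, join as key=value pairs,
    # then concatenate method, URL, and parameter string with "&"
    pairs = sorted((_enc(k), _enc(v)) for k, v in params.items())
    normalized = "&".join(f"{k}={v}" for k, v in pairs)
    return "&".join([method.upper(), _enc(url), _enc(normalized)])

def hmac_sha1_signature(base_string, consumer_secret, token_secret=""):
    # The signing key is the two secrets, each encoded, joined by "&";
    # the token secret may be empty before a token has been issued
    key = f"{_enc(consumer_secret)}&{_enc(token_secret)}".encode()
    digest = hmac.new(key, base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()
```

A Consumer computes this signature over each request and the Provider recomputes it to verify that the caller holds the secrets without ever transmitting them.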
Outside the ASF there are four known Java OAuth 1.0/1.0a libraries:

* The oauth.net reference implementation by John Kristian, Praveen Alavilli and Dirk Balfanz.
* OAuth SignPost - a simple OAuth message signing client for Java and Apache HttpComponents by Matthias Kaeppler.
* OAuth Scribe - a simple OAuth client by Pablo Fernandez.
* asmx-oauth (on Google Code) - a complete open source OAuth 1.0 Consumer and Service Provider implementation provided by Asemantics Srl (Simone Tripodi was involved).

== Rationale ==

The key role played by the OAuth specification within the overall Open Stack technologies, together with its high degree of adoption and maturity, strongly suggests an Apache-led incubator for a suitable reference implementation. Furthermore, the OAuth specification is currently gaining value due to its involvement in a
Re: [VOTE] Accept Whirr for Incubation
+1 Doug On 05/05/2010 11:06 AM, Tom White wrote: We've added three mentors since starting the proposal thread, so I would like to start the vote to accept Whirr into the Apache Incubator. The proposal is included below and is also at: http://wiki.apache.org/incubator/WhirrProposal

Please cast your votes:
[ ] +1 Accept Whirr for incubation
[ ] +0 Don't care
[ ] -1 Reject for the following reason:

Thanks, Tom

= Whirr, a library of cloud services =

== Abstract ==

Whirr will be a set of libraries for running cloud services.

== Proposal ==

Whirr will provide code for running a variety of software services on cloud infrastructure. It will provide bindings in several languages (e.g. Python and Java) for popular cloud providers to make it easy to start and stop services like Hadoop clusters. The project will not be limited to a particular set of services; rather, it is expected that a range of services will be developed, as determined by the project contributors. Possible services include Hadoop, HBase, !ZooKeeper, and Cassandra.

== Background ==

The ability to run services on cloud providers is very useful, particularly for proofs of concept, testing, and also ad hoc production work. Bringing up clusters in the cloud is non-trivial, since careful choreography is required. (Designing an interface that is convenient as well as secure is also a challenge in a cloud context.) Making services that run on a variety of cloud providers is harder, even with the availability of libraries like libcloud and jclouds, since each platform's quirks and extra features must be considered (and either worked around, or possibly taken advantage of, as appropriate). Whirr will facilitate sharing of best practices, both for a particular service (such as Hadoop configuration on a particular provider) and for common cloud operations (such as installation of dependencies across cloud providers). It will provide a space to share good configurations and will encode service-specific knowledge.
== Rationale ==

There are already scripts in the Hadoop project that allow users to run Hadoop clusters on Amazon EC2 and other cloud providers. While users have found these scripts useful, their current home as a Hadoop Common contrib project has the following limitations:

* Tying the scripts' release cycle to Hadoop's makes it difficult to distribute updates to scripts which are changing fast (new features and bugfixes).
* The scripts support multiple versions of Hadoop, so it makes more sense to distribute them separately from Hadoop itself.
* They are general: people want to contribute code for non-Hadoop services like Cassandra (for example: http://github.com/johanoskarsson/cassandra-ec2).
* Having a uniform approach to running services in the cloud, hosted in one project, makes launching sets of complementary services easier for the user. Today, the scripts and libraries hosted within each project (e.g. in Hadoop, HBase, Cassandra) have slightly different conventions and semantics, and are likely to diverge over time. Building a community around cloud infrastructure services will help enforce a common approach to running services in the cloud.

== Initial Goals ==

* Provide a new home for the existing Hadoop cloud scripts.
* Add more services (e.g. HBase).
* Develop Java libraries for Hadoop clusters.
* Add new cloud providers by taking advantage of libcloud and jclouds.
* (Future) Run on users' own hardware, so they can take advantage of the same interface to control services running locally or in the cloud.

== Current Status ==

=== Meritocracy ===

The Hadoop scripts were originally created by Tom White, and have had a substantial number of contributions from members of the Hadoop community. By becoming its own project, significant contributors to Whirr would become committers, allowing the project to grow.
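[Editorial note: the division of labour the Rationale describes (generic provider bindings underneath, service-specific knowledge encoded on top) can be sketched in a few lines. All class and method names below are invented for illustration; they are not Whirr's actual API.]

```python
# Hypothetical sketch: provider bindings boot raw nodes, while a service
# class encodes the choreography a cluster needs. Names are invented.
class CloudProvider:
    """Stand-in for a provider binding, e.g. one built on libcloud/jclouds."""
    def __init__(self, name):
        self.name = name

    def boot(self, role, count):
        # A real binding would call the provider's API; we just mint ids.
        return [f"{self.name}-{role}-{i}" for i in range(count)]

class HadoopService:
    """Encodes the service-specific knowledge: one namenode comes up
    first, then the datanodes, regardless of which provider boots them."""
    def launch(self, provider, workers):
        master = provider.boot("namenode", 1)[0]
        slaves = provider.boot("datanode", workers)
        return {"master": master, "workers": slaves}
```

The same `HadoopService` would run unchanged against any `CloudProvider`, which is exactly the kind of sharing across providers the proposal argues for.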
=== Community ===

The community interested in cloud service infrastructure is currently spread across many smaller projects, and one of the main goals of this project is to build a vibrant community to share best practices and build common infrastructure. For example, this project would provide a home to facilitate collaboration between the groups of Hadoop and HBase developers who are building cloud services.

=== Core developers ===

Tom White wrote most of the original code and is familiar with open source and Apache-style development, being a Hadoop committer and an ASF member. There have been a number of contributors who have provided patches to these scripts over time. Andrew Purtell, who created the HBase cloud scripts, is an HBase committer. Johan Oskarsson (Hadoop and Cassandra committer) ported the scripts to Cassandra.

=== Alignment ===

Whirr complements libcloud, currently in the Incubator. Libcloud provides multi-cloud provider support, while Whirr will provide multi-service support in the cloud. Whirr will build cloud components for several Apache projects, such as Hadoop, HBase, !ZooKeeper,
Re: [PROPOSAL] Deltacloud Project
I would like to mentor but I am not an IPMC member. But if possible I would like to contribute, and you could add me as an initial committer. Thanks; --Gurkan From: Carl Trieloff cctriel...@redhat.com To: general@incubator.apache.org Sent: Fri, May 7, 2010 4:18:02 PM Subject: Re: [PROPOSAL] Deltacloud Project On 05/07/2010 02:33 AM, Gurkan Erdogdu wrote: Hi; +1; I am happy to help during the incubation process; Thanks; --Gurkan Gurkan, Is this an offer to help mentor the project, or just to help out as needed? The reason for asking is whether you would like to be represented on the proposal, or are just voicing support, which is greatly appreciated. regards, Carl.
Re: [VOTE] Release Apache Libcloud 0.3.1
On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. The XML files under fixtures should have AL headers. As far as I can tell, adding such comments does not affect the test cases. We are not adding license blocks to test case fixtures that are response bodies from APIs. -1 Minor problem: There is no mention of the dependency on zope. Please test and place your votes; +/- 1 [ ] Release Apache Libcloud 0.3.1 Vote closes on Monday May 10, 2010 at 1pm PST. This release fixes several issues related to the license blocks, NOTICE file, and test cases that were noticed in the scrubbed 0.3.0 release. It is based upon this tag: https://svn.apache.org/repos/asf/incubator/libcloud/tags/0.3.1 Thanks, Paul
Re: [PROPOSAL] Deltacloud Project
On Fri, 2010-05-07 at 13:04 -0700, Gurkan Erdogdu wrote: I would like to mentor but I am not an IPMC member. But if possible I would like to contribute and you could add me as an initial committer. Glad to have you. I just added you to the list of initial committers on the proposal page. David
Re: [VOTE] Release Apache Libcloud 0.3.1
On 07/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. It's not the order that is the problem - the directory structure is different. The file is in a different directory in the two archives. The XML files under fixtures should have AL headers. As far as I can tell, adding such comments does not affect the test cases. We are not adding license blocks to test case fixtures that are response bodies from APIs. -1 Minor problem: There is no mention of the dependency on zope. Please test and place your votes; +/- 1 [ ] Release Apache Libcloud 0.3.1 Vote closes on Monday May 10, 2010 at 1pm PST. This release fixes several issues related to the license blocks, NOTICE file, and test cases that were noticed in the scrubbed 0.3.0 release.
It is based upon this tag: https://svn.apache.org/repos/asf/incubator/libcloud/tags/0.3.1 Thanks, Paul
Re: [VOTE] Release Apache Libcloud 0.3.1
On Fri, May 7, 2010 at 5:33 PM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. It's not the order that is the problem - the directory structure is different. The file is in a different directory in the two archives. $ find . -name r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./tar/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./zip/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json I am unable to reproduce this problem on OS X using the command-line tar and unzip tools. How are you extracting the tarball/zip file? Thanks, Paul
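[Editorial note: the check being performed in this exchange (does every file land at the same relative path in the tarball and the zip?) is easy to automate with the Python standard library; a small sketch:]

```python
import io
import tarfile
import zipfile

def tar_file_paths(data):
    # Collect the relative paths of regular files in a tar.bz2 archive
    with tarfile.open(fileobj=io.BytesIO(data), mode="r:bz2") as tf:
        return {m.name for m in tf.getmembers() if m.isfile()}

def zip_file_paths(data):
    # Collect the relative paths of files in a zip archive
    with zipfile.ZipFile(io.BytesIO(data)) as zf:
        return {n for n in zf.namelist() if not n.endswith("/")}

def archives_match(tar_data, zip_data):
    """True when both archives place every file at the same relative path."""
    return tar_file_paths(tar_data) == zip_file_paths(zip_data)
```

Unlike an extract-and-diff by hand, this compares the member paths as recorded in the archives themselves, so it is independent of whatever quirks a particular extraction tool has.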
Re: [VOTE] Release Apache Libcloud 0.3.1
On 08/05/2010, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS That file says: "This file contains the PGP keys of various developers that work on the Apache HTTP Server and its subprojects." Copy-paste error? The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. It's not the order that is the problem - the directory structure is different. The file is in a different directory in the two archives. The XML files under fixtures should have AL headers. As far as I can tell, adding such comments does not affect the test cases. We are not adding license blocks to test case fixtures that are response bodies from APIs. -1 Minor problem: There is no mention of the dependency on zope. Please test and place your votes; +/- 1 [ ] Release Apache Libcloud 0.3.1 Vote closes on Monday May 10, 2010 at 1pm PST. This release fixes several issues related to the license blocks, NOTICE file, and test cases that were noticed in the scrubbed 0.3.0 release.
It is based upon this tag: https://svn.apache.org/repos/asf/incubator/libcloud/tags/0.3.1 Thanks, Paul
Re: [VOTE] Release Apache Libcloud 0.3.1
On 08/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 5:33 PM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. It's not the order that is the problem - the directory structure is different. The file is in a different directory in the two archives. $ find . -name r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./tar/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./zip/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json I am unable to reproduce this problem on OS X using the command-line tar and unzip tools. How are you extracting the tarball/zip file? Using an Ant script which uses: <bunzip2 src="${pathname}" dest="$(unknown)"/> <untar src="$(unknown)" dest="$(unknown)-bz2"/> <delete file="$(unknown)"/> with the appropriate settings. In the expanded bz2 archive, the file is in: test\fixtures\rimuhosting This is in parallel with apache-libcloud-0.3.1, under which all the other files appear.
whereas in the expanded zip archive, the file is in: apache-libcloud-0.3.1\test\fixtures\rimuhosting I don't know whether it is relevant, but the file name is significantly longer than any of the others. Thanks, Paul
Re: [VOTE] Release Apache Libcloud 0.3.1
On Fri, May 7, 2010 at 6:04 PM, sebb seb...@gmail.com wrote: On 08/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 5:33 PM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. It's not the order that is the problem - the directory structure is different. The file is in a different directory in the two archives. $ find . -name r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./tar/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./zip/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json I am unable to reproduce this problem on OS X using the command-line tar and unzip tools. How are you extracting the tarball/zip file? Using an Ant script which uses: <bunzip2 src="${pathname}" dest="$(unknown)"/> <untar src="$(unknown)" dest="$(unknown)-bz2"/> <delete file="$(unknown)"/> with the appropriate settings. In the expanded bz2 archive, the file is in: test\fixtures\rimuhosting This is in parallel with apache-libcloud-0.3.1, under which all the other files appear.
whereas in the expanded zip archive, the file is in: apache-libcloud-0.3.1\test\fixtures\rimuhosting I don't know whether it is relevant, but the file name is significantly longer than any of the others. It's a bug in Ant: https://issues.apache.org/bugzilla/show_bug.cgi?id=41924
Re: [VOTE] Release Apache Libcloud 0.3.1
On 08/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 6:04 PM, sebb seb...@gmail.com wrote: On 08/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 5:33 PM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: On Fri, May 7, 2010 at 3:48 AM, sebb seb...@gmail.com wrote: On 07/05/2010, Paul Querna p...@querna.org wrote: Test tarballs for Apache Libcloud 0.3.1 are available at: http://people.apache.org/~pquerna/libcloud-0.3.1/ Where is the KEYS file? http://www.apache.org/dist/incubator/libcloud/KEYS The directory structures of the bz2 and zip archives are different - the file r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json is in a different place in the archives. The bz2 archive needs to be corrected. I don't understand what or why this is a problem. We use Python's distutils to create both the zip file and the tar.bz2 from the same export of the 0.3.0 source. The order of a file in a tar stream compared to a zip shouldn't matter in any material way that I can think of. It's not the order that is the problem - the directory structure is different. The file is in a different directory in the two archives. $ find . -name r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./tar/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json ./zip/apache-libcloud-0.3.1/test/fixtures/rimuhosting/r_orders_order_88833465_api_ivan_net_nz_vps_running_state.json I am unable to reproduce this problem on OS X using the command-line tar and unzip tools. How are you extracting the tarball/zip file? Using an Ant script which uses: <bunzip2 src="${pathname}" dest="$(unknown)"/> <untar src="$(unknown)" dest="$(unknown)-bz2"/> <delete file="$(unknown)"/> with the appropriate settings. In the expanded bz2 archive, the file is in: test\fixtures\rimuhosting This is in parallel with apache-libcloud-0.3.1, under which all the other files appear.
whereas in the expanded zip archive, the file is in: apache-libcloud-0.3.1\test\fixtures\rimuhosting I don't know whether it is relevant, but the file name is significantly longer than any of the others. It's a bug in Ant: https://issues.apache.org/bugzilla/show_bug.cgi?id=41924 Ah - did not know about that. It seems WinZip 9.0 has the same problem reading the tar file. Might it perhaps be worth renaming the file, if that is possible?
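[Editorial note: the length issue has a concrete cause in the tar format itself. The original ustar header reserves only 100 bytes for the file name (at offset 0) and stores longer paths by splitting them across that field and a separate 155-byte prefix field (at offset 345); per the Ant bug referenced above, some readers ignore the prefix field and so reassemble only the short name, dropping the leading directories. A Python sketch of the split, using a path shaped like the libcloud fixture path:]

```python
import io
import tarfile

# A path over 100 characters, shaped like the long libcloud fixture path
LONG = "apache-libcloud-0.3.1/test/fixtures/rimuhosting/" + "x" * 60 + ".json"

def ustar_header_fields(path):
    """Write one long path in strict ustar format and return the raw
    header's name field (offset 0, 100 bytes) and prefix field
    (offset 345, 155 bytes). A reader that ignores the prefix field
    sees only the short name and loses the leading directories."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w",
                      format=tarfile.USTAR_FORMAT) as tf:
        tf.addfile(tarfile.TarInfo(path), io.BytesIO(b""))
    header = buf.getvalue()[:512]
    name = header[0:100].rstrip(b"\0").decode()
    prefix = header[345:500].rstrip(b"\0").decode()
    return name, prefix
```

Joining `prefix + "/" + name` recovers the full path; extractors that skip the prefix write the file relative to `name` alone, which matches the displaced `test\fixtures\rimuhosting` location reported in this thread.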
Re: [VOTE] Accept Whirr for Incubation
+1 --kevan

On May 5, 2010, at 2:06 PM, Tom White wrote:

We've added three mentors since starting the proposal thread, so I would like to start the vote to accept Whirr into the Apache Incubator. The proposal is included below and is also at: http://wiki.apache.org/incubator/WhirrProposal

Please cast your votes:

[ ] +1 Accept Whirr for incubation
[ ] +0 Don't care
[ ] -1 Reject for the following reason:

Thanks, Tom

= Whirr, a library of cloud services =

== Abstract ==

Whirr will be a set of libraries for running cloud services.

== Proposal ==

Whirr will provide code for running a variety of software services on cloud infrastructure. It will provide bindings in several languages (e.g. Python and Java) for popular cloud providers to make it easy to start and stop services like Hadoop clusters. The project will not be limited to a particular set of services; rather, it is expected that a range of services will be developed, as determined by the project contributors. Possible services include Hadoop, HBase, ZooKeeper, and Cassandra.

== Background ==

The ability to run services on cloud providers is very useful, particularly for proofs of concept, testing, and ad hoc production work. Bringing up clusters in the cloud is non-trivial, since careful choreography is required. (Designing an interface that is convenient as well as secure is also a challenge in a cloud context.) Making services that run on a variety of cloud providers is harder still, even with the availability of libraries like libcloud and jclouds, since each platform's quirks and extra features must be considered (and either worked around or taken advantage of, as appropriate). Whirr will facilitate sharing of best practices, both for a particular service (such as Hadoop configuration on a particular provider) and for common cloud operations (such as installation of dependencies across cloud providers). It will provide a space to share good configurations and will encode service-specific knowledge. 
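The proposal's core idea - service-specific and provider-specific launch knowledge registered behind one uniform entry point - can be sketched in a few lines. This is an illustrative design only, not Whirr's actual API; the names (`register`, `launch_cluster`) are hypothetical:

```python
# Hypothetical sketch: one uniform entry point for launching a service
# on a cloud provider, with per-(service, provider) knowledge registered
# behind it. Real code would call a provider API (e.g. via libcloud or
# jclouds) and apply service-specific configuration to each node.

REGISTRY = {}


def register(service, provider):
    """Decorator that records a launcher for a (service, provider) pair."""
    def wrap(fn):
        REGISTRY[(service, provider)] = fn
        return fn
    return wrap


@register("hadoop", "ec2")
def launch_hadoop_ec2(num_nodes):
    # Stand-in for real provisioning: return placeholder node names.
    return ["hadoop-node-%d" % i for i in range(num_nodes)]


def launch_cluster(service, provider, num_nodes):
    """Uniform entry point: dispatch to the registered launcher."""
    try:
        fn = REGISTRY[(service, provider)]
    except KeyError:
        raise ValueError("no launcher for %s on %s" % (service, provider))
    return fn(num_nodes)
```

The point of the registry shape is the one made in the Background section: each project's launch scripts currently live in separate codebases with slightly different conventions, whereas a shared dispatch point forces a common calling convention while still allowing service- and provider-specific code underneath.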
== Rationale ==

There are already scripts in the Hadoop project that allow users to run Hadoop clusters on Amazon EC2 and other cloud providers. While users have found these scripts useful, their current home as a Hadoop Common contrib project has the following limitations:

* Tying the scripts' release cycle to Hadoop's makes it difficult to distribute updates to scripts that are changing fast (new features and bugfixes).
* The scripts support multiple versions of Hadoop, so it makes more sense to distribute them separately from Hadoop itself.
* They are general: people want to contribute code for non-Hadoop services like Cassandra (for example: http://github.com/johanoskarsson/cassandra-ec2).
* Having a uniform approach to running services in the cloud, hosted in one project, makes launching sets of complementary services easier for the user. Today, the scripts and libraries hosted within each project (e.g. in Hadoop, HBase, Cassandra) have slightly different conventions and semantics, and are likely to diverge over time. Building a community around cloud infrastructure services will help enforce a common approach to running services in the cloud.

== Initial Goals ==

* Provide a new home for the existing Hadoop cloud scripts.
* Add more services (e.g. HBase).
* Develop Java libraries for Hadoop clusters.
* Add new cloud providers by taking advantage of libcloud and jclouds.
* (Future) Run on users' own hardware, so they can take advantage of the same interface to control services running locally or in the cloud.

== Current Status ==

=== Meritocracy ===

The Hadoop scripts were originally created by Tom White and have had a substantial number of contributions from members of the Hadoop community. With Whirr as its own project, significant contributors would become committers, allowing the project to grow. 
=== Community ===

The community interested in cloud service infrastructure is currently spread across many smaller projects, and one of the main goals of this project is to build a vibrant community to share best practices and build common infrastructure. For example, this project would provide a home to facilitate collaboration between the groups of Hadoop and HBase developers who are building cloud services.

=== Core developers ===

Tom White wrote most of the original code and is familiar with open source and Apache-style development, being a Hadoop committer and an ASF member. A number of contributors have provided patches to these scripts over time. Andrew Purtell, who created the HBase cloud scripts, is an HBase committer. Johan Oskarsson (Hadoop and Cassandra committer) ported the scripts to Cassandra.

=== Alignment ===

Whirr complements libcloud, currently in the Incubator. Libcloud provides multi-cloud-provider support, while Whirr will provide multi-service support in the cloud. Whirr
Re: [PROPOSAL] Deltacloud Project
David,

+1 for this project. We've been doing something similar in WSO2 (called Ozone) and are happy to jump to this. Have you looked at OpenNebula (http://opennebula.org/)? We were heading towards using that (and adding some Web service APIs etc.). Looks like they decided to form a business a few days ago: http://opennebula.ulitzer.com/node/1382814. If OpenNebula and Deltacloud have similar missions, maybe we can find a way to get them here too - it'll be great for the world to have one superb IaaS cloud abstraction API, under the Apache license of course. I plan to join the list and lurk .. looks like you have enough mentors already.

Sanjiva.

On Fri, May 7, 2010 at 1:11 AM, David Lutterkort lut...@redhat.com wrote:

Hi,

I would like to propose the Deltacloud API[1] for addition to the Apache Incubator. I have added the initial proposal to the Wiki[2]; it is also included below for convenience. There are a few additional people who have expressed interest in becoming initial committers; I am waiting for their express consent to list them as committers, and will add them to the Wiki as I get it. We look forward to any and all feedback and/or questions on the proposal. We already have two mentors, but would very much welcome additional volunteers to help steer Deltacloud through the incubation process.

David

[1] http://deltacloud.org/
[2] http://wiki.apache.org/incubator/DeltacloudProposal

Deltacloud, a cross-cloud web service API
=

Abstract

Deltacloud defines a web service API for interacting with cloud service providers and resources in those clouds in a unified manner. In addition, it consists of a number of implementations of this API for the most popular clouds. 
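The "unified manner" in the abstract is the heart of the design: each back-end driver translates its cloud's native notions into one shared vocabulary, so clients never see provider-specific terms. A minimal sketch of that translation for instance states - the native state names and the helper `unified_state` are illustrative assumptions here, not the real Deltacloud (Ruby) implementation:

```python
# Hypothetical sketch: map provider-specific instance states into one
# common vocabulary, so an API client sees the same states regardless
# of which cloud backs the request.

COMMON_STATES = {"PENDING", "RUNNING", "STOPPED", "FINISHED"}

# Per-provider translation tables. The native names below are
# illustrative, not authoritative lists for either cloud.
EC2_STATES = {"pending": "PENDING", "running": "RUNNING",
              "stopped": "STOPPED", "terminated": "FINISHED"}
RACKSPACE_STATES = {"BUILD": "PENDING", "ACTIVE": "RUNNING",
                    "SUSPENDED": "STOPPED", "DELETED": "FINISHED"}

DRIVERS = {"ec2": EC2_STATES, "rackspace": RACKSPACE_STATES}


def unified_state(provider, native_state):
    """Translate a provider's native instance state to the common vocabulary."""
    state = DRIVERS[provider].get(native_state)
    if state is None:
        raise ValueError("unknown %s state: %r" % (provider, native_state))
    assert state in COMMON_STATES  # every table entry must stay in vocabulary
    return state
```

In the real system this mapping sits behind the REST API, so a client asking for an instance's state over HTTP gets the common term no matter which driver served the request; the same idea extends to hardware profiles, images, and the allowed lifecycle actions listed in the proposal.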
Proposal

* Define a REST-based API for managing and manipulating cloud resources in a manner that isolates the API client as much as possible from the particulars of specific cloud APIs
* Provide an open API definition for cloud providers for their IaaS clouds, and a basis on which PaaS providers can layer their offerings
* Provide image management and directory capabilities as part of the API
* The current implementation allows instance lifecycle management (create, start, stop, destroy, reboot) and querying of related resources like available images, instance sizes, and allowed instance actions for a number of public and private clouds
* Currently supported are Amazon EC2, Eucalyptus, Rackspace, RimuHosting, GoGrid, OpenNebula, and RHEV-M
* Future enhancements should broaden the scope of the API to include networking, firewalling, authentication, accounting, and image management

Background
--

An important issue for cloud users is avoiding lock-in to a specific cloud. By providing a cross-cloud API for infrastructure-as-a-service (IaaS) clouds, Deltacloud addresses this concern and strives to provide the best possible API for writing cloud-management applications that can target multiple clouds. There are currently no efforts to define a truly open-source cloud API, one for which there is a proper upstream, independent of any specific cloud provider. Deltacloud strives to create a community around building an open-source cloud API in a manner that fully allows for tried-and-true open source mechanisms such as user-driven innovation. By providing a web-service API, Deltacloud is language agnostic, and one of its subordinate goals is to provide a practical vocabulary for talking about IaaS cloud resources and operations on them.

Rationale
-

IaaS clouds provide numerous advantages to their users, for example, making the provisioning of new servers more agile. 
If users directly use the 'native' cloud APIs, they risk locking themselves in to the API of a specific cloud provider. There is therefore a strong need for an API that can be used across a wide range of public and private clouds, and that can serve as the basis for developing cloud-management applications; in contrast to several existing language-specific efforts in this direction, Deltacloud is conceived as a web service. This will allow the project to attract a broad community of users of the API and of cloud providers interested in offering a truly open-source API with a proper upstream community. We strongly believe that the best way to drive such an API effort is to develop the API and open-source implementations of it side by side.

Initial Goals
-

Deltacloud is an existing open source project; initially started by Red Hat, it has attracted a number of outside contributors. We see moving this project to the ASF as the next step in broadening the community and putting the project on solid footing, since the ASF governance model is well suited to the Deltacloud project's goals. The ASF is a great location for Deltacloud to build a community and