Re: urgent(ish): heat-cfntools -> python-boto -> python3 -> sad

2014-11-10 Thread Ryan Brown
Just talked to gholms; the python3 dep is a bug, and he's removed it
in this build[1].

I'd still be in favor of a longer-term plan to drop the boto dep, but we
should probably do more evaluation of what functionality we'd lose. This
could be a candidate for an optional dependency down the road.

[1]: http://koji.fedoraproject.org/koji/buildinfo?buildID=591757
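
For the optional-dependency idea, this is roughly what I'm picturing
(untested sketch; the helper name is made up, the boto calls are just the
usual CloudWatch ones):

    # Sketch: treat boto as an optional extra rather than a hard Requires.
    # Anything that doesn't touch CloudWatch keeps working without it.
    try:
        import boto.ec2.cloudwatch
        HAVE_BOTO = True
    except ImportError:
        HAVE_BOTO = False

    def push_metric(region, namespace, name, value, unit='Count'):
        """Push a CloudWatch metric if boto is installed, else fail loudly."""
        if not HAVE_BOTO:
            raise RuntimeError("pushing metrics needs python-boto; install "
                               "it or skip the metric/watch options")
        conn = boto.ec2.cloudwatch.connect_to_region(region)
        conn.put_metric_data(namespace, name, value=value, unit=unit)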

On 11/10/2014 11:34 AM, Steven Hardy wrote:
> On Sat, Nov 08, 2014 at 01:39:47PM -0500, Matthew Miller wrote:
>> hey all. heat-cfntools requires python-boto, which requires python3,
>> which shouldn't happen before we're ready to transition to python3
>> overall.
> 
> When you say "requires" python3, are you saying that boto now works only
> with python3?  I've probably missed some context if so; can you provide
> any links to clarify why that's happened?
> 
> On the face of it, either upstream or the package maintainer forcing
> python3 rather than providing Py3k compatibility seems like a pretty
> impressively user-hostile change :(
> 
>> This has a _significant_ impact on image size now that the python-boto
>> dep is gone from cloud-init. Having _one_ python requirement is bad
>> enough, but it's egregious to pull in both 2 and 3.
>>
>> Options: 
>>
>> 1. does heat-cfntools hard-require python-boto?
> 
> Right now, yes, but if we have to, it can be reworked to break that
> requirement.
> 
>> 2. can we use a python-boto that isn't python3 before we're ready?
>>(there was supposed to be a coherent plan for this!)
> 
> Yes, this sounds like the best short-term fix while we work on (1).
> 
>> 3. should we drop heat-cfntools for f21?
> 
> We'd really prefer it if you didn't, please :)
> 
> We (the heat community) are happy to help drive resolution of this; it's
> just something of an unexpected issue, tbh :(
> 
> Historically, boto has broken us more regularly than any other dependency,
> so it may not be a bad thing to break that dependency anyway.  I'm not
> sure if we can do it in the next 7 days, so (2) sounds like the best plan
> right now, if possible.
> 
>> Agents are a pain. :(
> 
> I appreciate the frustration, but it feels a bit like heat-cfntools is the
> messenger here, with boto being the real problem?
> 
> Let us know how you'd like to proceed and we'll try to get it sorted for
> the change deadline.
> 
> Thanks,
> 
> Steve
> ___
> cloud mailing list
> cloud@lists.fedoraproject.org
> https://admin.fedoraproject.org/mailman/listinfo/cloud
> Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
> 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: more heat-cfntools dependency creep

2014-12-12 Thread Ryan Brown
The dep actually isn't new; it was added (according to my buddy `git
blame`) four months ago in response to this[1] issue, because users were
expecting to be able to install gems with cfntools.

You're right that this should have been caught earlier, and a nightly
check would certainly help. Sorry I missed the cutoff getting that build
to stable.

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1130964
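
For context, the gem support is basically cfn-init shelling out to `gem
install` when a template's packages section lists rubygems, something
like this (simplified sketch, not the actual heat-cfntools code), which
is how ruby ends up in the dependency chain:

    import subprocess

    def install_gems(gems):
        """Install the {'name': 'version'} pairs from a template's
        rubygems packages section by shelling out to `gem`."""
        for name, version in gems.items():
            cmd = ['gem', 'install', name]
            if version:
                cmd += ['--version', version]
            # This call is what drags ruby/rubygems into the image.
            subprocess.check_call(cmd)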

On 12/12/2014 11:25 AM, Matthew Miller wrote:
> heat-cfntools pulls ruby into the cloud image :(
> 
> We're _really_ trying to get interpreted languages _out_ of there, and
> of course we're trying to get the size down overall. 
> 
> https://bugzilla.redhat.com/show_bug.cgi?id=1173682
> 
> 
> I'm not sure when this started -- I wish I'd caught it before the f21
> release.
> 
> We should have a nightly test check of some sort which alerts when
> new packages are pulled into the nightly image builds via dependencies.
> 
> 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Vagrant mounting /var/lib/docker?

2015-03-17 Thread Ryan Brown
On 03/17/2015 11:29 AM, Joe Brockmeier wrote:
> Hey all,
> 
> Any Vagrant wizards around? I'd like to figure out how to mount a
> directory on my local host as /var/lib/docker for the Atomic Vagrant
> boxen - so users don't have to worry about other ways of expanding the
> storage to putter around with Atomic and running containers.
> 
> Mounting the directory works fine, but I get errors around missing
> files/directories when trying to pull a docker image. It may be that what
> I'm wanting to do is overly complex, but I thought I'd see if anybody had
> suggestions...

It's likely something to do with how Vagrant is mounting the folder.
What provider are you using? The standard `synced_folder` has a couple of
different backends (NFS, SMB, etc.), so the behavior may be specific to
certain providers.

I think the easiest way to punt would be to offer a sparse image (though
which format is a whole other discussion).

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Vagrant mounting /var/lib/docker?

2015-03-17 Thread Ryan Brown
On 03/17/2015 03:19 PM, Joe Brockmeier wrote:
> On 03/17/2015 03:18 PM, Jason Brooks wrote:
>> I don't know if this is any easier than adding a virtual disk, adding 
>> that as a physical volume, extending the root logical vol, and resizing
>> the partition. Although, when I type it all out like that... 
> 
> Right. :-) Was trying to just make this as easy as mounting a local
> directory...

It'd be possible to add a vdisk and format+mount it at /var/lib/docker
during Vagrant's provisioning step, and it'd still allow nice COW
semantics.
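
Roughly what the provisioning step would run (sketch only; the device
name is an assumption, /dev/vdb under libvirt, other providers will
differ):

    import subprocess

    DEVICE = '/dev/vdb'            # assumed name of the extra vdisk
    MOUNTPOINT = '/var/lib/docker'

    def provision_docker_disk(device=DEVICE, mountpoint=MOUNTPOINT):
        """Format the extra vdisk and mount it for docker's storage."""
        # xfs here, but the filesystem choice is up for debate
        subprocess.check_call(['mkfs.xfs', '-f', device])
        subprocess.check_call(['mkdir', '-p', mountpoint])
        subprocess.check_call(['mount', device, mountpoint])
        # Persist the mount across reboots.
        with open('/etc/fstab', 'a') as fstab:
            fstab.write('%s %s xfs defaults 0 0\n' % (device, mountpoint))
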
-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Cloud image lifetimes

2015-03-19 Thread Ryan Brown
On 03/18/2015 06:38 PM, David Gay wrote:
> Greetings!
> 
> We sort of ran out of time in today's Cloud WG meeting, but I did want to ask:
> 
> What are your thoughts on AMI lifetimes? That is to say, how long should EC2 
> AMIs exist before they're deleted? A few points to consider:
> 
> - AMIs only cost us for storage, so it's not a *huge* cost to maintain a 
> public AMI
>   - At the same time, there are a lot of AMIs, since we build 2-4 per AWS 
> region per build, and that number is growing
> - There are 9 regions now, and we have 2 virtualization types, and 2 
> volume types, as well (9 regions * 2 * 2 = 36 AMIs per Base image build, 18 
> for Atomic builds (since they are only available in HVM format))
>   - This total number will only grow larger as we add instance-store 
> AMIs, and so on
> - This isn't even taking into account any costs we'll have once we secure a 
> deal with other providers like HP, Rackspace, and GCE, to maintain public 
> images on their services
> 
> I propose we have some sort of discussion regarding how long cloud image 
> builds should be available on services like AWS. I suspect this will resolve 
> to having different lifetimes for scratch, test, RC, final, and maybe other 
> build types.

In my experience, folks expect AMIs to stick around for a long time. AMI
IDs work their way into all sorts of places (scripts, Ansible playbooks,
CloudFormation templates, and a zillion others), so I think that deleting
an AMI before the end of the supported lifetime of a release would make
people sad*.

I think it's reasonable to offer that support only for release AMIs, and
scratch/test/RC/etc. AMIs would get a shorter lifetime.

My (very rough) calculations put the cost of storing 4 AMIs per region
at around $5/month, so each "final" AMI built will cost $5 * 13-16
months, or $65-$80. Not expensive, but not exactly free. Costs will, of
course, vary for other cloud providers.
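
For the curious, the back-of-the-envelope version with my assumptions
spelled out (snapshot size and EBS snapshot pricing are rough guesses):

    # All inputs are rough assumptions; adjust to taste.
    regions = 9
    amis_per_region = 4           # base: standard + gp2, for HVM + PV
    snapshot_gb = 1.5             # approximate size of one AMI's snapshot
    price_per_gb_month = 0.095    # ballpark EBS snapshot pricing, USD

    monthly = regions * amis_per_region * snapshot_gb * price_per_gb_month
    print("~$%.2f/month" % monthly)                    # ~$5/month
    print("$%.0f-$%.0f per release" % (13 * monthly, 16 * monthly))
    # roughly $65-$80 over a 13-16 month supported lifetime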

Of course, there's no replacement for checking the metrics to see what
folks actually use.

* or angry, because their scripts will be broken

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Cloud image lifetimes

2015-03-19 Thread Ryan Brown
On 03/19/2015 08:54 AM, Joe Brockmeier wrote:
> On 03/18/2015 06:38 PM, David Gay wrote:
>> Greetings!
>>
>> We sort of ran out of time in today's Cloud WG meeting, but I did want
>> to ask:
>>
>> What are your thoughts on AMI lifetimes? That is to say, how long should
>> EC2 AMIs exist before they're deleted? A few points to consider:
> 
> I feel like I should know this, but I don't.
> 
> If a user spins up an AMI and then it's deleted by the provider, do they
> still have their instance(s) or do they lose the ability to create new
> images?

The instances they already started would still run and be available, but
they wouldn't be able to spin up anything new. If creating/killing
instances is something they do a lot (autoscaling groups, worker farms,
etc) then that could hose them just as surely as killing their existing
instances.
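
If anyone wants to audit for that ahead of time, something like this will
tell you which AMI IDs referenced in your templates/playbooks no longer
exist (boto sketch; the region and IDs are placeholders):

    import boto.ec2

    def missing_amis(region, ami_ids):
        """Return the subset of ami_ids that no longer exist in region."""
        conn = boto.ec2.connect_to_region(region)
        # Filtering (rather than passing image_ids=) avoids an error when
        # an ID has already been deregistered.
        found = set(img.id for img in
                    conn.get_all_images(filters={'image-id': ami_ids}))
        return [ami for ami in ami_ids if ami not in found]

    # e.g. IDs scraped out of CloudFormation templates or playbooks
    print(missing_amis('us-east-1', ['ami-aaaa1111', 'ami-bbbb2222']))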

> That would color my response a bit.
> 
> Do we know how other projects handle theirs? If I go to spin up a Foo
> Linux release from 2 years ago, is the AMI still there?
> 
> At minimum, we should probably delete any AMIs that are no longer a
> supported version of Fedora, and I'd also be for deleting any TC, alpha,
> beta, etc. AMIs - especially once a release is published. So, for
> instance, any F21 alpha, beta, etc. AMIs can probably go to the great
> bit bucket in the sky at this point.
> 
> Also wonder if this is something we need to have ACK'ed by FESCo?
> 
> Best,
> 
> jzb
> 
> 
> 
> ___
> cloud mailing list
> cloud@lists.fedoraproject.org
> https://admin.fedoraproject.org/mailman/listinfo/cloud
> Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct
> 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Cloud image lifetimes

2015-03-24 Thread Ryan Brown
On 03/24/2015 01:51 PM, David Gay wrote:
> Cool, thanks for the input everyone. A short summary of what's been
> discussed:
> 
> 1. Everyone seems to agree that we should hope to get usage data from
> AWS, but as Dennis mentioned (and I expect), there isn't any usage
> data available for AMIs. If there is such a console, I've never seen
> one.
> 
> 2. Q: "If a user spins up an AMI and then it's deleted by the
> provider, do they still have their instance(s) or do they lose the
> ability to create new images?" A: "The instances they already started
> would still run and be available, but they wouldn't be able to spin
> up anything new. If creating/killing instances is something they do a
> lot (autoscaling groups, worker farms, etc) then that could hose them
> just as surely as killing their existing instances."
> 
> 3. We should probably look into how other projects handle their AMIs.
> I think the consensus here though is that whatever lifetime we have
> for releases, the alpha, beta, TC, RC, and other testing builds can
> -- and should -- be safely eliminated after the release. There's no
> good reason I can think of that someone would yell at us for deleting
> a test build AMI of a release that's already happened.
> 
> 4. Anyone have an opinion on jzb wondering if we should run this by
> FESCo?
> 
> 5. Regarding this exchange:
> 
> "at around $5/month, so each "final" AMI built will cost $5 * 13-16 
> months, or $65-$80. Not expensive, but not exactly free. Costs will,
> of course, vary for other cloud providers." "your math is off, there
> should only be 9 Atomic as we only build it for x86_64 where we build
> the base for i386 and x86_64  so you have two arches by 2 image types
> by 9 regions"
> 
> Whatever the costs will be per AMI, I can tell you that what I've
> heard from people in the cloud WG is that we want a number of
> different AMIs *per build*. A new build currently results in 6 AMIs:
> 2 for atomic (standard + gp2) and 4 for base (standard + gp2, both
> for HVM and paravirtual virtualization). I spoke with gholms some
> time back and I think we determined that we're also going to want
> instance-store AMIs, as well as *encrypted* EBS AMIs. So, maybe there

+1 for instance-store AMIs; they're incredibly useful. I think it's a
good time to think about what AMIs we should be producing, though, and to
talk to FESCo about just how much we're willing to spend on providing AMIs.

> should be some discussion on that with the full group, since that
> will result in a large number of AMIs. If we end up building that
> many different combinations of storage types, volume types, and
> virtualization types, we're talking a fair amount of AMIs being kept
> up during the release process, because of how many image builds go
> through Koji. Dennis mentioned to me that there is some sort of Koji
> bug that, if fixed, would let builds be marked as either "real life" or
> "scratch", so we could at least cut down a bit on the number of AMIs
> being built.
> 
> I think this discussion should continue a bit more based on all that.
> However, I *do* move that I immediately delete at least all the
> alpha, beta, TC, and RC builds that were created back when we were
> working on F21.

+1 sounds like a good plan for now.

> 
> -- David
> 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Cloud image use cases

2015-07-10 Thread Ryan Brown
On 07/10/2015 08:52 AM, Major Hayden wrote:
> On 07/10/2015 07:28 AM, Josh Boyer wrote:
>> OK.  So your answer to my immediate question is "neutral base that 
>> people have to customize".  Fair enough.  Now, why would someone
>> wish to choose a Fedora cloud image over Ubuntu or CoreOS or any of
>> the other "minimal base that you have to customize" images?
> 
> It depends on the use case (which seems like a recursive statement in
> this thread). ;)
> 
> Our customers (disclaimer: I work for Rackspace) usually choose the
> operating system for cloud instances that they're most familiar with
> or the ones that mesh well with their organization's strategy.
> Ubuntu seems to be a popular choice due to the million howtos lying
> around for installing services on Ubuntu.
> 
> When it comes to the ultra-minimal OS choices largely intended for
> container platforms, like CoreOS, Atomic, or RancherOS, the customer
> usually has an idea of how they're planning to integrate/automate
> those operating systems on multiple instances.
> 
> Whenever I've spoken with customers about what they want from an OS
> in a virtual machine, they want it to contain a small package set
> that lets them run their automation on top of it (i.e. Ansible, Chef,
> Puppet).  Removing Python from that image would be a serious
> curveball since most people expect to have Python available on any
> system running yum/dnf.

Well, it kind of has to be available until we make yum/dnf not need
Python anymore, which sounds like *loads* of work.

Personally, I like (and use) the cloud image for two use cases.

1) Openstack development with local virtualization. Having a
small-footprint image that has cloud-init is awesome for testing
coordination or multinode installations of databases/services.

2) Cloud infrastructure (AWS/Rackspace)
  a) an OS I'm familiar with
  b) has great docs/community
  c) is pretty low-resource/cheap to run
  d) has up-to-date stuff
  e) automation I've written for RHEL-derivatives "just works"
  f) works everywhere I want

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Cloud image use cases

2015-07-10 Thread Ryan Brown
On 07/10/2015 09:48 AM, Matthew Miller wrote:
> On Fri, Jul 10, 2015 at 07:52:29AM -0500, Major Hayden wrote:
>> Whenever I've spoken with customers about what they want from an OS
>> in a virtual machine, they want it to contain a small package set
>> that lets them run their automation on top of it (i.e. Ansible, Chef,
>> Puppet). Removing Python from that image would be a serious curveball
>> since most people expect to have Python available on any system
>> running yum/dnf.
> 
> Yeah, I'm willing to back down on wanting to remove Python.

+1 It'd be a lot of work (rewriting dnf/whatever, maintaining both
codebases, etc) for (relatively) little gain.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Cloud image use cases

2015-07-10 Thread Ryan Brown
On 07/10/2015 10:03 AM, Haïkel wrote:
> 2015-07-10 15:59 GMT+02:00 Ryan Brown <rybr...@redhat.com>:
>> On 07/10/2015 09:48 AM, Matthew Miller wrote:
>>> On Fri, Jul 10, 2015 at 07:52:29AM -0500, Major Hayden wrote:
>>>> Whenever I've spoken with customers about what they want from an
>>>> OS in a virtual machine, they want it to contain a small package
>>>> set that lets them run their automation on top of it (i.e.
>>>> Ansible, Chef, Puppet). Removing Python from that image would be
>>>> a serious curveball since most people expect to have Python
>>>> available on any system running yum/dnf.
>>>
>>> Yeah, I'm willing to back down on wanting to remove Python.
>>
>> +1 It'd be a lot of work (rewriting dnf/whatever, maintaining both
>> codebases, etc) for (relatively) little gain.
> 
> Depends: for end-users, it could mean a smaller bill each month on
> storage.

Yes, but the engineer-hours to make that transition might be better spent
making the cloud image (or our docs, or whatever) awesome in other ways.

> But I agree that we're not ready to drop python, and it's *unlikely*
> to happen for a long time. I also agree that it shouldn't be a
> high/medium priority task.
> 
> As for dnf, there are discussions upstream about rewriting it in C;
> VMware has also written a drop-in replacement for dnf based on
> hawkey/librepo that could be considered.
> 
> The more problematic component is cloud-init.

Indeed. CoreOS has a sorta-ish-done cloud-init written in Go, but it's
nowhere near feature-complete, last I checked.

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Local DNSSEC resolver and Fedora cloud

2015-08-13 Thread Ryan Brown
On 08/11/2015 02:05 PM, P J P wrote:
> Hello all,
> 
> As we know, the Fedora 23 Alpha release has just been announced, which
> means most of the proposed features approved for F23 are in reasonably
> good shape for us to try out.
> 
> One of the proposed system-wide changes is to install and enable a local
> DNSSEC-validating resolver across Fedora variants.
> 
> -> https://fedoraproject.org/wiki/Changes/Default_Local_DNS_Resolver
> 
> This feature proposes to install the unbound[1] DNSSEC resolver along
> with the dnssec-trigger[2] tool, which is used to dynamically
> configure the 'unbound' resolver. Upon successful setup, the user would
> have the unbound[1] DNSSEC resolver listening on the 127.0.0.1:53
> address, and '/etc/resolv.conf' would point to this server as the
> designated 'nameserver' for the system.

Conveniently, this came up at the DNSSEC session yesterday afternoon. I
won't speak for the whole group, but I'd be concerned about what cases
the resolver would be enabled for.

As a user of the cloud image (local virt, AWS), I don't think adding
unbound would be much of an improvement. For cloud image deployment
scenarios, if DNS security matters to my deployment, I can enforce it
outside the instance by running one (or several) DNSSEC resolvers shared
across my whole fleet. In AWS, you can set up your VPC to configure a
custom resolver over DHCP, and there are similar options in
Azure/Rackspace/etc.
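
The VPC side of that is just a DHCP options set, e.g. (boto sketch; the
IDs and addresses are placeholders):

    import boto.vpc

    conn = boto.vpc.connect_to_region('us-east-1')

    # Point every instance in the VPC at a shared pair of validating
    # resolvers instead of running unbound on each guest.
    opts = conn.create_dhcp_options(
        domain_name='example.internal',
        domain_name_servers=['10.0.0.2', '10.0.1.2'])
    conn.associate_dhcp_options(opts.id, 'vpc-12345678')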

A 1:1 ratio of servers to DNS resolvers seems pretty wasteful to me,
especially in an environment where marginal performance increases cost
money. So I'd be against enabling a local DNSSEC resolver in the cloud
image.

For Atomic Host, I think it makes more sense to have a shared resolver.
In that case, the host's resolver can be shared by all the tenant
containers. Not only do you get to amortize the cost of running Unbound
across N containers on the host, but you get shared DNS caching as well.


tl;dr: please don't put it in the cloud image, but I think it makes
sense for Atomic Host.

> Both the unbound[1] & dnssec-trigger[2] packages have been available in
> Fedora for a long time, and the proposed solution is known to work well
> for the majority of users. Work is currently in progress to ensure
> that the proposed feature works seamlessly across all variants
> and addresses all use cases for Fedora users.
> 
> 
> The feature has been approved for the upcoming F23 release, but we
> need affirmation from the individual working groups to install and
> enable this feature in the respective variants.
> 
> 
> -> https://bugzilla.redhat.com/show_bug.cgi?id=1203950
> 
> 
> The affirmation would enable us to include the 'dnssec-trigger' &
> 'unbound' packages in the respective Fedora kickstart files.
> 
> Could we please have your (cloud WG) consent to enable this feature on
> the Fedora cloud variant?
> 
> 
> If you have any concerns/comments/suggestions please let us know
> here.
> 
> --
> 
> [1] https://unbound.net/ 
> [2] http://www.nlnetlabs.nl/projects/dnssec-trigger/
> [3] https://lists.fedoraproject.org/pipermail/cloud/2015-July/005590.html
>
> Thank you.
> 
> ---Regards -P J P http://feedmug.com
> 

-- 
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: text for getfedora.org website

2015-09-15 Thread Ryan Brown

On 09/14/2015 03:22 PM, Aditya Patawari wrote:
> Hello,
> 
> During the last meeting we talked about the text for getfedora.org. The
> current text is good, but I suggest something on a more positive note.
> 
> Current text:
> Fedora Cloud provides a minimal image of Fedora for use in public and
> private cloud environments. It includes just the bare essentials, so
> you get enough to run your cloud application -- and nothing more.
> 
> Proposed text:
> Fedora Cloud provides a minimal image of Fedora for use in public and
> private cloud environments. It includes just the bare essentials
> making it very light on resources, so you get enough to run your cloud
> application.

That looks great; there's one small tweak I'd like, though.

Proposed alteration to proposed text:
Fedora Cloud provides a minimal image of Fedora for use in public and
private cloud environments. It includes just the bare essentials
making it very light on resources, *but* you get enough to run your 
cloud application.


Just replace "so" with "but" in the last clause.

-Ryan

--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Mo's proposed fedora atomic logo

2015-09-15 Thread Ryan Brown

On 09/15/2015 02:23 PM, Matthew Miller wrote:

https://mattdm.fedorapeople.org/misc/newfedoratomiclogomockup.png

With Fedora Atomic Host as primary, the current Fedora Cloud logo would
still be used for the Cloud Base image, but would probably be
de-colored (see the non-Edition logos on https://arm.fedoraproject.org/
for an example).

Feedback? I think it looks great.


A++ ship it

--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct


Re: Mo's proposed fedora atomic logo

2015-09-16 Thread Ryan Brown

On 09/15/2015 05:10 PM, Haïkel wrote:

> 2015-09-15 22:52 GMT+02:00 Jason Brooks :
>> If I didn't already know about the cloud base image as its own
>> WG and such, I would have looked for it as a variant of the Server
>> product.
> 
> It makes sense (that's why we support the cattle-to-pet scenario), but
> server folks are focusing on a different segment.
> If you read about bimodal IT (thanks, Gartner, for feeding me funny
> buzzwords), server is focused on traditional IT (mode 1) and cloud on
> agile IT (mode 2).
> 
> Depending on how things evolve around containers, we might consider that
> cloud should focus solely on Atomic and maybe transfer ownership of
> the classic image to the server WG. But it's too early to consider that.
> After all, we decided to make Atomic primary to boost it, and refine
> our story around it.


I think we're over-estimating where users are in the "magic bi-modal IT
devops agile transformation quadrant" (which is what I hear Gartner is
calling it now).


I think there's a fairly close mapping from where users are in the 
adoption cycle to what page they need to get to.


Early adopters -> Atomic
Early majority -> Cloud/Server
Late majority/long tail -> Server/Workstation

Your early adopters are already all about containers and know they want
Atomic, and they'll go get it. We don't even really need to highlight it
very much; just mention "Atomic is a cloud thing for containers" and
they'll be all over it.


The early majority has some stuff in the cloud already, and probably
still has some "pets" either on real hardware or long-lived VMs. They
might look at Atomic to evaluate it, but are likely still going to have
Cloud or Server as their bread and butter. Highlighting Atomic from the
cloud page and the main getfedora.org would help us get these users into
Atomic.


The "early majority" will still have the Cloud/Server products as their 
main consumption for quite some time, and even if we think containers 
are where the cloud is going, there aren't going to be loads of users 
there for a while.


I'm not sure that transferring the cloud image over to the Server WG 
makes much sense, since the Cloud WG has the expertise and infra to 
maintain it already.

--
Ryan Brown / Software Engineer, Openstack / Red Hat, Inc.
___
cloud mailing list
cloud@lists.fedoraproject.org
https://admin.fedoraproject.org/mailman/listinfo/cloud
Fedora Code of Conduct: http://fedoraproject.org/code-of-conduct