I'd like to add that Thomas makes the best openstack packages by far. He's
been a force of nature in packaging and his attention to detail is second
to none.
On Wed, Feb 15, 2017 at 11:28 AM, Haïkel wrote:
> 2017-02-15 13:42 GMT+01:00 Thomas Goirand
I hope they have gotten better. The last time I tried to contribute to
Ubuntu's packaging effort, they took over a year to respond.
On Feb 15, 2017 11:56 AM, "Allison Randal" wrote:
> On 02/15/2017 07:42 AM, Thomas Goirand wrote:
> > I will continue to maintain OpenStack Newton
I can see a huge problem with your contributing operators... all of them
are enterprise.
Enterprise needs are radically different from those of small-to-medium
deployers, for whom OpenStack has traditionally failed to work well.
On Tue, Jan 17, 2017 at 12:47 PM, Piet Kruithof
wrote:
Do you know how many folks are STILL running Havana OpenStack?
On Mon, Oct 31, 2016 at 2:44 PM, Andreas Jaeger wrote:
> On 10/31/2016 07:33 PM, Lutz Birkhahn wrote:
> > Hi,
> >
> > I have already manually created PDF versions of about 8 of the OpenStack
> Manuals (within about 4-6
so project ' ' would be perfectly okay then.
On Wed, Oct 5, 2016 at 5:36 PM, Steve Martinelli
wrote:
> There are some restrictions.
>
> 1. The project name cannot be longer than 64 characters.
> 2. Within a domain, the project name is unique. So you can have project
>
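The uniqueness rule quoted above can be sketched in a few lines. This is a hypothetical model, not Keystone code: it captures only the two constraints mentioned (64-character limit, and names keyed by (domain, name) so the same name can exist in two domains):

```python
projects = {}

def create_project(domain, name):
    # Sketch of the Keystone-style rule: a project name must be unique
    # within its domain, but may repeat across domains.
    if len(name) > 64:
        raise ValueError("project name longer than 64 characters")
    key = (domain, name)
    if key in projects:
        raise ValueError("duplicate project name within domain")
    projects[key] = {"domain": domain, "name": name}
    return projects[key]

create_project("default", "dev")
create_project("other", "dev")   # same name, different domain: allowed
print(len(projects))  # → 2
```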
I think the best general way to view networking in cloud is WAN vs. cloud
LAN.
There's almost always an edge routing environment for your cloud
environments (whether they are split by region, by policy, or by "Tim is an
angry dude and you don't touch his instances").
Everything beyond that edge is a WAN.
I figure if you have entity Y's workloads running on entity X's hardware,
and that's 51% or a greater portion of gross revenue, you are a public
cloud.
On Mon, Sep 26, 2016 at 11:35 AM, Kenny Johnston
wrote:
> That seems like a strange definition. It doesn't
I'd love to see your results on this . Very interesting stuff.
On Sep 17, 2016 1:37 AM, "Joe Topjian" wrote:
> Hi all,
>
> We're planning to deploy Murano to one of our OpenStack clouds and I'm
> debating the RabbitMQ setup.
>
> For background: the Murano agent that runs on
I want desperately to see a failed deployments talk at summit. I'd be glad
to contribute but we'd need info on a variety of failure states.
On Sep 12, 2016 1:05 PM, "Jonathan D. Proulx" wrote:
>
> I agree this would make a very interesting OPs session.
>
> As many have
In the early days it was 2 full-time and 2 part-time people for a cluster
size of a couple hundred.
On Wed, Sep 7, 2016 at 6:59 PM, Kris G. Lindgren
wrote:
> Hello all,
>
>
>
> I was hoping to poll other operators to see what their average team size
> vs’s deployment size is, as I am trying
There are fundamental longevity questions with SSDs and tuning.
I'd be interested in hearing about that as well.
On Fri, Aug 5, 2016 at 11:23 AM, Edgar Magana
wrote:
> Tim,
>
>
>
> At Workday team we are working on that, our work is for CEPH performance.
>
The v1 Helion product was a joke for deployment at scale. I still don't
know whose harebrained idea it was to use OOO there and then, but it was
harebrained at best. From my perspective the biggest issue with Helion
was insane architecture decisions like that one being made with no
adherence
Vish the virtual machine barista?
On Thu, Jul 7, 2016 at 4:23 PM, Kruithof Jr, Pieter <
pieter.kruithof...@intel.com> wrote:
> Operators,
>
> If you have a few moments, please review the following:
>
> https://review.openstack.org/#/c/326662/14
>
> The intent of the document is to generate a
I'll check out giftwrap. Never heard of it, but interesting.
On Thu, Jun 23, 2016 at 7:50 PM, Xav Paice wrote:
> Can I suggest that using the tool https://github.com/openstack/giftwrap
> might make life a bunch easier?
>
> I went down a similar path with building Debs in a
I know from conversations that a few folks package their Python apps as
distributable virtualenvs. Spotify created dh-virtualenv for this; you
can also do it pretty simply by hand.
I built a toolchain for building RPMs as distributable virtualenvs, and that
works really well.
What I'd like to do is
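A minimal by-hand version of that approach might look like the sketch below. The venv path and app name are placeholders, and real tooling like dh-virtualenv does considerably more (notably fixing up the absolute paths a venv bakes in):

```shell
#!/bin/sh
set -e
# Build a virtualenv and tar it up as a distributable artifact.
# "myapp-venv" is a placeholder; a real build would pip-install the app:
#   /tmp/myapp-venv/bin/pip install myapp
python3 -m venv /tmp/myapp-venv
tar -C /tmp -czf /tmp/myapp-venv.tar.gz myapp-venv
echo "packaged /tmp/myapp-venv.tar.gz"
```

The caveat with hand-rolled venvs is that the interpreter paths inside are absolute, so the tarball must be unpacked at the same path on the target host.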
I use thermite.
On Wed, Jun 22, 2016 at 5:26 PM, Gilles Mocellin <
gilles.mocel...@nuagelibre.org> wrote:
> Hello,
>
> While digging in nova's database, I found that many objects are not really
> deleted, but instead just marked deleted.
> In fact, it's a general behavior in other projects
+1 also SSL
On Tue, Jun 14, 2016 at 4:58 PM, Russell Bryant wrote:
> This is the most common approach I've heard of (doing rate limiting in
> your load balancer).
>
> On Tue, Jun 14, 2016 at 12:10 PM, Kingshott, Daniel <
> daniel.kingsh...@bestbuy.com> wrote:
>
>> We use
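For concreteness, load-balancer rate limiting of the kind discussed above might look like this in HAProxy. This is a sketch only: the frontend name, port, certificate path, backend name, and thresholds are all assumptions.

```
frontend api
    bind *:443 ssl crt /etc/haproxy/certs/api.pem
    # Track per-source-IP request rate over a 10s window
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    http-request track-sc0 src
    # Refuse clients exceeding 100 requests per 10 seconds
    http-request deny if { sc_http_req_rate(0) gt 100 }
    default_backend api_servers
```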
PCI compliance / ITAR / TS stuff all require isolation. You'd need to
stand up an isolated copy of the translation environment for each.
On Fri, May 6, 2016 at 11:50 AM, Jonathan Proulx <j...@csail.mit.edu> wrote:
> On Fri, May 06, 2016 at 11:39:03AM -0400, Silence Dogood wrote:
> :thi
What you should be looking for is HVM.
On Tue, May 3, 2016 at 3:20 PM, Maish Saidel-Keesing
wrote:
> I would think that the problem is that OpenStack does not really report
> back that you are using KVM - it reports that you are using QEMU.
>
> Even when in nova.conf I have
+1
On Fri, Mar 4, 2016 at 12:30 PM, Matt Jarvis
wrote:
> +1
>
> On 4 March 2016 at 17:21, Robert Starmer wrote:
>
>> If fixing a typo in a document is considered a technical contribution,
>> then I think we've already cast the net far and wide.
cool!
On Thu, Mar 3, 2016 at 1:39 PM, Mathieu Gagné <mga...@internap.com> wrote:
> On 2016-03-03 12:50 PM, Silence Dogood wrote:
> > We did some early affinity work and discovered some interesting problems
> > with affinity and scheduling. =/ by default openstack us
We did some early affinity work and discovered some interesting problems
with affinity and scheduling. =/ By default, OpenStack used to (and may
still) deploy nodes across hosts evenly.
Personally, I think this is a bad approach. Most cloud providers stack
across a couple of racks at a time, filling
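For anyone wanting that pack-instead-of-spread behavior, the filter scheduler's RAM weigher can be inverted. A sketch; in releases of that era the option lived in nova.conf's [DEFAULT] section, and the exact section name varies by release:

```
[DEFAULT]
# Negative value: hosts with LESS free RAM score higher, so the
# scheduler stacks (fills) hosts instead of spreading across them.
ram_weight_multiplier = -1.0
```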
How about just OPS : {$Verified_Count} Physical Nodes
=D
On Thu, Mar 3, 2016 at 12:08 PM, Robert Starmer wrote:
> I setup an etherpad to try to capture this discussion:
>
> https://etherpad.openstack.org/p/OperatorRecognition
>
> R
>
> On Thu, Mar 3, 2016 at 9:04 AM, Robert
- In-place full-release upgrades (upgrading an entire cloud from Icehouse
to Kilo, for instance)
This tends to be the most likely scenario, with CI/CD being almost
impossible for anyone using supported OpenStack components (such as SDN /
NAS / other hardware integration pieces).
That's not
> the one I included for glance. They will be geared around a one box
> install at first.
>
> I'll update the site.
>
> Chris
>
> Sent from my iPhone
>
> On Mar 2, 2016, at 1:07 PM, Silence Dogood <m...@nycresistor.com> wrote:
>
> This is neat man. Any supp
This is neat man. Any support for versioning?
On Wed, Mar 2, 2016 at 3:54 PM, wrote:
> Hi all;
>
> I'm still a bit new to the world of stacking, but like many of you I have
> suffered thru the process of manual Openstack installation.
>
> I've been a developer for
I believe Eric Windisch did at one point run OpenStack on a Pi.
The problem is that it's got so little RAM, and no hypervisor. Also, at
least it USED to be unable to run Docker, since Docker wasn't
cross-compiled to ARM at the time.
It's a terrible target for OpenStack. NUCs, on the other
From a purely benchmarking aspect it makes sense. It's like a burn-in test
use case. That's the only way it makes sense.
On Fri, Feb 19, 2016 at 5:09 PM, Kevin Bringard (kevinbri) <
kevin...@cisco.com> wrote:
> Sorry for top posting.
>
> Just wanted to say I agree with Monty (and didn't want you