Re: [openstack-dev] [tc][all] A culture change (nitpicking)

2018-05-29 Thread Jonathan Proulx
On Tue, May 29, 2018 at 03:53:41PM -0400, Doug Hellmann wrote:
:> >> maybe we're all saying the same thing here?
:> > Yeah, I feel like we're all essentially in agreement that nits (of the
:> > English mistake of typo type) do need to get fixed, but sometimes
:> > (often?) putting the burden of fixing them on the original patch
:> > contributor is neither fair nor constructive.
:> I am ok with this statement if we are all in agreement that doing 
:> follow-up patches is an acceptable practice.
:
:Has it ever not been?
:
:It seems like it has always come down to a bit of negotiation with
:the original author, hasn't it? And that won't change, except that
:we will be emphasizing to reviewers that we encourage them to be
:more active in seeking out that negotiation and then proposing
:patches?

Exactly, it's more codifying a default.

It's not been unacceptable, but I think there's some understandable
reluctance to make changes to someone else's work; you don't want to
seem like you're taking over or getting in the way.  At least that's
what's in my head when deciding whether this should be a comment or a patch.

I think this discussion suggests that for a certain class of "nits" a
patch is preferred to a comment.  If that is true, making this explicit
is a good thing because, let's face it, my social skills are only
marginally better than my speeling :)

-Jon

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Openstack-operators] OpenStack "S" Release Naming Preliminary Results

2018-03-22 Thread Jonathan Proulx
On Wed, Mar 21, 2018 at 08:32:38PM -0400, Paul Belanger wrote:

:6. Spandau  loses to Solar by 195–88, loses to Springer by 125–118

Given this is at #6 and formal vetting is yet to come, it's probably
not much of an issue, but "Spandau's" first association for many will
be Nazi war criminals, via Spandau Prison:
https://en.wikipedia.org/wiki/Spandau_Prison

So it is best avoided, to say the least.

-Jon



Re: [openstack-dev] [Openstack-operators] Ops Meetups team minutes + main topics

2017-11-21 Thread Jonathan Proulx
On Tue, Nov 21, 2017 at 10:10:00AM -0700, David Medberry wrote:
:Jon,
:
:I think the Foundation staff were very very wary of extending the PTG or
:doing dual sites simultaneously due to not saving a thing logistically.
:Yes, it would conceivably save travel for folks that need to go to two
:separate events (as would the other colo options on the table) but not
:saving a thing logistically over two separate events as we have now. A six
:or seven day sprint/thing/ptg would also mean encroaching on one or both
:weekends (above and beyond travel dates) and that may really limit venue
:choices as private parties (weddings, etc) tend to book those locales on
:weekends.

Yes, that was my main concern as well, though I'd not considered the
fact that the two events wouldn't fit in a single working week.  So it
sounds like the logistics are just illogistical.

-Jon

:On Tue, Nov 21, 2017 at 10:06 AM, Jonathan Proulx <j...@csail.mit.edu> wrote:
:
:> :On Tue, Nov 21, 2017 at 9:15 AM, Chris Morgan <mihali...@gmail.com>
:> wrote:
:>
:> :> The big topic of debate, however, was whether subsequent meetups should
:> be
:> :> co-located with OpenStack PTG. This is a question for the wider
:> OpenStack
:> :> operators community.
:>
:> For people who attend both I think this would be a big win, if they
:> were in the same location (city anyway) but held in series.  One
:> (longer) trip but no scheduling conflict.
:>
:> The downside I see is that it makes scheduling constraints pretty tight,
:> either for having two sponsors/locations available at a coordinated time
:> and place or for making a much bigger ask of a single location.
:>
:> Those are my thoughts; not sure if they amount to an opinion.
:>
:> -Jon
:>
:>
:>



Re: [openstack-dev] [Openstack-operators] Ops Meetups team minutes + main topics

2017-11-21 Thread Jonathan Proulx
:On Tue, Nov 21, 2017 at 9:15 AM, Chris Morgan  wrote:

:> The big topic of debate, however, was whether subsequent meetups should be
:> co-located with OpenStack PTG. This is a question for the wider OpenStack
:> operators community.

For people who attend both I think this would be a big win, if they
were in the same location (city anyway) but held in series.  One
(longer) trip but no scheduling conflict.

The downside I see is that it makes scheduling constraints pretty tight,
either for having two sponsors/locations available at a coordinated time
and place or for making a much bigger ask of a single location.

Those are my thoughts; not sure if they amount to an opinion.

-Jon





Re: [openstack-dev] Supporting SSH host certificates

2017-09-29 Thread Jonathan Proulx
Giuseppe,

I'm pretty sure this is the project you want to look into:

http://git.openstack.org/cgit/openstack/barbican/

"Barbican is a ReST API designed for the secure storage, provisioning
and management of secrets, including in OpenStack environments."

-Jon


On Fri, Sep 29, 2017 at 02:21:06PM -0500, Giuseppe de Candia wrote:
:Hi Folks,
:
:
:
:My intent in this e-mail is to solicit advice for how to inject SSH host
:certificates into VM instances, with minimal or no burden on users.
:
:
:
:Background (skip if you're already familiar with SSH certificates): without
:host certificates, when clients ssh to a host for the first time (or after
:the host has been re-installed), they have to hope that there's no man in
:the middle and that the public key being presented actually belongs to the
:host they're trying to reach. The host's public key is stored in the
:client's known_hosts file. SSH host certificates eliminate the possibility of
:Man-in-the-Middle attack: a Certificate Authority public key is distributed
:to clients (and written to their known_hosts file with a special syntax and
:options); the host public key is signed by the CA, generating an SSH
:certificate that contains the hostname and validity period (among other
:things). When negotiating the ssh connection, the host presents its SSH
:host certificate and the client verifies that it was signed by the CA.
:
:
:
:How to support SSH host certificates in OpenStack?
:
:
:
:First, let's consider doing it by hand, instance by instance. The only
:solution I can think of is to VNC to the instance, copy the public key to
:my CA server, sign it, and then write the certificate back into the host
:(again via VNC). I cannot ssh without risking a MITM attack. What about
:using Nova user-data? User-data is exposed via the metadata service.
:Metadata is queried via http (reply transmitted in the clear, susceptible
:to snooping), and any compute node can query for any instance's
:meta-data/user-data.
:
:
:
:At this point I have to admit I'm ignorant of details of cloud-init. I know
:cloud-init allows specifying SSH private keys (both for users and for SSH
:service). I have not yet studied how such information is securely injected
:into an instance. I assume it should only be made available via ConfigDrive
:rather than metadata-service (again, that service transmits in the clear).
:
:
:
:What about providing SSH host certificates as a service in OpenStack? Let's
:keep out of scope issues around choosing and storing the CA keys, but the
:CA key is per project. What design supports setting up the SSH host
:certificate automatically for every VM instance?
:
:
:
:I have looked at Vendor Data and I don't see a way to use that, mainly
:because 1) it doesn't take parameters, so you can't pass the public key
:out; and 2) it's queried over http, not https.
:
:
:
:Just as a feasibility argument, one solution would be to modify Nova
:compute instance boot code. Nova compute can securely query a CA service
:asking for a triplet (private key, public key, SSH certificate) for the
:specific hostname. It can then inject the triplet using ConfigDrive. I
:believe this securely gets the private key into the instance.
:
:
:
:I cannot figure out how to get the equivalent functionality without
:modifying Nova compute and the boot process. Every solution I can think of
:risks either exposing the private key or vulnerability to a MITM attack
:during the signing process.
:
:
:
:Your help is appreciated.
:
:
:
:--Pino

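For readers unfamiliar with the client side of the scheme Pino describes, the trust anchor is a single `@cert-authority` line in the client's known_hosts. A minimal Python sketch of building and scoping that entry (the helper names and the example key are mine, and `fnmatch` only approximates OpenSSH's own pattern syntax):

```python
import fnmatch

def cert_authority_line(host_patterns, ca_pubkey):
    # The known_hosts entry that tells the ssh client: for hosts matching
    # these patterns, trust any host certificate signed by this CA
    # instead of pinning individual host keys.
    return "@cert-authority {} {}".format(",".join(host_patterns), ca_pubkey)

def covered(host_patterns, hostname):
    # Rough check of whether a hostname falls under the CA entry's scope.
    return any(fnmatch.fnmatch(hostname, p) for p in host_patterns)

line = cert_authority_line(["*.cloud.example.com"], "ssh-ed25519 AAAAC3Nz...")
print(line)
print(covered(["*.cloud.example.com"], "vm1.cloud.example.com"))
```

Once that single line is distributed to clients, re-installed hosts present a fresh CA-signed certificate and no known_hosts churn or trust-on-first-use prompt is needed.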




Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Jonathan Proulx
On Tue, Sep 26, 2017 at 01:34:14PM -0700, Michał Jastrzębski wrote:
:In Kolla, during this PTG, we came up with the idea of scenario-based
:testing+documentation. Basically what we want to do is to provide a set
:of kolla configurations, howtos and tempest configs to test out
:different "constellations" or use-cases. If, instead of in Kolla, we do
:these in a cross-community manner (and just host kolla-specific things
:in kolla), I think that would partially address what you're asking for
:here.

Yes, that sounds like a great idea.

-Jon

:On 26 September 2017 at 13:01, Jonathan Proulx <j...@csail.mit.edu> wrote:
:> On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:
:>
:> :OpenStack is big. Big enough that a user will likely be fine with learning
:> :a new set of tools to manage it.
:>
:> New users in the startup sense of new, probably.
:>
:> People with entrenched environments, I doubt it.
:>
:> But OpenStack is big. Big enough I think all the major config systems
:> are fairly well represented, so whether I'm right or wrong this
:> doesn't seem like an issue to me :)
:>
:> Having common targets (constellations, reference architectures,
:> whatever) so all the config systems build the same things (or a subset
:> or superset of the same things) seems like it would have benefits all
:> around.
:>
:> -Jon
:>



Re: [openstack-dev] [ptg] Simplification in OpenStack

2017-09-26 Thread Jonathan Proulx
On Tue, Sep 26, 2017 at 12:16:30PM -0700, Clint Byrum wrote:

:OpenStack is big. Big enough that a user will likely be fine with learning
:a new set of tools to manage it.

New users in the startup sense of new, probably.

People with entrenched environments, I doubt it.

But OpenStack is big. Big enough I think all the major config systems
are fairly well represented, so whether I'm right or wrong this
doesn't seem like an issue to me :)

Having common targets (constellations, reference architectures,
whatever) so all the config systems build the same things (or a subset
or superset of the same things) seems like it would have benefits all
around.

-Jon



Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-05 Thread Jonathan Proulx
On Fri, May 05, 2017 at 02:04:43PM -0600, John Griffith wrote:
:On Fri, May 5, 2017 at 11:24 AM, Chris Friesen 

:> Cinder theoretically supports LVM/iSCSI, but if you actually try to use it
:> for anything stressful it falls over.
:>
:
:​Oh really?​
:
:​I'd love some detail on this.  What falls over?

I'm a bit out of date on this personally, but we ditched all iSCSI a
few years ago because we found it generally flaky on Linux.  We were
mostly using Equallogic SANs, both for OpenStack and standalone servers,
but saw the same issues with some other targets as well.

So I wonder if this is a Cinder issue or just a Linux issue.

What we saw fall over was that any slight network bump would permanently
drop the connection to backing storage, requiring a reset.  But as I say,
this was decidedly not a Cinder issue.

-Jon



Re: [openstack-dev] [tc] [all] OpenStack moving both too fast and too slow at the same time

2017-05-04 Thread Jonathan Proulx
On Thu, May 04, 2017 at 04:14:07PM +0200, Thierry Carrez wrote:
:Chris Dent wrote:
:> On Wed, 3 May 2017, Drew Fisher wrote:
:>> "Most large customers move slowly and thus are running older versions,
:>> which are EOL upstream sometimes before they even deploy them."
:> 
:> Can someone with more of the history give more detail on where the
:> expectation arose that upstream ought to be responsible things like
:> long term support? I had always understood that such features were
:> part of the way in which the corporately avaialable products added
:> value?

:In parallel, OpenStack became more stable, so the demand for longer-term
:maintenance is stronger. People still expect "upstream" to provide it,
:not realizing upstream is made of people employed by various
:organizations, and that apparently their interest in funding work in
:that area is pretty dead.

Wearing my Operator hat, I don't really care if "LTS" comes from
upstream or downstream.  I think the upstream expectation has
developed because there have been some upstream efforts and, as far as I
can see, no recent downstream efforts in support of stable releases,
though obviously I mostly pay attention to "my" distro so I may be
missing things in this space.

Having watched this for some time I agree with everything Thierry has
said.

The increasing demand for "LTS"-like releases is definitely a tribute
to the overall maturity of the core services.  I used to be desperate for
the next release, backporting patches into custom packages just to
keep things working.

Now if I believed Ubuntu (which my world, OpenStack and otherwise,
happens to be built on) would provide a direct upgrade path from their
16.04-released OpenStack to whatever lands in their next LTS, I'd
probably sit rather happily on that.  Which is a hugely positive shift.

:I agree that our current stable branch model is inappropriate:
:maintaining stable branches for one year only is a bit useless. But I
:only see two outcomes:
:
:1/ The OpenStack community still thinks there is a lot of value in doing
:this work upstream, in which case organizations should invest resources
:in making that happen (starting with giving the Stable branch
:maintenance PTL a job), and then, yes, we should definitely consider
:things like LTS or longer periods of support for stable branches, to
:match the evolving usage of OpenStack.
:
:2/ The OpenStack community thinks this is better handled downstream, and
:we should just get rid of them completely. This is a valid approach, and
:a lot of other open source communities just do that.
:
:The current reality in terms of invested resources points to (2). I
:personally would prefer (1), because that lets us address security
:issues more efficiently and avoids duplicating effort downstream. But
:unfortunately I don't control where development resources are posted.

Yes, it seems that way to me as well.

Just killing the stable branch model without some plan, either
internal or external, to provide a better stability story seems
like it would send the wrong signal.  So I'd much prefer the distro
people either back option 1) with significant resources so it can
really work or make public commitments to handle option 2) in a
reasonable way.

-Jon



Re: [openstack-dev] [All] IRC Mishaps

2017-02-10 Thread Jonathan Proulx

Well, the worst thing I've done is type and send my password... that
was on an internal work channel, not an OpenStack one, but I think that
only made it more embarrassing!

-Jon

On Wed, Feb 08, 2017 at 08:36:16PM +, Kendall Nelson wrote:
:Hello All!
:
:So I am sure we've all seen it: people writing terminal commands into our
:project channels, misusing '/' commands, etc. But have any of you actually
:done it?
:
:If any of you cores, ptls or other upstanding members of our wonderful
:community have had one of these embarrassing experiences please reply! I am
:writing an article for the SuperUser trying to make us all seem a little
:more human to people new to the community and new to using IRC. It can be
:scary asking questions to such a large group of smart people and its even
:more off putting when we make mistakes in front of them.
:
:So please share your stories!
:
:-Kendall Nelson (diablo_rojo)





Re: [openstack-dev] [osa] [docs] OpenStack-Ansible deploy guide live!

2016-11-30 Thread Jonathan Proulx
On Wed, Nov 30, 2016 at 09:24:19AM -0600, Major Hayden wrote:
:On 11/30/2016 09:03 AM, Alexandra Settle wrote:
:> I am really pleased to announce that the OpenStack-Ansible Deployment
:> Guide is now available on the docs.o.o website! You can view it in all
:> its glory here: http://docs.openstack.org/project-deploy-guide/newton/
:>
:> This now paves the way for many other deployment projects to publish
:> their deployment guides on the docs.o.o website, under "Deployment
:> Guides", and gain more visibility.
:>
:> Any questions about this effort, feel free to contact me directly :)
:
:Awesome!  Great work by everyone involved. ;)

This is perfect timing for me, thanks :)

I have no experience with OSA yet, but we've got a small pile of
hardware we're scheduled to test it out on tomorrow.  I haven't put it
into practice, but from a quick read-through this guide looks like it
will be quite helpful!

-Jon



Re: [openstack-dev] [oslo.config] Encrypt the sensitive options

2016-05-02 Thread Jonathan Proulx
On Mon, May 02, 2016 at 11:41:58AM -0700, Morgan Fainberg wrote:
:On Mon, May 2, 2016 at 11:32 AM, Adam Young  wrote:

:> Kerberos would work, too, for deployments that prefer that form of
:> Authentication.  We can document this, but do not need to implement.
:>
:>
:Never hurts to have alternatives.

Not sure how many people have this use case, but I do, so +1 from me :)

-Jon



Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Jonathan Proulx
On Wed, Mar 02, 2016 at 02:05:40PM -0600, Monty Taylor wrote:

:(try writing an idempotent ansible playbook that tries to make your
:security group look exactly like you want it not knowing in advance
:what security group rules this provider happens to want to give you
:that you didn't think to explicitly look for.)

My approach is just never to use 'default' and only use groups I've
created.

But yes, making default policies obvious and easily discoverable is a
good thing.

-Jon
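Monty's parenthetical about idempotent playbooks is worth unpacking: when the provider injects default rules you didn't ask for, convergence means diffing desired vs. actual rule sets rather than blindly appending. A rough sketch of that reconciliation (the rule tuple fields are illustrative, not the actual Neutron schema):

```python
def reconcile(desired, actual):
    # Rules as hashable tuples: (direction, protocol, port_min, port_max, cidr).
    # Set arithmetic yields the minimal change set instead of blind appends.
    want, have = set(desired), set(actual)
    return want - have, have - want  # (to_add, to_remove)

desired = [("ingress", "tcp", 22, 22, "10.0.0.0/8")]
# Suppose the provider quietly injected a default rule we never asked for:
actual = [("egress", "any", None, None, "0.0.0.0/0")]

to_add, to_remove = reconcile(desired, actual)
print(to_add)     # rules to create
print(to_remove)  # surprise provider rules to delete for exact convergence
```

Running the same reconcile against an already-converged group yields two empty sets, which is what makes the approach idempotent regardless of what rules a given provider seeds.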



Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Jonathan Proulx
On Wed, Mar 02, 2016 at 11:36:17AM -0800, Gregory Haynes wrote:
:Clearly, some operators and users disagree with the opinion that 'by
:default security groups should closed off' given that we have several
:large public providers who have changed these defaults (despite there
:being no documented way to do so), and we have users in this thread
:expressing that opinion. Given that, I am not sure there is any value
:behind us expressing we have different opinions on what defaults should
:be (let alone enforcing them by not allowing them to be configured)
:unless there are some technical reasons beyond 'this is not what my
:policy is, what my customers wants', etc. I also understand the goal of
:trying to make clouds more similar for better interoperability (and I
:think that is extremely important), but the reality is we have created
:a situation where clouds are already not identical here in an even
:worse, undocumented way because we are enforcing a certain set of
:opinions here.


On the topic of 'norms' and interoperability, my operational opinion is
that naive users are unlikely to actually use multiple clouds, or will at
most switch between clouds infrequently, and sophisticated users, for
whom interoperability is more important, will be able to automate
creating their desired security groups, so as long as the API is there
the site default policy is irrelevant.

:To me this is an extremely clear indication that at a minimum the
:defaults should be configurable since discussion around them seems to
:devolve into different opinions on security policies, and there is no
:way we should be in the business of dictating that.

Yes, that!

-Jon

:Cheers, Greg





Re: [openstack-dev] [neutron] - Changing the Neutron default security group rules

2016-03-03 Thread Jonathan Proulx
On Wed, Mar 02, 2016 at 10:19:50PM +, James Denton wrote:
:My opinion is that the current stance of 'deny all' is probably the safest bet
:for all parties (including users) at this point. It's been that way for years
:now, and is a substantial change that may result in little benefit. After all,
:you're probably looking at most users removing the default rule(s) just to add
:something that's more restrictive and suits their organization's security
:posture. If they aren't, then it's possible they're introducing unnecessary
:risk.


I agree wholeheartedly that reversing the default would be
disastrous.

It would be good if a site could define its own default, so I could
say allow ssh from 'our' networks by default (but not the whole
internet), or maybe even further restrict egress traffic so that it
could only talk to internal hosts.

To go a little further down my wish list, I'd really like to be able
to offer a standard selection of security groups for my site, not just
'default', but that may be a bit off this topic.  Briefly, my
motivation is that 'internal' here includes a number of different
netblocks, some with pretty weird masks, so users tend to use 0.0.0.0/0
when they don't really mean to, just to save some rather tedious
typing at setup time.

-Jon


:
:There should be some onus put on the provider and/or the user/project/tenant
:to develop a default security policy that meets their needs, even going so far
:as to make the configuration of their default security group the first thing
:they do once the project is created. Maybe some changes to the workflow in
:Horizon could help mitigate some issues users are experiencing with limited
:access to instances by allowing them to apply some rules at the time of
:instance creation rather than associating groups consisting of unknown rules.
:Or allowing changes to the default security group rules of a project when that
:project is created. There are some ways to enable providers/users to help
:themselves rather than a blanket default change across all environments. If I'm
:a user utilizing multiple OpenStack providers, I'm probably bringing my own
:security groups and rules with me anyway and am not relying on any provider
:defaults.
: 
:
:James
:
:
:
:
:
:
:
:On 3/2/16, 3:47 PM, "Jeremy Stanley"  wrote:
:
:>On 2016-03-02 21:25:25 + (+), Sean M. Collins wrote:
:>> Jeremy Stanley wrote:
:>> > On 2016-03-03 07:49:03 +1300 (+1300), Xav Paice wrote:
:>> > [...]
:>> > > In my mind, the default security group is there so that as people
:>> > > are developing their security policy they can at least start with
:>> > > a default that offers a small amount of protection.
:>> > 
:>> > Well, not a small amount of protection. The instances boot
:>> > completely unreachable from the global Internet, so this is pretty
:>> > significant protection if you consider the most secure system is one
:>> > which isn't connected to anything.
:>> 
:>> This is only if you are booting on a v4 network, which has NAT enabled.
:>> Many public providers, the network you attach to is publicly routed, and
:>> with the move to IPv6 - this will become more common. Remember, NAT is
:>> not a security device.
:>
:>I agree that address translation is a blight on the Internet, useful
:>in some specific circumstances (such as virtual address load
:>balancing) but otherwise an ugly workaround for dealing with address
:>exhaustion and connecting conflicting address assignments. I'll be
:>thrilled when its use trails off to the point that newcomers cease
:>thinking that's what connectivity with the Internet is supposed to
:>be like.
:>
:>What I was referring to in my last message was the default security
:>group policy, which blocks all ingress traffic. My point was that
:>dropping all inbound connections, while a pretty secure
:>configuration, is unlikely to be the desired configuration for
:>_most_ servers. The question is whether there's enough overlap in
:>different desired filtering policies to come up with a better
:>default than one everybody has to change because it's useful for
:>basically nobody, or whether we can come up with easier solutions
:>for picking between a canned set of default behaviors (per Monty's
:>suggestion) which users can expect to find in every OpenStack
:>environment and which provide consistent behaviors across all of
:>them.
:>-- 
:>Jeremy Stanley
:>


Re: [openstack-dev] [all] A proposal to separate the design summit

2016-02-23 Thread Jonathan Proulx
On Tue, Feb 23, 2016 at 10:14:11PM +0800, Qiming Teng wrote:

:My take of this is that we are saving the cost by isolating developers
:(contributors) from users/customers.

I'm a little concerned about this as well.  Though presumably at least
the PTLs would still attend the User/Ops conference, even if their
project didn't co-schedule a midcycle, and while there they could be more
focused on that user feedback rather than splitting their attention
between implementation details and other design-summit-type issues.

I'm not entirely settled in my opinion yet, but right now the proposed
changes seem like a good direction to me.

Moving the design summit seems popular with the dev community here.

Moving the User/Ops session to further after the release also seems like a
good plan, as there will be some people there with real production
experience with the new release.  In Tokyo we had an Operators session
on upgrade issues with Liberty that was very well attended, but exactly
zero attendees had actually run the upgrade in production.

So later in the cycle is definitely better for getting feedback on
the last release, but is there a good plan for how that feedback will
feed into the next release (or maybe at that point it will be next+1)?

-Jon 



Re: [openstack-dev] [Openstack-operators] [stable][all] Keeping Juno "alive" for longer.

2015-11-06 Thread Jonathan Proulx
On Fri, Nov 06, 2015 at 05:28:13PM +, Mark Baker wrote:
:Worth mentioning that OpenStack releases that come out at the same time as
:Ubuntu LTS releases (12.04 + Essex, 14.04 + Icehouse, 16.04 + Mitaka) are
:supported for 5 years by Canonical so are already kind of an LTS. Support
:in this context means patches, updates and commercial support (for a fee).
:For paying customers 3 years of patches, updates and commercial support for
:April releases, (Kilo, O, Q etc..) is also available.


And Canonical will support a live upgrade directly from Essex to
Icehouse, and Icehouse to Mitaka?

I'd love to see Shuttleworth do that as a live keynote, but only
on a system with at least hundreds of nodes and many VMs...

That's where LTS falls down conceptually: we're struggling to make
single-release upgrades work at this point.

I do agree LTS releases would be great, but honestly OpenStack isn't
mature enough for that yet.

-Jon



Re: [openstack-dev] [Neutron] Allow for per-subnet dhcp options

2015-09-12 Thread Jonathan Proulx
On Fri, Sep 11, 2015 at 3:43 PM, Kyle Mestery <mest...@mestery.com> wrote:
> On Fri, Sep 11, 2015 at 2:04 PM, Jonathan Proulx <j...@jonproulx.com> wrote:
>>
>> I'm hurt that this blue print has seen no love in 18 months:
>> https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
>>
>
> This BP has no RFE bug or spec filed for it, so it's hard to be on anyone's
> radar when it's not following the submission guidelines Neutron has for new
> work [1]. I'm sorry this has flown under the radar so far, hopefully it can
> rise up with an RFE bug.
>
> [1] http://docs.openstack.org/developer/neutron/policies/blueprints.html

Fair.  Does look like there was never anything behind this BP, so it's
my own fault for not looking deeper 18 months ago and noticing that.  I
like the RFE bug tag, though that part of the process is news to me
(good news).

Thanks,
-Jon



[openstack-dev] [Neutron] Allow for per-subnet dhcp options

2015-09-11 Thread Jonathan Proulx
I'm hurt that this blueprint has seen no love in 18 months:
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet

I need different MTUs and different domains on different subnets.  It
appears there is still no way to do this other than running a network
node (or two if I want HA) for each subnet.

Please someone tell me I'm a fool and there's an easy way to do this
that I failed to find (again)...

-Jon



[openstack-dev] [Neutron] Allow for per-subnet dhcp options

2014-09-11 Thread Jonathan Proulx
Hi All,

I'm hoping to get this blueprint
https://blueprints.launchpad.net/neutron/+spec/dhcp-options-per-subnet
some love... it seems it's been hanging around since January, so my
assumption is it's not going anywhere.

As a private cloud operator I make heavy use of vlan based provider
networks to plug VMs into existing datacenter networks.

Some of these are Jumbo frame networks and some use standard 1500 MTUs
so I really want to specify the MTU per subnet, there is currently no
way to do this.  I can get it globally in dnsmasq.conf or I can set it
per port using extra-dhcp-opt neither of which really do what I need.
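For what it's worth, the behavior I'm after is trivial to express at the
dnsmasq layer; a per-subnet implementation would essentially just need to
render something like this into the agent's opts file (the subnet tags
here are hypothetical illustrations, not anything Neutron generates
today):

```
# Sketch of a dnsmasq opts file a per-subnet implementation could render
# (tag names are made up for illustration):
dhcp-option=tag:subnet-jumbo,option:mtu,9000
dhcp-option=tag:subnet-std,option:mtu,1500
dhcp-option=tag:subnet-jumbo,option:domain-search,jumbo.example.com
dhcp-option=tag:subnet-std,option:domain-search,std.example.com
```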

Given that extra-dhcp-opt is implemented per port, it seems to me that
making a similar implementation per subnet would not be a difficult
task for someone familiar with the code.

I'm not that person but if you are, then you can be my Neutron hero
for the next release cycle :)

-Jon

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [Neutron] nova-network as ML2 mechanism?

2014-07-29 Thread Jonathan Proulx
Hi All,

Would making a nova-network mechanism driver for the ML2 plugin be possible?

I'm an operator not a developer so apologies if this has been
discussed and is either planned or impossible, but a quick web search
didn't hit anything.

As an operator I would envision this as a transition mechanism, which
AFAIK is still lacking, between nova-network and neutron.

If a DB transition script similar to the ovs-ml2 conversion could be
created, operators could transition their controller/network nodes to
neutron while initially leaving the compute nodes running their active
nova-network configs.  It's a much simpler matter for most
operators I think to then do rolling upgrades of compute hosts to
proper neutron agents either by live migrating existing VMs or simply
through attrition.  And this would preserve continuity of VMs through
the upgrade (these may be cattle but you still don't want to slaughter
all  of them at once!)

This is no longer my use case as I jumped into neutron with Grizzly,
but having just transitioned to Icehouse and ML2, it got me to
thinking.  If this sounds feasible from a development standpoint I'd
recommend taking the discussion to the operators list to see if others
share my opinion before doing major work in that direction.

Just a thought,
-Jon



Re: [openstack-dev] Fwd: Fwd: Debian people don't like bash8 as a project name (Bug#748383: ITP: bash8 -- bash script style guide checker)

2014-06-11 Thread Jonathan Proulx
On Wed, Jun 11, 2014 at 2:30 PM, Morgan Fainberg
morgan.fainb...@gmail.com wrote:


 On 06/11/2014 02:01 PM, Sean Dague wrote:
 Honestly, I kind of don't care. :)

 +1 :-)

 +1 yep. that about covers it.

Ordinarily I'd agree that naming is a bikeshed argument, but a
projectname-plus-integer name strongly suggests that the package contains
that major version of the project.  Now it will be a long time before
bash gets to 8.0, but it's still a pretty bad name.  Though I don't
care enough to write more than 5 sentences about it, I will think it's
dumb every time I see it.

Well off to write a bash script that generates ASCII art snakes and
cats, gonna call it python4 (pronounced python-quatre).



[openstack-dev] [Nova] os-migrateLive not working with neutron in Havana (or apparently Grizzly)

2014-02-04 Thread Jonathan Proulx
HI all,

Trying to get a little love on bug https://bugs.launchpad.net/nova/+bug/1227836

Short version is the instance migrates, but there's an RPC timeout
that keeps nova thinking it's still on the old node mid-migration.
An informal survey of operators seems to suggest this always happens
when using neutron networking and never when using nova-network (for
small values of always and never).

Feels like I could kludge in a longer timeout somewhere and it would
work for now, so I'm sifting through unfamiliar code trying to find
that and hoping someone here just knows where it is and can make my
week a whole lot better by pointing it out.

Better less kludgy solutions also welcomed, but I need a kernel update
on all my compute nodes so quick and dirty is all I need for right
now.
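For anyone following along, the kludge I have in mind is the generic RPC
reply timeout.  I believe (not certain this is the right knob) it would
look something like this in nova.conf; the default is 60 seconds and 180
is just a guess:

```
# nova.conf sketch -- a kludge, not a fix: bump the RPC reply timeout
# so the post-migration status update stops timing out.
[DEFAULT]
rpc_response_timeout = 180
```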

Thanks,
-Jon



Re: [openstack-dev] [Nova] Grizzly - Havanna db migration bug/1245502 still biting me?

2013-12-30 Thread Jonathan Proulx
Ah, because the patched version won't work if you've already run the
unpatched version.  It would be nice if that had been captured in the
bug, but I did dig it out of
http://lists.openstack.org/pipermail/openstack/2013-October/002481.html

On Mon, Dec 30, 2013 at 1:25 PM, Jonathan Proulx j...@jonproulx.com wrote:
 Hi All,

 I'm mid upgrade between Grizzly and Havana and seem to still be having
 issues with https://bugs.launchpad.net/nova/+bug/1245502

 I grabbed the patched version of 185_rename_unique_constraints.py but
 that migration still keeps failing with many issues trying to drop
 nonexistent indexes and add preexisting indexes.

 for example:
 CRITICAL nova [-] (OperationalError) (1553, Cannot drop index
 'uniq_instance_type_id_x_project_id_x_deleted': needed in a foreign
 key constraint) 'ALTER TABLE instance_type_projects DROP INDEX
 uniq_instance_type_id_x_project_id_x_deleted' ()


 I'm on Ubuntu 12.04 having originally installed Essex and
 progressively upgraded using cloud archive packages since.

 # nova-manage  db version
 184

 a dump of my nova database schema as it currently stands is at:
 https://gist.github.com/jon-proulx/79a77e8b771f90847ae9

 The bug is marked fix released, but one of the last comments requests
 schemata from affected systems, so I'm not sure whether replying to the
 bug is appropriate or if this should be a new bug.

 -Jon



Re: [openstack-dev] [nova] why are we backporting low priority v3 api fixes to v2?

2013-12-02 Thread Jonathan Proulx
On Mon, Dec 2, 2013 at 9:27 AM, Joe Gordon joe.gord...@gmail.com wrote:


 I don't think we should be blocking them per-se as long as they fit the API
 change guidelines https://wiki.openstack.org/wiki/APIChangeGuidelines.

Agreed; possibly not what one would assign developers to do, but as an
open project, if it is important enough to someone that they've already
done the work, why not accept the change?

-Jon



Re: [openstack-dev] [nova] future fate of nova-network?

2013-11-22 Thread Jonathan Proulx
To add to the screams of others: removing features from nova-network to
achieve parity with neutron is a non-starter, and it rather scares me
to hear it suggested.

I do try not to rant in public, especially about things I'm not
competent to really help fix, but I can't really contain this one any
longer:

rant
As an operator I've moved my cloud to neutron already, but while it
provides many advanced features it still really falls down on
providing simple solutions for simple use cases.  Most operators I've
talked to informally hate it for that and don't want to go near it and
for new users, even those with advanced skill sets, neutron causes by
far the most cursing and rage quits I've seen (again just my
subjective observation) on IRC, Twitter, and the mailing lists.

Providing feature parity and easy cutover *should have been* priority
1 when quantum split out of nova, as it was for cinder (which was a
delightful and completely unnoticeable transition).

We need feature parity and complexity parity with nova-network for the
use cases it covers.  The failure to do so or even have a reasonable
plan to do so is currently the worst thing about OpenStack.
/rant

I do appreciate the work being done on advanced networking features in
neutron, I'm even using some of them, just someone please bring focus
back on the basics.

-Jon



Re: [openstack-dev] [Openstack-dev][nova] Disable per-user rate limiting by default

2013-07-26 Thread Jonathan Proulx
On Fri, Jul 26, 2013 at 1:01 PM, Jay Pipes jaypi...@gmail.com wrote:

 On 07/25/2013 08:24 PM, Joshua Harlow wrote:

 You mean process/forking API right?

 Honestly I'd sort of think the whole limits.py that is this
 rate-limiting could also be turned off by default (or a log warn message
 occurs) when multi-process nova-api is used since the control for that
 paste module actually returns the currently enforced limits (and how
 much remaining) and on repeated calls to different processes those
  values will actually be different.  This adds to the confusion that this
 rate-limiting in-memory/process solution creates which does also seem bad.

  https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/limits.py

 Maybe we should not have that code in nova in the future, idk


Agreed
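The divergence Joshua describes is easy to demonstrate.  Here's a
minimal sketch (not the actual nova code) of two forked API workers
each holding its own in-memory rate-limit counter:

```python
# Minimal sketch (not the nova implementation): two API worker
# processes each keep their own in-memory rate-limit counter, so the
# "remaining" value a client sees depends on which worker answered.
class Worker:
    def __init__(self, limit=10):
        self.remaining = limit

    def handle(self):
        self.remaining -= 1
        return self.remaining

workers = [Worker(), Worker()]     # e.g. two forked nova-api processes
for i in range(6):                 # six requests, round-robin balanced
    workers[i % 2].handle()

# Each worker reports 7 requests remaining, though globally only 4 are.
print([w.remaining for w in workers])
```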


  +10. Like using SSL in the Python daemons, it doesn't belong in a
 production Nova deployment. This kind of thing is more appropriate to
 handle in some external terminator, IMO


Strongly disagree about SSL.  Anything that talks on the network should be
able to do so securely.  It is valid to want to abstract that away for
someone else to deal with but if that is the case it should be done
explicitly, like writing WSGI apps and requiring a server to do network
communications.

-Jon