Re: [Openstack-operators] [User-committee] User Committee elections

2017-02-18 Thread Maish Saidel-Keesing
Congratulations to Both Melvin and Shamail!!

I am sure you will do an awesome job.

Looking forward to continuing the work we have started to make the entire
OpenStack community a better one.

On Sat, Feb 18, 2017 at 7:47 AM, Edgar Magana 
wrote:

> Congratulations to our elected User Committee Members!
>
>
>
> This is a huge achievement for the UC. Together we are going to make a
> huge impact in OpenStack. Looking forward to working with you, Shamail
> and Melvin.
>
>
>
> Chris, Yih Leong and Maish,
>
>
>
> I want to thank you for being part of our efforts. We are all one team and
> I also look forward to working with you.
>
>
>
> Edgar
>
>
>
> *From: *Matt Jarvis 
> *Reply-To: *"m...@mattjarvis.org.uk" 
> *Date: *Friday, February 17, 2017 at 4:10 PM
> *To: *user-committee , OpenStack
> Operators 
> *Subject: *[User-committee] User Committee elections
>
>
>
> Hi All
>
>
>
> I'm very pleased to announce the results of the first User Committee
> elections, which closed at 21:59 UTC on the 17th February 2017.
>
>
>
> The two candidates elected to the User Committee are Melvin Hillsman and
> Shamail Tahir.
>
>
>
> Congratulations to Melvin and Shamail, I know they will both do an
> excellent job in representing our community.
>
>
>
> Thank you to the other candidates who participated, and to everyone who
> voted.
>
>
>
> Full results of the poll can be found at
> http://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_4ec37f2d06428110.
>
>
>
> Matt
>
>
>
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
>
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] User Committee Election

2017-02-13 Thread Maish Saidel-Keesing
Hi Mark


On 13/02/17 11:42, Mark Baker wrote:
> Where can I see information about the candidates?
>
Here are the Statements from each of the candidates.

http://lists.openstack.org/pipermail/user-committee/2017-February/001652.html
- Christopher Aedo
http://lists.openstack.org/pipermail/user-committee/2017-February/001635.html
- Melvin Hillsman
http://lists.openstack.org/pipermail/user-committee/2017-February/001661.html
- Shamail Tahir
http://lists.openstack.org/pipermail/user-committee/2017-February/001680.html
- Maish Saidel-Keesing
http://lists.openstack.org/pipermail/user-committee/2017-February/001687.html
- Yih Leong Sun

>
>
> Best Regards
>
>
> Mark Baker
>
> On 11 February 2017 at 04:11, Matt Van Winkle <mvanw...@rackspace.com
> <mailto:mvanw...@rackspace.com>> wrote:
>
> Greetings!
>
>  
>
> The time for our very first User Committee election is almost upon
> us.  In case you are unaware, there were recent changes – approved
> by the Board of Directors – that allowed for the expansion of the
> UC from 3 to 5 members and a move to elected positions.  Monday
> morning UTC, February 13th, notifications will go out to all
> community members with the AUC designation.  The poll will stay
> open till 23:59 UTC on Friday, February 17th.
>
>  
>
> This election, members will be voting on two additional seats for
> the UC.  We have a pool of five vetted candidates:
>
>  
>
> Yih Leong Sun
>
> Maish Saidel-Keesing
>
> Christopher Aedo
>
> Shamail Tahir
>
> Melvin Hillsman
>
>  
>
> We encourage all AUC members to participate in order to maintain a
> strong and active UC.  Many thanks to the current Committee and
> many other community members who have worked very hard over the
> last couple of years to implement the AUC process, develop the new
> structure and obtain all the necessary approval and by-laws
> changes to make it a reality.  For information on the new UC
> approach, the details of the AUC designation or any other
> questions around this area of OpenStack governance, you may refer
> to the charter online [1].
>
>  
>
> If you have any other election-related questions, you may reach
> out directly to the election inspectors – Matt Van Winkle
> (v...@rackspace.com) or Matt Jarvis (m...@mattjarvis.org.uk).
>
>  
>
> Thank you all, and happy voting!
>
> Matt and Matt
>
>  
>
> [1] https://governance.openstack.org/uc/reference/charter.html
> <https://governance.openstack.org/uc/reference/charter.html> 
>
>  
>
>   
>
>  
>
>

-- 
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] UC Bylaws Changes - Approved!

2016-12-20 Thread Maish Saidel-Keesing
That is awesome news!



On 06/12/16 18:26, Edgar Magana wrote:
>
> Team,
>
>  
>
> I am glad to inform you that the BoD has just approved our proposal
> to change the OpenStack Bylaws:
>
> https://docs.google.com/document/d/1QmLOeseAkjBWM_TXsUeKBErNaSHnuZp81II0T71ARfo/edit?usp=sharing
>
>  
>
> This is a huge step to bring the visibility and empowerment that we
> were looking to have from the OpenStack User perspective.
>
>  
>
> I want to thank you all! You made this possible because of your
> support, guidance and motivation.
>
>  
>
> We have a lot more work to do, but for today we can take a break and
> celebrate.
>
>  
>
> OpenStack User Committee,
>
> Shilla Saebi, Jon Proulx and Edgar Magana
>

-- 
Best Regards,
Maish Saidel-Keesing


[Openstack-operators] [AUC] Cancelling today's AUC WG meeting

2016-09-08 Thread Maish Saidel-Keesing
Due to unforeseen circumstances, today's AUC meeting will _*not*_
be held.

Catch you all next week.

-- 
Best Regards,
Maish Saidel-Keesing



Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-11 Thread Maish Saidel-Keesing

On 11/05/16 22:18, Chris Friesen wrote:
> On 05/11/2016 11:46 AM, Ronald Bradford wrote:
>> I have been curious as to why, as mentioned in the thread,
>> virt_type=kvm but the
>> os-hypervisors API call states QEMU.
>
> Arguably in both cases the hypervisor is qemu.  When virt_type=kvm we
> simply enable some additional acceleration.
>
> So rather than asking "Are you using qemu or kvm?", it would be more
> accurate to ask "Are you using hardware-accelerated qemu or just
> software emulation?".

And how would OpenStack present the difference to the Operator?

In the meantime - I have opened up this bug [1]

> Chris
>
>
[1] https://bugs.launchpad.net/horizon/+bug/1580746
-- 
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-11 Thread Maish Saidel-Keesing

-- 
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-11 Thread Maish Saidel-Keesing
Which still brings me back to the original point.

Is this a bug - and should it be reported as such?



On 11/05/16 18:51, Kashyap Chamarthy wrote:
> On Tue, May 03, 2016 at 02:27:00PM -0500, Sergio Cuellar Valdes wrote:
>
> [...]
>
>> I'm confused too about the use of KVM or QEMU. In the computes the
>> file /etc/nova/nova-compute.conf has:
>>
>> virt_type=kvm
>>
>> The output of:
>>
>> nova hypervisor-show  | grep hypervisor_type
>>
>> is:
>>
>> hypervisor_type   | QEMU
> As Dan noted in his response, it's because it is reporting the libvirt driver
> name (which is reported as QEMU).
>
> Refer below if you want to double-confirm if your instances are using KVM.
>
>> The virsh dumpxml of the instances shows:
>>
>> 
> That means, yes, you are using KVM.  You can confirm that by checking the
> QEMU command line of the Nova instance; you'll see something like "accel=kvm":
>
>   # This is on Fedora 23 system
>   $ ps -ef | grep -i qemu-system-x86_64
>   [...] /usr/bin/qemu-system-x86_64 -machine accel=kvm [...]
>
>> 
>> /usr/bin/qemu-system-x86_64
>>
>> But according to this document [1], it is using the QEMU emulator instead of
>> KVM, because it is not using /usr/bin/qemu-kvm
>>
>>
>> So I really don't know if it's using KVM or QEMU.
> As noted above, a sure-fire way to know is to see if the instance's QEMU
> command-line has "accel=kvm".
>
> A related useful tool is `virt-host-validate` (which is part of libvirt-client
> package, at least on Fedora-based systems):
>
>    $ virt-host-validate | egrep -i 'kvm'
>      QEMU: Checking if device /dev/kvm exists          : PASS
>      QEMU: Checking if device /dev/kvm is accessible   : PASS
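The accel=kvm check quoted above can be scripted against `ps` output. Below is a minimal sketch; the process path and flags are the ones quoted in this thread, while the function name is an invention for illustration:

```python
def is_kvm_accelerated(cmdline: str) -> bool:
    """Return True if a qemu process command line requests KVM acceleration.

    QEMU exposes acceleration either via '-machine accel=kvm' or the
    older '-enable-kvm' flag.
    """
    return "accel=kvm" in cmdline or "-enable-kvm" in cmdline

# Command line as quoted earlier in this thread
ps_line = "/usr/bin/qemu-system-x86_64 -machine accel=kvm"
print(is_kvm_accelerated(ps_line))  # True
print(is_kvm_accelerated("/usr/bin/qemu-system-x86_64 -machine pc"))  # False
```

Run per compute host against each Nova instance's qemu process; a False on a host configured with virt_type=kvm would be worth investigating.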
>
>
>> [1] https://libvirt.org/drvqemu.html
>>
>

-- 
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] User Survey usage of QEMU (as opposed to KVM) ?

2016-05-03 Thread Maish Saidel-Keesing
I would think that the problem is that OpenStack does not really report
back that you are using KVM - it reports that you are using QEMU.

Even when I have configured virt_type=kvm in nova.conf, when I run nova
hypervisor-show XXX | grep hypervisor_type

I am presented with the following

| hypervisor_type   | QEMU

Bug?
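(Arguably not a bug: the API field reports the libvirt driver name, which is QEMU either way, while the kvm-vs-qemu distinction lives in the domain XML's type attribute. A rough sketch of checking that attribute, with invented dumpxml fragments for illustration:)

```python
import xml.etree.ElementTree as ET

def virt_mode(domain_xml: str) -> str:
    """Return the <domain type='...'> attribute: 'kvm' means hardware
    acceleration, 'qemu' means pure software emulation."""
    return ET.fromstring(domain_xml).get("type", "unknown")

# Invented `virsh dumpxml` fragments for illustration only
accelerated = "<domain type='kvm'><name>vm1</name></domain>"
emulated = "<domain type='qemu'><name>vm2</name></domain>"
print(virt_mode(accelerated))  # kvm
print(virt_mode(emulated))     # qemu
```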


On 03/05/16 18:01, Daniel P. Berrange wrote:
> Hello Operators,
>
> One of the things that constantly puzzles me when reading the user
> survey results wrt hypervisor is the high number of respondents
> claiming to be using QEMU (as distinct from KVM).
>
> As a reminder, in Nova saying virt_type=qemu causes Nova to use
> plain QEMU with pure CPU emulation which is many many times slower
> than native CPU performance, while virt_type=kvm causes Nova to
> use QEMU with KVM hardware CPU acceleration which is close to native
> performance.
>
> IOW, virt_type=qemu is not something you'd ever really want to use
> unless you had no other options due to the terrible performance it
> would show. The only reasons to use QEMU are if you need non-native
> architecture support (ie running arm/ppc on x86_64 host), or if you
> can't do KVM due to hardware restrictions (ie ancient hardware, or
> running compute hosts inside virtual machines)
>
> Despite this, in the 2016 survey 10% claimed to be using QEMU in
> production & 3% in PoC and dev, in 2014 it was even higher at 15%
> in prod & 12% in PoC and 28% in dev.
>
> Personally my gut feeling says that QEMU usage ought to be in very
> low single figures, so I'm curious as to the apparent anomaly.
>
> I can think of a few reasons
>
>  1. Respondents are confused as to the difference between QEMU
> and KVM, so are saying QEMU, despite fact they are using KVM.
>
>  2. Respondents are confused as to the difference between QEMU
> and KVM, so have mistakenly configured their nova hosts to
> use QEMU instead of KVM and suffering poor performance without
> realizing their mistake.
>
>  3. There are more people than I expect who are running their
> cloud compute hosts inside virtual machines, and thus are
> unable to use KVM.
>
>  4. There are more people than I expect who are providing cloud
> hosting for non-native architectures. eg ability to run an
> arm7/ppc guest image on an x86_64 host and so genuinely must
> use QEMU
>
> If items 1 / 2 are the cause, then by implication the user survey
> is likely under-reporting the (already huge) scale of the KVM usage.
>
> I can see 3. being a likely explanation for high usage of QEMU in a
> dev or PoC scenario, but it feels unlikely for a production deployment.
>
> While 4 is technically possible, Nova doesn't really do a very good
> job at mixed guest arch hosting - I'm pretty sure there are broken
> pieces waiting to bite people who try it.
>
> Does anyone have any thoughts on this topic ?
>
> Indeed, is there anyone here who genuinely use virt_type=qemu in a
> production deployment of OpenStack who might have other reasons that
> I've missed ?
>
> Regards,
> Daniel

-- 
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-09 Thread Maish Saidel-Keesing

On 11/09/15 22:18, matt wrote:
Hell.  There's no clear upgrade path, and no guaranteed matched 
functionality just for starters.


Also most enterprise deployments do 3 to 5 year deployment plans.   
This ties into how equipment / power / resources are budgeted in the 
project plans.  They don't work with this mentality of rapid release 
cycles.


We assumed early on that the people deploying OpenStack would be more 
agile because of the ephemeral nature of cloud.  That's not really 
what's happening. There are good and bad reasons for that.  One good 
reason is policy certification.  By the time a team has prepped, 
built, tested an environment and is moving to production it's already 
been an entire release ( or two since most ops refuse to use a fresh 
release for stability reasons ).  By the time it passes independent 
security / qa testing and development workflows for deploying apps to 
the environment it's been 3-4 releases or more. But more often than 
not the problem is that most of the VM workloads aren't good with ephemeral 
and mandating downtime on systems is an onerous change control 
process, making the upgrade process for the environment very 
difficult and time-consuming.


More than that, vendors that provide extra (sometimes necessary) 
additions to openstack, such as switch vendors, take at least a few 
months to test a new release and certify their drivers for 
deployment.  Most folks aren't even beginning to deploy a fresh 
release of openstack EVEN if they wanted to until it's been out for at 
least six months.   It's not like they can really test pre-rc releases 
and expect their tests to mean anything.


There's almost no one riding the wave of new deployments.


Matt - every word above is golden. Well said!


On Mon, Nov 9, 2015 at 3:06 PM, Tom Cameron <tom.came...@rackspace.com 
<mailto:tom.came...@rackspace.com>> wrote:


>I would not call that the extreme minority.
>I would say a good percentage of users are only getting to
Juno now.

The survey seems to indicate lots of people are on Havana,
Icehouse and Juno in production. I would love to see the survey
ask _why_ people are on older versions because for many operators
I suspect they forked when they needed a feature or function that
didn't yet exist, and they're now stuck in a horrible parallel
universe where upstream has not only added the missing feature but
has also massively improved code quality. Meanwhile, they can't
spend the person hours on either porting their work into the new
Big Tent world we live in, or can't bear the thought of having to
throw away their hard-earned tech debt. For more on this, see the
myth of the "sunk cost".

If it turns out people really are deploying new clouds with old
versions on purpose because of a perceived stability benefit, then
they aren't reading the release schedule pages close enough to see
that what they're deploying today will be abandoned soon in the
future. In my _personal_ opinion which has nothing to do with
Openstack or my employer, this is really poor operational due
diligence.

If, however, a deployer has been working on a proof of concept for
18-24 months and they're now ready to go live with their cloud
running a release from 18-24 months ago, I have sympathy for them.
The bigger the deployment, the harder this one is to solve which
makes it a prime candidate for the LTS strategy.

Either way, we've lost the original conversation long ago. It
sounds like we all agree that an LTS release strategy suits most
needs but also that it would take a lot of work that hasn't yet
been thought of or started. Maybe there should be a session in
Austin for this topic after blueprints are submitted and
discussed? It would be nice to have the operators and developers
input in a single place, and to get this idea on the radar of all
of the projects.

--
Tom Cameron


________
    From: Maish Saidel-Keesing <mais...@maishsk.com
<mailto:mais...@maishsk.com>>
Sent: Monday, November 9, 2015 14:29
To: Tom Cameron; Jeremy Stanley;
openstack-operators@lists.openstack.org
<mailto:openstack-operators@lists.openstack.org>
Subject: Re: [Openstack-operators] [openstack-dev] [stable][all]
Keeping Juno "alive" for longer.

On 11/09/15 21:01, Tom Cameron wrote:
>  From your other thread...
>
>> Or else you're saying you intend to fix the current inability
of our projects to skip intermediate releases entirely during upgrades
> I think without knowing it, that's what most would be
suggesting, yeah. Of course, like you mentioned, the real work is
in how upgrades get refactored to skip intermediate releases (two
or three of them).
>
> DB schema changes can basically be rolled up 

Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno "alive" for longer.

2015-11-09 Thread Maish Saidel-Keesing

On 11/09/15 22:06, Tom Cameron wrote:

I would not call that the extreme minority.
I would say a good percentage of users are only getting to Juno now.

The survey seems to indicate lots of people are on Havana, Icehouse and Juno in 
production. I would love to see the survey ask _why_ people are on older versions because 
for many operators I suspect they forked when they needed a feature or function that 
didn't yet exist, and they're now stuck in a horrible parallel universe where upstream 
has not only added the missing feature but has also massively improved code quality. 
Meanwhile, they can't spend the person hours on either porting their work into the new 
Big Tent world we live in, or can't bear the thought of having to throw away their 
hard-earned tech debt. For more on this, see the myth of the "sunk cost".

If it turns out people really are deploying new clouds with old versions on 
purpose because of a perceived stability benefit, then they aren't reading the 
release schedule pages close enough to see that what they're deploying today 
will be abandoned soon in the future. In my _personal_ opinion which has 
nothing to do with Openstack or my employer, this is really poor operational 
due diligence.

I don't think people are deploying old clouds or old versions.
They are just stuck on older versions. Why? Because (as Matt said in his 
reply) the upgrade process is hell! And when your environment grows past a 
certain point, if you have to upgrade say 100 hosts, it can take a good 
couple of months to get the quirks fixed and sorted out, and then you 
have to start all over again, because the next release just came out.


A constant game of chasing your tail to keep up with the technology is 
one that some are not willing to play (or are not capable of playing).


If, however, a deployer has been working on a proof of concept for 18-24 months 
and they're now ready to go live with their cloud running a release from 18-24 
months ago, I have sympathy for them. The bigger the deployment, the harder 
this one is to solve which makes it a prime candidate for the LTS strategy.

Either way, we've lost the original conversation long ago. It sounds like we 
all agree that an LTS release strategy suits most needs but also that it would 
take a lot of work that hasn't yet been thought of or started. Maybe there 
should be a session in Austin for this topic after blueprints are submitted and 
discussed? It would be nice to have the operators and developers input in a 
single place, and to get this idea on the radar of all of the projects.

--
Tom Cameron


____
From: Maish Saidel-Keesing <mais...@maishsk.com>
Sent: Monday, November 9, 2015 14:29
To: Tom Cameron; Jeremy Stanley; openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno 
"alive" for longer.

On 11/09/15 21:01, Tom Cameron wrote:

  From your other thread...


Or else you're saying you intend to fix the current inability of our projects 
to skip intermediate releases entirely during upgrades

I think without knowing it, that's what most would be suggesting, yeah. Of 
course, like you mentioned, the real work is in how upgrades get refactored to 
skip intermediate releases (two or three of them).

DB schema changes can basically be rolled up and kept around for a while, so 
that's not too big a problem. Config files OTOH have no schema or schema 
validator, so that would require tooling and all kinds of fun (bug prone) 
wizardry.
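As a toy illustration of the kind of tooling that paragraph alludes to (the option names below are hypothetical, not a real Nova sample), one could at least flag options an operator's old config sets that a newer release's sample config no longer mentions:

```python
import configparser

def stale_options(old_cfg: str, new_sample: str) -> set:
    """Flag options set in an old config but absent from a newer release's
    sample config -- candidates for manual review before a skip-level upgrade."""
    old = configparser.ConfigParser()
    new = configparser.ConfigParser()
    old.read_string(old_cfg)
    new.read_string(new_sample)
    stale = set()
    for section in old.sections():
        for option in old.options(section):
            if not (new.has_section(section) and new.has_option(section, option)):
                stale.add(f"{section}.{option}")
    return stale

# Hypothetical config fragments spanning a skipped release
old_conf = "[libvirt]\nvirt_type = kvm\nuse_usb_tablet = true\n"
new_sample = "[libvirt]\nvirt_type = kvm\n"
print(stale_options(old_conf, new_sample))  # {'libvirt.use_usb_tablet'}
```

This only catches removals, not renames or semantic changes, which is exactly why the "fun wizardry" remark holds.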

This is all solvable, but it adds complexity for the sake of what I can only 
imagine are the extreme minority of users. What do the user/operator surveys 
say about the usage of older releases? What portion of the user base is 
actually on releases prior to Havana?

I would not call that the extreme minority.
I would say a good percentage of users are only getting to Juno now.

--
Tom Cameron



From: Jeremy Stanley <fu...@yuggoth.org>
Sent: Monday, November 9, 2015 12:35
To: openstack-operators@lists.openstack.org
Subject: Re: [Openstack-operators] [openstack-dev] [stable][all] Keeping Juno 
"alive" for longer.

On 2015-11-09 17:11:35 +0000 (+0000), Tom Cameron wrote:
[...]

I support an LTS release strategy because it will allow more
adoption for more sectors by offering that stability everyone's
talking about. But, it shouldn't be a super-super long support
offering. Maybe steal some of Ubuntu's game and do an LTS every 4
releases or so (24 months), but then maybe Openstack only supports
them for 24 months time? Again, my concern is that this is free,
open source software and you're probably not going to get many
community members to volunteer to offer their precious time fixing
bugs in a 2-year-old codebase that have been fixed for 18 months
in a newer version.

[...]

Because we want people to be able upgrade their depl

Re: [Openstack-operators] [openstack-dev] Scheduler proposal

2015-10-08 Thread Maish Saidel-Keesing

Forgive the top-post.

Cross-posting to openstack-operators for their feedback as well.

Ed, the work seems very promising, and I am interested to see how this 
evolves.


With my operator hat on I have one piece of feedback.

By adding in a new database solution (Cassandra) we are now up to three 
different database solutions in use in OpenStack:


MySQL (practically everything)
MongoDB (Ceilometer)
Cassandra.

Not to mention two different message queues:
Kafka (Monasca)
RabbitMQ (everything else)

Operational overhead has a cost - maintaining 3 different database 
tools, backing them up, providing HA, etc.


This is not to say that this cannot be overseen, but it should be taken 
into consideration.


And *if* they can be consolidated into an agreed solution across the 
whole of OpenStack - that would be highly beneficial (IMHO).



--
Best Regards,
Maish Saidel-Keesing


On 10/08/15 03:24, Ed Leafe wrote:

On Oct 7, 2015, at 2:28 PM, Zane Bitter <zbit...@redhat.com> wrote:


It seems to me (disclaimer: not a Nova dev) that which database to use is 
completely irrelevant to your proposal,

Well, not entirely. What Cassandra offers that separates it from other DBs 
is exactly the feature that we need. The solution to the scheduler isn't to 
simply "use a database".


which is really about moving the scheduling from a distributed collection of 
Python processes with ad-hoc (or sometimes completely missing) synchronisation 
into the database to take advantage of its well-defined semantics. But you've 
framed it in such a way as to guarantee that this never gets discussed, because 
everyone will be too busy arguing about whether or not Cassandra is better than 
Galera.

Understood - all one has to do is review the original thread from back in July 
to see this happening. But the reason that I framed it then as an experiment in 
which we would come up with measures of success we could all agree on up-front 
was so that if someone else thought that Product Foo would be even better, we 
could set up a similar test bed and try it out. IOW, instead of bikeshedding, 
if you want a different color, you build another shed and we can all have a 
look.


-- Ed Leafe






[Openstack-operators] [Election] [TC] TC Candidacy

2015-09-29 Thread Maish Saidel-Keesing

Hello to you all.

I would like to propose myself as a candidate for the Technical Committee
for the upcoming term. The reasons for running in the last election [1]
are still relevant for this election of the TC.

Since the last election my involvement in OpenStack has increased with a
spotlight on the Operators aspect of the community:

- Focusing on the ops-tags-team[2], helping to create tags with the intent
   of creating information relevant to Operators.
- Helping to vet and review submissions to the OpenStack Planet[3] and
   contributing as a core in openstack-planet-core.
- Participating in the Item Writing Committee of the first Foundation
   initiative for the inaugural OpenStack Certification Exam Program.

As an OpenStack community we have made some huge steps in the right
direction, and are bringing more and more of the Operator and User
community into our
midst. Operators and Users should also be represented in the
Technical Committee.

It is my hope that the electorate accept that there is a huge benefit,
and also a clear need, to have representation from all aspects of
OpenStack, not only from the developer community. When this happens - the
disconnect (and sometimes tension) that we have between these different
parts will cease to be an issue and we as a community will continue to
thrive and grow.
In order to finally bridge this gap, it is time to open the ranks, bring an
Operator into the TC and to become truly inclusive.

I humbly ask for your selection so that I may represent Operators in the
Technical Committee for the next term.

Thank you for your consideration.

Some more information about me:
OpenStack profile: https://www.openstack.org/community/members/profile/15265
Reviews: 
https://review.openstack.org/#/q/reviewer:%22Maish+Saidel-Keesing%22,n,z

IRC: maishsk
Blog: http://technodrone.blogspot.com

[1] 
http://lists.openstack.org/pipermail/openstack-dev/2015-April/062372.html
[2] 
https://review.openstack.org/#/q/status:open+project:stackforge/ops-tags-team,n,z

[3] https://wiki.openstack.org/wiki/AddingYourBlog

--
Best Regards,
Maish Saidel-Keesing



Re: [Openstack-operators] [openstack-dev] [Neutron] Allowing DNS suffix to be set per subnet (at least per tenant)

2015-09-03 Thread Maish Saidel-Keesing

On 09/03/15 20:51, Gal Sagie wrote:
I am not sure if this addresses what you need specifically, but it would 
be worth checking these

two approved liberty specs:

1) 
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/internal-dns-resolution.rst
2) 
https://github.com/openstack/neutron-specs/blob/master/specs/liberty/external-dns-resolution.rst



Thanks Gal,

So I see that from the bp [1] the fqdn will be configurable for each and 
every port?


I think that this does open up a number of interesting possibilities, 
but I would also think that it would be sufficient to do this on a 
subnet level?


We do already have the option of setting nameservers per subnet - I 
assume the data model is already implemented - which is interesting - 
because I don't see that as part of the information that is sent by 
dnsmasq so it must be coming from neutron somewhere.


The domain suffix is definitely handled by dnsmasq.


On Thu, Sep 3, 2015 at 8:37 PM, Steve Wormley <openst...@wormley.com 
<mailto:openst...@wormley.com>> wrote:


As far as I am aware it is not presently built-in to Openstack.
You'll need to add a dnsmasq_config_file option to your dhcp agent
configurations and then populate the file with:
domain=DOMAIN_NAME,CIDR for each network
i.e.
domain=example.com <http://example.com>,10.11.22.0/24
<http://10.11.22.0/24>
...
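Generating such a file per network can be automated; here is a tiny sketch (the network-to-domain mapping is made up, and the function name is mine) that renders the per-network lines in the format described above:

```python
def render_dnsmasq_domains(networks: dict) -> str:
    """Render one 'domain=DOMAIN,CIDR' line per network, suitable for the
    file referenced by the DHCP agent's dnsmasq_config_file option."""
    return "\n".join(
        f"domain={domain},{cidr}" for domain, cidr in sorted(networks.items())
    )

# Made-up tenant networks for illustration
nets = {"example.com": "10.11.22.0/24", "lab.example.net": "10.11.23.0/24"}
print(render_dnsmasq_domains(nets))
```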

-Steve


On Thu, Sep 3, 2015 at 1:04 AM, Maish Saidel-Keesing
<mais...@maishsk.com <mailto:mais...@maishsk.com>> wrote:

Hello all (cross-posting to openstack-operators as well)

Today the setting of the dns suffix that is provided to the
instance is passed through dhcp_agent.

There is the option of setting different DNS servers per
subnet (and therefore per tenant), but the domain suffix is
something that stays the same throughout the whole system.

I see that this is not a current neutron feature.

Is this on the roadmap? Are there ways to achieve this today?
If so I would be very interested in hearing how.

Thanks
    -- 
    Best Regards,

Maish Saidel-Keesing


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe:
openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
<http://openstack-dev-requ...@lists.openstack.org?subject:unsubscribe>
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev







--
Best Regards ,

The G.


--
Best Regards,
Maish Saidel-Keesing


[Openstack-operators] [Neutron] Allowing DNS suffix to be set per subnet (at least per tenant)

2015-09-03 Thread Maish Saidel-Keesing

Hello all (cross-posting to openstack-operators as well)

Today the setting of the dns suffix that is provided to the instance is 
passed through dhcp_agent.


There is the option of setting different DNS servers per subnet (and 
therefore per tenant), but the domain suffix is something that stays the 
same throughout the whole system.


I see that this is not a current neutron feature.

Is this on the roadmap? Are there ways to achieve this today? If so I 
would be very interested in hearing how.


Thanks
--
Best Regards,
Maish Saidel-Keesing



[Openstack-operators] [all] OpenStack voting by the numbers

2015-07-29 Thread Maish Saidel-Keesing

Some of my thoughts on the Voting process.

http://technodrone.blogspot.com/2015/07/openstack-summit-voting-by-numbers.html

Guess which category has the largest number of submissions??
;)

--
Best Regards,
Maish Saidel-Keesing



Re: [Openstack-operators] [tags] Meeting this week

2015-07-29 Thread Maish Saidel-Keesing
I will probably be in flight at that time, so I am sorry but I will not 
be able to join.




On 07/28/15 22:28, Tom Fifield wrote:

Hi all,

I think it's probably a good idea to have a meeting in our scheduled 
slot 1400 UTC on Thurs 30th July.


I'll actually be in Beijing at the time and I've planned to be there, 
but if something goes wrong, it would be great if someone could run 
the meeting. I think a good discussion topic is what you'd like to do 
for the mid-cycle ops event as we'll likely have a 90 minute in-person 
session.



Regards,


Tom

On 16/07/15 21:11, Tom Fifield wrote:

OK, if there isn't an outpouring of support for this meeting soon, I
think it's best cancelled :)


On 16/07/15 18:37, Maish Saidel-Keesing wrote:

I would prefer to defer today's meeting

On 07/16/15 11:17, Tom Fifield wrote:

Hi,

According to the logs from last week, which are sadly in yet another
directory: http://eavesdrop.openstack.org/meetings/_operator_tags/ , we
do have a meeting this week, but the only agenda item (Jamespage and
markbaker - thoughts on packaging) didn't pan out since markbaker wasn't
available.

Is there interest for a meeting, and any proposed topics? ops:ha?

Regards,


Tom



On 16/07/15 16:10, Maish Saidel-Keesing wrote:

Are we having a meeting today at 14:00 UTC?

On 06/29/15 07:39, Tom Fifield wrote:

Hi,

As noted last meeting, we didn't get even halfway through our agenda,
so we will meet this week as well.

So, join us this Thursday Jul 2nd 1400 UTC in #openstack-meeting on
freenode
(http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150702T1400)

To kick off with agenda item #4:
https://etherpad.openstack.org/p/ops-tags-June-2015

Previous meeting notes can be found at:
http://eavesdrop.openstack.org/meetings/ops_tags/2015/


Regards,


Tom












--
Best Regards,
Maish Saidel-Keesing



Re: [Openstack-operators] [tags] Meeting this week

2015-07-16 Thread Maish Saidel-Keesing

Are we having a meeting today at 14:00 UTC?

On 06/29/15 07:39, Tom Fifield wrote:

Hi,

As noted last meeting, we didn't get even halfway through our agenda,
so we will meet this week as well.

So, join us this Thursday Jul 2nd 1400 UTC in #openstack-meeting on
freenode
(http://www.timeanddate.com/worldclock/fixedtime.html?iso=20150702T1400)

To kick off with agenda item #4:
https://etherpad.openstack.org/p/ops-tags-June-2015

Previous meeting notes can be found at:
http://eavesdrop.openstack.org/meetings/ops_tags/2015/


Regards,


Tom



--
Best Regards,
Maish Saidel-Keesing



[Openstack-operators] [tags] ops:ha tag request for feedback

2015-07-13 Thread Maish Saidel-Keesing
I would appreciate it if you could all leave your comments and thoughts on 
the following patch [1].


Please be advised this is an initial version and your feedback is very 
much appreciated.


[1] https://review.openstack.org/#/c/200128/1
--
Best Regards,
Maish Saidel-Keesing



Re: [Openstack-operators] [tags] Ops-Data vs. Ops-Tags

2015-06-29 Thread Maish Saidel-Keesing
, but complementary concepts. Operational data about
projects is a necessary first step if we ever want to define operational
tags. You should definitely not limit yourself to the tag framework, and
define the best ways to gather and convey that information. As a second
step, someone may propose tags based on that operational data (I have a
few ideas there already), but that is really a second step.

That doesn't mean we can't display operational data on the official
website describing projects. If the Foundation staff sees value in
displaying that information on www.openstack.org, it can certainly be
displayed, in parallel to the labels/tags.

In conclusion, I'd like to suggest that you find a better name to
describe this operational data about projects, because calling them
tags or labels will be confusing in this two-step picture. My
personal suggestion would be ops-data, but I don't really care which
color you paint that bikeshed (as long as it's not blue!).

Thanks for reading so far, hoping we can work within the same framework
to communicate the best information to all the consumers of our ecosystem.

--
Thierry Carrez (ttx)

[1] 
http://eavesdrop.openstack.org/meetings/ops_tags/2015/ops_tags.2015-06-18-14.03.log.html

--
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] [ops][tags][packaging] [tc] ops:packaging tag - a little common sense, please

2015-06-10 Thread Maish Saidel-Keesing
But please don't call these 
things tags, because they aren't.


Before I move on to other issues, I'd like to point out that the more 
you go down the route of adding more and more attributes, most of 
which would be optional, to these structured documents, the more you 
will run into a problem of having stale and misleading data contained 
in these JSON files. And that will lead to a worse user experience for 
operators than the current wiki, which, like all wikis, is notoriously 
out-of-date in many places.


A tag should mean one thing, and one thing only, to encourage clarity. 
The definition of the tag should be decisive regarding why a 
particular project has been tagged with that tag.


= Operators should not be curating packaging tags =

*Packagers* should be curating tags that correspond to whether or not 
packages exist for particular projects in OpenStack. Operators consume 
these packages, for sure, but the packagers in the upstream operating 
system communities are the ones that know the most accurate 
information about the state of packaging for a particular project and 
a particular release.


I strongly believe that these ops:packaged tags should really just be 
tags in the openstack/governance repository (i.e. regular TC tags) and 
be curated by the packaging community, which means they would not have 
the ops: prefix on them.


= Remove value component from the tag =

The current proposal for both ops:packaged and ops:production-use [3] 
tag definitions include a value component. For example, the 
ops:packaged tags must include one of the following values:


 - good
 - beginning
 - warning
 - no

With each of the above values attempting to indicate to the audience 
that the packages for a particular project are in varying states of 
repair and bug-freeness. There are a number of problems with 
including this value in the tag:


1) This value judgement about the packaging quality is ripe for 
getting out-of-date VERY quickly. Who is going to continually update 
the value parts for the different projects? Things change very quickly 
in packaging-land. Bugs are fixed, new packages built and published. 
Who in the ops community is going to track this? Please see point 
above about Operators should not be curating packaging tags.


2) All software, including packages, has bugs. This is something that 
the Ops community just needs to accept and get over. Squabbling with 
each other about what constitutes a major bug in packaging and how 
many major bugs constitute a warning value is less than 
useful to the audience here. Instead, the ops community should focus 
on providing useful documentation and links to the audience, in the 
form of long-form release notes or opinions about packages and 
documentation on the OpenStack wiki.


= Packaging tags should be release-specific, or they will be wrong =

For these packaging tags, the release must be part of the tag itself, 
otherwise the information it denotes would be indeterminate.


As an example, suppose you have a tag that looks like this:

 ops:packaged:centos:good

And in the tag definition you say that the tag is applied to projects 
that have CentOS RPM packages available within the last 6 months. 
Well, as you all know, packages are published for a *particular 
release of OpenStack*. So, if Nova is tagged with 
ops:packaged:centos:good in, say, August 2015, the tag would 
ostensibly be saying that packages exist for Nova in Kilo. However, 
once November rolls around, and packages for Liberty don't (yet) 
exist, are you going to remove this ops:packaged:centos:good tag 
from Nova or (worse) change it to ops:pacakged:centos:no?


Instead, have the tag be specific to a release of OpenStack:

packaged:centos:kilo

= In summary =

In short, I would love it if the Ops Tags team would stick with binary 
tag definitions -- a tag should mean one thing and one thing only.


I don't believe the Ops Tags team should be curating the packaging 
tags -- the packaging community should do that, and do that under the 
main openstack/governance repository.


Packagers, I would love it if you would curate a set of tags that 
looks kind of like this:


 - packaged:centos:kilo
 - packaged:ubuntu:liberty
 - packaged:sles:juno

I will be proposing the above tag definition to the 
openstack/governance repository this week.
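For illustration only, release-qualified tags like these might be recorded against a project in the governance repository roughly as follows (the file name and layout are assumptions, not the actual governance schema):

```yaml
# hypothetical fragment of a projects registry file
nova:
  tags:
    - packaged:centos:kilo
    - packaged:ubuntu:kilo
    - packaged:sles:kilo
```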


Thanks for listening,
-jay

[1] 
http://eavesdrop.openstack.org/irclogs/%23openstack-meeting/%23openstack-meeting.2015-06-09.log.html#t2015-06-09T20:18:00

[2] https://review.openstack.org/#/c/186633
[3] https://review.openstack.org/#/c/189168


--
Maish Saidel-Keesing


Re: [Openstack-operators] [Tags] Tags Team Repo our first tags!

2015-06-01 Thread Maish Saidel-Keesing

On 06/01/15 11:07, Tom Fifield wrote:

Hi all,

Thank you very much for officially kicking off the Ops Tags Team at the
Vancouver summit!

Based on our discussions, I've made a bit of progress. We now have a

* wiki page: https://wiki.openstack.org/wiki/Operations/Tags
* repository: https://github.com/stackforge/ops-tags-team


... and ... a first attempt at writing up those tags we discussed at the
meeting:


* Ops:docs:install-guide: https://review.openstack.org/#/c/186638/

* Ops:Packaged: https://review.openstack.org/#/c/186633/


Please jump on the code review system and add your comments!


Regards,


Tom

Thanks Tom! Great start. I have already added some of my comments to the 
reviews.


Does this repo have the same 'core' reviewer methodology as all the others?
If so, who has the 'authority' to merge patches?
--
Best Regards,
Maish Saidel-Keesing



Re: [Openstack-operators] [nova] Backlog Specs: a way to send requirements to the developer community

2015-05-14 Thread Maish Saidel-Keesing

On 05/14/15 21:04, Boris Pavlovic wrote:

John,

I believe that the backlog should be different, much simpler than specs.

Imho Operators don't have time / don't want to write long long specs 
and analyze how they are aligned with specs
or moreover how they should be implemented and how they impact 
performance/security/scalability. They want

just to provide feedback and someday get it implemented/fixed.

In Rally we chose a different way, called a feature request.
The process is the same as for specs, but template is much simpler.

Bravo

Can we please have this as a default template and the default way to 
allow Operators to submit a feature request for EVERY and ALL the 
OpenStack projects ??





Here is the page:
https://rally.readthedocs.org/en/latest/feature_requests.html

And here is the sample of feature request:
https://rally.readthedocs.org/en/latest/feature_request/launch_specific_benchmark.html


Best regards,
Boris Pavlovic


On Thu, May 14, 2015 at 8:47 PM, John Garbutt
j...@johngarbutt.com wrote:

Hi,

I was talking with Matt (VW) about how best some large deployment
working sessions could send their requirements to Nova.

As an operator, if you have a problem that needs fixing or a use case
that needs addressing, a great way of raising that issue with the
developer community is a Backlog nova-spec.

You can read more about Nova's backlog specs here:
http://specs.openstack.org/openstack/nova-specs/specs/backlog/

Any questions, comments or ideas, please do let me know.

Thanks,
John

PS
In Kilo we formally started accepting backlog specs,
although we are
only just getting the first of these submitted now. There is
actually
a patch to fix up how they get rendered:
https://review.openstack.org/#/c/182793/2








--
Best Regards,
Maish Saidel-Keesing


Re: [Openstack-operators] Hypervisor decision

2015-03-19 Thread Maish Saidel-Keesing

That is interesting Tim.

Why Hyper-V, if I may ask? Why not stick just with KVM?

Maish

On 19/03/15 08:22, Tim Bell wrote:


At CERN, we run KVM and Hyper-V. Both work fine.

Depending on the size of your cluster, you may have other factors to 
consider such as monitoring and configuration management. We use 
Puppet to configure both environments.


Images are tagged with a property hypervisor_type which is used to 
schedule workloads to the appropriate hypervisor.


Tim
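For illustration, the image-property approach Tim describes might look like the following (the image ID is a placeholder, and this assumes Nova's ImagePropertiesFilter is enabled in the scheduler; the filter list shown is only an example):

```shell
# Tag an image so instances booted from it land on Hyper-V hosts
glance image-update --property hypervisor_type=hyperv <image-id>

# nova.conf must include ImagePropertiesFilter for the property
# to influence scheduling, e.g.:
#   scheduler_default_filters = RetryFilter,ComputeFilter,ImagePropertiesFilter
```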

*From:*matt [mailto:m...@nycresistor.com]
*Sent:* 18 March 2015 23:24
*To:* Abel Lopez
*Cc:* openstack-operators@lists.openstack.org
*Subject:* Re: [Openstack-operators] Hypervisor decision

Most OpenStack environments run KVM, so if you want to stick with the 
herd, that's the way to go.


On Wed, Mar 18, 2015 at 5:53 PM, Abel Lopez <alopg...@gmail.com> wrote:


Interesting topic: since you're already running Hyper-V and ESX,
I'm inferring that your workload is heavy on Windows VMs.
If you're doing majority Windows and minority Linux, stick with
Hyper-V. The benchmarks I've read show that Windows VMs run
fastest on Hyper-V vs. all the others.
If you expect an even split, it might make sense to create Host
Aggregates of the various hypervisors, like Hyper-V and KVM, and
utilize extra specs in the flavors and guest images to aid in
scheduling, for example Windows images launch on the Hyper-V pool.
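Abel's aggregate suggestion could be sketched as follows (the aggregate, host, and flavor names and the extra-spec key are illustrative, and the AggregateInstanceExtraSpecsFilter must be enabled in the Nova scheduler):

```shell
# Group the Hyper-V compute nodes into a host aggregate
nova aggregate-create hyperv-pool
nova aggregate-set-metadata hyperv-pool hypervisor=hyperv
nova aggregate-add-host hyperv-pool hyperv-node-01

# Give Windows-oriented flavors a matching extra spec so the
# scheduler only places them on hosts in that aggregate
nova flavor-key m1.windows set aggregate_instance_extra_specs:hypervisor=hyperv
```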


 On Mar 18, 2015, at 2:41 PM, Vytenis Silgalis
<vsilga...@outlook.com> wrote:

 Hello,

 I'm looking to champion OpenStack at my company. We currently
run both a small Hyper-V cluster and 3 VMware clusters. However,
we are not married to any specific hypervisor. What I'm looking
for is recommendations for which hypervisor we should look at for
our OpenStack environments, and the pros/cons people have run into
with the various hypervisors supported by OpenStack.


 Thanks,
 Vytenis







--
Best Regards, Maish Saidel-Keesing


Re: [Openstack-operators] database hoarding

2014-10-31 Thread Maish Saidel-Keesing

On 31/10/2014 11:01, Daniele Venzano wrote:
 On 10/30/14 23:30, Abel Lopez wrote:
 As an operator, I'd prefer to have time-based criteria over number of
 rows, too.
 I envision something like `nova-manage db purge [days]` where we can
 leave it up to the administrator to decide how much of their old data
 (if any) they'd be OK losing.
 I would certainly use this feature, but please, please, please: make
 it work across all OpenStack databases, not just for Nova, but also
 for Cinder, Neutron, etc.
A Huge +1 for across all projects!!
Maish
 Let's try to get something integrated project-wide instead of having a
 number of poorly documented, slightly inconsistent commands.
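A minimal sketch of the kind of time-based purge being proposed (`nova-manage db purge [days]`), simulated against a throwaway SQLite table; the table and column names are illustrative, though OpenStack services do soft-delete rows by stamping a `deleted_at` column:

```shell
DB=$(mktemp)
sqlite3 "$DB" <<'SQL'
CREATE TABLE instances (id INTEGER PRIMARY KEY, deleted_at TEXT);
INSERT INTO instances VALUES (1, datetime('now', '-400 days'));  -- old
INSERT INTO instances VALUES (2, datetime('now', '-10 days'));   -- recent
INSERT INTO instances VALUES (3, NULL);                          -- live
SQL

# "db purge 90": drop rows soft-deleted more than 90 days ago
sqlite3 "$DB" "DELETE FROM instances WHERE deleted_at IS NOT NULL \
  AND deleted_at < datetime('now', '-90 days');"

remaining=$(sqlite3 "$DB" "SELECT count(*) FROM instances;")
echo "$remaining rows remain after the purge"   # recent and live rows survive
rm -f "$DB"
```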




-- 
Maish Saidel-Keesing

