[Openstack-operators] [Fault Genes] WG Weekly Meeting

2017-01-13 Thread Nematollah Bidokhti
BEGIN:VCALENDAR
METHOD:REQUEST
PRODID:Microsoft Exchange Server 2010
VERSION:2.0
BEGIN:VTIMEZONE
TZID:Pacific Standard Time
BEGIN:STANDARD
DTSTART:16010101T020000
TZOFFSETFROM:-0700
TZOFFSETTO:-0800
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=1SU;BYMONTH=11
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:16010101T020000
TZOFFSETFROM:-0800
TZOFFSETTO:-0700
RRULE:FREQ=YEARLY;INTERVAL=1;BYDAY=2SU;BYMONTH=3
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
ORGANIZER;CN=Nematollah Bidokhti:MAILTO:nematollah.bidok...@huawei.com
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='openstack
 -operat...@lists.openstack.org':MAILTO:openstack-operators@lists.openstack
 .org
ATTENDEE;ROLE=REQ-PARTICIPANT;PARTSTAT=NEEDS-ACTION;RSVP=TRUE;CN='user-comm
 it...@lists.openstack.org':MAILTO:user-commit...@lists.openstack.org
DESCRIPTION;LANGUAGE=en-US:When: Occurs every Thursday effective 5/19/2016 
 from 8:00 AM to 9:00 AM (UTC-08:00) Pacific Time (US & Canada).\nWhere: Co
 nference Call\n\nNote: The GMT offset above does not reflect daylight savi
 ng time adjustments.\n\n*~*~*~*~*~*~*~*~*~*\n\nStandard Agenda Items:\n-  
  Launchpad data transformation to Fault Genes database\n-   Stacko
 verflow data capture\n-   User Interface for Operators\n-   Machin
 e learning analysis process\n-   Collaboration with other projects suc
 h as Congress\n-   Open items\n\n\n\n  Meeting Conference Link: ht
 tps://www.connectmeeting.att.com\n  Meeting Number: 8887160594\n  
 Code: 3773562\n  USA Toll-Free: 888-716-0594\n  USA Caller Paid: 2
 15-861-6199\nFor Other Countries:Click Here to View Global Con
 ference Access Numbers\n\n\n\nThe link to
  the wiki https://wiki.openstack.org/wiki/Fault_Genes_Working_Group\n\n\n\
 n
RRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=TH;WKST=SU
EXDATE;TZID=Pacific Standard Time:20161124T080000,20161208T080000
SUMMARY;LANGUAGE=en-US:[Fault Genes] WG Weekly Meeting
DTSTART;TZID=Pacific Standard Time:20160519T080000
DTEND;TZID=Pacific Standard Time:20160519T090000
UID:04008200E00074C5B7101A82E00850B72A929FAAD101000
 0100028E111C3B604D445920643DCDC32B7B2
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170113T180119Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:24
LOCATION;LANGUAGE=en-US:Conference Call
X-MICROSOFT-CDO-APPT-SEQUENCE:24
X-MICROSOFT-CDO-OWNERAPPTID:1315723232
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:[Fault Genes] WG Weekly Meeting
DTSTART;TZID=Pacific Standard Time:20161103T080000
DTEND;TZID=Pacific Standard Time:20161103T090000
UID:04008200E00074C5B7101A82E00850B72A929FAAD101000
 0100028E111C3B604D445920643DCDC32B7B2
RECURRENCE-ID;TZID=Pacific Standard Time:20161103T000000
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170113T180119Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:24
LOCATION:Conference Call
X-MICROSOFT-CDO-APPT-SEQUENCE:24
X-MICROSOFT-CDO-OWNERAPPTID:1315723232
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:[Fault Genes] WG Weekly Meeting
DTSTART;TZID=Pacific Standard Time:20161110T080000
DTEND;TZID=Pacific Standard Time:20161110T090000
UID:04008200E00074C5B7101A82E00850B72A929FAAD101000
 0100028E111C3B604D445920643DCDC32B7B2
RECURRENCE-ID;TZID=Pacific Standard Time:20161110T000000
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170113T180119Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:24
LOCATION:Conference Call
X-MICROSOFT-CDO-APPT-SEQUENCE:24
X-MICROSOFT-CDO-OWNERAPPTID:1315723232
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:[Fault Genes] WG Weekly Meeting
DTSTART;TZID=Pacific Standard Time:20161201T080000
DTEND;TZID=Pacific Standard Time:20161201T090000
UID:04008200E00074C5B7101A82E00850B72A929FAAD101000
 0100028E111C3B604D445920643DCDC32B7B2
RECURRENCE-ID;TZID=Pacific Standard Time:20161201T000000
CLASS:PUBLIC
PRIORITY:5
DTSTAMP:20170113T180119Z
TRANSP:OPAQUE
STATUS:CONFIRMED
SEQUENCE:24
LOCATION:Conference Call
X-MICROSOFT-CDO-APPT-SEQUENCE:24
X-MICROSOFT-CDO-OWNERAPPTID:1315723232
X-MICROSOFT-CDO-BUSYSTATUS:TENTATIVE
X-MICROSOFT-CDO-INTENDEDSTATUS:BUSY
X-MICROSOFT-CDO-ALLDAYEVENT:FALSE
X-MICROSOFT-CDO-IMPORTANCE:1
X-MICROSOFT-CDO-INSTTYPE:1
X-MICROSOFT-DISALLOW-COUNTER:FALSE
END:VEVENT
BEGIN:VEVENT
SUMMARY:[Fault Genes] WG Weekly Meeting
DTSTART;TZID=Pacific Standard Time:20161222T080000
DTEND;TZID=Pacific Standard Time:20161222T090000
END:VEVENT
END:VCALENDAR
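(For reference, the RRULE above, FREQ=WEEKLY;BYDAY=TH with two EXDATEs, expands as sketched below; this stdlib-only helper is an illustration, not a full RFC 5545 implementation.)

```python
from datetime import date, timedelta

def expand_weekly(start, count, exdates):
    """Expand a weekly recurrence (FREQ=WEEKLY;INTERVAL=1), skipping
    excluded dates (EXDATE). Returns the first `count` occurrences."""
    out, d = [], start
    while len(out) < count:
        if d not in exdates:
            out.append(d)
        d += timedelta(weeks=1)
    return out

# DTSTART 2016-05-19 (a Thursday); EXDATEs 2016-11-24 and 2016-12-08.
occurrences = expand_weekly(
    date(2016, 5, 19), 30,
    {date(2016, 11, 24), date(2016, 12, 8)},
)
assert all(d.weekday() == 3 for d in occurrences)  # every Thursday
assert date(2016, 11, 24) not in occurrences       # excluded instance
```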

Re: [Openstack-operators] [Fault Genes] WG Weekly Meeting

2017-01-13 Thread Nematollah Bidokhti
Hi All,

Following are the meeting summaries:

*   We had good participation
*   We have a new member (a solution architect) from the University of Alabama
*   Nemat presented the team's machine learning approach to analyzing the
data, with a focus on data pre-processing
*   The team agreed that our initial focus will be on defining and analyzing
the fault classifications
*   Isaac presented the activities that Intel & Rackspace are performing
regarding OpenStack resiliency and HA
*   Nemat presented the draft of the OpenStack Fault Management Blueprint
white paper & the team provided some feedback.
o   The plan is to have internal reviews first and then send it to a larger
audience
o   We are also adding additional use cases

Action Items:

*   Michael will obtain & combine the latest sets of data from Launchpad and
Stack Overflow and provide them to the team for analysis
*   Zainab, Suli & Michael will perform the 1st part of the data
pre-processing for the next meeting.
o   The activity is to compute word frequencies from the raw data
o   This is the 1st step toward the machine learning data modeling, which
will help us create the dictionary
*   Isaac will provide all the data and links to the Intel/Rackspace fault
insertion testing data
*   Nemat will submit an abstract for the OpenStack Summit in Boston
regarding the OpenStack fault management blueprint
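The word-frequency pre-processing step above can be sketched with the standard library alone; the sample text and the stop-word list are made up for illustration, not taken from the actual Launchpad/Stack Overflow data set.

```python
import re
from collections import Counter

def word_frequencies(raw_text, stop_words=frozenset({"the", "a", "to", "in", "is"})):
    """Lowercase, tokenize, drop stop words, and count word occurrences."""
    tokens = re.findall(r"[a-z']+", raw_text.lower())
    return Counter(t for t in tokens if t not in stop_words)

# Toy stand-in for raw fault descriptions:
sample = (
    "Instance fails to boot after compute node reboot. "
    "Compute service reports instance in error state."
)
freq = word_frequencies(sample)
assert freq["compute"] == 2 and freq["instance"] == 2
assert "to" not in freq  # stop word removed
```

A real pipeline would also stem or lemmatize tokens before building the dictionary, but the counting step looks like this.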

Thanks,
Nemat


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [all] [goals] proposing a new goal: "Control Plane API endpoints deployment via WSGI"

2017-01-13 Thread Andy McCrae
>
>
> I have been looking for a Community Goal [1] that would directly help
> Operators and I found the "run API via WSGI" useful.
> So I've decided to propose this one as a goal for Pike, but I'll stay
> open to postponing it to Queens if our community thinks we already have
> too many goals for Pike.
>
> Note that this goal might help to achieve 2 other goals later:
> - enable and test SSL everywhere
> - enable and test IPv6 everywhere
>
> Here's the draft:
> https://review.openstack.org/419706
>
> Any feedback is very welcome, thanks for reading so far.
>
> [1] https://etherpad.openstack.org/p/community-goals


I'd be in favour of that being a goal - I was keen to propose this as a
goal for OpenStack-Ansible during the Pike cycle, so if it's community-wide
that would be great.

Aside from SSL everywhere and IPv6 everywhere, I think it helps
deployments become more uniform and easier to manage.
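As background, "running the API via WSGI" means each service exposes a plain WSGI callable that any WSGI-capable server (Apache mod_wsgi, uwsgi, and so on) can host. A minimal stdlib-only sketch of that contract, not any project's actual entry point:

```python
def application(environ, start_response):
    """A minimal WSGI callable. Servers such as mod_wsgi or uwsgi host
    an entry point shaped like this, which is the deployment model the
    goal proposes for the control-plane APIs."""
    body = b'{"status": "ok"}'
    start_response("200 OK", [
        ("Content-Type", "application/json"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

# Exercise the callable directly - no server process needed:
statuses = []
result = b"".join(application({}, lambda status, headers: statuses.append(status)))
assert result == b'{"status": "ok"}'
assert statuses == ["200 OK"]
```

Because the callable is server-agnostic, the same code can be fronted by whichever WSGI server a deployment already manages, which is where the uniformity benefit comes from.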

Thanks,
Andy


Re: [Openstack-operators] [User-committee] [publiccloud-wg] Doodle poll closed for chair elections

2017-01-13 Thread Zhipeng Huang
Thanks Matt for your leadership for the WG. Let's work together to build a
strong and great working group :)

On Jan 13, 2017 6:33 PM, "Matt Jarvis"  wrote:

> Hello All
>
> I've just closed the poll to elect the chairs for the Public Cloud working
> group. The results can be found at
>
> http://doodle.com/poll/s63r5s4ghyucmnqu
>
> Given the votes cast, and our desire for wide representation, I'd like to
> propose 3 co-chairs for this group :
>
>
>- Matt Jarvis - Independent
>- Tobias Rydberg - City Cloud
>- Howard Huang - Huawei
>
>
> I will update our wiki page, and look forward to discussing our next steps
> at our meeting next week. Thank you to everyone who participated.
>
> Matt
>
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
>
>


Re: [Openstack-operators] [nova] Do you use os-instance-usage-audit-log?

2017-01-13 Thread Tomáš Vondra
Hi Matt,
I've looked at my Nova config and yes, I have it on. We do billing using
Ceilometer data, and I think compute.instance.exists is consumed as well. The
Ceilometer event retention is set to 6 months and the database size is in the
single gigabytes. The Nova database table task_log only contains the fact that
the audit job ran successfully and is 6 MB in size. It has not been pruned for
more than a year.
Tomas

-Original Message-
From: Matt Riedemann [mailto:mrie...@linux.vnet.ibm.com] 
Sent: Thursday, January 12, 2017 12:09 AM
To: openstack-operators@lists.openstack.org
Subject: [Openstack-operators] [nova] Do you use
os-instance-usage-audit-log?

Nova's got this REST API [1] which pulls task_log data from the nova
database if the 'instance_usage_audit' config option value is True on any
compute host.

That table is populated in a periodic task from all computes that have it
enabled and by default it 'audits' instances created in the last month (the
time window is adjustable via the 'instance_get_active_by_window_joined'
config option).

The periodic task also emits a 'compute.instance.exists' notification for
each instance on that compute host which falls into the audit period. I'm
fairly certain that notification is meant to be consumed by Ceilometer,
which is going to store it in its own time-series database.

It just so happens that Nova is also storing this audit data in its own
database, and never cleaning it up - the only way in-tree to move that data
out of the nova.task_log table is to archive it into shadow tables, but that
doesn't cut down on the bloat in your database. That
os-instance-usage-audit-log REST API is relying on the nova database though.

So my question is, is anyone using this in any shape or form, either via the
Nova REST API or Ceilometer? Or are you using it in one form but not the
other (maybe only via Ceilometer)? If you're using it, how are you
controlling the table growth, i.e. are you deleting records over a certain
age from the nova database using a cron job?
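For anyone wondering what such a cron-style prune looks like, here is a sketch against a stand-in table; the column names are illustrative rather than the real nova schema, and a production job would target the actual MySQL database instead of SQLite.

```python
import sqlite3
from datetime import datetime, timedelta

# Stand-in for the nova task_log table (columns are illustrative only).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task_log (id INTEGER PRIMARY KEY, created_at TEXT)")
now = datetime(2017, 1, 13)
rows = [(i, (now - timedelta(days=30 * i)).isoformat()) for i in range(12)]
conn.executemany("INSERT INTO task_log VALUES (?, ?)", rows)

# The cron-style prune: drop audit records older than a chosen max age.
max_age_days = 180
cutoff = (now - timedelta(days=max_age_days)).isoformat()
conn.execute("DELETE FROM task_log WHERE created_at < ?", (cutoff,))
remaining = conn.execute("SELECT COUNT(*) FROM task_log").fetchone()[0]
assert remaining == 7  # rows aged 0..180 days survive; older ones are gone
```

A config-driven max age inside nova, as suggested below, would effectively run this same bounded delete from the periodic task instead of an external cron job.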

Mike Bayer was going to try and find some large production data sets to see
how many of these records are in a big and busy production DB that's using
this feature, but I'm also simply interested in how people use this, if it's
useful at all, and if there is interest in somehow putting a limit on the
data, i.e. we could add a config option to nova to only store records in the
task_log table under a certain max age.

[1] 
http://developer.openstack.org/api-ref/compute/#server-usage-audit-log-os-instance-usage-audit-log

-- 

Thanks,

Matt Riedemann






[Openstack-operators] [publiccloud-wg] Doodle poll closed for chair elections

2017-01-13 Thread Matt Jarvis
Hello All

I've just closed the poll to elect the chairs for the Public Cloud working
group. The results can be found at

http://doodle.com/poll/s63r5s4ghyucmnqu

Given the votes cast, and our desire for wide representation, I'd like to
propose 3 co-chairs for this group :


   - Matt Jarvis - Independent
   - Tobias Rydberg - City Cloud
   - Howard Huang - Huawei


I will update our wiki page, and look forward to discussing our next steps
at our meeting next week. Thank you to everyone who participated.

Matt