Re: [Openstack-operators] new SIGs to cover use cases

2018-11-13 Thread Stig Telfer
You are right to make the connection - this is a subject that regularly comes 
up in the discussions of the Scientific SIG, though it’s just one of many use 
cases for hybrid cloud.  If a new SIG was created around hybrid cloud, it would 
be useful to have it closely connected with the Scientific SIG.

Cheers,
Stig


> On 13 Nov 2018, at 09:01,  wrote:
> 
> Good point.
> Adding SIG list.
> 
> -----Original Message-----
> From: Jeremy Stanley [mailto:fu...@yuggoth.org] 
> Sent: Monday, November 12, 2018 4:46 PM
> To: openstack-operators@lists.openstack.org
> Subject: Re: [Openstack-operators] new SIGs to cover use cases
> 
> 
> On 2018-11-12 15:46:38 +0000 (+0000), arkady.kanev...@dell.com wrote:
> [...]
>>  1.  Do we have, or want to create, a user community around hybrid cloud?
> [...]
>>  2.  As we target AI/ML as a 2019 target application domain, do we
>>  want to create a SIG for it? Or do we extend the Scientific
>>  community SIG to cover it?
> [...]
> 
> It may also be worthwhile to ask this on the openstack-sigs mailing
> list.
> -- 
> Jeremy Stanley
> 




[Openstack-operators] [scientific] IRC meeting today: Keycloak and federated authentication, SIG in Berlin

2018-09-26 Thread Stig Telfer
Hi All - 

We have an IRC meeting today at 1100 UTC in channel #openstack-meeting.  
Everyone is welcome.

This week we are gathering requirements and sharing experiences on using 
Keycloak for simplifying federated authentication.  We also have Berlin forum 
proposals to discuss.

The full agenda is here: 
https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_September_26th_2018 


Cheers,
Stig



[Openstack-operators] [scientific] IRC meeting: Docker and HPC

2018-04-25 Thread Stig Telfer
Hello All -

We have an IRC meeting at 1100 UTC today in channel #openstack-meeting.  
Everyone is welcome.

Today we have Christian Kniep from Docker joining us to talk about how Docker 
can be adapted to suit the requirements of HPC workloads.  I saw him present on 
this recently and it should be a very interesting discussion.

The full agenda is here:

https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_April_25th_2018

Cheers,
Stig




[Openstack-operators] [scientific] IRC Meeting, Wednesday 1100UTC: Cyborg and Forum

2018-03-27 Thread Stig Telfer
Hi all - 

We have a Scientific SIG meeting on Wednesday at 1100 UTC.  This week’s agenda 
is at:

https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_March_28th_2018 


We’ll be hearing from Zhipeng Huang about the Cyborg project for managing 
hardware acceleration resources, and discussing forum topics to propose for the 
Vancouver summit.

Everyone is welcome.

Cheers,
Stig



[Openstack-operators] [scientific] IRC meeting - Managing dedicated capacity, PTG topics and beyond

2018-02-20 Thread Stig Telfer
Hi All - 

We have a Scientific SIG IRC meeting today in channel #openstack-meeting at 
2100 UTC.  Everyone is welcome.

This week we are gathering details on best practice for managing dedicated 
capacity in scientific OpenStack private clouds, eg for resources funded by 
specific projects or partners.  Also we are gathering presentation picks for 
community voting for Vancouver, and preparations for the PTG in Dublin next 
week.

The full agenda is here: 
https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_20th_2018 


Cheers,
Stig



[Openstack-operators] [scientific] Meeting Wednesday: Ironic for infrastructure management, Kubernetes-as-a-Service

2018-02-13 Thread Stig Telfer
Hi All - 

We have a Scientific SIG IRC meeting on Wednesday at 1100 UTC in channel 
#openstack-meeting.  Everyone is welcome.

As well as some up-coming conferences, we also have a discussion on recent CERN 
& SKA projects using Ironic for bare metal infrastructure management, and an 
update from Saverio at SWITCH on their Kubernetes-as-a-Service.

The agenda details are here:

https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_14th_2018 


Cheers,
Stig


[Openstack-operators] [scientific] IRC meeting: preemptible instances and upcoming events

2018-01-30 Thread Stig Telfer
Hi All - 

We have a Scientific SIG IRC meeting on Wednesday at 1100 UTC in channel 
#openstack-meeting.  Everyone is welcome.

This week’s agenda includes an update on recent work towards preemptible “spot” 
instances.  We also have a few events on the calendar to discuss and plan for.

Full agenda is here:
https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_January_31st_2018 


Best wishes
Stig


[Openstack-operators] [scientific] IRC meeting Wednesday 1100UTC: Bare metal Magnum

2018-01-16 Thread Stig Telfer
Hi All - 

We have an IRC meeting on Wednesday at 1100 UTC in channel #openstack-meeting.  
Everyone is welcome.

Agenda is here:
https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_January_17th_2018 


This week we have two main items on the agenda: Our guest is Spyros Trigazis 
from CERN, who will be discussing latest improvements in Magnum’s support for 
research computing use cases, and in particular bare metal use cases.  We’d 
also like to kick off some discussion around PTG planning.

Cheers,
Stig


[Openstack-operators] [scientific] IRC meeting today: SGX security and Ironic, RCUK cloud workshop

2018-01-09 Thread Stig Telfer
Hello All - 

We have an IRC meeting in channel #openstack-meeting at 2100 UTC today (just 
over an hour’s time).  All are welcome.

Today’s agenda is here:

https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_January_9th_2018 


We have Andrey Brito from the Federal University of Campina Grande discussing 
some of his work using Intel SGX for strengthening the security of Ironic 
compute instances.  We also have a roundup of yesterday’s RCUK Cloud Workshop 
in London.  Plus, inevitably, a roundup of people’s experiences of the impact 
of the Spectre/Meltdown remediations.

Cheers,
Stig



[Openstack-operators] [scientific] IRC meeting today CANCELLED - chair HA failure

2017-12-06 Thread Stig Telfer
Hi All - 

Unfortunately we have to cancel today's Scientific SIG meeting due to 
non-availability of chairs for the meeting.

There were a couple of upcoming scientific OpenStack-related events to 
publicise:

- Computing Insight UK 2017, 12-13 December in Manchester, UK has a cloud theme 
to the conference and a Manchester OpenStack meetup is planned to coincide. 
https://eventbooking.stfc.ac.uk/news-events/ciuk-2017 


- Research Councils UK Cloud Workshop 2018, 8 January 2018 at the Crick 
Institute in London is seeking additional OpenStack-themed presentations for 
the conference schedule.  Abstracts can be submitted during registration via 
http://bit.ly/rcuk-cloud-workshop2018-reg 


Apologies,
Stig



Re: [Openstack-operators] CaaS with magnum

2017-11-22 Thread Stig Telfer
Hi Sergio - 

On a much smaller scale we use Magnum with Ironic bare metal.  Like the CERN 
team we are using the Pike release.  Mostly we use Docker Swarm mode on Fedora 
25.

Our team made a few fixes to get this working on our deployment (particularly 
relating to Ironic support and updating to Docker Swarm Mode), but I think all 
those patches have made their way upstream.  We continue to use it successfully 
for our application workloads, and our team is still active with contributions 
back upstream.

Best wishes,
Stig


> On 22 Nov 2017, at 08:08, Tim Bell  wrote:
> 
> We use Magnum at CERN to provide Kubernetes, Mesos and Docker Swarm on 
> demand. We’re running over 100 clusters currently using Atomic.
>  
> More details at 
> https://cds.cern.ch/record/2258301/files/openstack-france-magnum.pdf 
> 
>  
> Tim
>  
> From: Sergio Morales Acuña 
> Date: Wednesday, 22 November 2017 at 01:01
> To: openstack-operators 
> Subject: [Openstack-operators] CaaS with magnum
>  
> Hi.
>  
> I'm using Openstack Ocata and trying Magnum.
>  
> I encountered a lot of problems but I've been able to solve many of them.
>  
> Now I'm curious about your experience with Magnum. Any success stories? 
> What about more recent versions of k8s (1.7 or 1.8)? Which driver is, in 
> your opinion, better: Atomic or CoreOS? Do I need to upgrade Magnum to 
> follow K8S's crazy changes?
>  
> Any tips on the CaaS problem? Is Magnum Ocata too old for this world?
>  
> Cheers



[Openstack-operators] [scientific][kolla] Pike upgrade report

2017-09-21 Thread Stig Telfer
Hi All -

In this week’s Scientific WG/SIG meeting there was some interest in our 
experiences with upgrading to Pike earlier this week.

I’ve gathered together our notes to make a blog post on how we got on: 
https://www.stackhpc.com/kolla-kayobe-pike.html - the crux is that Kolla 
(through Kayobe) did a pretty good job of making a complex process achievable. 
We still had some fault trapping to do, though!

Cheers,
Stig



[Openstack-operators] [scientific] IRC Meeting today at 2100 UTC: Opportunistic utilisation

2017-09-19 Thread Stig Telfer
Hello All - 

We have an IRC meeting today at 2100 UTC in channel #openstack-meeting.  
Everyone is welcome.

Today we will be joined by Rajul Kumar of Northeastern University in Boston, 
who will be talking about their recent research into methods for opportunistic 
usage of cloud resources for HTC.

The agenda is available here: 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_September_19th_2017

Full details of how to join Scientific WG/SIG meetings are available here:
http://eavesdrop.openstack.org/#Scientific_Working_Group

Please note, we are in transition to becoming an OpenStack SIG, and meeting 
announcements will transition to the openstack-sigs mailing list.  Get on it if 
you haven’t already...

Cheers,
Stig




[Openstack-operators] [scientific] Reminder: IRC Meeting today 1100UTC #openstack-meeting

2017-09-13 Thread Stig Telfer
Hello all - 

We have an IRC meeting today at 1100 UTC (about 2 hours time) in 
#openstack-meeting.

Today we are rolling over the topics from last week’s discussion in the other 
time zone.  Principally this was seeking input on how people measure available 
resources, and provide opportunistic usage of them.  We also have an update on 
the second edition of the Scientific OpenStack book.  Finally, discussion on 
whether the WG should transition into a SIG.

The agenda is available here:
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_September_13th_2017
 


Everyone is welcome.

Cheers,
Stig




[Openstack-operators] [scientific] WG IRC meeting: 2100 UTC - opportunistic capacity etc.

2017-09-05 Thread Stig Telfer
Hello all - 

We have a Scientific WG IRC meeting today at 2100 UTC in channel #openstack-meeting.  Everyone is welcome.

The agenda is here:
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_September_5th_2017

Today we’d like to discuss people’s solutions for making use of opportunistic capacity in OpenStack, to achieve maximum utilisation from private cloud resources.  We’ve also got updates on the new edition of the Scientific OpenStack book, and plenty of upcoming events to plan for.

Cheers,
Stig


[Openstack-operators] [scientific] Scientific WG IRC meeting Wednesday 1100 UTC

2017-08-15 Thread Stig Telfer
Hi All - 

We have an IRC meeting coming up on Wednesday at 1100 UTC on 
#openstack-meeting.  Everyone is welcome.

This week we’ve got matters to discuss around an updated edition of the 
Scientific OpenStack book, WG activities for the OpenStack day coming up in 
London, plus discussion/update on the Open Research Cloud project.

The full agenda is available here:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_16th_2017
 


Details of how to join our meetings are available here:

http://eavesdrop.openstack.org/#Scientific_Working_Group 


Cheers,
Stig


[Openstack-operators] [scientific] Scientific WG IRC meeting today at 2100 UTC

2017-08-08 Thread Stig Telfer
Hi All - 

We have an IRC meeting coming up in a few hours on #openstack-meeting.  
Everyone is welcome.

This week we’ve got matters to discuss around Supercomputing 2017, an updated 
edition of the Scientific OpenStack book, and the up-and-coming User Committee 
elections.

The full agenda is available here:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_8th_2017
 


Details of how to join our meetings are available here:

http://eavesdrop.openstack.org/#Scientific_Working_Group 


Cheers,
Stig



[Openstack-operators] [scientific] Reminder: Scientific WG IRC meeting today 1100 UTC

2017-08-02 Thread Stig Telfer
Hi All - 

We have an IRC meeting today at 1100 UTC in #openstack-meeting (about 2 hours 
time).  Everyone is welcome.

This week we have Pierre Riteau from the Chameleon Cloud presenting their work 
on cloud workload tracing.  Plus a round-up of WG activities for Supercomputing 
2017 and OpenStack Days London.

This week’s agenda in full is here:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_2nd_2017
 


Details of our meetings are available here:

http://eavesdrop.openstack.org/#Scientific_Working_Group 


Cheers,
Stig



[Openstack-operators] [scientific] Scientific WG IRC meeting: today 2100 UTC

2017-07-25 Thread Stig Telfer
Hello all - 

We have a Scientific WG IRC meeting shortly at 2100 UTC in channel 
#openstack-meeting.  

The meeting details are available here: 

http://eavesdrop.openstack.org/#Scientific_Working_Group

The agenda is available here:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_25th_2017

Everyone is welcome.

Cheers,
Stig




[Openstack-operators] [scientific] Scientific WG IRC Meeting today: Lustre on OpenStack

2017-07-19 Thread Stig Telfer
Hi All - 

We have a Scientific WG IRC meeting today at 1100 UTC (ie, about 40 minutes 
time) in channel #openstack-meeting.  The agenda is available here:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_19th_2017
 


Today we have a guest speaker, James Beal from the Wellcome Trust Sanger 
Institute, who will be talking about his  recent research on using the Lustre 
parallel filesystem in OpenStack.  A link to his presentation is here:

https://docs.google.com/presentation/d/1kGRzcdVQX95abei1bDVoRzxyC02i89_m5_sOfp8Aq6o/edit#slide=id.g22aee564af_0_0
 


Everyone is welcome!

Cheers,
Stig



[Openstack-operators] [scientific] Scientific WG IRC Meeting: Tuesday 2100 UTC

2017-07-11 Thread Stig Telfer
Hi All - 

We have an IRC meeting coming up later in channel #openstack-meeting.  We’d 
like to discuss plans for activities at Supercomputing 2017, and follow up on 
some investigations for scientific application catalogues.

Agenda details for today:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_11th_2017
 


Details for how to join the meetings:

http://eavesdrop.openstack.org/#Scientific_Working_Group 


Everyone is welcome.

Cheers,
Stig



[Openstack-operators] [scientific] NEW TIME: Scientific WG IRC Meeting Wednesday 1100 UTC

2017-07-04 Thread Stig Telfer
Hi All - 

We have a Scientific WG meeting on Wednesday at 1100 UTC in #openstack-meeting. 
 This is our NEW TIME - two hours later than usual.

This week we’d like to discuss plans for SuperComputing as the BoF submission 
deadline is approaching.  Also, a catch up on scientific application catalogues 
and Blair’s experiences with transparent huge pages issues under load.

The meeting agenda is available here:

https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_5th_2017
 


Details for our IRC meetings are available here:

http://eavesdrop.openstack.org/#Scientific_Working_Group 


Everyone is welcome.

Cheers,
Stig




Re: [Openstack-operators] [neutron] ML2/OVS dropping packets?

2017-06-21 Thread Stig Telfer
Hi Jon -

From what I understand, while you might have gone to the trouble of configuring 
a lossless data centre Ethernet, that guarantee against packet loss ends at the 
hypervisor. OVS (and other virtual switches) will drop packets rather than 
exert back pressure.

I saw a useful paper from IBM Zurich on developing a flow-controlled virtual 
switch:

http://researcher.ibm.com/researcher/files/zurich-DCR/Got%20Loss%20Get%20zOVN.pdf
 


It’s a bit dated (2013) but may still apply.

If you figure out a way of preventing this with modern OVS, I’d be very 
interested to know.

Best wishes,
Stig
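For anyone wanting to check for the same symptoms on their own hypervisors, here is a minimal sketch (my own illustration, not part of the original thread) that prints the OVS per-port drop counters and the conntrack headroom discussed in the messages quoted below.  It assumes the integration bridge is named br-int, that ovs-ofctl is on the PATH, and that the nf_conntrack sysctls are exposed under /proc; adjust to suit your deployment.

#!/usr/bin/env python
# Minimal sketch: report OVS port drop counters and conntrack table headroom
# on a hypervisor.  Assumes the integration bridge is "br-int" (adjust as
# needed) and that ovs-ofctl is installed; run as root.
import subprocess


def ovs_port_drops(bridge="br-int"):
    """Print any ovs-ofctl port statistics lines that mention drops."""
    out = subprocess.check_output(["ovs-ofctl", "dump-ports", bridge],
                                  universal_newlines=True)
    for line in out.splitlines():
        if "drop" in line:
            print(line.strip())


def conntrack_headroom():
    """Compare nf_conntrack_count against nf_conntrack_max."""
    base = "/proc/sys/net/netfilter/"
    with open(base + "nf_conntrack_count") as f:
        count = int(f.read())
    with open(base + "nf_conntrack_max") as f:
        limit = int(f.read())
    print("conntrack: %d of %d entries in use (%.1f%%)"
          % (count, limit, 100.0 * count / limit))


if __name__ == "__main__":
    ovs_port_drops()
    conntrack_headroom()

Steadily climbing drop counters on the qvo- port while conntrack stays well under its limit would support the reading in this thread that the virtual switch, rather than the iptables/conntrack layer, is discarding the packets.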


> On 21 Jun 2017, at 16:24, Jonathan Proulx  wrote:
> 
> On Wed, Jun 21, 2017 at 02:39:23AM -0700, Kevin Benton wrote:
> :Are there any events going on during these outages that would cause
> :reprogramming by the Neutron agent? (e.g. port updates) If not, it's likely
> :an OVS issue and you might want to cross-post to the ovs-discuss mailing
> :list.
> 
> Guess I'll have to wander deeper into OVS land.
> 
> No agent updates and nothing in ovs logs (at INFO), flipping to Debug
> and there's so many messages they get dropped:
> 
> 2017-06-21T15:15:36.972Z|00794|dpif(handler12)|DBG|Dropped 35 log messages in 
> last 0 seconds (most recently, 0 seconds ago) due to excessive rate
> 
> /me wanders over to ovs-discuss
> 
> Thanks,
> -Jon
> 
> :Can you check the vswitch logs during the packet loss to see if there are
> :any messages indicating a reason? If that doesn't show anything and it can
> :be reliably reproduced, it might be worth increasing the logging for the
> :vswitch to debug.
> :
> :
> :
> :On Tue, Jun 20, 2017 at 12:36 PM, Jonathan Proulx  wrote:
> :
> :> Hi All,
> :>
> :> I have a very busy VM (well, one of my users does; I don't have access
> :> but do have a cooperative and competent admin to interact with on the
> :> other end).
> :>
> :> At peak times it *sometimes* misses packets.  I've been digging in for
> :> a bit and it looks like they get dropped in OVS land.
> :>
> :> The VM's main function in life is to pull down webpages from other
> :> sites and analyze as requested.  During peak times ( EU/US working
> :> hours ) it sometimes hangs some requests and sometimes fails.
> :>
> :> Looking at the traffic, the outbound SYN request from the VM is always good
> :> and the returning ACK always gets to the physical interface of the hypervisor
> :> (on a provider VLAN).
> :>
> :> When packets get dropped they do not make it to the qvo-XX on
> :> the integration bridge.
> :>
> :> My suspicion is that OVS isn't keeping up with the eth1-br flow rules remapping
> :> from external to internal VLAN IDs, but I'm not quite sure how to prove
> :> that or what to do about it.
> :>
> :> My initial thought had been to blame conntrack, but drops are happening
> :> before the iptables rules, and while there are a lot of connections on
> :> this hypervisor:
> :>
> :> net.netfilter.nf_conntrack_count = 351880
> :>
> :> There should be plenty of headroom to handle:
> :>
> :> net.netfilter.nf_conntrack_max = 1048576
> :>
> :> Anyone have thoughts on where to go with this?
> :>
> :> version details:
> :> Ubuntu 14.04
> :> OpenStack Mitaka
> :> ovs-vsctl (Open vSwitch) 2.5.0
> :>
> :> Thanks,
> :> -Jon
> :>
> :> --
> :>
> :>
> 
> -- 
> 



Re: [Openstack-operators] [User-committee] [scientific] 0900 UTC meeting time change?

2017-06-21 Thread Stig Telfer
Doing future meetings at 1100UTC would also be fine by me.  I’ll put it on the 
agenda for today’s meeting (in ~10 minutes)

Stig


> On 21 Jun 2017, at 09:13, Blair Bethwaite  wrote:
> 
> Thanks Pierre. That's also my preference.
> 
> Just to be clear, today's 0900 UTC meeting (45 mins from now) is going ahead 
> at the usual time.
> 
> On 21 Jun. 2017 5:21 pm, "Pierre Riteau"  > wrote:
> Hi Blair,
> 
> I strongly prefer 1100 UTC.
> 
> Pierre
> 
> > On 21 Jun 2017, at 06:54, Blair Bethwaite  > > wrote:
> >
> > Hi all,
> >
> > The Scientific-WG's 0900 UTC meeting time (it's the non-US friendly time) 
> > is increasingly difficult for me to make. A couple of meetings back we 
> > discussed changing it and had general agreement. The purpose here is to get 
> > a straw poll of preferences for -2 or +2 to the current time, i.e., do you 
> > prefer 0700 or 1100 UTC instead (subject to meeting channel availability)?
> >
> > Cheers,
> > b1airo
> 
> 


[Openstack-operators] [scientific] IRC Meeting: Science app catalogues, security of research computing on OpenStack - Wednesday 0900 UTC

2017-06-20 Thread Stig Telfer
Greetings! 

We have an IRC meeting on Wednesday at 0900 UTC in channel #openstack-meeting.

This week we’d like to hear people’s thoughts and experiences on providing 
scientific application catalogues to users - in particular with a view to 
gathering best practice for a new chapter for the Scientific OpenStack book.

Similarly, we’d like to discuss what people are doing for security of research 
computing instances on OpenStack.

The agenda is available here: 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_June_21st_2017
 

Details of the IRC meeting are here: 
http://eavesdrop.openstack.org/#Scientific_Working_Group 


Please come along with ideas, suggestions or requirements.  All are welcome.

Cheers,
Stig


[Openstack-operators] [scientific] Starting soon: IRC Meeting: RDMA-enabled Hadoop and Spark, Open Research Cloud - Tuesday 2100 UTC

2017-06-13 Thread Stig Telfer
Hello all - 

We have an IRC meeting coming up on Tuesday at 2100 UTC in channel 
#openstack-meeting - about 90 minutes time.

This week we have Dr Xiaoyi Lu and Prof DK Panda from Ohio State University to 
talk about HiBD, their project for creating RDMA-enabled optimisations of 
Hadoop, Spark and Memcached.  For context (and some impressive benchmarks), DK 
recently presented their work at HPCAC:

- Video: https://youtu.be/MOUL_rOqQUw
- Slides: 
http://www.hpcadvisorycouncil.com/events/2017/swiss-workshop/pdf/Tuesday11April/DKPanda_BigDataMeetsHPC_Tue04112017.pdf

We’ll be discussing subjects such as implementation in OpenStack infrastructure.

We’ll also have an update on the Open Research Cloud initiative, and look ahead 
to Scientific OpenStack events coming this summer.

Hopefully see you on Tuesday, 2100 UTC, #openstack-meeting.  Everyone is 
welcome.  Details of how to join are at:

http://eavesdrop.openstack.org/#Scientific_Working_Group

Cheers,
Stig


[Openstack-operators] [scientific] IRC Meeting: RDMA-enabled Hadoop and Spark, Open Research Cloud - Tuesday 2100 UTC

2017-06-12 Thread Stig Telfer
Hello all - 

We have an IRC meeting coming up on Tuesday at 2100 UTC in channel 
#openstack-meeting.

This week we have Dr Xiaoyi Lu and Prof DK Panda from Ohio State University to 
talk about HiBD, their project for creating RDMA-enabled optimisations of 
Hadoop, Spark and Memcached.  For context (and some impressive benchmarks), DK 
recently presented their work at HPCAC:

- Video: https://youtu.be/MOUL_rOqQUw
- Slides: 
http://www.hpcadvisorycouncil.com/events/2017/swiss-workshop/pdf/Tuesday11April/DKPanda_BigDataMeetsHPC_Tue04112017.pdf

We’ll be discussing subjects such as implementation in OpenStack infrastructure.

We’ll also have an update on the Open Research Cloud initiative, and look ahead 
to Scientific OpenStack events coming this summer.

Hopefully see you on Tuesday, 2100 UTC, #openstack-meeting.  Everyone is 
welcome.  Details of how to join are at:

http://eavesdrop.openstack.org/#Scientific_Working_Group

Cheers,
Stig


[Openstack-operators] [scientific] Reminder: Scientific WG IRC meeting - Tuesday 2100 UTC

2017-05-30 Thread Stig Telfer
Hi all -

We have a meeting on Tuesday at 2100 UTC in channel #openstack-meeting.  We’d 
like to round up some summit video picks, cover the discussion on gathering 
OpenStack-related research papers, and discuss goals for the new cycle.

Today’s full agenda is here:
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_30th_2017

Details of the meeting are here:
http://eavesdrop.openstack.org/#Scientific_Working_Group

Cheers,
Stig




[Openstack-operators] [scientific] Reminder: Scientific WG IRC meeting - Wednesday 0900 UTC

2017-05-23 Thread Stig Telfer
Hello -

We have a meeting on Wednesday at 0900 UTC in channel #openstack-meeting.  We’d 
like to cover the discussion on gathering OpenStack-related research papers, 
and round up on activities from the summit.

Today’s full agenda is here:
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_24th_2017
 


Details of the meeting are here:
http://eavesdrop.openstack.org/#Scientific_Working_Group 


Cheers,
Stig


[Openstack-operators] [scientific] IRC Meeting reminder: Scientific WG, today 2100 UTC

2017-05-16 Thread Stig Telfer
Greetings all - 

We have a meeting coming up today at 2100 UTC in channel #openstack-meeting.  
We’ve got plenty of follow-on discussions from the Boston summit and cloud 
congress, plus an interview of WG members on why OpenStack works well for 
research computing.

Today’s full agenda is here:
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_16th_2017
 


Details of the meeting are here:
http://eavesdrop.openstack.org/#Scientific_Working_Group 


Cheers,
Stig


[Openstack-operators] [scientific] Today's Scientific WG meeting CANCELLED

2017-05-02 Thread Stig Telfer
Hi all - 

Unfortunately we must cancel today’s IRC meeting as none of the co-chairs are 
able to run the session.  However, there’s plenty going on next week to look 
forward to:

https://etherpad.openstack.org/p/Scientific-WG-boston 


We are still looking for more lightning talk topics (see the top of the 
etherpad above).

Apologies,
Stig



Re: [Openstack-operators] [scientific] Lightning talks on Scientific OpenStack

2017-04-27 Thread Stig Telfer
Hi George - 

Sorry for the slow response.  The consensus was for 8 minutes maximum.  That 
should be plenty for a lightning talk, and enables us to fit one more in.

Best wishes,
Stig


> On 27 Apr 2017, at 20:29, George Mihaiescu <lmihaie...@gmail.com> wrote:
> 
> Hi Stig, will it be 10-minute sessions like in Barcelona?
> 
> Thanks,
> George 
> 
>> On Apr 26, 2017, at 03:31, Stig Telfer <stig.openst...@telfer.org> wrote:
>> 
>> Hi All - 
>> 
>> We have planned a session of lightning talks at the Boston summit to discuss 
>> topics specific for OpenStack and research computing applications.  This was 
>> a great success at Barcelona and generated some stimulating discussion.  We 
>> are also hoping for a small prize for the best talk of the session!
>> 
>> This is the event:
>> https://www.openstack.org/summit/boston-2017/summit-schedule/events/18676
>> 
>> If you’d like to propose a talk, please add a title and your name here:
>> https://etherpad.openstack.org/p/Scientific-WG-boston
>> 
>> Everyone is welcome.
>> 
>> Cheers,
>> Stig
>> 
>> 




[Openstack-operators] [scientific] Reminder: Scientific WG IRC meeting today at 2100 UTC

2017-04-18 Thread Stig Telfer
Greetings - 

We have a Scientific WG IRC meeting at the top of the hour (2100 UTC) in 
channel #openstack-meeting.  Everyone is welcome.

The agenda[1] is a round-up of some of the planning for Boston: Summit, Forum, 
Congress and Social. We’ll try to whistle through that and get onto AOB - of 
which there seems to be more than the average week in the Scientific OpenStack 
world.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_April_18th_2017
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group




Re: [Openstack-operators] Passing a flavor's extra_specs to libvirt

2017-04-13 Thread Stig Telfer
Hello Paco - 

> On 13 Apr 2017, at 11:10, Paco Bernabé  wrote:
> 
> Hi,
> 
> The issue is apparently solved; we found a solution here 
> https://www.stackhpc.com/tripleo-numa-vcpu-pinning.html where libvirt and 
> qemu-kvm version restrictions were indicated. The CentOS 7.3 repo has an 
> older qemu-kvm version (1.5.3) than the one needed (>= 2.1.0), so we added 
> the kvm-common repo, as recommended there. Now 1 host is returned 
> (Filter NUMATopologyFilter returned 1 hosts) and the guest VM has the desired 
> cpu topology.

I’d seen your mail on my way onto a plane, and wanted to get home and get my 
facts straight before responding.  Great to see you got there first, and that 
our post was helpful: made my day :-)

Share and enjoy,
Stig

> 
> -- 
> Met vriendelijke groeten / Best regards,
> Paco Bernabé
> Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | T 
> +31 610 961 785 | p...@surfsara.nl | www.surfsara.nl
> 
> 
> 
> 
>> On 13 Apr 2017, at 11:38, Paco Bernabé  wrote:
>> 
>> Hi,
>> 
>> More info: in the log file of the nova-scheduler we see messages like ( 
>> is the compute host name):
>> 
>>  • ,  fails NUMA topology requirements. No host NUMA 
>> topology while the instance specified one. host_passes 
>> /usr/lib/python2.7/site-packages/nova/scheduler/filters/numa_topology_filter.py:100
>>  • Filter NUMATopologyFilter returned 0 hosts
>> 
>> So, we are not sure if the filters are ok in nova.conf:
>> 
>> scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,CoreFilter,NUMATopologyFilter
>> 
>> -- 
>> Met vriendelijke groeten / Best regards,
>> Paco Bernabé
>> Senior Systemsprogrammer | SURFsara | Science Park 140 | 1098XG Amsterdam | 
>> T +31 610 961 785 | p...@surfsara.nl | www.surfsara.nl
>> 
>> 
>> 
>> 
>>> On 13 Apr 2017, at 09:34, Paco Bernabé  wrote:
>>> 
>>> Hi,
>>> 
>>> After reading the following articles:
>>> 
>>> • https://docs.openstack.org/admin-guide/compute-flavors.html
>>> • 
>>> http://redhatstackblog.redhat.com/2015/05/05/cpu-pinning-and-numa-topology-awareness-in-openstack-compute/
>>> • 
>>> http://openstack-in-production.blogspot.nl/2015/08/numa-and-cpu-pinning-in-high-throughput.html
>>> • 
>>> http://www.stratoscale.com/blog/openstack/cpu-pinning-and-numa-awareness/
>>> 
>>> We are not able yet to expose the NUMA config to the guest VM. This is the 
>>> configuration of one of our compute nodes:
>>> 
>>> # lscpu
>>> Architecture:  x86_64
>>> CPU op-mode(s):32-bit, 64-bit
>>> Byte Order:Little Endian
>>> CPU(s):48
>>> On-line CPU(s) list:   0-47
>>> Thread(s) per core:2
>>> Core(s) per socket:12
>>> Socket(s): 2
>>> NUMA node(s):  4
>>> Vendor ID: GenuineIntel
>>> CPU family:6
>>> Model: 79
>>> Model name:Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
>>> Stepping:  1
>>> CPU MHz:   2266.085
>>> BogoMIPS:  4404.00
>>> Virtualization:VT-x
>>> L1d cache: 32K
>>> L1i cache: 32K
>>> L2 cache:  256K
>>> L3 cache:  15360K
>>> NUMA node0 CPU(s): 0-5,24-29
>>> NUMA node1 CPU(s): 6-11,30-35
>>> NUMA node2 CPU(s): 12-17,36-41
>>> NUMA node3 CPU(s): 18-23,42-47
>>> 
>>> 
>>> And this is the flavour configuration:
>>> 
>>> OS-FLV-DISABLED:disabled   | False
>>> OS-FLV-EXT-DATA:ephemeral  | 2048
>>> disk                       | 30
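The flavour properties at issue in this thread are Nova's hw:* extra specs.  As a rough sketch of the kind of flavour that exercises the NUMATopologyFilter (the flavour name and sizes below are hypothetical examples of mine, and it assumes python-openstackclient is installed with admin credentials in the environment):

#!/usr/bin/env python
# Rough sketch: create a flavour carrying the CPU-pinning/NUMA extra specs
# discussed above.  The flavour name and sizes are hypothetical; requires
# python-openstackclient and admin credentials (OS_* variables) to be set.
import subprocess

FLAVOR = "numa.pinned.example"  # hypothetical name

# Create the flavour - adjust RAM/disk/vCPUs to your own sizing.
subprocess.check_call([
    "openstack", "flavor", "create", FLAVOR,
    "--ram", "8192", "--disk", "30", "--vcpus", "8",
])

# Attach the extra specs that the NUMATopologyFilter and libvirt driver act on.
subprocess.check_call([
    "openstack", "flavor", "set", FLAVOR,
    "--property", "hw:cpu_policy=dedicated",
    "--property", "hw:numa_nodes=2",
])

With a flavour like this, the scheduler also needs the host to report a NUMA topology; the "No host NUMA topology while the instance specified one" message above appears to be exactly what happens when the libvirt/qemu-kvm versions are too old for Nova to build one, which matches the fix described at the top of the thread.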

[Openstack-operators] [scientific] Reminder: Scientific WG IRC meeting today at 0900 UTC

2017-04-12 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting later today at 0900 UTC in channel 
#openstack-meeting.  Everyone is welcome.

The agenda[1] is a round-up of some of the Forum planning that has been going 
on in the last couple of weeks.  We also have a round-up of the OpenStack angle 
at the HPC Advisory Council conference. Plus an update on WG activities planned 
for Boston.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_April_12th_2017
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 




[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG IRC meeting today at 2100 UTC

2017-03-21 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting later today at 2100 UTC in channel 
#openstack-meeting.  Everyone is welcome.

The agenda[1] is a round-up of the goings-on at the operators meetup in Milan 
and discussion on input to the Forum at Boston.  Scientific WG meeting details 
are available here[2].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_March_21st_2017
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 




[Openstack-operators] [scientific][scientific-wg] Meeting today CANCELLED

2017-03-15 Thread Stig Telfer
Hi All - 

Today’s Scientific WG meeting is cancelled due to the Milan Ops meetup this 
week.

There’s plenty of relevant action at the ops meetup (see the list of etherpads 
here[1]).  Hopefully there’ll be a good trip report and plenty to discuss at 
next week’s meeting.

Apologies for late notice,
Stig

[1] https://etherpad.openstack.org/p/MIL-ops-meetup 




[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG IRC meeting, Tuesday 2100 UTC

2017-03-07 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting later today at 2100 UTC in channel 
#openstack-meeting.  Everyone is welcome.

The agenda[1] is a simple one - follow-on brainstorming for Scientific WG input 
to the Forum at Boston.  Scientific WG meeting details are available here[2].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_March_7th_2017
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 



Re: [Openstack-operators] Forum Brainstorming???

2017-03-01 Thread Stig Telfer
Thanks Shamail - 

We’ve just completed a first pass on our brainstorming in the Scientific WG.  
Some ideas from a research computing perspective are here:

https://etherpad.openstack.org/p/BOS-UC-brainstorming-scientific-wg

More input from WG members (and anyone else) is of course welcome and 
appreciated.

Best wishes,
Stig


> On 27 Feb 2017, at 20:38, Shamail Tahir  wrote:
> 
> Hi everyone,
> 
> Welcome to the topic selection process for our first Forum in Boston. If 
> you've participated in an ops meetup before, this should seem pretty 
> comfortable. If not, note that this is not a classic conference track with 
> speakers and presentations. OpenStack community members (participants in 
> development teams, working groups, and other interested individuals) discuss 
> the topics they want to cover and get alignment on and we welcome your 
> participation.
> 
> The Forum is for the entire community to come together; create a neutral 
> space rather than having separate “ops” and “dev” days. Boston marks the 
> start of the Queen's release cycle, where ideas and requirements will be 
> gathered. Users should aim to come armed with feedback from February's Ocata 
> release if at all possible. We aim to ensure the broadest coverage of topics 
> that will allow for multiple parts of the community getting together to 
> discuss key areas within our community/projects.
> 
> Examples of the types of discussions and some sessions that might fit within 
> each one:
>   • Strategic, whole-of-community discussions, to think about the big 
> picture, including beyond just one release cycle and new technologies
>   • eg Making OpenStack One Platform for containers/VMs/Bare Metal 
> (Strategic session) the entire community congregates to share opinions on how 
> to make OpenStack achieve its integration engine goal
>   • Cross-project sessions, in a similar vein to what has happened at 
> past design summits, but with increased emphasis on issues that are relevant 
> to all areas of the community
>   • eg Rolling Upgrades at Scale (Cross-Project session) – the Large 
> Deployments Team collaborates with Nova, Cinder and Keystone to tackle issues 
> that come up with rolling upgrades when there’s a large number of machines.
>   • Project-specific sessions, where developers can ask users specific 
> questions about their experience, users can provide feedback from the last 
> release and cross-community collaboration on the priorities, and ‘blue sky’ 
> ideas for the next release.
>   • eg Neutron Pain Points (Project-Specific session) – Co-organized by 
> neutron developers and users. Neutron developers bring some specific 
> questions they want answered, Neutron users bring feedback from the latest 
> release and ideas about the future.
> 
> There are two stages to the brainstorming:
> 1. Starting today, set up an etherpad with your group/team, or use one on 
> the list and start discussing ideas you'd like to talk about at the Forum. 
> Then, through +1s on etherpads and mailing list discussion, work out which 
> ones are the most needed - just like you did prior to the ops events.  
> 2. Then, in a couple of weeks, we will open up a more formal web-based 
> tool for submission of abstracts that came out of the brainstorming on top.
> 
> We expect working groups may make their own etherpads, however the User 
> Committee offers a catch-all to get the widest feedback possible:
> https://etherpad.openstack.org/p/BOS-UC-brainstorming
> 
> Feel free to use that, or make one for your group and add it to the list at: 
> https://wiki.openstack.org/wiki/Forum/Boston2017
> 
> Thanks,
> User Committee




[Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Wednesday 0900

2017-02-28 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC in channel 
#openstack-meeting.  Everyone is welcome.

The agenda[1] is a simple one - brainstorming for Scientific WG input to the 
Forum at Boston.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_March_1st_2017
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group



[Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Tuesday 2100

2017-02-21 Thread Stig Telfer
Greetings -

We have a Scientific WG IRC meeting shortly, at 2100 UTC in channel 
#openstack-meeting

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’ll survey some new work in the HPC monitoring landscape, look at 
the study coming together for identity federation and look at some options for 
an evening social at the Boston summit.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_February_21st_2017
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 




[Openstack-operators] [scientific] OpenStack at Supercomputing 2017

2017-02-07 Thread Stig Telfer
Hi All - 

We are hoping to put on a couple of hands-on OpenStack workshops at the 
Supercomputing 2017 conference in Denver later this year.  The proposal is to 
do a workshop on tuning OpenStack for scientific workloads, and a workshop on 
cloud-native deployment of scientific platforms and applications.

If you’re going to Supercomputing this year, and would be interested in taking 
part in a workshop, please add your details to our etherpad for planning the 
activity:

https://etherpad.openstack.org/p/SC17WorkshopWorksheet

The deadline for proposal submission is early next week.

Many thanks!
Stig




[Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Tuesday 2100 UTC

2017-02-07 Thread Stig Telfer
Greetings -

We have a Scientific WG IRC meeting in about an hour’s time, at 2100 UTC in 
channel #openstack-meeting

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’ll hear from Blair on his migration to a layer-3 ECMP fabric and 
the impact it has had on MPI workloads.  Also, some further studies on 
hypervisor tuning using HPL.  Plus we have a date for the Boston Cloud 
Declaration workshop on federation policy.  And planning for SuperComputing 
2017 is already underway with proposals submitted for some hands-on workshops.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_February_7th_2017
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group




[Openstack-operators] [scientific][scientific-wg] Reminder: IRC Meeting Wednesday 1st 0900 UTC

2017-01-31 Thread Stig Telfer
Hello - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we are talking about the Milan ops meetup, the Boston summit and 
SC2017 in Denver.  Between those three there should be plenty of WG activity to 
discuss.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_February_1st_2017
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC

2017-01-24 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’d like to develop the discussion on what the WG can achieve for 
the Boston OpenStack Summit. We’d also like to share people’s thoughts and 
experiences on frameworks for scientific reproducibility.  There’s also the 
opportunity to propose a hands-on workshop at SC2017.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_January_24th_2017
 

  
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 


[Openstack-operators] [scientific-wg][scientific] Reminder: IRC Meeting Wednesday 18th 0900 UTC

2017-01-17 Thread Stig Telfer
Hello all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we are looking at some upcoming events in the calendar, and then 
talking about federation (in its various forms) and how the WG can contribute 
in this area.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_January_18th_2016
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 


[Openstack-operators] Ansible-driven management of Dell server BIOS and RAID config

2017-01-10 Thread Stig Telfer
Hi All - 

We’ve just published the sources and a detailed writeup for some new tools for 
Ansible-driven management of Dell iDRAC BIOS and RAID configuration:

https://www.stackhpc.com/ansible-drac.html

The code’s up on Github and Ansible Galaxy.

It should fit neatly into any infrastructure using OpenStack Ironic for 
infrastructure management (and Dell server hardware).

Share and enjoy,
Stig


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC

2017-01-10 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’d like get our calendars out and talk about upcoming events for 
2017, in particular anything that WG members are actively participating in.  
Not least of all, the Boston OpenStack Summit...

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_January_10th_2017
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC

2016-12-20 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’ve heard of a number of relevant events coming in early 2017 - not 
least of which, we’d like to get the discussion started on what the WG can 
achieve for the Boston OpenStack Summit.  Plus we’d like to continue the 
discussion on how GPUs are handled in virtualised environments.  Bring and 
share guidance for getting started and best practice for optimisation, advanced 
topics, etc.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_December_21st_2016
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC

2016-12-13 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’d like to discuss how GPUs are handled in virtualised 
environments.  Bring and share guidance for getting started and best practice 
for optimisation, advanced topics, etc.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_December_13th_2016
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 


Re: [Openstack-operators] [scientific] Computing Insight UK in Manchester - Scientific OpenStack breakout session?

2016-12-08 Thread Stig Telfer
I have arranged a Scientific OpenStack breakout session for this conference - 
2pm on Thursday 15th.  If you’re coming to CIUK 
[https://eventbooking.stfc.ac.uk/news-events/ciuk-2016], please do come along 
- and also let me know if you would like to present.

Best wishes,
Stig


> On 7 Dec 2016, at 10:24, Stig Telfer <stig.openst...@telfer.org> wrote:
> 
> Hi All - 
> 
> Is anyone attending the Computing Insight conference in the UK next week, and 
> is there interest in organising an OpenStack breakout session on the Thursday 
> afternoon?  There doesn’t appear to be much for OpenStack in the breakout 
> programme currently - and I think there ought to be...
> 
> Best wishes,
> Stig
> 
> 



[Openstack-operators] [scientific] Computing Insight UK in Manchester - Scientific OpenStack breakout session?

2016-12-07 Thread Stig Telfer
Hi All - 

Is anyone attending the Computing Insight conference in the UK next week, and 
is there interest in organising an OpenStack breakout session on the Thursday 
afternoon?  There doesn’t appear to be much for OpenStack in the breakout 
programme currently - and I think there ought to be...

Best wishes,
Stig




[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC

2016-12-06 Thread Stig Telfer
Hi everyone - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’d like to continue our focus on all things monitoring, spanning 
from application workload tracing to infrastructure monitoring and failure root 
cause analysis.  Bring what works for you and we’ll throw it all into a pot to 
make a primordial soup of best practice.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_December_7th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group




[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC

2016-11-28 Thread Stig Telfer
Hi everyone - 

We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

Firstly this week we are planning to take a look at some experiments 
benchmarking TCP performance over VXLAN and OVS tenant networks.  Secondly we’d 
like to hear people’s thoughts on requirements for telemetry and monitoring for 
research computing use cases.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_November_29th_2016
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 


[Openstack-operators] [scientific][scientific-wg] CANCELLED: Scientific WG IRC meeting Tuesday 15th November 2100 UTC

2016-11-14 Thread Stig Telfer
Hello All - 

Unfortunately there will be no Scientific WG meeting this week: by coincidence 
it conflicts with the Scientific OpenStack BoF at the Supercomputing 2016 
conference.

Apologies,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Wednesday 9th November 0900 UTC

2016-11-08 Thread Stig Telfer
Greetings All - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting

The agenda is available here[1] and full IRC meeting details are here[2].

We’d like to follow up on the events at Barcelona, and complete our planning of 
activity areas for the Ocata design cycle.  Last week we had some people come 
forward to lead and to participate, and we’d like to hear from any others who 
are interested.

If anyone would like to add an item for discussion on the agenda, it is also 
available in an etherpad[3].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_November_9th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
[3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Tuesday 1st November 2100 UTC

2016-11-03 Thread Stig Telfer

> On 3 Nov 2016, at 16:21, Álvaro López García <al...@ifca.unican.es> wrote:
> 
> On 01 Nov 2016 (12:59), Stig Telfer wrote:
>> Hi All - 
> 
> Hi All,
> 
>> We have a Scientific WG IRC meeting today at 2100 UTC on channel 
>> #openstack-meeting
>> 
>> The agenda is available here[1] and full IRC meeting details are here[2].
>> 
>> We’d like to follow up on the events at Barcelona, and plan activity areas 
>> for the Ocata design cycle.
>> 
>> If anyone would like to add an item for discussion on the agenda, it is also 
>> available in an etherpad[3].
> 
> I could not attend the meeting (2100 UTC meetings are hard for me) and I
> am afraid I will not be able to join next week either.
> 
> Nevertheless, I have read the minutes and the irc logs and you can count
> me on the (identity) federation part.

Hi Alvaro - thank you for volunteering, that’s great news!

Best wishes,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: IRC meeting Tuesday 1st November 2100 UTC

2016-11-01 Thread Stig Telfer
Hi All - 

We have a Scientific WG IRC meeting today at 2100 UTC on channel 
#openstack-meeting

The agenda is available here[1] and full IRC meeting details are here[2].

We’d like to follow up on the events at Barcelona, and plan activity areas for 
the Ocata design cycle.

If anyone would like to add an item for discussion on the agenda, it is also 
available in an etherpad[3].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_November_1st_2016
 

[2] http://eavesdrop.openstack.org/#Scientific_Working_Group 

[3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] This week's Scientific WG meeting

2016-10-17 Thread Stig Telfer
Hi All - 

Apologies, there will not be a WG meeting this week: Blair’s en route to Europe 
and I’ll be travelling home from the OpenStack Identity Federation workshop.

Nevertheless I had two items to raise:

- People interested in contributing lightning talks for the Scientific 
OpenStack BoF, please let me know.
- Anyone who registered for the evening social but has not yet bought their 
ticket, please do so before the remaining tickets go public!

Best wishes,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] Barcelona Scientific BoF: Call for lightning talks

2016-10-11 Thread Stig Telfer
Hello all - 

We have our schedule confirmed and will be having a BoF for Scientific 
OpenStack users at 2:15pm on the summit Wednesday: 
https://www.openstack.org/summit/barcelona-2016/summit-schedule/events/16779/scientific-working-group-bof-and-poster-session

We are planning to run some lightning talks in this session, typically up to 5 
minutes long.  If you or your institution have been implementing some bright 
ideas that take OpenStack into new territory for research computing use cases, 
let’s hear it!

Please follow up to me and Blair (Scientific WG co-chairs) if you’re interested 
in speaking and would like to bag a slot.

Best wishes,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC

2016-10-11 Thread Stig Telfer
Hi everyone - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we have our confirmed slots on the Barcelona summit schedule, plus 
other planning besides.  Also Blair’s going to follow up on his recent work on 
hypervisor tuning.  We are also hoping to have Phil Kershaw, chair of the Cloud 
Working Group of the Research Councils UK, talk about an upcoming cloud 
workshop on OpenStack for research computing hosted at the Francis Crick 
Institute in London.

If anyone would like to add an item for discussion on the agenda, it is also 
available in an etherpad[3].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_October_12th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
[3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] Users of Ceph with Accelio?

2016-10-07 Thread Stig Telfer
Hi All - 

I’m interested in evaluating the performance of Ceph running over Accelio for 
RDMA data transport.  From what I can see on the web, the early work done on 
adding RDMA support does not appear to have materialised yet into a stable 
production release.

Is there anyone out there who has used Accelio with Ceph and can comment on 
their experience with it?

Many thanks,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC

2016-09-27 Thread Stig Telfer
Greetings all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we have discussion of federated identity management on the agenda, 
plus updates on Barcelona.  Looking further ahead, we are seeking 
OpenStack-centric picks from the Supercomputing 2016 conference schedule.  I’d 
also like to continue the discussion on how we might change the way the WG 
works for the next cycle, and gather people’s thoughts for the agenda for the 
Barcelona committee meeting.

If anyone would like to add an item for discussion on the agenda, it is also 
available in an etherpad[3].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_September_28th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
[3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Tuesday 2100 UTC

2016-09-20 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Tuesday at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we have updates on Barcelona, and are looking for OpenStack-centric 
picks from the Supercomputing 2016 conference schedule.  I’m also interested in 
discussing how people might like to change the way the WG works for the next 
cycle, and in gathering people’s thoughts for the agenda for the Barcelona committee 
meeting.

If anyone would like to add an item for discussion on the agenda, it is also 
available in an etherpad[3].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_September_20th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
[3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC

2016-09-13 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we are looking for HPC filesystem war stories (Lustre/GPFS etc), and 
discussing a new project for scientific cloud federation.  Also we will be 
updating everyone with progress on WG activities planned for the Barcelona 
summit.

If anyone would like to add an item for discussion on the agenda, it is also 
available in an etherpad[3].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_September_14th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
[3] https://etherpad.openstack.org/p/Scientific-WG-next-meeting-agenda
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG meeting Wednesday 0900 UTC

2016-08-30 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’ll be gathering some top picks from the conference schedule for a 
selection themed on scientific computing.  We will also review progress so far 
on the OpenStack/HPC white papers and seek volunteer experts!

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_31st_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group



___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG IRC Meeting Tuesday 2100 UTC

2016-08-23 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting today at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

In addition to the usual items this week I’d like to do some planning for 
activities at the Barcelona Summit.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_23rd_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific] Seeking users of Infiniband

2016-08-17 Thread Stig Telfer
Thanks Blair,

To clarify I’m interested in IB rather than RoCE.  There don’t seem to be many.

Best wishes,
Stig


> On 17 Aug 2016, at 08:43, Blair Bethwaite <blair.bethwa...@gmail.com> wrote:
> 
> Hi Stig,
> 
> When you say IB are you specifically talking about link-layer, or more the 
> RDMA capability and IB semantics supported by the drivers and APIs (so both 
> native IB and RoCE)?
> 
> Cheers,
> 
> 
> On 17 Aug 2016 2:28 AM, "Stig Telfer" <stig.openst...@telfer.org> wrote:
> Hi All -
> 
> I’m looking for some new data points on people’s experience of using 
> Infiniband within a virtualised OpenStack configuration.
> 
> Is there anyone on the list who is doing this, and how well does it work?
> 
> Many thanks,
> Stig
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] Seeking users of Infiniband

2016-08-16 Thread Stig Telfer
Hi All - 

I’m looking for some new data points on people’s experience of using Infiniband 
within a virtualised OpenStack configuration.

Is there anyone on the list who is doing this, and how well does it work?

Many thanks,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Scientific WG meeting today CANCELLED

2016-08-09 Thread Stig Telfer
Hi All - 

Apologies, our dual-redundant co-chair strategy has failed, as I’m on vacation 
and Blair has taken ill.  Unfortunately we shall need to cancel our meeting for 
later today and hope to come back stronger next week.

Apologies for the short notice,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting Wednesday 0900 UTC

2016-08-02 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week, in addition to the usual themes, I thought it would be useful to share 
information about upcoming scientific OpenStack events, as I’ve heard of 
several this week and I am sure others have too.  We’ll also be joined by Anne 
Bertucio to talk about student discounts for the Certified OpenStack 
Administrator qualification.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_August_3rd_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] PCI Passthrough issues

2016-07-28 Thread Stig Telfer
Just out of interest, I saw this talk from DK Panda a few months ago which 
covers MPI developments, including for GPU-Direct and for running in 
virtualised environments:

https://youtu.be/AsFakPJSplo

Do you know if this means there is a version of MVAPICH2 that supports 
GPU-Direct optimised for a virtualised environment, or are they entirely 
disjoint efforts?

Might be tricky - I am not sure how virtual PCI BARs map to the hypervisor’s 
physical PCI BARs.  If the physical PCI ranges are hidden from the VM it may 
not be possible to initiate a peer-to-peer transfer.  Does anyone know if it 
can be done?
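
As a starting point, here is a rough sketch of the comparison I have in mind - 
nothing OpenStack-specific, just plain lspci on both sides (the PCI addresses 
shown are made-up examples and need replacing with the real ones for the 
passed-through GPU):

  # on the hypervisor: note the physical BAR ("Region") addresses of the GPU
  lspci -nn | grep -i nvidia
  lspci -vvv -s 0000:05:00.0 | grep -i region

  # inside the guest: find the GPU on the virtual PCI bus and compare its BARs
  lspci -nn | grep -i nvidia
  lspci -vvv -s 00:05.0 | grep -i region

If the guest regions bear no relation to the host ones, that would suggest the 
guest only sees remapped windows, and peer-to-peer DMA would need help from 
the hypervisor.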

Best wishes,
Stig


> On 26 Jul 2016, at 08:09, Blair Bethwaite  wrote:
> 
> Hi Joe, Jon -
> 
> We seem to be good now on both qemu 2.3 and 2.5 with kernel 3.19
> (lowest we've tried). Also thanks to Jon we had an easy fix for the
> snapshot issues!
> 
> Next question - has anyone figured out how to make GPU P2P work? We
> haven't tried very hard yet, but with our current setup we're telling
> Nova to pass through the GK210GL "3D controller" and that results in
> the guest seeing individual GPUs attached to a virtualised PCI bus,
> even when e.g. passing through two K80s on the same board. Next
> obvious step is to try passing through the on-board PLX PCI bridge,
> but wondering whether anyone else has been down this path yet?
> 
> Cheers,
> 
> On 20 July 2016 at 12:57, Blair Bethwaite  wrote:
>> Thanks for the confirmation Joe!
>> 
>> On 20 July 2016 at 12:19, Joe Topjian  wrote:
>>> Hi Blair,
>>> 
>>> We only updated qemu. We're running the version of libvirt from the Kilo
>>> cloudarchive.
>>> 
>>> We've been in production with our K80s for around two weeks now and have had
>>> several users report success.
>>> 
>>> Thanks,
>>> Joe
>>> 
>>> On Tue, Jul 19, 2016 at 5:06 PM, Blair Bethwaite 
>>> wrote:
 
 Hilariously (or not!) we finally hit the same issue last week once
 folks actually started trying to do something (other than build and
 load drivers) with the K80s we're passing through. This
 
 https://devtalk.nvidia.com/default/topic/850833/pci-passthrough-kvm-for-cuda-usage/
 is the best discussion of the issue I've found so far, haven't tracked
 down an actual bug yet though. I wonder whether it has something to do
 with the memory size of the device, as we've been happy for a long
 time with other NVIDIA GPUs (GRID K1, K2, M2070, ...).
 
 Jon, when you grabbed Mitaka Qemu, did you also update libvirt? We're
 just working through this and have tried upgrading both but are
 hitting some issues with Nova and Neutron on the compute nodes,
 thinking it may libvirt related but debug isn't helping much yet.
 
 Cheers,
 
 On 8 July 2016 at 00:54, Jonathan Proulx  wrote:
> On Thu, Jul 07, 2016 at 11:13:29AM +1000, Blair Bethwaite wrote:
> :Jon,
> :
> :Awesome, thanks for sharing. We've just run into an issue with SRIOV
> :VF passthrough that sounds like it might be the same problem (device
> :disappearing after a reboot), but haven't yet investigated deeply -
> :this will help with somewhere to start!
> 
> :By the way, the nouveau mention was because we had missed it on some
> :K80 hypervisors recently and seen passthrough apparently work, but
> :then the NVIDIA drivers would not build in the guest as they claimed
> :they could not find a supported device (despite the GPU being visible
> :on the PCI bus).
> 
> Definitely sage advice!
> 
> :I have also heard passing mention of requiring qemu
> :2.3+ but don't have any specific details of the related issue.
> 
> I didn't do a bisection but with qemu 2.2 (from ubuntu cloudarchive
> kilo) I was sad and with 2.5 (from ubuntu cloudarchive mitaka but
> installed on a kilo hypervisor) I am working.
> 
> Thanks,
> -Jon
> 
> 
> :Cheers,
> :
> :On 7 July 2016 at 08:13, Jonathan Proulx  wrote:
> :> On Wed, Jul 06, 2016 at 12:32:26PM -0400, Jonathan D. Proulx wrote:
> :> :
> :> :I do have an odd remaining issue where I can run cuda jobs in the vm
> :> :but snapshots fail and after pause (for snapshotting) the pci device
> :> :can't be reattached (which is where i think it deletes the snapshot
> :> :it took).  Got same issue with 3.16 and 4.4 kernels.
> :> :
> :> :Not very well categorized yet, but I'm hoping it's because the VM I
> :> :was hacking on had it's libvirt.xml written out with the older qemu
> :> :maybe?  It had been through a couple reboots of the physical system
> :> :though.
> :> :
> :> :Currently building a fresh instance and bashing more keys...
> :>
> :> After an ugly bout of bashing I've solve my failing snapshot issue
> :> which I'll post here in hopes of 

[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting Tuesday 2100 UTC

2016-07-26 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting later today at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’ll be focusing on planning for the Supercomputing conference.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_26th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting July 20th 0900 UTC

2016-07-19 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting on Wednesday at 0900 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week if possible I’d like to review recent WG activities, in particular 
outputs for our user stories activity area.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_July_20th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [User-committee] [scientific-wg] high-performance/parallel file-systems panel at Barcelona

2016-07-12 Thread Stig Telfer
Hi Blair - 

Great idea, thanks for sharing it.

We (Cambridge University) would love to be included if we can add value in the 
Lustre corner.

Best wishes,
Stig


> On 12 Jul 2016, at 15:03, Blair Bethwaite  wrote:
> 
> Hi all,
> 
> Just pondering summit talk submissions and wondering if anyone else
> out there is interested in participating in a HPFS panel session...?
> 
> Assuming we have at least one person already who can cover direct
> mounting of Lustre into OpenStack guests then it'd be nice to find
> folks who have experience integrating with other filesystems and/or
> approaches, e.g., GPFS, Manila, Object Storage translation gateways.
> 
> -- 
> Cheers,
> ~Blairo
> 
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] ACTION Needed - All WGs Chairs

2016-07-08 Thread Stig Telfer
Hello Edgar - 

Scientific WG checking in.

Blair Bethwaite and I have been co-chairs since the group’s formal inauguration 
at the Austin summit.

A brief summary of where we are:

* We had about a hundred people present at the Austin session for the WG.  The 
summit etherpad [1] shows how the WG brings together a wide group of 
institutions.  The discussions were all too brief but the social connections 
made have flourished.

* There is plenty of outreach to the scientific computing community, including 
various events planned for the Supercomputing conference in November:
  - In large part due to Bill Boas and other WG members, there will be a panel 
session on OpenStack.
  - Thanks to Jon Mills and other WG members, a BoF session application is 
being prepared.
  - WG members from institutions with booths have also offered to host 
OpenStack content at the show.
  - WG members are assisting the Foundation with a white paper on OpenStack and 
HPC.

* WG members have also been very active in the HPC / Research speaker track at 
the summits.

* The WG has a wiki page[2] and a page under construction listing 
scientific/research OpenStack clouds.

* We meet weekly on IRC[3].  The focus of discussion tends to be on knowledge 
sharing rather than co-ordination of work, and this seems to work well for our 
members.

I don’t think we have specific plans for the Barcelona summit at this point - 
other than that we are looking forward to participating.

Best wishes,
Stig and Blair

[1] https://etherpad.openstack.org/p/scientific-wg-austin-summit-agenda
[2] https://wiki.openstack.org/wiki/Scientific_working_group
[3] http://eavesdrop.openstack.org/#Scientific_Working_Group


> On 8 Jul 2016, at 03:13, Edgar Magana  wrote:
> 
> Dear Working Groups Chairs,
>  
> As you know WGs present an opportunity to consolidate and increase the 
> Community around OpenStack from the User and Operator perspective. One of the 
> goals of the User Committee is to help WGs to have the resources needed to 
> achieve their specific goals and exactly as the TC validates and accepts 
> development projects the UC needs to validate and accept WGs before they be 
> granted with resources and support from UC and the OpenStack Foundation. 
> (https://wiki.openstack.org/wiki/Procedure_for_Creating_a_New_Working_Group)
>  
> In order to validate the current work and progress of the existing WGs 
> under:https://wiki.openstack.org/wiki/Governance/Foundation/UserCommittee#Working_Groups
> The UC would like to have all chairs to reply to this thread with the status 
> of the WG and their plans for the Barcelona summit. This is also an 
> opportunity to have a space allocated to get together and discuss about work 
> and activities. This is only for existing WGs not for new ones. If we do not 
> hear from WG Chairs, we will assume that those WGs are no longer active and 
> therefore they will be removed from UC documentation.
>  
> Please, reach out to us with a short report of the activities (wikis, 
> etherpads, repos, etc).
>  
> Thanks,
>  
> Edgar, Shilla and Jon
> UC Members.
>  
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [User-committee] [scientific-wg] possible time change for non-US fortnightly meeting

2016-06-30 Thread Stig Telfer
Sounds good to me too!

Best wishes,
Stig


> On 29 Jun 2016, at 07:49, Dario Vianello  wrote:
> 
> Same here :-)
> 
> Thanks!
> 
> 
> Dario Vianello
> 
> Cloud Bioinformatics Application Architect
> European Bioinformatics Institute (EMBL-EBI)
> Wellcome Trust Genome Campus, Hinxton, Cambridge, CB10 1SD, UK
> Email: da...@ebi.ac.uk
> 
>> On 28 Jun 2016, at 13:42,  
>>  wrote:
>> 
>> 0900 would work better for me J
>>  
>> Thanks
>>  
>> Alexander
>>  
>> From: Blair Bethwaite [mailto:blair.bethwa...@gmail.com] 
>> Sent: 28 June 2016 12:37
>> To: user-committee ; openstack-oper. 
>> 
>> Subject: [User-committee] [scientific-wg] possible time change for non-US 
>> fortnightly meeting
>>  
>> Hi all,
>> 
>> Currently the scientific-wg is meeting weekly on irc with alternating times 
>> week to week - 2100 UTC Tues this week and 0700 UTC Weds next week. The 
>> basic idea being to have both US and non-US friendly times.
>> 
>> The former time is pretty well attended but the latter is somewhat hit and 
>> miss so we're considering whether it should be adjusted. Would it help you 
>> attend if we pushed the 0700 UTC to 0900 or later?
>> 
>> Cheers,
>> Blairo & Stig
>> 
>> ___
>> User-committee mailing list
>> user-commit...@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee
> 
> ___
> User-committee mailing list
> user-commit...@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting Tuesday 2100 UTC

2016-06-28 Thread Stig Telfer
Hi all - 

We have a Scientific WG IRC meeting later today at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

This week we’ll be looking at the HPC/Research track for the upcoming Barcelona 
summit, and a review of actions achieved and achievable for our four activity 
areas.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_June_28th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG IRC meeting: 2100 UTC

2016-06-14 Thread Stig Telfer
Hi All - 

We have a Scientific WG IRC meeting on Tuesday 14 June at 2100 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

The headline agenda item is the Supercomputing 2016 conference in November in 
Salt Lake City.  There are several OpenStack-related activities already planned 
or proposed for this event.

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_June_14th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: IRC Meeting Wednesday 8 June 0700 UTC

2016-06-07 Thread Stig Telfer
Hello all - 

We have a Scientific WG IRC meeting on Wednesday 8 June at 0700 UTC on channel 
#openstack-meeting.

The agenda is available here[1] and full IRC meeting details are here[2].

The headline items for discussion are:

- Accounting/Scheduling: Using Blazar on Chameleon
- NeCTAR Research Cloud Net Promoter Score survey data for UX team
- We are looking for leads for the activity areas of parallel filesystems and 
resource accounting/scheduling; can we have some volunteers?

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_June_8th_2016
[2] http://eavesdrop.openstack.org/#Scientific_Working_Group

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Reminder: Scientific WG Meeting Wednesday 0700 UTC

2016-05-24 Thread Stig Telfer
Hi All -

We have a Scientific WG IRC meeting tomorrow at 0700 UTC on channel 
#openstack-meeting

The agenda is available here[1].

Best wishes,
Stig

[1] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_25th_2016
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific][scientific-wg] Scientific Working Group IRC meeting, Tuesday 2100 UTC

2016-05-16 Thread Stig Telfer
Hi all - 

The OpenStack Scientific WG has its first IRC meeting tomorrow (Tuesday) at 
2100 UTC on #openstack-meeting.  A calendar entry for the meetings is attached. 
 We will be alternating time zones of following WG meetings to broaden 
participation.

Full meeting details can be found here [1] and the meeting agenda can be found 
on the wiki page here [2].  The intention is to start looking at how we can 
make some of the contributions we have been talking about.

Best wishes,
Stig

[1] http://eavesdrop.openstack.org/#Scientific_Working_Group
[2] 
https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_17th_2016

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//yaml2ical agendas//EN
BEGIN:VEVENT
SUMMARY:Scientific Working Group
DTSTART;VALUE=DATE-TIME:20160517T21Z
DURATION:PT1H
DESCRIPTION:Project:  Scientific Working Group\nChair:  Blair Bethwaite (b
 lairo) and Stig Telfer (oneswig)\nDescription:  The Scientific Working Gro
 up is dedicated to representing and advancing the use-cases and needs of r
 esearch and high-performance computing atop OpenStack\, as well as providi
 ng a forum for cross-institutional collaboration. If you are (or would lik
 e to) run OpenStack to support researchers/scientists/academics and/or HPC
 /HTC\, then please join!\n\nAgenda URL:  https://wiki.openstack.org/wiki/S
 cientific_working_group
LOCATION:#openstack-meeting
RRULE:FREQ=WEEKLY;INTERVAL=2
END:VEVENT
BEGIN:VEVENT
SUMMARY:Scientific Working Group
DTSTART;VALUE=DATE-TIME:20160525T07Z
DURATION:PT1H
DESCRIPTION:Project:  Scientific Working Group\nChair:  Blair Bethwaite (b
 lairo) and Stig Telfer (oneswig)\nDescription:  The Scientific Working Gro
 up is dedicated to representing and advancing the use-cases and needs of r
 esearch and high-performance computing atop OpenStack\, as well as providi
 ng a forum for cross-institutional collaboration. If you are (or would lik
 e to) run OpenStack to support researchers/scientists/academics and/or HPC
 /HTC\, then please join!\n\nAgenda URL:  https://wiki.openstack.org/wiki/S
 cientific_working_group
LOCATION:#openstack-meeting
RRULE:FREQ=WEEKLY;INTERVAL=2
END:VEVENT
END:VCALENDAR
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [scientific] Ironic Summit recap - ops experiences

2016-05-11 Thread Stig Telfer
Hi All - 

Jim Rollenhagen from the Ironic project has just posted a great summit report 
of Ironic team activities on the openstack-dev mailing list[1], which includes 
this item that will be of interest to the Scientific WG members who are 
looking to work on bare metal activities this cycle:

> # Making ops less worse
> 
> [Etherpad](https://etherpad.openstack.org/p/ironic-newton-summit-ops)
> 
> We discussed some common failure cases that operators see, and how we
> can solve them in code.
> 
> We discussed flaky BMCs, which end with the node in maintenance mode,
> and if Ironic can get them out of that mode automagically. We identified
> the need to distinguish between maintenance set by ironic and set by
> operators, and do things like attempt to connect to the BMC on a power
> state request, and turn off maintenance mode if successful. JayF is
> going to write a spec for this differentiation.
> 
> Folks also expressed the desire to be able to reset the BMC via APIs. We
> have a BMC reset function in the vendor interface for the ipmitool
> driver; dtantsur volunteered to write a spec to promote that method to
> an official ManagementInterface method.
> 
> We also talked for a while about stuck states. This has been mostly
> solved in code, but is still a problem for some deployers. We decided
> that we should not have a "reset-state" API like nova does, but rather a
> command line tool to handle this. lintan has volunteered to write a
> proposal for this; I have also posted some [straw man
> code](https://review.openstack.org/#/c/311273/) that someone is welcome
> to take over or use.

The operator issues already identified cover some things we’ve hit at 
Cambridge; please do scan through and contribute if there is anything they have 
not covered.
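
One practical aside on the flaky-BMC case: until something lands in Ironic, 
the usual manual equivalent is an out-of-band reset of the controller with 
plain ipmitool before clearing maintenance mode - a sketch only, so substitute 
your own BMC address and credentials and check that your hardware is happy 
with a cold reset:

  ipmitool -I lanplus -H <bmc-address> -U <user> -P <password> mc reset cold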

Best wishes,
Stig

[1] http://lists.openstack.org/pipermail/openstack-dev/2016-May/094658.html 
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [scientific][accounting] Resource management

2016-05-03 Thread Stig Telfer
Thanks Tim, this is a great read and sets out CERN’s experience and use cases 
for enhanced accounting very well.

Best wishes,
Stig


> On 2 May 2016, at 18:02, Tim Bell  wrote:
> 
> 
> Following the discussions last week, I have put down a blog on how CERN does 
> its resource management for the accounting team on the Scientific Working 
> Group. The areas we looked at were 
>   • Lustre-as-a-Service in Manila
>   • Bare metal management
>   • Accounting
>   • User Stories and Reference Architectures
> The details are at 
> http://openstack-in-production.blogspot.fr/2016/04/resource-management-at-cern.html.
>  I list 5 needs to support these use cases. 
> 
> Need #1 : CPU performance based allocation and scheduling
> Need #2 : Nested Quotas
> Need #3 : Spot Market
> Need #4 : Reducing quota below utilization
> Need #5 : Components without quota
> 
> Given that the aim for the user stories input is that we consolidate a common 
> set of requirements, I’d welcome feedback on the Needs to see if these are 
> common across the scientific community and the relative priorities (mine are 
> unsorted).
> 
> Tim
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] Intel 10 GbE / bonding issue with hash policy layer3+4

2016-03-24 Thread Stig Telfer
Hi Sascha -

I had a similar experience earlier this week.  I was testing bond performance 
on a dual-port Mellanox NIC, LACP-bonded with layer 3+4 transmit hash.  First 
run of iperf (8 streams), I saw a reasonable distribution across the links.  
Shortly after that, performance dropped to a level that suggested no 
distribution.  I didn’t look into it any further at the time.

This was on CoreOS beta (kernel 4.3-6 IIRC).

If our experiences have a common root cause, I have a different distro and NIC 
to you, which might eliminate those components.  I should have the kit back in 
this configuration within a week; I’ll try to probe and report back.
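
When I do, this is roughly what I plan to check, in case anyone wants to 
compare notes - a minimal sketch assuming the standard in-kernel bonding 
driver, with eth0/eth1 standing in for whatever the real slave interfaces are 
called:

  # confirm the hash policy actually in effect on the bond
  cat /sys/class/net/bond0/bonding/xmit_hash_policy
  grep -i "hash policy" /proc/net/bonding/bond0

  # watch per-slave transmit counters while driving a multi-stream test
  watch -d 'cat /sys/class/net/eth0/statistics/tx_bytes /sys/class/net/eth1/statistics/tx_bytes'
  iperf -c <server> -P 8

With layer3+4 and several streams both slave counters should climb; if only 
one moves, the hash has effectively collapsed onto a single link.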

Best wishes,
Stig


> On 23 Mar 2016, at 15:45, MailingLists - EWS 
>  wrote:
> 
> Sascha,
> 
> What version of the ixgbe driver are you using? Is it the same on both
> kernels? Have you tried the latest "out of tree driver" from E1000 to see if
> the issue goes away?
> 
> I follow the E1000 mailing list and I seem to recall some rather recent
> posts regarding bonding and the ixgbe along with some patches being applied
> to the driver, however I don't know what version of kernel these issues were
> on, or even if the patches were accepted.
> 
> https://sourceforge.net/p/e1000/mailman/e1000-devel/thread/87618083B2453E4A8
> 714035B62D67992504FB5FF%40FMSMSX105.amr.corp.intel.com/#msg34727125
> 
> Something about a timing issue with detecting the slave's link speed and
> passing that information to the bonding driver in a timely fashion.
> 
> Tom Walsh
> ExpressHosting
> https://expresshosting.net/
> 
>> -Original Message-
>> From: Sascha Vogt [mailto:sascha.v...@gmail.com]
>> Sent: Wednesday, March 23, 2016 5:54 AM
>> To: openstack-operators
>> Subject: [Openstack-operators] Intel 10 GbE / bonding issue with hash
> policy
>> layer3+4
>> 
>> Hi all,
>> 
>> I thought it might be of interest / get feedback from the operators
>> community about an oddity we experienced with Intel 10 GbE NICs and LACP
>> bonding.
>> 
>> We have Ubuntu 14.04.4 as OS and Intel 10 GbE NICs with the ixgbe Kernel
>> module. We use VLANS for ceph-client, ceph-data, openstack-data,
>> openstack-client networks all on a single LACP bonding of two 10 GbE
> ports.
>> 
>> As bonding hash policy we chose layer3+4 so we can use the full 20 Gb even
>> if only two servers communicate with each other. Typically we check that
> by
>> using iperf to a single server with -P 4 and see if we exceed the 10 Gb
> limit
>> (just a few times to check).
>> 
>> Due to Ubuntus default of installing the latest Kernel our new host had
>> Kernel 4.2.0 instead of the Kernel 3.16 the other machines had and we
>> noticed that iperf only used 10 Gb.
>> 
>>> # cat /proc/net/bonding/bond0
>>> Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)
>>> 
>>> Bonding Mode: IEEE 802.3ad Dynamic link aggregation Transmit Hash
>>> Policy: layer3+4 (1)
>> 
>> This was shown on both - Kernel 3.16 and 4.2.0
>> 
>> After downgrading to Kernel 3.16 we got the iperf results we expected.
>> 
>> Does anyone have a similar setup? Anyone noticed the same things? To us
>> this looks like a bug in the Kernel (ixgbe module?), or are we
>> misunderstanding the hash policy layer3+4?
>> 
>> Any feedback is welcome :) I have not yet posted this to the Kernel ML or
>> Ubuntus ML yet, so if no one here is having a similar setup I'll move over
>> there. I just thought OpenStack ops might be the place were it is most
> likely
>> that someone has a similar setup :)
>> 
>> Greetings
>> -Sascha-
>> 
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
> 
> 
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [gnocchi][ceilometer] Taking Scientific WG Ops Meetup Feedback back to Ceilometer

2016-03-04 Thread Stig Telfer

> On 4 Mar 2016, at 15:40, gordon chung  wrote:
> 
>> One part of the documentation set that we were missing was a guide to how to 
>> migrate from ceilometer to a ceilometer/gnocchi combination (which I 
>> understand is the ultimate architecture). We would like to migrate the 
>> historical data we have stored in ceilometer.
>> 
>> The main line documentation (such as 
>> http://docs.openstack.org/liberty/config-reference/content/) does not yet 
>> contain details on the Gnocchi configuration so some people may miss this 
>> option when following the docs. The gnocchi.xyz has good content but it is 
>> not the standard configuration guides. I guess once Gnocchi is into the Big 
>> Tent then this sort of integration can occur.
>> 
>> It’s early days yet so we don’t yet have the feeling for running Gnocchi at 
>> scale.
>> 
>> Tim
> 
> hi,
> 
> there isn't an active migration path from Ceilometer's legacy 
> database to Gnocchi. it was discussed but given the limited resources we 
> have (contributors are welcomed) we just do not have the bandwidth 
> currently to build a tool to help port data over.
> 
> Gnocchi is not a 1:1 replacement of Ceilometer's old API, it does not 
> capture full-fidelity of data and requires defined archive policy rules, 
> which makes a migration tool necessary. it is mainly intended to replace 
> the 'ceilometer statistics' use case of the old API.
> 
> we were hoping to get a reference architecture this cycle but were 
> delayed due to resources being pulled to other tasks. currently, the 
> best option is to take a look at the devstack plugins for Gnocchi[1] and 
> Ceilometer[2] (requires bash knowledge) to get an idea of how they're 
> set up to work together in the gate.
> 
> as with all new things, it's recommended you test this out first, to 
> learn new API[3] and find potential bugs. you can configure the 
> Ceilometer collector to write to both existing db and Gnocchi so your 
> current usage is not interrupted.
> 
> [1] https://github.com/openstack/gnocchi/blob/master/devstack/plugin.sh
> [2] https://github.com/openstack/ceilometer/blob/master/devstack/plugin.sh
> [3] http://gnocchi.xyz/rest.html
> 
> cheers,
> -- 
> gord
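
Thanks Gord - the dual-write suggestion sounds like a sensible stepping stone. 
For anyone else trying it, my understanding is that it amounts to something 
like the following in ceilometer.conf - a sketch only, using the Mitaka-era 
option names (older releases use the singular "dispatcher" option), so please 
check against your release's configuration reference:

  [DEFAULT]
  # meter_dispatchers is multi-valued: repeat it once per target
  meter_dispatchers = database
  meter_dispatchers = gnocchi

  [dispatcher_gnocchi]
  # archive policy that Gnocchi should apply to incoming metrics
  archive_policy = low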

We evaluated Monasca as an alternative for monitoring.  One of its appeals is 
that through its use of Kafka we get logging, events and time-series telemetry 
data streams all in one place (we send syslog through the same Kafka instance). 
 The attraction of this was that it would then be feasible to create a system 
for multi-variate alert triggering and anomaly detection, using a single API to 
access near-real-time data from arbitrary and diverse sources from across the 
data centre.

Not that that system ever got created, ahem, but through bringing the data 
sources together I think it was brought a step closer…

Gord, if Gnocchi is functionality split off from Ceilometer, would using 
Gnocchi require API changes to any client subscribing to sources of both 
monitoring and event data?

Huge thanks to you Anita for starting this discussion.

Best wishes,
Stig


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators