Re: [Openstack-operators] [Nova][Scheduler] How to filter and pick Host with least number of instances.

2016-10-06 Thread Karan
Thanks Mikhail for mentioning the custom weigher. I would certainly like to
see how you've implemented your own weigher; please share it when you have
it. It would also be helpful if you could give some pointers on implementing
a weigher based on other host metrics.
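
From the docs, my rough understanding is that such a weigher would look
something like the sketch below (class name is illustrative, not your actual
implementation): subclass Nova's BaseHostWeigher and return the negated
instance count so that emptier hosts score higher.

    from nova.scheduler import weights


    class FewestInstancesWeigher(weights.BaseHostWeigher):
        """Prefer hosts with the fewest running instances."""

        def _weigh_object(self, host_state, weight_properties):
            # HostState tracks the instance count per host; negating it
            # means the emptiest host ends up with the highest weight.
            return -host_state.num_instances

It would then presumably be enabled through the scheduler's weight classes
option in nova.conf (the exact option name varies between releases),
alongside or instead of the default RAM weigher.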

On Wed, Oct 5, 2016 at 4:11 PM, Mikhail Medvedev  wrote:
> Hi Karan,
>
> On Sep 22, 2016 19:19, "Karan"  wrote:
>>
>> Hi
>>
>> Is it possible to configure the OpenStack scheduler to schedule instances
>> to the host with the least number of instances running on it?
>> When multiple hosts are eligible to spawn a new instance, the scheduler
>> applies weight multipliers to available RAM and CPU and picks one host.
>> Is there a way to ask the scheduler to pick the host with the least
>> number of instances on it?
>>
>
> Yes, there is a way to select a host with the least number of instances. It
> can be done by writing a custom weigher that returns the negated number of
> instances as the host weight. I wrote an implementation that has been used for a
> while in our test cloud, but I am not going to be able to share it until
> next week. Let me know if you still need it by then.
>
>>
>> Thanks
>> Karan
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>
> --
> Mikhail Medvedev (mmedvede)
> IBM, OpenStack CI for KVM on Power

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] monitor memory ballooning from within the affected VM

2016-10-06 Thread Lukas Lehner
Hey

http://unix.stackexchange.com/questions/314832/openstack-kvm-monitor-memory-ballooning-from-within-the-affected-vm

Lukas
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [telemetry][gnocchi] benchmarking gnocchi v3

2016-10-06 Thread gordon chung
hi folks,

as announced recently, we released Gnocchi v3[1][2]! this marked a major 
change in how we process and store data in Gnocchi as we worked on 
building a truly open source time-series service.

as we were building it, i was benchmarking it and feeding the results
back into our development. now that we have a release, i thought i'd
share the results of my latest benchmarks in some fancy powerpoint[3]. if
you don't want the backstory to some of the design changes, just jump to
slide 15[4] for a comparison of Gnocchi v2 vs Gnocchi v3.

the slides focus only on the performance side of Gnocchi, but we also
added other features to improve the flexibility of the service.

feel free to ask me questions on my experience.

[1] https://julien.danjou.info/blog/2016/gnocchi-3.0-release
[2] 
http://lists.openstack.org/pipermail/openstack-announce/2016-September/001649.html
[3] http://www.slideshare.net/GordonChung/gnocchi-v3
[4] http://www.slideshare.net/GordonChung/gnocchi-v3/15

cheers,

-- 
gord
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [puppet] Presence at the PTG

2016-10-06 Thread Alex Schultz
Hi,

We chatted about this a bit in the last meeting[0], but I wanted to
send a note to the wider audience. Our initial thought was that the
puppet group will not have a specific presence at the upcoming PTG in
Atlanta.  We don't think we'll have any topics that we can't work
through via our traditional irc/email workflows. If anyone has any
topics or items that they would like to work through at the upcoming
PTG, please let us know and we can revisit this.

Thanks,
-Alex

[0] 
http://eavesdrop.openstack.org/meetings/puppet_openstack/2016/puppet_openstack.2016-04-19-15.00.html

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [recognition] AUC Recognition WG Meeting Reminder (10/06)

2016-10-06 Thread Shamail Tahir
Hi everyone,

The AUC recognition WG will be meeting on October 6th, 2016 at 1900 UTC.
The details can be found on our wiki page[1].  See you there!

*Agenda*
* Review status of open action items
* Items needed for UC readout
* Open


[1] https://wiki.openstack.org/wiki/AUCRecognition#Meeting_Information

-- 
Thanks,
Shamail Tahir
t: @ShamailXD
tz: Eastern Time
___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


[Openstack-operators] [Neutron][Mitaka] very slow network listing

2016-10-06 Thread Piotr Misiak
Hi guys,

I see a very slow network list response time when I add the --shared False
parameter to the CLI command.
Look at this: http://paste.openstack.org/show/584409/
Without the --shared False argument I get a response in 2 seconds;
with the --shared False argument I get a response in 32 seconds.
I debugged a little and see that the database returns over 182000
records, which is about 200 MB of data, but there are only 4000 unique
records. There are roughly 45 duplicates for every unique record, and I
have 45 entries in Neutron RBAC, so I see a correlation here.

The issue is quite important because Horizon uses the shared=False
request to populate the "Launch Instance" form, and it takes ages.
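
If anyone wants to reproduce the timings outside of Horizon, something
like the following should do it (a rough sketch; auth_url and credentials
are placeholders for your own environment):

    import time

    from keystoneauth1 import loading, session
    from neutronclient.v2_0 import client

    # Build an authenticated Neutron client (placeholder credentials).
    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',
        username='admin', password='secret',
        project_name='admin',
        user_domain_name='Default',
        project_domain_name='Default')
    neutron = client.Client(session=session.Session(auth=auth))

    # Compare the unfiltered listing with the shared=False listing that
    # Horizon issues when building the "Launch Instance" form.
    for label, kwargs in (('no filter', {}), ('shared=False', {'shared': False})):
        start = time.time()
        nets = neutron.list_networks(**kwargs)['networks']
        print('%-12s %5d networks in %5.1f s' % (label, len(nets),
                                                 time.time() - start))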

Has anyone seen a similar issue?


___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators


Re: [Openstack-operators] [openstack-hpc] What's the state of openstack-hpc now?

2016-10-06 Thread Blair Bethwaite
Hi Andrew,

Just wanted to quickly say that I really appreciate your prompt reply and
hope you'll be happy to assist further if possible. I've just gotten
slightly sidetracked by some other issues but will come back to this in the
next week and provide more background info and results of workaround
attempts.

Cheers,
Blair

On 28 Sep 2016 2:13 AM, "Andrew J Younge"  wrote:

> Hi Blair,
>
> I'm very interested to hear more about your project using virtualized
> GPUs, and hopefully JP and/or myself can be of help here.
>
> So in the past we've struggled with the usage of PCI bridges as a
> connector between multiple GPUs. This was first seen with Xen and
> S2070 servers (which has 4 older GPUs across Nvidia PCI bridges) and
> found that the ACS was prohibiting the successful passthrough of the
> GPU. While we just decided to use discrete independent adapters moving
> forward, we've never gone back and tried this with KVM. With that, I
> can expect the same issues as the ACS cannot guarantee proper
> isolation of the device. Looking at the K80 GPUs, I'm seeing that
> there are 3 PLX bridges for each GPU pair (see my output below for a
> native system w/out KVM), and I'd estimate these are likely in the
> same IOMMU group.  This could be the problem.
>
> I have heard that such a patch exists in KVM for you to override the
> IOMMU groups and ACS protections, however I don't have any experience
> with it directly [1]. In our experiments, we used an updated SeaBIOS,
> whereas the link provided below details a UEFI BIOS.  This may have
> different implications that I don't have experience with.
> Furthermore, I assume this patch will likely just be ignoring all of
> ACS, which is going to be an obvious and potentially severe security
> risk. In a purely academic environment such a security risk may not
> matter, but it should be noted nonetheless.
>
> So, let's take a few steps back to confirm things.   Are you able to
> actually pass both K80 GPUs through to a running KVM instance, and
> have the Nvidia drivers loaded? Any dmesg output errors here may go a
> long way. Are you also passing through the PCI bridge device (lspci
> should show one)? If you're actually making it that far, it may next
> be worth simply running a regular CUDA application set first before
> trying any GPUDirect methods. For our GPUDirect usage, we were
> specifically leveraging the RDMA support with an InfiniBand adapter
> rather than CUDA P2P, so your mileage may vary there as well.
>
> Hopefully this is helpful in finding your problem. With this, I'd be
> interested to hear if the ACS override mechanism, or any other option
> works for enabling passthrough with K80 GPUs (we have a few dozen
> non-virtualized for another project).  If you have any other
> non-bridged GPU cards (like a K20 or C2075) lying around, it may be
> worth giving that a try to try to rule-out other potential issues
> first.
>
> [1] https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Bypassing_the_IOMMU_groups_.28ACS_override_patch.29
>
> [root@r-001 ~]# lspci | grep -i -e PLX -e nvidia
> 02:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 03:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 03:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 04:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
> 05:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
> 06:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 07:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 07:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 08:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
> 09:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
> 82:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 83:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 83:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 84:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
> 85:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla K80] (rev a1)
> 86:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 87:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 87:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI
> Express Gen 3 (8.0 GT/s) Switch (rev ca)
> 88:00.0 3D controller: NVIDIA Corporation GK210GL [Tesla 
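
Before trying any ACS override I will first check whether the K80s and
their PLX bridges really do land in the same IOMMU group on our compute
hosts. A small sketch over the standard sysfs layout (nothing
OpenStack-specific, just walking /sys/kernel/iommu_groups):

    import os

    GROUPS = '/sys/kernel/iommu_groups'

    # Print each IOMMU group and the PCI addresses it contains; if both
    # GPUs of a K80 pair (or other endpoint devices) end up in one group,
    # the whole group has to be assigned together unless ACS isolation is
    # relaxed.
    for group in sorted(os.listdir(GROUPS), key=int):
        devices = sorted(os.listdir(os.path.join(GROUPS, group, 'devices')))
        print('IOMMU group %s: %s' % (group, ' '.join(devices)))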

Re: [Openstack-operators] Tenant/Project naming restrictions

2016-10-06 Thread Saverio Proto
Is the '@' character allowed in tenant/project names?
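
One way to test it empirically would be to create and delete a throwaway
project and see whether Keystone accepts the name (a hypothetical sketch
with placeholder credentials, not an official statement of the rules):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    # Authenticate against Keystone v3 (placeholder endpoint/credentials).
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    keystone = client.Client(session=session.Session(auth=auth))

    # Try a candidate name containing '@'; if Keystone rejects it, the
    # create call fails with an HTTP 400 error.
    project = keystone.projects.create(name='team@example', domain='default',
                                       description='naming probe')
    print('created', project.id)
    keystone.projects.delete(project)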

Saverio

2016-10-05 23:36 GMT+02:00 Steve Martinelli :
> There are some restrictions.
>
> 1. The project name cannot be longer than 64 characters.
> 2. Within a domain, the project name must be unique. So you can have a
> project named "foo" in the "default" domain and another "foo" in any
> other domain.
>
> On Wed, Oct 5, 2016 at 5:16 PM, Vigil, David Gabriel 
> wrote:
>>
>> What, if any, are the official tenant/project naming
>> requirements/restrictions? I can’t find any documentation that speaks to any
>> limitations. Is this documented somewhere?
>>
>>
>>
>>
>>
>>
>>
>> Dave G Vigil Sr
>>
>> Systems Integration Analyst Sr/SAIC Lead 09321
>>
>> Common Engineering Environment
>>
>> dgv...@sandia.gov
>>
>> 505-284-0157 (office)
>>
>> SAIC
>>
>>
>>
>>
>> ___
>> OpenStack-operators mailing list
>> OpenStack-operators@lists.openstack.org
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>>
>
>
> ___
> OpenStack-operators mailing list
> OpenStack-operators@lists.openstack.org
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators
>

___
OpenStack-operators mailing list
OpenStack-operators@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators