Re: [openstack-dev] [oslo] log message translations

2014-02-02 Thread Ying Chun Guo


Ben Nemec openst...@nemebean.com wrote on 2014/01/30 00:52:14:

 Okay, I think you've convinced me.  Specific comments below.
 -Ben
 On 2014-01-29 07:05, Doug Hellmann wrote:

 On Tue, Jan 28, 2014 at 8:47 PM, Ben Nemec openst...@nemebean.com
wrote:
 On 2014-01-27 11:42, Doug Hellmann wrote:
 We have a blueprint open for separating translated log messages into
 different domains so the translation team can prioritize them
 differently (focusing on errors and warnings before debug messages,
 for example) [1]. Some concerns were raised related to the review
 [2], and I would like to address those in this thread and see if we
 can reach consensus about how to proceed.
 The implementation in [2] provides a set of new marker functions
 similar to _(), one for each log level (we have _LE, _LW, _LI, _LD,
 etc.). These would be used in conjunction with _(), and reserved for
 log messages. Exceptions, API messages, and other user-facing
 messages all would still be marked for translation with _() and
 would (I assume) receive the highest priority work from the translation
team.
 When the string extraction CI job is updated, we will have one
 main catalog for each app or library, and additional catalogs for
 the log levels. Those show up in transifex separately, but will be
 named in a way that they are obviously related. Each translation
 team will be able to decide, based on the requirements of their
 users, how to set priorities for translating the different catalogs.
 Existing strings being sent to the log and marked with _() will be
 removed from the main catalog and moved to the appropriate log-
 level-specific catalog when their marker function is changed. My
 understanding is that transifex is smart enough to recognize the
 same string from more than one source, and to suggest previous
 translations when it sees the same text. This should make it easier
 for the translation teams to catch up by reusing the translations
 they have already done, in the new catalogs.
 One concern that was raised was the need to mark all of the log
 messages by hand. I investigated using extraction patterns like
 "LOG.debug(" and "LOG.info(", but because of the way the translation
 actually works internally we cannot do that. There are a few related
reasons.
 In other applications, the function _() translates a string at the
 point where it is invoked, and returns a new string object.
 OpenStack has a requirement that messages be translated multiple
 times, whether in the API or the LOG (there is already support for
 logging in more than one language, to different log files). This
 requirement means we delay the translation operation until right
 before the string is output, at which time we know the target
 language. We could update the log functions to create Message
 objects dynamically, except...
 Each app or library that uses the translation code will need its own
 domain for the message catalogs. We get around that right now by
 not translating many messages from the libraries, but that's
 obviously not what we want long term (we at least want exceptions
 translated). If we had a special version of a logger in oslo.log
 that knew how to create Message objects for the format strings used
 in logging (the first argument to LOG.debug for example), it would
 also have to know what translation domain to use so the proper
 catalog could be loaded. The wrapper functions defined in the patch
 [2] include this information, and can be updated to be application
 or library specific when oslo.log eventually becomes its own library.
 Further, as part of moving the logging code from oslo-incubator to
 oslo.log, and making our logging something we can use from other
 OpenStack libraries, we are trying to change the implementation of
 the logging code so it is no longer necessary to create loggers with
 our special wrapper function. That would mean that oslo.log will be
 a library for *configuring* logging, but the actual log calls can be
 handled with Python's standard library, eliminating a dependency
 between new libraries and oslo.log. (This is a longer, and separate,
 discussion, but I mention it here as background. We don't want to
 change the API of the logger in oslo.log because we don't want to be
 using it directly in the first place.)
 Another concern raised was the use of a prefix _L for these
 functions, since it ties the priority definitions to logs. I chose
 that prefix as an explicit indication that these *are* just for logs.
 I am not associating any actual priority with them. The translators
 want us to move the log messages out of the main catalog. Having
 them all in separate catalogs 

[openstack-dev] [solum] Issues in running tests

2014-02-02 Thread Rajdeep Dua
Hi,
I am facing some errors running tests in a fresh installation of solum

$tox -e py27





gcc -pthread -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fPIC -I/home/hadoop/work/openstack/solum-2/solum/.tox/py27/build/lxml/src/lxml/includes -I/usr/include/python2.7 -c src/lxml/lxml.etree.c -o build/temp.linux-x86_64-2.7/src/lxml/lxml.etree.o -w


In file included from src/lxml/lxml.etree.c:340:0:


/home/hadoop/work/openstack/solum-2/solum/.tox/py27/build/lxml/src/lxml/includes/etree_defs.h:9:31: fatal error: libxml/xmlversion.h: No such file or directory


compilation terminated.


error: command 'gcc' failed with exit status 1




Thanks
Rajdeep
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [solum] Issues in running tests

2014-02-02 Thread Julien Vey
Hi Rajdeep,

We just updated the documentation
(https://review.openstack.org/#/c/67590/1/CONTRIBUTING.rst) recently
with the necessary packages to install: libxml2-dev and libxslt-dev.
You just need to install those 2 packages.
If you are on Ubuntu, see
http://stackoverflow.com/questions/6504810/how-to-install-lxml-on-ubuntu/6504860#6504860

Julien
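A quick way to check whether the headers are in place is to see whether the lxml extension imports (the version attribute below is lxml's real `LIBXML_VERSION`; the rest of the snippet is just a convenience sketch):

```python
# Sanity check for the missing-header failure above: if lxml built
# correctly, its compiled extension reports the libxml2 it links against;
# if not, installing libxml2-dev and libxslt-dev is the usual fix.
try:
    from lxml import etree
    status = "lxml OK, libxml2 version %s" % (etree.LIBXML_VERSION,)
except ImportError:
    status = "lxml not built; install libxml2-dev and libxslt-dev, then rerun tox"
print(status)
```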






[openstack-dev] keystone-manage db_sync doesn't work if [database] connection points to IPv6 address

2014-02-02 Thread Martinx - ジェームズ
Guys,

I'm trying to install IceHouse-2 in a dual-stacked environment (Ubuntu
14.04) but, keystone-manage db_sync doesn't work if db connection points
to a IPv6 address, like this:

My /etc/network/interfaces looks like:

---
# The loopback network interface
auto lo
iface lo inet loopback
iface lo inet6 loopback

auto eth0
# IPv6
iface eth0 inet6 static
address 2001:1291::fffa::
netmask 64
gateway 2001:1291::fffa::1
   # dns-* options are implemented by the resolvconf package, if installed
dns-search domain.com
dns-nameservers 2001:4860:4860::8844
# IPv4
iface eth0 inet static
   address 192.168.XX.100
   netmask 24
   gateway 192.168.XX.1
   # dns-* options are implemented by the resolvconf package, if installed
   dns-nameservers 8.8.4.4 8.8.8.8
   dns-search domain.com
---

My /etc/hosts contains:

---
2001:1291::fffa::  controller-1.domain.com  controller-1
192.168.XX.100  controller-1.domain.com  controller-1
---

MySQL binds on both IPv4 and IPv6, my.cnf like this:

---
bind-address = ::
---

My /etc/keystone/keystone.conf contains:

---
connection = mysql://keystoneUser:keystonep...@controller-1.domain.com/keystone
---

So, this way, keystone-manage db_sync does not work; but if I change the
keystone.conf connection line to this:

---
connection = mysql://keystoneUser:keystonep...@192.168.xx.100/keystone
---

It works! Then, after db_sync, I change it back to the FQDN, which resolves
to the IPv6 address, and it works fine...
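A side note that may be relevant here (an assumption, not confirmed in the thread): if the IPv6 address itself is ever placed in the connection URL, RFC 3986 requires it to be wrapped in square brackets, e.g.:

```python
from urllib.parse import urlsplit

# Illustrative address, not the poster's real one. Per RFC 3986, an IPv6
# literal inside a URL must be bracketed for the host to parse correctly.
url = "mysql://keystoneUser:secret@[2001:db8::10]/keystone"
parts = urlsplit(url)
print(parts.hostname)  # -> 2001:db8::10
```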

Cheers!
Thiago


Re: [openstack-dev] [Heat] [Nova] [oslo] [Ceilometer] about notifications : huge and may be non secure

2014-02-02 Thread Swann Croiset
Thanks Sandy for confirming the gap and pointing me to the CADF effort...
very interesting, with many impacts! I'll definitely follow that.

So I guess OpenStack can live with these notifications until we have
something better.

Anyway, I may propose a patch to oslo to fix this temporarily... at least
I'll get feedback from the core team.



Re: [openstack-dev] keystone-manage db_sync doesn't work if [database] connection points to IPv6 address

2014-02-02 Thread Dolph Mathews
Can you open a bug for this at https://bugs.launchpad.net/keystone ? Thanks!



[openstack-dev] [nova] bp proposal: configurable-locked-vm-api

2014-02-02 Thread Jae Sang Lee
A blueprint is being discussed about the configurable locked vm api.
 https://blueprints.launchpad.net/nova/+spec/configurable-locked-vm-api

The current implementation checks whether the VM is locked using a
decorator function (@check_instance_lock).

For example:

@wrap_check_policy
@check_instance_lock
@check_instance_cell
@check_instance_state(vm_state=None, task_state=None,
                      must_have_launched=False)
def delete(self, context, instance):
    """Terminate an instance."""
    LOG.debug(_("Going to try to terminate instance"),
              instance=instance)
    self._delete_instance(context, instance)

When an administrator wants to change how the lock check is applied
(for example, he doesn't want the lock to be checked for the reboot API),
he must modify the compute.api source code to remove the decorator
function, and then restart the service.

I suggest an admin configuration file for the restricted APIs.
The administrator would just modify the conf file, without needing to
modify source code or restart the service, so the policy is separated
from the code. If the conf file does not exist, the existing API check
logic runs as before, so there is no confusion.
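A sketch of the proposed behavior (all names here are illustrative, not the actual patch): the lock-check decorator consults a configurable set of exempt APIs instead of being hard-coded:

```python
# All names are illustrative. The idea: the lock-check decorator reads a
# configurable exemption set (which would come from an admin conf file)
# instead of being hard-coded in compute/api.py.
LOCK_EXEMPT_APIS = {"reboot"}  # loaded from the admin conf file in practice

class InstanceIsLocked(Exception):
    pass

def check_instance_lock(func):
    def wrapper(self, context, instance, *args, **kwargs):
        # Enforce the lock unless this API was exempted via config.
        if func.__name__ not in LOCK_EXEMPT_APIS and instance.get("locked"):
            raise InstanceIsLocked(instance["uuid"])
        return func(self, context, instance, *args, **kwargs)
    return wrapper

class API(object):
    @check_instance_lock
    def reboot(self, context, instance):
        return "rebooted"

    @check_instance_lock
    def delete(self, context, instance):
        return "deleted"

api = API()
locked = {"uuid": "abc-123", "locked": True}
print(api.reboot(None, locked))  # exempted by config
try:
    api.delete(None, locked)
except InstanceIsLocked:
    print("delete blocked")  # still enforced for non-exempt APIs
```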

I think this implementation is not critical, but it can make the cloud
more useful for administrators.

Can you take a small amount of time to discuss this blueprint?

Thanks a lot.


Re: [openstack-dev] [nova] bp proposal: configurable-locked-vm-api

2014-02-02 Thread Jae Sang Lee
I uploaded changes for this implementation.
Although it doesn't pass the Jenkins tests yet, it should help in
understanding how to implement it.
 https://review.openstack.org/#/c/70576/








[openstack-dev] [Neutron][LBaaS] SSL offload support wiki page

2014-02-02 Thread Evgeny Fedoruk
Hi All,

LBaaS SSL support wiki page was updated with up-to-date design decisions:

1.   SSL offload is a core LBaaS feature

2.   Certificate's private key is always transient until secure storage is 
available

https://wiki.openstack.org/wiki/Neutron/LBaaS/

Eugene, you might want to add some HAProxy implementation details in the
Implementation Plan section.

Thanks,
Evg


Re: [openstack-dev] [solum] Issues in running tests

2014-02-02 Thread Rajdeep Dua
Thanks, that worked.






Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-02-02 Thread Collins, Sean
On Sat, Feb 01, 2014 at 01:18:09AM -0500, Shixiong Shang wrote:
 In other words, I can retrieve the values by:
 
 subnet.ipv6_ra_mode
 subnet.ipv6_address_mode
 
 Is that correct? Would you please confirm?

Yes - that is the intent.

I just have to fix an issue with the DB column definitions so that
it'll work with Postgres (I think I have a closing brace misplaced, so
it's not defining the Enum type correctly), and we have to get
bug #1270212 resolved, since that's making the unit tests fail.
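For reference, a tiny sketch of the two attributes being discussed and the value set an Enum column would constrain them to (the attribute names are from the review; the exact value list is an assumption):

```python
# The two subnet attributes under discussion, constrained to the values an
# Enum column would allow (value list assumed; None means "not set").
ALLOWED_MODES = (None, "slaac", "dhcpv6-stateful", "dhcpv6-stateless")

class Subnet(object):
    def __init__(self, ipv6_ra_mode=None, ipv6_address_mode=None):
        for value in (ipv6_ra_mode, ipv6_address_mode):
            if value not in ALLOWED_MODES:
                raise ValueError("invalid mode: %r" % (value,))
        self.ipv6_ra_mode = ipv6_ra_mode
        self.ipv6_address_mode = ipv6_address_mode

subnet = Subnet(ipv6_ra_mode="slaac", ipv6_address_mode="slaac")
print(subnet.ipv6_ra_mode, subnet.ipv6_address_mode)  # -> slaac slaac
```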

-- 
Sean M. Collins


Re: [openstack-dev] keystone-manage db_sync doesn't work if [database] connection points to IPv6 address

2014-02-02 Thread Martinx - ジェームズ
Sure! I'll...=)






Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-02-02 Thread Shixiong Shang
Excellent! Thanks for your confirmation, Sean!





Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Jaromir Coufal

On 01/30/2014 12:59, Ladislav Smola wrote:
 On 01/30/2014 12:39 PM, Jiří Stránský wrote:
 On 01/30/2014 11:26 AM, Tomas Sedovic wrote:

[snip]

 I am for implementing support for Heterogeneous hardware properly,
 lifeless should post what he recommends soon, so I would rather discuss
 that. We should be able to do simple version in I.
Nobody ever said that any implementation of heterogeneous hardware
support should be wrong or poor. This is a misinterpretation.



 Lowest common denominator doesn't solve storage vs. compute node. If we
 really have similar hardware, we don't care about, we can just fill the
 nova-baremetal/ironic specs the same as the flavor.
I disagree with this point. This approach will cause huge confusion
for the end user. Asking users to enter the same values for different
hardware specs would be a big mistake. The user should enter the
reality; it's up to us how we help him make his life easier.


 Why would we want to see in UI that the hardware is different, when we
 can't really determine what goes where.
Because it is reality.

 And as you say assume homogenous hardware and treat it as such. So
 showing in UI that the hardware is different doesn't make any sense then.
This might just be wrong wording, but 'assume homogeneous hardware and
treat it as such' is meant in this way: we deploy roles on nodes randomly,
because we assume similar HW, as a *first* step. Right after that, we
add functionality for the user to define flavors.


 So the solution for similar hardware is already there.

 I don't see this as an incremental step, but as ugly hack that is not
 placed anywhere on the roadmap.

 Regards,
 Ladislav

-- Jarda



Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Jaromir Coufal

On 01/30/2014 19:29, Tzu-Mainn Chen wrote:

Wouldn't lying about the hardware specs when registering nodes be
problematic for upgrades?  Users would have
to re-register their nodes.

+1 for problematic area here


One reason why a custom filter feels attractive is that it provides us
with a clear upgrade path:

Icehouse
   * nodes are registered with correct attributes
   * create a custom scheduler filter that allows any node to match
   * users are informed that for this release, Tuskar will not
differentiate between heterogeneous hardware

J-Release
   * implement the proper use of flavors within Tuskar, allowing Tuskar
to work with heterogeneous hardware
I don't think this is a J-release issue. We are very likely to handle
this in the I-release.



   * work with nova regarding scheduler filters (if needed)
   * remove the custom scheduler filter
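The "custom scheduler filter that allows any node to match" step can be sketched against the shape of nova's host-filter interface (the real base class lives in nova.scheduler.filters; a stand-in is used here so the snippet is self-contained):

```python
# Stand-in for nova.scheduler.filters.BaseHostFilter, so this sketch runs
# without nova installed; the method signature mirrors nova's interface.
class BaseHostFilter(object):
    def host_passes(self, host_state, filter_properties):
        raise NotImplementedError

class AnyNodeFilter(BaseHostFilter):
    """Temporary Icehouse-era filter: every node satisfies every request."""
    def host_passes(self, host_state, filter_properties):
        return True

f = AnyNodeFilter()
print(f.host_passes(object(), {}))  # -> True
```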


Mainn


-- Jarda




Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Jaromir Coufal

On 01/30/2014 23:33, Devananda van der Veen wrote:

I was responding based on Treat similar hardware configuration as
equal. When there is a very minor difference in hardware (eg, 1TB vs
1.1TB disks), enrolling them with the same spec (1TB disk) is sufficient
to solve all these issues and mask the need for multiple flavors, and
the hardware wouldn't need to be re-enrolled.
I disagree here. Of course users can register HW as they wish; it's
their responsibility. But asking them to register nodes as equal (even
if they are close) is going to be a mess and cause huge confusion for
users. You would actually be asking the user to enter non-real data so
that he can use our deployment tool somehow. From my point of view,
this is not the right approach; I would rather see him entering correct
information and us working with it.



My suggestion does not
address the desire to support significant variation in hardware specs,
such as 8GB RAM vs 64GB RAM, in which case, there is no situation in
which I think those differences should be glossed over, even as a
short-term hack in Icehouse.

if our baremetal flavour said 16GB ram and 1TB disk, it would also
match a node with 24GB ram or 1.5TB disk.

I think this will lead to a lot of confusion, and difficulty with
inventory / resource management. I don't think it's suitable even as a
first-approximation.
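The two matching strategies being debated can be sketched side by side (a hypothetical flavor-as-minimum match versus the exact match being defended; the numbers echo the example above):

```python
# Hypothetical flavor and node specs, per the 16GB/1TB vs 24GB/1.5TB example.
flavor = {"ram_gb": 16, "disk_tb": 1.0}
node = {"ram_gb": 24, "disk_tb": 1.5}

def ge_match(flavor, node):
    # Flavor as a minimum: a bigger node still matches.
    return all(node[k] >= v for k, v in flavor.items())

def exact_match(flavor, node):
    # Exact-match baremetal scheduling: the bigger node is skipped.
    return all(node[k] == v for k, v in flavor.items())

print(ge_match(flavor, node))     # -> True
print(exact_match(flavor, node))  # -> False
```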

Put another way, I dislike the prospect of removing currently-available
functionality (an exact-match scheduler and support for multiple
flavors) to enable ease-of-use in a UI.
I would say this is not about ease-of-use in the UI. It's about
bringing users the functionality to deploy through the UI. Then, in the
next iteration, supporting them in picking the HW they care about.



Not that I dislike UIs or
anything... it just feels like two steps backwards. If the UI is limited
to homogeneous hardware, accept that; don't take away heterogeneous
hardware support from the rest of the stack.
It's not about taking away support for heterogeneous HW from the whole 
stack. I see the proposal more like adding another filter (possibility) 
for nova-scheduler.



Anyway, it sounds like Robert has a solution in mind, so this is all moot :)

Cheers,
Devananda


Cheers
-- Jarda



Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Robert Collins
On 1 February 2014 10:03, Tzu-Mainn Chen tzuma...@redhat.com wrote:
 So after reading the replies on this thread, it seems like I (and others 
 advocating
 a custom scheduler) may have overthought things a bit.  The reason this route 
 was
 suggested was because of conflicting goals for Icehouse:

 a) homogeneous nodes (to simplify requirements)
 b) support diverse hardware sets (to allow as many users as possible to try 
 Tuskar)

 Option b) requires either a custom scheduler or forcing nodes to have the 
 same attributes,
 and the answer to that question is where much of the debate lies.

Not really. It all depends on how you define 'support diverse hardware
sets'. The point I've consistently made is that by working within the
current scheduler we can easily deliver homogeneous support *within* a
given 'service role'. So that is (a), not 'every single node is
identical'.

A (b) of supporting arbitrary hardware within a single service role is
significantly more complex, and while I think its entirely doable, it
would be a mistake to tackle this within I (and possibly J). I don't
think users will be impaired by us delaying however.

 However, taking a step back, maybe the real answer is:

 a) homogeneous nodes
 b) document. . .
- **unsupported** means of demoing Tuskar (set node attributes to match 
 flavors, hack
  the scheduler, etc)
- our goals of supporting heterogeneous nodes for the J-release.

 Does this seem reasonable to everyone?

No, because a) is overly scoped.

I think we should have a flavor attribute in the definition of a
service role, and no unsupported hacks needed; and J goals should be
given a chunk of time to refine in Atlanta.
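The "flavour attribute in the definition of a service role" idea could look something like this (a purely illustrative data shape, not an actual TripleO schema):

```python
# Purely illustrative: each service role carries its own nova flavor, which
# gives homogeneity *within* a role while letting roles differ from each
# other. Role and flavor names are made up.
service_roles = {
    "control": {"flavor": "baremetal-control"},
    "compute": {"flavor": "baremetal-compute"},
    "block-storage": {"flavor": "baremetal-storage"},
}

def flavor_for(role):
    return service_roles[role]["flavor"]

print(flavor_for("compute"))  # -> baremetal-compute
```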

-Rob

-- 
Robert Collins rbtcoll...@hp.com
Distinguished Technologist
HP Converged Cloud



Re: [openstack-dev] [nova] massive number of new errors in logs with oslo.messaging

2014-02-02 Thread Jay Pipes
On Sun, 2014-02-02 at 07:13 -0500, Sean Dague wrote:
 Just noticed this at the end of a successful run:
 
 http://logs.openstack.org/15/63215/13/check/check-tempest-dsvm-full/2636cae/console.html#_2014-02-02_12_02_44_422
 
 It looks like the merge of oslo.messaging brings a huge amount of false
 negative error messages in the logs. Would be good if we didn't ship
 icehouse with this state of things.

Agreed.

And the error messages, which look like this:

Returning exception Unexpected task state: expecting [u'scheduling',
None] but the actual state is deleting to caller

don't make sense -- at least in the English language.

What does "the actual state is deleting to caller" mean?

Best,
-jay




Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Jaromir Coufal

On 01/30/2014 21:28, Robert Collins wrote:

On 30 January 2014 23:26, Tomas Sedovic tsedo...@redhat.com wrote:

Hi all,

I've seen some confusion regarding the homogenous hardware support as the
first step for the tripleo UI. I think it's time to make sure we're all on
the same page.

Here's what I think is not controversial:

1. Build the UI and everything underneath to work with homogenous hardware
in the Icehouse timeframe
2. Figure out how to support heterogenous hardware and do that (may or may
not happen within Icehouse)

The first option implies having a single nova flavour that will match all
the boxes we want to work with. It may or may not be surfaced in the UI (I
think that depends on our undercloud installation story).


I don't agree that (1) implies a single nova flavour. In the context
of the discussion it implied avoiding doing our own scheduling, and
due to the many moving parts we never got beyond that.
I think that homogeneous hardware implies a single flavor; that follows
from the definition of 'homogeneous'. The question is how we treat it then.



My expectation is that (argh naming of things) a service definition[1]
will specify a nova flavour, right from the get go. That gives you
homogeneous hardware for any service
[control/network/block-storage/object-storage].
If a service definition specifies a nova flavor, then (given that we
have 4 hard-coded roles) we are supporting heterogeneous HW (because we
would allow the user to specify 4 flavors).


What we agreed on in the beginning is homogeneous HW - which links to 
the fact that we have only one flavor.


We should really start with something *simple* and increment on that:

1) One flavor, no association with any role. This is what I see as
homogeneous HW - MVP0. (As an addition, for the sake of usability, we
wanted to add a 'no care' filter, so that it picks a node without the
need to specify requirements.)


2) association with role - one flavor per role - homogeneous hardware.

3) support multiple node profiles per role.

Why complicate things from the very beginning (1)?


Jaromir's wireframes include the ability to define multiple such
definitions, so two definitions for compute, for instance (e.g. one
might be KVM, one Xen, or one w/GPUs and the other without, with a
different host aggregate configured).

As long as each definition has a nova flavour, users with multiple
hardware configurations can just create multiple definitions, done.

That is not entirely policy driven, so for the longer term you want to be
able to say 'flavour X *or* Y can be used for this', but as an early
iteration it seems very straightforward to me.


Now, someone (I don't honestly know who or when) proposed a slight step up
from point #1 that would allow people to try the UI even if their hardware
varies slightly:



1.1 Treat similar hardware configuration as equal


I think this is a problematic idea, because of the points raised
elsewhere in the thread.

But more importantly, it's totally unnecessary. If one wants to handle
minor variations in hardware (e.g. 1TB vs 1.1TB disks) just register
them as being identical, with the lowest common denominator - Nova
will then treat them as equal.
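The 'lowest common denominator' registration described here can be sketched mechanically; the node dicts and field names below are illustrative only, not Ironic's actual enrollment API:

```python
def common_denominator(nodes):
    """Collapse near-identical hardware specs to their element-wise minimum.

    Enrolling every node with this reduced spec lets a single nova
    flavor match all of them (e.g. the 1TB vs 1.1TB disk case).
    """
    keys = ('cpus', 'memory_mb', 'local_gb')
    return {k: min(n[k] for n in nodes) for k in keys}

nodes = [
    {'cpus': 8, 'memory_mb': 32768, 'local_gb': 1000},  # 1TB disk
    {'cpus': 8, 'memory_mb': 32768, 'local_gb': 1100},  # 1.1TB disk
]
# Both nodes get registered with the 1000 GB (lowest) disk figure.
print(common_denominator(nodes))
```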

-Rob


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Jaromir Coufal


On 2014/01/31 22:03, Tzu-Mainn Chen wrote:

So after reading the replies on this thread, it seems like I (and others 
advocating a custom scheduler) may have overthought things a bit.  The reason 
this route was suggested was because of conflicting goals for Icehouse:

a) homogeneous nodes (to simplify requirements)
b) support diverse hardware sets (to allow as many users as possible to try 
Tuskar)



Option b) requires either a custom scheduler or forcing nodes to have the same 
attributes,
and the answer to that question is where much of the debate lies.

I think these two goals are pretty accurate.


However, taking a step back, maybe the real answer is:

a) homogeneous nodes
b) document. . .
- **unsupported** means of demoing Tuskar (set node attributes to match 
flavors, hack
  the scheduler, etc)
Why are people calling it 'hack'? It's an additional filter to 
nova-scheduler...?
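For reference, a nova-scheduler filter is essentially a class exposing a host_passes() predicate; the standalone sketch below mimics that interface (the exact-match logic is invented for illustration and is not the filter proposed in this thread):

```python
class ExactMatchFilter(object):
    """Toy sketch of the nova-scheduler host filter shape.

    Real filters subclass nova.scheduler.filters.BaseHostFilter and are
    enabled via scheduler configuration; this version only reproduces
    the host_passes() contract with plain dicts.
    """

    def host_passes(self, host_state, filter_properties):
        # Pass only hosts whose free resources exactly match the
        # requested flavor: one way to pin a flavor to
        # identically-sized bare-metal nodes.
        itype = filter_properties['instance_type']
        return (host_state['free_ram_mb'] == itype['memory_mb'] and
                host_state['free_disk_gb'] == itype['root_gb'])

f = ExactMatchFilter()
host = {'free_ram_mb': 32768, 'free_disk_gb': 1000}
flavor = {'memory_mb': 32768, 'root_gb': 1000}
print(f.host_passes(host, {'instance_type': flavor}))  # True
```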



- our goals of supporting heterogeneous nodes for the J-release.
I wouldn't talk about the J-release. I would talk about the next iteration or 
next step. Nobody said that we are not able to make it in the I-release.



Does this seem reasonable to everyone?

Mainn


Well, +1 for a) and its documentation.

However, Robert and I seem to have different opinions on what 
'homogeneous' means in our context. I think we should clarify that.


-- Jarda

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Cinder Stability Hack-a-thon

2014-02-02 Thread Avishay Traeger
Will join remotely for a few hours each day (time zones and all).  Nice 
effort!

Thanks,
Avishay



From:   Mike Perez thin...@gmail.com
To: OpenStack Development Mailing List 
openstack-dev@lists.openstack.org, 
Date:   02/01/2014 10:09 AM
Subject:[openstack-dev] Cinder Stability Hack-a-thon



Folks,

I would love to get people together who are interested in Cinder stability 
to really dedicate a few days. This is not for additional features, but rather 
finishing what we already have and really getting it into good shape 
before the end of the release.

When: Feb 24-26
Where: San Francisco (DreamHost Office can host), Colorado, remote?

Some ideas that come to mind:

- Cleanup/complete volume retype
- Cleanup/complete volume migration [1][2]
- Other ideas that come from this thread.

I can't stress the dedicated part enough. I think if we have some folks 
from core and anyone interested in contributing and staying focused, we 
can really get a lot done in a few days with a small set of doable stability 
goals to stay focused on. If there is enough interest, being together in the 
mentioned locations would be great; otherwise remote would be fine as 
long as people can stay focused and communicate through suggested 
ideas like TeamSpeak or Google Hangout.

What do you guys think? Location? Other stability concerns to add to the 
list?

[1] - https://bugs.launchpad.net/cinder/+bug/1255622
[2] - https://bugs.launchpad.net/cinder/+bug/1246200


-Mike Perez
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-02-02 Thread Alessandro Pilotti
Hi Michael,


On 02 Feb 2014, at 06:19 , Michael Still mi...@stillhq.com wrote:

 I saw another case of the build succeeded message for a failure just
 now... https://review.openstack.org/#/c/59101/ has a rebase failure
 but was marked as successful.
 
 Is this another case of hyper-v not being voting and therefore being a
 bit confusing? The text of the comment clearly indicates this is a
 failure at least.
 

Yes, all the Hyper-V CI messages start with “build succeeded”, while the next 
lines show the actual job result.
I asked on infra about how to get rid of that message, but from what I got from 
the chat it is not possible as long as the CI is non-voting, independently of 
the return status of the individual jobs.

Alessandro


 Thanks,
 Michael
 
 On Tue, Jan 28, 2014 at 12:17 AM, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 On 25 Jan 2014, at 16:51 , Matt Riedemann mrie...@linux.vnet.ibm.com wrote:
 
 
 
 On 1/24/2014 3:41 PM, Peter Pouliot wrote:
 Hello OpenStack Community,
 
 I am excited at this opportunity to make the community aware that the
 Hyper-V CI infrastructure
 
 is now up and running.  Let's first start with some housekeeping
 details.  Our Tempest logs are
 
 publicly available here: http://64.119.130.115. You will see them show
 up in any
 
 Nova Gerrit commit from this moment on.
 snip
 
 So now some questions. :)
 
 I saw this failed on one of my nova patches [1].  It says the build 
 succeeded but that the tests failed.  I talked with Alessandro about this 
 yesterday and he said that's working as designed, something with how the 
 scoring works with zuul?
 
 I spoke with clarkb on infra, since we were also very puzzled by this 
 behaviour. I've been told that when the job is non voting, it's always 
 reported as succeeded, which makes sense, although slightly misleading.
 The message in the Gerrit comment is clearly stating: Test run failed in 
 ..m ..s (non-voting), so this should be fair enough. It'd be great to have 
 a way to get rid of the Build succeeded message above.
 
 The problem I'm having is figuring out why it failed.  I looked at the 
 compute logs but didn't find any errors.  Can someone help me figure out 
 what went wrong here?
 
 
 The reason for the failure of this job can be found here:
 
 http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz
 
 Please search for (1054, Unknown column 'instances.locked_by' in 'field 
 list')
 
 In this case the job failed when nova service-list got called to verify 
 whether the compute nodes have been properly added to the devstack instance 
 in the overcloud.
 
 During the weekend we added also a console.log to help in simplifying 
 debugging, especially in the rare cases in which the job fails before 
 getting to run tempest:
 
 http://64.119.130.115/69047/1/console.log.gz
 
 
 Let me know if this helps in tracking down your issue!
 
 Alessandro
 
 
 [1] https://review.openstack.org/#/c/69047/1
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-02-02 Thread Michael Still
It seems like there were a lot of failing Hyper-V CI jobs for nova
yesterday. Is there some systemic problem or did all those patches
really fail? An example: https://review.openstack.org/#/c/66291/

Michael

On Mon, Feb 3, 2014 at 7:21 AM, Alessandro Pilotti
apilo...@cloudbasesolutions.com wrote:
 Hi Michael,


 On 02 Feb 2014, at 06:19 , Michael Still mi...@stillhq.com wrote:

 I saw another case of the build succeeded message for a failure just
 now... https://review.openstack.org/#/c/59101/ has a rebase failure
 but was marked as successful.

 Is this another case of hyper-v not being voting and therefore being a
 bit confusing? The text of the comment clearly indicates this is a
 failure at least.


 Yes, all the Hyper-V CI messages start with build succeeded, while the next 
 lines show the actual job result.
 I asked on infra about how to get rid of that message, but from what I got 
 from the chat it is not possible as long as the CI is non voting 
 independently from the return status of the single jobs.

 Alessandro


 Thanks,
 Michael

 On Tue, Jan 28, 2014 at 12:17 AM, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 On 25 Jan 2014, at 16:51 , Matt Riedemann mrie...@linux.vnet.ibm.com 
 wrote:



 On 1/24/2014 3:41 PM, Peter Pouliot wrote:
 Hello OpenStack Community,

 I am excited at this opportunity to make the community aware that the
 Hyper-V CI infrastructure

 is now up and running.  Let's first start with some housekeeping
 details.  Our Tempest logs are

 publically available here: http://64.119.130.115. You will see them show
 
 up in any

 Nova Gerrit commit from this moment on.
 snip

 So now some questions. :)

 I saw this failed on one of my nova patches [1].  It says the build 
 succeeded but that the tests failed.  I talked with Alessandro about this 
 yesterday and he said that's working as designed, something with how the 
 scoring works with zuul?

 I spoke with clarkb on infra, since we were also very puzzled by this 
 behaviour. I've been told that when the job is non voting, it's always 
 reported as succeeded, which makes sense, although slightly misleading.
 The message in the Gerrit comment is clearly stating: Test run failed in 
 ..m ..s (non-voting), so this should be fair enough. It'd be great to have 
 a way to get rid of the Build succeeded message above.

 The problem I'm having is figuring out why it failed.  I looked at the 
 compute logs but didn't find any errors.  Can someone help me figure out 
 what went wrong here?


 The reason for the failure of this job can be found here:

 http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz

 Please search for (1054, Unknown column 'instances.locked_by' in 'field 
 list')

 In this case the job failed when nova service-list got called to verify 
 whether the compute nodes have been properly added to the devstack instance 
 in the overcloud.

 During the weekend we added also a console.log to help in simplifying 
 debugging, especially in the rare cases in which the job fails before 
 getting to run tempest:

 http://64.119.130.115/69047/1/console.log.gz


 Let me know if this helps in tracking down your issue!

 Alessandro


 [1] https://review.openstack.org/#/c/69047/1

 --

 Thanks,

 Matt Riedemann


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 --
 Rackspace Australia

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Rackspace Australia

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] Hyper-V Nova CI Infrastructure

2014-02-02 Thread Alessandro Pilotti

On 02 Feb 2014, at 23:01 , Michael Still mi...@stillhq.com wrote:

 It seems like there were a lot of failing Hyper-V CI jobs for nova
 yesterday. Is there some systemic problem or did all those patches
 really fail? An example: https://review.openstack.org/#/c/66291/
 


We’re aware of this issue and are looking into it. The issue happens in devstack 
before the Hyper-V compute nodes are added and before tempest starts.

I’ll post an update as soon as we get it sorted out.


Thanks,

Alessandro


 Michael
 
 On Mon, Feb 3, 2014 at 7:21 AM, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 Hi Michael,
 
 
 On 02 Feb 2014, at 06:19 , Michael Still mi...@stillhq.com wrote:
 
 I saw another case of the build succeeded message for a failure just
 now... https://review.openstack.org/#/c/59101/ has a rebase failure
 but was marked as successful.
 
 Is this another case of hyper-v not being voting and therefore being a
 bit confusing? The text of the comment clearly indicates this is a
 failure at least.
 
 
  Yes, all the Hyper-V CI messages start with build succeeded, while the next 
 lines show the actual job result.
 I asked on infra about how to get rid of that message, but from what I got 
 from the chat it is not possible as long as the CI is non voting 
 independently from the return status of the single jobs.
 
 Alessandro
 
 
 Thanks,
 Michael
 
 On Tue, Jan 28, 2014 at 12:17 AM, Alessandro Pilotti
 apilo...@cloudbasesolutions.com wrote:
 On 25 Jan 2014, at 16:51 , Matt Riedemann mrie...@linux.vnet.ibm.com 
 wrote:
 
 
 
 On 1/24/2014 3:41 PM, Peter Pouliot wrote:
 Hello OpenStack Community,
 
 I am excited at this opportunity to make the community aware that the
 Hyper-V CI infrastructure
 
 is now up and running.  Let's first start with some housekeeping
 details.  Our Tempest logs are
 
  publicly available here: http://64.119.130.115. You will see them show
 up in any
 
 Nova Gerrit commit from this moment on.
 snip
 
 So now some questions. :)
 
 I saw this failed on one of my nova patches [1].  It says the build 
 succeeded but that the tests failed.  I talked with Alessandro about this 
 yesterday and he said that's working as designed, something with how the 
 scoring works with zuul?
 
 I spoke with clarkb on infra, since we were also very puzzled by this 
 behaviour. I've been told that when the job is non voting, it's always 
  reported as succeeded, which makes sense, although slightly misleading.
 The message in the Gerrit comment is clearly stating: Test run failed in 
 ..m ..s (non-voting), so this should be fair enough. It'd be great to 
  have a way to get rid of the Build succeeded message above.
 
 The problem I'm having is figuring out why it failed.  I looked at the 
 compute logs but didn't find any errors.  Can someone help me figure out 
 what went wrong here?
 
 
 The reason for the failure of this job can be found here:
 
 http://64.119.130.115/69047/1/devstack_logs/screen-n-api.log.gz
 
 Please search for (1054, Unknown column 'instances.locked_by' in 'field 
 list')
 
 In this case the job failed when nova service-list got called to verify 
  whether the compute nodes have been properly added to the devstack instance 
 in the overcloud.
 
 During the weekend we added also a console.log to help in simplifying 
 debugging, especially in the rare cases in which the job fails before 
 getting to run tempest:
 
 http://64.119.130.115/69047/1/console.log.gz
 
 
 Let me know if this helps in tracking down your issue!
 
 Alessandro
 
 
 [1] https://review.openstack.org/#/c/69047/1
 
 --
 
 Thanks,
 
 Matt Riedemann
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 --
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 
 
 
 -- 
 Rackspace Australia
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Clint Byrum
Excerpts from Jaromir Coufal's message of 2014-02-02 11:19:25 -0800:
 On 2014/01/30 23:33, Devananda van der Veen wrote:
  I was responding based on Treat similar hardware configuration as
  equal. When there is a very minor difference in hardware (eg, 1TB vs
  1.1TB disks), enrolling them with the same spec (1TB disk) is sufficient
  to solve all these issues and mask the need for multiple flavors, and
  the hardware wouldn't need to be re-enrolled.
 I disagree here. Of course users can register HW as they wish; it's their 
 responsibility. But asking them to register nodes as equal (even if they 
 are close) is going to be a mess and cause huge confusion for users. You would 
 actually be asking the user to enter non-real data just so they can use our 
 deployment tool. From my point of view, this is not the right approach, and 
 I would rather see them entering correct information and us working with it.
 

I totally understand the desire to have it all make sense to users. I
wonder if the desire to have something working well in the next 30 days
should override it.

My thought is that early adopters are used to this sort of
work-around. Accept this bit of weirdness, and you get all this
awesomeness. In this case: register hardware with similar specs as
identical specs, and you get automatic deployment over heterogeneous
hardware.

If we keep the scope narrow enough, those users only have to wait a few
more months to be able to make their hardware registrations more accurate.

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Ceilometer]Cumulative metrics resetting

2014-02-02 Thread Adrian Turjak


On 31/01/14 21:50, Julien Danjou wrote:

On Fri, Jan 31 2014, Adrian Turjak wrote:


Is it working on Havana? As I've gone through and tried all the possible
state changes (reboot, suspend, pause, resize, terminate, etc), and not one
triggers a poll cycle outside of the standard polling interval. So in all
those cases apart from pause (which doesn't cause a reset), I'm losing
cumulative data.

It should. Did you enable it in the notification_drivers?

It hadn't been, devstack doesn't enable it by default it seems. I have 
since added it to ceilometer.conf:

notification_driver = ceilometer.compute.nova_notifier

and done a ./unstack ./stack

Still nothing. I'm not sure if there is some documentation on this I'm 
missing, but I've been unable to find anything useful.


Nova_notifier as is only covers the terminate case 
(compute.instance.delete.start) anyway, but once I have it working at 
least for that case, I can alter it to work for the rest easily enough 
by adding in a list of tracked notifications.
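That tracked-notifications list could look roughly like this; the event-type strings follow Nova's compute.instance.* naming convention, but the helper itself is a hypothetical sketch, not ceilometer code:

```python
# Hypothetical set of lifecycle events that should trigger an extra
# poll before a cumulative counter is reset.
TRACKED_EVENTS = {
    'compute.instance.delete.start',
    'compute.instance.power_off.start',
    'compute.instance.resize.prep.start',
}

def should_poll(notification):
    """Return True when a notification warrants an immediate pollster
    run, so cumulative data is captured before the instance resets."""
    return notification.get('event_type') in TRACKED_EVENTS

print(should_poll({'event_type': 'compute.instance.delete.start'}))  # True
print(should_poll({'event_type': 'compute.instance.create.end'}))    # False
```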


Also, side note, see comments #18 and #19 of this thread:
https://bugs.launchpad.net/nova/+bug/1221987

If that is the case, is nova_notifier what I should be using at all?
___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [TripleO] [Ironic] Roadmap towards heterogenous hardware support

2014-02-02 Thread Tzu-Mainn Chen
 On 1 February 2014 10:03, Tzu-Mainn Chen tzuma...@redhat.com wrote:
  So after reading the replies on this thread, it seems like I (and others
  advocating
  a custom scheduler) may have overthought things a bit.  The reason this
  route was
  suggested was because of conflicting goals for Icehouse:
 
  a) homogeneous nodes (to simplify requirements)
  b) support diverse hardware sets (to allow as many users as possible to try
  Tuskar)
 
  Option b) requires either a custom scheduler or forcing nodes to have the
  same attributes,
  and the answer to that question is where much of the debate lies.
 
 Not really. It all depends on how you define 'support diverse hardware
 sets'. The point I've consistently made is that by working within the
 current scheduler we can easily deliver homogeneous support *within* a
 given 'service role'. So that is (a), not 'every single node is
 identical'.
 
 A (b) of supporting arbitrary hardware within a single service role is
 significantly more complex, and while I think it's entirely doable, it
 would be a mistake to tackle this within I (and possibly J). I don't
 think users will be impaired by us delaying, however.
 
  However, taking a step back, maybe the real answer is:
 
  a) homogeneous nodes
  b) document. . .
 - **unsupported** means of demoing Tuskar (set node attributes to
 match flavors, hack
   the scheduler, etc)
 - our goals of supporting heterogeneous nodes for the J-release.
 
  Does this seem reasonable to everyone?
 
 No, because a) is overly scoped.
 
 I think we should have a flavor attribute in the definition of a
 service role, and no unsupported hacks needed; and J goals should be
 given a chunk of time to refine in Atlanta.

Fair enough.  It's my fault for being imprecise, but in my email I meant 
homogeneous as homogeneous per service role.

That being said, are people on board with:

a) a single flavor per service role for Icehouse?
b) documentation as suggested above?

A single flavor per service role shouldn't be significantly harder than a 
single flavor
for all service roles (multiple flavors per service role is where tricky issues 
start
to creep in).

Mainn

 -Rob
 
 --
 Robert Collins rbtcoll...@hp.com
 Distinguished Technologist
 HP Converged Cloud
 
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
 

___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [mistral] Community meeting agenda - 02/03/2014

2014-02-02 Thread Renat Akhmerov
Hi,

This is a reminder that we will have a weekly community meeting in IRC 
(#openstack-meeting) tomorrow at 16.00 UTC.

Here’s the agenda (also published at [0] along with links to the previous 
meetings):

Review action items
Discuss capabilities and DSL again (since we now have new contributors)
Discuss DSL example (https://etherpad.openstack.org/p/mistral-poc)
Discuss current PoC status
Review Blueprints (at least part)
Open discussion (roadblocks, suggestions, etc.)

As usual, everyone is welcome to join!

[0] https://wiki.openstack.org/wiki/Meetings/MistralAgenda

Renat Akhmerov
@ Mirantis Inc.



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] keystone-manage db_sync doesn't work if [database] connection points to IPv6 address

2014-02-02 Thread Martinx - ジェームズ
Done.

https://bugs.launchpad.net/keystone/+bug/1275615

I'll try it again tomorrow... Just to make sure it isn't me doing something
wrong...

Best!
Thiago


On 2 February 2014 15:58, Martinx - ジェームズ thiagocmarti...@gmail.com wrote:

 Sure! I'll...=)


 On 2 February 2014 13:32, Dolph Mathews dolph.math...@gmail.com wrote:

 Can you open a bug for this at https://bugs.launchpad.net/keystone ?
 Thanks!

 On Sun, Feb 2, 2014 at 9:15 AM, Martinx - ジェームズ 
 thiagocmarti...@gmail.com wrote:

 Guys,

 I'm trying to install IceHouse-2 in a dual-stacked environment (Ubuntu
 14.04), but keystone-manage db_sync doesn't work if the db connection points
 to an IPv6 address, like this:

 My /etc/network/interfaces looks like:

 ---
 # The loopback network interface
 auto lo
 iface lo inet loopback
 iface lo inet6 loopback

 auto eth0
 # IPv6
 iface eth0 inet6 static
 address 2001:1291::fffa::
 netmask 64
 gateway 2001:1291::fffa::1
# dns-* options are implemented by the resolvconf package, if
 installed
 dns-search domain.com
 dns-nameservers 2001:4860:4860::8844
 # IPv4
 iface eth0 inet static
address 192.168.XX.100
netmask 24
gateway 192.168.XX.1
# dns-* options are implemented by the resolvconf package, if
 installed
dns-nameservers 8.8.4.4 8.8.8.8
dns-search domain.com
 ---

 My /etc/hosts contains:

 ---
 2001:1291::fffa::controller-1.domain.com  controller-1
 192.168.XX.100  controller-1.domain.com  controller-1
 ---

 MySQL binds on both IPv4 and IPv6, my.cnf like this:

 ---
 bind-address = ::
 ---

 My /etc/keystone/keystone.conf contains:

 ---
 connection = mysql://keystoneUser:keystonep...@controller-1.domain.com/keystone
 ---

 So, this way, keystone-manage db_sync does not work, but if I replace the
 keystone.conf connection line with this:

 ---
 connection = mysql://keystoneUser:keystonep...@192.168.xx.100/keystone
 ---

 It works! Then, after db_sync, I return it back to FQDN, where it
 resolves to IPv6 address and it works fine...
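A side note that may be relevant while debugging: if the IPv6 literal itself is ever placed in the connection URL (instead of an FQDN), RFC 3986 requires it to be wrapped in square brackets. A tiny hypothetical helper (not keystone code) showing that form:

```python
def db_url(user, password, host, database):
    """Build a mysql:// SQLAlchemy-style URL, bracketing IPv6 literals.

    Per RFC 3986, an IPv6 literal in a URL authority must be wrapped
    in brackets, e.g. mysql://u:p@[2001:db8::1]/db; hostnames and
    IPv4 addresses are used as-is.
    """
    if ':' in host and not host.startswith('['):
        host = '[%s]' % host
    return 'mysql://%s:%s@%s/%s' % (user, password, host, database)

print(db_url('keystoneUser', 'secret', '2001:db8::1', 'keystone'))
# mysql://keystoneUser:secret@[2001:db8::1]/keystone
print(db_url('keystoneUser', 'secret', 'controller-1.domain.com', 'keystone'))
# mysql://keystoneUser:secret@controller-1.domain.com/keystone
```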

 Cheers!
 Thiago

 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [Neutron][IPv6] A pair of mode keywords

2014-02-02 Thread Shixiong Shang
I just submitted dnsmasq related code to gerrit:

https://review.openstack.org/70649

This submission is intended to implement the “ipv6-two-attributes” BP and the three other 
blueprints (SLAAC, DHCPv6-Stateful, DHCPv6-Stateless) rooted in it. Please 
review and let me know what you think.

Thanks in advance!

Shixiong





On Feb 2, 2014, at 12:53 PM, Collins, Sean sean_colli...@cable.comcast.com 
wrote:

 On Sat, Feb 01, 2014 at 01:18:09AM -0500, Shixiong Shang wrote:
 In other words, I can retrieve the values by:
 
 subnet.ipv6_ra_mode
 subnet.ipv6_address_mode
 
 Is that correct? Would you please confirm?
 
 Yes - that is the intent.
 
 I just have to fix an issue with the DB column definitions, so that
 it'll work with postgres, I think I have a closing brace misplaced, so
 it's not defining the Enum type correctly, and we have to get
 bug #1270212 resolved, since that's making the unit tests fail.
 
 -- 
 Sean M. Collins
 ___
 OpenStack-dev mailing list
 OpenStack-dev@lists.openstack.org
 http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

2014-02-02 Thread Irena Berezovsky
Hi Sandhya,
Can you please elaborate on how you suggest extending the below bp for SRIOV 
ports managed by different mechanism drivers?
I am not biased toward any specific direction here; I just think we need a common layer 
for managing SRIOV ports at neutron, since there is a common path between nova 
and neutron.

BR,
Irena


From: Sandhya Dasu (sadasu) [mailto:sad...@cisco.com]
Sent: Friday, January 31, 2014 6:46 PM
To: Irena Berezovsky; Robert Li (baoli); Robert Kukura; OpenStack Development 
Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: Re: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi Irena,
  I was initially looking at 
https://blueprints.launchpad.net/neutron/+spec/ml2-typedriver-extra-port-info 
to take care of the extra information required to set up the SR-IOV port. When 
the scope of the BP was being decided, we had very little info about our own 
design so I didn't give any feedback about SR-IOV ports. But, I feel that this 
is the direction we should be going. Maybe we should target this in Juno.

Introducing SRIOVPortProfileMixin would create yet another way to take 
care of extra port config. Let me know what you think.

Thanks,
Sandhya

From: Irena Berezovsky ire...@mellanox.commailto:ire...@mellanox.com
Date: Thursday, January 30, 2014 4:13 PM
To: Robert Li (baoli) ba...@cisco.commailto:ba...@cisco.com, Robert 
Kukura rkuk...@redhat.commailto:rkuk...@redhat.com, Sandhya Dasu 
sad...@cisco.commailto:sad...@cisco.com, OpenStack Development Mailing 
List (not for usage questions) 
openstack-dev@lists.openstack.orgmailto:openstack-dev@lists.openstack.org, 
Brian Bowen (brbowen) brbo...@cisco.commailto:brbo...@cisco.com
Subject: RE: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Robert,
Thank you very much for the summary.
Please, see inline

From: Robert Li (baoli) [mailto:ba...@cisco.com]
Sent: Thursday, January 30, 2014 10:45 PM
To: Robert Kukura; Sandhya Dasu (sadasu); Irena Berezovsky; OpenStack 
Development Mailing List (not for usage questions); Brian Bowen (brbowen)
Subject: [openstack-dev] [nova][neutron] PCI pass-through SRIOV on Jan. 30th

Hi,

We made a lot of progress today. We agreed that:
-- vnic_type will be a top level attribute as binding:vnic_type
-- BPs:
 * Irena's 
https://blueprints.launchpad.net/neutron/+spec/ml2-request-vnic-type for 
binding:vnic_type
 * Bob to submit a BP for binding:profile in ML2. SRIOV input info will be 
encapsulated in binding:profile
 * Bob to submit a BP for binding:vif_details in ML2. SRIOV output info 
will be encapsulated in binding:vif_details, which may include other 
information like security parameters. For SRIOV, vlan_id and profileid are 
candidates.
-- new arguments for port-create will be implicit arguments. Future release may 
make them explicit. New argument: --binding:vnic_type {virtio, direct, macvtap}.
I think that currently we can make do without the profileid as an input 
parameter from the user. The mechanism driver will return a profileid in the 
vif output.

Please correct any misstatements above.
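The agreed constraint on the new argument can be captured in a trivial validation sketch; the constant name is invented, and the default-to-virtio behaviour is an assumption echoing the open question further down:

```python
# The three values agreed for the implicit --binding:vnic_type argument.
VNIC_TYPES = ('virtio', 'direct', 'macvtap')

def validate_vnic_type(value):
    """Validate a binding:vnic_type value (illustrative sketch).

    When the plugin does not set vnic_type (e.g. a non-ML2 plugin),
    we assume it falls back to 'virtio', per the question below.
    """
    if value is None:
        return 'virtio'
    if value not in VNIC_TYPES:
        raise ValueError('unsupported vnic_type: %s' % value)
    return value

print(validate_vnic_type('direct'))  # direct
print(validate_vnic_type(None))      # virtio
```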

Issues:
  -- do we need a common utils/driver for SRIOV generic parts to be used by 
individual Mechanism drivers that support SRIOV? More details on what would be 
included in this sriov utils/driver? I'm thinking that a candidate would be the 
helper functions to interpret the pci_slot, which is proposed as a string. 
Anything else on your mind?
[IrenaB] I was thinking of an SRIOVPortProfileMixin to handle and persist SRIOV 
port-related attributes.

  -- what should mechanism drivers put in binding:vif_details and how nova 
would use this information? as far as I see it from the code, a VIF object is 
created and populated based on information provided by neutron (from get 
network and get port)

Questions:
  -- nova needs to work with both ML2 and non-ML2 plugins. For regular plugins, 
binding:vnic_type will not be set, I guess. Then would it be treated as a 
virtio type? And if a non-ML2 plugin wants to support SRIOV, would it need to  
implement vnic-type, binding:profile, binding:vif-details for SRIOV itself?
[IrenaB] vnic_type will be added as an additional attribute to binding 
extension. For persistency it should be added in PortBindingMixin for non ML2. 
I didn't think to cover it as part of ML2 vnic_type bp.
For the rest attributes, need to see what Bob plans.

 -- is a neutron agent making decision based on the binding:vif_type?  In that 
case, it makes sense for binding:vnic_type not to be exposed to agents.
[IrenaB] vnic_type is an input parameter that will eventually cause a certain 
vif_type to be sent to the GenericVIFDriver to create the network interface. Neutron 
agents periodically scan for attached interfaces; for example, the OVS agent will 
look only for OVS interfaces, so if an SRIOV interface is created, it won't be 
discovered by the OVS agent.

Thanks,
Robert
___
OpenStack-dev mailing list

[openstack-dev] [glance][nova]improvement-of-accessing-to-glance

2014-02-02 Thread Eiichi Aikawa
Hi,

Here is the blueprint about improvement of accessing to glance API server.
  https://blueprints.launchpad.net/nova/+spec/improvement-of-accessing-to-glance

The summary of this bp is:
 - Glance API servers are categorized into two groups: Primary and 
Secondary.
 - First, Nova tries to access the Primary Glance API servers in random order.
 - If all Primary servers fail, Nova tries the Secondary servers
   in random order.

We expect nearby servers to be treated as Primary, and other servers
as Secondary.

The benefits of this bp, as we see them, are:
 - By listing nearby glance API servers and using them first, the total amount of data
   transfer across the networks can be reduced.
 - Especially in a microserver deployment, accessing a glance API server
   in the same chassis is more efficient than reaching one in another chassis.

This can reduce network traffic and increase efficiency.
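The primary-then-secondary failover described above could be sketched as follows; the fetch callable and failure model are hypothetical, since the real change would live inside Nova's glance client:

```python
import random

def fetch_image(primaries, secondaries, fetch):
    """Try primary glance API servers in random order, then secondaries.

    `fetch` is any callable taking a server name; it raises IOError on
    failure. Server names here are placeholders, not real endpoints.
    """
    for group in (primaries, secondaries):
        servers = list(group)
        random.shuffle(servers)  # spread load across the group
        for server in servers:
            try:
                return fetch(server)
            except IOError:
                continue  # fall through to the next server in this group
    raise IOError('all glance API servers failed')

# Example: both primaries down, so the secondary serves the request.
down = {'p1', 'p2'}
def fake_fetch(server):
    if server in down:
        raise IOError(server)
    return 'image-from-%s' % server

print(fetch_image(['p1', 'p2'], ['s1'], fake_fetch))  # image-from-s1
```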

Please give us your advice/comment.

Regards,
E.Aikawa (aik...@mxk.nes.nec.co.jp)


___
OpenStack-dev mailing list
OpenStack-dev@lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev