Re: [openstack-dev] [nova-lxd]Feature support matrix of nova-lxd
Hi James,

Thank you for agreeing. I will begin writing the document.

Best regards,

On 2018/08/31 20:03, James Page wrote:
> Hi Rikimaru
>
> On Fri, 31 Aug 2018 at 11:28 Rikimaru Honjo wrote:
>> Hello,
>>
>> I'm planning to write a feature support matrix[1] of nova-lxd and add it to the nova-lxd repository. A similar document exists as todo.txt[2], but it is old. May I write it?
>
> Yes please!
>
>> If someone is already writing the same document, I'll stop.
>
> They are not - please go ahead - this would be a valuable contribution for users evaluating this driver.
>
> Regards
> James

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
★ My department name has changed.
NTT TechnoCross Corporation
IoT Innovation Division, Second Business Unit (IV2BU)
Rikimaru Honjo
TEL: 045-212-7539
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Yokohama i-Mark Place 13F, 4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
[openstack-dev] [nova-lxd]Feature support matrix of nova-lxd
Hello,

I'm planning to write a feature support matrix[1] of nova-lxd and add it to the nova-lxd repository. A similar document exists as todo.txt[2], but it is old.

May I write it? If someone is already writing the same document, I'll stop.

[1] It will be like this: https://docs.openstack.org/nova/latest/user/support-matrix.html
[2] https://github.com/openstack/nova-lxd/blob/master/specs/todo.txt

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [glance][cinder]Question about cinder as glance store
Hello Gorka,

Thank you for replying! I'll try to run glance-api and cinder-volume on the same node according to your information.

On 2018/02/07 19:27, Gorka Eguileor wrote:
> On 07/02, Rikimaru Honjo wrote:
>> Hello,
>>
>> I'm planning to use cinder as a glance store, and I'll set up cinder to connect to storage via iSCSI multipath. In this case, can I run glance-api and cinder-volume on the same node?
>>
>> In my understanding, glance-api will attach a volume to its own node and write an uploaded image to the volume if the glance backend is cinder. I'm afraid of a race condition between cinder-volume's iSCSI operations and glance-api's iSCSI operations. Is there a possibility of that occurring?
>
> Hi,
>
> When properly set up with the right configuration and the right system and OpenStack packages, Cinder, OS-Brick, and Nova no longer have race conditions with iSCSI operations (single or multipathed), not even with drivers that do "shared target". So I would assume that Glance won't have any issues either, as long as it's properly making the Cinder and OS-Brick calls.
>
> Cheers,
> Gorka.

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
[openstack-dev] [glance][cinder]Question about cinder as glance store
Hello,

I'm planning to use cinder as a glance store, and I'll set up cinder to connect to storage via iSCSI multipath. In this case, can I run glance-api and cinder-volume on the same node?

In my understanding, glance-api will attach a volume to its own node and write an uploaded image to the volume if the glance backend is cinder. I'm afraid of a race condition between cinder-volume's iSCSI operations and glance-api's iSCSI operations. Is there a possibility of that occurring?

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true
I tried to replace pyinotify with inotify, but the same error occurred. I'm asking the developer of inotify about its behavior. I wrote the details of my status on Launchpad:
https://bugs.launchpad.net/masakari/+bug/1740111/comments/4

On 2018/01/31 20:03, Rikimaru Honjo wrote:
> Hello,
>
> Sorry for the very late reply...
>
> On 2018/01/10 1:11, Doug Hellmann wrote:
>> Excerpts from Rikimaru Honjo's message of 2018-01-09 18:11:09 +0900:
>>> Hello,
>>>
>>> On 2018/01/04 23:12, Doug Hellmann wrote:
>>>> Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900:
>>>>> Hello,
>>>>>
>>>>> The bug below was reported on Masakari's Launchpad. I think this bug was caused by oslo.log. (The root cause is a bug in pyinotify, which oslo.log uses. The details are written in the bug report.)
>>>>>
>>>>> * masakari-api failed to launch due to setting of watch_log_file and log_file
>>>>>   https://bugs.launchpad.net/masakari/+bug/1740111
>>>>>
>>>>> There is a possibility that this bug affects all OpenStack components using oslo.log. (However, processes running under uwsgi[1] weren't affected when I tried to reproduce it. I haven't figured out the reason for this yet...)
>>>>>
>>>>> Could you help us? And what should we do...?
>>>>>
>>>>> [1] e.g. nova-api, cinder-api, keystone...
>>>>>
>>>>> Best regards,
>>>>
>>>> The bug is in pyinotify. According to the git repo [1] that project was last updated in June of 2015. I recommend we move off of pyinotify entirely, since it appears to be unmaintained. If there is another library to do the same thing we should switch to it (there seem to be lots of options [2]). If there is no viable replacement or fork, we should deprecate that log watching feature (and anything else for which we use pyinotify) and remove it ASAP.
>>>>
>>>> We'll need a volunteer to do the evaluation and update oslo.log.
>>>>
>>>> Doug
>>>>
>>>> [1] https://github.com/seb-m/pyinotify
>>>> [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search
>>>
>>> Thank you for replying. I haven't researched deeply, but inotify looks good, because the "weight" of inotify is the largest and the following text appears on its page:
>>>
>>> https://pypi.python.org/pypi/inotify/0.2.9
>>> "This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available."
>>>
>>> PyInotify is defunct and no longer available...
>>
>> The inotify package seems like a good candidate to replace pyinotify. Have you looked at how hard it would be to change oslo.log? If so, does using the newer library eliminate the bug you had?
>>
>> Doug
>
> I am researching it now. (But I think it is not easy.) I'll create a patch if inotify can eliminate the bug.

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true
Hello,

Sorry for the very late reply...

On 2018/01/10 1:11, Doug Hellmann wrote:
> Excerpts from Rikimaru Honjo's message of 2018-01-09 18:11:09 +0900:
>> Hello,
>>
>> On 2018/01/04 23:12, Doug Hellmann wrote:
>>> Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900:
>>>> Hello,
>>>>
>>>> The bug below was reported on Masakari's Launchpad. I think this bug was caused by oslo.log. (The root cause is a bug in pyinotify, which oslo.log uses. The details are written in the bug report.)
>>>>
>>>> * masakari-api failed to launch due to setting of watch_log_file and log_file
>>>>   https://bugs.launchpad.net/masakari/+bug/1740111
>>>>
>>>> There is a possibility that this bug affects all OpenStack components using oslo.log. (However, processes running under uwsgi[1] weren't affected when I tried to reproduce it. I haven't figured out the reason for this yet...)
>>>>
>>>> Could you help us? And what should we do...?
>>>>
>>>> [1] e.g. nova-api, cinder-api, keystone...
>>>>
>>>> Best regards,
>>>
>>> The bug is in pyinotify. According to the git repo [1] that project was last updated in June of 2015. I recommend we move off of pyinotify entirely, since it appears to be unmaintained. If there is another library to do the same thing we should switch to it (there seem to be lots of options [2]). If there is no viable replacement or fork, we should deprecate that log watching feature (and anything else for which we use pyinotify) and remove it ASAP.
>>>
>>> We'll need a volunteer to do the evaluation and update oslo.log.
>>>
>>> Doug
>>>
>>> [1] https://github.com/seb-m/pyinotify
>>> [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search
>>
>> Thank you for replying. I haven't researched deeply, but inotify looks good, because the "weight" of inotify is the largest and the following text appears on its page:
>>
>> https://pypi.python.org/pypi/inotify/0.2.9
>> "This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available."
>>
>> PyInotify is defunct and no longer available...
>
> The inotify package seems like a good candidate to replace pyinotify. Have you looked at how hard it would be to change oslo.log? If so, does using the newer library eliminate the bug you had?
>
> Doug

I am researching it now. (But I think it is not easy.) I'll create a patch if inotify can eliminate the bug.

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [masakari] BUG in Masakari Installation and Procedure and/or Documentation
on. 2018-01-24 20:31:17.769 12473 ERROR masakarimonitors.ha.masakari

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
[openstack-dev] [oslo][oslo.log]Re: Error will be occurred if watch_log_file option is true
Hello,

On 2018/01/04 23:12, Doug Hellmann wrote:
> Excerpts from Rikimaru Honjo's message of 2018-01-04 18:22:26 +0900:
>> Hello,
>>
>> The bug below was reported on Masakari's Launchpad. I think this bug was caused by oslo.log. (The root cause is a bug in pyinotify, which oslo.log uses. The details are written in the bug report.)
>>
>> * masakari-api failed to launch due to setting of watch_log_file and log_file
>>   https://bugs.launchpad.net/masakari/+bug/1740111
>>
>> There is a possibility that this bug affects all OpenStack components using oslo.log. (However, processes running under uwsgi[1] weren't affected when I tried to reproduce it. I haven't figured out the reason for this yet...)
>>
>> Could you help us? And what should we do...?
>>
>> [1] e.g. nova-api, cinder-api, keystone...
>>
>> Best regards,
>
> The bug is in pyinotify. According to the git repo [1] that project was last updated in June of 2015. I recommend we move off of pyinotify entirely, since it appears to be unmaintained. If there is another library to do the same thing we should switch to it (there seem to be lots of options [2]). If there is no viable replacement or fork, we should deprecate that log watching feature (and anything else for which we use pyinotify) and remove it ASAP.
>
> We'll need a volunteer to do the evaluation and update oslo.log.
>
> Doug
>
> [1] https://github.com/seb-m/pyinotify
> [2] https://pypi.python.org/pypi?%3Aaction=search&term=inotify&submit=search

Thank you for replying. I haven't researched deeply, but inotify looks good, because the "weight" of inotify is the largest and the following text appears on its page:

https://pypi.python.org/pypi/inotify/0.2.9
"This project is unrelated to the *PyInotify* project that existed prior to this one (this project began in 2015). That project is defunct and no longer available."

PyInotify is defunct and no longer available...

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
[openstack-dev] [oslo][oslo.log]Error will be occurred if watch_log_file option is true
Hello,

The bug below was reported on Masakari's Launchpad. I think this bug was caused by oslo.log. (The root cause is a bug in pyinotify, which oslo.log uses. The details are written in the bug report.)

* masakari-api failed to launch due to setting of watch_log_file and log_file
  https://bugs.launchpad.net/masakari/+bug/1740111

There is a possibility that this bug affects all OpenStack components using oslo.log. (However, processes running under uwsgi[1] weren't affected when I tried to reproduce it. I haven't figured out the reason for this yet...)

Could you help us? And what should we do...?

[1] e.g. nova-api, cinder-api, keystone...

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [masakari] problems starting up masakari instance monitoring in devstack @ master
Hello Greg,

I forgot to tell you: please use process_list.yaml instead of proc.list.sample.

On 2017/12/07 14:03, Rikimaru Honjo wrote:
> Hello Greg,
>
> Please use masakarimonitors.conf instead of hostmonitor.conf and processmonitor.conf. You can generate it with "tox -egenconfig".
>
> hostmonitor.conf and processmonitor.conf are used by the monitors implemented in shell script. masakarimonitors.conf is the configuration file for the monitors implemented in Python, which you installed.
>
> Also, we are preparing setup guides. Please see them if you like:
> masakari: https://review.openstack.org/#/c/489570/
> masakari-monitors: https://review.openstack.org/#/c/489095/
>
> Best regards,
>
> On 2017/12/06 22:48, Waines, Greg wrote:
>> I am just getting started working with masakari. I am working on master.
>>
>> I have set up Masakari in Devstack (see details at end of email) ... which starts up the masakari-engine and masakari-api processes.
>>
>> I have git cloned masakari-monitors and started the monitors up (roughly) following the instructions at https://github.com/openstack/masakari-monitors . Specifically:
>>
>> # install & startup monitors
>> cd
>> git clone https://github.com/openstack/masakari-monitors.git
>> cd masakari-monitors
>> sudo python setup.py install
>> cd
>> sudo mkdir /etc/masakarimonitors
>> sudo cp ~/masakari-monitors/etc/masakarimonitors/hostmonitor.conf.sample /etc/masakarimonitors/hostmonitor.conf
>> sudo cp ~/masakari-monitors/etc/masakarimonitors/processmonitor.conf.sample /etc/masakarimonitors/processmonitor.conf
>> sudo cp ~/masakari-monitors/etc/masakarimonitors/proc.list.sample /etc/masakarimonitors/proc.list
>> cd ~/masakari-monitors/masakarimonitors/cmd
>> sudo masakari-processmonitor.sh /etc/masakarimonitors/processmonitor.conf /etc/masakarimonitors/proc.list &
>> sudo masakari-hostmonitor.sh /etc/masakarimonitors/hostmonitor.conf &
>> sudo /usr/bin/python ./instancemonitor.py &
>>
>> However, instancemonitor.py starts and exits ... and does not appear to start any process(es) ... with no error messages and no log file.
>>
>> Is this the correct way to start up masakari instance monitoring?
>>
>> Greg.
>>
>> My Masakari setup in Devstack:
>>
>> sudo useradd -s /bin/bash -d /opt/stack -m stack
>> echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
>> sudo su - stack
>> git clone https://github.com/openstack-dev/devstack
>> cd devstack
>>
>> local.conf file:
>> [[local|localrc]]
>> ADMIN_PASSWORD=admin
>> DATABASE_PASSWORD=admin
>> RABBIT_PASSWORD=admin
>> SERVICE_PASSWORD=admin
>> # setup Neutron services
>> disable_service n-net
>> enable_service q-svc
>> enable_service q-agt
>> enable_service q-dhcp
>> enable_service q-l3
>> enable_service q-meta
>> # ceilometer
>> enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
>> enable_plugin aodh https://git.openstack.org/openstack/aodh
>> # heat
>> enable_plugin heat https://git.openstack.org/openstack/heat
>> # vitrage
>> enable_plugin vitrage https://git.openstack.org/openstack/vitrage
>> enable_plugin vitrage-dashboard https://git.openstack.org/openstack/vitrage-dashboard
>> # masakari
>> enable_plugin masakari git://git.openstack.org/openstack/masakari
>>
>> ./stack.sh

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [masakari]problems starting up masakari instance monitoring in devstack @ master
Hello Greg,

Please use masakarimonitors.conf instead of hostmonitor.conf and processmonitor.conf. You can generate it with "tox -egenconfig".

hostmonitor.conf and processmonitor.conf are used by the monitors implemented in shell script. masakarimonitors.conf is the configuration file for the monitors implemented in Python, which you installed.

Also, we are preparing setup guides. Please see them if you like:
masakari: https://review.openstack.org/#/c/489570/
masakari-monitors: https://review.openstack.org/#/c/489095/

Best regards,

On 2017/12/06 22:48, Waines, Greg wrote:
> I am just getting started working with masakari. I am working on master.
>
> I have set up Masakari in Devstack (see details at end of email) ... which starts up the masakari-engine and masakari-api processes.
>
> I have git cloned masakari-monitors and started the monitors up (roughly) following the instructions at https://github.com/openstack/masakari-monitors . Specifically:
>
> # install & startup monitors
> cd
> git clone https://github.com/openstack/masakari-monitors.git
> cd masakari-monitors
> sudo python setup.py install
> cd
> sudo mkdir /etc/masakarimonitors
> sudo cp ~/masakari-monitors/etc/masakarimonitors/hostmonitor.conf.sample /etc/masakarimonitors/hostmonitor.conf
> sudo cp ~/masakari-monitors/etc/masakarimonitors/processmonitor.conf.sample /etc/masakarimonitors/processmonitor.conf
> sudo cp ~/masakari-monitors/etc/masakarimonitors/proc.list.sample /etc/masakarimonitors/proc.list
> cd ~/masakari-monitors/masakarimonitors/cmd
> sudo masakari-processmonitor.sh /etc/masakarimonitors/processmonitor.conf /etc/masakarimonitors/proc.list &
> sudo masakari-hostmonitor.sh /etc/masakarimonitors/hostmonitor.conf &
> sudo /usr/bin/python ./instancemonitor.py &
>
> However, instancemonitor.py starts and exits ... and does not appear to start any process(es) ... with no error messages and no log file.
>
> Is this the correct way to start up masakari instance monitoring?
>
> Greg.
>
> My Masakari setup in Devstack:
>
> sudo useradd -s /bin/bash -d /opt/stack -m stack
> echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
> sudo su - stack
> git clone https://github.com/openstack-dev/devstack
> cd devstack
>
> local.conf file:
> [[local|localrc]]
> ADMIN_PASSWORD=admin
> DATABASE_PASSWORD=admin
> RABBIT_PASSWORD=admin
> SERVICE_PASSWORD=admin
> # setup Neutron services
> disable_service n-net
> enable_service q-svc
> enable_service q-agt
> enable_service q-dhcp
> enable_service q-l3
> enable_service q-meta
> # ceilometer
> enable_plugin ceilometer https://git.openstack.org/openstack/ceilometer
> enable_plugin aodh https://git.openstack.org/openstack/aodh
> # heat
> enable_plugin heat https://git.openstack.org/openstack/heat
> # vitrage
> enable_plugin vitrage https://git.openstack.org/openstack/vitrage
> enable_plugin vitrage-dashboard https://git.openstack.org/openstack/vitrage-dashboard
> # masakari
> enable_plugin masakari git://git.openstack.org/openstack/masakari
>
> ./stack.sh

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
[openstack-dev] [masakari]I submitted a patch that fixes py27 unit tests of Masakari.
Hello,

I submitted a patch that fixes the py27 unit tests of Masakari:
https://review.openstack.org/#/c/516517/

This is the 2nd solution, which we discussed in today's IRC meeting[1]:
http://eavesdrop.openstack.org/meetings/masakari/2017/masakari.2017-10-31-04.00.log.html#l-54

Please check it.

[1] The 1st solution is this: https://review.openstack.org/#/c/513520/

Best Regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [masakari]py35 unit tests are failed
Hello,

I understood why the py35 unit tests were failing. The 'message' attribute of exceptions was deprecated and has been removed in Python 3.[1] But some parts of masakari's code use the message attribute.[2] So some unit tests fail on py35.

I'm trying to fix this with the patch below:

* Stop using deprecated 'message' attribute in Exception
  https://review.openstack.org/#/c/486576/

[1] https://www.python.org/dev/peps/pep-0352/
[2] For example... https://github.com/openstack/masakari/blob/master/masakari/db/sqlalchemy/api.py#L231

On 2017/10/27 15:47, Rikimaru Honjo wrote:
> Hello,
>
> The py35 unit tests of masakari fail with the same errors on gerrit. e.g.
>
> * https://review.openstack.org/#/c/441796/
>   => http://logs.openstack.org/96/441796/3/check/openstack-tox-py35/d958f3f/testr_results.html.gz
> * https://review.openstack.org/#/c/509782/
>   => http://logs.openstack.org/82/509782/23/check/openstack-tox-py35/9612809/testr_results.html.gz
>
> It seems to have been caused by sqlalchemy, but I haven't analyzed it enough yet. Please reply on this ML or submit a patch if you can solve it.
>
> Best regards,

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
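[Editor's note] The incompatibility is easy to demonstrate: Python 2 gave BaseException a `message` attribute (deprecated since 2.6 per PEP 352), while on Python 3 it is simply gone, so code must use `str(e)` or `e.args` instead. A minimal illustration (plain Python, not masakari's actual code):

```python
def exception_text(exc):
    """Portable replacement for the removed 'message' attribute."""
    # On Python 3, BaseException has no 'message'; str(exc) works everywhere.
    return str(exc)

try:
    raise ValueError("db row not found")
except ValueError as e:
    assert not hasattr(e, "message")               # removed in Python 3
    assert exception_text(e) == "db row not found" # portable spelling
    assert e.args == ("db row not found",)         # raw constructor args
```

This is why code such as `LOG.error(e.message)` raises AttributeError on py35 while passing on py27.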
Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues
Hello,

(Can I still use this thread?)

Excuse me, I'm trying to run Zuul v3 in my environment, and I have three questions about it. I'd appreciate it if anyone can help.

My environment: I use the feature/zuulv3 branch, and the version is 2.5.3.dev1374.

Q1) An "Unknown option --die-with-parent" error occurred when zuul ran a job. Is there a bubblewrap version requirement? I used bubblewrap 0.1.7-1~16.04~ansible. If I removed "--die-with-parent" from zuul/driver/bubblewrap/__init__.py, the error no longer occurred.

Q2) When I used "zuul_return" in a playbook, the below error occurred on the remote host:
KeyError: 'ZUUL_JOBDIR'
Should I write a playbook that sets the environment variable "ZUUL_JOBDIR"?

Q3) The ansible setup module took a long time when zuul ran jobs. My job succeeded if I extended the timeout from 60 to 120 by modifying runAnsibleSetup() in zuul/executor/server.py. But if I ran the same job directly (on my own), it finished quickly. Do you have any knowledge about this?

P.S. Is there a pre-built VM image or ansible playbook for running zuul v3...?

Best regards,

On 2017/09/29 23:58, Monty Taylor wrote:
> Hey everybody!
>
> tl;dr - If you're having issues with your jobs, check the FAQ, this email and followups on this thread for mentions of them. If it's an issue with your job and you can spot it (bad config) just submit a patch with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like to ask that you send a follow up email to this thread so that we can ensure we've got them all and so that others can see it too.
>
> ** Zuul v3 Migration Status **
>
> If you haven't noticed the Zuul v3 migration - awesome, that means it's working perfectly for you. If you have - sorry for the disruption. It turns out we have a REALLY complicated array of job content you've all created. Hopefully the pain of the moment will be offset by the ability for you to all take direct ownership of your awesome content... so bear with us, your patience is appreciated.
>
> If you find yourself with some extra time on your hands while you wait on something, you may find it helpful to read:
> https://docs.openstack.org/infra/manual/zuulv3.html
>
> We're adding content to it as issues arise. Unfortunately, one of the issues is that the infra manual publication job stopped working. While the infra manual publication is being fixed, we're collecting FAQ content for it in an etherpad:
> https://etherpad.openstack.org/p/zuulv3-migration-faq
>
> If you have a job issue, check it first to see if we've got an entry for it. Once manual publication is fixed, we'll update the etherpad to point to the FAQ section of the manual.
>
> ** Global Issues **
>
> There are a number of outstanding issues that are being worked. As of right now, there are a few major/systemic ones that we're looking in to that are worth noting:
>
> * Zuul Stalls
> If you say to yourself "zuul doesn't seem to be doing anything, did I do something wrong?", we're having an issue that jeblair and Shrews are currently tracking down with intermittent connection issues in the backend plumbing. When it happens it's an across the board issue, so fixing it is our number one priority.
>
> * Incorrect node type
> We've got reports of things running on trusty that should be running on xenial. The job definitions look correct, so this is also under investigation.
>
> * Multinode jobs having POST FAILURE
> There is a bug in the log collection trying to collect from all nodes while the old jobs were designed to only collect from the 'primary'. Patches are up to fix this and should be fixed soon.
>
> * Branch Exclusions being ignored
> This has been reported and its cause is currently unknown.
>
> Thank you all again for your patience! This is a giant rollout with a bunch of changes in it, so we really do appreciate everyone's understanding as we work through it all.
>
> Monty

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
[openstack-dev] [masakari]py35 unit tests are failed
Hello,

The py35 unit tests of masakari fail with the same errors on gerrit. e.g.

* https://review.openstack.org/#/c/441796/
  => http://logs.openstack.org/96/441796/3/check/openstack-tox-py35/d958f3f/testr_results.html.gz
* https://review.openstack.org/#/c/509782/
  => http://logs.openstack.org/82/509782/23/check/openstack-tox-py35/9612809/testr_results.html.gz

It seems to have been caused by sqlalchemy, but I haven't analyzed it enough yet. Please reply on this ML or submit a patch if you can solve it.

Best regards,
--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Re: [openstack-dev] [masakari]oslo.context 2.19.1 is currently blocked
Hello Ben,

On 2017/10/18 0:05, Ben Nemec wrote:
> Can you add masakari to https://bugs.launchpad.net/oslo.context/+bug/1721432 so we don't miss it in the process of fixing this problem? At some point we will be releasing a new oslo.context and we need to make sure we get all the broken projects fixed before then.

OK. I added masakari to the bug report. Thanks.

> -Ben
>
> On 10/17/2017 04:18 AM, Rikimaru Honjo wrote:
>> Hello Masakari contributors,
>>
>> I submitted a patch which fixes the masakari UT code for oslo.context 2.19.1[1], because oslo.context 2.19.1 adds a 'project' key in the context's to_dict function.[2]
>>
>> But I realized that the latest global-requirements blocks oslo.context 2.19.1.[3] So I abandoned my patch because it is unnecessary. I think we can merge the "Updated from global requirements" patch now.
>>
>> [1] Fix UT codes according to updating oslo.context
>>     https://review.openstack.org/#/c/510760/
>> [2] Output 'project' key in context's to_dict function
>>     https://review.openstack.org/#/c/507444/
>> [3] Block oslo.context 2.19.1
>>     https://review.openstack.org/#/c/510857/

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
★ Our company name and e-mail addresses have changed.
NTT TechnoCross Corporation
Cloud & Security Division, Second Business Unit (CS2BU)
Rikimaru Honjo
TEL: 045-212-7539
E-mail: honjo.rikim...@po.ntt-tx.co.jp
Yokohama i-Mark Place 13F, 4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012
[openstack-dev] [masakari]oslo.context 2.19.1 is currently blocked
Hello Masakari contributors,

I submitted a patch which fixes the masakari UT code for oslo.context 2.19.1[1], because oslo.context 2.19.1 adds a 'project' key in the context's to_dict function.[2]

But I realized that the latest global-requirements blocks oslo.context 2.19.1.[3] So I abandoned my patch because it is unnecessary. I think we can merge the "Updated from global requirements" patch now.

[1] Fix UT codes according to updating oslo.context
    https://review.openstack.org/#/c/510760/
[2] Output 'project' key in context's to_dict function
    https://review.openstack.org/#/c/507444/
[3] Block oslo.context 2.19.1
    https://review.openstack.org/#/c/510857/

--
_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
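[Editor's note] The failure mode here is the classic exact-dict-equality pitfall in unit tests: when a library's to_dict() starts emitting one extra key, every assertEqual against a hand-written expected dict breaks, even though nothing the test cared about changed. A toy illustration (the key names below are made up for the sketch, not oslo.context's real output):

```python
# Old library behavior: to_dict() returned exactly these keys.
old_ctx = {"user": "u1", "tenant": "t1"}
# New library behavior (illustrative): an extra 'project' key appears.
new_ctx = {"user": "u1", "tenant": "t1", "project": "t1"}

expected = {"user": "u1", "tenant": "t1"}  # what the old UT asserted

assert expected == old_ctx   # passed before the upgrade
assert expected != new_ctx   # an assertEqual here fails after it

# A subset check survives additive changes in the library's output:
assert expected.items() <= new_ctx.items()
```

Asserting only on the keys a test actually cares about (the subset check) makes such tests robust against additive library changes.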
Re: [openstack-dev] [masakari]Propose changes of the core team
Oops, sorry, my previous mail didn't contain the "[masakari]" tag in the title. On 2017/10/16 11:34, Rikimaru Honjo wrote: Hi, On 2017/10/13 18:39, Sam P wrote: Hi All Masakari Cores, I would like to propose the following changes to the Masakari core team. Current core team: Masahito Muroi Rikimaru Honjo Sampath Priyankara (samP) Takashi Kajinami Toshikazu Ichikawa Tushar Patil Yukinori Sagara (A) Proposed to remove from the core team: (1) Toshikazu Ichikawa He was one of the initial members of the project and did great work on designing the initial Masakari API and Masakari architecture. However, he is no longer an active member of the community. I would like to take this opportunity to thank Toshikazu for his work on Masakari. I will vote +1 if Toshikazu agrees with this proposal. (B) Confirm your availability as a core member: The following members, please confirm your ability to contribute to Masakari in Queens and future cycles. (1) Takashi Kajinami (2) Masahito Muroi I understand that you are extremely busy with other tasks or other projects in OpenStack. If it is difficult for you to contribute to Masakari, I suggest that you step down from the core team for now. In future, if you wish to participate again, then we can discuss reinstating you as a core member of the team. (C) Add new members to the core team: (1) Adam Spiers (Suse) I would like to add Adam to the core team. He is the current maintainer of openstack-resource-agents and leader of the OpenStack HA team. Considering his technical knowledge of the subject, and past work he has done in Masakari and related work[1][2], I think Adam is one of the best persons we can have on our team. (2) Kengo Takahara (NTT-TX) Kengo has been an active contributor to the Masakari project since Newton and has contributed heavily to creating masakari-monitors and python-masakariclient from scratch [3]. I vote +1 for each person, because I know their achievements. All Masakari core members, please respond with your comments and objections. 
Please cast your vote on (A) and (C). [1] https://review.openstack.org/#/q/project:openstack/openstack-resource-agents-specs [2] https://etherpad.openstack.org/p/newton-instance-ha [3] http://stackalytics.com/?project_type=all&release=all&metric=commits&user_id=takahara.kengo@as.ntts.co.jp --- Regards, Sampath
Re: [openstack-dev] Propose changes of the core team
Hi, On 2017/10/13 18:39, Sam P wrote: Hi All Masakari Cores, I would like to propose the following changes to the Masakari core team. Current core team: Masahito Muroi Rikimaru Honjo Sampath Priyankara (samP) Takashi Kajinami Toshikazu Ichikawa Tushar Patil Yukinori Sagara (A) Proposed to remove from the core team: (1) Toshikazu Ichikawa He was one of the initial members of the project and did great work on designing the initial Masakari API and Masakari architecture. However, he is no longer an active member of the community. I would like to take this opportunity to thank Toshikazu for his work on Masakari. I will vote +1 if Toshikazu agrees with this proposal. (B) Confirm your availability as a core member: The following members, please confirm your ability to contribute to Masakari in Queens and future cycles. (1) Takashi Kajinami (2) Masahito Muroi I understand that you are extremely busy with other tasks or other projects in OpenStack. If it is difficult for you to contribute to Masakari, I suggest that you step down from the core team for now. In future, if you wish to participate again, then we can discuss reinstating you as a core member of the team. (C) Add new members to the core team: (1) Adam Spiers (Suse) I would like to add Adam to the core team. He is the current maintainer of openstack-resource-agents and leader of the OpenStack HA team. Considering his technical knowledge of the subject, and past work he has done in Masakari and related work[1][2], I think Adam is one of the best persons we can have on our team. (2) Kengo Takahara (NTT-TX) Kengo has been an active contributor to the Masakari project since Newton and has contributed heavily to creating masakari-monitors and python-masakariclient from scratch [3]. I vote +1 for each person, because I know their achievements. All Masakari core members, please respond with your comments and objections. Please cast your vote on (A) and (C). 
[1] https://review.openstack.org/#/q/project:openstack/openstack-resource-agents-specs [2] https://etherpad.openstack.org/p/newton-instance-ha [3] http://stackalytics.com/?project_type=all&release=all&metric=commits&user_id=takahara.kengo@as.ntts.co.jp --- Regards, Sampath
Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues
Hi David, Jeremy, Thank you for replying! (And sorry for sending a mail about nodepool to this email thread.) Your advice is very helpful. I will retry configuring nodepool according to your advice. Best regards, On 2017/10/11 22:38, Jeremy Stanley wrote: On 2017-10-11 08:13:44 -0400 (-0400), David Shrewsbury wrote: [...] On Wed, Oct 11, 2017 at 1:31 AM, Rikimaru Honjo wrote: [...] 1) Is there information about the configuration differences between nodepool for Zuul v2 and v3? Or can I configure feature/zuulv3 basically the same as the lower version? We don't document the differences between versions, but all of the v3 config options are documented in the nodepool docs (you can generate them from the source repo with the command: tox -e docs). [...] It also bears mentioning that Zuul v3 has not officially reached release yet, and the plan is to work on documentation for CI operators upgrading from v2 to v3 after we've been able to successfully use v3 ourselves upstream. The unfortunate lack of documentation around migration is still expected at this stage. -- Rikimaru Honjo
Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues
Hello, I'm trying to install & configure nodepool for Zuul v3 in my CI environment now. I use the feature/zuulv3 branch. (ver. 0.4.1.dev430) I referred to the nodepool documents and the infra/project-config tree. And I have some questions about this version of nodepool. 1) Is there information about the configuration differences between nodepool for Zuul v2 and v3? Or can I configure feature/zuulv3 basically the same as the lower version? 2) The suggestion below is written in README.rst, but no such file is contained in the infra/system-config tree now. Where is the file? Create or adapt a nodepool yaml file. You can adapt an infra/system-config one, or fake.yaml as desired. Note that fake.yaml's settings won't Just Work - consult ./modules/openstack_project/templates/nodepool/nodepool.yaml.erb in the infra/system-config tree to see a production config. 3) Can I use the "images" key in "providers"? I used "images" in nodepool ver. 0.3.1, but the sample file below doesn't use the key. https://github.com/openstack-infra/nodepool/blob/feature/zuulv3/tools/fake.yaml Best regards, On 2017/09/29 23:58, Monty Taylor wrote: Hey everybody! tl;dr - If you're having issues with your jobs, check the FAQ, this email and followups on this thread for mentions of them. If it's an issue with your job and you can spot it (bad config) just submit a patch with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like to ask that you send a follow up email to this thread so that we can ensure we've got them all and so that others can see it too. ** Zuul v3 Migration Status ** If you haven't noticed the Zuul v3 migration - awesome, that means it's working perfectly for you. If you have - sorry for the disruption. It turns out we have a REALLY complicated array of job content you've all created. Hopefully the pain of the moment will be offset by the ability for you to all take direct ownership of your awesome content... so bear with us, your patience is appreciated. 
If you find yourself with some extra time on your hands while you wait on something, you may find it helpful to read: https://docs.openstack.org/infra/manual/zuulv3.html We're adding content to it as issues arise. Unfortunately, one of the issues is that the infra manual publication job stopped working. While the infra manual publication is being fixed, we're collecting FAQ content for it in an etherpad: https://etherpad.openstack.org/p/zuulv3-migration-faq If you have a job issue, check it first to see if we've got an entry for it. Once manual publication is fixed, we'll update the etherpad to point to the FAQ section of the manual. ** Global Issues ** There are a number of outstanding issues that are being worked. As of right now, there are a few major/systemic ones that we're looking in to that are worth noting: * Zuul Stalls If you say to yourself "zuul doesn't seem to be doing anything, did I do something wrong?", we're having an issue that jeblair and Shrews are currently tracking down with intermittent connection issues in the backend plumbing. When it happens it's an across the board issue, so fixing it is our number one priority. * Incorrect node type We've got reports of things running on trusty that should be running on xenial. The job definitions look correct, so this is also under investigation. * Multinode jobs having POST FAILURE There is a bug in the log collection trying to collect from all nodes while the old jobs were designed to only collect from the 'primary'. Patches are up to fix this and should be fixed soon. * Branch Exclusions being ignored This has been reported and its cause is currently unknown. Thank you all again for your patience! This is a giant rollout with a bunch of changes in it, so we really do appreciate everyone's understanding as we work through it all. 
Monty
Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues
Hi Paul, Thank you for replying! After all, should I install Nodepool and run my job with it? If so, I can do it. On 2017/10/04 21:43, Paul Belanger wrote: On Wed, Oct 04, 2017 at 02:39:17PM +0900, Rikimaru Honjo wrote: Hello, I'm trying to run jobs with Zuul v3 in my local environment.[1] I prepared a sample job that runs the sleep command on zuul's host. This job doesn't use Nodepool.[2] As a result, Zuul v3 submitted "SUCCESS" to gerrit when a gerrit event occurred. But error logs were generated and my job was not run. I'd appreciate it if you could help me. (Should I write this topic on the Zuul Storyboard?) [1] I use Ubuntu 16.04 and zuul==2.5.3.dev1374. [2] In my understanding, I can use Zuul v3 without Nodepool. https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#attr-job.nodeset If a job has an empty or no nodeset definition, it will still run and may be able to perform actions on the Zuul executor. While this is true, at this time it has limited testing and I'm not sure I would be writing job content to leverage this too much. Right now, we are only using it to trigger RTFD hooks in openstack-infra. Zuulv3 is really meant to be used with nodepool, much tighter now than before. We do have plans to support static nodes in zuulv3, but work on that hasn't finished. [Conditions] * Target project is defined as a config-project in the tenant configuration file. * I didn't write a nodeset in .zuul.yaml, because my job doesn't use Nodepool. * I configured the playbooks' hosts as "- hosts: all" or "- hosts: localhost". (I referred to the project-config repository.) [Error logs] "no hosts matched" or "list index out of range" were generated. Please see the attached file. On 2017/09/29 23:58, Monty Taylor wrote: Hey everybody! tl;dr - If you're having issues with your jobs, check the FAQ, this email and followups on this thread for mentions of them. If it's an issue with your job and you can spot it (bad config) just submit a patch with topic 'zuulv3'. 
If it's bigger/weirder/you don't know - we'd like to ask that you send a follow up email to this thread so that we can ensure we've got them all and so that others can see it too. ** Zuul v3 Migration Status ** If you haven't noticed the Zuul v3 migration - awesome, that means it's working perfectly for you. If you have - sorry for the disruption. It turns out we have a REALLY complicated array of job content you've all created. Hopefully the pain of the moment will be offset by the ability for you to all take direct ownership of your awesome content... so bear with us, your patience is appreciated. If you find yourself with some extra time on your hands while you wait on something, you may find it helpful to read: https://docs.openstack.org/infra/manual/zuulv3.html We're adding content to it as issues arise. Unfortunately, one of the issues is that the infra manual publication job stopped working. While the infra manual publication is being fixed, we're collecting FAQ content for it in an etherpad: https://etherpad.openstack.org/p/zuulv3-migration-faq If you have a job issue, check it first to see if we've got an entry for it. Once manual publication is fixed, we'll update the etherpad to point to the FAQ section of the manual. ** Global Issues ** There are a number of outstanding issues that are being worked. As of right now, there are a few major/systemic ones that we're looking in to that are worth noting: * Zuul Stalls If you say to yourself "zuul doesn't seem to be doing anything, did I do something wrong?", we're having an issue that jeblair and Shrews are currently tracking down with intermittent connection issues in the backend plumbing. When it happens it's an across the board issue, so fixing it is our number one priority. * Incorrect node type We've got reports of things running on trusty that should be running on xenial. The job definitions look correct, so this is also under investigation. 
* Multinode jobs having POST FAILURE There is a bug in the log collection trying to collect from all nodes while the old jobs were designed to only collect from the 'primary'. Patches are up to fix this and should be fixed soon. * Branch Exclusions being ignored This has been reported and its cause is currently unknown. Thank you all again for your patience! This is a giant rollout with a bunch of changes in it, so we really do appreciate everyone's understanding as we work through it all. Monty
Re: [openstack-dev] Update on Zuul v3 Migration - and what to do about issues
Hello, I'm trying to run jobs with Zuul v3 in my local environment.[1] I prepared a sample job that runs the sleep command on zuul's host. This job doesn't use Nodepool.[2] As a result, Zuul v3 submitted "SUCCESS" to gerrit when a gerrit event occurred. But error logs were generated and my job was not run. I'd appreciate it if you could help me. (Should I write this topic on the Zuul Storyboard?) [1] I use Ubuntu 16.04 and zuul==2.5.3.dev1374. [2] In my understanding, I can use Zuul v3 without Nodepool. https://docs.openstack.org/infra/zuul/feature/zuulv3/user/config.html#attr-job.nodeset If a job has an empty or no nodeset definition, it will still run and may be able to perform actions on the Zuul executor. [Conditions] * Target project is defined as a config-project in the tenant configuration file. * I didn't write a nodeset in .zuul.yaml, because my job doesn't use Nodepool. * I configured the playbooks' hosts as "- hosts: all" or "- hosts: localhost". (I referred to the project-config repository.) [Error logs] "no hosts matched" or "list index out of range" were generated. Please see the attached file. On 2017/09/29 23:58, Monty Taylor wrote: Hey everybody! tl;dr - If you're having issues with your jobs, check the FAQ, this email and followups on this thread for mentions of them. If it's an issue with your job and you can spot it (bad config) just submit a patch with topic 'zuulv3'. If it's bigger/weirder/you don't know - we'd like to ask that you send a follow up email to this thread so that we can ensure we've got them all and so that others can see it too. ** Zuul v3 Migration Status ** If you haven't noticed the Zuul v3 migration - awesome, that means it's working perfectly for you. If you have - sorry for the disruption. It turns out we have a REALLY complicated array of job content you've all created. Hopefully the pain of the moment will be offset by the ability for you to all take direct ownership of your awesome content... so bear with us, your patience is appreciated. 
If you find yourself with some extra time on your hands while you wait on something, you may find it helpful to read: https://docs.openstack.org/infra/manual/zuulv3.html We're adding content to it as issues arise. Unfortunately, one of the issues is that the infra manual publication job stopped working. While the infra manual publication is being fixed, we're collecting FAQ content for it in an etherpad: https://etherpad.openstack.org/p/zuulv3-migration-faq If you have a job issue, check it first to see if we've got an entry for it. Once manual publication is fixed, we'll update the etherpad to point to the FAQ section of the manual. ** Global Issues ** There are a number of outstanding issues that are being worked. As of right now, there are a few major/systemic ones that we're looking in to that are worth noting: * Zuul Stalls If you say to yourself "zuul doesn't seem to be doing anything, did I do something wrong?", we're having an issue that jeblair and Shrews are currently tracking down with intermittent connection issues in the backend plumbing. When it happens it's an across the board issue, so fixing it is our number one priority. * Incorrect node type We've got reports of things running on trusty that should be running on xenial. The job definitions look correct, so this is also under investigation. * Multinode jobs having POST FAILURE There is a bug in the log collection trying to collect from all nodes while the old jobs were designed to only collect from the 'primary'. Patches are up to fix this and should be fixed soon. * Branch Exclusions being ignored This has been reported and its cause is currently unknown. Thank you all again for your patience! This is a giant rollout with a bunch of changes in it, so we really do appreciate everyone's understanding as we work through it all. 
Monty -- Rikimaru Honjo Case 1) I configured the playbooks' hosts as "- hosts: all". 2017-09-29 16:18:40,247 DEBUG zuul.AnsibleJob: [build: e56656cd5d1444619c01755e6f858be0] Writing logging config for job /tmp/e56656cd5d1444619c01755e6f858be0/work/logs/job-output.txt /tmp/e56656cd5d1444619c01755e6f858be0/ansible/logging.json 2017-09-29 16:18:40,249 DEBUG zuul.BubblewrapExecutionContext: Bubblewrap command: bwrap --dir /tmp --tmpfs /tmp --dir /var --dir /var/tmp --dir /run/user/1000 --ro-bind /usr /usr --ro
Re: [openstack-dev] May I run iscsiadm --op show & update 100 times?
Hello Gorka, On 2017/10/02 20:37, Gorka Eguileor wrote: On 02/10, Rikimaru Honjo wrote: Hello, I'd like to discuss the following bug of os-brick. * os-brick's iscsi initiator unexpectedly reverts node.startup from "automatic" to "manual". https://bugs.launchpad.net/os-brick/+bug/1670237 The important point of this bug is: When os-brick initializes iscsi connections: 1. os-brick will run the "iscsiadm -m discovery" command if we use iscsi multipath. This only happens with a small number of cinder drivers, since most drivers try to avoid the discovery path due to the number of disadvantages it presents for a reliable deployment. The most notorious issue is that if the path to the discovery portal on the attaching node is down, you cannot attach the volume no matter how many of the other paths are up. 2. os-brick will update node.startup values to "automatic" if we use iscsi. 3. The "iscsiadm -m discovery" command will recreate the iscsi node repositories.[1] As a result, node.startup values of already attached volumes will revert to the default (=manual). Gorka Eguileor and I discussed how to fix this bug.[2] Our idea is this: 1. Confirm the node.startup values of all the iscsi targets before running discovery. 2. Re-update the node.startup values of all the iscsi targets after running discovery. But I'm afraid that this operation will take a long time. I ran showing & updating node.startup values 100 times for research. As a result, it took about 4 seconds. When I ran it 200 times, it took about 8 seconds. I think this is a little long. If we use multipath and attach 25 volumes, 100 targets will be created. I think that updating 100 times is a possible use case. What do you think about it? Can I implement the above idea? The approach I proposed on the review is valid; the flaw is in the specific implementation: you are doing 100 requests where 4 would suffice. 
You don't need to do a request for each target-portal tuple; you only need to do 1 request per portal, which reduces the number of calls to iscsiadm from 100 to 4 in the case you mention. You can check all targets for an IP with: iscsiadm -m node -p IP This means that the performance hit from having 100 or 200 targets should be negligible. I have one question. I can see node.startup values with 1 request per portal as you said. But may I update values with 1 request per portal? Updating values has been done with 1 request per target until now. So I think my patch should update values in the same way (=1 request per target). Cheers, Gorka. [1] This is the correct behavior of iscsiadm. https://github.com/open-iscsi/open-iscsi/issues/58#issuecomment-325528315 [2] https://bugs.launchpad.net/os-brick/+bug/1670237 -- Rikimaru Honjo
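Gorka's suggestion can be sketched as follows: a hypothetical, stubbed-out illustration (not actual os-brick code; command execution is omitted and the target/portal names are made up) of collapsing per-target iscsiadm show calls into one call per portal:

```python
# Sketch of reducing iscsiadm calls: build the command lists only,
# without executing anything. Targets are (iqn, portal) tuples.

def show_commands_per_target(targets):
    # naive approach: one "iscsiadm -m node -T <iqn> -p <portal>" per target
    return [["iscsiadm", "-m", "node", "-T", iqn, "-p", portal]
            for iqn, portal in targets]

def show_commands_per_portal(targets):
    # improved approach: one "iscsiadm -m node -p <portal>" per unique portal
    portals = sorted({portal for _iqn, portal in targets})
    return [["iscsiadm", "-m", "node", "-p", p] for p in portals]

# 100 targets spread over 4 portals (the case discussed above)
targets = [("iqn.2004-04.example:tgt%d" % i, "10.0.0.%d:3260" % (i % 4))
           for i in range(100)]

assert len(show_commands_per_target(targets)) == 100
assert len(show_commands_per_portal(targets)) == 4
```

Whether the update side can be batched the same way is exactly the open question in this mail; the sketch only covers the show side that Gorka described.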
[openstack-dev] [cinder][nova]May I run iscsiadm --op show & update 100 times?
Hello, I'd like to discuss the following bug of os-brick. * os-brick's iscsi initiator unexpectedly reverts node.startup from "automatic" to "manual". https://bugs.launchpad.net/os-brick/+bug/1670237 The important point of this bug is: When os-brick initializes iscsi connections: 1. os-brick will run the "iscsiadm -m discovery" command if we use iscsi multipath. 2. os-brick will update node.startup values to "automatic" if we use iscsi. 3. The "iscsiadm -m discovery" command will recreate the iscsi node repositories.[1] As a result, node.startup values of already attached volumes will revert to the default (=manual). Gorka Eguileor and I discussed how to fix this bug.[2] Our idea is this: 1. Confirm the node.startup values of all the iscsi targets before running discovery. 2. Re-update the node.startup values of all the iscsi targets after running discovery. But I'm afraid that this operation will take a long time. I ran showing & updating node.startup values 100 times for research. As a result, it took about 4 seconds. When I ran it 200 times, it took about 8 seconds. I think this is a little long. If we use multipath and attach 25 volumes, 100 targets will be created. I think that updating 100 times is a possible use case. What do you think about it? Can I implement the above idea? [1] This is the correct behavior of iscsiadm. https://github.com/open-iscsi/open-iscsi/issues/58#issuecomment-325528315 [2] https://bugs.launchpad.net/os-brick/+bug/1670237 -- Rikimaru Honjo
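The numbers above can be checked back-of-the-envelope style; the ~40 ms per iscsiadm invocation is an assumption derived from "100 calls took about 4 seconds", not a measured constant:

```python
# Rough arithmetic behind the concern: targets scale as volumes x paths,
# and each extra target costs one more iscsiadm call in the naive approach.
volumes = 25
paths_per_volume = 4          # iscsi multipath
seconds_per_call = 4.0 / 100  # assumed ~40 ms, from the measurement above

targets = volumes * paths_per_volume
assert targets == 100

estimated_seconds = targets * seconds_per_call
assert abs(estimated_seconds - 4.0) < 1e-9
```

With the per-portal approach suggested in the bug discussion, the call count depends on the number of portals rather than targets, so the estimate stays flat as volumes grow.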
[openstack-dev] [masakari]I withdraw my remark from today's IRC meeting.
Hello, In today's Masakari IRC meeting, I said that executing the masakari notification API failed in the latest devstack environment.[1] I rebuilt my environment after that. As a result, I couldn't reproduce the issue. So I won't report it to launchpad. Sorry. [1] http://eavesdrop.openstack.org/meetings/masakari/2017/masakari.2017-08-15-04.00.log.html#l-132 Best regards, -- Rikimaru Honjo
Re: [openstack-dev] [masakari] Make 'error' instances recovery configurable
Hi Dinesh and Sampath, On 2017/08/15 1:50, Sam P wrote: Hi Dinesh and Rikimaru, It seems that Dinesh[1] and Rikimaru[2] pushed patches to fix the same issue. Thank you for your effort. Please discuss and merge them into one patch. [1] https://review.openstack.org/#/c/493534/ [2] https://review.openstack.org/#/c/493476/ The implementations are slightly different between my patch and Dinesh's patch. My patch: Remove error instances just before evacuating. Dinesh's patch: Remove error instances while creating the instance list. I think that both patches will work, but I think Dinesh's patch is better than mine. It makes the code easier to understand because evacuate_all_instances works at the same point. I leave the decision to the other reviewers; I don't insist on my patch. --- Regards, Sampath -- Rikimaru Honjo
[openstack-dev] [masakari]I'd like to use identity internal endpoint for authentication.
Hi all, I'd like to use the identity internal endpoint for authentication in Masakari. But I couldn't find the correct configuration; Masakari has used the admin endpoint. My wish: [user] -> [masakari-API] -> [Identity internal endpoint] -> [Keystone] Actual: [user] -> [masakari-API] -> [Identity admin endpoint] -> [Keystone] Do you know about this? Best regards, -- Rikimaru Honjo
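Conceptually, the change being asked for is just which interface is chosen from the Keystone service catalog. A hypothetical, simplified sketch (not masakari code; the URLs and the catalog layout are illustrative, and in a real deployment this would be driven by keystoneauth's interface/endpoint-filter options):

```python
# Toy service catalog keyed by service type and interface; real catalogs
# come from Keystone and have richer structure.
catalog = {
    "identity": {
        "admin": "http://keystone:35357/v3",
        "internal": "http://keystone:5000/v3",
        "public": "https://keystone.example.com:5000/v3",
    }
}

def endpoint_for(catalog, service, interface="internal"):
    # select the endpoint URL for the requested interface
    return catalog[service][interface]

# The wish above: resolve the internal endpoint instead of the admin one.
assert endpoint_for(catalog, "identity") == "http://keystone:5000/v3"
assert endpoint_for(catalog, "identity", "admin") == "http://keystone:35357/v3"
```

If Masakari hardcodes or defaults to the admin interface, exposing the interface choice as a config option would cover this use case.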
Re: [openstack-dev] [masakari]Where is the masakari-monitors spec repository?
Hi Sampath, Thank you for the suggestion! I will use the masakari-spec repo! On 2017/07/14 19:11, Sam P wrote: Hi Honjo, There are no dedicated spec repositories for masakari-monitors and python-masakariclient. Please use the masakari-spec repository[1] for spec discussion for those 2 projects. [1] https://review.openstack.org/#/q/project:openstack/masakari-specs --- Regards, Sampath On Fri, Jul 14, 2017 at 4:53 PM, Rikimaru Honjo wrote: Hi all, I want to push a new spec document for masakari-monitors. But there is no masakari-monitors-spec repository. Can I push it to the masakari-spec repository? Best regards, -- Rikimaru Honjo
[openstack-dev] [masakari]Where is the masakari-monitors spec repository?
Hi all, I want to push a new spec document for masakari-monitors. But there is no masakari-monitors-spec repository. Can I push it to the masakari-spec repository? Best regards, -- Rikimaru Honjo
[openstack-dev] [masakari]Remove ERROR instances from recovery targets
Hello all, Current Masakari also rescues ERROR instances when a host failure happens. Those instances will be changed to ACTIVE after being rescued.[1] But I think that some users don't want to rescue ERROR instances. For example, if a user is running a 1ACT/nSBY application on instances, launching ERROR instances will cause unexpected effects. So I want to add a configurable option: ERROR instances won't be rescued if the option is set. Please share your opinion about this issue. P.S. I talked about this issue in the IRC meeting. http://eavesdrop.openstack.org/meetings/masakari/2017/masakari.2017-07-11-04.00.log.html But time was up at that time. [1] This is the Evacuate API's behavior. [2] There is a possibility that the following patch resolves this issue, but that will take time. https://review.openstack.org/#/c/469029/ Best regards, -- Rikimaru Honjo
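The proposed option could look roughly like this. A hypothetical sketch (not actual masakari code; the option name, field names, and instance records are illustrative) of skipping ERROR instances when building the evacuation list:

```python
# Sketch of the configurable behavior: with the option on, instances in
# ERROR state are excluded from the host-failure recovery targets.

def instances_to_rescue(instances, ignore_error_instances=True):
    if not ignore_error_instances:
        return list(instances)
    return [i for i in instances if i["vm_state"] != "error"]

instances = [
    {"id": "a", "vm_state": "active"},
    {"id": "b", "vm_state": "error"},
    {"id": "c", "vm_state": "stopped"},
]

# Option set (proposed default here is arbitrary): ERROR instance skipped.
assert [i["id"] for i in instances_to_rescue(instances)] == ["a", "c"]

# Option unset: current behavior, all instances are rescued.
assert len(instances_to_rescue(instances, ignore_error_instances=False)) == 3
```

Filtering while creating the instance list (Dinesh's approach in the later thread) versus just before evacuating (my patch) would both reduce to a predicate like this; only the call site differs.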
Re: [openstack-dev] upper-constraints.txt is missing
On 2017/06/23 16:17, Andreas Jaeger wrote:
> On 2017-06-23 08:05, Rikimaru Honjo wrote:
>> Hi,
>>
>> I ran "tox -epy27" in the nova repository just now, and the following
>> error message was printed:
>>
>> HTTPError: 404 Client Error: Not found for url: https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
>>
>> The following URI is in fact missing:
>> https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt
>>
>> The same error occurred in other repositories (e.g. cinder, glance...).
>> Where did upper-constraints.txt go?
>
> git.openstack.org is currently broken, we're investigating.
>
> Btw. best to report those on #openstack-infra directly, see
> https://docs.openstack.org/infra/manual/ for further instructions and
> ways to check the status of our infrastructure,
>
> Andreas

Thank you for the suggestion! I'll report on #openstack-infra directly next time.

FYI: I succeeded in running tox on my machine by setting the following environment variable:

$ export UPPER_CONSTRAINTS_FILE=https://raw.githubusercontent.com/openstack/requirements/master/upper-constraints.txt

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
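The workaround above relies on the environment variable taking precedence over the default constraints URL. A small sketch of that precedence (a hypothetical helper, not part of tox or any OpenStack tool):

```python
# Hypothetical helper illustrating the override used in the workaround
# above: an UPPER_CONSTRAINTS_FILE environment variable, when set, wins
# over the default git.openstack.org URL.
import os

DEFAULT_CONSTRAINTS = (
    "https://git.openstack.org/cgit/openstack/requirements"
    "/plain/upper-constraints.txt"
)
MIRROR_CONSTRAINTS = (
    "https://raw.githubusercontent.com/openstack/requirements"
    "/master/upper-constraints.txt"
)


def constraints_url(environ=os.environ):
    """Return the constraints URL, honouring the env-var override."""
    return environ.get("UPPER_CONSTRAINTS_FILE", DEFAULT_CONSTRAINTS)
```

Exporting the variable before running tox therefore redirects pip to the GitHub mirror while git.openstack.org is down, without editing tox.ini.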
[openstack-dev] upper-constraints.txt is missing
Hi,

I ran "tox -epy27" in the nova repository just now, and the following error message was printed:

HTTPError: 404 Client Error: Not found for url: https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt

The following URI is in fact missing:
https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt

The same error occurred in other repositories (e.g. cinder, glance...). Where did upper-constraints.txt go?

Best regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
[openstack-dev] [cinder]Should os-brick update iSCSI node.startup to "automatic"?
Hi all,

I reported the following bug in os-brick's iSCSI feature and pushed a patch for it:

* os-brick's iscsi initiator unexpectedly reverts node.startup from "automatic" to "manual".
https://bugs.launchpad.net/os-brick/+bug/1670237

The patch got a -2, but I think that -2 is based on a misunderstanding. I explained this on Gerrit, but there were no reactions, so I'd like to hear your opinions.

The important points of the report/patch are:

* Executing "iscsiadm -m discovery..." forcibly reverts node.startup from "automatic" to the default value "manual". os-brick executes that command, and current os-brick also updates node.startup to "automatic". As a result, automatic nodes and manual nodes are now mixed.

My opinion on the above issue:

* No one needs node.startup=automatic now. os-brick users [1] create/re-create iSCSI sessions when they need them, so "manual" is enough.
* Therefore, IMO, os-brick shouldn't update node.startup to "automatic".
* If by any chance someone needs node.startup=automatic, they should set the default value to "automatic" in iscsid.conf.

[1] e.g. nova, cinder...

Regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntt-tx.co.jp
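For readers unfamiliar with the two iscsiadm operations in conflict here, the following sketch builds the command lines involved. It only constructs argument lists and executes nothing; the exact flags os-brick passes may differ, so treat these as an approximation rather than os-brick's real invocations.

```python
# Illustration of the two iscsiadm invocations discussed above.
# Nothing is executed; these helpers only build argument lists.


def discovery_cmd(portal):
    # Discovery (re)creates node records with the iscsid.conf default
    # node.startup value -- this is how "automatic" gets reverted to
    # "manual" on every rediscovery.
    return ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", portal]


def set_startup_cmd(target_iqn, portal, value):
    # Per-node update of node.startup, the step the bug report argues
    # os-brick should stop performing with value="automatic".
    return ["iscsiadm", "-m", "node", "-T", target_iqn, "-p", portal,
            "--op", "update", "-n", "node.startup", "-v", value]
```

The interleaving is then easy to see: any node whose startup was set to "automatic" by the second command is reset to "manual" the next time the first command runs against the same portal.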
Re: [openstack-dev] [cinder]Can we run cinder-volume and cinder-backup on a same host?
Hi Michal,

Thank you for the explanation! I now understand the mechanism of cinder-backup.

> If you're able to reproduce a scenario that fails these assumptions,
> please file a bug report and we'll be happy to investigate and provide
> a fix.

Sure. But, according to your explanation, my assumed scenario should not occur.

On 2017/01/20 17:31, Dulko, Michal wrote:
> On Fri, 2017-01-20 at 14:15 +0900, Rikimaru Honjo wrote:
>> Hi Cinder devs,
>>
>> I have a question about cinder. Can I run cinder-volume and
>> cinder-backup on the same host when I'm using an iSCSI backend? I'm
>> afraid that iSCSI operations will conflict between cinder-volume and
>> cinder-backup.
>>
>> In my understanding, iSCSI operations are serialized within each
>> individual process, but they could race between processes. e.g.
>> (caution: this is just a forecast) if cinder-backup executes
>> "multipath -r" while cinder-volume is terminating a connection,
>> garbage multipath devices may remain unexpectedly.
>
> Hi,
>
> Before Mitaka it was *required* to place cinder-volume and cinder-backup
> on the same node. As both services shared the same file lock directory,
> it was safe. In fact, cinder-backup simply imported cinder-volume code.
>
> Since Mitaka, cinder-backup doesn't do any iSCSI operations directly and
> attaches volumes by calling cinder-volume over RPC. This means that it's
> possible to place cinder-backup on a different node than cinder-volume,
> but it's still totally safe to place them together.
>
> If you're able to reproduce a scenario that fails these assumptions,
> please file a bug report and we'll be happy to investigate and provide a
> fix.
>
> Thanks,
> Michal

-- 
NTT Software Corporation
Cloud & Security Business Division, 1st Business Unit (CS1BU)
Rikimaru Honjo
TEL: 045-212-7539
E-mail: honjo.rikim...@po.ntts.co.jp
Yokohama i-Mark Place 13F, 4-4-5 Minatomirai, Nishi-ku, Yokohama 220-0012
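The pre-Mitaka safety argument above rests on a shared file-lock directory serializing iSCSI operations across the two services. A minimal stdlib sketch of that idea (Cinder actually uses oslo.concurrency's lockutils, not this helper):

```python
# Minimal sketch of cross-process serialization via a shared lock file,
# in the spirit of the shared lock directory mentioned above. This is an
# illustration, not Cinder code.
import fcntl
import os
from contextlib import contextmanager


@contextmanager
def interprocess_lock(path):
    """Block until we hold an exclusive flock on *path*, then release it."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        # flock is advisory but works across processes: a second service
        # opening the same path blocks here until the first releases it.
        fcntl.flock(fd, fcntl.LOCK_EX)
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

If both cinder-volume and cinder-backup wrap their iSCSI/multipath manipulation in a lock on the same path, a "multipath -r" can never interleave with a connection teardown, which is the race the original question worried about.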
[openstack-dev] [cinder]Can we run cinder-volume and cinder-backup on a same host?
Hi Cinder devs,

I have a question about cinder. Can I run cinder-volume and cinder-backup on the same host when I'm using an iSCSI backend? I'm afraid that iSCSI operations will conflict between cinder-volume and cinder-backup.

In my understanding, iSCSI operations are serialized within each individual process, but they could race between processes. e.g. (caution: this is just a forecast) if cinder-backup executes "multipath -r" while cinder-volume is terminating a connection, garbage multipath devices may remain unexpectedly.

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
Re: [openstack-dev] [magnum]Is internet-access necessary for Magnum + CoreOS?
Hi Yuanying,

Thank you for the explanation. I'll consider changing my environment or OS.

Regards,

On 2016/11/01 19:13, Yuanying OTSUKA wrote:
> Hi, Rikimaru.
>
> Currently, the k8s-CoreOS driver doesn't have a way to disable internet
> access, but the k8s-fedora driver does. See the blueprint below:
>
> * https://blueprints.launchpad.net/magnum/+spec/support-insecure-registry
>
> Maybe you can bring this feature to the k8s-coreos driver.
>
> Thanks
> -yuanying
>
> On Tue, 1 Nov 2016 at 15:05, Rikimaru Honjo wrote:
>> Hi all,
>>
>> Can I use Magnum + CoreOS in an environment that cannot access the
>> internet? I'm trying it, but CoreOS often accesses "quay.io". Please
>> share any knowledge you have about this. I'm using CoreOS, Kubernetes,
>> and Magnum 2.0.1.
>>
>> Regards,

-- 
Rikimaru Honjo
honjo.rikim...@po.ntts.co.jp
[openstack-dev] [magnum]Is internet-access necessary for Magnum + CoreOS?
Hi all,

Can I use Magnum + CoreOS in an environment that cannot access the internet? I'm trying it, but CoreOS often accesses "quay.io". Please share any knowledge you have about this. I'm using CoreOS, Kubernetes, and Magnum 2.0.1.

Regards,

-- 
Rikimaru Honjo
honjo.rikim...@po.ntts.co.jp
Re: [openstack-dev] [magnum]What version of coreos should I use for stable/mitaka?
Hi Hongbin,

Thanks a lot! I'll try version 1030.0.0!

Best regards,

On 2016/10/25 22:48, Hongbin Lu wrote:
> As recorded in this bug report [1], version 1030.0.0 was reported to
> work with Mitaka.
>
> [1] https://bugs.launchpad.net/magnum/+bug/1615854
>
> On Mon, Oct 24, 2016 at 3:58 AM, Rikimaru Honjo
> <honjo.rikim...@po.ntts.co.jp> wrote:
>> Hello,
>>
>> I'm using Magnum stable/mitaka, and I failed to create a bay due to the
>> following bug (I chose CoreOS as the OS and Kubernetes as the COE):
>> https://bugs.launchpad.net/magnum/+bug/1605554
>>
>> But I'd still like to use stable/mitaka. What version of CoreOS should
>> I use?
>>
>> Best regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
[openstack-dev] [magnum]What version of coreos should I use for stable/mitaka?
Hello,

I'm using Magnum stable/mitaka, and I failed to create a bay due to the following bug (I chose CoreOS as the OS and Kubernetes as the COE):
https://bugs.launchpad.net/magnum/+bug/1605554

But I'd still like to use stable/mitaka. What version of CoreOS should I use?

Best regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
Re: [openstack-dev] [nova]Question about unit tests for shelve/unshelve
On 2016/10/18 3:50, Andrew Laski wrote:
> On Sun, Oct 16, 2016, at 07:11 AM, Rikimaru Honjo wrote:
>> Hi all,
>>
>> I have a question about nova's unit tests. (I found this question while
>> fixing a bug related to shelve. [1])
>>
>> "nova.tests.unit.compute.test_shelve.ShelveComputeAPITestCase" has test
>> cases for "nova.compute.api.API.shelve()/unshelve()". But
>> "nova.tests.unit.compute.test_compute_api._ComputeAPIUnitTestMixIn"
>> also has test cases for the same methods.
>>
>> Are their purposes duplicated? And can I consolidate them if so?
>
> I just looked at them briefly and they're not exactly duplicates. It
> appears that test_shelve.py has more functional tests and
> test_compute_api.py is more unit tests. But it would be nice to have
> them all in the same place.

Thank you for explaining and for the suggestion! I'll start planning the consolidation of the shelve tests.

>> FYI, I think that we should consolidate them into
>> "nova.tests.unit.compute.test_compute_api._ComputeAPIUnitTestMixIn",
>> because it is inherited by several test classes.
>
> Personally I would prefer consolidating them into test_shelve.py because
> _ComputeAPIUnitTestMixin is a giant class and it can be hard to discover
> where something is tested. I like having the features tested in a
> dedicated test file.

Your argument is more convincing than mine. I'll consolidate them into test_shelve.py.
[1]: https://bugs.launchpad.net/nova/+bug/1588657

Best regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
[openstack-dev] [nova]Question about unit tests for shelve/unshelve
Hi all,

I have a question about nova's unit tests. (I found this question while fixing a bug related to shelve. [1])

"nova.tests.unit.compute.test_shelve.ShelveComputeAPITestCase" has test cases for "nova.compute.api.API.shelve()/unshelve()". But "nova.tests.unit.compute.test_compute_api._ComputeAPIUnitTestMixIn" also has test cases for the same methods. Are their purposes duplicated? And can I consolidate them if so?

FYI, I think that we should consolidate them into "nova.tests.unit.compute.test_compute_api._ComputeAPIUnitTestMixIn", because it is inherited by several test classes.

[1]: https://bugs.launchpad.net/nova/+bug/1588657

Best regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
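To make the shape of such a dedicated test module concrete, here is a self-contained miniature of API-level shelve/unshelve tests. The class and method names are illustrative stand-ins, not Nova's real implementation; only the structure (a focused test file per feature, exercising the compute API surface) mirrors the consolidation being discussed.

```python
# Hypothetical miniature of a dedicated shelve test module -- the
# FakeComputeAPI below stands in for nova.compute.api.API and is not
# Nova's real code.
import unittest


class FakeComputeAPI(object):
    """Toy stand-in tracking only the vm_state transition."""

    def shelve(self, instance):
        instance["vm_state"] = "shelved"

    def unshelve(self, instance):
        instance["vm_state"] = "active"


class ShelveAPITestCase(unittest.TestCase):
    def setUp(self):
        self.api = FakeComputeAPI()
        self.instance = {"uuid": "fake-uuid", "vm_state": "active"}

    def test_shelve_sets_state(self):
        self.api.shelve(self.instance)
        self.assertEqual("shelved", self.instance["vm_state"])

    def test_unshelve_restores_state(self):
        self.api.shelve(self.instance)
        self.api.unshelve(self.instance)
        self.assertEqual("active", self.instance["vm_state"])
```

Keeping all shelve-related cases in one such file is the layout Andrew Laski argues for in the reply above: it is easier to discover where a feature is tested than inside one giant mixin class.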
Re: [openstack-dev] What is the definition of critical bugfixes?
On 2016/09/20 23:42, Matt Riedemann wrote:
> On 9/20/2016 4:25 AM, Rikimaru Honjo wrote:
>> Hi All,
>>
>> I asked for a review of my patch in the last weekly Nova team
>> meeting. [1] In that meeting, Dan Smith said the following about my
>> patch:
>>
>> * This patch is too large to merge in RC2. [2]
>> * Fix it after Newton and backport it to newton and mitaka. [3]
>>
>> In my understanding, we can backport only critical bugfixes and
>> security patches in Phase II [4], and stable/mitaka moves to Phase II
>> after Newton. What is the definition of critical bugfixes? And can I
>> backport my patch to mitaka after Newton?
>>
>> [1] http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-178
>> [2] http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-194
>> [3] http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-185
>> [4] http://docs.openstack.org/project-team-guide/stable-branches.html#support-phases
>>
>> Best regards,
>
> Critical generally means data loss, security issues, or upgrade impacts,
> i.e. does a bug cause data loss or prevent upgrades to a given release?

Thank you for explaining! IMO, the bug I reported has the potential for data loss through an unexpectedly detached volume.

> Latent known issues are generally not considered critical bug fixes,
> especially if they are large and complicated, which means they are prone
> to introduce regressions.

When is an issue judged to be a critical bug or not? Is it after it is committed to Gerrit? (In other words, can I commit to the N-2 branch after Newton? Of course, whether it is critical is a separate question.) Sorry to repeat my questions.

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
[openstack-dev] [nova]What is the definition of critical bugfixes?
Hi All,

I asked for a review of my patch in the last weekly Nova team meeting. [1] In that meeting, Dan Smith said the following about my patch:

* This patch is too large to merge in RC2. [2]
* Fix it after Newton and backport it to newton and mitaka. [3]

In my understanding, we can backport only critical bugfixes and security patches in Phase II [4], and stable/mitaka moves to Phase II after Newton. What is the definition of critical bugfixes? And can I backport my patch to mitaka after Newton?

[1] http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-178
[2] http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-194
[3] http://eavesdrop.openstack.org/meetings/nova/2016/nova.2016-09-15-21.00.log.html#l-185
[4] http://docs.openstack.org/project-team-guide/stable-branches.html#support-phases

Best regards,

-- 
Rikimaru Honjo
E-mail: honjo.rikim...@po.ntts.co.jp
Re: [openstack-dev] "nova list" doesn't show networks for few instances
Are you using Juno 2014.2.4? If your OpenStack version is older than 2014.2.4, you might be hitting the following bug:
https://bugs.launchpad.net/nova/+bug/1407664

Rikimaru Honjo
honjo.rikim...@po.ntts.co.jp

On 2016/03/30 17:08, varun bhatnagar wrote:
> Hi,
>
> I am using OpenStack Juno on a multinode setup. When I ran "nova list" I
> couldn't see any interfaces attached to a few of my VMs, although they
> are visible in the Dashboard:
>
> +--------------------------------------+------+--------+------------+-------------+---------------------------------------------+
> | ID                                   | Name | Status | Task State | Power State | Networks                                    |
> +--------------------------------------+------+--------+------------+-------------+---------------------------------------------+
> | e111e5b6-4a90-4a3b-a465-19000fe1a81d | VM-3 | ACTIVE | -          | Running     |                                             |
> | 78c465e9-09a6-477b-9374-9a5eb455ab2b | VM-4 | ACTIVE | -          | Running     |                                             |
> | 6cfdef27-018a-4fd3-8f4f-856d804d415b | VM-5 | ACTIVE | -          | Running     |                                             |
> | 94f4bddb-e2d5-48e8-88ae-169aba75ebd4 | VM-6 | ACTIVE | -          | Running     |                                             |
> | c72a4d9d-fcd1-4b12-90de-ec46fc950ad2 | VM-7 | ACTIVE | -          | Running     | internal=3001::b, 10.0.0.13, 192.168.154.68 |
> +--------------------------------------+------+--------+------------+-------------+---------------------------------------------+
>
> Can anyone please tell me what is causing this and how to fix it?
>
> BR,
> Varun