[openstack-dev] [devstack-dev][swift] Limitations of Erasure Coding in Swift

2018-07-01 Thread super user
Hello everybody,

I would like to ask about the limitations of Erasure Coding in Swift right
now. What can we do to overcome these limitations?

Thank you.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack

2018-05-19 Thread Michael Johnson
Yes, this just started occurring with the Thursday/Friday updates to the
Ubuntu cloud image upstream of us.

I have posted a patch for Queens here: https://review.openstack.org/#/c/569531

We will be backporting that as soon as we can to the other stable
releases. Please review the backports as they come out to help the
team merge them as soon as possible.

Michael (johnsom)

On Fri, May 18, 2018 at 10:16 PM, rezroo  wrote:
> Hi - let's try this again - this time with pike :-)
> Any suggestions on how to get the image builder to create a larger loop
> device? I think that's what the problem is.
> Thanks in advance.
>
> 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
> diskimage_builder.block_device.level1.mbr [-] Write partition entry blockno
> [0] entry [0] start [2048] length [4190208]   [57/1588]
> 2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo sync]
> 2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs
> /dev/loop3]
> 2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4 -i 4096
> -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d
> 9c5fcb5d9c -q /dev/mapper/loop3p1]
> 2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
> /tmp/dib_build.zv2VZo3W/mnt/]
> 2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO
> diskimage_builder.block_device.level3.mount [-] Mounting [mount_mkfs_root]
> to [/tmp/dib_build.zv2VZo3W/mnt/]
> 2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mount
> /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/]
> 2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO
> diskimage_builder.block_device.blockdevice [-] create() finished
> 2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO
> diskimage_builder.block_device.blockdevice [-] Getting value for
> [image-block-device]
> 2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO
> diskimage_builder.block_device.blockdevice [-] Getting value for
> [image-block-devices]
> 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
> diskimage_builder.block_device.blockdevice [-] Creating fstab
> 2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
> /tmp/dib_build.zv2VZo3W/built/etc]
> 2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO
> diskimage_builder.block_device.utils [-] Calling [sudo cp
> /tmp/dib_build.zv2VZo3W/states/block-device/fstab
> /tmp/dib_build.zv2VZo3W/bui
> lt/etc/fstab]
> 2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
> 2018-05-19 05:03:12.947 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
> 2018-05-19 05:03:12.947 | ++ export 'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb
> nomodeset vga=normal'
> 2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb nomodeset
> vga=normal'
> 2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
> 2018-05-19 05:03:12.950 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
> 2018-05-19 05:03:12.950 |  dirname
> /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
> 2018-05-19 05:03:12.951 | +++
> PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..'
> 2018-05-19 05:03:12.951 | +++ dib-init-system
> 2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd
> 2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM
> 2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
> 2018-05-19 05:03:12.955 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
> 2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip
> 2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip
> 2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
> 2018-05-19 05:03:12.958 | + source
> /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
> 2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu
> 2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu
> 2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial
> 2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial
> 2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018
> Sourcing environment file
> /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
> 2018-05-19 05:03:12.961 | + source
> 

[openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/pike image build by devstack

2018-05-18 Thread rezroo

Hi - let's try this again - this time with pike :-)
Any suggestions on how to get the image builder to create a larger loop 
device? I think that's what the problem is.
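> For reference, this is the kind of knob I am hoping exists - a hedged
> illustration only, assuming the octavia image goes through diskimage-builder's
> disk-image-create, whose --image-size option (in GB) controls how big the
> backing image, and hence the loop device, is made:
>
>    # Hedged illustration, untested for this job - not necessarily how the
>    # devstack octavia plugin actually invokes the build:
>    disk-image-create -o amphora-x64-haproxy --image-size 5 ubuntu vm
>
> Whether (and how) the plugin exposes an equivalent setting is exactly what
> I am asking about.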

Thanks in advance.

   2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
   diskimage_builder.block_device.level1.mbr [-] Write partition entry
   blockno [0] entry [0] start [2048] length [4190208]   [57/1588]
   2018-05-19 05:03:04.523 | 2018-05-19 05:03:04.523 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo sync]
   2018-05-19 05:03:04.538 | 2018-05-19 05:03:04.537 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo kpartx -avs
   /dev/loop3]
   2018-05-19 05:03:04.642 | 2018-05-19 05:03:04.642 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkfs -t ext4
   -i 4096 -J size=64 -L cloudimg-rootfs -U 376d4b4d-2597-4838-963a-3d
   9c5fcb5d9c -q /dev/mapper/loop3p1]
   2018-05-19 05:03:04.824 | 2018-05-19 05:03:04.823 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
   /tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.833 | 2018-05-19 05:03:04.833 INFO
   diskimage_builder.block_device.level3.mount [-] Mounting
   [mount_mkfs_root] to [/tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.834 | 2018-05-19 05:03:04.833 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mount
   /dev/mapper/loop3p1 /tmp/dib_build.zv2VZo3W/mnt/]
   2018-05-19 05:03:04.850 | 2018-05-19 05:03:04.850 INFO
   diskimage_builder.block_device.blockdevice [-] create() finished
   2018-05-19 05:03:05.527 | 2018-05-19 05:03:05.527 INFO
   diskimage_builder.block_device.blockdevice [-] Getting value for
   [image-block-device]
   2018-05-19 05:03:06.168 | 2018-05-19 05:03:06.168 INFO
   diskimage_builder.block_device.blockdevice [-] Getting value for
   [image-block-devices]
   2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
   diskimage_builder.block_device.blockdevice [-] Creating fstab
   2018-05-19 05:03:06.845 | 2018-05-19 05:03:06.845 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo mkdir -p
   /tmp/dib_build.zv2VZo3W/built/etc]
   2018-05-19 05:03:06.855 | 2018-05-19 05:03:06.855 INFO
   diskimage_builder.block_device.utils [-] Calling [sudo cp
   /tmp/dib_build.zv2VZo3W/states/block-device/fstab
   /tmp/dib_build.zv2VZo3W/bui
   lt/etc/fstab]
   2018-05-19 05:03:12.946 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
   2018-05-19 05:03:12.947 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-bootloader-default-cmdline
   2018-05-19 05:03:12.947 | ++ export
   'DIB_BOOTLOADER_DEFAULT_CMDLINE=nofb nomodeset vga=normal'
   2018-05-19 05:03:12.947 | ++ DIB_BOOTLOADER_DEFAULT_CMDLINE='nofb
   nomodeset vga=normal'
   2018-05-19 05:03:12.948 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.950 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.950 |  dirname
   /tmp/in_target.d/finalise.d/../environment.d/10-dib-init-system.bash
   2018-05-19 05:03:12.951 | +++
   
PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/finalise.d/../environment.d/..'
   2018-05-19 05:03:12.951 | +++ dib-init-system
   2018-05-19 05:03:12.953 | ++ DIB_INIT_SYSTEM=systemd
   2018-05-19 05:03:12.953 | ++ export DIB_INIT_SYSTEM
   2018-05-19 05:03:12.954 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
   2018-05-19 05:03:12.955 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-pip-cache
   2018-05-19 05:03:12.955 | ++ export PIP_DOWNLOAD_CACHE=/tmp/pip
   2018-05-19 05:03:12.955 | ++ PIP_DOWNLOAD_CACHE=/tmp/pip
   2018-05-19 05:03:12.956 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
   2018-05-19 05:03:12.958 | + source
   /tmp/in_target.d/finalise.d/../environment.d/10-ubuntu-distro-name.bash
   2018-05-19 05:03:12.958 | ++ export DISTRO_NAME=ubuntu
   2018-05-19 05:03:12.958 | ++ DISTRO_NAME=ubuntu
   2018-05-19 05:03:12.958 | ++ export DIB_RELEASE=xenial
   2018-05-19 05:03:12.958 | ++ DIB_RELEASE=xenial
   2018-05-19 05:03:12.959 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
   2018-05-19 05:03:12.961 | + source
   /tmp/in_target.d/finalise.d/../environment.d/11-dib-install-type.bash
   2018-05-19 05:03:12.961 | ++ export DIB_DEFAULT_INSTALLTYPE=source
   2018-05-19 05:03:12.961 | ++ DIB_DEFAULT_INSTALLTYPE=source
   2018-05-19 05:03:12.962 | dib-run-parts Sat May 19 05:03:12 UTC 2018
   Sourcing environment file
   /tmp/in_target.d/finalise.d/../environment.

Re: [openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack

2018-05-18 Thread Michael Johnson
Hi rezroo,

Yes, the recent release of pip 10 broke the disk image building.
There is a patch pending review here: https://review.openstack.org/#/c/562850/
that works around this issue for the ocata branch by pinning the pip
used for the image build to a version that does not have this issue.
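
For anyone who needs an immediate local workaround, the general idea is
simply to keep pip below 10 for the image build; a rough, hedged sketch
only (this is not the content of the review above, and whether the build
chroot picks up the host's pip depends on how the image elements install pip):

    # pip 10 introduced the distutils-uninstall error, so drop back to 9.x
    # on the build host before re-running stack.sh.
    sudo pip install 'pip<10'
    pip --version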

Michael


On Thu, May 17, 2018 at 7:38 PM, rezroo  wrote:
> Hello - I'm trying to install a working local.conf devstack ocata on a new
> server, and some python packages have changed so I end up with this error
> during the build of octavia image:
>
> 2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8
> 2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
> 2018-05-18 01:00:26.280 |   Successfully uninstalled Jinja2-2.8
> 2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11
> 2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a distutils
> installed project and thus we cannot accurately determine which files belong
> to it which would lead to only a partial uninstall.
>
> 2018-05-18 02:05:44.768 | Unmount
> /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
> 2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
> 2018-05-18 02:05:44.820 | Unmount
> /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
> 2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
> 2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
> 2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
> 2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
> 2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
> 2018-05-18 02:05:50.668 |
> +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1
> exit_trap
> 2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494 local
> r=1
> 2018-05-18 02:05:50.690 | ++./devstack/stack.sh:exit_trap:495 jobs
> -p
> 2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495 jobs=
> 2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498 [[ -n
> '' ]]
> 2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504
> kill_spinner
> 2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390  '[' '!'
> -z '' ']'
> 2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506 [[ 1
> -ne 0 ]]
> 2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507 echo
> 'Error on exit'
> 2018-05-18 02:05:50.751 | Error on exit
> 2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508
> generate-subunit 1526608058 1092 fail
> 2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509 [[ -z
> /tmp ]]
> 2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512
> /home/stack/devstack/tools/worlddump.py -d /tmp
>
> I've tried pip uninstalling PyYAML and pip installing it before running
> stack.sh, but the error comes back.
>
> $ sudo pip uninstall PyYAML
> The directory '/home/stack/.cache/pip/http' or its parent directory is not
> owned by the current user and the cache has been disabled. Please check the
> permissions and owner of that directory. If executing pip with sudo, you may
> want sudo's -H flag.
> Uninstalling PyYAML-3.12:
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
>   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
>   /usr/local/lib/python2.7/dist-packages/_yaml.so
> Proceed (y/n)? y
>   Successfully uninstalled PyYAML-3.12
>
> I've posted my question to the pip folks and they think it's an openstack
> issue: https://github.com/pypa/pip/issues/4805
>
> Is there a workaround here?
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][python/pip][octavia] pip failure during octavia/ocata image build by devstack

2018-05-17 Thread rezroo
Hello - I'm trying to get a working local.conf-based devstack ocata install 
on a new server, and some python packages have changed, so I end up with this 
error during the build of the octavia image:


   2018-05-18 01:00:26.276 |   Found existing installation: Jinja2 2.8
   2018-05-18 01:00:26.280 | Uninstalling Jinja2-2.8:
   2018-05-18 01:00:26.280 |   Successfully uninstalled Jinja2-2.8
   2018-05-18 01:00:26.839 |   Found existing installation: PyYAML 3.11
   2018-05-18 01:00:26.969 | Cannot uninstall 'PyYAML'. It is a
   distutils installed project and thus we cannot accurately determine
   which files belong to it which would lead to only a partial uninstall.

   2018-05-18 02:05:44.768 | Unmount
   /tmp/dib_build.2fbBBePD/mnt/var/cache/apt/archives
   2018-05-18 02:05:44.796 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/pip
   2018-05-18 02:05:44.820 | Unmount
   /tmp/dib_build.2fbBBePD/mnt/tmp/in_target.d
   2018-05-18 02:05:44.844 | Unmount /tmp/dib_build.2fbBBePD/mnt/tmp/ccache
   2018-05-18 02:05:44.868 | Unmount /tmp/dib_build.2fbBBePD/mnt/sys
   2018-05-18 02:05:44.896 | Unmount /tmp/dib_build.2fbBBePD/mnt/proc
   2018-05-18 02:05:44.920 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev/pts
   2018-05-18 02:05:44.947 | Unmount /tmp/dib_build.2fbBBePD/mnt/dev
   2018-05-18 02:05:50.668 |
   +/opt/stack/octavia/devstack/plugin.sh:build_octavia_worker_image:1
   exit_trap
   2018-05-18 02:05:50.679 | +./devstack/stack.sh:exit_trap:494
   local r=1
   2018-05-18 02:05:50.690 |
   ++./devstack/stack.sh:exit_trap:495 jobs -p
   2018-05-18 02:05:50.700 | +./devstack/stack.sh:exit_trap:495
   jobs=
   2018-05-18 02:05:50.710 | +./devstack/stack.sh:exit_trap:498
   [[ -n '' ]]
   2018-05-18 02:05:50.720 | +./devstack/stack.sh:exit_trap:504
   kill_spinner
   2018-05-18 02:05:50.731 | +./devstack/stack.sh:kill_spinner:390 
   '[' '!' -z '' ']'
   2018-05-18 02:05:50.741 | +./devstack/stack.sh:exit_trap:506
   [[ 1 -ne 0 ]]
   2018-05-18 02:05:50.751 | +./devstack/stack.sh:exit_trap:507
   echo 'Error on exit'
   2018-05-18 02:05:50.751 | Error on exit
   2018-05-18 02:05:50.761 | +./devstack/stack.sh:exit_trap:508
   generate-subunit 1526608058 1092 fail
   2018-05-18 02:05:51.148 | +./devstack/stack.sh:exit_trap:509
   [[ -z /tmp ]]
   2018-05-18 02:05:51.157 | +./devstack/stack.sh:exit_trap:512
   /home/stack/devstack/tools/worlddump.py -d /tmp

I've tried pip uninstalling PyYAML and pip installing it before running 
stack.sh, but the error comes back.


   $ sudo pip uninstall PyYAML
   The directory '/home/stack/.cache/pip/http' or its parent directory
   is not owned by the current user and the cache has been disabled.
   Please check the permissions and owner of that directory. If
   executing pip with sudo, you may want sudo's -H flag.
   Uninstalling PyYAML-3.12:
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/INSTALLER
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/METADATA
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/RECORD
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/WHEEL
   /usr/local/lib/python2.7/dist-packages/PyYAML-3.12.dist-info/top_level.txt
  /usr/local/lib/python2.7/dist-packages/_yaml.so
   Proceed (y/n)? y
  Successfully uninstalled PyYAML-3.12

I've posted my question to the pip folks and they think it's an 
openstack issue: https://github.com/pypa/pip/issues/4805


Is there a workaround here?
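
I have seen two workarounds suggested for the pip 10 distutils-uninstall
error, but neither is confirmed for this build - and note that the failing
pip run appears to happen inside the diskimage-builder chroot (see the
dib_build mounts in the log above), which would explain why fixing PyYAML
on the host does not stick:

    # Option 1: tell pip not to attempt uninstalling the distutils-installed copy.
    sudo pip install --ignore-installed PyYAML

    # Option 2: remove the stale egg-info so pip no longer "sees" the old
    # install (the path is a guess for Ubuntu's python2.7 layout - verify first).
    sudo rm -rf /usr/lib/python2.7/dist-packages/PyYAML-*.egg-info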


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Jens Harbott
2018-04-16 7:46 GMT+00:00 Ian Wienand :
> On 04/15/2018 09:32 PM, Gary Kotton wrote:
>>
>> The gate is currently broken with
>>  https://launchpad.net/bugs/1763966.
>> https://review.openstack.org/#/c/561427/
>>  Can unblock us in the short term. Any other ideas?
>
>
> I'm thinking this is probably along the lines of the best idea.  I
> left a fairly long comment on this in [1], but the root issue here is
> that if a system package is created using distutils (rather than
> setuptools) we end up with this problem with pip10.
>
> That means the problem occurs when we a) try to overwrite a system
> package and b) that package has been created using distutils.  This
> means it is a small(er) subset of packages that cause this problem.
> Ergo, our best option might be to see if we can avoid such packages on
> a one-by-one basis, like here.
>
> In some cases, we could just delete the .egg-info file, which is
> approximately what was happening before anyway.
>
> In this particular case, the psutil package is used by glance & the
> peakmem tracker.  Under USE_PYTHON3, devstack's pip_install_gr only
> installs the python3 library; however the peakmem tracker always uses
> python2 -- leading to the missing-library failures in [2].  I have two
> thoughts; either install for both python2 & 3 always [3] or make
> peakmem tracker obey USE_PYTHON3 [4].  We can discuss the approach in
> the reviews.
>
> The other option is to move everything to virtualenv's, so we never
> conflict with a system package, as suggested by clarkb [5] or
> pabelanger [6].  These are more invasive changes, but also arguably
> more correct.
>
> Note diskimage-builder, and hence our image generation for some
> platforms, is also broken.  Working on that in [7].

The cap in devstack has been merged in master and stable/queens; the other
merges are being held up by unstable volume checks, or so it seems.

There is also another issue caused by pip 10 treating some former
warnings as errors now. I've tried to list all "global" (Infra+QA)
related issues in [8]; feel free to amend as needed.

[8] https://etherpad.openstack.org/p/pip10-mitigation

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Ian Wienand

On 04/15/2018 09:32 PM, Gary Kotton wrote:

The gate is currently broken with
 https://launchpad.net/bugs/1763966. https://review.openstack.org/#/c/561427/
 Can unblock us in the short term. Any other ideas?


I'm thinking this is probably along the lines of the best idea.  I
left a fairly long comment on this in [1], but the root issue here is
that if a system package is created using distutils (rather than
setuptools) we end up with this problem with pip10.

That means the problem occurs when we a) try to overwrite a system
package and b) that package has been created using distutils.  This
means it is a small(er) subset of packages that cause this problem.
Ergo, our best option might be to see if we can avoid such packages on
a one-by-one basis, like here.

In some cases, we could just delete the .egg-info file, which is
approximately what was happening before anyway.
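
For example (a hedged sketch only - paths are illustrative for an Ubuntu
python2.7 layout, not taken from any devstack change):

    # Removing the egg-info makes pip treat the distutils-installed system
    # package as absent, so a later "pip install psutil" no longer tries
    # (and fails) to uninstall it first.
    sudo rm -rf /usr/lib/python2.7/dist-packages/psutil-*.egg-info
    sudo pip install psutil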

In this particular case, the psutil package is used by glance & the
peakmem tracker.  Under USE_PYTHON3, devstack's pip_install_gr only
installs the python3 library; however the peakmem tracker always uses
python2 -- leading to the missing-library failures in [2].  I have two
thoughts; either install for both python2 & 3 always [3] or make
peakmem tracker obey USE_PYTHON3 [4].  We can discuss the approach in
the reviews.

The other option is to move everything to virtualenvs, so we never
conflict with a system package, as suggested by clarkb [5] or
pabelanger [6].  These are more invasive changes, but also arguably
more correct.

Note diskimage-builder, and hence our image generation for some
platforms, is also broken.  Working on that in [7].

-i


[1] https://github.com/pypa/pip/issues/4805#issuecomment-340987536
[2] https://review.openstack.org/561427
[3] https://review.openstack.org/561524
[4] https://review.openstack.org/561525
[5] https://review.openstack.org/558930
[6] https://review.openstack.org/#/c/552939
[7] https://review.openstack.org/#/c/561479/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Slawomir Kaplonski
Right. Thx Gary :)

> Message written by Gary Kotton on 16.04.2018 at 09:14:
> 
> Hi,
> I think that we need https://review.openstack.org/561471 until we have a 
> proper solution.
> Thanks
> Gary
> 
> On 4/16/18, 10:13 AM, "Slawomir Kaplonski"  wrote:
> 
>Hi,
> 
>I just wanted to ask if there is any ongoing work on 
> https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It 
> looks that e.g. all grenade jobs in neutron are broken currently :/
> 
>> Message written by Gary Kotton on 15.04.2018 at 13:32:
>> 
>> Hi,
>> The gate is currently broken with https://launchpad.net/bugs/1763966. 
>> https://review.openstack.org/#/c/561427/ Can unblock us in the short term. 
>> Any other ideas?
>> Thanks
>> Gary
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
>— 
>Best regards
>Slawek Kaplonski
>skapl...@redhat.com
> 
> 
>__
>OpenStack Development Mailing List (not for usage questions)
>Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skapl...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Gary Kotton
Hi,
I think that we need https://review.openstack.org/561471 until we have a proper 
solution.
Thanks
Gary

On 4/16/18, 10:13 AM, "Slawomir Kaplonski"  wrote:

Hi,

I just wanted to ask if there is any ongoing work on 
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It 
looks that e.g. all grenade jobs in neutron are broken currently :/

> Message written by Gary Kotton on 15.04.2018 at 13:32:
> 
> Hi,
> The gate is currently broken with https://launchpad.net/bugs/1763966. 
https://review.openstack.org/#/c/561427/ Can unblock us in the short term. Any 
other ideas?
> Thanks
> Gary
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skapl...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][infra] pip vs psutil

2018-04-16 Thread Slawomir Kaplonski
Hi,

I just wanted to ask if there is any ongoing work on 
https://bugs.launchpad.net/devstack/+bug/1763966 to fix grenade failures? It 
looks like e.g. all grenade jobs in neutron are currently broken :/

> Message written by Gary Kotton on 15.04.2018 at 13:32:
> 
> Hi,
> The gate is currently broken with https://launchpad.net/bugs/1763966. 
> https://review.openstack.org/#/c/561427/ Can unblock us in the short term. 
> Any other ideas?
> Thanks
> Gary
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

— 
Best regards
Slawek Kaplonski
skapl...@redhat.com


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][infra] pip vs psutil

2018-04-15 Thread Gary Kotton
Hi,
The gate is currently broken with https://launchpad.net/bugs/1763966. 
https://review.openstack.org/#/c/561427/ can unblock us in the short term. Any 
other ideas?
Thanks
Gary
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-04-02 Thread Ghanshyam Mann
On Thu, Mar 29, 2018 at 5:21 AM, James E. Blair  wrote:
> Hi,
>
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for new-style devstack jobs.
>
> The change is at https://review.openstack.org/549252
>
> In summary, when this change lands, new-style devstack jobs should no
> longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
> should be unaffected (but there is a change to the verification process
> performed by devstack).
>
>
> Currently devstack expects the contents of LIBS_FROM_GIT to be
> exclusively a list of python packages which, obviously, should be
> installed from git and not pypi.  It is used for two purposes:
> determining whether an individual package should be installed from git,
> and verifying that a package was installed from git.
>
> In the old devstack-gate system, we prepared many of the common git
> repos, whether they were used or not.  So LIBS_FROM_GIT was created to
> indicate that in some cases devstack should ignore those repos and
> install from pypi instead.  In other words, its original purpose was
> purely as a method of selecting whether a devstack-gate prepared repo
> should be used or ignored.
>
> In Zuul v3, we have a good way to indicate whether a job is going to use
> a repo or not -- add it to "required-projects".  Considering that, the
> LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
> automatically generated based on the contents of required-projects.
> This means that job authors don't need to list every required repository
> twice.
>
> However, a naïve implementation of that runs afoul of the second use of
> LIBS_FROM_GIT -- verifying that python packages are installed from git.
>
> This usage was added later, after a typographical error ("-" vs "_" in a
> python package name) in a constraints file caused us not to install a
> package from git.  Now devstack verifies that every package in
> LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
> tempest, and other packages aren't installed.  So adding them
> automatically to LIBS_FROM_GIT will cause devstack to fail.
>
> My change modifies this verification to only check that packages
> mentioned in LIBS_FROM_GIT that devstack tried to install were actually
> installed.  I realize that stated as such this sounds tautological,
> however, this check is still valid -- it would have caught the original
> error that prompted the check in the first case.
>
> What the revised check will no longer handle is a typo in a legacy job.
> If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
> However, I think the risk is worthwhile -- particularly since it is in
> service of a system which eliminates the opportunity to introduce such
> an error in the first place.
>
> To see the result in action, take a look at this change which, in only a
> few lines, implements what was a significantly more complex undertaking
> in Zuul v2:
>
> https://review.openstack.org/548331
>
> Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
> some reason, you require a new-style devstack job to manually set
> LIBS_FROM_GIT, that will still work.  Simply define the variable as
> normal, and the module which generates the devstack config will bypass
> automatic generation if the variable is already set.

+1, thanks Jim. The idea looks good to me as long as it still works for
non-zuulv3 users. I'll check the patch.

-gmann

>
> -Jim
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread Sean McGinnis
> 
> Neither local nor third-party CI use should be affected.  There's no
> change in behavior based on current usage patterns.  Only the caveat
> that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
> non-existent package name), it will not automatically be caught.
> 
> -Jim

Perfect, thanks Jim.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread James E. Blair
Sean McGinnis  writes:

> On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
>> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
>> > Hi,
>> > 
>> > I've proposed a change to devstack which slightly alters the
>> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
>> > those using legacy devstack jobs (but you may want to be aware of it).
>> > It is more significant for new-style devstack jobs.
>> > 
>> > -snip-
>> > 
>> 
>> How does this apply to uses of devstack outside of zuul, such as in a
>> local development environment?
>> 
>> Doug
>> 
>
> This is my question too. I know in Cinder there are a lot of third party CI
> systems that do not use zuul. If they are impacted in any way by changes to
> devstack, we will need to make sure they are all aware of those changes (and
> have an alternative method for them to get the same functionality).

Neither local nor third-party CI use should be affected.  There's no
change in behavior based on current usage patterns.  Only the caveat
that if you introduce an error into LIBS_FROM_GIT (e.g., a misspelled or
non-existent package name), it will not automatically be caught.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-29 Thread Sean McGinnis
On Wed, Mar 28, 2018 at 07:37:19PM -0400, Doug Hellmann wrote:
> Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> > Hi,
> > 
> > I've proposed a change to devstack which slightly alters the
> > LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> > those using legacy devstack jobs (but you may want to be aware of it).
> > It is more significant for new-style devstack jobs.
> > 
> > -snip-
> > 
> 
> How does this apply to uses of devstack outside of zuul, such as in a
> local development environment?
> 
> Doug
> 

This is my question too. I know in Cinder there are a lot of third party CI
systems that do not use zuul. If they are impacted in any way by changes to
devstack, we will need to make sure they are all aware of those changes (and
have an alternative method for them to get the same functionality).

Sean

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-29 Thread Tony Breeds
On Fri, Mar 16, 2018 at 02:29:51PM +, Kwan, Louie wrote:
> In the stable/queens branch, since openstacksdk===0.11.3 and 
> os-service-types===1.1.0 are described in openstack's upper-constraints.txt, 
> 
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297
> 
> If I do 
> 
> > git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens
> 
> And then stack.sh
> 
> We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Okay that's pretty strange.  I can't think of why you'd be getting the
master version of upper-constraints.txt from the queens branch.

[tony@thor requirements]$ tools/grep-all.sh openstacksdk | grep -E '(master|queens)'
origin/master : openstacksdk>=0.11.2  # Apache-2.0
origin/stable/queens  : openstacksdk>=0.9.19  # Apache-2.0
origin/master : openstacksdk===0.12.0
origin/stable/queens  : openstacksdk===0.11.3
[tony@thor requirements]$ tools/grep-all.sh os-service-types | grep -E '(master|queens)'
origin/master : os-service-types>=1.2.0  # Apache-2.0
origin/stable/queens  : os-service-types>=1.1.0  # Apache-2.0
origin/master : os-service-types===1.2.0
origin/stable/queens  : os-service-types===1.1.0


A quick eyeball of the code doesn't show anything obvious.

Can you provide the devstack log somewhere?
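
In the meantime, a couple of hedged local checks that might narrow it down
(my own suggestions, not anything devstack documents for this case):

    # Confirm which branch of the requirements repo devstack checked out;
    # upper-constraints.txt is read from there (default DEST is /opt/stack).
    git -C /opt/stack/requirements branch -v
    grep -E '^(openstacksdk|os-service-types)===' \
        /opt/stack/requirements/upper-constraints.txt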
 
> Having said that, we need the older version, how to configure devstack to use 
> openstacksdk===0.11.3 and os-service-types===1.1.0

We can try to work out why you're getting the wrong versions, but what
error/problem do you see with the versions from master?

I'd expect some general "we need version X of FOO but Y is installed"
messages.

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-28 Thread Doug Hellmann
Excerpts from corvus's message of 2018-03-28 13:21:38 -0700:
> Hi,
> 
> I've proposed a change to devstack which slightly alters the
> LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
> those using legacy devstack jobs (but you may want to be aware of it).
> It is more significant for new-style devstack jobs.
> 
> The change is at https://review.openstack.org/549252
> 
> In summary, when this change lands, new-style devstack jobs should no
> longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
> should be unaffected (but there is a change to the verification process
> performed by devstack).
> 
> 
> Currently devstack expects the contents of LIBS_FROM_GIT to be
> exclusively a list of python packages which, obviously, should be
> installed from git and not pypi.  It is used for two purposes:
> determining whether an individual package should be installed from git,
> and verifying that a package was installed from git.
> 
> In the old devstack-gate system, we prepared many of the common git
> repos, whether they were used or not.  So LIBS_FROM_GIT was created to
> indicate that in some cases devstack should ignore those repos and
> install from pypi instead.  In other words, its original purpose was
> purely as a method of selecting whether a devstack-gate prepared repo
> should be used or ignored.
> 
> In Zuul v3, we have a good way to indicate whether a job is going to use
> a repo or not -- add it to "required-projects".  Considering that, the
> LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
> automatically generated based on the contents of required-projects.
> This means that job authors don't need to list every required repository
> twice.
> 
> However, a naïve implementation of that runs afoul of the second use of
> LIBS_FROM_GIT -- verifying that python packages are installed from git.
> 
> This usage was added later, after a typographical error ("-" vs "_" in a
> python package name) in a constraints file caused us not to install a
> package from git.  Now devstack verifies that every package in
> LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
> tempest, and other packages aren't installed.  So adding them
> automatically to LIBS_FROM_GIT will cause devstack to fail.
> 
> My change modifies this verification to only check that packages
> mentioned in LIBS_FROM_GIT that devstack tried to install were actually
> installed.  I realize that stated as such this sounds tautological,
> however, this check is still valid -- it would have caught the original
> error that prompted the check in the first case.
> 
> What the revised check will no longer handle is a typo in a legacy job.
> If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
> However, I think the risk is worthwhile -- particularly since it is in
> service of a system which eliminates the opportunity to introduce such
> an error in the first place.
> 
> To see the result in action, take a look at this change which, in only a
> few lines, implements what was a significantly more complex undertaking
> in Zuul v2:
> 
> https://review.openstack.org/548331
> 
> Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
> some reason, you require a new-style devstack job to manually set
> LIBS_FROM_GIT, that will still work.  Simply define the variable as
> normal, and the module which generates the devstack config will bypass
> automatic generation if the variable is already set.
> 
> -Jim
> 

How does this apply to uses of devstack outside of zuul, such as in a
local development environment?

Doug

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack][qa] Changes to devstack LIBS_FROM_GIT

2018-03-28 Thread James E. Blair
Hi,

I've proposed a change to devstack which slightly alters the
LIBS_FROM_GIT behavior.  This shouldn't be a significant change for
those using legacy devstack jobs (but you may want to be aware of it).
It is more significant for new-style devstack jobs.

The change is at https://review.openstack.org/549252

In summary, when this change lands, new-style devstack jobs should no
longer need to set LIBS_FROM_GIT explicitly.  Existing legacy jobs
should be unaffected (but there is a change to the verification process
performed by devstack).


Currently devstack expects the contents of LIBS_FROM_GIT to be
exclusively a list of python packages which, obviously, should be
installed from git and not pypi.  It is used for two purposes:
determining whether an individual package should be installed from git,
and verifying that a package was installed from git.
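
For readers less familiar with the variable, this is roughly the shape it
takes today in a local.conf - an illustrative example, not something from
my change:

    # In the [[local|localrc]] section: install these libraries from their
    # git checkouts instead of from PyPI.
    LIBS_FROM_GIT=python-openstackclient,oslo.messaging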

In the old devstack-gate system, we prepared many of the common git
repos, whether they were used or not.  So LIBS_FROM_GIT was created to
indicate that in some cases devstack should ignore those repos and
install from pypi instead.  In other words, its original purpose was
purely as a method of selecting whether a devstack-gate prepared repo
should be used or ignored.

In Zuul v3, we have a good way to indicate whether a job is going to use
a repo or not -- add it to "required-projects".  Considering that, the
LIBS_FROM_GIT variable is redundant.  So my patch causes it to be
automatically generated based on the contents of required-projects.
This means that job authors don't need to list every required repository
twice.

However, a naïve implementation of that runs afoul of the second use of
LIBS_FROM_GIT -- verifying that python packages are installed from git.

This usage was added later, after a typographical error ("-" vs "_" in a
python package name) in a constraints file caused us not to install a
package from git.  Now devstack verifies that every package in
LIBS_FROM_GIT is installed.  However, Zuul doesn't know that devstack,
tempest, and other packages aren't installed.  So adding them
automatically to LIBS_FROM_GIT will cause devstack to fail.

My change modifies this verification to only check that packages
mentioned in LIBS_FROM_GIT that devstack tried to install were actually
installed.  I realize that stated as such this sounds tautological,
however, this check is still valid -- it would have caught the original
error that prompted the check in the first case.

What the revised check will no longer handle is a typo in a legacy job.
If someone enters a typo into LIBS_FROM_GIT, it will no longer fail.
However, I think the risk is worthwhile -- particularly since it is in
service of a system which eliminates the opportunity to introduce such
an error in the first place.

To see the result in action, take a look at this change which, in only a
few lines, implements what was a significantly more complex undertaking
in Zuul v2:

https://review.openstack.org/548331

Finally, a note on the automatic generation of LIBS_FROM_GIT -- if, for
some reason, you require a new-style devstack job to manually set
LIBS_FROM_GIT, that will still work.  Simply define the variable as
normal, and the module which generates the devstack config will bypass
automatic generation if the variable is already set.

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-21 Thread Monty Taylor

On 03/16/2018 09:29 AM, Kwan, Louie wrote:

In the stable/queens branch, since openstacksdk===0.11.3 and os-service-types===1.1.0 
are described in openstack's upper-constraints.txt,

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older version, how to configure devstack to use 
openstacksdk===0.11.3 and os-service-types===1.1.0


Would you mind sharing why you need the older versions?

os-service-types is explicitly designed such that the latest version 
should always be correct.


If there is something in 1.2.0 that has broken you in some way that you 
need an older version, that's a problem and we should look into it.


The story is intended to be similar for sdk moving forward ... but we're 
still pre-1.0, so that makes sense at the moment. I'm still interested 
in what specific issue you had, just to make sure we're aware of issues 
people are having.


Thanks!
Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-16 Thread Matt Riedemann

On 3/16/2018 9:29 AM, Kwan, Louie wrote:

In the stable/queens branch, since openstacksdk===0.11.3 and os-service-types===1.1.0 
are described in openstack's upper-constraints.txt,

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do


git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens


And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older version, how to configure devstack to use 
openstacksdk===0.11.3 and os-service-types===1.1.0



You could try setting this in your local.conf:

https://github.com/openstack-dev/devstack/blob/master/stackrc#L547

GITBRANCH["python-openstacksdk"]=0.11.3

But I don't see a similar entry for os-service-types.

I don't know if ^ will work, but it's what I'd try.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] stable/queens: How to configure devstack to use openstacksdk===0.11.3 and os-service-types===1.1.0

2018-03-16 Thread Kwan, Louie
In the stable/queens branch, since openstacksdk===0.11.3 and os-service-types===1.1.0 
are described in openstack's upper-constraints.txt, 

https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L411
https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt#L297

If I do 

> git clone https://git.openstack.org/openstack-dev/devstack -b stable/queens

And then stack.sh

We will see it is using openstacksdk-0.12.0 and os_service_types-1.2.0

Having said that, we need the older versions; how do we configure devstack to use 
openstacksdk===0.11.3 and os-service-types===1.1.0?

Thanks.
Louie


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Jens Harbott added to core

2018-03-05 Thread Andrea Frittoli
On Mon, 5 Mar 2018, 1:02 am Ian Wienand,  wrote:

> Hello,
>
> Jens Harbott (frickler) has agreed to take on core responsibilities in
> devstack, so feel free to bug him about reviews :)
>

Yay +1

>
> We have also added the members of qa-release in directly to
> devstack-core, just for visibility (they already had permissions via
> qa-release -> devstack-release -> devstack-core).
>
> We have also added devstack-core as grenade core to hopefully expand
> coverage there.
>

Thanks, this helps indeed.
I started working on the zuulv3 native grenade jobs, hopefully this will
help getting a bit more speed on that.


> ---
>
> Always feel free to give a gentle ping on reviews that don't seem have
> received sufficient attention.
>
> But please also take a few minutes to compose a commit message!  I
> think sometimes devs have been deep in the weeds with their cool
> change and devstack requires just a few tweaks.  It's easy to forget
> not all reviewers may have this same context.  A couple of
> well-crafted sentences can avoid pulling projects and "git blame"
> archaeological digs, which gets everything going faster!
>


+1000

Andrea Frittoli (andreaf)

>
> Thanks,
>
> -i
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Jens Harbott added to core

2018-03-04 Thread Ian Wienand
Hello,

Jens Harbott (frickler) has agreed to take on core responsibilities in
devstack, so feel free to bug him about reviews :)

We have also added the members of qa-release in directly to
devstack-core, just for visibility (they already had permissions via
qa-release -> devstack-release -> devstack-core).

We have also added devstack-core as grenade core to hopefully expand
coverage there.

---

Always feel free to give a gentle ping on reviews that don't seem to have
received sufficient attention.

But please also take a few minutes to compose a commit message!  I
think sometimes devs have been deep in the weeds with their cool
change and devstack requires just a few tweaks.  It's easy to forget
not all reviewers may have this same context.  A couple of
well-crafted sentences can avoid pulling projects and "git blame"
archaeological digs, which gets everything going faster!

Thanks,

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Andreas Jaeger
On 2018-01-24 14:14, Daniel Mellado wrote:
> Hi everyone,
> 
> Since today, when I try to install devstack-plugin-container plugin over
> fedora. It complains in here [1] about not being able to sync the cache
> for the repo with the following error [2].
> 
> This is affecting me on Fedora26+ from different network locations, so I
> was wondering if someone from suse could have a look (it did work for
> Andreas in opensuse... thanks in advance!)

Just a heads up:

So, one problem: the signing key had expired. The key was extended but
not used - now the repo has been published again using the extended key,
so downloads work.

AFAIU there's still some problem that dnf is not happy with - Daniel is
investigating.
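
A hedged local check while that is investigated (my suggestion only, not
something Daniel has confirmed yet): stale dnf metadata may still reference
the expired key, so refreshing the cache is worth trying first:

    sudo dnf clean metadata
    sudo dnf makecache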

Andreas

> 
> [1]
> https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
> 
> [2] http://paste.openstack.org/show/652041/
> 
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> 


-- 
 Andreas Jaeger aj@{suse.com,opensuse.org} Twitter: jaegerandi
  SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
   GF: Felix Imendörffer, Jane Smithard, Graham Norton,
   HRB 21284 (AG Nürnberg)
GPG fingerprint = 93A3 365E CE47 B889 DF7F  FED1 389A 563C C272 A126


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Paul Belanger
On Wed, Jan 24, 2018 at 02:14:40PM +0100, Daniel Mellado wrote:
> Hi everyone,
> 
> Since today, when I try to install devstack-plugin-container plugin over
> fedora. It complains in here [1] about not being able to sync the cache
> for the repo with the following error [2].
> 
> This is affecting me on Fedora26+ from different network locations, so I
> was wondering if someone from suse could have a look (it did work for
> Andreas in opensuse... thanks in advance!)
> 
> [1]
> https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170
> 
> [2] http://paste.openstack.org/show/652041/
> 
We should consider mirroring this into our AFS mirror infrastructure to help
remove the dependency on opensuse servers. Then each regional mirror has a copy
and we don't always need to hit upstream.

-Paul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Broken repo on devstack-plugin-container for Fedora

2018-01-24 Thread Daniel Mellado
Hi everyone,

Since today, when I try to install the devstack-plugin-container plugin on
Fedora, it complains here [1] about not being able to sync the cache
for the repo, with the following error [2].

This is affecting me on Fedora 26+ from different network locations, so I
was wondering if someone from SUSE could have a look (it did work for
Andreas on openSUSE... thanks in advance!)

[1]
https://github.com/openstack/devstack-plugin-container/blob/master/devstack/lib/docker#L164-L170

[2] http://paste.openstack.org/show/652041/

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-11-22 Thread James E. Blair
cor...@inaugust.com (James E. Blair) writes:

> "gong_ys2004"  writes:
>
>> Hi, everyone
>> I am trying to migrate tacker's functional CI job into new zuul v3
>> framework, but it seems:
>> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
>> (https://review.openstack.org/#/c/516004/4/.zuul.yaml). I have:
>>   devstack_plugins:
>> heat: https://git.openstack.org/openstack/heat
>> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
>> aodh: https://git.openstack.org/openstack/aodh
>> ceilometer: https://git.openstack.org/openstack/ceilometer
>> barbican: https://git.openstack.org/openstack/barbican
>> mistral: https://git.openstack.org/openstack/mistral
>> tacker: https://git.openstack.org/openstack/tacker
>> but the running order seems to be
>> (http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz):
>> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
>> I need barbican to start before tacker.
>
> [I changed the subject to replace the 'openstack' tag with 'devstack',
> which is what I assume was intended.]
>
>
> As Yatin Karel later notes, this is handled as a regular python
> dictionary which means we process the keys in an indeterminate order.
>
> I can think of a few ways we can address this:
>
...
> 3) Add dependency information to devstack plugins, but rather than
> having devstack resolve it, have the Ansible role which writes out the
> local.conf read that information and resolve the order.  This lets us
> keep the actual information in plugins so we don't have to continually
> update the role, but it lets us perform the processing in the role
> (which is in Python) when writing the config file.
...
> After considering all of those, I think I favor option 3, because we
> should be able to implement it without too much difficulty, it will
> improve things by providing a known and documented location for plugins
> to specify dependencies, and once it is in place, we can still implement
> option 1 later if we want, using the same declaration.

I discussed this with Dean and we agreed on something close to this
option, except that we would do it in such a way that devstack could
potentially make use of this in the future.  For starters, it will be
easy for devstack to error if someone adds plugins in the wrong order.
If someone feels like having a lot of fun, they could actually implement
a dependency resolver in devstack.

I have two patches which implement this idea:

https://review.openstack.org/521965
https://review.openstack.org/522054

Once those land, we'll need to add the appropriate lines to barbican and
tacker's devstack plugin settings files, then the job you're creating
should start those plugins in the right order automatically.
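
For illustration, the declaration those settings files gain would look
roughly like the following (a sketch only; the exact helper name and syntax
are whatever the reviews above introduce):

# in openstack/tacker:devstack/settings -- declare that barbican must be
# enabled (and therefore started) before tacker
plugin_requires tacker barbican

With something like that in place, the job can list the plugins in any order
and devstack (or the Ansible role) can detect, or eventually fix, a wrong
ordering.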

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack][zuul] About devstack plugin orders and the log to contain the running local.conf

2017-10-30 Thread James E. Blair
"gong_ys2004"  writes:

> Hi, everyone
> I am trying to migrate tacker's functional CI job into new zuul v3 framework, 
> but it seems:
> 1. the devstack plugin order is not the one I specified in the .zuul.yaml
> https://review.openstack.org/#/c/516004/4/.zuul.yaml:I have:  
> devstack_plugins:
> heat: https://git.openstack.org/openstack/heat
> networking-sfc:  https://git.openstack.org/openstack/networking-sfc
> aodh: https://git.openstack.org/openstack/aodh
> ceilometer: https://git.openstack.org/openstack/ceilometer
> barbican: https://git.openstack.org/openstack/barbican
> mistral: https://git.openstack.org/openstack/mistral
> tacker: https://git.openstack.org/openstack/tacker
> but the running order 
> seems:http://logs.openstack.org/04/516004/4/check/tacker-functional-devstack/f365f21/job-output.txt.gz:
> local plugins=,ceilometer,aodh,mistral,networking-sfc,heat,tacker,barbican
> I need barbican to start before tacker.

[I changed the subject to replace the 'openstack' tag with 'devstack',
which is what I assume was intended.]


As Yatin Karel later notes, this is handled as a regular python
dictionary which means we process the keys in an indeterminate order.

I can think of a few ways we can address this:

1) Add dependency information to devstack plugins so that devstack
itself is able to work out the correct order.  This is perhaps the ideal
solution from a user experience perspective, but perhaps the most
difficult.

2) Add dependency information to the Ansible role so that it resolves
the order on its own.  This is attractive because it solves a problem
that is unique to this Ansible role entirely within the role.  However,
it means that new plugins would need to also update this role which is
in devstack itself, which partially defeats the purpose of plugins.

3) Add dependency information to devstack plugins, but rather than
having devstack resolve it, have the Ansible role which writes out the
local.conf read that information and resolve the order.  This lets us
keep the actual information in plugins so we don't have to continually
update the role, but it lets us perform the processing in the role
(which is in Python) when writing the config file.

4) Alter Zuul's handling of this to an ordered dictionary.  Then when
you specify a series of plugins, they would be processed in that order.
However, I'm not sure this works very well with Zuul job inheritance.
Imagine that a parent job enabled the barbican plugin, and a child job
enabled ceilometer, needed ceilometer to start before barbican.  There
would be no way to express that.

5) Change the definition of the dictionary to encode ordering
information.  Currently the dictionary schema is simply the name of the
plugin as the key, and either the contents of the "enable_plugin" line,
or "null" if the plugin should be disabled.  We could alter it to be:

  devstack_plugins:
    barbican:
      enabled: true
      url: https://git.openstack.org/openstack/barbican
      branch: testing
    tacker:
      enabled: true
      url: https://git.openstack.org/openstack/tacker
      requires:
        barbican: true

This option is very flexible, but makes using the jobs somewhat more
difficult because of the complexity of the data structure.

After considering all of those, I think I favor option 3, because we
should be able to implement it without too much difficulty, it will
improve things by providing a known and documented location for plugins
to specify dependencies, and once it is in place, we can still implement
option 1 later if we want, using the same declaration.
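
As a very rough sketch of the resolution step in option 3 (or 1): once each
plugin declares which plugins it requires, even plain coreutils can compute a
safe order, e.g.:

# feed "dependency dependent" pairs to tsort; the plugin names here are just
# the ones from this thread
printf '%s\n' 'barbican tacker' 'ceilometer aodh' | tsort

tsort prints every plugin with each dependency ahead of its dependents, which
is the order the role would then write the enable_plugin lines in.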

-Jim

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Tong Liu
The workaround [1] has not landed yet. I saw it has +1 workflow but has not
been merged.

Thanks,
Tong
[1] https://review.openstack.org/#/c/508344/

On Mon, Oct 2, 2017 at 6:51 AM, Mehdi Abaakouk  wrote:

> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue
> on telemetry integration jobs:
>
> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
>
>
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
>
>> On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
>>
>>> 2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
>>>
 We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

 http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

>>>
>>> That looks similar to what Ian fixed in [1], seems like your job needs
>>> a corresponding patch.
>>>
>>
>> Thanks, I have proposed the same kind of patch for telemetry [1]
>>
>> [1] https://review.openstack.org/508448
>>
>> --
>> Mehdi Abaakouk
>> mail: sil...@sileht.net
>> irc: sileht
>>
>
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mooney, Sean K
This also broke the legacy-tempest-dsvm-nova-os-vif gate job
http://logs.openstack.org/98/508498/1/check/legacy-tempest-dsvm-nova-os-vif/8fdf055/logs/devstacklog.txt.gz#_2017-09-29_14_15_41_961

> -Original Message-
> From: Mehdi Abaakouk [mailto:sil...@sileht.net]
> Sent: Monday, October 2, 2017 2:52 PM
> To: OpenStack Development Mailing List (not for usage questions)
> 
> Subject: Re: [openstack-dev] [devstack] zuulv3 gate status;
> LIBS_FROM_GIT failures
> 
> Looks like the LIBS_FROM_GIT workarounds have landed, but I still have
> some issue on telemetry integration jobs:
> 
> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz
> 
> On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:
> >On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:
> >>2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> >>>We also have our legacy-telemetry-dsvm-integration-ceilometer
> broken:
> >>>
> >>>http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt
> >>
> >>That looks similar to what Ian fixed in [1], seems like your job
> needs
> >>a corresponding patch.
> >
> >Thanks, I have proposed the same kind of patch for telemetry [1]
> >
> >[1] https://review.openstack.org/508448
> >
> >--
> >Mehdi Abaakouk
> >mail: sil...@sileht.net
> >irc: sileht
> 
> --
> Mehdi Abaakouk
> mail: sil...@sileht.net
> irc: sileht
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-10-02 Thread Mehdi Abaakouk

Looks like the LIBS_FROM_GIT workarounds have landed, but I still have some 
issue
on telemetry integration jobs:

 
http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e3bd35d/logs/devstacklog.txt.gz

On Fri, Sep 29, 2017 at 10:57:34AM +0200, Mehdi Abaakouk wrote:

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-29 Thread Attila Fazekas
I have overlay2 and super fast disk I/O (memory cheat + SSD); it is just the
CPU frequency that is not high. The CPU is a Broadwell and actually has a lot
more cores (E5-2630V4). Even a 5-year-old gamer CPU can be 2 times faster on a
single core, but cannot compete with all of the cores ;-)

This machine has seen faster setup times, but I'll return to that in another
topic.

On Tue, Sep 26, 2017 at 6:16 PM, Michał Jastrzębski 
wrote:

> On 26 September 2017 at 07:34, Attila Fazekas  wrote:
> > decompressing those registry tar.gz takes ~0.5 min on 2.2 GHz CPU.
> >
> > Fully pulling all container takes something like ~4.5 min (from
> localhost,
> > one leaf request at a time),
> > but on the gate vm  we usually have 4 core,
> > so it is possible to go bellow 2 min with better pulling strategy,
> > unless we hit some disk limit.
>
> Check your $docker info. If you kept defaults, storage driver will be
> devicemapper on loopback, which is awfully slow and not very reliable.
> Overlay2 is much better and should speed things up quite a bit. For me
> deployment of 5 node openstack on vms similar to gate took 6min (I had
> registry available in same network). Also if you pull single image it
> will download all base images as well, so next one will be
> significantly faster.
>
> >
> > On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
> > wrote:
> >>
> >> On 22 September 2017 at 17:21, Paul Belanger 
> >> wrote:
> >> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> >> >> > "if DevStack gets custom images prepped to make its jobs
> >> >> > run faster, won't Triple-O, Kolla, et cetera want the same and
> where
> >> >> > do we draw that line?). "
> >> >> >
> >> >> > IMHO we can try to have only one big image per distribution,
> >> >> > where the packages are the union of the packages requested by all
> >> >> > team,
> >> >> > minus the packages blacklisted by any team.
> >> >> [...]
> >> >>
> >> >> Until you realize that some projects want packages from UCA, from
> >> >> RDO, from EPEL, from third-party package repositories. Version
> >> >> conflicts mean they'll still spend time uninstalling the versions
> >> >> they don't want and downloading/installing the ones they do so we
> >> >> have to optimize for one particular set and make the rest
> >> >> second-class citizens in that scenario.
> >> >>
> >> >> Also, preinstalling packages means we _don't_ test that projects
> >> >> actually properly declare their system-level dependencies any
> >> >> longer. I don't know if anyone's concerned about that currently, but
> >> >> it used to be the case that we'd regularly add/break the package
> >> >> dependency declarations in DevStack because of running on images
> >> >> where the things it expected were preinstalled.
> >> >> --
> >> >> Jeremy Stanley
> >> >
> >> > +1
> >> >
> >> > We spend a lot of effort trying to keep the 6 images we have in
> nodepool
> >> > working
> >> > today, I can't imagine how much work it would be to start adding more
> >> > images per
> >> > project.
> >> >
> >> > Personally, I'd like to audit things again once we roll out zuulv3, I
> am
> >> > sure
> >> > there are some tweaks we could make to help speed up things.
> >>
> >> I don't understand, why would you add images per project? We have all
> >> the images there.. What I'm talking about is to leverage what we'll
> >> have soon (registry) to lower time of gates/DIB infra requirements
> >> (DIB would hardly need to refresh images...)
> >>
> >> >
> >> > 
> __
> >> > OpenStack Development Mailing List (not for usage questions)
> >> > Unsubscribe:
> >> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> >> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >>
> >> 
> __
> >> OpenStack Development Mailing List (not for usage questions)
> >> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
> >
> >
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
> >
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 5:41 GMT+00:00 Ian Wienand :
> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>
>> I'm not aware of issues other than these at this time
>
>
> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
> also failing for unknown reasons.  Any debugging would be helpful,
> thanks.

It seems there are multiple issues with the multinode jobs:

a) post_failures due to an error in log collection, sample fix at
https://review.openstack.org/508473
b) jobs are being run as two identical tasks on primary and subnodes,
triggering https://bugs.launchpad.net/zun/+bug/1720240

Other issues:
- openstack-tox-py27 is being run on trusty nodes instead of xenial
- unit tests are missing in at least neutron gate runs
- some patches are not getting any results from zuul

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 08:16:38AM +, Jens Harbott wrote:

2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :

We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt


That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.


Thanks, I have proposed the same kind of patch for telemetry [1]

[1] https://review.openstack.org/508448

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Jens Harbott
2017-09-29 7:44 GMT+00:00 Mehdi Abaakouk :
> On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:
>>
>> On 09/29/2017 03:37 PM, Ian Wienand wrote:
>>>
>>> I'm not aware of issues other than these at this time
>>
>>
>> Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
>> also failing for unknown reasons.  Any debugging would be helpful,
>> thanks.
>
>
> We also have our legacy-telemetry-dsvm-integration-ceilometer broken:
>
> http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

That looks similar to what Ian fixed in [1], seems like your job needs
a corresponding patch.

[1] https://review.openstack.org/#/c/508396

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-29 Thread Mehdi Abaakouk

On Fri, Sep 29, 2017 at 03:41:54PM +1000, Ian Wienand wrote:

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.


We also have our legacy-telemetry-dsvm-integration-ceilometer broken:

http://logs.openstack.org/32/508132/1/check/legacy-telemetry-dsvm-integration-ceilometer/e185ae1/logs/devstack-gate-setup-workspace-new.txt

--
Mehdi Abaakouk
mail: sil...@sileht.net
irc: sileht


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

On 09/29/2017 03:37 PM, Ian Wienand wrote:

I'm not aware of issues other than these at this time


Actually, that is not true.  legacy-grenade-dsvm-neutron-multinode is
also failing for unknown reasons.  Any debugging would be helpful,
thanks.

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] zuulv3 gate status; LIBS_FROM_GIT failures

2017-09-28 Thread Ian Wienand

Hi,

There's a few issues with devstack and the new zuulv3 environment

LIBS_FROM_GIT is broken because the new repos don't have a remote
set up, meaning "pip freeze" doesn't give us useful output.  [1] just
disables the test as a quick fix for this; [2] is a possible real fix
but should be tried a bit more carefully in case there are corners I
missed.  This will be affecting other projects.
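
To make the failure mode concrete, a rough sketch (the path and library name
are illustrative, not the actual job layout):

cd src/git.openstack.org/openstack/keystoneauth   # a repo as zuulv3 prepares it
git remote -v                                     # empty: no remote configured
# pip reports an editable git install's URL based on the origin remote, so
# "pip freeze" has nothing git-ish for the LIBS_FROM_GIT check to match
# against until a remote is added
pip freeze | grep -i keystoneauth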

However, before we can get this in, we need to fix the gate.  The
"updown" tests have missed a couple of requirement projects due to
them setting flags that were not detected during migration.  [3] is a
fix for that and seems to work.

For some reason, the legacy-tempest-dsvm-nnet job is running against
master, and failing as nova-net is deprecated there.  I'm clutching at
straws to understand this one, as it seems like the branch filters are
setup correctly; [4] is one guess?

I'm not aware of issues other than these at this time

-i

[1] https://review.openstack.org/508344
[2] https://review.openstack.org/508366
[3] https://review.openstack.org/508396
[4] https://review.openstack.org/508405

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Michał Jastrzębski
On 26 September 2017 at 07:34, Attila Fazekas  wrote:
> decompressing those registry tar.gz takes ~0.5 min on 2.2 GHz CPU.
>
> Fully pulling all container takes something like ~4.5 min (from localhost,
> one leaf request at a time),
> but on the gate vm  we usually have 4 core,
> so it is possible to go bellow 2 min with better pulling strategy,
> unless we hit some disk limit.

Check your `docker info`. If you kept the defaults, the storage driver will be
devicemapper on loopback, which is awfully slow and not very reliable.
Overlay2 is much better and should speed things up quite a bit. For me,
deployment of a 5-node OpenStack on VMs similar to the gate took 6 min (I had
a registry available in the same network). Also, if you pull a single image it
will download all the base images as well, so the next one will be
significantly faster.
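
For anyone bitten by this, a minimal sketch of the switch (assumes a systemd
host, and that /var/lib/docker can be recreated, since existing images and
containers become invisible after changing the storage driver):

docker info | grep -i 'storage driver'   # devicemapper on loopback with the defaults

sudo tee /etc/docker/daemon.json <<'EOF'
{ "storage-driver": "overlay2" }
EOF
sudo systemctl restart docker

docker info | grep -i 'storage driver'   # should now report overlay2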

>
> On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
> wrote:
>>
>> On 22 September 2017 at 17:21, Paul Belanger 
>> wrote:
>> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> >> > "if DevStack gets custom images prepped to make its jobs
>> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> >> > do we draw that line?). "
>> >> >
>> >> > IMHO we can try to have only one big image per distribution,
>> >> > where the packages are the union of the packages requested by all
>> >> > team,
>> >> > minus the packages blacklisted by any team.
>> >> [...]
>> >>
>> >> Until you realize that some projects want packages from UCA, from
>> >> RDO, from EPEL, from third-party package repositories. Version
>> >> conflicts mean they'll still spend time uninstalling the versions
>> >> they don't want and downloading/installing the ones they do so we
>> >> have to optimize for one particular set and make the rest
>> >> second-class citizens in that scenario.
>> >>
>> >> Also, preinstalling packages means we _don't_ test that projects
>> >> actually properly declare their system-level dependencies any
>> >> longer. I don't know if anyone's concerned about that currently, but
>> >> it used to be the case that we'd regularly add/break the package
>> >> dependency declarations in DevStack because of running on images
>> >> where the things it expected were preinstalled.
>> >> --
>> >> Jeremy Stanley
>> >
>> > +1
>> >
>> > We spend a lot of effort trying to keep the 6 images we have in nodepool
>> > working
>> > today, I can't imagine how much work it would be to start adding more
>> > images per
>> > project.
>> >
>> > Personally, I'd like to audit things again once we roll out zuulv3, I am
>> > sure
>> > there are some tweaks we could make to help speed up things.
>>
>> I don't understand, why would you add images per project? We have all
>> the images there.. What I'm talking about is to leverage what we'll
>> have soon (registry) to lower time of gates/DIB infra requirements
>> (DIB would hardly need to refresh images...)
>>
>> >
>> > __
>> > OpenStack Development Mailing List (not for usage questions)
>> > Unsubscribe:
>> > openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>>
>> __
>> OpenStack Development Mailing List (not for usage questions)
>> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-26 Thread Attila Fazekas
Decompressing those registry tar.gz files takes ~0.5 min on a 2.2 GHz CPU.

Fully pulling all containers takes something like ~4.5 min (from localhost,
one leaf request at a time),
but on the gate VMs we usually have 4 cores,
so it is possible to go below 2 min with a better pulling strategy,
unless we hit some disk limit.


On Sat, Sep 23, 2017 at 5:12 AM, Michał Jastrzębski 
wrote:

> On 22 September 2017 at 17:21, Paul Belanger 
> wrote:
> > On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> >> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> >> > "if DevStack gets custom images prepped to make its jobs
> >> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> >> > do we draw that line?). "
> >> >
> >> > IMHO we can try to have only one big image per distribution,
> >> > where the packages are the union of the packages requested by all
> team,
> >> > minus the packages blacklisted by any team.
> >> [...]
> >>
> >> Until you realize that some projects want packages from UCA, from
> >> RDO, from EPEL, from third-party package repositories. Version
> >> conflicts mean they'll still spend time uninstalling the versions
> >> they don't want and downloading/installing the ones they do so we
> >> have to optimize for one particular set and make the rest
> >> second-class citizens in that scenario.
> >>
> >> Also, preinstalling packages means we _don't_ test that projects
> >> actually properly declare their system-level dependencies any
> >> longer. I don't know if anyone's concerned about that currently, but
> >> it used to be the case that we'd regularly add/break the package
> >> dependency declarations in DevStack because of running on images
> >> where the things it expected were preinstalled.
> >> --
> >> Jeremy Stanley
> >
> > +1
> >
> > We spend a lot of effort trying to keep the 6 images we have in nodepool
> working
> > today, I can't imagine how much work it would be to start adding more
> images per
> > project.
> >
> > Personally, I'd like to audit things again once we roll out zuulv3, I am
> sure
> > there are some tweaks we could make to help speed up things.
>
> I don't understand, why would you add images per project? We have all
> the images there.. What I'm talking about is to leverage what we'll
> have soon (registry) to lower time of gates/DIB infra requirements
> (DIB would hardly need to refresh images...)
>
> > 
> __
> > OpenStack Development Mailing List (not for usage questions)
> > Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:
> unsubscribe
> > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-09-25 Thread Tony Breeds
On Fri, Jun 16, 2017 at 12:06:47PM +1000, Tony Breeds wrote:
> Hi All,
>   I just pushed a review [1] to bump the minimum etcd version to
> 3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
> cycle to be making changes like this, but releasing pike with a dependency
> on 3.1.x makes it harder for users on ppc64le (not many, but a few :D)
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/474825

So this came up at the PTG and the current plan is:

Interim solution:
 1. Get the mirroring tool to the point it can be consumed by infra.
 2. setup a new zuulv3 job to run this and do the mirroring.

Middle-term solution:
 1. Get the etcd3 packages updated in Debian/Ubuntu and create a PPA (or
similar) for infra to consume.

Both of these are intended to be done during Queens.

Long-term plan:
 1. Ensure the packages above are just there in 18.04 so we can put this
behind us.

With hindsight I think we need to add something like "Current packages
can be consumed in our CI" as a requirement for anything added as a base
service.



> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

Yours Tony.


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 17:21, Paul Belanger  wrote:
> On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
>> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> > "if DevStack gets custom images prepped to make its jobs
>> > run faster, won't Triple-O, Kolla, et cetera want the same and where
>> > do we draw that line?). "
>> >
>> > IMHO we can try to have only one big image per distribution,
>> > where the packages are the union of the packages requested by all team,
>> > minus the packages blacklisted by any team.
>> [...]
>>
>> Until you realize that some projects want packages from UCA, from
>> RDO, from EPEL, from third-party package repositories. Version
>> conflicts mean they'll still spend time uninstalling the versions
>> they don't want and downloading/installing the ones they do so we
>> have to optimize for one particular set and make the rest
>> second-class citizens in that scenario.
>>
>> Also, preinstalling packages means we _don't_ test that projects
>> actually properly declare their system-level dependencies any
>> longer. I don't know if anyone's concerned about that currently, but
>> it used to be the case that we'd regularly add/break the package
>> dependency declarations in DevStack because of running on images
>> where the things it expected were preinstalled.
>> --
>> Jeremy Stanley
>
> +1
>
> We spend a lot of effort trying to keep the 6 images we have in nodepool 
> working
> today, I can't imagine how much work it would be to start adding more images 
> per
> project.
>
> Personally, I'd like to audit things again once we roll out zuulv3, I am sure
> there are some tweaks we could make to help speed up things.

I don't understand, why would you add images per project? We have all
the images there.. What I'm talking about is to leverage what we'll
have soon (registry) to lower time of gates/DIB infra requirements
(DIB would hardly need to refresh images...)

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Paul Belanger
On Fri, Sep 22, 2017 at 02:31:20PM +, Jeremy Stanley wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> > "if DevStack gets custom images prepped to make its jobs
> > run faster, won't Triple-O, Kolla, et cetera want the same and where
> > do we draw that line?). "
> > 
> > IMHO we can try to have only one big image per distribution,
> > where the packages are the union of the packages requested by all team,
> > minus the packages blacklisted by any team.
> [...]
> 
> Until you realize that some projects want packages from UCA, from
> RDO, from EPEL, from third-party package repositories. Version
> conflicts mean they'll still spend time uninstalling the versions
> they don't want and downloading/installing the ones they do so we
> have to optimize for one particular set and make the rest
> second-class citizens in that scenario.
> 
> Also, preinstalling packages means we _don't_ test that projects
> actually properly declare their system-level dependencies any
> longer. I don't know if anyone's concerned about that currently, but
> it used to be the case that we'd regularly add/break the package
> dependency declarations in DevStack because of running on images
> where the things it expected were preinstalled.
> -- 
> Jeremy Stanley

+1

We spend a lot of effort trying to keep the 6 images we have in nodepool working
today, I can't imagine how much work it would be to start adding more images per
project.

Personally, I'd like to audit things again once we roll out zuulv3, I am sure
there are some tweaks we could make to help speed up things.

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 11:45, Clark Boylan  wrote:
> On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
>> Another, more revolutionary (for good or ill) alternative would be to
>> move gates to run Kolla instead of DevStack. We're working towards
>> registry of images, and we support most of openstack services now. If
>> we enable mixed installation (your service in devstack-ish way, others
>> via Kolla), that should lower the amount of downloads quite
>> dramatically (lots of it will be downloads from registry which will be
>> mirrored/cached in every nodepool). Then all we really need is to
>> support barebone image with docker and ansible installed and that's
>> it.
>
> Except that it very likely isn't going to use less bandwidth. We already
> mirror most of these package repos so all transfers are local to the
> nodepool cloud region. In total we seem to grab about 139MB of packages
> for a neutron dvr multinode scenario job (146676348 bytes) on Ubuntu
> Xenial. This is based off the package list compiled at
> http://paste.openstack.org/raw/621753/ then asking apt-cache for the
> package size for the latest version.
>
> Kolla images on the other hand are in the multigigabyte range
> http://tarballs.openstack.org/kolla/images/.
>
> Clark

Right, all 200+ of them; with proper registry management it's going to
be more streamlined. That will also lower the amount of effort needed to
handle DIB images. We are going to build them anyway, so the net bandwidth
will actually be lower... Also, I don't think it's bandwidth that's the
issue here as much as general package management and the installation of
packages, even from a locally available mirror; docker would help with
that.

>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] pike time growth in August

2017-09-22 Thread Clark Boylan
On Fri, Sep 22, 2017, at 01:18 PM, Attila Fazekas wrote:
> The main offenders reported by devstack do not seem to explain the
> growth visible on OpenStack Health [1].
> The logs have also started to disappear, which does not make it easy to
> figure out.
> 
> 
> Which code/infra changes can be related ?
> 
> 
> http://status.openstack.org/openstack-health/#/test/devstack?resolutionKey=day&duration=P6M

A big factor is likely the loss of OSIC. That cloud performed really
well and now we don't have it anymore so averages will increase.

Clark

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] pike time growth in August

2017-09-22 Thread Attila Fazekas
The main offenders reported by devstack do not seem to explain the
growth visible on OpenStack Health [1].
The logs have also started to disappear, which does not make it easy to figure out.


Which code/infra changes could be related?


http://status.openstack.org/openstack-health/#/test/devstack?resolutionKey=day&duration=P6M
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Clark Boylan
On Fri, Sep 22, 2017, at 08:58 AM, Michał Jastrzębski wrote:
> Another, more revolutionary (for good or ill) alternative would be to
> move gates to run Kolla instead of DevStack. We're working towards
> registry of images, and we support most of openstack services now. If
> we enable mixed installation (your service in devstack-ish way, others
> via Kolla), that should lower the amount of downloads quite
> dramatically (lots of it will be downloads from registry which will be
> mirrored/cached in every nodepool). Then all we really need is to
> support barebone image with docker and ansible installed and that's
> it.

Except that it very likely isn't going to use less bandwidth. We already
mirror most of these package repos so all transfers are local to the
nodepool cloud region. In total we seem to grab about 139MB of packages
for a neutron dvr multinode scenario job (146676348 bytes) on Ubuntu
Xenial. This is based off the package list compiled at
http://paste.openstack.org/raw/621753/ then asking apt-cache for the
package size for the latest version.

Kolla images on the other hand are in the multigigabyte range
http://tarballs.openstack.org/kolla/images/.
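
For reference, the package-size estimate above can be reproduced along these
lines (assuming the paste is saved as packages.txt with one package name per
line):

xargs -a packages.txt apt-cache show --no-all-versions \
  | awk '/^Size:/ {sum += $2} END {printf "%.0f MB\n", sum / 1024 / 1024}'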

Clark


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Michał Jastrzębski
On 22 September 2017 at 07:31, Jeremy Stanley  wrote:
> On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
>> "if DevStack gets custom images prepped to make its jobs
>> run faster, won't Triple-O, Kolla, et cetera want the same and where
>> do we draw that line?). "
>>
>> IMHO we can try to have only one big image per distribution,
>> where the packages are the union of the packages requested by all team,
>> minus the packages blacklisted by any team.
> [...]
>
> Until you realize that some projects want packages from UCA, from
> RDO, from EPEL, from third-party package repositories. Version
> conflicts mean they'll still spend time uninstalling the versions
> they don't want and downloading/installing the ones they do so we
> have to optimize for one particular set and make the rest
> second-class citizens in that scenario.
>
> Also, preinstalling packages means we _don't_ test that projects
> actually properly declare their system-level dependencies any
> longer. I don't know if anyone's concerned about that currently, but
> it used to be the case that we'd regularly add/break the package
> dependency declarations in DevStack because of running on images
> where the things it expected were preinstalled.
> --
> Jeremy Stanley

Another, more revolutionary (for good or ill) alternative would be to
move the gates to run Kolla instead of DevStack. We're working towards a
registry of images, and we support most OpenStack services now. If
we enable mixed installation (your service in a devstack-ish way, the others
via Kolla), that should lower the amount of downloads quite
dramatically (most of it will be downloads from the registry, which will be
mirrored/cached in every nodepool). Then all we really need is to
support a barebones image with docker and ansible installed, and that's
it.

> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Jeremy Stanley
On 2017-09-22 15:04:43 +0200 (+0200), Attila Fazekas wrote:
> "if DevStack gets custom images prepped to make its jobs
> run faster, won't Triple-O, Kolla, et cetera want the same and where
> do we draw that line?). "
> 
> IMHO we can try to have only one big image per distribution,
> where the packages are the union of the packages requested by all team,
> minus the packages blacklisted by any team.
[...]

Until you realize that some projects want packages from UCA, from
RDO, from EPEL, from third-party package repositories. Version
conflicts mean they'll still spend time uninstalling the versions
they don't want and downloading/installing the ones they do so we
have to optimize for one particular set and make the rest
second-class citizens in that scenario.

Also, preinstalling packages means we _don't_ test that projects
actually properly declare their system-level dependencies any
longer. I don't know if anyone's concerned about that currently, but
it used to be the case that we'd regularly add/break the package
dependency declarations in DevStack because of running on images
where the things it expected were preinstalled.
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-22 Thread Attila Fazekas
"if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?). "

IMHO we can try to have only one big image per distribution,
where the packages are the union of the packages requested by all teams,
minus the packages blacklisted by any team.

You need to provide a bug link (distribution/upstream bug) for blacklisting
a package.

It is very unlikely we will run out of disk space just because of too many
packages; usually, if a package causes harm to anything, it is a
distro/upstream bug which is expected to be solved within 1-2 cycles in the
worst-case scenario.

If the above proves not to work, we need to draw the line based on the
expected usage frequency.




On Wed, Sep 20, 2017 at 3:46 PM, Jeremy Stanley  wrote:

> On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
> [...]
> > The image building was the good old working solution and unless
> > the image build become a super expensive thing, this is still the
> > best option.
> [...]
>
> It became a super expensive thing, and that's the main reason we
> stopped doing it. Now that Nodepool has grown support for
> distributed/parallel image building and uploading, the cost model
> may have changed a bit in that regard so I agree it doesn't hurt to
> revisit that decision. Nevertheless it will take a fair amount of
> convincing that the savings balances out the costs (not just in
> resource consumption but also administrative overhead and community
> impact... if DevStack gets custom images prepped to make its jobs
> run faster, won't Triple-O, Kolla, et cetera want the same and where
> do we draw that line?).
> --
> Jeremy Stanley
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-20 Thread Jeremy Stanley
On 2017-09-20 15:17:28 +0200 (+0200), Attila Fazekas wrote:
[...]
> The image building was the good old working solution and unless
> the image build become a super expensive thing, this is still the
> best option.
[...]

It became a super expensive thing, and that's the main reason we
stopped doing it. Now that Nodepool has grown support for
distributed/parallel image building and uploading, the cost model
may have changed a bit in that regard so I agree it doesn't hurt to
revisit that decision. Nevertheless it will take a fair amount of
convincing that the savings balances out the costs (not just in
resource consumption but also administrative overhead and community
impact... if DevStack gets custom images prepped to make its jobs
run faster, won't Triple-O, Kolla, et cetera want the same and where
do we draw that line?).
-- 
Jeremy Stanley


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-20 Thread Attila Fazekas
On Wed, Sep 20, 2017 at 3:11 AM, Ian Wienand  wrote:

> On 09/20/2017 09:30 AM, David Moreau Simard wrote:
>
>> At what point does it become beneficial to build more than one image per
>> OS
>> that is more aggressively tuned/optimized for a particular purpose ?
>>
>
> ... and we can put -dsvm- in the jobs names to indicate it should run
> on these nodes :)
>
> Older hands than myself will remember even more issues, but the
> "thicker" the base-image has been has traditionally just lead to a lot
> more corners for corner-cases can hide in.  We saw this all the time
> with "snapshot" images where we'd be based on upstream images that
> would change ever so slightly and break things, leading to
> diskimage-builder and the -minimal build approach.
>
> That said, in a zuulv3 world where we are not caching all git and have
> considerably smaller images, a nodepool that has a scheduler that
> accounts for flavor sizes and could conceivably understand similar for
> images, and where we're building with discrete elements that could
> "bolt-on" things like a list-of-packages install sanely to daily
> builds ... it's not impossible to imagine.
>
> -i


The problem is that these package install steps are not really I/O
bottlenecked in most cases; even at regular DSL speeds you can frequently
see the decompress and post-config steps take more time.

The site-local cache/mirror has a visible benefit, but it does not
eliminate the issues.

The main enemy is the single-threaded, CPU-intensive work in most
install/config related scripts; the 2nd most common issue is running
high-latency steps serially, which ends up saturating neither the CPU
nor the I/O.

Fat images are generally cheaper even if your cloud has only 1 Gb
Ethernet for image transfer. You gain more by baking the packages into
the image than the 1GbE can steal from you, because you also save the
time that would otherwise be lost on CPU-intensive operations or random
disk access.

It is safe to add all distro packages used by devstack to the cloud image.

Historically we had issues with some base-image packages whose presence
changed the behavior of some component, for example firewalld vs. libvirt
(likely an already solved issue); devstack explicitly removes these
packages when necessary. Those packages are not requested by devstack!

Fedora/CentOS also has/had issues with pypi packages overlapping the main
filesystem (too long a story, pointing fingers..); it is generally not a
good idea to add packages from pypi to an image whose content might be
overridden by the distro's package manager.

The distribution package install time delays the gate response: when the
slowest running job is delayed by this, the whole response is delayed.

It is a user-facing latency issue, which should be solved even if the
cost is higher.

Image building was the good old working solution, and unless the image
build becomes a super expensive thing, it is still the best option.

A site-local mirror is also expected to help make the image build step(s)
faster and safer.

The other option is ready scripts.

>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Ian Wienand

On 09/20/2017 09:30 AM, David Moreau Simard wrote:

At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose ?


... and we can put -dsvm- in the jobs names to indicate it should run
on these nodes :)

Older hands than myself will remember even more issues, but the
"thicker" the base image has been, the more corners it has traditionally
offered for corner cases to hide in.  We saw this all the time
with "snapshot" images, where we'd be based on upstream images that
would change ever so slightly and break things, leading to
diskimage-builder and the -minimal build approach.

That said, in a zuulv3 world where we are not caching all git and have
considerably smaller images, a nodepool that has a scheduler that
accounts for flavor sizes and could conceivably understand similar for
images, and where we're building with discrete elements that could
"bolt-on" things like a list-of-packages install sanely to daily
builds ... it's not impossible to imagine.
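
The "bolt-on" part is already close to trivial with diskimage-builder's
package-installs machinery; a sketch (the element name and package list are
made up):

mkdir -p elements/devstack-base-packages
echo package-installs > elements/devstack-base-packages/element-deps
cat > elements/devstack-base-packages/package-installs.yaml <<'EOF'
# roughly whatever devstack's files/debs/general would otherwise install at job time
git:
bridge-utils:
libffi-dev:
libssl-dev:
EOF
# and then something like:
#   ELEMENTS_PATH=$PWD/elements disk-image-create ubuntu-minimal vm \
#       devstack-base-packages -o devstack-node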

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread David Moreau Simard
On Tue, Sep 19, 2017 at 9:03 AM, Jeremy Stanley  wrote:
>
> In order to reduce image sizes and the time it takes to build
> images, once we had local package caches in each provider we stopped
> pre-retrieving packages onto the images. Is the time spent at this
> stage mostly while downloading package files (which is what that
> used to alleviate) or is it more while retrieving indices or
> installing the downloaded packages (things having them pre-retrieved
> on the images never solved anyway)?
>

At what point does it become beneficial to build more than one image per OS
that is more aggressively tuned/optimized for a particular purpose ?

We could take more freedom in a devstack-specific image, like pre-installing
packages beyond what the base OS provides, etc.
Different projects could take this kind of freedom to optimize build times
according to their needs as well.

Here's an example of something we once did in RDO:
1) Aggregate the list of every package installed (rpm -qa) at the end
of several jobs
2) From that sorted and uniq'd list, work out which repositories each
package came from
3) Blacklist every package that was not installed from a base
operating system repository
(i.e., blacklist every package and its dependencies from RDO, since
we'll be testing these)
4) Pre-install every package that was not blacklisted in our images

The end result was a list of >700 packages [1] completely unrelated to
OpenStack that ended up
being installed anyway throughout different jobs.
To give an idea of numbers, a fairly vanilla CentOS image has ~400
packages installed.
You can find the (rudimentary) script that achieves this filtering here [2];
a rough sketch of the idea follows below.

[1]: 
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/nodepool/scripts/weirdo-packages.txt
[2]: 
https://github.com/rdo-infra/review.rdoproject.org-config/blob/master/nodepool/scripts/filter_packages.sh
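
The same idea, reduced to its core shape (this is not the script in [2], and
the file names are made up):

# step 1: union of the "rpm -qa" lists collected at the end of each job
sort -u rpm-qa-*.txt > all-packages.txt

# steps 2+3: drop anything that ships on the vanilla image or comes from the
# repos under test (RDO), since the jobs must keep installing those themselves
sort -u centos-base-packages.txt rdo-built-packages.txt > blacklist.txt

# step 4: whatever is left is safe to pre-install into the nodepool image
comm -23 all-packages.txt blacklist.txt > preinstall-packages.txt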

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Ian Wienand

On 09/19/2017 11:03 PM, Jeremy Stanley wrote:

On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]

The jobs does 120..220 sec apt-get install and packages defined
/files/debs/general are missing from the images before starting the job.



Is the time spent at this stage mostly while downloading package
files (which is what that used to alleviate) or is it more while
retrieving indices or installing the downloaded packages (things
having them pre-retrieved on the images never solved anyway)?


As you're both aware, but others may not be, at the end of the logs
devstack does keep a timing overview that looks something like

=========================
DevStack Component Timing
=========================
Total runtime     1352

run_process         15
test_with_retry      4
apt-get-update       2
pip_install        270
osc                365
wait_for_service    29
dbsync              23
apt-get            137
=========================

That doesn't break things down into download vs. install, but apt does
have a download summary that can be grepped for

---
$ cat devstacklog.txt.gz | grep Fetched
2017-09-19 17:52:45.808 | Fetched 39.3 MB in 1s (26.3 MB/s)
2017-09-19 17:53:41.115 | Fetched 185 kB in 0s (3,222 kB/s)
2017-09-19 17:54:16.365 | Fetched 23.5 MB in 1s (21.1 MB/s)
2017-09-19 17:54:25.779 | Fetched 18.3 MB in 0s (35.6 MB/s)
2017-09-19 17:54:39.439 | Fetched 59.1 kB in 0s (0 B/s)
2017-09-19 17:54:40.986 | Fetched 2,128 kB in 0s (40.0 MB/s)
2017-09-19 17:57:37.190 | Fetched 333 kB in 0s (1,679 kB/s)
2017-09-19 17:58:17.592 | Fetched 50.5 MB in 2s (18.1 MB/s)
2017-09-19 17:58:26.947 | Fetched 5,829 kB in 0s (15.5 MB/s)
2017-09-19 17:58:49.571 | Fetched 5,065 kB in 1s (3,719 kB/s)
2017-09-19 17:59:25.438 | Fetched 9,758 kB in 0s (44.5 MB/s)
2017-09-19 18:00:14.373 | Fetched 77.5 kB in 0s (286 kB/s)
---

As mentioned, we set up the package manager to point to a region-local
mirror during node bringup.  Depending on the i/o situation, it is
probably just as fast as coming off disk :) Note (also as mentioned)
these were never pre-installed, just pre-downloaded to an on-disk
cache area (as an aside, I don't think dnf was ever really happy with
that situation and kept being too smart and clearing its caches).

If you're feeling regexy you could maybe do something similar with the
pip "Collecting" bits in the logs ... one idea for investigation down
that path is whether we could save time by somehow collecting larger
batches of requirements and doing fewer pip invocations?
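For example, a couple of quick greps along those lines (untested sketch;
adjust for the exact log format, and use cat instead of zcat if the log is
not actually compressed):

---
# how many packages pip pulled in, and the most frequently collected ones
$ zcat devstacklog.txt.gz | grep -c 'Collecting '
$ zcat devstacklog.txt.gz | grep -o 'Collecting [^ ]*' | sort | uniq -c | sort -rn | head
---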

-i

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Jeremy Stanley
On 2017-09-19 14:15:53 +0200 (+0200), Attila Fazekas wrote:
[...]
> Let's start with the first obvious difference compared to the old-time
> jobs.:
> The jobs does 120..220 sec apt-get install and packages defined
> /files/debs/general are missing from the images before starting the job.
> 
> We used to bake multiple packages into the images based on the package list
> provided by devstack in order to save time.
> 
> Why this does not happens anymore ?
> Is anybody working on solving this issue ?
> Is any blocker technical issue / challenge exists ?
> Was it a design decision ?
[...]

In order to reduce image sizes and the time it takes to build
images, once we had local package caches in each provider we stopped
pre-retrieving packages onto the images. Is the time spent at this
stage mostly while downloading package files (which is what that
used to alleviate) or is it more while retrieving indices or
installing the downloaded packages (things having them pre-retrieved
on the images never solved anyway)?

Our earlier analysis of the impact of dropping package files from
images indicated it was negligible for most jobs because of the
caching package mirrors we maintain nearby.
-- 
Jeremy Stanley


signature.asc
Description: Digital signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] Why do we apt-get install NEW files/debs/general at job time ?

2017-09-19 Thread Attila Fazekas
The gate-tempest-dsvm-neutron-full-ubuntu-xenial job is 20..30 min slower
than it is supposed to be / used to be.

The extra time has multiple causes, and it is not because we test more :( .
Usually we are just less smart than before.

A huge time increase is visible in devstack as well.
devstack is advertised as:

Running devstack ... this takes 10 - 15 minutes (logs in
logs/devstacklog.txt.gz)

The actual time is 20 - 25 min according to openstack health:
http://status.openstack.org/openstack-health/#/test/devstack?resolutionKey=day&duration=P6M


Let's start with the first obvious difference compared to the old-time
jobs.:
The jobs do 120..220 sec of apt-get install, and packages defined in
/files/debs/general are missing from the images before the job starts.

We used to bake multiple packages into the images based on the package list
provided by devstack in order to save time.

Why does this not happen anymore?
Is anybody working on solving this issue?
Does any blocking technical issue / challenge exist?
Was it a design decision?

We have a similar issue with PyPI usage as well.

PS.:
Generally it is a good idea to group these kinds of package install commands into
one big pip/apt-get/yum invocation, because these tools have significant
start-up time and they also need to process the dependency graph at
install/update time.
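As an illustration, the difference is roughly the following (untested
sketch; the real files/debs/general entries also carry "# dist:" /
"# NOPRIME" annotations that would need filtering first):

# one dependency resolution and one transaction for the whole list:
xargs -a files/debs/general sudo apt-get install -y

# versus paying the start-up and depsolve cost once per package:
while read -r pkg; do sudo apt-get install -y "$pkg"; done < files/debs/general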
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] SUSE jobs started failing on peakmem_tracker

2017-09-08 Thread Dirk Müller
Hi David,

Thanks for looking into this. I do watch devstack changes every once in a
while but couldn't catch this one in time. The missing pmap -XX flag
problem has been there forever, but it used to be non-fatal. Now it is fatal,
which is in principle a good change.

I will make sure that it passes again on SUSE shortly.
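For anyone who needs to unblock a local run in the meantime, something like
this in local.conf should do it (untested sketch; peakmem_tracker is the
devstack service name for the memory tracker):

# skip the memory tracker where procps' pmap does not support -XX
disable_service peakmem_tracker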

Greetings,
Dirk

I was trying to make sure the existing openSUSE jobs passed on Zuul v3
but even the regular v2 jobs are hitting a bug I filed here [1].
As far as I know, these jobs were passing until recently.



This is preventing us from sanity checking that everything works out
of the box for the suse devstack job for the v3 migration.
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] SUSE jobs started failing on peakmem_tracker

2017-09-08 Thread David Moreau Simard
Hi,

I was trying to make sure the existing openSUSE jobs passed on Zuul v3
but even the regular v2 jobs are hitting a bug I filed here [1].
As far as I know, these jobs were passing until recently.

This is preventing us from sanity checking that everything works out
of the box for the suse devstack job for the v3 migration.
I've disabled this service through localconf in the v3 job for the
time being [2] to test further.

[1]: https://bugs.launchpad.net/devstack/+bug/1716066
[2]: https://review.openstack.org/#/c/502147/

David Moreau Simard
Senior Software Engineer | OpenStack RDO

dmsimard = [irc, github, twitter]

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [ironic] [nova] Trying again on wait_for_compute in devstack

2017-08-02 Thread Brian Haley

On 08/02/2017 07:17 AM, Sean Dague wrote:

The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute nodes which have checked in,
but aren't yet assigned to a cell, and puts them into a cell of your
choosing. We do this in devstack-gate in the gate.

However... subnodes don't take very long to set up (so few services). And
the nova-compute process takes about 30s before it's done all its
initialization and actually checks in to the cluster. It's a real
possibility that discover_hosts will run before subnode 3 checks in. We
see it in logs. This also really could come and bite us on any multinode
job, and I'm a bit concerned some of the multinode jobs aren't running
multinode sometimes because of it.

One way to fix this, without putting more logic in devstack-gate, is to
ensure that by the time stack.sh finishes, the compute node is up. This
was tried previously, but it turned out that we totally missed that it
broke Ironic (the check happened too early, ironic was not yet running,
so we always failed), Cells v1 (munges hostnames :(  ), and PowerVM
(their nova-compute was never starting correctly, and they were working
around it with a restart later).

This patch https://review.openstack.org/#/c/488381/ tries again. The
check is moved very late, Ironic seems to be running fine with it. Cells
v1 is just skipped, that's deprecated in Nova now, and we're not going
to use it in multinode scenarios. The PowerVM team fixed their
nova-compute start issues, so they should be good to go as well.


I had also filed https://bugs.launchpad.net/neutron/+bug/1707003 for 
this since it was mainly just affecting that one 3-node neutron job. 
Glad I hadn't started working on a patch; I'll just take a look at yours.


Thanks for working on it!

-Brian


This is an FYI that we're going to land this again soon. If you think
this impacts your CI / jobs, please speak up. The CI runs on both the
main and experimental queue on devstack for this change look pretty
good, so I think we're safe to move forward this time. But we also
thought that the last time, and were wrong.

-Sean




__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [ironic] [nova] Trying again on wait_for_compute in devstack

2017-08-02 Thread Sean Dague
An issue with the xenserver CI was identified. Once we get this patch 
in, and backported to Ocata, it should also address a frequent grenade 
multinode failure scenario which is plaguing the gate.


-Sean

On 08/02/2017 07:17 AM, Sean Dague wrote:

The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute nodes which have checked in,
but aren't yet assigned to a cell, and puts them into a cell of your
choosing. We do this in devstack-gate in the gate.

However... subnodes don't take very long to set up (so few services). And
the nova-compute process takes about 30s before it's done all its
initialization and actually checks in to the cluster. It's a real
possibility that discover_hosts will run before subnode 3 checks in. We
see it in logs. This also really could come and bite us on any multinode
job, and I'm a bit concerned some of the multinode jobs aren't running
multinode sometimes because of it.

One way to fix this, without putting more logic in devstack-gate, is to
ensure that by the time stack.sh finishes, the compute node is up. This
was tried previously, but it turned out that we totally missed that it
broke Ironic (the check happened too early, ironic was not yet running,
so we always failed), Cells v1 (munges hostnames :(  ), and PowerVM
(their nova-compute was never starting correctly, and they were working
around it with a restart later).

This patch https://review.openstack.org/#/c/488381/ tries again. The
check is moved very late, Ironic seems to be running fine with it. Cells
v1 is just skipped, that's deprecated in Nova now, and we're not going
to use it in multinode scenarios. The PowerVM team fixed their
nova-compute start issues, so they should be good to go as well.

This is an FYI that we're going to land this again soon. If you think
this impacts your CI / jobs, please speak up. The CI runs on both the
main and experimental queue on devstack for this change look pretty
good, so I think we're safe to move forward this time. But we also
thought that the last time, and were wrong.

-Sean




--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [ironic] [nova] Trying again on wait_for_compute in devstack

2017-08-02 Thread Sean Dague
The 3 node scenarios in Neutron (which are still experimental nv) are
typically failing to bring online the 3rd compute. In cells v2 you have
to explicitly add nodes to the cells. There is a nova-manage command
"discover-hosts" that takes all the compute nodes which have checked in,
but aren't yet assigned to a cell, and puts them into a cell of your
choosing. We do this in devstack-gate in the gate.

However... subnodes don't take very long to set up (so few services). And
the nova-compute process takes about 30s before it's done all its
initialization and actually checks in to the cluster. It's a real
possibility that discover_hosts will run before subnode 3 checks in. We
see it in logs. This also really could come and bite us on any multinode
job, and I'm a bit concerned some of the multinode jobs aren't running
multinode sometimes because of it.

One way to fix this, without putting more logic in devstack-gate, is to
ensure that by the time stack.sh finishes, the compute node is up. This
was tried previously, but it turned out that we totally missed that it
broke Ironic (the check happened too early, ironic was not yet running,
so we always failed), Cells v1 (munges hostnames :(  ), and PowerVM
(their nova-compute was never starting correctly, and they were working
around it with a restart later).

This patch https://review.openstack.org/#/c/488381/ tries again. The
check is moved very late, Ironic seems to be running fine with it. Cells
v1 is just skipped, that's deprecated in Nova now, and we're not going
to use it in multinode scenarios. The PowerVM team fixed their
nova-compute start issues, so they should be good to go as well.
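For reference, the shape of the check is roughly this (untested sketch,
not the exact code in the patch; the devstack-admin cloud entry and
timeout handling here are assumptions):

# wait until this host's nova-compute shows up in the service list
function wait_for_compute {
    local timeout=$1
    local start=$(date +%s)
    until openstack --os-cloud devstack-admin compute service list \
            --service nova-compute -f value -c Host | grep -q "$(hostname)"; do
        if [ $(( $(date +%s) - start )) -gt "$timeout" ]; then
            echo "nova-compute did not check in within ${timeout}s"
            return 1
        fi
        sleep 1
    done
}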

This is an FYI that we're going to land this again soon. If you think
this impacts your CI / jobs, please speak up. The CI runs on both the
main and experimental queue on devstack for this change look pretty
good, so I think we're safe to move forward this time. But we also
thought that the last time, and were wrong.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-19 Thread Tony Breeds
On Mon, Jun 19, 2017 at 08:17:53AM -0400, Davanum Srinivas wrote:
> Tony,
> 
> 
> On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds  wrote:
> > On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
> >
> >> Awesome! thanks Tony, some kolla jobs do that for example, but i think
> >> this job is a better one to key off of:
> >> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
> >>
> >> Outline of the work is - check if there are any new releases in github
> >> downloads, if so download them using wget and then delegate to the scp
> >> publisher (with keep-hierarchy) to create the new directories and
> >> upload the file(s).
> >
> > So perhaps I'm dense but I can't see an easy way to get a list of
> > release artefacts from github in a form that wget can consume.  The best
> > I can see is via the API.  I've knocked up a quick'n'dirty mirror
> > script[1] but I really feel like I've gone off into the weeds.
> >
> > You basically need to do:
> >
> > git clone  && cd
> > virtualenv .venv
> > .venv/bin/pip install -U pip setuptools wheel
> > .venv/bin/pip install -r ./requirements.txt   # [2]
> > .venv/bin/python ./mirror-github-releases.py \
> > 'coreos/etcd::.*linux.*gz:etcd' \
> > 'coreos/etcd:6225411:.*linux.*gz:etcd'
> 
> Works for me!

Okay, I'll put something more complete together for infra review.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-19 Thread Davanum Srinivas
Tony,


On Sun, Jun 18, 2017 at 11:34 PM, Tony Breeds  wrote:
> On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:
>
>> Awesome! thanks Tony, some kolla jobs do that for example, but i think
>> this job is a better one to key off of:
>> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
>>
>> Outline of the work is - check if there are any new releases in github
>> downloads, if so download them using wget and then delegate to the scp
>> publisher (with keep-hierarchy) to create the new directories and
>> upload the file(s).
>
> So perhaps I'm dense but I can't see an easy way to get a list of
> release artefacts from github in a form that wget can consume.  The best
> I can see is via the API.  I've knocked up a quick'n'dirty mirror
> script[1] but I really feel like I've gone off into the weeds.
>
> You basically need to do:
>
> git clone  && cd
> virtualenv .venv
> .venv/bin/pip install -U pip setuptools wheel
> .venv/bin/pip install -r ./requirements.txt   # [2]
> .venv/bin/python ./mirror-github-releases.py \
> 'coreos/etcd::.*linux.*gz:etcd' \
> 'coreos/etcd:6225411:.*linux.*gz:etcd'

Works for me!

> This will in theory from the 3.2.0 (latest) release and look at the
> 3.1.7 release, see that it's already publically mirrored and move on.
>
> It wouldn't be too hard to incorporate into a job.  Thoughts?
>
> Yours Tony.
>
> [1]  https://github.com/tbreeds/mirror-github-releases
> [2] Yes of course I could publish it on pypi if we want to go down this
> path
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-18 Thread Tony Breeds
On Sun, Jun 18, 2017 at 08:19:16PM -0400, Davanum Srinivas wrote:

> Awesome! thanks Tony, some kolla jobs do that for example, but i think
> this job is a better one to key off of:
> http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381
> 
> Outline of the work is - check if there are any new releases in github
> downloads, if so download them using wget and then delegate to the scp
> publisher (with keep-hierarchy) to create the new directories and
> upload the file(s).

So perhaps I'm dense but I can't see an easy way to get a list of
release artefacts from github in a form that wget can consume.  The best
I can see is via the API.  I've knocked up a quick'n'dirty mirror
script[1] but I really feel like I've gone off into the weeds.

You basically need to do:

git clone  && cd
virtualenv .venv
.venv/bin/pip install -U pip setuptools wheel
.venv/bin/pip install -r ./requirements.txt   # [2]
.venv/bin/python ./mirror-github-releases.py \
'coreos/etcd::.*linux.*gz:etcd' \
'coreos/etcd:6225411:.*linux.*gz:etcd'

This will in theory mirror artefacts from the 3.2.0 (latest) release, then look at the
3.1.7 release, see that it's already publicly mirrored, and move on.

It wouldn't be too hard to incorporate into a job.  Thoughts?
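(For the record, the API call itself is simple enough - an untested sketch
with curl + jq, unauthenticated and therefore subject to GitHub's rate
limits:)

# list downloadable artefacts for recent coreos/etcd releases
curl -s https://api.github.com/repos/coreos/etcd/releases \
  | jq -r '.[] | .assets[] | .browser_download_url' \
  | grep 'linux.*gz$'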

Yours Tony.

[1]  https://github.com/tbreeds/mirror-github-releases
[2] Yes of course I could publish it on pypi if we want to go down this
path


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-18 Thread Davanum Srinivas
On Sun, Jun 18, 2017 at 7:36 PM, Tony Breeds  wrote:
> On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
>> Mikhail,
>>
>> I have a TODO on my list - " adding a job that looks for new releases
>> and uploads them to tarballs periodically "
>
> If you point me to how things are added to that mirror I can work
> towards that.

Awesome! Thanks Tony, some kolla jobs do that, for example, but I think
this job is a better one to key off of:
http://git.openstack.org/cgit/openstack-infra/project-config/tree/jenkins/jobs/infra.yaml#n381

Outline of the work is - check if there are any new releases in github
downloads, if so download them using wget and then delegate to the scp
publisher (with keep-hierarchy) to create the new directories and
upload the file(s).

Thanks,
Dims

>
> Tony.
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-18 Thread Tony Breeds
On Fri, Jun 16, 2017 at 03:59:22PM -0400, Davanum Srinivas wrote:
> Mikhail,
> 
> I have a TODO on my list - " adding a job that looks for new releases
> and uploads them to tarballs periodically "

If you point me to how things are added to that mirror I can work
towards that.

Tony.


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Davanum Srinivas
Mikhail,

I have a TODO on my list - " adding a job that looks for new releases
and uploads them to tarballs periodically "

Thanks,
-- Dims

On Fri, Jun 16, 2017 at 3:32 PM, Mikhail Medvedev  wrote:
> On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague  wrote:
>> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>>> Hi All,
>>>   I just push a review [1] to bump the minimum etcd version to
>>> 3.2.0 which works on intel and ppc64le.  I know we're pretty late in the
>>> cycle to be making changes like this but releasing pike with a dependacy
>>> on 3.1.x make it harder for users on ppc64le (not many but a few :D)
>>>
>>> Yours Tony.
>>>
>>> [1] https://review.openstack.org/474825
>>
>> It should be fine, no one is really using these much at this point.
>> However it looks like mirroring is not happening automatically? The
>> patch fails on not existing in the infra mirror.
>>
>> -Sean
>>
>
> It appears so. Also, IIRC, infra mirror would only host x86 binaries.
> Right now PowerKVM CI works by patching devstack-gate to override
> infra etcd download url. The fix [2] still needs to get merged to make
> it a bit easier to use d-g with your own etcd mirror.
>
> [2] https://review.openstack.org/#/c/467437/
>
> ---
> Mikhail Medvedev
> IBM OpenStack CI for KVM on Power
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Mikhail Medvedev
On Fri, Jun 16, 2017 at 6:01 AM, Sean Dague  wrote:
> On 06/15/2017 10:06 PM, Tony Breeds wrote:
>> Hi All,
>>   I just push a review [1] to bump the minimum etcd version to
>> 3.2.0 which works on intel and ppc64le.  I know we're pretty late in the
>> cycle to be making changes like this but releasing pike with a dependacy
>> on 3.1.x make it harder for users on ppc64le (not many but a few :D)
>>
>> Yours Tony.
>>
>> [1] https://review.openstack.org/474825
>
> It should be fine, no one is really using these much at this point.
> However it looks like mirroring is not happening automatically? The
> patch fails on not existing in the infra mirror.
>
> -Sean
>

It appears so. Also, IIRC, the infra mirror would only host x86 binaries.
Right now PowerKVM CI works by patching devstack-gate to override the
infra etcd download URL. The fix [2] still needs to get merged to make
it a bit easier to use d-g with your own etcd mirror.

[2] https://review.openstack.org/#/c/467437/

---
Mikhail Medvedev
IBM OpenStack CI for KVM on Power

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd v3.2.0?

2017-06-16 Thread Sean Dague
On 06/15/2017 10:06 PM, Tony Breeds wrote:
> Hi All,
>   I just push a review [1] to bump the minimum etcd version to
> 3.2.0 which works on intel and ppc64le.  I know we're pretty late in the
> cycle to be making changes like this but releasing pike with a dependacy
> on 3.1.x make it harder for users on ppc64le (not many but a few :D)
> 
> Yours Tony.
> 
> [1] https://review.openstack.org/474825

It should be fine, no one is really using these much at this point.
However it looks like mirroring is not happening automatically? The
patch fails because the new release does not exist in the infra mirror.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] etcd v3.2.0?

2017-06-15 Thread Tony Breeds
Hi All,
I just pushed a review [1] to bump the minimum etcd version to
3.2.0, which works on intel and ppc64le.  I know we're pretty late in the
cycle to be making changes like this, but releasing Pike with a dependency
on 3.1.x makes it harder for users on ppc64le (not many, but a few :D)

Yours Tony.

[1] https://review.openstack.org/474825


signature.asc
Description: PGP signature
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [nova] [kuryr] issue with host to cell mapping on ocata

2017-06-08 Thread Faseela K
Hi,

   I am working on testing the kuryr-kubernetes plugin with OpenDaylight, using 
devstack-ocata, with a 3 node setup.
   I am hitting the below error when I execute "nova show ${vm_name} | grep 
OS-EXT-STS:vm_state" once the VM is booted.

   Keyword 'Verify VM Is ACTIVE' failed after retrying for 15 seconds. The last 
error was: 'ERROR (ClientException): Unexpected API Error. Please report this 
at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. 
 (HTTP 500) (Request-ID: 
req-cead4e05-4f64-4e9c-8bf8-582de2d9d633) Command Returns 1

  I am seeing host cell mapping related errors in the control node n-cond logs 
[0] :

2017-06-08 08:52:01.970 18963 ERROR nova.conductor.manager 
[req-6df07c12-3f8a-4da6-9ecd-f6b61bbc4af6 admin admin] No host-to-cell mapping 
found for selected host sandbox-57023-15-devstack-ocata-2. Setup is incomplete.

Also on my console, I see the below error while using the cell mapping script as 
given in [3]:


'nova'), ('version_table', 'migrate_version'), ('required_dbs', '[]')]))]) 
__init__ /usr/lib/python2.7/site-packages/migrate/versioning/repository.py:83

ERROR: could not access cell mapping database - has api db been created?

  I do have placement sections configured in nova.conf [1], and the placement-api 
and client plugins enabled as needed in the respective local.confs.
  Is there any configuration that I have missed?

Thanks,
Faseela

[0] 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/control_1/

[1] 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/control_1/nova.conf.gz
  
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_1/nova.conf.gz
  
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_2/nova.conf.gz


[2] 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/control_1/local.conf.gz
  
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_2/local.conf.gz
 
https://logs.opendaylight.org/sandbox/jenkins091/coe-csit-1node-openstack-ocata-kubernetes-carbon/15/archives/compute_1/local.conf.gz

[3] 
https://ask.openstack.org/en/question/102256/how-to-configure-placement-service-for-compute-node-on-ocata/
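For reference, the cell v2 bootstrap sequence I believe Ocata expects on
the control node is roughly the following (sketch only; flags and ordering
may not match my setup exactly):

nova-manage api_db sync
nova-manage cell_v2 map_cell0
nova-manage cell_v2 create_cell --name=cell1 --verbose
nova-manage db sync
# run again after every nova-compute has started and checked in:
nova-manage cell_v2 discover_hosts --verbose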


Thanks,
Faseela
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] devstack install - need help on local.conf settings

2017-06-02 Thread Ganpat Agarwal
Hi Nidhi,

Try this :

Set up the network environment on the host so that devstack VMs can access
the external world:

sudo bash
echo 1 > /proc/sys/net/ipv4/ip_forward
echo 1 > /proc/sys/net/ipv4/conf/INTERFACE/proxy_arp
iptables -t nat -A POSTROUTING -o INTERFACE -j MASQUERADE

These commands will make sure that network traffic is correctly routed
in and out of the devstack VMs.
The ip_forward and proxy_arp changes will be reset when the machine
reboots. You can make these changes permanent by editing /etc/sysctl.conf
and adding the following lines:

net.ipv4.conf.INTERFACE.proxy_arp = 1
net.ipv4.ip_forward = 1

Replace INTERFACE with the ethernet interface of your VirtualBox VM.

Hope it works; it has always worked for me.

Regards,
Ganpat

On Fri, Jun 2, 2017 at 3:01 PM, nidhi.h...@wipro.com 
wrote:

> Hello all,
>
>
>
> I am using http://paste.openstack.org/show/595339/ as my local.conf.
>
>
>
> *I wanted to understand :- Which interface should we put as value in *
>
> *PUBLIC_INTERFACE in local.conf.*
>
>
>
>
>
> *Reason why I am asking this is,*
>
>
>
> Once, I installed OpenStack using DevStack, on my linux VM on
>
> VirtualBox - I used PUBLIC_INTERFACE value as eth0
>
> and
>
> I configured only one network adapter on VM in NAT mode.
>
>
>
> Later on I faced lot of networking problems in that OpenStack VM.
>
> Internet wasn’t accessible suddenly and many other probs.
>
>
>
> I debugged and somehow found eth0 was added in
>
> One ovs-bridge and if I remove eth0 from that bridge -
>
> only then internet in my VM used to work well.
>
>
>
> Then I doubted PUBLIC_INTERFACE value in local.conf
>
> is something I should setup correctly..
>
>
>
> Could not find much help on this from google.
>
>
>
> Can someone please enlighten?
>
>
>
> Thanks
>
> Nidhi
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> *From:* Nidhi Mittal Hada (Product Engineering Service)
> *Sent:* Wednesday, January 18, 2017 3:49 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private
>
>
>
> Hi Andreas,
>
>
>
> As in between you suggested to try with default devstack
>
> neutron config params. I tried that i set no config option
>
> for neutron part all default.
>
>
>
> This local.conf is working well..
>
>
>
> for others who are facing problem pasting working local.conf here
>
> http://paste.openstack.org/show/595339/
>
>
>
> Attaching too.
>
>
>
> Thanks
>
> Nidhi
>
>
>
>
>
>
>
>
> --
>
> *From:* Nidhi Mittal Hada (Product Engineering Service)
> *Sent:* Wednesday, January 18, 2017 2:44 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private
>
>
>
> Andreas,
>
>
>
> I require nothing specific from neutron side.
>
> Just a basic working setup from neutron side
>
> because my work is mostly on storage side of
>
> OpenStack.
>
>
>
> Can you please suggest a working configuration
>
> if  tried recently.
>
>
>
> Thanks
>
> nidhi
>
>
> --
>
> *From:* Nidhi Mittal Hada (Product Engineering Service)
> *Sent:* Wednesday, January 18, 2017 2:35:13 PM
> *To:* openstack-dev@lists.openstack.org
> *Subject:* Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing
> problem in devstack install - No Network found for private
>
>
>
> HI Andreas,
>
>
>
> Thanks for your reply.
>
>
>
> I have no specific reason for using this network configuration in
> local.conf.
>
> I have only basic knowledge of these config options even.
>
> This local.conf network configurations used to work well with earlier
>
> devstack openstack versions. So i did not change it..
>
> Just this time its creating trouble.
>
>
>
> I have not created any ovs bridge manually  before running devstack.
>
> just created this local.conf and ran ./stack.sh in devstack folder.
>
>
>
> Can you please suggest changes if i have not created ovs-bridge manually.
>
>
>
> *At present my settings are - from local.conf - for reference - *
>
> FIXED_RANGE=10.11.12.0/24
>
> NETWORK_GATEWAY=10.11.12.1
>
> FIXED_NETWORK_SIZE=256
>
>
>
> FLOATING_RANGE=10.0.2.0/24
>
> Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
>
> PUBLIC_NETWORK_G

Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] devstack install - need help on local.conf settings

2017-06-02 Thread nidhi.h...@wipro.com
Hello all,

I am using http://paste.openstack.org/show/595339/ as my local.conf.

I wanted to understand: which interface should we put as the value of
PUBLIC_INTERFACE in local.conf?


The reason I am asking this is:

Once, I installed OpenStack using DevStack on my Linux VM on
VirtualBox - I used eth0 as the PUBLIC_INTERFACE value,
and
I configured only one network adapter on the VM in NAT mode.

Later on I faced a lot of networking problems in that OpenStack VM.
The internet suddenly wasn't accessible, among many other problems.

I debugged and somehow found that eth0 had been added to
an OVS bridge, and only if I removed eth0 from that bridge
did the internet in my VM work well again.

Then I suspected that the PUBLIC_INTERFACE value in local.conf
is something I should set up correctly.

Could not find much help on this from Google.

Can someone please enlighten me?

Thanks
Nidhi









From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 3:49 PM
To: openstack-dev@lists.openstack.org
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


Hi Andreas,



As you suggested in between, I tried with the default devstack
neutron config params: I set no config options
for the neutron part, all default.



This local.conf is working well.



For others who are facing this problem, I am pasting the working local.conf here:

http://paste.openstack.org/show/595339/



Attaching too.



Thanks

Nidhi








From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 2:44 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


Andreas,



I require nothing specific from the neutron side.
Just a basic working setup from the neutron side,
because my work is mostly on the storage side of
OpenStack.



Can you please suggest a working configuration,
if you have tried one recently.



Thanks

nidhi




From: Nidhi Mittal Hada (Product Engineering Service)
Sent: Wednesday, January 18, 2017 2:35:13 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private


HI Andreas,



Thanks for your reply.



I have no specific reason for using this network configuration in local.conf.
I have only basic knowledge of these config options.
These local.conf network configurations used to work well with earlier
devstack OpenStack versions, so I did not change them.
Just this time it's creating trouble.



I have not created any OVS bridge manually before running devstack;
I just created this local.conf and ran ./stack.sh in the devstack folder.

Can you please suggest changes, given that I have not created an OVS bridge manually?



At present my settings are - from local.conf - for reference -
FIXED_RANGE=10.11.12.0/24
NETWORK_GATEWAY=10.11.12.1
FIXED_NETWORK_SIZE=256

FLOATING_RANGE=10.0.2.0/24
Q_FLOATING_ALLOCATION_POOL=start=10.0.2.104,end=10.0.2.111
PUBLIC_NETWORK_GATEWAY=10.0.2.1
HOST_IP=10.0.2.15

PUBLIC_INTERFACE=eth0

Q_USE_SECGROUP=True
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1999
PHYSICAL_NETWORK=default
OVS_PHYSICAL_BRIDGE=br-ex


Q_USE_PROVIDER_NETWORKING=True
Q_L3_ENABLED=False

PROVIDER_SUBNET_NAME="provider_net"
PROVIDER_NETWORK_TYPE="vlan"
SEGMENTATION_ID=2010









Thanks

Nidhi






From: Andreas Scheuring 
mailto:scheu...@linux.vnet.ibm.com>>
Sent: Wednesday, January 18, 2017 1:09:17 PM
To: openstack-dev@lists.openstack.org<mailto:openstack-dev@lists.openstack.org>
Subject: Re: [openstack-dev] [OpenStack-Dev][DevStack][Neutron] facing problem 
in devstack install - No Network found for private

** This mail has been sent from an external source **

Without looking into the details

you're specifying
Q_USE_PROVIDER_NETWORKING=True
in your local.conf - usually this results in the creation of a single
provider network called "public". But the manila devstack plugin seems
not to be able to deal with provider networks as it's expecting a
network named "private" to be present.


Why are you using provider networks? Just for the sake of VLANs? You can
also configure devstack to use VLANs with the default setup. This has
worked for me in the past - it results in a private network using VLANs
(assuming you have created the OVS bridge br-data manually):


OVS_PHYSICAL_BRIDGE=br-data
PHYSICAL_NETWORK=phys-data

ENABLE_TENANT_TUNNELS=False
Q_ML2_TENANT_NETWORK_TYPE=vlan
ENABLE_TENANT_VLANS=True
TENANT_VLAN_RANGE=1000:1000




--
-
Andreas
IRC: andreas_s



On Mi, 2017-01-18 at 06:59 +, 
nidhi.h...@wipro.com<mailto:nidhi.h...@wipro.com> wrote:
> Hi All,
>
>
&

Re: [openstack-dev] [devstack] systemd + ENABLED_SERVICES + user_init_file

2017-05-31 Thread Markus Zoeller
On 11.05.2017 15:56, Markus Zoeller wrote:
> I'm working on a nova live-migration hook which configures and starts
> the nova-serialproxy service, runs a subset of tempest tests, and tears
> down the previously started service.
> 
>https://review.openstack.org/#/c/347471/47
> 
> After the change to "systemd", I thought all I have to do was to start
> the service with:
> 
>systemctl enable devstack@n-sproxy
>systemctl restart devstack@n-sproxy
> 
> But this results in the error "Failed to execute operation: No such file
> or directory". The reason is, that there is no systemd "user unit file".
> This file gets written in Devstack at:
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529
> 
> For that to happen, a service must be in the list "ENABLED_SERVICES":
> 
> https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1572-L1574
> 
> Which is *not* the case for the "n-sproxy" service:
> 
> https://github.com/openstack-dev/devstack/blob/8b8441f3becbae2e704932569bff384dcc5c6713/stackrc#L55-L56
> 
> I'm not sure how to approach this problem. I could:
> 1) add "n-sproxy" to the default ENABLED_SERVICES list for Nova in
>Devstack
> 2) always write the systemd user unit file in Devstack
>(despite being an enabled service)
> 3) Write the "user unit file" on the fly in the hook (in Nova).
> 4) ?
> 
> Right now I tend to option 2, as I think it brings more flexibility (for
> other services too) with less change in the set of default enabled
> services in the gate.
> 
> Is this the right direction? Any other thoughts?
> 
> 

FWIW, here's my attempt to implement 2):
https://review.openstack.org/#/c/469390/

-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Davanum Srinivas
Thanks for the help Mikhail,

So just FYI for others, the etcd 3.2.0 is in RC1, We will get a full
set of arch(es) covered once that goes GA

Thanks,
Dims

On Wed, May 24, 2017 at 8:45 AM, Mikhail S Medvedev  wrote:
>
> On 05/24/2017 06:59 AM, Sean Dague wrote:
>>
>> On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
>> > Hi together,
>> >
>> > recently etcd3 was enabled as service in devstack [1]. This breaks
>> > devstack on s390x Linux, as there are no s390x binaries availabe and
>> > there's no way to disable the etcd3 service.
>> >
>> > I pushed a patch to allow disabling the etcd3 service in local.conf [2].
>> > It would be great if we could get that merged soon to get devstack going
>> > again. It seems like that is not used by any of the default services
>> > (nova, neutron, cinder,...) right now.
>> >
>> > In the long run I would like to understand the plans of etcd3 in
>> > devstack. Are the plans to make the default services dependent on etcd3
>> > in the future?
>> >
>> > Thanks a lot!
>> >
>> > Andreas
>> >
>> >
>> > [1]
>> >
>> > https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
>> > [2] https://review.openstack.org/467597
>>
>> Yes, it is designed to be required by base services. See -
>> http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html
>>
>> -Sean
>>
> It is designed to be required, but please be aware of other arches. E.g. the
> original change do DevStack [3] did not allow much flexibility, and only
> worked on x86 and aarch. The d-g change [4] broke d-g for non-x86 arches. I
> have submitted [5] to add some flexibility to be able to specify a different
> mirror from which to pull non-x86 etcd3.
>
> In the last couple days I am playing a whack-a-mole with all of that and
> more. At some point I did request a permission to add PowerKVM CI (ppc64) to
> devstack-gate patches, which might have helped to identify the problem
> earlier. Maybe it should be revisited?
>
> [3] https://review.openstack.org/#/c/445432/
> [4] https://review.openstack.org/#/c/466817/
> [5] https://review.openstack.org/#/c/467437/
>
> ---
> Mikhail Medvedev (mmedvede)
> OpenStack CI for KVM on Power
> IBM
>
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev



-- 
Davanum Srinivas :: https://twitter.com/dims

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Andreas Scheuring
In the meantime I found some more information, like [1].

I understood that devstack downloads the binaries from github as distros
don't have the latest version available. But the binaries for s390x are
not yet provided there. I opened an issue to figure out what would need
to be done to get the s390x binary posted as well [2].

If that is not working, we might need to start thinking in a different
direction, e.g.

- enhance devstack to build etcd3 from source (for certain architectures)
- check that etcd3 is already installed (we could install it upfront on
our systems)
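Either way, the selection would look roughly like this (untested sketch;
the variable names are made up, not devstack's actual ones, and the URL
pattern is just what upstream publishes today):

arch=$(uname -m)
case "$arch" in
    x86_64)  etcd_arch=amd64 ;;
    aarch64) etcd_arch=arm64 ;;
    *)       etcd_arch="" ;;
esac

if [ -n "$etcd_arch" ]; then
    ETCD3_URL="https://github.com/coreos/etcd/releases/download/v${ETCD3_VERSION}/etcd-v${ETCD3_VERSION}-linux-${etcd_arch}.tar.gz"
else
    # s390x and friends: fall back to a pre-installed or locally built etcd
    command -v etcd >/dev/null || { echo "no etcd binary for $arch"; exit 1; }
fi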

I opened a bug against devstack to track the discussion [3]



[1] https://review.openstack.org/#/c/467436/
[2] https://github.com/coreos/etcd/issues/7978
[3] https://bugs.launchpad.net/devstack/+bug/1693192


-- 
-
Andreas 
IRC: andreas_s



On Mi, 2017-05-24 at 13:48 +0200, Andreas Scheuring wrote:
> Hi together, 
> 
> recently etcd3 was enabled as service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries availabe and
> there's no way to disable the etcd3 service.
> 
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like that is not used by any of the default services
> (nova, neutron, cinder,...) right now.
> 
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
> 
> Thanks a lot!
> 
> Andreas
> 
> 
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597
> 
> 
> 


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Mikhail S Medvedev


On 05/24/2017 06:59 AM, Sean Dague wrote:

On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> Hi together,
>
> recently etcd3 was enabled as service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries availabe and
> there's no way to disable the etcd3 service.
>
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like that is not used by any of the default services
> (nova, neutron, cinder,...) right now.
>
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
>
> Thanks a lot!
>
> Andreas
>
>
> [1]
> 
https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597

Yes, it is designed to be required by base services. See -
http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html

-Sean


It is designed to be required, but please be aware of other arches. E.g. the 
original change to DevStack [3] did not allow much flexibility, and only worked 
on x86 and aarch. The d-g change [4] broke d-g for non-x86 arches. I have 
submitted [5] to add some flexibility to be able to specify a different mirror 
from which to pull non-x86 etcd3.

In the last couple of days I have been playing whack-a-mole with all of that and more. 
At some point I did request permission to add PowerKVM CI (ppc64) to 
devstack-gate patches, which might have helped to identify the problem earlier. 
Maybe it should be revisited?

[3] https://review.openstack.org/#/c/445432/
[4] https://review.openstack.org/#/c/466817/
[5] https://review.openstack.org/#/c/467437/

---
Mikhail Medvedev (mmedvede)
OpenStack CI for KVM on Power
IBM

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Sean Dague
On 05/24/2017 07:48 AM, Andreas Scheuring wrote:
> Hi together, 
> 
> recently etcd3 was enabled as service in devstack [1]. This breaks
> devstack on s390x Linux, as there are no s390x binaries availabe and
> there's no way to disable the etcd3 service.
> 
> I pushed a patch to allow disabling the etcd3 service in local.conf [2].
> It would be great if we could get that merged soon to get devstack going
> again. It seems like that is not used by any of the default services
> (nova, neutron, cinder,...) right now.
> 
> In the long run I would like to understand the plans of etcd3 in
> devstack. Are the plans to make the default services dependent on etcd3
> in the future?
> 
> Thanks a lot!
> 
> Andreas
> 
> 
> [1]
> https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
> [2] https://review.openstack.org/467597

Yes, it is designed to be required by base services. See -
http://lists.openstack.org/pipermail/openstack-dev/2017-May/117370.html

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] etcd3 enabled breaks s390x arch

2017-05-24 Thread Andreas Scheuring
Hi together, 

recently etcd3 was enabled as a service in devstack [1]. This breaks
devstack on s390x Linux, as there are no s390x binaries available and
there's no way to disable the etcd3 service.

I pushed a patch to allow disabling the etcd3 service in local.conf [2].
It would be great if we could get that merged soon to get devstack going
again. It seems like that is not used by any of the default services
(nova, neutron, cinder,...) right now.

In the long run I would like to understand the plans of etcd3 in
devstack. Are the plans to make the default services dependent on etcd3
in the future?

Thanks a lot!

Andreas


[1]
https://github.com/openstack-dev/devstack/commit/546656fc0543ec2bc5b422fd9eee17f1b8122758
[2] https://review.openstack.org/467597



-- 
-
Andreas 
IRC: andreas_s





__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] systemd + ENABLED_SERVICES + user_init_file

2017-05-11 Thread Markus Zoeller
I'm working on a nova live-migration hook which configures and starts
the nova-serialproxy service, runs a subset of tempest tests, and tears
down the previously started service.

   https://review.openstack.org/#/c/347471/47

After the change to "systemd", I thought all I have to do was to start
the service with:

   systemctl enable devstack@n-sproxy
   systemctl restart devstack@n-sproxy

But this results in the error "Failed to execute operation: No such file
or directory". The reason is that there is no systemd "user unit file".
This file gets written in Devstack at:

https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1512-L1529

For that to happen, a service must be in the list "ENABLED_SERVICES":

https://github.com/openstack-dev/devstack/blob/37a6b0b2d7d9615b9e89bbc8e8848cffc3bddd6d/functions-common#L1572-L1574

Which is *not* the case for the "n-sproxy" service:

https://github.com/openstack-dev/devstack/blob/8b8441f3becbae2e704932569bff384dcc5c6713/stackrc#L55-L56

I'm not sure how to approach this problem. I could:
1) add "n-sproxy" to the default ENABLED_SERVICES list for Nova in
   Devstack
2) always write the systemd user unit file in Devstack
   (regardless of whether the service is enabled)
3) Write the "user unit file" on the fly in the hook (in Nova).
4) ?

Right now I lean toward option 2, as I think it brings more flexibility (for
other services too) with less change in the set of default enabled
services in the gate.

Is this the right direction? Any other thoughts?
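For context, the missing piece is just a small unit file; writing one by
hand (option 3) would boil down to something like this (untested sketch -
the content devstack really generates is in the functions-common link
above, and the user/binary paths here are assumptions):

sudo tee /etc/systemd/system/devstack@n-sproxy.service >/dev/null <<'EOF'
[Unit]
Description = Devstack devstack@n-sproxy.service

[Service]
User = stack
ExecStart = /usr/local/bin/nova-serialproxy --config-file /etc/nova/nova.conf
Restart = always

[Install]
WantedBy = multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now devstack@n-sproxy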


-- 
Regards, Markus Zoeller (markus_z)


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread Anne Gentle
On Wed, May 3, 2017 at 6:14 PM, Sean Dague  wrote:

> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And there is so much weird debt around trying to make screen
>>> launch things reliably (like random sleeps) because screen has funny
>>> races in it.
>>>
>>> It does mean some tricks people figured out in screen are going away.
>>>
>>
>> It sounds like maybe we should start building a shared repository of new
>> tips & tricks for systemd/journald.
>>
>
> Agreed, the devstack docs have the following beginnings of that:
>
> https://docs.openstack.org/developer/devstack/development.html - for
> basic flow
>
> which also links to a systemd primer -
> https://docs.openstack.org/developer/devstack/systemd.html
>
> But more contributions are welcomed for sure.
>
> (These docs exist in the devstack tree under doc/source)


Another set of docs that helped me figure out screen in DevStack is in the
Ops Guide [1][2]. Low-hanging fruit, the way I see it, so I've also logged
a doc bug [3].

Anne

1.
https://github.com/openstack/openstack-manuals/blob/master/doc/ops-guide/source/ops-customize-objectstorage.rst

2.
https://github.com/openstack/openstack-manuals/blob/master/doc/ops-guide/source/ops-customize-compute.rst

3. https://bugs.launchpad.net/openstack-manuals/+bug/1688245


>
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 

Read my blog: justwrite.click 
Subscribe to Docs|Code: docslikecode.com
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread David Shrewsbury
These docs are great. As someone who has avoided learning systemd, I really
appreciate
the time folks put into making these docs. Well done.

-Dave

On Wed, May 3, 2017 at 7:14 PM, Sean Dague  wrote:

> On 05/03/2017 07:08 PM, Doug Hellmann wrote:
>
>> Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
>>
>>> Screen is going away in Queens.
>>>
>>> Making the dev / test runtimes as similar as possible is really
>>> important. And there is so much weird debt around trying to make screen
>>> launch things reliably (like random sleeps) because screen has funny
>>> races in it.
>>>
>>> It does mean some tricks people figured out in screen are going away.
>>>
>>
>> It sounds like maybe we should start building a shared repository of new
>> tips & tricks for systemd/journald.
>>
>
> Agreed, the devstack docs have the following beginnings of that:
>
> https://docs.openstack.org/developer/devstack/development.html - for
> basic flow
>
> which also links to a systemd primer -
> https://docs.openstack.org/developer/devstack/systemd.html
>
> But more contributions are welcomed for sure.
>
> (These docs exist in the devstack tree under doc/source)
>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
>



-- 
David Shrewsbury (Shrews)
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-04 Thread Sean Dague
This is the cantrip in devstack-gate that's collecting the logs into the
compat format:

https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L794-L797

It's also probably worth dumping the whole journal in native format for
people to download and query later if they want (I expect that will
become more of a thing):

https://github.com/openstack-infra/devstack-gate/blob/3a21366743d6624fb5c51588fcdb26f818fbd8b5/functions.sh#L802-L803


If you are using devstack-gate already, this should be happening for
you. If things are running differently, those are probably the missing
bits you need.
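For a third-party CI that isn't using those functions, the equivalent by
hand is roughly (untested sketch; the unit names follow the
devstack@*.service convention, and $LOGDIR is whatever your job uses):

# human-readable, screen-log-like dump
sudo journalctl -o short-precise --unit 'devstack@*' > "$LOGDIR/devstack.journal.txt"

# native journal export, for downloading and querying later
sudo journalctl -o export --unit 'devstack@*' | gzip > "$LOGDIR/devstack.journal.gz"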

-Sean



On 05/04/2017 03:09 AM, Guy Rozendorn wrote:
> In regards to 3rd party CIs:
> Before this change, the screen logs were saved under $LOGDIR and copied
> to the log servers, and it was pretty much under the same location for
> all the jobs/projects.
> 
> What’s the convention now with switch to systemd?
> * should the logs be collected in journal exported format? or dump to
> simple text files so they could be viewed in the browser? or in journal
> json format?
> * is there a utility function in devstack/devstack-gate that takes care
> of the log collection so it’ll be the same for all jobs/projects?
> 
> 
> 
> On 3 May 2017 at 13:17:14, Sean Dague (s...@dague.net) wrote:
> 
>> As a follow up, there are definitely a few edge conditions we've hit
>> with some jobs, so the following is provided as information in case you
>> have a job that seems to fail in one of these ways.


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Monty Taylor

On 05/03/2017 06:45 PM, James Slagle wrote:

On Tue, May 2, 2017 at 9:19 AM, Monty Taylor  wrote:

I absolutely cannot believe I'm saying this given what the change implements
and my general steaming hatred associated with it ... but this is awesome
work and a definite improvement over what existed before it. If we're going
to be stuck sharing the Bad Trip that is Lennart's projected consciousness,
this is a pleasant surprise of a positive outcome.


In my opinion, these comments about Lennart are quite out of line.
Regardless of whether or not that individual is a member of the
OpenStack community, there are constructive ways to voice your
opinions about systemd without resorting to these types of personal
comments.


Totally fair, and I apologize.


systemd is an open source driven community project. I'd suggest
directing your energy at those technology choices and working towards
what you see as improvements in those choices instead of making
comments such as what you've done here.

While minor (with some thinly veiled praise sprinkled in), I'm a bit
shocked no one else has called attention to your response. It is not
friendly, considerate, and above all else -- it is not respectful.


You are totally right. It is an unacceptable way for me to have 
expressed myself. I will endeavor to do better in the future - and 
although I doubt he's reading this list at the moment, I do earnestly 
apologize to Lennart as well. Personally directed statements such as 
that are, in fact, totally inappropriate.


Monty


__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Sean Dague

On 05/03/2017 07:08 PM, Doug Hellmann wrote:

Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:

Screen is going away in Queens.

Making the dev / test runtimes as similar as possible is really
important. And there is so much weird debt around trying to make screen
launch things reliably (like random sleeps) because screen has funny
races in it.

It does mean some tricks people figured out in screen are going away.


It sounds like maybe we should start building a shared repository of new
tips & tricks for systemd/journald.


Agreed, the devstack docs have the following beginnings of that:

https://docs.openstack.org/developer/devstack/development.html - for 
basic flow


which also links to a systemd primer - 
https://docs.openstack.org/developer/devstack/systemd.html


But more contributions are welcomed for sure.

(These docs exist in the devstack tree under doc/source)

-Sean

--
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Doug Hellmann
Excerpts from Sean Dague's message of 2017-05-03 16:16:29 -0400:
> Screen is going away in Queens.
> 
> Making the dev / test runtimes as similar as possible is really
> important. And there is so much weird debt around trying to make screen
> launch things reliably (like random sleeps) because screen has funny
> races in it.
> 
> It does mean some tricks people figured out in screen are going away.

It sounds like maybe we should start building a shared repository of new
tips & tricks for systemd/journald.

Doug

> 
> Journalctl provides some pretty serious improvements in querying logs
> https://www.freedesktop.org/software/systemd/man/journalctl.html - you
> can search in time ranges, filter by units (one or more of them), and if
> we get to the bottom of the eventlet interaction, we'll be able to
> search by things like REQUEST_ID as well.
> 
> Plus every modern Linux system uses systemd now, so skills learned
> around systemd and journalctl are transferable both from OpenStack to
> other systems, as well as for new people coming in that understand how
> this works outside of OpenStack. So it helps remove a difference from
> the way we do things from the rest of the world.
> 
> -Sean
> 
> On 05/03/2017 04:02 PM, Hongbin Lu wrote:
> > Hi Sean,
> > 
> > I tried the new systemd devstack and frankly I don't like it. There are 
> > several handy operations in screen that seem to be impossible after
> > switching to systemd. For example, freeze a process by "Ctrl + a + [". In
> > addition, navigating through the logs seems difficult (perhaps I am not
> > familiar with journalctl).
> > 
> > From my understanding, the plan is to drop screen entirely in devstack? I
> > would argue that it is better to keep both screen and systemd, and let 
> > users choose one of them based on their preference.
> > 
> > Best regards,
> > Hongbin
> > 
> >> -Original Message-
> >> From: Sean Dague [mailto:s...@dague.net]
> >> Sent: May-03-17 6:10 AM
> >> To: openstack-dev@lists.openstack.org
> >> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> >> default
> >>
> >> On 05/02/2017 08:30 AM, Sean Dague wrote:
> >>> We started running systemd for devstack in the gate yesterday, so far
> >>> so good.
> >>>
> >>> The following patch (which will hopefully land soon), will convert
> >> the
> >>> default local use of devstack to systemd as well -
> >>> https://review.openstack.org/#/c/461716/. It also includes
> >>> substantially updated documentation.
> >>>
> >>> Once you take this patch, a "./clean.sh" is recommended. Flipping
> >>> modes can cause some cruft to build up, and ./clean.sh should be
> >>> pretty good at eliminating them.
> >>>
> >>> https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> >>> is probably specifically interesting / useful for people to read, as
> >>> it shows how the standard development workflows will change (for the
> >>> better) with systemd.
> >>>
> >>> -Sean
> >>
> >> As a follow up, there are definitely a few edge conditions we've hit
> >> with some jobs, so the following is provided as information in case you
> >> have a job that seems to fail in one of these ways.
> >>
> >> Doing process stop / start
> >> ==
> >>
> >> The nova live migration job is special, it was restarting services
> >> manually, however it was doing so with some copy / pasted devstack code,
> >> which means it didn't evolve with the rest of devstack. So the stop
> >> code stopped working (and wasn't robust enough to make it clear that
> >> was the issue).
> >>
> >> https://review.openstack.org/#/c/461803/ is the fix (merged)
> >>
> >> run_process limitations
> >> ===
> >>
> >> When doing the systemd conversion I looked for a path forward which was
> >> going to make 90% of everything just work. The key trick here was that
> >> services start as the "stack" user, and aren't daemonizing away from
> >> the console. We can take the run_process command and make that the
> >> ExecStart in a unit file.
> >>
> >> *Except* that only works if the command is specified by an *absolute
> >> path*.
> >>
> >> So things like this in kuryr-libnetwork become an issue
>

Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread James Slagle
On Tue, May 2, 2017 at 9:19 AM, Monty Taylor  wrote:
> I absolutely cannot believe I'm saying this given what the change implements
> and my general steaming hatred associated with it ... but this is awesome
> work and a definite improvement over what existed before it. If we're going
> to be stuck sharing the Bad Trip that is Lennart's projected consciousness,
> this is a pleasant surprise of a positive outcome.

In my opinion, these comments about Lennart are quite out of line.
Regardless of whether or not that individual is a member of the
OpenStack community, there are constructive ways to voice your
opinions about systemd without resorting to these types of personal
comments.

systemd is an open source driven community project. I'd suggest
directing your energy at those technology choices and working towards
what you see as improvements in those choices instead of making
comments such as what you've done here.

While minor (with some thinly veiled praise sprinkled in), I'm a bit
shocked no one else has called attention to your response. It is not
friendly, considerate, and above all else -- it is not respectful.

-- 
-- James Slagle
--

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Sean Dague
Screen is going away in Queens.

Making the dev / test runtimes as similar as possible is really
important. And there is so much weird debt around trying to make screen
launch things reliably (like random sleeps) because screen has funny
races in it.

It does mean some tricks people figured out in screen are going away.

Journalctl provides some pretty serious improvements in querying logs
https://www.freedesktop.org/software/systemd/man/journalctl.html - you
can search in time ranges, filter by units (one or more of them), and if
we get to the bottom of the eventlet interaction, we'll be able to
search by things like REQUEST_ID as well.
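
For example, queries along these lines work (the unit names are just
illustrative):

# follow two services at once
journalctl -f --unit devstack@n-api.service --unit devstack@n-cpu.service

# everything one service logged in a recent time window
journalctl --unit devstack@n-api.service --since '10 minutes ago'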

Plus every modern Linux system uses systemd now, so skills learned
around systemd and journalctl are transferable both from OpenStack to
other systems, as well as for new people coming in that understand how
this works outside of OpenStack. So it helps remove a difference from
the way we do things from the rest of the world.

-Sean

On 05/03/2017 04:02 PM, Hongbin Lu wrote:
> Hi Sean,
> 
> I tried the new systemd devstack and frankly I don't like it. There are 
> several handy operations in screen that seem to be impossible after
> switching to systemd. For example, freeze a process by "Ctrl + a + [". In
> addition, navigating through the logs seems difficult (perhaps I am not
> familiar with journalctl).
> 
> From my understanding, the plan is to drop screen entirely in devstack? I
> would argue that it is better to keep both screen and systemd, and let users 
> choose one of them based on their preference.
> 
> Best regards,
> Hongbin
> 
>> -Original Message-
>> From: Sean Dague [mailto:s...@dague.net]
>> Sent: May-03-17 6:10 AM
>> To: openstack-dev@lists.openstack.org
>> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
>> default
>>
>> On 05/02/2017 08:30 AM, Sean Dague wrote:
>>> We started running systemd for devstack in the gate yesterday, so far
>>> so good.
>>>
>>> The following patch (which will hopefully land soon), will convert
>> the
>>> default local use of devstack to systemd as well -
>>> https://review.openstack.org/#/c/461716/. It also includes
>>> substantially updated documentation.
>>>
>>> Once you take this patch, a "./clean.sh" is recommended. Flipping
>>> modes can cause some cruft to build up, and ./clean.sh should be
>>> pretty good at eliminating them.
>>>
>>> https://review.openstack.org/#/c/461716/2/doc/source/development.rst
>>> is probably specifically interesting / useful for people to read, as
>>> it shows how the standard development workflows will change (for the
>>> better) with systemd.
>>>
>>> -Sean
>>
>> As a follow up, there are definitely a few edge conditions we've hit
>> with some jobs, so the following is provided as information in case you
>> have a job that seems to fail in one of these ways.
>>
>> Doing process stop / start
>> ==
>>
>> The nova live migration job is special, it was restarting services
>> manually, however it was doing so with some copy / pasted devstack code,
>> which means it didn't evolve with the rest of devstack. So the stop
>> code stopped working (and wasn't robust enough to make it clear that
>> was the issue).
>>
>> https://review.openstack.org/#/c/461803/ is the fix (merged)
>>
>> run_process limitations
>> ===
>>
>> When doing the systemd conversion I looked for a path forward which was
>> going to make 90% of everything just work. The key trick here was that
>> services start as the "stack" user, and aren't daemonizing away from
>> the console. We can take the run_process command and make that the
>> ExecStart in a unit file.
>>
>> *Except* that only works if the command is specified by an *absolute
>> path*.
>>
>> So things like this in kuryr-libnetwork become an issue
>> https://github.com/openstack/kuryr-
>> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
>> n.sh#L148
>>
>> There is also a second issue there, which is calling sudo in the
>> run_process line. If you need to run as a user/group different than the
>> default, you need to specify that directly.
>>
>> The run_process command now supports that -
>> https://github.com/openstack-
>> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
>> common#L1531-L1535
>>
>> And lastly, run_process really always did expect that the thing you
&

Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Hongbin Lu
Hi Sean,

I tried the new systemd devstack and frankly I don't like it. There are several 
handy operations in screen that seem to be impossible after switching to
systemd. For example, freeze a process by "Ctrl + a + [". In addition,
navigating through the logs seems difficult (perhaps I am not familiar with
journalctl).

From my understanding, the plan is to drop screen entirely in devstack? I
would argue that it is better to keep both screen and systemd, and let users 
choose one of them based on their preference.

Best regards,
Hongbin

> -Original Message-
> From: Sean Dague [mailto:s...@dague.net]
> Sent: May-03-17 6:10 AM
> To: openstack-dev@lists.openstack.org
> Subject: Re: [openstack-dev] [devstack] [all] systemd in devstack by
> default
> 
> On 05/02/2017 08:30 AM, Sean Dague wrote:
> > We started running systemd for devstack in the gate yesterday, so far
> > so good.
> >
> > The following patch (which will hopefully land soon), will convert
> the
> > default local use of devstack to systemd as well -
> > https://review.openstack.org/#/c/461716/. It also includes
> > substantially updated documentation.
> >
> > Once you take this patch, a "./clean.sh" is recommended. Flipping
> > modes can cause some cruft to build up, and ./clean.sh should be
> > pretty good at eliminating them.
> >
> > https://review.openstack.org/#/c/461716/2/doc/source/development.rst
> > is probably specifically interesting / useful for people to read, as
> > it shows how the standard development workflows will change (for the
> > better) with systemd.
> >
> > -Sean
> 
> As a follow up, there are definitely a few edge conditions we've hit
> with some jobs, so the following is provided as information in case you
> have a job that seems to fail in one of these ways.
> 
> Doing process stop / start
> ==
> 
> The nova live migration job is special, it was restarting services
> manually, however it was doing so with some copy / pasted devstack code,
> which means it didn't evolve with the rest of devstack. So the stop
> code stopped working (and wasn't robust enough to make it clear that
> was the issue).
> 
> https://review.openstack.org/#/c/461803/ is the fix (merged)
> 
> run_process limitations
> ===
> 
> When doing the systemd conversion I looked for a path forward which was
> going to make 90% of everything just work. The key trick here was that
> services start as the "stack" user, and aren't daemonizing away from
> the console. We can take the run_process command and make that the
> ExecStart in a unit file.
> 
> *Except* that only works if the command is specified by an *absolute
> path*.
> 
> So things like this in kuryr-libnetwork become an issue
> https://github.com/openstack/kuryr-
> libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugi
> n.sh#L148
> 
> There is also a second issue there, which is calling sudo in the
> run_process line. If you need to run as a user/group different than the
> default, you need to specify that directly.
> 
> The run_process command now supports that -
> https://github.com/openstack-
> dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-
> common#L1531-L1535
> 
> And lastly, run_process really always did expect that the thing you
> started remained attached to the console. These are run as "simple"
> services in systemd. If you are running a thing which already
> daemonizes, systemd is going to assume (correctly in this simple mode)
> that the process detaching from it means it died, and will kill and
> clean it up.
> 
> This is the issue the OpenDaylight plugin ran into.
> https://review.openstack.org/#/c/461889/ is the proposed fix.
> 
> 
> 
> If you run into any other issues please pop into #openstack-qa (or
> respond to this email) and we'll try to work through them.
> 
>   -Sean
> 
> --
> Sean Dague
> http://dague.net
> 
> ___
> ___
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: OpenStack-dev-
> requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Matt Riedemann

On 5/3/2017 5:09 AM, Sean Dague wrote:

If you run into any other issues please pop into #openstack-qa (or
respond to this email) and we'll try to work through them.


Something has definitely gone haywire in the cells v1 job since 5/1 and
the switch to the journal log handler:


http://status.openstack.org/elastic-recheck/#1580728

We're seeing UnicodeDecodeErrors. I don't know why it's just that job 
that's failing though since the same code and test tickling it exists in 
all of the other jobs too. It could just be something to do with how 
cells v1 handles vm state changes at the top which turns it into a hard 
failure.


--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-03 Thread Sean Dague
On 05/02/2017 08:30 AM, Sean Dague wrote:
> We started running systemd for devstack in the gate yesterday, so far so
> good.
> 
> The following patch (which will hopefully land soon), will convert the
> default local use of devstack to systemd as well -
> https://review.openstack.org/#/c/461716/. It also includes substantially
> updated documentation.
> 
> Once you take this patch, a "./clean.sh" is recommended. Flipping modes
> can cause some cruft to build up, and ./clean.sh should be pretty good
> at eliminating them.
> 
> https://review.openstack.org/#/c/461716/2/doc/source/development.rst is
> probably specifically interesting / useful for people to read, as it
> shows how the standard development workflows will change (for the
> better) with systemd.
> 
>   -Sean

As a follow up, there are definitely a few edge conditions we've hit
with some jobs, so the following is provided as information in case you
have a job that seems to fail in one of these ways.

Doing process stop / start
==

The nova live migration job is special, it was restarting services
manually, however it was doing so with some copy / pasted devstack
code, which means it didn't evolve with the rest of devstack. So the
stop code stopped working (and wasn't robust enough to make it clear
that was the issue).

https://review.openstack.org/#/c/461803/ is the fix (merged)

run_process limitations
===

When doing the systemd conversion I looked for a path forward which
was going to make 90% of everything just work. The key trick here was
that services start as the "stack" user, and aren't daemonizing away
from the console. We can take the run_process command and make that
the ExecStart in a unit file.

*Except* that only works if the command is specified by an *absolute
path*.

So things like this in kuryr-libnetwork become an issue
https://github.com/openstack/kuryr-libnetwork/blob/3e2891d6fc5d55b3712258c932a5a8b9b323f6c2/devstack/plugin.sh#L148

There is also a second issue there, which is calling sudo in the
run_process line. If you need to run as a user/group different than
the default, you need to specify that directly.

The run_process command now supports that -
https://github.com/openstack-dev/devstack/blob/803acffcf9254e328426ad67380a99f4f5b164ec/functions-common#L1531-L1535
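
As a rough sketch, a plugin call now wants to look something like this
(the service name, binary path, and config file here are hypothetical):

# absolute path to the binary, and no "sudo" baked into the command;
# use the optional group/user arguments from the link above instead
run_process my-plugin-agent "/usr/local/bin/my-plugin-agent --config-file /etc/my-plugin/agent.conf"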

And lastly, run_process really always did expect that the thing you
started remained attached to the console. These are run as "simple"
services in systemd. If you are running a thing which already
daemonizes, systemd is going to assume (correctly in this simple mode)
that the process detaching from it means it died, and will kill
and clean it up.

This is the issue the OpenDaylight plugin ran
into. https://review.openstack.org/#/c/461889/ is the proposed fix.
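
Putting that together, the unit devstack generates per service is
conceptually something like this (a simplified sketch, not the exact
file devstack writes, with a hypothetical plugin service):

[Unit]
Description=Devstack devstack@my-plugin-agent.service

[Service]
# no Type= is set, so systemd treats this as a "simple" service that
# must stay attached to the console (i.e. must not daemonize itself)
User=stack
ExecStart=/usr/local/bin/my-plugin-agent --config-file /etc/my-plugin/agent.conf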



If you run into any other issues please pop into #openstack-qa (or
respond to this email) and we'll try to work through them.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-02 Thread Monty Taylor

On 05/02/2017 08:30 AM, Sean Dague wrote:

We started running systemd for devstack in the gate yesterday, so far so
good.

The following patch (which will hopefully land soon), will convert the
default local use of devstack to systemd as well -
https://review.openstack.org/#/c/461716/. It also includes substantially
updated documentation.

Once you take this patch, a "./clean.sh" is recommended. Flipping modes
can cause some cruft to build up, and ./clean.sh should be pretty good
at eliminating them.

https://review.openstack.org/#/c/461716/2/doc/source/development.rst is
probably specifically interesting / useful for people to read, as it
shows how the standard development workflows will change (for the
better) with systemd.


I absolutely cannot believe I'm saying this given what the change 
implements and my general steaming hatred associated with it ... but 
this is awesome work and a definite improvement over what existed before 
it. If we're going to be stuck sharing the Bad Trip that is Lennart's 
projected consciousness, this is a pleasant surprise of a positive outcome.


Thank you for learning about the topic and for teaching me something in 
the process.


Monty

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [all] systemd in devstack by default

2017-05-02 Thread Sean Dague
We started running systemd for devstack in the gate yesterday, so far so
good.

The following patch (which will hopefully land soon), will convert the
default local use of devstack to systemd as well -
https://review.openstack.org/#/c/461716/. It also includes substantially
updated documentation.

Once you take this patch, a "./clean.sh" is recommended. Flipping modes
can cause some cruft to build up, and ./clean.sh should be pretty good
at eliminating them.

https://review.openstack.org/#/c/461716/2/doc/source/development.rst is
probably specifically interesting / useful for people to read, as it
shows how the standard development workflows will change (for the
better) with systemd.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] [all] systemd in the gate is coming soon

2017-04-26 Thread Sean Dague
I just proposed the following defaults change in the gate -
https://review.openstack.org/#/c/460062/

Which means we'll be using systemd by default for started services after
it lands. We'll hold for the week, and plan to land this on Monday. If
you would like to test that your jobs work in advance of this change, do
a Depends-On with that devstack change.

You can also test things locally by using that change, and setting
USE_SCREEN to False.
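
Locally that amounts to something like this in your local.conf:

[[local|localrc]]
USE_SCREEN=False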

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] uwsgi for API services

2017-04-20 Thread Takashi Yamamoto
On Thu, Apr 13, 2017 at 9:01 PM, Sean Dague  wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really haven't been able to do that in our development
> environment. Plus, the moment we go to mod_wsgi for some services, the
> entire development workflow for "change this library, refresh the
> following services" changes dramatically.
>
> It would be better to have a consistent story here.
>
> So there is some early work up to use apache mod_proxy_uwsgi as the
> listener, and uwsgi processes running under systemd for all the
> services. These bind only to a unix local socket, not to a port.
> https://review.openstack.org/#/c/456344/
>
> Early testing locally has been showing progress. We still need to prove
> out a few things, but this should simplify a bunch of the setup. And
> coming with systemd will converge us back to a more consistent
> development workflow when updating common code in a project that has
> both API services and workers.
>
> For projects that did the mod_wsgi thing in a devstack plugin, this is
> going to require some adjustment. Exactly what is not yet clear, but
> it's going to be worth following that patch.

networking-midonet needed this change.
https://review.openstack.org/#/c/458305

i guess some other projects need similar changes.
http://codesearch.openstack.org/?q=KEYSTONE_AUTH_PORT&i=nope&files=&repos=

>
> -Sean
>
> --
> Sean Dague
> http://dague.net
>
> __
> OpenStack Development Mailing List (not for usage questions)
> Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] uwsgi for API services - RSN

2017-04-18 Thread Sean Dague
This is all merged now. If you run into any issues with real WSGI
running, please poke up in #openstack-qa and we'll see what we can do to
get things ironed out.

-Sean

On 04/18/2017 07:19 AM, Sean Dague wrote:
> Ok, the patch series has come together now, and
> https://review.openstack.org/#/c/456344/ remains the critical patch.
> 
> This introduces a new global config option: "WSGI_MODE", which will be
> either "uwsgi" or "mod_wsgi" (for the transition).
> 
> https://review.openstack.org/#/c/456717/6/lib/placement shows what it
> takes to make something that's currently running under mod_wsgi to run
> under uwsgi in this model.
> 
> The intent is that uwsgi mode becomes primary RSN, as that provides a
> more consistent experience for development, and still exercises the API
> services as real wsgi applications.
> 
>   -Sean
> 
> On 04/13/2017 08:01 AM, Sean Dague wrote:
>> One of the many reasons for getting all our API services running wsgi
>> under a real webserver is to get out of the custom ports for all
>> services game. However, because of some of the limits of apache
>> mod_wsgi, we really haven't been able to do that in our development
>> environment. Plus, the moment we go to mod_wsgi for some services, the
>> entire development workflow for "change this library, refresh the
>> following services" changes dramatically.
>>
>> It would be better to have a consistent story here.
>>
>> So there is some early work up to use apache mod_proxy_uwsgi as the
>> listener, and uwsgi processes running under systemd for all the
>> services. These bind only to a unix local socket, not to a port.
>> https://review.openstack.org/#/c/456344/
>>
>> Early testing locally has been showing progress. We still need to prove
>> out a few things, but this should simplify a bunch of the setup. And
>> coming with systemd will converge us back to a more consistent
>> development workflow when updating common code in a project that has
>> both API services and workers.
>>
>> For projects that did the mod_wsgi thing in a devstack plugin, this is
>> going to require some adjustment. Exactly what is not yet clear, but
>> it's going to be worth following that patch.
>>
>>  -Sean
>>
> 
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] uwsgi for API services - RSN

2017-04-18 Thread Sean Dague
Ok, the patch series has come together now, and
https://review.openstack.org/#/c/456344/ remains the critical patch.

This introduces a new global config option: "WSGI_MODE", which will be
either "uwsgi" or "mod_wsgi" (for the transition).

https://review.openstack.org/#/c/456717/6/lib/placement shows what it
takes to make something that's currently running under mod_wsgi to run
under uwsgi in this model.

The intent is that uwsgi mode becomes primary RSN, as that provides a
more consistent experience for development, and still exercises the API
services as real wsgi applications.

-Sean

On 04/13/2017 08:01 AM, Sean Dague wrote:
> One of the many reasons for getting all our API services running wsgi
> under a real webserver is to get out of the custom ports for all
> services game. However, because of some of the limits of apache
> mod_wsgi, we really haven't been able to do that in our development
> environment. Plus, the moment we go to mod_wsgi for some services, the
> entire development workflow for "change this library, refresh the
> following services" changes dramatically.
> 
> It would be better to have a consistent story here.
> 
> So there is some early work up to use apache mod_proxy_uwsgi as the
> listener, and uwsgi processes running under systemd for all the
> services. These bind only to a unix local socket, not to a port.
> https://review.openstack.org/#/c/456344/
> 
> Early testing locally has been showing progress. We still need to prove
> out a few things, but this should simplify a bunch of the setup. And
> coming with systemd will converge us back to a more consistent
> development workflow when updating common code in a project that has
> both API services and workers.
> 
> For projects that did the mod_wsgi thing in a devstack plugin, this is
> going to require some adjustment. Exactly what is not yet clear, but
> it's going to be worth following that patch.
> 
>   -Sean
> 


-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


[openstack-dev] [devstack] uwsgi for API services

2017-04-13 Thread Sean Dague
One of the many reasons for getting all our API services running wsgi
under a real webserver is to get out of the custom ports for all
services game. However, because of some of the limits of apache
mod_wsgi, we really haven't been able to do that in our development
environment. Plus, the moment we go to mod_wsgi for some services, the
entire development workflow for "change this library, refresh the
following services" changes dramatically.

It would be better to have a consistent story here.

So there is some early work up to use apache mod_proxy_uwsgi as the
listener, and uwsgi processes running under systemd for all the
services. These bind only to a unix local socket, not to a port.
https://review.openstack.org/#/c/456344/
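
Conceptually the wiring looks something like this (service names, socket
paths, and URL prefixes here are only illustrative; the review above has
the real templates):

# apache side: proxy a URL prefix to the uwsgi process over a unix socket
ProxyPass "/identity" "unix:/var/run/uwsgi/keystone-wsgi-public.socket|uwsgi://uwsgi-uds-keystone/"

# uwsgi side (ini file), bound to that same socket and run under systemd
[uwsgi]
socket = /var/run/uwsgi/keystone-wsgi-public.socket
wsgi-file = /usr/local/bin/keystone-wsgi-public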

Early testing locally has been showing progress. We still need to prove
out a few things, but this should simplify a bunch of the setup. And
coming with systemd will converge us back to a more consistent
development workflow when updating common code in a project that has
both API services and workers.

For projects that did the mod_wsgi thing in a devstack plugin, this is
going to require some adjustment. Exactly what is not yet clear, but
it's going to be worth following that patch.

-Sean

-- 
Sean Dague
http://dague.net

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev


Re: [openstack-dev] [devstack] experimenting with systemd for services

2017-04-05 Thread Matt Riedemann

On 4/5/2017 3:09 PM, Sean Dague wrote:

At the PTG clayg brought up an excellent question about what the
expected flow was to restart a bunch of services in devstack after a
code changes that impacts many of them (be it common code, or a
library). People had created a bunch of various screen hacks over the
years, but screen is flakey, and this is definitely not an ideal workflow.

Over lunch clayg, clarkb, and I chatted about some options. Clark
brought up the idea of doing systemd units for all of the services. A
couple of weeks ago I decided it was time for me to understand systemd
better anyway, and started playing around with what this might look like.

The results landed here https://review.openstack.org/#/c/448323/.
Documentation is here
http://git.openstack.org/cgit/openstack-dev/devstack/tree/SYSTEMD.rst

This is currently an opt in. All the services in base devstack however
do work in this new model, and I and a few others have been using this
mode the last week or so. It's honestly really great. Working on
oslo.log changes it's now:

pip install -U .
sudo systemctl restart devstack@*

And the change is now in all your services.

There is also an oslo.log change for native systemd journal support
(https://review.openstack.org/#/c/451525/), which once that has landed
and been released, will let us do some neat query of the journal during
development to see slices across services like this
https://dague.net/2017/03/30/in-praise-of-systemd/.

ACTION REQUIRED:

If you maintain a devstack plugin that starts any services, now would be
a great time to test to see if this works for them. The biggest issue is
that the commands sent to run_process need to be absolute pathed.


My hope is that by end of cycle this is going to be the default mode in
devstack for *both* the gate and development, which eliminates one major
difference between the two. I'm also hoping that we'll be able to keep
and archive the journals from the runs, so you can download and query
those directly. Especially once the oslo.log enhancements are there to
add the additional structured data.

-Sean



Cool. I've always wanted this. When I started on OpenStack I was used to 
using sysv-init files and service commands with RHEL and wasn't used to 
screen, so pretty much hated that in devstack but felt like I couldn't 
ever express that because all of the cool kids loved screen so much.


Well well well, how the tables have turned.

--

Thanks,

Matt

__
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev

