+ OpenStack list.

From: Kaustubh Kelkar
Sent: Tuesday, April 5, 2016 3:17 PM
To: 'James Fleet' <jrfl...@istech-corp.com>
Subject: RE: [Openstack] Cinder error issues on Liberty

I wanted to verify step 4 in the prerequisites section (http://docs.openstack.org/liberty/install-guide-rdo/cinder-storage-install.html#prerequisites). The physical volume that backs the cinder-volumes group should be added to the filter in /etc/lvm/lvm.conf. Maybe the "unknown device" entries under PV Name in your output point to a filter issue?
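I can't tell from the output below which disks actually back cinder-volumes, but assuming they are among /dev/sdb, /dev/sdc, and /dev/sdd, something along these lines in the devices section of /etc/lvm/lvm.conf on the block node should let LVM see them again. Note that the guide's filter only accepts the Cinder disk and rejects everything else, so you also have to accept the disk that carries your operating-system volume group, /dev/sda here:

devices {
    # Accept the OS disk and the data disks (adjust the device names to your
    # actual layout -- sdb/sdc/sdd are only a guess here), reject the rest.
    filter = [ "a/sda/", "a/sdb/", "a/sdc/", "a/sdd/", "r/.*/" ]
}

Once the filter accepts the right devices, pvdisplay should show real device paths instead of "unknown device", and after restarting the volume service (systemctl restart openstack-cinder-volume.service) the lvm backend should be able to initialize.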
-Kaustubh

From: James Fleet [mailto:jrfl...@istech-corp.com]
Sent: Tuesday, April 5, 2016 3:02 PM
To: Kaustubh Kelkar <kaustubh.kel...@casa-systems.com>
Subject: Re: [Openstack] Cinder error issues on Liberty

[root@blocknode ~]# pvdisply
-bash: pvdisply: command not found
[root@blocknode ~]# pvdisplay
  WARNING: Device for PV bNKdRC-v9t2-FxLc-PO2q-p3X1-g3ZN-zU3Yoz not found or rejected by a filter.
  WARNING: Device for PV uf5dAz-j8C7-2pYD-PsiC-xUJW-zKUO-v0SGzw not found or rejected by a filter.
  WARNING: Device for PV 5ZRbQa-r4Ja-cLQD-QJ2M-HyyP-AkPK-3eYaVn not found or rejected by a filter.
  WARNING: Device for PV uf5dAz-j8C7-2pYD-PsiC-xUJW-zKUO-v0SGzw not found or rejected by a filter.
  --- Physical volume ---
  PV Name               unknown device
  VG Name               centos_cloudnode
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               715396
  Allocated PE          0
  PV UUID               uf5dAz-j8C7-2pYD-PsiC-xUJW-zKUO-v0SGzw

  WARNING: Device for PV bNKdRC-v9t2-FxLc-PO2q-p3X1-g3ZN-zU3Yoz not found or rejected by a filter.
  WARNING: Device for PV 5ZRbQa-r4Ja-cLQD-QJ2M-HyyP-AkPK-3eYaVn not found or rejected by a filter.
  --- Physical volume ---
  PV Name               unknown device
  VG Name               cinder-volumes
  PV Size               2.73 TiB / not usable 3.44 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               715396
  Allocated PE          0
  PV UUID               bNKdRC-v9t2-FxLc-PO2q-p3X1-g3ZN-zU3Yoz

  --- Physical volume ---
  PV Name               unknown device
  VG Name               cinder-volumes
  PV Size               2.73 TiB / not usable 3.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              715396
  Free PE               715396
  Allocated PE          0
  PV UUID               5ZRbQa-r4Ja-cLQD-QJ2M-HyyP-AkPK-3eYaVn

  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               centos
  PV Size               2.73 TiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              715271
  Free PE               0
  Allocated PE          715271
  PV UUID               yEsAJB-hXr6-cibg-Zwao-AGvw-Dj3d-eg8xvw

[root@blocknode ~]# fdisk -l

Disk /dev/sda: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/sdc: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1   *           1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.
Disk /dev/sdd: 3000.6 GB, 3000592982016 bytes, 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1   *           1  4294967295  2147483647+  ee  GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/mapper/centos-swap: 16.9 GB, 16919822336 bytes, 33046528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/centos-home: 2929.5 GB, 2929457102848 bytes, 5721595904 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

James R. Fleet
Innovative Solutions Technology
484 Williamsport Pike #135
Martinsburg, WV 25404
888.809.0223 ext. 702

On Tue, Apr 5, 2016 at 2:42 PM, Kaustubh Kelkar <kaustubh.kel...@casa-systems.com> wrote:

Hi,

Can you post the output of "sudo pvdisplay" on the storage nodes, and the contents of /etc/lvm/lvm.conf (or the relevant files)?

-Kaustubh

From: James Fleet [mailto:jrfl...@istech-corp.com]
Sent: Tuesday, April 5, 2016 1:50 PM
To: openstack@lists.openstack.org
Subject: [Openstack] Cinder error issues on Liberty

Hello,

I have installed OpenStack Liberty and I am having a problem with Cinder volumes. I am using RDO on CentOS 7.1, and I have checked all of my cinder.conf files to make sure I followed the Liberty configuration instructions for the controller, compute, and storage nodes. When I run "cinder-manage service list" I get this:

# cinder-manage service list
No handlers could be found for logger "oslo_config.cfg"
2016-04-05 11:57:45.207 13660 DEBUG oslo_db.api [req-532c10c8-f3e8-4ec4-95b2-8e5b8cfcaafb - - - - -] Loading backend 'sqlalchemy' from 'cinder.db.sqlalchemy.api' _load_backend /usr/lib/python2.7/site-packages/oslo_db/api.py:230
/usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/enginefacade.py:241: NotSupportedWarning: Configuration option(s) ['use_tpool'] not supported
  exception.NotSupportedWarning
2016-04-05 11:57:45.234 13660 DEBUG oslo_db.sqlalchemy.engines [req-532c10c8-f3e8-4ec4-95b2-8e5b8cfcaafb - - - - -] MySQL server mode set to STRICT_TRANS_TABLES,STRICT_ALL_TABLES,NO_ZERO_IN_DATE,NO_ZERO_DATE,ERROR_FOR_DIVISION_BY_ZERO,TRADITIONAL,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION _check_effective_sql_mode /usr/lib/python2.7/site-packages/oslo_db/sqlalchemy/engines.py:256

Binary            Host            Zone   Status    State   Updated At
cinder-scheduler  controller      nova   enabled   :-)     2016-04-05 15:57:36
cinder-volume     cloudnode1@lvm  nova   enabled   XXX     2016-03-28 22:50:41
cinder-volume     blocknode@lvm   nova   enabled   XXX     None

I have checked the Cinder volume logs on my storage node and on my compute node with debug enabled, and I see this:

2016-04-05 12:02:25.184 10926 ERROR cinder.service [-] Manager for service cinder-volume cloudnode1@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-04-05 12:02:33.621 10926 DEBUG oslo_service.periodic_task [req-bb5f7cd0-ff99-443b-8211-1679199c413b - - - - -] Running periodic task VolumeManager._publish_service_capabilities run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:213
2016-04-05 12:02:33.622 10926 DEBUG oslo_service.periodic_task [req-bb5f7cd0-ff99-443b-8211-1679199c413b - - - - -] Running periodic task VolumeManager._report_driver_status run_periodic_tasks /usr/lib/python2.7/site-packages/oslo_service/periodic_task.py:213
2016-04-05 12:02:33.623 10926 WARNING cinder.volume.manager [req-bb5f7cd0-ff99-443b-8211-1679199c413b - - - - -] Update driver status failed: (config name lvm) is uninitialized.
2016-04-05 12:02:35.185 10926 ERROR cinder.service [-] Manager for service cinder-volume cloudnode1@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-04-05 12:02:45.191 10926 ERROR cinder.service [-] Manager for service cinder-volume cloudnode1@lvm is reporting problems, not sending heartbeat. Service will appear "down".
2016-04-05 12:02:55.191 10926 ERROR cinder.service [-] Manager for service cinder-volume cloudnode1@lvm is reporting problems, not sending heartbeat. Service will appear "down".

I am confused. Here is the cinder.conf from my controller:

[DEFAULT]
verbose = True
debug = True
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.30

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = passwd

[database]
connection = mysql://cinder:passwd@controller/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = rabbit
rabbit_password = xxxxxx

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp

And here is the cinder.conf from my compute/storage node:

[DEFAULT]
rpc_backend = rabbit
auth_strategy = keystone
my_ip = 10.0.0.40
glance_host = controller
enabled_backends = lvm

[database]
connection = mysql://cinder:password@10.0.0.30/cinder

[oslo_messaging_rabbit]
rabbit_host = controller
rabbit_userid = rabbit
rabbit_password = passwd

[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
auth_plugin = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = passwd

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
iscsi_ip_address = 10.0.0.30

So I am very confused as to why I can't create volumes.

Thank you,

James R. Fleet
Innovative Solutions Technology
484 Williamsport Pike #135
Martinsburg, WV 25404
888.809.0223 ext. 702
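The "Update driver status failed: (config name lvm) is uninitialized" warning in the log above generally means the LVM driver could not find the volume group named in the [lvm] section when the service started, so the backend never finishes initializing and the heartbeat stops. A quick check on the storage node (just a sketch, run as root):

# Does LVM itself see the volume group the backend expects?
vgs cinder-volumes

# Watch the volume service log while it starts
tail -f /var/log/cinder/volume.log

If vgs cannot find cinder-volumes while pvdisplay lists its physical volumes as "unknown device", the lvm.conf filter is the most likely cause.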
_______________________________________________
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
Post to     : openstack@lists.openstack.org
Unsubscribe : http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack