Re: [ovirt-users] [Gluster-users] VM failed to start | Bad volume specification

2015-03-22 Thread Punit Dambiwal
Hi All,

I am still facing the same issue... please help me overcome it...

Thanks,
punit

On Fri, Mar 20, 2015 at 12:22 AM, Thomas Holkenbrink 
thomas.holkenbr...@fibercloud.com wrote:

  I’ve seen this before. The system thinks the storage system is up and
 running and then attempts to utilize it.

  The way I got around it was to put a delay in the startup of the gluster
 node, on the interface that the clients use to communicate.



  I use a bonded link; I then add a LINKDELAY to the interface to get the
 underlying system up and running before the network comes up. This then
 causes network-dependent features to wait for the network to finish.

 It adds about 10 seconds to the startup time. In our environment it works
 well; you may not need as long a delay.



 CentOS

 [root@gls1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-bond0

 DEVICE=bond0
 ONBOOT=yes
 BOOTPROTO=static
 USERCTL=no
 NETMASK=255.255.248.0
 IPADDR=10.10.1.17
 MTU=9000
 IPV6INIT=no
 IPV6_AUTOCONF=no
 NETWORKING_IPV6=no
 NM_CONTROLLED=no
 LINKDELAY=10
 NAME="System Storage Bond0"
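
 After boot, a quick way to confirm that the bond came up with the delay
 applied (a minimal sketch, not part of the original config):

 [root@gls1 ~]# cat /proc/net/bonding/bond0   # bond mode and slave link state
 [root@gls1 ~]# ip addr show bond0            # confirm the address is up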









 Hi Michal,



  The storage domain is up and running and mounted on all the host
 nodes... as I mentioned before, it was working perfectly, but just
 after the reboot I cannot power the VMs on...






 [root@cpu01 log]# gluster volume info

 Volume Name: ds01
 Type: Distributed-Replicate
 Volume ID: 369d3fdc-c8eb-46b7-a33e-0a49f2451ff6
 Status: Started
 Number of Bricks: 48 x 2 = 96
 Transport-type: tcp
 Bricks:
 Brick1: cpu01:/bricks/1/vol1
 Brick2: cpu02:/bricks/1/vol1
 Brick3: cpu03:/bricks/1/vol1
 Brick4: cpu04:/bricks/1/vol1
 Brick5: cpu01:/bricks/2/vol1
 Brick6: cpu02:/bricks/2/vol1
 Brick7: cpu03:/bricks/2/vol1
 Brick8: cpu04:/bricks/2/vol1
 Brick9: cpu01:/bricks/3/vol1
 Brick10: cpu02:/bricks/3/vol1
 Brick11: cpu03:/bricks/3/vol1
 Brick12: cpu04:/bricks/3/vol1
 Brick13: cpu01:/bricks/4/vol1
 Brick14: cpu02:/bricks/4/vol1
 Brick15: cpu03:/bricks/4/vol1
 Brick16: cpu04:/bricks/4/vol1
 Brick17: cpu01:/bricks/5/vol1
 Brick18: cpu02:/bricks/5/vol1
 Brick19: cpu03:/bricks/5/vol1
 Brick20: cpu04:/bricks/5/vol1
 Brick21: cpu01:/bricks/6/vol1
 Brick22: cpu02:/bricks/6/vol1
 Brick23: cpu03:/bricks/6/vol1
 Brick24: cpu04:/bricks/6/vol1
 Brick25: cpu01:/bricks/7/vol1
 Brick26: cpu02:/bricks/7/vol1
 Brick27: cpu03:/bricks/7/vol1
 Brick28: cpu04:/bricks/7/vol1
 Brick29: cpu01:/bricks/8/vol1
 Brick30: cpu02:/bricks/8/vol1
 Brick31: cpu03:/bricks/8/vol1
 Brick32: cpu04:/bricks/8/vol1
 Brick33: cpu01:/bricks/9/vol1
 Brick34: cpu02:/bricks/9/vol1
 Brick35: cpu03:/bricks/9/vol1
 Brick36: cpu04:/bricks/9/vol1
 Brick37: cpu01:/bricks/10/vol1
 Brick38: cpu02:/bricks/10/vol1
 Brick39: cpu03:/bricks/10/vol1
 Brick40: cpu04:/bricks/10/vol1
 Brick41: cpu01:/bricks/11/vol1
 Brick42: cpu02:/bricks/11/vol1
 Brick43: cpu03:/bricks/11/vol1
 Brick44: cpu04:/bricks/11/vol1
 Brick45: cpu01:/bricks/12/vol1
 Brick46: cpu02:/bricks/12/vol1
 Brick47: cpu03:/bricks/12/vol1
 Brick48: cpu04:/bricks/12/vol1
 Brick49: cpu01:/bricks/13/vol1
 Brick50: cpu02:/bricks/13/vol1
 Brick51: cpu03:/bricks/13/vol1
 Brick52: cpu04:/bricks/13/vol1
 Brick53: cpu01:/bricks/14/vol1
 Brick54: cpu02:/bricks/14/vol1
 Brick55: cpu03:/bricks/14/vol1
 Brick56: cpu04:/bricks/14/vol1
 Brick57: cpu01:/bricks/15/vol1
 Brick58: cpu02:/bricks/15/vol1
 Brick59: cpu03:/bricks/15/vol1
 Brick60: cpu04:/bricks/15/vol1
 Brick61: cpu01:/bricks/16/vol1
 Brick62: cpu02:/bricks/16/vol1
 Brick63: cpu03:/bricks/16/vol1
 Brick64: cpu04:/bricks/16/vol1
 Brick65: cpu01:/bricks/17/vol1
 Brick66: cpu02:/bricks/17/vol1
 Brick67: cpu03:/bricks/17/vol1
 Brick68: cpu04:/bricks/17/vol1
 Brick69: cpu01:/bricks/18/vol1
 Brick70: cpu02:/bricks/18/vol1
 Brick71: cpu03:/bricks/18/vol1
 Brick72: cpu04:/bricks/18/vol1
 Brick73: cpu01:/bricks/19/vol1
 Brick74: cpu02:/bricks/19/vol1
 Brick75: cpu03:/bricks/19/vol1
 Brick76: cpu04:/bricks/19/vol1
 Brick77: cpu01:/bricks/20/vol1
 Brick78: cpu02:/bricks/20/vol1
 Brick79: cpu03:/bricks/20/vol1
 Brick80: cpu04:/bricks/20/vol1
 Brick81: cpu01:/bricks/21/vol1
 Brick82: cpu02:/bricks/21/vol1
 Brick83: cpu03:/bricks/21/vol1
 Brick84: cpu04:/bricks/21/vol1
 Brick85: cpu01:/bricks/22/vol1
 Brick86: cpu02:/bricks/22/vol1
 Brick87: cpu03:/bricks/22/vol1
 Brick88: cpu04:/bricks/22/vol1
 Brick89: cpu01:/bricks/23/vol1
 Brick90: cpu02:/bricks/23/vol1
 Brick91: cpu03:/bricks/23/vol1
 Brick92: cpu04:/bricks/23/vol1
 Brick93: cpu01:/bricks/24/vol1
 Brick94: cpu02:/bricks/24/vol1
 Brick95: cpu03:/bricks/24/vol1
 Brick96: cpu04:/bricks/24/vol1
 Options Reconfigured:
 diagnostics.count-fop-hits: on
 diagnostics.latency-measurement: on
 nfs.disable: on
 user.cifs: enable
 auth.allow: 10.10.0.*
 performance.quick-read: off
 performance.read-ahead: off
 performance.io-cache: off
 performance.stat-prefetch: off
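
 As a quick sanity check after the reboot (a minimal sketch; ds01 is the
 volume above, the commands are standard gluster CLI):

 [root@cpu01 ~]# gluster peer status          # all peers should be connected
 [root@cpu01 ~]# gluster volume status ds01   # every brick should be online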

 

Re: [ovirt-users] DWH Question

2015-03-22 Thread Yaniv Dary



On 03/18/2015 02:06 PM, Koen Vanoppen wrote:

Thanks!!
Only, I can't execute the query... I added it to the reports as a SQL 
query, but I can't execute it... I have never added a new one before, so 
maybe that is the problem... :-)


You need to run this directly against the database, using psql or pgAdmin.
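
For example, a minimal sketch from the engine machine (ovirt_engine_history
is the default history database name; your user and credentials may differ):

# save the query below as vlan_vms.sql, then run:
psql -U postgres -d ovirt_engine_history -f vlan_vms.sql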



2015-03-16 13:29 GMT+01:00 Shirly Radco sra...@redhat.com:


Hi Koen,

I believe you can use this query:

SELECT v3_5_latest_configuration_hosts_interfaces.vlan_id,
       v3_5_latest_configuration_hosts.host_id,
       v3_5_statistics_vms_resources_usage_samples.vm_id
FROM v3_5_latest_configuration_hosts_interfaces
LEFT JOIN v3_5_latest_configuration_hosts
  ON v3_5_latest_configuration_hosts_interfaces.host_id =
     v3_5_latest_configuration_hosts.host_id
LEFT JOIN v3_5_statistics_vms_resources_usage_samples
  ON v3_5_latest_configuration_hosts.history_id =
     v3_5_statistics_vms_resources_usage_samples.current_host_configuration_version
LEFT JOIN v3_5_latest_configuration_vms
  ON v3_5_latest_configuration_vms.history_id =
     v3_5_statistics_vms_resources_usage_samples.vm_configuration_version
LEFT JOIN v3_5_latest_configuration_vms_interfaces
  ON v3_5_latest_configuration_vms.history_id =
     v3_5_latest_configuration_vms_interfaces.vm_configuration_version
WHERE v3_5_statistics_vms_resources_usage_samples.vm_status = 1
  AND v3_5_latest_configuration_hosts_interfaces.vlan_id IS NOT NULL
  AND v3_5_latest_configuration_vms_interfaces.logical_network_name =
      v3_5_latest_configuration_hosts_interfaces.logical_network_name
GROUP BY v3_5_latest_configuration_hosts_interfaces.vlan_id,
         v3_5_latest_configuration_hosts.host_id,
         v3_5_statistics_vms_resources_usage_samples.vm_id
ORDER BY v3_5_latest_configuration_hosts_interfaces.vlan_id,
         v3_5_latest_configuration_hosts.host_id,
         v3_5_statistics_vms_resources_usage_samples.vm_id;

If you need more details please let me know.

Best regards,
---
Shirly Radco
BI Software Engineer
Red Hat Israel Ltd.


- Original Message -
 From: Koen Vanoppen vanoppen.k...@gmail.com
 To: users@ovirt.org
 Sent: Friday, March 13, 2015 9:17:29 AM
 Subject: [ovirt-users] DWH Question

 Dear all,

 Is it possible to pull a list of all VMS who are in vlanX?

 Kind regards,

 Koen

 ___
 Users mailing list
 Users@ovirt.org
 http://lists.ovirt.org/mailman/listinfo/users





___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


--
Yaniv Dary
Red Hat Israel Ltd.
34 Jerusalem Road
Building A, 4th floor
Ra'anana, Israel 4350109

Tel : +972 (9) 7692306 / 8272306
Email: yd...@redhat.com
IRC : ydary

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] ovirt hosted ha: Failed to start monitoring domain

2015-03-22 Thread Yedidyah Bar David
- Original Message -
 From: lof yer lof...@gmail.com
 To: users users@ovirt.org
 Sent: Wednesday, March 11, 2015 7:07:09 AM
 Subject: [ovirt-users] ovirt hosted ha: Failed to start monitoring domain
 
 I've got four nodes with oVirt 3.5.
 They share a replica-4 gluster domain with the hosted engine on it.
 After rebooting each of them, the hosted engine just cannot come up.
 --
 

Please check/post more logs, including vdsm.

Is your gluster storage OK? Can you mount it?
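
For example (a minimal sketch; host and volume names are placeholders):

mkdir -p /mnt/gltest
mount -t glusterfs your-gluster-host:/your-volume /mnt/gltest
ls /mnt/gltest && umount /mnt/gltest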

Best,
-- 
Didi
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [ovirt-users] expanding a lun on the san side

2015-03-22 Thread Fred Rolland
Hi Nathanael,

Please refer to Aharon's suggestion.

I don't know if you can extend without putting the domain into maintenance,
and I don't think updating the DB directly would be recommended.

Regards,

Fred

- Original Message -
 From: Nathanaël Blanchet blanc...@abes.fr
 To: Fred Rolland froll...@redhat.com
 Cc: users@ovirt.org, Aharon Canan aca...@redhat.com
 Sent: Monday, March 16, 2015 5:43:14 PM
 Subject: Re: [ovirt-users] expanding a lun on the san side
 
 Thank you Fred for your answer.
 Aharon suggested:
 
 The flow should be like below:
 
   * Put the relevant domain in maintenance
   * Extend the LUN (from the storage side)
   * Resize the relevant PV (for example: pvresize
     --setphysicalvolumesize 125G
     /dev/mapper/3600601601282300056f0075c3f81e311)
   * Activate the domain
 
 Check the storage domain total size; it should show the new size.
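 
 A rough sketch of those steps on the SPM (a sketch only; the rescan
 commands are assumptions and depend on whether the LUN is iSCSI or FC):
 
 iscsiadm -m session --rescan   # iSCSI: rescan sessions for the new size
 multipathd -k"resize map 3600601601282300056f0075c3f81e311"
 pvresize --setphysicalvolumesize 125G \
   /dev/mapper/3600601601282300056f0075c3f81e311
 pvs                            # verify the PV shows the new size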
 
 
 I have a few questions before doing it manually:
 
   * Do I have to shut down, or live-storage-migrate to another LUN, all my
     VMs in order to perform the PV extend on the SPM? I know it won't
     update the oVirt DB, but is it a risk to do this extend in production?
   * My next question is: is it possible to manually set the new LUN size
     in the DB?
 
 All this would spare me a heavy maintenance window or a very long
 migration (4 TB) due to maintenance mode.
 
 On 16/03/2015 15:20, Fred Rolland wrote:
  Hi Nathanael,
 
  There is an open bug on this scenario:
  https://bugzilla.redhat.com/show_bug.cgi?id=609689
 
  Here are the manual steps to overcome this issue:
  https://access.redhat.com/solutions/376873
 
 
  We are working on a solution for 3.6
  http://www.ovirt.org/Features/LUN_Resize
 
  Regards,
 
  Fred
 
  - Original Message -
  From: Nathanaël Blanchet blanc...@abes.fr
  To: users@ovirt.org
  Sent: Monday, March 2, 2015 6:35:30 PM
  Subject: [ovirt-users] expanding a lun on the san side
 
  Hi all,
 
  I've just expanded the LUN used as master by the storage domain. fdisk
  on the SPM is able to see the multipath device with its new size, but
  the engine reports the old size.
  How can I refresh this?
 
 
  ___
  Users mailing list
  Users@ovirt.org
  http://lists.ovirt.org/mailman/listinfo/users
 
 
 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Data Storage Creation issue

2015-03-22 Thread ronald

I am still a newcomer to the oVirt virtualization arena.

I have installed an older version of oVirt (3.1), with the standalone 
hypervisor and the oVirt manager on separate servers.


Presently, my configuration is 'internal'.

1. I have successfully created a data center.
2. I have successfully created a cluster.
3. I have successfully attached the cluster to the only host that I have 
(the hypervisor).

4. I have a problem configuring my storage domain:
a. My objective is to use the file system exported from a third 
system for the storage domain.

   (Note: I don't have the vdsm user on the third server.)
b. I used 192.168.1.15:/stor (which is exported from the third system).

I noticed that oVirt is not happy if I keep the storage file system on 
the ovirt-manager server; it requires the form fqdn:/filesystem.
But I don't have a domain controller or directory server running, 
so I cannot attach fqdn:/filesystem on the manager.


Is there a workaround for this so that I can attach the storage?
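
One possible workaround (a hedged sketch, untested on oVirt 3.1; the
hostname is made up): give the storage server a name in /etc/hosts on both
the manager and the hypervisor, and export the file system so the vdsm
user (uid 36, gid 36) can write to it:

# on the manager and the hypervisor
echo "192.168.1.15  storage.local" >> /etc/hosts

# on the third server, in /etc/exports
/stor  *(rw,sync,all_squash,anonuid=36,anongid=36)

# then attach the storage domain as storage.local:/stor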

Appreciate your assistance.
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


[ovirt-users] Hosted Engine on oVirt Node Hypervisor

2015-03-22 Thread Jason Keltz
I'm setting up some new oVirt infrastructure, and wanted to give hosted 
engine a try.  I downloaded and installed the oVirt Node Hypervisor ISO 
(3.5-0.999.201502231653.el7.centos) on one of 3 nodes.  One of the 
options in the hypervisor menu is Hosted Engine.  This requires an 
Engine ISO/OVA URL for download.  The thing is - as far as I can tell, 
there is no download link for this ISO/OVA on the ovirt release web 
site.  I also can't find anything in the documentation that refers to it 
(or even this menu in the hypervisor). I did find this after some searching:


http://jenkins.ovirt.org/user/fabiand/my-views/view/Node/job/ovirt-appliance_engine-3.5_master_merged/oVirt-Engine-Appliance-CentOS-x86_64-7-20150319.424.ova

(Now replaced with a build from 0322.)  I asked on the ovirt IRC channel 
and was told that this might work, but that, because of new functionality 
introduced recently, it also might not. If the feature is available 
in the node ISO, shouldn't there be an appropriate release of the hosted 
engine ISO/OVA that works hand in hand with the node that I've 
downloaded?   If it's not there because it isn't ready, isn't this 
functionality something that should be added to maybe a beta node 
release and tested before being released into the stable node hypervisor 
release?


I asked on the IRC channel whether it might be possible for me to 
kickstart my own engine from the node.  I ran into trouble with that as 
well.   On the installed node, I can only configure one network 
interface.  This is, of course, intended to enable ovirtmgmt for 
communication with engine which would take over and configure everything 
else for you.  Of course, when you don't yet have engine installed and 
need to get it, this leads to a chicken-and-egg problem.  To kickstart 
engine on node, I need an IP (from mgmt), an image (I guess it could 
come from the mgmt network), but then I also need access to the external 
network (on another NIC) to be able to install the appropriate ovirt yum 
repository, and download the engine!  If I installed my own node 
manually instead of using the ISO, I guess I could configure the network, 
and make it work, but I'm trying to take advantage of the work that has 
already been put into node to make this all possible.


Anyway, I'm certainly interested in any feedback from users who have 
been able to make this work.  I guess I could kickstart one node as an 
engine, create the virtual image there, suck the ova down to the mgmt 
server, install node, then use node to re-suck down the hosted engine 
image, but it just seems like a lot of extra work.  Somehow I think it's 
intended to be a little more straightforward than that.


Jason.

___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users