Re: [ovirt-users] multipath.conf changes removed on host activation

2015-05-06 Thread Yeela Kaplan
Hi Rik,
What version of vdsm are you using?

You can prevent vdsm from overwriting /etc/multipath.conf by editing the file
and adding the following line:
# RHEV PRIVATE
as the second line of the conf file,
right after the first line, which states the version of vdsm's
multipath configuration.
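
As a concrete illustration, here is one way to insert the marker. The version header in the sample file is only a stand-in for whatever first line your vdsm actually writes, and the real target would be /etc/multipath.conf; the sketch uses a temp copy so it is safe to run anywhere:

```shell
# Demonstrate inserting the "# RHEV PRIVATE" marker as line 2 of a
# multipath.conf copy. On a real host, point TARGET at /etc/multipath.conf.
TARGET="$(mktemp)"
printf '%s\n' \
    '# RHEV REVISION 1.0' \
    'defaults {' \
    '    polling_interval 5' \
    '}' > "$TARGET"
# '1a' appends after line 1, keeping the vdsm version header on top:
sed -i '1a # RHEV PRIVATE' "$TARGET"
head -n 2 "$TARGET"
```

After this, vdsm should see the marker on line 2 and skip rewriting the file.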

Let me know if it helps.

Yeela

- Original Message -
> From: "Rik Theys" 
> To: users@ovirt.org
> Sent: Wednesday, May 6, 2015 11:30:06 AM
> Subject: [ovirt-users] multipath.conf changes removed on host activation
> 
> Hi,
> 
> I have some specific device settings in multipath.conf for my storage
> box as it's not yet in the default settings of multipath for this device.
> 
> Upon activation of my host, the multipath.conf file is always replaced
> by the default version and my changes are lost.
> 
> How can I either prevent vdsm from touching the file, or merge my
> configuration?
> 
> Regards,
> 
> Rik
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [ovirt-users] ERROR 'no free file handlers in pool' while creating VM from template

2014-12-17 Thread Yeela Kaplan
Just another thought:
from looking at the vdsm logs, it appears there are too many calls to
getVolumeSize, which eat up all the handlers until a timeout eventually
occurs.
Adam, do you have any idea about this? 

- Original Message -
> From: "Yeela Kaplan" 
> To: "Tiemen Ruiten" 
> Cc: "Users@ovirt.org" 
> Sent: Wednesday, December 17, 2014 1:30:52 PM
> Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while 
> creating VM from template
> 
> 
> 
> - Original Message -
> > From: "Tiemen Ruiten" 
> > To: "Yeela Kaplan" 
> > Cc: "Users@ovirt.org" 
> > Sent: Wednesday, December 17, 2014 1:22:59 PM
> > Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while
> > creating VM from template
> > 
> > Thank you, I will try to increase to 20 and see what happens. Bug is filed:
> > https://bugzilla.redhat.com/show_bug.cgi?id=1175255
> 
> Thank you
> and I forgot to mention that you have to restart vdsmd so the change will
> apply...
> 
> 
> > 
> > On 17 December 2014 at 11:48, Yeela Kaplan  wrote:
> > >
> > > Hi,
> > > You are right, the problem is with the file handlers.
> > > You can increase the number of handlers in pool using the vdsm config
> > > file, which is supposed to be under the following directory in your
> > > system:
> > >
> > > /usr/lib/python2.6/site-packages/vdsm/config.py
> > >
> > > The default value for 'process_pool_max_slots_per_domain' is 10, so you
> > > can increase it by a bit, but not too much.
> > >
> > > But I suspect the problem is in a larger scale, and this is only a
> > > temporary relief for your system and this needs much more attention and a
> > > proper fix.
> > > could you please open a bug on RHEV/vdsm in bugzilla stating all of the
> > > details of your setup and logs?
> > >
> > > thanks,
> > > Yeela
> > >
> > > - Original Message -
> > > > From: "Tiemen Ruiten" 
> > > > To: "Users@ovirt.org" 
> > > > Sent: Wednesday, December 17, 2014 10:53:39 AM
> > > > Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while
> > > creating VM from template
> > > >
> > > > Would this be limits for the vdsm process? Then what is the proper way
> > > > to
> > > > change ulimits for VDSM?
> > > >
> > > > On 16 December 2014 at 20:45, Donny Davis < do...@cloudspin.me > wrote:
> > > >
> > > >
> > > >
> > > >
> > > > The only thing I can think of would be file hard and soft limits, but I
> > > am no
> > > > oVirt pro.
> > > >
> > > >
> > > >
> > > > 'no free file handlers in pool' that would make sense to me…
> > > >
> > > > Donny
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > From: Tiemen Ruiten [mailto: t.rui...@rdmedia.com ]
> > > > Sent: Tuesday, December 16, 2014 12:40 PM
> > > > To: Donny Davis
> > > > Cc: Users@ovirt.org
> > > > Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while
> > > > creating VM from template
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > About 25-30. The nodes are Xeon(R) CPU E5-2650 0 @ 2.00GHz with 16
> > > > hyperthreaded cores and 64 GB of RAM each. At the moment I created the
> > > VM,
> > > > processor load on both nodes was less than 1.
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > On 16 December 2014 at 20:29, Donny Davis < do...@cloudspin.me > wrote:
> > > >
> > > > What is the VM load you are running on your servers?
> > > >
> > > >
> > > >
> > > > -Original Message-
> > > > From: users-boun...@ovirt.org [mailto: users-boun...@ovirt.org ] On
> > > Behalf Of
> > > > Tiemen Ruiten
> > > > Sent: Tuesday, December 16, 2014 12:27 PM
> > > > To: users@ovirt.org
> > > > Subject: [ovirt-users] ERROR 'no free file handlers in pool' while
> > > creating
> > > > VM from template
> > > >
> > > > Hello,
> > > >
> > > > I

Re: [ovirt-users] ERROR 'no free file handlers in pool' while creating VM from template

2014-12-17 Thread Yeela Kaplan


- Original Message -
> From: "Tiemen Ruiten" 
> To: "Yeela Kaplan" 
> Cc: "Users@ovirt.org" 
> Sent: Wednesday, December 17, 2014 1:22:59 PM
> Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while 
> creating VM from template
> 
> Thank you, I will try to increase to 20 and see what happens. Bug is filed:
> https://bugzilla.redhat.com/show_bug.cgi?id=1175255

Thank you.
I forgot to mention that you have to restart vdsmd for the change to take
effect...


> 
> On 17 December 2014 at 11:48, Yeela Kaplan  wrote:
> >
> > Hi,
> > You are right, the problem is with the file handlers.
> > You can increase the number of handlers in pool using the vdsm config
> > file, which is supposed to be under the following directory in your system:
> >
> > /usr/lib/python2.6/site-packages/vdsm/config.py
> >
> > The default value for 'process_pool_max_slots_per_domain' is 10, so you
> > can increase it by a bit, but not too much.
> >
> > But I suspect the problem is in a larger scale, and this is only a
> > temporary relief for your system and this needs much more attention and a
> > proper fix.
> > could you please open a bug on RHEV/vdsm in bugzilla stating all of the
> > details of your setup and logs?
> >
> > thanks,
> > Yeela
> >
> > - Original Message -
> > > From: "Tiemen Ruiten" 
> > > To: "Users@ovirt.org" 
> > > Sent: Wednesday, December 17, 2014 10:53:39 AM
> > > Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while
> > creating VM from template
> > >
> > > Would this be limits for the vdsm process? Then what is the proper way to
> > > change ulimits for VDSM?
> > >
> > > On 16 December 2014 at 20:45, Donny Davis < do...@cloudspin.me > wrote:
> > >
> > >
> > >
> > >
> > > The only thing I can think of would be file hard and soft limits, but I
> > am no
> > > oVirt pro.
> > >
> > >
> > >
> > > 'no free file handlers in pool' that would make sense to me…
> > >
> > > Donny
> > >
> > >
> > >
> > >
> > >
> > > From: Tiemen Ruiten [mailto: t.rui...@rdmedia.com ]
> > > Sent: Tuesday, December 16, 2014 12:40 PM
> > > To: Donny Davis
> > > Cc: Users@ovirt.org
> > > Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while
> > > creating VM from template
> > >
> > >
> > >
> > >
> > >
> > > About 25-30. The nodes are Xeon(R) CPU E5-2650 0 @ 2.00GHz with 16
> > > hyperthreaded cores and 64 GB of RAM each. At the moment I created the
> > VM,
> > > processor load on both nodes was less than 1.
> > >
> > >
> > >
> > >
> > >
> > > On 16 December 2014 at 20:29, Donny Davis < do...@cloudspin.me > wrote:
> > >
> > > What is the VM load you are running on your servers?
> > >
> > >
> > >
> > > -Original Message-
> > > From: users-boun...@ovirt.org [mailto: users-boun...@ovirt.org ] On
> > Behalf Of
> > > Tiemen Ruiten
> > > Sent: Tuesday, December 16, 2014 12:27 PM
> > > To: users@ovirt.org
> > > Subject: [ovirt-users] ERROR 'no free file handlers in pool' while
> > creating
> > > VM from template
> > >
> > > Hello,
> > >
> > > I ran into a nasty problem today when creating a new, cloned VM from a
> > > template (one virtual 20 GB disk) on our two-node oVirt cluster: on the
> > node
> > > where I started a VM creation job, load skyrocketed and some VMs stopped
> > > responding until and after the job failed. Everything recovered without
> > > intervention, but this obviously shouldn't happen. I have attached the
> > > relevant vdsm log file. The button to create the VM was pressed around
> > > 11:17, the first error in the vdsm log is at 11:23:58.
> > >
> > > The ISO domain is a gluster volume exposed via NFS, the storage domain
> > for
> > > the VM's is also a gluster volume. The underlying filesystem is ZFS.
> > > The hypervisor nodes are full CentOS 6 installs.
> > >
> > > I'm guessing the 'no free file handlers in pool' in the vdsm log file is
> > key
> > > here. What can I do to prevent this from happening again? Apart from not
> > > creating new VMs of course :)
> > >
> > > Tiemen
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > >
> > > --
> > >
> > >
> > > Tiemen Ruiten
> > > Systems Engineer
> > > R&D Media
> > >
> > >
> > > --
> > > Tiemen Ruiten
> > > Systems Engineer
> > > R&D Media
> > >
> > > ___
> > > Users mailing list
> > > Users@ovirt.org
> > > http://lists.ovirt.org/mailman/listinfo/users
> > >
> >
> 
> 
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
> 


Re: [ovirt-users] ERROR 'no free file handlers in pool' while creating VM from template

2014-12-17 Thread Yeela Kaplan
Hi,
You are right, the problem is with the file handlers.
You can increase the number of handlers in the pool using the vdsm config
file, which should be found at the following path on your system:

/usr/lib/python2.6/site-packages/vdsm/config.py

The default value for 'process_pool_max_slots_per_domain' is 10, so you can 
increase it by a bit, but not too much.
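
To illustrate, assuming your vdsm reads overrides from /etc/vdsm/vdsm.conf (an INI-style file; the [irs] section name is an assumption about where storage options live, so verify it against the defaults in config.py above), bumping the value to 20 would look like:

```ini
# /etc/vdsm/vdsm.conf -- hypothetical override; check the section name
# against vdsm's config.py defaults before applying.
[irs]
process_pool_max_slots_per_domain = 20
```

Restart vdsmd afterwards so the change takes effect.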

But I suspect the problem is on a larger scale, and this is only temporary
relief for your system; it needs much more attention and a proper fix.
Could you please open a bug against RHEV/vdsm in Bugzilla, stating all of the
details of your setup and attaching logs?

thanks,
Yeela

- Original Message -
> From: "Tiemen Ruiten" 
> To: "Users@ovirt.org" 
> Sent: Wednesday, December 17, 2014 10:53:39 AM
> Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while 
> creating VM from template
> 
> Would this be limits for the vdsm process? Then what is the proper way to
> change ulimits for VDSM?
> 
> On 16 December 2014 at 20:45, Donny Davis < do...@cloudspin.me > wrote:
> 
> 
> 
> 
> The only thing I can think of would be file hard and soft limits, but I am no
> oVirt pro.
> 
> 
> 
> 'no free file handlers in pool' that would make sense to me…
> 
> Donny
> 
> 
> 
> 
> 
> From: Tiemen Ruiten [mailto: t.rui...@rdmedia.com ]
> Sent: Tuesday, December 16, 2014 12:40 PM
> To: Donny Davis
> Cc: Users@ovirt.org
> Subject: Re: [ovirt-users] ERROR 'no free file handlers in pool' while
> creating VM from template
> 
> 
> 
> 
> 
> About 25-30. The nodes are Xeon(R) CPU E5-2650 0 @ 2.00GHz with 16
> hyperthreaded cores and 64 GB of RAM each. At the moment I created the VM,
> processor load on both nodes was less than 1.
> 
> 
> 
> 
> 
> On 16 December 2014 at 20:29, Donny Davis < do...@cloudspin.me > wrote:
> 
> What is the VM load you are running on your servers?
> 
> 
> 
> -Original Message-
> From: users-boun...@ovirt.org [mailto: users-boun...@ovirt.org ] On Behalf Of
> Tiemen Ruiten
> Sent: Tuesday, December 16, 2014 12:27 PM
> To: users@ovirt.org
> Subject: [ovirt-users] ERROR 'no free file handlers in pool' while creating
> VM from template
> 
> Hello,
> 
> I ran into a nasty problem today when creating a new, cloned VM from a
> template (one virtual 20 GB disk) on our two-node oVirt cluster: on the node
> where I started a VM creation job, load skyrocketed and some VMs stopped
> responding until and after the job failed. Everything recovered without
> intervention, but this obviously shouldn't happen. I have attached the
> relevant vdsm log file. The button to create the VM was pressed around
> 11:17, the first error in the vdsm log is at 11:23:58.
> 
> The ISO domain is a gluster volume exposed via NFS, the storage domain for
> the VM's is also a gluster volume. The underlying filesystem is ZFS.
> The hypervisor nodes are full CentOS 6 installs.
> 
> I'm guessing the 'no free file handlers in pool' in the vdsm log file is key
> here. What can I do to prevent this from happening again? Apart from not
> creating new VMs of course :)
> 
> Tiemen
> 
> 
> 
> 
> 
> 
> 
> 
> 
> --
> 
> 
> Tiemen Ruiten
> Systems Engineer
> R&D Media
> 
> 
> --
> Tiemen Ruiten
> Systems Engineer
> R&D Media
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


[ovirt-users] Fwd: bash: ./autogen.sh: No such file or directory

2014-12-15 Thread Yeela Kaplan


- Forwarded Message -
From: "Yeela Kaplan" 
To: "Ilan Hirsfeld" 
Sent: Monday, December 15, 2014 8:03:03 PM
Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory



- Original Message -
> From: "Ilan Hirsfeld" 
> To: "Yeela Kaplan" 
> Sent: Monday, December 15, 2014 6:35:49 PM
> Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory
> 
> Yeela,
> 
> [root@localhost vdsm]# cat /etc/redhat-release
> CentOS Linux release 7.0.1406 (Core)
> 
> I'm not sure I understand your point.
> Please be clearer.
> 
> Should I do only:
> yum install make autoconf automake pyflakes logrotate gcc python-pep8
> libvirt-python python-devel \
> python-nose rpm-build sanlock-python genisoimage python-ordereddict
> python-pthreading libselinux-python\
> python-ethtool m2crypto python-dmidecode python-netaddr python-inotify
> python-argparse git \
> python-cpopen bridge-utils libguestfs-tools-c pyparted openssl libnl3
> libtool gettext-devel python-ioprocess \
> policycoreutils-python python-simplejson
> or also to do:
> 
> yum install http://download.fedoraproject.org/pub/epel/6/i386/epel-
> release-6-8.noarch.rpm
> yum install http://danken.fedorapeople.org/python-pep8-1.4.5-2.el6.
> noarch.rpm
> 

You need the EPEL repo for EL7.
Just search the web for instructions on installing it, for example:
http://www.cyberciti.biz/faq/installing-rhel-epel-repo-on-centos-redhat-7-x/

Then you can install all of the packages above; you need the repo in place
first so yum can fetch them.
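
A quick way to check whether the repo is already configured before retrying the package install (the repo-file path is the conventional yum location; this is only a status check, not the install itself):

```shell
# Look for an existing EPEL repo file; if absent, install the repo first
# (e.g. via the instructions linked above) before the packages will resolve.
if ls /etc/yum.repos.d/epel*.repo >/dev/null 2>&1; then
    epel_state="present"
else
    epel_state="missing"
fi
echo "EPEL repo: $epel_state"
```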



> Regards,
> Ilan.
> 
> 
> On Mon, Dec 15, 2014 at 6:21 PM, Yeela Kaplan  wrote:
> >
> > You need the same packages just for EL7
> >
> > - Original Message -
> > > From: "Ilan Hirsfeld" 
> > > To: "Yeela Kaplan" 
> > > Cc: "Yedidyah Bar David" , "users" 
> > > Sent: Monday, December 15, 2014 6:00:47 PM
> > > Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory
> > >
> > > Are you sure???
> > > Because the instructions say:
> > > "Fedora and Red Hat *Enterprise Linux 6 *users must verify the following
> > > packages are installed before attempting to build:"
> > >
> > > As far as I understand, I'm on EL7, not EL6. If you still think I have to
> > > run that command, should I also run the previous EL6 command lines, such
> > > as:
> > >
> > > yum install
> > >
> > http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
> > > yum install
> > > http://danken.fedorapeople.org/python-pep8-1.4.5-2.el6.noarch.rpm
> > > Am I wrong?
> > >
> > > Regards,
> > > Ilan.
> > >
> > > On Mon, Dec 15, 2014 at 5:52 PM, Yeela Kaplan 
> > wrote:
> > > >
> > > > yum install make autoconf automake pyflakes logrotate gcc python-pep8
> > > > libvirt-python python-devel \
> > > >  python-nose rpm-build sanlock-python genisoimage python-ordereddict
> > > > python-pthreading libselinux-python\
> > > >  python-ethtool m2crypto python-dmidecode python-netaddr python-inotify
> > > > python-argparse git \
> > > >  python-cpopen bridge-utils libguestfs-tools-c pyparted openssl libnl3
> > > > libtool gettext-devel python-ioprocess \
> > > >  policycoreutils-python python-simplejson
> > > >
> > > > - Original Message -
> > > > > From: "Ilan Hirsfeld" 
> > > > > To: "Yedidyah Bar David" 
> > > > > Cc: "Yeela Kaplan" 
> > > > > Sent: Monday, December 15, 2014 5:39:13 PM
> > > > > Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or
> > directory
> > > > >
> > > > > [root@localhost Desktop]# pwd
> > > > > /home/bih016/Desktop
> > > > > [root@localhost Desktop]# cd vdsm
> > > > > [root@localhost vdsm]# ./autogen.sh --system
> > > > > ./autogen.sh: line 3: autoreconf: command not found
> > > > > Running ./configure with --prefix=/usr --sysconfdir=/etc
> > > > > --localstatedir=/var --libdir=/usr/lib64
> > > > > ./autogen.sh: line 26: ./configure: No such file or directory
> > > > > Regards,
> > > > > Ilan.
> > > > >
> > > > > On Mon, Dec 15, 2014 at 5:35 PM, Yedidyah Bar David  > >
> > > > wrote:
> > > 

Re: [ovirt-users] bash: ./autogen.sh: No such file or directory

2014-12-15 Thread Yeela Kaplan
You need the same packages just for EL7

- Original Message -
> From: "Ilan Hirsfeld" 
> To: "Yeela Kaplan" 
> Cc: "Yedidyah Bar David" , "users" 
> Sent: Monday, December 15, 2014 6:00:47 PM
> Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory
> 
> Are you sure???
> Because the instructions say:
> "Fedora and Red Hat *Enterprise Linux 6 *users must verify the following
> packages are installed before attempting to build:"
> 
> As far as I understand, I'm on EL7, not EL6. If you still think I have to run
> that command, should I also run the previous EL6 command lines, such as:
> 
> yum install
> http://download.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
> yum install
> http://danken.fedorapeople.org/python-pep8-1.4.5-2.el6.noarch.rpm
> Am I wrong?
> 
> Regards,
> Ilan.
> 
> On Mon, Dec 15, 2014 at 5:52 PM, Yeela Kaplan  wrote:
> >
> > yum install make autoconf automake pyflakes logrotate gcc python-pep8
> > libvirt-python python-devel \
> >  python-nose rpm-build sanlock-python genisoimage python-ordereddict
> > python-pthreading libselinux-python\
> >  python-ethtool m2crypto python-dmidecode python-netaddr python-inotify
> > python-argparse git \
> >  python-cpopen bridge-utils libguestfs-tools-c pyparted openssl libnl3
> > libtool gettext-devel python-ioprocess \
> >  policycoreutils-python python-simplejson
> >
> > - Original Message -
> > > From: "Ilan Hirsfeld" 
> > > To: "Yedidyah Bar David" 
> > > Cc: "Yeela Kaplan" 
> > > Sent: Monday, December 15, 2014 5:39:13 PM
> > > Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory
> > >
> > > [root@localhost Desktop]# pwd
> > > /home/bih016/Desktop
> > > [root@localhost Desktop]# cd vdsm
> > > [root@localhost vdsm]# ./autogen.sh --system
> > > ./autogen.sh: line 3: autoreconf: command not found
> > > Running ./configure with --prefix=/usr --sysconfdir=/etc
> > > --localstatedir=/var --libdir=/usr/lib64
> > > ./autogen.sh: line 26: ./configure: No such file or directory
> > > Regards,
> > > Ilan.
> > >
> > > On Mon, Dec 15, 2014 at 5:35 PM, Yedidyah Bar David 
> > wrote:
> > > >
> > > > - Original Message -
> > > > > From: "Ilan Hirsfeld" 
> > > > > To: "Yeela Kaplan" , d...@redhat.com
> > > > > Sent: Monday, December 15, 2014 5:31:12 PM
> > > > > Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or
> > directory
> > > > >
> > > > > On Mon, Dec 15, 2014 at 5:29 PM, Ilan Hirsfeld <
> > ilan.hirsf...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > Hi,
> > > > > >
> > > > > > The OS is:
> > > > > > [root@localhost Desktop]# cat /etc/redhat-release
> > > > > > CentOS Linux release 7.0.1406 (Core)
> > > > > > [root@localhost Desktop]# uname -r
> > > > > > 3.10.0-123.13.1.el7.x86_64
> > > > > > [root@localhost Desktop]# rpm -qa | grep release
> > > > > > ovirt-release35-001-0.5.rc2.noarch
> > > > > > centos-release-7-0.1406.el7.centos.2.5.x86_64
> > > > > >
> > > > > > [root@localhost Desktop]# pwd
> > > > > > /home/bih016/Desktop
> > > > > >
> > > > > > I follow the instructions on the site
> > > > http://www.ovirt.org/Vdsm_Developers
> > > > > > :
> > > > > >
> > > > > > *
> > http://www.ovirt.org/Vdsm_Developers#Installing_the_required_packages
> > > > > > <
> > http://www.ovirt.org/Vdsm_Developers#Installing_the_required_packages
> > > > >*
> > > > > >
> > > > > > 1. yum install
> > > > > >
> > http://resources.ovirt.org/releases/ovirt-release/ovirt-release35.rpm
> > > > > >
> > > > > > 2. rpm -q wget 2> /dev/null || yum install wget
> > > > > > wget -O /etc/yum.repos.d/glusterfs-epel.repo
> > > > > >
> > > >
> > http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
> > > > > >
> > > > > > *http://www.ovirt.org/Vdsm_Developers#Getting_the_source
> > > > > >

Re: [ovirt-users] bash: ./autogen.sh: No such file or directory

2014-12-15 Thread Yeela Kaplan
yum install make autoconf automake pyflakes logrotate gcc python-pep8 
libvirt-python python-devel \
 python-nose rpm-build sanlock-python genisoimage python-ordereddict 
python-pthreading libselinux-python\
 python-ethtool m2crypto python-dmidecode python-netaddr python-inotify 
python-argparse git \
 python-cpopen bridge-utils libguestfs-tools-c pyparted openssl libnl3 libtool 
gettext-devel python-ioprocess \
 policycoreutils-python python-simplejson

- Original Message -
> From: "Ilan Hirsfeld" 
> To: "Yedidyah Bar David" 
> Cc: "Yeela Kaplan" 
> Sent: Monday, December 15, 2014 5:39:13 PM
> Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory
> 
> [root@localhost Desktop]# pwd
> /home/bih016/Desktop
> [root@localhost Desktop]# cd vdsm
> [root@localhost vdsm]# ./autogen.sh --system
> ./autogen.sh: line 3: autoreconf: command not found
> Running ./configure with --prefix=/usr --sysconfdir=/etc
> --localstatedir=/var --libdir=/usr/lib64
> ./autogen.sh: line 26: ./configure: No such file or directory
> Regards,
> Ilan.
> 
> On Mon, Dec 15, 2014 at 5:35 PM, Yedidyah Bar David  wrote:
> >
> > - Original Message -
> > > From: "Ilan Hirsfeld" 
> > > To: "Yeela Kaplan" , d...@redhat.com
> > > Sent: Monday, December 15, 2014 5:31:12 PM
> > > Subject: Re: [ovirt-users] bash: ./autogen.sh: No such file or directory
> > >
> > > On Mon, Dec 15, 2014 at 5:29 PM, Ilan Hirsfeld 
> > > wrote:
> > > >
> > > > Hi,
> > > >
> > > > The OS is:
> > > > [root@localhost Desktop]# cat /etc/redhat-release
> > > > CentOS Linux release 7.0.1406 (Core)
> > > > [root@localhost Desktop]# uname -r
> > > > 3.10.0-123.13.1.el7.x86_64
> > > > [root@localhost Desktop]# rpm -qa | grep release
> > > > ovirt-release35-001-0.5.rc2.noarch
> > > > centos-release-7-0.1406.el7.centos.2.5.x86_64
> > > >
> > > > [root@localhost Desktop]# pwd
> > > > /home/bih016/Desktop
> > > >
> > > > I follow the instructions on the site
> > http://www.ovirt.org/Vdsm_Developers
> > > > :
> > > >
> > > > *http://www.ovirt.org/Vdsm_Developers#Installing_the_required_packages
> > > > <http://www.ovirt.org/Vdsm_Developers#Installing_the_required_packages
> > >*
> > > >
> > > > 1. yum install
> > > > http://resources.ovirt.org/releases/ovirt-release/ovirt-release35.rpm
> > > >
> > > > 2. rpm -q wget 2> /dev/null || yum install wget
> > > > wget -O /etc/yum.repos.d/glusterfs-epel.repo
> > > >
> > http://download.gluster.org/pub/gluster/glusterfs/LATEST/EPEL.repo/glusterfs-epel.repo
> > > >
> > > > *http://www.ovirt.org/Vdsm_Developers#Getting_the_source
> > > > <http://www.ovirt.org/Vdsm_Developers#Getting_the_source>:*
> > > > git clone http://gerrit.ovirt.org/p/vdsm.git
> > > >
> > > > *http://www.ovirt.org/Vdsm_Developers#Building_a_VDSM_RPM
> > > > <http://www.ovirt.org/Vdsm_Developers#Building_a_VDSM_RPM>:*
> > > >  [root@localhost Desktop]# ./autogen.sh --system
> > > > bash: ./autogen.sh: No such file or directory
> > > >
> > > > Regards,
> > > > Ilan.
> > > >
> > > >
> > > >
> > > > On Mon, Dec 15, 2014 at 5:07 PM, Yeela Kaplan 
> > wrote:
> > > >>
> > > >> what is the directory you're running it from (pwd)?
> > > >> You should be under vdsm.
> >
> > So you should follow Yeela's advice.
> >
> > > >> Try listing the files and see if the script autogen.sh is there.
> > > >>
> > > >> - Original Message -
> > > >> > From: "Ilan Hirsfeld" 
> > > >> > To: "users" 
> > > >> > Sent: Monday, December 15, 2014 4:54:52 PM
> > > >> > Subject: [ovirt-users] bash: ./autogen.sh: No such file or directory
> > > >> >
> > > >> > Hi,
> > > >> > I'm trying to do a Building a VDSM RPM and in command line I type
> > the
> > > >> > following:
> > > >> > ./autogen.sh --system
> > > >> > bash: ./autogen.sh: No such file or directory
> > > >> > Can anybody help what was wrong?
> > > >> > Any help will be blessed.
> > > >> > Regards,
> > > >> > Ilan.
> > > >> >
> > > >> > ___
> > > >> > Users mailing list
> > > >> > Users@ovirt.org
> > > >> > http://lists.ovirt.org/mailman/listinfo/users
> > > >> >
> > > >>
> > > >
> > >
> >
> > --
> > Didi
> >
> >
> 


Re: [ovirt-users] bash: ./autogen.sh: No such file or directory

2014-12-15 Thread Yeela Kaplan
What directory are you running it from (check with pwd)?
You should be inside the vdsm source tree.
Try listing the files to see whether the autogen.sh script is there.
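
A small sketch of that check: confirm the directory actually contains an executable autogen.sh before invoking it (the sample path is hypothetical; substitute wherever you cloned vdsm):

```shell
# check_vdsm_src: succeed only if the given directory looks like a vdsm
# source checkout, i.e. contains an executable autogen.sh.
check_vdsm_src() {
    [ -x "$1/autogen.sh" ]
}

# Hypothetical usage -- replace with the directory you cloned vdsm into:
if check_vdsm_src "$HOME/src/vdsm"; then
    echo "ready to run ./autogen.sh --system"
else
    echo "autogen.sh not found - cd into the vdsm git checkout first"
fi
```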

- Original Message -
> From: "Ilan Hirsfeld" 
> To: "users" 
> Sent: Monday, December 15, 2014 4:54:52 PM
> Subject: [ovirt-users] bash: ./autogen.sh: No such file or directory
> 
> Hi,
> I'm trying to build a VDSM RPM, and on the command line I typed the
> following:
> ./autogen.sh --system
> bash: ./autogen.sh: No such file or directory
> Can anybody tell me what went wrong?
> Any help will be blessed.
> Regards,
> Ilan.
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Could not add iscsi domain to data-center

2013-10-01 Thread Yeela Kaplan
Hi,
I was just looking at this before you resolved it, so just out of curiosity...
What was the problem?

- Original Message -
> From: "Saurabh" 
> To: users@ovirt.org
> Sent: Tuesday, October 1, 2013 2:40:51 PM
> Subject: Re: [Users] Could not add iscsi domain to data-center
> 
> On 30/09/13 14:37, Saurabh wrote:
> 
> 
> Hi all,
> 
> I am using oVirt 3.3 on CentOS 6. Things were running fine until I had NFS as
> the data storage for my data center.
> Now I have created a new data center of type iscsi, but I am not able to add
> the iscsi storage to this Datacenter.
> 
> 
> This is all what i did.
> 
> 1>I created a new datacenter of type iscsi
> 2>configured a cluster to this datacenter
> 3>moved my host to this cluster.
> 4>Now navigated to the storage tab
> 5>Clicked on new domain
> 6>selected data/iscsi
> 7>discovered the iscsi iqn
> 
> but when I log in to that IQN, the login is not successful.
> I am attaching you a screenshot.
> 
> Thanks.
> saurabh.
> 
> 
> ___
> Users mailing list Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> resolved the problem!!
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] VM Crash at end of disk move

2013-09-11 Thread Yeela Kaplan
Hi Markus,
Relevant logs you can look at are:
on the hypervisor:
the libvirt logs under /var/log/libvirt/
and the vdsm log under /var/log/vdsm/vdsm.log

and on the engine host:
the engine log under /var/log/ovirt-engine/engine.log
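
A typical first pass over the vdsm log is to grep for errors around the crash window. The sketch below runs against a synthetic log so it works anywhere; on the host you would point LOG at /var/log/vdsm/vdsm.log instead:

```shell
# Build a tiny stand-in vdsm log, then pull the ERROR lines with line numbers.
LOG="$(mktemp)"
cat > "$LOG" <<'EOF'
Thread-1::DEBUG::2013-09-10 21:19:01::starting live storage migration
Thread-2::ERROR::2013-09-10 21:32:10::Lost connection with qemu process
EOF
grep -n 'ERROR' "$LOG"
```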

hope this helps,
Yeela

- Original Message -
> From: "Markus Stockhausen" 
> To: users@ovirt.org
> Sent: Wednesday, September 11, 2013 7:48:22 PM
> Subject: Re: [Users] VM Crash at end of disk move
> 
> > Von: Markus Stockhausen
> > Gesendet: Dienstag, 10. September 2013 21:43
> > An: users@ovirt.org
> > Betreff: VM Crash at end of disk move
> > 
> > Hello,
> > 
> > Thanks for the assistance to get this all running.  But now
> > I'm working on the next error on my new ovirt cluster.
> > 
> > I migrated a Windows 7 64Bit VMware VM into ovirt using
> > virt-v2v. VMWare tools were deinstalled in advance. After
> > two boots on ovirt everything seems fine. Just out of curiosity
> > I moved the boot disk to another storage domain. At the
> > end the machine crashes without any indication of the
> > reason. ovirt webinterface log reads:
> > 
> > 2013-Sep-10, 21:32 User admin@internal have failed to move disk
> > colvm40_1IB_win7x64_Office2010_Test_Disk to domain NAS3_IB.
> > 2013-Sep-10, 21:32 VM colvm40_1IB_win7x64_Office2010_Test is down. Exit
> > message: Lost connection with qemu process.
> > 2013-Sep-10, 21:19 User admin@internal moving disk
> > colvm40_1IB_win7x64_Office2010_Test_Disk1 to domain NAS3_IB.
> > 2013-Sep-10, 21:19 Snapshot 'Auto-generated for Live Storage Migration'
> > creation for VM 'colvm40_1IB_win7x64_Office2010_Test' has been completed.
> > 2013-Sep-10, 21:18 Snapshot 'Auto-generated for Live Storage Migration'
> > creation for VM 'colvm40_1IB_win7x64_Office2010_Test' was initiated by
> > admin@internal.
> > 2013-Sep-10, 21:17 VM colvm40_1IB_win7x64_Office2010_Test started on Host
> > colovn1
> > 
> > After this crash VM can be started again. The disk image is
> > taken from the target storage domain. So it seems as if the
> > move was successful. Another (SLES 11 SP3) VM that was
> > newly created inside the cluster does not show this problem.
> > 
> > As I did not find qemu logs, what would be the best place to
> > start analysis with.
> > 
> > Best regards.
> > 
> > Markus
> 
> Memo for myself: Install ALL Windows VIRTIO drivers. virt-v2v
> only does the basics to get the VM up after the migration.
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Host stuck in unresponsive state

2013-09-04 Thread Yeela Kaplan
Hi Frank,
Can you attach the vdsm, engine, and /var/log/messages logs so we can better understand the issue?

The multipath and iscsi daemons shouldn't be causing you any trouble.
Also, the warnings you saw from multipathd about lines 5 and 18 are just
warnings; multipath knows to ignore those lines (they are there for backward
compatibility).

Thanks,
Yeela

- Original Message -
> From: "Frank Wall" 
> To: users@ovirt.org
> Sent: Monday, September 2, 2013 12:41:27 PM
> Subject: Re: [Users] Host stuck in unresponsive state
> 
> Hi René,
> 
> On 01.09.2013 19:55, René Koch wrote:
> > I sometimes have (had) the same issues with all-in-one-setups, so I
> > don't use local storage in all-in-one-setup anymore.
> 
> thanks for the hint, I'll definitely give this a try if the problem
> persists. As of yet I found that completely disabling all blocking
> services helps stabilizing vdsmd-startup:
> 
> [root@aio ~]# systemctl disable iscsid.service
> [root@aio ~]# systemctl disable multipathd.service
> 
> So far vdsmd starts and runs without any issues.
> 
> 
> Thanks
> - Frank
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Master domain locked, error code 304

2013-04-25 Thread Yeela Kaplan
Hi,
Your problem is that the master domain is locked, so the engine does not send
connectStorageServer to the vdsm host,
and therefore the host does not see the master domain.
You need to change the status of the master domain in the db from locked while
the host is in maintenance.
This can be tricky and is not recommended, because if you do it wrong you
might corrupt the db.
Another, safer, way that I recommend is to try running connectStorageServer
against the master SD from vdsClient on the vdsm host and see what happens; it
might solve your problem.
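
As a rough sketch of what that vdsClient call could look like for an NFS master domain — the pool UUID and connection string below are example values taken from the metadata quoted further down, the argument order and the NFS type code (1) are assumptions, so check `vdsClient -s 0 help connectStorageServer` on your host for the exact format:

```shell
# Example values only -- substitute your own pool UUID and NFS export.
SPUUID="0f63de0e-7d98-48ce-99ec-add109f83c4f"
CONN="id=1,connection=10.101.0.148:/c/vpt1-master,port=,iqn=,user=,password="
# Print the command instead of running it; drop the 'echo' on a real vdsm host.
# Storage type 1 is assumed to be NFS in vdsm's domain-type enumeration.
echo vdsClient -s 0 connectStorageServer 1 "$SPUUID" "$CONN"
```

If the call succeeds, the host should be able to see the master domain again and the pool can be reconnected.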

--
Yeela

- Original Message -
> From: "Tommy McNeely" 
> To: "Juan Jose" 
> Cc: users@ovirt.org
> Sent: Wednesday, April 24, 2013 7:30:20 PM
> Subject: Re: [Users] Master domain locked, error code 304
> 
> Hi Juan,
> 
> That sounds like a possible path to follow. Our "master" domain does not have
> any VMs in it. If no one else responds with an official path to resolution,
> then I will try going into the database and hacking it like that. I think it
> has something to do with the version or the metadata??
> 
> [root@vmserver3 dom_md]# cat metadata
> CLASS=Data
> DESCRIPTION=SFOTestMaster1
> IOOPTIMEOUTSEC=10
> LEASERETRIES=3
> LEASETIMESEC=60
> LOCKPOLICY=
> LOCKRENEWALINTERVALSEC=5
> MASTER_VERSION=1
> POOL_DESCRIPTION=SFODC01
> POOL_DOMAINS=774e3604-f449-4b3e-8c06-7cd16f98720c:Active,758c0abb-ea9a-43fb-bcd9-435f75cd0baa:Active,baa42b1c-ae2e-4486-88a1-e09e1f7a59cb:Active
> POOL_SPM_ID=1
> POOL_SPM_LVER=4
> POOL_UUID=0f63de0e-7d98-48ce-99ec-add109f83c4f
> REMOTE_PATH=10.101.0.148:/c/vpt1-master
> ROLE=Master
> SDUUID=774e3604-f449-4b3e-8c06-7cd16f98720c
> TYPE=NFS
> VERSION=0
> _SHA_CKSUM=fa8ef0e7cd5e50e107384a146e4bfc838d24ba08
> 
> 
> On Wed, Apr 24, 2013 at 5:57 AM, Juan Jose < jj197...@gmail.com > wrote:
> 
> 
> 
> Hello Tommy,
> 
> I had a similar experience and after try to recover my storage domain, I
> realized that my VMs had missed. You have to verify if your VM disks are
> inside of your storage domain. In my case, I had to add a new a new Storage
> domain as Master domain to be able to remove the old VMs from DB and
> reattach the old storage domain. I hope this were not your case. If you
> haven't lost your VMs it's possible that you can recover them.
> 
> Good luck,
> 
> Juanjo.
> 
> 
> On Wed, Apr 24, 2013 at 6:43 AM, Tommy McNeely < tommythe...@gmail.com >
> wrote:
> 
> 
> 
> 
> We had a hard crash (network, then power) on our 2 node Ovirt Cluster. We
> have NFS datastore on CentOS 6 (3.2.0-1.39.el6). We can no longer get the
> hosts to activate. They are unable to activate the "master" domain. The
> master storage domain show "locked" while the other storage domains show
> Unknown (disks) and inactive (ISO) All the domains are on the same NFS
> server, we are able to mount it, the permissions are good. We believe we
> might be getting bit by https://bugzilla.redhat.com/show_bug.cgi?id=920694
> or http://gerrit.ovirt.org/#/c/13709/ which says to cease working on it:
> 
> Michael Kublin, Apr 10
> 
> 
> Patch Set 5: Do not submit
> 
> Liron, please abandon this work. This interacts with host life cycle which
> will be changed, during a change a following problem will be solved as well.
> 
> 
> So, We were wondering what we can do to get our oVirt back online, or rather
> what the correct way is to solve this. We have a few VMs that are down which
> we are looking for ways to recover as quickly as possible.
> 
> Thanks in advance,
> Tommy
> 
> Here are the ovirt-engine logs:
> 
> 2013-04-23 21:30:04,041 ERROR
> [org.ovirt.engine.core.vdsbroker.VDSCommandBase] (pool-3-thread-49) Command
> ConnectStoragePoolVDS execution failed. Exception:
> IRSNoMasterDomainException: IRSGenericException: IRSErrorException:
> IRSNoMasterDomainException: Cannot find master domain:
> 'spUUID=0f63de0e-7d98-48ce-99ec-add109f83c4f,
> msdUUID=774e3604-f449-4b3e-8c06-7cd16f98720c'
> 2013-04-23 21:30:04,043 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.ConnectStoragePoolVDSCommand]
> (pool-3-thread-49) FINISH, ConnectStoragePoolVDSCommand, log id: 50524b34
> 2013-04-23 21:30:04,049 WARN
> [org.ovirt.engine.core.bll.storage.ReconstructMasterDomainCommand]
> (pool-3-thread-49) [7c5867d6] CanDoAction of action ReconstructMasterDomain
> failed.
> Reasons:VAR__ACTION__RECONSTRUCT_MASTER,VAR__TYPE__STORAGE__DOMAIN,ACTION_TYPE_FAILED_STORAGE_DOMAIN_STATUS_ILLEGAL2,$status
> Locked
> 
> 
> 
> Here are the logs from vdsm:
> 
> Thread-29::DEBUG::2013-04-23
> 21:36:05,906::misc::84::Storage.Misc.excCmd::() '/usr/bin/sudo -n
> /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3
> 10.101.0.148:/c/vpt1-vmdisks1
> /rhev/data-center/mnt/10.101.0.148:_c_vpt1-vmdisks1' (cwd None)
> Thread-29::DEBUG::2013-04-23
> 21:36:06,008::misc::84::Storage.Misc.excCmd::() '/usr/bin/sudo -n
> /bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6,nfsvers=3
> 10.101.0.148:/c/vpool-iso /rhev/data-center/mnt/10.101.0.148:_c_vpool-iso'
> (cwd None)
> Thread-29:

Re: [Users] oVirt storage is down and doesn't come up

2013-04-21 Thread Yeela Kaplan
Is the host up? Do you have another storage domain besides the master that you
can use (from the logs I saw that you have another one)? Maybe you can try to
re-initialize on it.
Sorry, please attach 'tree /rhev/data-center/'.


- Original Message -
> From: "Yuval M" 
> To: "Yeela Kaplan" 
> Cc: users@ovirt.org, "Nezer Zaidenberg" , "Limor Gavish" 
> 
> Sent: Sunday, April 21, 2013 5:02:19 PM
> Subject: Re: [Users] oVirt storage is down and doesn't come up
> 
> Hi,
> I am unable to add an additional storage domain - the "Use Host" selection
> box is empty and I cannot select any host:
> 
> 
> [image: Inline image 2]
> 
> # ls -ltr /rhev/data-center/
> total 8
> drwxr-xr-x  2 vdsm kvm 4096 Apr 18 18:27 hsm-tasks
> drwxr-xr-x. 4 vdsm kvm 4096 Apr 21 16:56 mnt
> 
> 
> Yuval
> 


Re: [Users] oVirt storage is down and doesn't come up

2013-04-21 Thread Yeela Kaplan
It looks to me like the master domain it is looking for is no longer there; I'm
not sure what happened.
What you can try, to get your storage back up, is to create a new storage
domain (it won't be able to attach, since your pool is not connected to the host).
Then right-click on the new SD and choose to re-initialize the data center.
It will try to reconstruct your master SD.
Also, just so that I have more information about the state of your system,
please attach the result of: 'ls -ltr /rhev/data-center/'.
If you want to make this more interactive, you can connect to IRC.

- Original Message -
> From: "Yuval M" 
> To: users@ovirt.org, "Nezer Zaidenberg" , "Limor Gavish" 
> 
> Sent: Thursday, April 18, 2013 7:11:57 PM
> Subject: [Users]  oVirt storage is down and doesn't come up
> 
> No luck.
> 
> [wil@bufferoverflow ~]$ sudo systemctl stop vdsmd.service
> [wil@bufferoverflow ~]$ sudo rm -rf /rhev/data-center/*
> [wil@bufferoverflow ~]$ ls -lad /rhev/data-center/
> drwxr-xr-x. 4 vdsm kvm 4096 Apr 18 18:27 /rhev/data-center/
> [wil@bufferoverflow ~]$ ls -la /rhev/data-center/
> total 16
> drwxr-xr-x. 4 vdsm kvm 4096 Apr 18 18:27 .
> drwxr-xr-x. 3 root root 4096 Mar 13 15:32 ..
> drwxr-xr-x 2 vdsm kvm 4096 Apr 18 18:27 hsm-tasks
> drwxr-xr-x. 4 vdsm kvm 4096 Apr 18 18:25 mnt
> [wil@bufferoverflow ~]$ sudo reboot
> Connection to bufferoverflow closed by remote host.
> ...
> Last login: Thu Apr 18 18:40:19 2013
> [wil@bufferoverflow ~]$ ./engine-service stop
> Stopping engine-service: [ OK ]
> [wil@bufferoverflow ~]$ ./engine-service start
> Starting engine-service: [ OK ]
> [wil@bufferoverflow ~]$
> 
> 
> Logs attached.
> 
> 
> 


Re: [Users] Disk move when VM is running problem

2013-04-18 Thread Yeela Kaplan
You have used live storage migration.
In order to better understand what happened in your environment I need the
following details:
1) What are your vdsm and libvirt versions?
You probably need to upgrade libvirt; there is a known bug there.
2) attach the output of 'tree /rhev/data-center/', run on the vdsm host
3) attach the output of 'lvs', run on the vdsm host
4) Did you check what is on the disk of the vm (the target disk of the
replication)? The replication failed, so it should probably be empty.
5) Does your disk have any snapshots?
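
The diagnostics requested above can be gathered in one pass. A small sketch, assuming it is run on the vdsm host; the `|| true` guards simply keep the script going if one of the tools is missing:

```shell
# Collect the requested storage diagnostics into a scratch directory.
OUT=$(mktemp -d)
rpm -q vdsm libvirt     > "$OUT/versions.txt" 2>&1 || true   # vdsm/libvirt versions
tree /rhev/data-center/ > "$OUT/tree.txt"     2>&1 || true   # data-center layout
lvs                     > "$OUT/lvs.txt"      2>&1 || true   # LVM view of the block SD
echo "diagnostics written to $OUT"
```

The resulting files can then be attached to the reply as-is.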

--
Yeela

- Original Message -
> From: "Gianluca Cecchi" 
> To: "users" 
> Sent: Tuesday, April 16, 2013 6:50:03 PM
> Subject: Re: [Users] Disk move when VM is running problem
> 
> On Tue, Apr 16, 2013 at 5:43 PM, Gianluca Cecchi wrote:
> 
> >
> > After some minutes I get:
> > User admin@internal have failed to move disk zensrv_Disk1 to domain
> > DS6800_Z1_1181.
> >
> > But actually the vm is still running (I had a ssh terminal open on it)
> > and the disk appears to be the target one, as if the move operation
> > actually completed ok
> >
> > what to do? Can I safely shutdown and restart the vm?
> >
> 
> 
> 
> engine.log :
> https://docs.google.com/file/d/0BwoPbcrMv8mvaDMzY0JNcHFwVnc/edit?usp=sharing
> 
> vdsm.log:
> https://docs.google.com/file/d/0BwoPbcrMv8mvaDVIWTdEUzBPeVU/edit?usp=sharing
> Gianluca
> 


Re: [Users] oVirt storage is down and doesn't come up

2013-04-18 Thread Yeela Kaplan
The vdsm.log.44.xz is exactly what I needed. It looks like your storage should
be fine.
Please try 'rm -rf /rhev/data-center/*' and then reboot the host.
Let me know if it solves the problem,
and if not, attach the new logs.
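
The reason clearing /rhev/data-center/ is considered safe here is that (with vdsm stopped) the directory normally holds only mount points and symlinks that vdsm recreates when it reconnects the pool. A toy illustration in a throwaway directory — it does not touch /rhev at all:

```shell
# Simulate vdsm's data-center link directory in a scratch location.
DC=$(mktemp -d)
mkdir -p "$DC/mnt"                                  # stand-in for the mount dir
ln -s /nonexistent-domain "$DC/0f63de0e-stale-link" # a dangling pool/domain link
rm -rf "$DC"/*                                      # what the advice above does
ls -A "$DC" | wc -l                                 # prints 0: contents gone, dir kept
```

Only the directory's contents are removed; the directory itself (and, on a real host, any still-mounted storage underneath /rhev/data-center/mnt) is repopulated on the next vdsm start.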

- Original Message -
> From: "Limor Gavish" 
> To: "Yeela Kaplan" 
> Cc: "Yuval M" , users@ovirt.org, "Nezer Zaidenberg" 
> 
> Sent: Wednesday, April 17, 2013 9:41:16 PM
> Subject: Re: [Users] oVirt storage is down and doesn't come up
> 
> Thank you very much for your reply.
> 
> I see that the problem appears in vdsm.log.44.xz but doesn't appear in
> vdsm.log.45.xz
> 
> *[wil@bufferoverflow vdsm]$ xzcat vdsm.log.45.xz | grep
> StoragePoolMasterNotFound | wc -l*
> *0*
> *[wil@bufferoverflow vdsm]$ xzcat vdsm.log.44.xz | grep
> StoragePoolMasterNotFound | wc -l*
> *52*
> 
> so I hope the source of the problem is in one of them (attached).
> 
> *[wil@bufferoverflow vdsm]$ ls -la vdsm.log.44.xz*
> *-rw-r--r-- 1 vdsm kvm 763808 Mar 24 20:00 vdsm.log.44.xz*
> *[wil@bufferoverflow vdsm]$ ls -la vdsm.log.45.xz*
> *-rw-r--r-- 1 vdsm kvm 706212 Mar 22 11:00 vdsm.log.45.xz*
> 
> Unfortunately, I do not have any engine logs from that time (between Mar 22
> 11:00 and Mar 24 20:00)
> 
> *[wil@bufferoverflow ovirt-engine]$ ls -la*
> *total 148720*
> *drwxrwxr-x 2 wil wil 4096 Apr 17 09:07 .*
> *drwxrwxr-x 3 wil wil 4096 Mar 26 20:13 ..*
> *-rw-rw-r-- 1 wil wil  304 Apr 17 16:31 boot.log*
> *-rw-rw 1 wil wil  510 Apr 17 16:31 console.log*
> *-rw-rw-r-- 1 wil wil  7398188 Apr 17 21:35 engine.log*
> *-rw-rw-r-- 1 wil wil 10485813 Apr 13 09:20 engine.log.1*
> *-rw-rw-r-- 1 wil wil 10485766 Apr 11 13:19 engine.log.2*
> *-rw-rw-r-- 1 wil wil 10486016 Apr 11 08:14 engine.log.3*
> *-rw-rw-r-- 1 wil wil 10485972 Apr 11 03:06 engine.log.4*
> *-rw-rw-r-- 1 wil wil 10486208 Apr 10 22:01 engine.log.5*
> *-rw-rw-r-- 1 wil wil  8439424 Apr 17 16:31 server.log*
> *-rw-rw-r-- 1 wil wil 10485867 Apr 17 09:07 server.log.1*
> *-rw-rw-r-- 1 wil wil 10485943 Apr 17 02:40 server.log.2*
> *-rw-rw-r-- 1 wil wil 10485867 Apr 16 20:15 server.log.3*
> *-rw-rw-r-- 1 wil wil 10485943 Apr 16 13:54 server.log.4*
> *-rw-rw-r-- 1 wil wil 10485867 Apr 16 07:32 server.log.5*
> *-rw-rw-r-- 1 wil wil 10485943 Apr 16 01:05 server.log.6*
> *-rw-rw-r-- 1 wil wil 10485867 Apr 15 18:46 server.log.7*
> *-rw-rw-r-- 1 wil wil 10485781 Apr 15 12:28 server.log.8*
> *[wil@bufferoverflow ovirt-engine]$ pwd*
> */home/wil/ovirt-engine/installation/var/log/ovirt-engine*
> 
> 
> 
> On Wed, Apr 17, 2013 at 6:54 PM, Yeela Kaplan  wrote:
> 
> > It looks like the link to the master domain is not in the tree.
> > I need to see the full logs and understand what happened. Including the
> > engine log.
> > Are you sure you don't have them? even if they were rotated they should be
> > kept as a vdsm.log.*.xz under /var/log/vdsm/
> >
> > - Original Message -
> > > From: "Yuval M" 
> > > To: "Yeela Kaplan" 
> > > Cc: "Limor Gavish" , users@ovirt.org, "Nezer
> > Zaidenberg" 
> > > Sent: Wednesday, April 17, 2013 4:56:55 PM
> > > Subject: Re: [Users] oVirt storage is down and doesn't come up
> > >
> > > 1. we do not have the logs from before the problem.
> > > 2.
> > > 
> > > $ tree /rhev/data-center/
> > > /rhev/data-center/
> > > ├── hsm-tasks
> > > └── mnt
> > >     ├── bufferoverflow.home:_home_BO__ISO__Domain
> > >     │   ├── 45d24e2a-705e-440f-954c-fda3cab61298
> > >     │   │   ├── dom_md
> > >     │   │   │   ├── ids
> > >     │   │   │   ├── inbox
> > >     │   │   │   ├── leases
> > >     │   │   │   ├── metadata
> > >     │   │   │   └── outbox
> > >     │   │   └── images
> > >     │   │       └── ----
> > >     │   │           ├── Fedora-18-x86_64-DVD.iso
> > >     │   │           └── Fedora-18-x86_64-Live-Desktop.iso
> > >     │   └── __DIRECT_IO_TEST__
> > >     ├── bufferoverflow.home:_home_BO__Ovirt__Storage
> > >     └── kernelpanic.home:_home_KP__Data__Domain
> > >         ├── a8286508-db45-40d7-8645-e573f6bacdc7
> > >         │   ├── dom_md
> > >         │   │   ├── ids
> > >         │   │   ├── inbox
> > >         │   │   ├── leases
> > >         │   │   ├── metadata
> > >         │   │   └── outbox
> > >         │   └── images
> > >

Re: [Users] oVirt storage is down and doesn't come up

2013-04-17 Thread Yeela Kaplan
It looks like the link to the master domain is not in the tree.
I need to see the full logs and understand what happened. Including the engine 
log.
Are you sure you don't have them? even if they were rotated they should be kept 
as a vdsm.log.*.xz under /var/log/vdsm/ 

- Original Message -
> From: "Yuval M" 
> To: "Yeela Kaplan" 
> Cc: "Limor Gavish" , users@ovirt.org, "Nezer Zaidenberg" 
> 
> Sent: Wednesday, April 17, 2013 4:56:55 PM
> Subject: Re: [Users] oVirt storage is down and doesn't come up
> 
> 1. we do not have the logs from before the problem.
> 2.
> 
> $ tree /rhev/data-center/
> /rhev/data-center/
> ├── hsm-tasks
> └── mnt
>     ├── bufferoverflow.home:_home_BO__ISO__Domain
>     │   ├── 45d24e2a-705e-440f-954c-fda3cab61298
>     │   │   ├── dom_md
>     │   │   │   ├── ids
>     │   │   │   ├── inbox
>     │   │   │   ├── leases
>     │   │   │   ├── metadata
>     │   │   │   └── outbox
>     │   │   └── images
>     │   │       └── ----
>     │   │           ├── Fedora-18-x86_64-DVD.iso
>     │   │           └── Fedora-18-x86_64-Live-Desktop.iso
>     │   └── __DIRECT_IO_TEST__
>     ├── bufferoverflow.home:_home_BO__Ovirt__Storage
>     └── kernelpanic.home:_home_KP__Data__Domain
>         ├── a8286508-db45-40d7-8645-e573f6bacdc7
>         │   ├── dom_md
>         │   │   ├── ids
>         │   │   ├── inbox
>         │   │   ├── leases
>         │   │   ├── metadata
>         │   │   └── outbox
>         │   └── images
>         │       ├── 0df45336-de35-4dc0-9958-95b27d5d4701
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.lease
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.meta
>         │       │   ├── b245184f-f8e3-479b-8559-8b6af2473b7c
>         │       │   ├── b245184f-f8e3-479b-8559-8b6af2473b7c.lease
>         │       │   └── b245184f-f8e3-479b-8559-8b6af2473b7c.meta
>         │       ├── 0e1ebaf7-3909-44cd-8560-d05a63eb4c4e
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.lease
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.meta
>         │       │   ├── 562b9043-bde8-4595-bbea-fa8871f0e19e
>         │       │   ├── 562b9043-bde8-4595-bbea-fa8871f0e19e.lease
>         │       │   └── 562b9043-bde8-4595-bbea-fa8871f0e19e.meta
>         │       ├── 32ebb85a-0dde-47fe-90c7-7f4fb2c0f1e5
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.lease
>         │       │   ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.meta
>         │       │   ├── 4774095e-db3d-4561-8284-53eabfd28f66
>         │       │   ├── 4774095e-db3d-4561-8284-53eabfd28f66.lease
>         │       │   └── 4774095e-db3d-4561-8284-53eabfd28f66.meta
>         │       └── a7e13a25-1694-4509-9e6b-e88583a4d970
>         │           ├── 0d33efc8-a608-439f-abe2-43884c1ce72d
>         │           ├── 0d33efc8-a608-439f-abe2-43884c1ce72d.lease
>         │           └── 0d33efc8-a608-439f-abe2-43884c1ce72d.meta
>         └── __DIRECT_IO_TEST__
> 
> 16 directories, 35 files
> 
> 
> 3. We have 3 domains:
> BO_Ovirt_Storage (data domain, on the same machine as engine and vdsm, via
> NFS)
> BO_ISO_Domain (ISO domain, same machine via NFS)
> KP_Data_Domain (data domain on an NFS mount on a different machine)
> 
> Yuval
> 
> 
> 
> On Wed, Apr 17, 2013 at 4:28 PM, Yeela Kaplan  wrote:
> 
> > Hi Limor,
> > 1) Your log starts exactly after the vdsm restart. I need to see the full
> > vdsm log from before the domains went down in order to understand the
> > problem. Can you attach them?
> > 2) can you send the printout of 'tree /rhev/data-center/'
> > 3) how many domains are attached to your DC, and what type are they (ISO,
> > export, data)? (The DC is NFS, right?)
> >
> > Thanks,
> > Yeela
> >
> > - Original Message -
> > > From: "Limor Gavish" 
> > > To: "Tal Nisan" 
> > > Cc: "Yuval M" , users@ovirt.org, "Nezer Zaidenberg" <
> > nzaidenb...@mac.com>
> > > Sent: Monday, April 15, 2013 5:10:16 PM
> > > Subject: Re: [Users] oVirt storage is down and doesn't come up
> > >
> > > Thank you very much for your reply.
> > > I ran the commands you asked (see below)

Re: [Users] Export domain was working... then... NFS, rpc.statd issues

2013-04-17 Thread Yeela Kaplan
Hi Nicolas,
Please send the full vdsm log. 
Also, what is the name of the previous mount and mount point,
and what is the name of the new mount and mount point?
Also, what are the exact steps you used to switch the two export domains? Have
you done anything through the engine?

- Original Message -
> From: "Nicolas Ecarnot" 
> To: "users" 
> Sent: Tuesday, April 16, 2013 1:18:40 PM
> Subject: [Users] Export domain was working... then... NFS, rpc.statd issues
> 
> Hi,
> 
> [oVirt 3.1, F17]
> My good old NFS export domain was OK, but getting too small for our needs.
> Then I unmounted it, created another bigger one somewhere else, and
> tried to mount the new one.
> 
> Long things short, the NFS is not mounted and the relevant error is here:
> 
> Thread-142::DEBUG::2013-04-16
> 10:08:25,973::__init__::1249::Storage.Misc.excCmd::(_log) '/usr/bin/sudo
> -n /usr/bin/mount -t nfs -o soft,nosharecache,timeo=600,retrans=6
> serv-vm-adm7.xxx:/data/vmex
> /rhev/data-center/mnt/serv-vm-adm7.xxx:_data_vmex' (cwd None)
> Thread-142::ERROR::2013-04-16
> 10:08:26,047::hsm::1932::Storage.HSM::(connectStorageServer) Could not
> connect to storageServer
> Traceback (most recent call last):
>File "/usr/share/vdsm/storage/hsm.py", line 1929, in connectStorageServer
>File "/usr/share/vdsm/storage/storageServer.py", line 256, in connect
>File "/usr/share/vdsm/storage/storageServer.py", line 179, in connect
>File "/usr/share/vdsm/storage/mount.py", line 190, in mount
>File "/usr/share/vdsm/storage/mount.py", line 206, in _runcmd
> MountError: (32, ";mount.nfs: rpc.statd is not running but is required
> for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks
> local, or start statd.\nmount.nfs: an incorrect mount option was
> specified\n")
> 
> I confirm trying to manually mount the same from the node, and using the
> nolock option does work.
> 
> While googling, I checked the /etc/services : no problem.
> I don't know what to change, what I did wrong, what to improve?
> 
> --
> Nicolas Ecarnot
> [Very rare msg from me using HTML and colors... I'm ready to wear a tie ;)]
> 


Re: [Users] oVirt storage is down and doesn't come up

2013-04-17 Thread Yeela Kaplan
Hi Limor,
1) Your log starts exactly after the vdsm restart. I need to see the full vdsm 
log from before the domains went down in order to understand the problem. Can 
you attach them?
2) can you send the printout of 'tree /rhev/data-center/' 
3) how many domains are attached to your DC, and what type are they (ISO,
export, data)? (The DC is NFS, right?)

Thanks,
Yeela

- Original Message -
> From: "Limor Gavish" 
> To: "Tal Nisan" 
> Cc: "Yuval M" , users@ovirt.org, "Nezer Zaidenberg" 
> 
> Sent: Monday, April 15, 2013 5:10:16 PM
> Subject: Re: [Users] oVirt storage is down and doesn't come up
> 
> Thank you very much for your reply.
> I ran the commands you asked (see below) but a directory named as the uuid of
> the master domain is not mounted. We tried to restart the VDSM and the
> entire machine it didn't help.
> We succeeded in manually mounting "/home/BO_Ovirt_Storage" on a temporary
> directory.
> 
> postgres=# \connect engine;
> You are now connected to database "engine" as user "postgres".
> engine=# select current_database();
> current_database
> ------------------
> engine
> (1 row)
> engine=# select sds.id , ssc.connection from storage_domain_static sds join
> storage_server_connections ssc on sds.storage= ssc.id where sds.id
> ='1083422e-a5db-41b6-b667-b9ef1ef244f0';
> id | connection
> --------------------------------------+--------------------------------------------
> 1083422e-a5db-41b6-b667-b9ef1ef244f0 |
> bufferoverflow.home:/home/BO_Ovirt_Storage
> (1 row)
> 
> [wil@bufferoverflow ~] $ mount
> proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
> sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
> devtmpfs on /dev type devtmpfs
> (rw,nosuid,size=8131256k,nr_inodes=2032814,mode=755)
> securityfs on /sys/kernel/security type securityfs
> (rw,nosuid,nodev,noexec,relatime)
> tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
> devpts on /dev/pts type devpts
> (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
> tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
> tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
> cgroup on /sys/fs/cgroup/systemd type cgroup
> (rw,nosuid,nodev,noexec,relatime,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
> cgroup on /sys/fs/cgroup/cpuset type cgroup
> (rw,nosuid,nodev,noexec,relatime,cpuset)
> cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup
> (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
> cgroup on /sys/fs/cgroup/memory type cgroup
> (rw,nosuid,nodev,noexec,relatime,memory)
> cgroup on /sys/fs/cgroup/devices type cgroup
> (rw,nosuid,nodev,noexec,relatime,devices)
> cgroup on /sys/fs/cgroup/freezer type cgroup
> (rw,nosuid,nodev,noexec,relatime,freezer)
> cgroup on /sys/fs/cgroup/net_cls type cgroup
> (rw,nosuid,nodev,noexec,relatime,net_cls)
> cgroup on /sys/fs/cgroup/blkio type cgroup
> (rw,nosuid,nodev,noexec,relatime,blkio)
> cgroup on /sys/fs/cgroup/perf_event type cgroup
> (rw,nosuid,nodev,noexec,relatime,perf_event)
> /dev/sda3 on / type ext4 (rw,relatime,data=ordered)
> rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
> debugfs on /sys/kernel/debug type debugfs (rw,relatime)
> sunrpc on /proc/fs/nfsd type nfsd (rw,relatime)
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
> systemd-1 on /proc/sys/fs/binfmt_misc type autofs
> (rw,relatime,fd=34,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
> mqueue on /dev/mqueue type mqueue (rw,relatime)
> tmpfs on /tmp type tmpfs (rw)
> configfs on /sys/kernel/config type configfs (rw,relatime)
> binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
> /dev/sda5 on /home type ext4 (rw,relatime,data=ordered)
> /dev/sda1 on /boot type ext4 (rw,relatime,data=ordered)
> kernelpanic.home:/home/KP_Data_Domain on
> /rhev/data-center/mnt/kernelpanic.home:_home_KP__Data__Domain type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.100.101.100,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.100.101.100)
> bufferoverflow.home:/home/BO_ISO_Domain on
> /rhev/data-center/mnt/bufferoverflow.home:_home_BO__ISO__Domain type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=10.100.101.108,mountvers=3,mountport=20048,mountproto=udp,local_lock=none,addr=10.100.101.108)
> 
> [wil@bufferoverflow ~]$ ls -la /home/
> total 36
> drwxr-xr-x. 6 root root 4096 Mar 22 11:25 .
> dr-xr-xr-x. 19 root root 4096 Apr 12 18:53 ..
> drwxr-xr-x. 3 vdsm kvm 4096 Mar 27 17:33 BO_ISO_Domain
> drwxr-xr-x. 3 vdsm kvm 4096 Mar 27 17:33 BO_Ovirt_Storage
> drwx------. 2 root root 16384 Mar 6 09:11 lost+found
> drwx------. 27 wil wil 4096 Apr 15 01:50 wil
> [wil@bufferoverflow ~]$ cd /home/BO_Ovirt_Storage/
> [wil@bufferoverflow BO_Ovirt_Storage]$ ls -la
> total 12
> drwxr-xr-x. 3 vdsm kvm 4096 Mar 27 17:33 .
> drwxr-xr-x. 6 root root 4096 Mar 22 11:25 ..
> drwxr-xr

Re: [Users] How to remove storage domain

2013-04-17 Thread Yeela Kaplan
Gianluca,
You need to first put the domain into maintenance,
then detach the storage domain from the data center,
and then the 'remove' option will be available.

--
Yeela



- Original Message -
> From: "Gianluca Cecchi" 
> To: "users" 
> Sent: Tuesday, April 16, 2013 7:01:54 PM
> Subject: [Users] How to remove storage domain
> 
> Hello,
> oVirt 3.2.1 on f18.
> 
> I have an FC datacenter where I have several storage domains.
> I want to remove one storage domain, so I move all its disks to other ones
> (disk --> move)
> At the end its state is active, but the "remove" option is greyed out.
> I can only do "destroy".
> 
> What do I need to do to make the "delete" option available?
> 
> Thanks,
> Gianluca
> 


Re: [Users] SPM is always contending

2013-04-08 Thread Yeela Kaplan
Hi Andy,
I saw in another thread that you have resolved the issue,
but I would like to further investigate the cause.
Can you please send an sos report as soon as possible?
Thanks,
Yeela

- Original Message -
> From: "Andy Singleton" 
> To: users@ovirt.org
> Sent: Friday, April 5, 2013 12:07:24 PM
> Subject: [Users] SPM is always contending
> 
> After the node acting as SPM was (accidentally) put into maintenance
> mode, the SPM role is always contending. The original node doesn't take
> the role back either.
> About 10 instances were still running on the node when it was put into
> maintenance.
> 
> I have ovirt 3.1.0-4 with a targetcli iscsi store.
> 
> Here are the contents of the engine.log.
> 
> Andy
> 
> 2013-04-04 15:25:23,979 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-96) [5449512a] spmStart polling ended. spm status:
> Free
> 2013-04-04 15:25:23,993 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (QuartzScheduler_Worker-96) [5449512a] START, HSMClearTaskVDSCommand(vd
> sId = d265245e-3cc5-11e2-bce7-001018fc3b14,
> taskId=7c1cf7c6-21d4-4504-882d-3d465742b6ef), log id: d2da8ba
> 2013-04-04 15:25:24,011 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.HSMClearTaskVDSCommand]
> (QuartzScheduler_Worker-96) [5449512a] FINISH, HSMClearTaskVDSCommand,
> log id: d2da8ba
> 2013-04-04 15:25:24,012 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-96) [5449512a] FINISH, SpmStartVDSCommand, return:
> org.ovirt.engine.core.common.businessentities.SpmStatusResult@30a05218,
> log id: 5980d6f4
> 2013-04-04 15:25:24,014 INFO
> [org.ovirt.engine.core.bll.storage.SetStoragePoolStatusCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] Running command: SetStoragePoolStat
> usCommand internal: true. Entities affected :  ID:
> 0ba15357-9b3b-4a76-8dac-b2b66b922174 Type: StoragePool
> 2013-04-04 15:25:24,843 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98]
> IrsBroker::Failed::GetStoragePoolInfoV
> DS due to: IrsSpmStartFailedException: IRSGenericException:
> IRSErrorException: SpmStart failed
> 2013-04-04 15:25:24,900 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] Irs placed on server null failed.
> Proc
> eed Failover
> 2013-04-04 15:25:34,977 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] hostFromVds::selectedVds -
> moon-palace
> , spmStatus Free, storage pool Primary
> 2013-04-04 15:25:35,806 INFO
> [org.ovirt.engine.core.vdsbroker.irsbroker.IrsBrokerCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] starting spm on vds moon-palace,
> stora
> ge pool Primary, prevId -1, LVER 2520
> 2013-04-04 15:25:35,851 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] START, SpmStartVDSCommand(vdsId = 3d
> 88c8b0-84bc-11e2-96b1-001018fc3b14, storagePoolId =
> 0ba15357-9b3b-4a76-8dac-b2b66b922174, prevId=-1, prevLVER=2520,
> storagePoolFormatType=V2, recoveryMode=Manual, SCSIF
> encing=false), log id: 30f4f64e
> 2013-04-04 15:25:45,901 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] spmStart polling started: taskId = 6
> 8002cff-478d-4c2c-8839-b935d8a858b1
> 2013-04-04 15:26:11,903 INFO [org.ovirt.engine.core.bll.VdsLoadBalancer]
> (QuartzScheduler_Worker-81) [45db3aaa] VdsLoadBalancer: Starting load
> balance for cluster: BP_
> Primary, algorithm: EvenlyDistribute.
> 2013-04-04 15:26:12,807 INFO [org.ovirt.engine.core.bll.VdsLoadBalancer]
> (QuartzScheduler_Worker-81) [45db3aaa] VdsLoadBalancer: high util: 51,
> low util: 0, duration:
> 2, threashold: 80
> 2013-04-04 15:26:12,845 INFO
> [org.ovirt.engine.core.bll.VdsLoadBalancingAlgorithm]
> (QuartzScheduler_Worker-81) [45db3aaa] VdsLoadBalancer: number of
> relevant vdss (no
> migration, no pending): 3.
> 2013-04-04 15:26:12,846 INFO
> [org.ovirt.engine.core.bll.VdsCpuVdsLoadBalancingAlgorithm]
> (QuartzScheduler_Worker-81) [45db3aaa] VdsLoadBalancer: number of over
> utilize
> d vdss found: 0.
> 2013-04-04 15:26:12,846 INFO
> [org.ovirt.engine.core.bll.VdsCpuVdsLoadBalancingAlgorithm]
> (QuartzScheduler_Worker-81) [45db3aaa] VdsLoadBalancer: max cpu limit:
> 40, num
> ber of ready to migration vdss: 3
> 2013-04-04 15:26:19,024 INFO
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] spmStart polling ended: taskId = 680
> 02cff-478d-4c2c-8839-b935d8a858b1 task status = finished
> 2013-04-04 15:26:19,024 ERROR
> [org.ovirt.engine.core.vdsbroker.vdsbroker.SpmStartVDSCommand]
> (QuartzScheduler_Worker-96) [6d6dbe98] Start SPM Task failed - result: clea
> nSuccess, message: VDSGenericException: VDSErrorException: Failed in
> vdscommand to HSMGetTaskStatusVDS, error = BlockSD master file system
> FSC

Re: [Users] l,

2013-03-20 Thread Yeela Kaplan
Hi Eduardo,
It is not currently supported in oVirt.
Please see the following thread:
http://www.mail-archive.com/users@ovirt.org/msg05989.html

- Original Message -
> From: "Eduardo Ramos" 
> To: users@ovirt.org
> Sent: Wednesday, March 20, 2013 5:00:01 PM
> Subject: [Users] l,
> 
> 
> 
> 
> Hi all.
> 
> I'd like to know if there is a way to resize VM disk based on a iscsi
> domain.
> 
> 


Re: [Users] how you all read vdsm code?

2013-02-28 Thread Yeela Kaplan
I use netbeans,
but you can also use eclipse with pydev plugin,
and also vim as Antoni suggested.

- Original Message -
> From: "Antoni Segura Puimedon" 
> To: "bigclouds" 
> Cc: users@ovirt.org
> Sent: Thursday, February 28, 2013 1:42:04 PM
> Subject: Re: [Users] how you all read vdsm code?
> 
> I personally just use:
> 
> vim with the following plugins:
> 
> pathogen
> |-fugitive
> |-minibufexplorer
> |-syntastic
> |-nerdcommenter
> 
> Best,
> 
> Toni
> 
> - Original Message -
> > From: "bigclouds" 
> > To: users@ovirt.org
> > Sent: Thursday, February 28, 2013 12:38:42 PM
> > Subject: [Users] how you all read vdsm code?
> > 
> > 
> > 
> > Hi,
> > vdsm is not an IDE project; I imported the code into Eclipse, but its
> > dependencies
> > give errors (they are not parsed automatically).
> > Developers, how do you all write the code?
> > Which IDE is suitable?
> > Thanks
> > 
> > 
> > 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] default mutipath.conf config for fedora 18 invalid

2013-01-24 Thread Yeela Kaplan
Hi,
I've tested the new patch on fedora 18 vdsm host (created iscsi storage domain, 
attached, activated) and it works well.
Even though multipath.conf no longer uses getuid_callout to recognize the 
device's wwid,
it still knows how to deal with the attribute's existence in the conf file when 
running multipath command (only output is to stdout which we don't use anyway, 
stderr empty and rc=0). 
The relevant patch is: http://gerrit.ovirt.org/#/c/10824/

Yeela

- Original Message -
> From: "Ayal Baron" 
> To: "Gianluca Cecchi" 
> Cc: "users" , "Dan Kenigsberg" , "Yeela 
> Kaplan" 
> Sent: Wednesday, January 23, 2013 7:51:28 PM
> Subject: Re: [Users] default mutipath.conf config for fedora 18 invalid
> 
> 
> 
> - Original Message -
> > On Wed, Jan 23, 2013 at 4:41 PM, Yeela Kaplan  wrote:
> > > Yes, you need a different DC and host for iSCSI SDs.
> > 
> > Possibly I can test tomorrow adding another host that should go
> > into
> > the same DC but I can temporarily put it in another newly created
> > iSCSI DC for testing.
> > What is the workflow when I have a host in a DC and then I want to
> > put
> > it into another one, in general and when the two DCs have
> > configured
> > different SD types?
> > 
> 
> As long as the host has visibility to the target storage domains, all
> you need to do is put the host in maintenance and then edit it and
> change the cluster/dc it belongs to.
> 


Re: [Users] default mutipath.conf config for fedora 18 invalid

2013-01-23 Thread Yeela Kaplan


- Original Message -
> From: "Gianluca Cecchi" 
> To: "Yeela Kaplan" 
> Cc: "users" , "Ayal Baron" , "Dan 
> Kenigsberg" 
> Sent: Wednesday, January 23, 2013 4:05:34 PM
> Subject: Re: [Users] default mutipath.conf config for fedora 18 invalid
> 
> On Wed, Jan 23, 2013 at 2:34 PM,  wrote:
> > Hi Gianluca,
> > I was wondering if you could help me verify this issue.
> > Do you still have this fedora 18 setup?
> > Can you check if it's possible to add an iscsi storage domain with
> > the current vdsm multipath.conf?
> > And also if we remove getuid_callout from multipath.conf can you
> > add a storage domain then?
> >
> > Thanks,
> > Yeela
> 
> I can confirm that the environment is still present.
> It is the same as at this other thread:
> http://lists.ovirt.org/pipermail/users/2013-January/011593.html
> 
> I confirm that after
> - removing getuid_callout entry
> - blacklisting devices that was part of clustered volumes
> (pre-existing CentOS 6.3 + KVM nodes I'm migrating to oVirt)
> 
> I was able to successfully create FCP storage domain (see myself
> follow up of thread above)
> 

Thanks
> To add iSCSI domain I need to have another host, correct? As I can't
> mix FCP and iSCSI domain types for the same host/cluster
> 

Yes, you need a different DC and host for iSCSI SDs.
> Gianluca
> 


Re: [Users] default mutipath.conf config for fedora 18 invalid

2013-01-23 Thread Yeela Kaplan
Hi Gianluca,
I was wondering if you could help me verify this issue.
Do you still have this fedora 18 setup?
Can you check if it's possible to add an iscsi storage domain with the current 
vdsm multipath.conf?
And also if we remove getuid_callout from multipath.conf can you add a storage 
domain then?

Thanks,
Yeela 

- Original Message -
> From: "Dan Kenigsberg" 
> To: "Gianluca Cecchi" , "Yeela Kaplan" 
> 
> Cc: "users" , "Ayal Baron" 
> Sent: Wednesday, January 16, 2013 9:46:05 AM
> Subject: Re: [Users] default mutipath.conf config for fedora 18 invalid
> 
> On Wed, Jan 16, 2013 at 01:22:38AM +0100, Gianluca Cecchi wrote:
> > Hello,
> > configuring All-In-One on Fedora 18 puts these lines in
> > multipath.conf
> > (at least on ovrt-njghtly for f18 of some days ago)
> > 
> > # RHEV REVISION 0.9
> > ...
> > defaults {
> > polling_interval5
> > getuid_callout  "/lib/udev/scsi_id --whitelisted
> > --device=/dev/%n"
> > ...
> > device {
> > vendor  "HITACHI"
> > product "DF.*"
> > getuid_callout  "/lib/udev/scsi_id --whitelisted
> > --device=/dev/%n"
> > ...
> > 
> > Actually Fedora 18 has device-mapper-multipath 0.49 without
> > getuid_callout;
> > from changelog:
> > 
> > multipath no longer uses the getuid callout.  It now gets the
> >   wwid from the udev database or the environment variables
> > 
> > so the two getuid_callouts lines have to be removed for f18
> > 
> > multipath -l gives
> > 
> > Jan 16 00:30:15 | multipath.conf +5, invalid keyword:
> > getuid_callout
> > Jan 16 00:30:15 | multipath.conf +18, invalid keyword:
> > getuid_callout
> > 
> > I think it has to be considered.
> 
> Hmm, it seems that the title of "Bug 886087 - Rest query add storage
> domain fails on fedora18: missing /sbin/scsi_id" is inaccurate.
> 
> I've marked the bug as an ovirt-3.2 blocker, and nacked the patch
> that
> attempts to fix it http://gerrit.ovirt.org/#/c/10824/
> 
> Dan.
> 
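
For reference, a hedged sketch of what the vdsm-generated stanzas quoted above would look like on Fedora 18 with the obsolete getuid_callout keyword dropped (illustrative only; vdsm regenerates this file unless it is marked private, and only keys shown in the thread are included):

```
# RHEV REVISION 0.9

defaults {
    polling_interval    5
}

device {
    vendor  "HITACHI"
    product "DF.*"
}
```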


Re: [Users] local variable 'volPath' referenced before assignment

2013-01-10 Thread Yeela Kaplan


- Original Message -
> From: "Frank Wall" 
> To: users@ovirt.org
> Sent: Wednesday, January 9, 2013 6:54:22 PM
> Subject: Re: [Users] local variable 'volPath' referenced before assignment
> 
> Hi Yeela,
> 
> On Tue, Jan 08, 2013 at 12:39:08PM -0500, Yeela Kaplan wrote:
> > Can you tell if the vdsm version installed on your host includes
> > this patch?
> > (you can check under /usr/share/vdsm/clientIF.py).
> 
> well, I'm not sure if this patch is included in my version, but
> according to
> the output of diff is seems that it is actually NOT included:
> 
> --- clientIF_new-617e328d546570a94e4357b3802a062e6a7610cb.py
>2012-08-08 14:52:28.0 +0200
> +++ /usr/share/vdsm/clientIF.py 2012-10-04 22:46:42.0 +0200
> 
> [...skipping other differences...]
> 
> @@ -289,15 +255,11 @@
>  if drive['device'] == 'cdrom':
>  volPath =
>  supervdsm.getProxy().mkIsoFs(vmId,
>  files)
>  elif drive['device'] == 'floppy':
> -volPath = \
> -
>   supervdsm.getProxy().mkFloppyFs(vmId,
> files)
> +volPath =
> supervdsm.getProxy().mkFloppyFs(vmId, files)
>  
> -elif "path" in drive:
> +elif drive.has_key("path"):
>  volPath = drive['path']
>  
> -else:
> -raise vm.VolumeError(drive)
> -

Frank, it looks like you don't have the patch inside your version of vdsm.
Please add it and see if it solves the problem.

>  # For BC sake: None as argument
>  elif not drive:
>  volPath = drive
> 
> 
> Apparently the part from the fix with "raise vm.VolumeError(drive)"
> is missing,
> although I'm running a newer version of vdsm. According to the bug
> report at
> https://bugzilla.redhat.com/show_bug.cgi?id=843387
> the fix should be in vdsm-4.9.6-29.0 (RHEL6), while I'm running
> vdsm-4.10.0-10.fc17.x86_64:
> 
> # rpm -q --whatprovides /usr/share/vdsm/clientIF.py
> vdsm-4.10.0-10.fc17.x86_64
> 
> I must admit that this is oVirt on FC17 and not RHEV on RHEL, so this
> may
> explain the different versions of vdsm.
> 
> > If it's in there please send the full logs (engine+vdsm) and the
> > bug might
> > need to be reopened, otherwise you can just upgrade vdsm and
> > hopefully it
> > would solve the problem.
> 
> I've attached the full logs. It contains all log entries from
> activating
> the ovirt node until trying to start the VM (both engine+vdsm).
> 
> 
> Thanks
> - Frank
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Best practice to resize a WM disk image

2013-01-09 Thread Yeela Kaplan


- Original Message -
> From: "Karli Sjöberg" 
> To: "Yeela Kaplan" 
> Cc: "Rocky" , Users@ovirt.org
> Sent: Wednesday, January 9, 2013 4:30:35 PM
> Subject: Re: [Users] Best practice to resize a WM disk image
> 
> Wed 2013-01-09 at 09:13 -0500, Yeela Kaplan wrote:
> 
> - Original Message -
> > From: "Karli Sjöberg" < karli.sjob...@slu.se >
> > To: "Yeela Kaplan" < ykap...@redhat.com >
> > Cc: "Rocky" < rockyba...@gmail.com >, Users@ovirt.org
> > Sent: Wednesday, January 9, 2013 1:56:32 PM
> > Subject: Re: [Users] Best practice to resize a WM disk image
> > 
> > Tue 2013-01-08 at 11:03 -0500, Yeela Kaplan wrote:
> > 
> > So, first of all, you should know that resizing a disk is not yet
> > supported in oVirt.
> > If you decide that you must use it anyway, you should know in
> > advance
> > that it's not recommended,
> > and that your data is at risk when you perform these kind of
> > actions.
> > 
> > There are several ways to perform this.
> > One of them is to create a second (larger) disk for the vm,
> > run the vm from live cd and use dd to copy the first disk contents
> > into the second one,
> > and finally remove the first disk and make sure that the new disk
> > is
> > configured as your system disk.
> > Here you guide for the dd operation
> > to be done from within the guest system, but booted from live.
> > Can this be done directly from the NFS storage itself instead?
> > 
> 
> Karli, it can be done by using dd (or rsync), when your source is the
> volume of the current disk image
> and the destination is the volume of the new disk image created.
> You just have to find the images in the internals of the vdsm host,
> which is a bit more tricky
> and can cause more damage if done wrong.
> 
> You mean since the VMs and disks are called like
> "c3dbfb5f-7b3b-4602-961f-624c69618734" you have to query the API to
> figure out what's what, but other than that, you're saying it'll "just
> work", so that's good to know, since I think letting the storage itself
> do the dd copy locally is going to be much much faster than through the
> VM, over the network.
> Thanks!
> Will it matter if the disks are "Thin Provision" or "Preallocated"?
> 
> 

As long as it's done on the base volume it doesn't matter.

> 
> > 
> > 
> > The second, riskier, option is to export the vm to an export
> > domain,
> > resize the image volume size to the new larger size using qemu-img
> > and also modify the vm's metadata in its ovf,
> > as you can see this option is more complicated and requires deeper
> > understanding and altering of the metadata...
> > finally you'll need to import the vm back.
> > 
> > 
> > 
> > - Original Message -
> > > From: "Rocky" < rockyba...@gmail.com >
> > > To: "Yeela Kaplan" < ykap...@redhat.com >
> > > Cc: Users@ovirt.org
> > > Sent: Tuesday, January 8, 2013 11:30:00 AM
> > > Subject: Re: [Users] Best practice to resize a WM disk image
> > > 
> > > Its just a theoretical question as I think the issue will come
> > > for
> > > us
> > > and other users.
> > > 
> > > I think there can be one or more snapshots in the WM over the
> > > time.
> > > But
> > > if that is an issue we can always collapse them I think.
> > > If its a base image it should be RAW, right?
> > > In this case its on file storage (NFS).
> > > 
> > > Regards //Ricky
> > > 
> > > On 2013-01-08 10:07, Yeela Kaplan wrote:
> > > > Hi Ricky,
> > > > In order to give you a detailed answer I need additional
> > > > details
> > > > regarding the disk:
> > > > - Is the disk image composed as a chain of volumes or just a
> > > > base
> > > > volume?
> > > > (if it's a chain it will be more complicated, you might want to
> > > > collapse the chain first to make it easier).
> > > > - Is the disk image raw? (you can use qemu-img info to check)
> > > > - Is the disk image on block or file storage?
> > > >
> > > > Regards,
> > > > Yeela
> > > >
> > > > - Original Message -
> > > >> From: "Ricky" < rockyba...@gmail.com >
> > >> To: Users@ovirt.org

Re: [Users] Best practice to resize a WM disk image

2013-01-09 Thread Yeela Kaplan


- Original Message -
> From: "Karli Sjöberg" 
> To: "Yeela Kaplan" 
> Cc: "Rocky" , Users@ovirt.org
> Sent: Wednesday, January 9, 2013 1:56:32 PM
> Subject: Re: [Users] Best practice to resize a WM disk image
> 
> Tue 2013-01-08 at 11:03 -0500, Yeela Kaplan wrote:
> 
> So, first of all, you should know that resizing a disk is not yet
> supported in oVirt.
> If you decide that you must use it anyway, you should know in advance
> that it's not recommended,
> and that your data is at risk when you perform these kind of actions.
> 
> There are several ways to perform this.
> One of them is to create a second (larger) disk for the vm,
> run the vm from live cd and use dd to copy the first disk contents
> into the second one,
> and finally remove the first disk and make sure that the new disk is
> configured as your system disk. 
> Here you guide for the dd operation
> to be done from within the guest system, but booted from live.
> Can this be done directly from the NFS storage itself instead?
> 

Karli, it can be done by using dd (or rsync), when your source is the volume of 
the current disk image
and the destination is the volume of the new disk image created.
You just have to find the images in the internals of the vdsm host, which is a 
bit more tricky
and can cause more damage if done wrong.

> 
> 
> The second, riskier, option is to export the vm to an export domain,
> resize the image volume size to the new larger size using qemu-img
> and also modify the vm's metadata in its ovf,
> as you can see this option is more complicated and requires deeper
> understanding and altering of the metadata...
> finally you'll need to import the vm back.
> 
> 
> 
> - Original Message -
> > From: "Rocky" < rockyba...@gmail.com >
> > To: "Yeela Kaplan" < ykap...@redhat.com >
> > Cc: Users@ovirt.org
> > Sent: Tuesday, January 8, 2013 11:30:00 AM
> > Subject: Re: [Users] Best practice to resize a WM disk image
> > 
> > Its just a theoretical question as I think the issue will come for
> > us
> > and other users.
> > 
> > I think there can be one or more snapshots in the WM over the time.
> > But
> > if that is an issue we can always collapse them I think.
> > If its a base image it should be RAW, right?
> > In this case its on file storage (NFS).
> > 
> > Regards //Ricky
> > 
> > On 2013-01-08 10:07, Yeela Kaplan wrote:
> > > Hi Ricky,
> > > In order to give you a detailed answer I need additional details
> > > regarding the disk:
> > > - Is the disk image composed as a chain of volumes or just a base
> > > volume?
> > > (if it's a chain it will be more complicated, you might want to
> > > collapse the chain first to make it easier).
> > > - Is the disk image raw? (you can use qemu-img info to check)
> > > - Is the disk image on block or file storage?
> > >
> > > Regards,
> > > Yeela
> > >
> > > - Original Message -
> > >> From: "Ricky" < rockyba...@gmail.com >
> > >> To: Users@ovirt.org
> > >> Sent: Tuesday, January 8, 2013 10:40:27 AM
> > >> Subject: [Users] Best practice to resize a WM disk image
> > >>
> > >> Hi,
> > >>
> > >> If I have a VM that has run out of disk space, how can I
> > >> increase
> > >> the
> > >> space in best way? One way is to add a second bigger disk to the
> > >> WM
> > >> and then use dd or similar to copy. But is it possible to
> > >> stretch
> > >> the
> > >> original disk inside or outside oVirt and get oVirt to know the
> > >> bigger
> > >> size?
> > >>
> > >> Regards //Ricky
> > >> ___
> > >> Users mailing list
> > >> Users@ovirt.org
> > >> http://lists.ovirt.org/mailman/listinfo/users
> > 
> > 
> ___
> Users mailing list Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] local variable 'volPath' referenced before assignment

2013-01-08 Thread Yeela Kaplan
Hi Frank,
It looks like the same issue as in the bug,
the bug also references a Change-Id for a fix: 
I8ad50c3a3485812f57800bbe6b7318a90fe5b962
and you can also access this patch in the following link: 
http://gerrit.ovirt.org/#/c/6794/2

Can you tell if the vdsm version installed on your host includes this patch?
(you can check under /usr/share/vdsm/clientIF.py).
If it's in there please send the full logs (engine+vdsm) and the bug might need 
to be reopened,
otherwise you can just upgrade vdsm and hopefully it would solve the problem.

Regards,
Yeela

- Original Message -
> From: "Frank Wall" 
> To: users@ovirt.org
> Sent: Tuesday, January 8, 2013 7:02:50 PM
> Subject: [Users] local variable 'volPath' referenced before assignment
> 
> Hi,
> 
> I've updated my oVirt engine and node from version 3.1.0-2 to the
> more recent 3.1.0-4. As far as I can tell from the update log, the
> engine update went fine by following these instructions:
> https://www.rvanderlinden.net/wordpress/ovirt/engine-installation/engine-upgrade/
> 
> Now I'm unable to start a VM through the Admin Portal. It fails with
> the following error message:
> 
> VM test_srv is down. Exit message: local variable 'volPath'
> referenced before assignment.
> 
> What's wrong? It seems this message is related to this bug report:
> https://bugzilla.redhat.com/show_bug.cgi?id=843387
> But apparently there is no solution.
> 
> What is the fix or workaround to get VMs working again?
> 
> 
> Thanks
> - Frank
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Best practice to resize a WM disk image

2013-01-08 Thread Yeela Kaplan
So, first of all, you should know that resizing a disk is not yet supported in 
oVirt.
If you decide that you must use it anyway, you should know in advance that it's 
not recommended,
and that your data is at risk when you perform these kind of actions.

There are several ways to perform this.
One of them is to create a second (larger) disk for the vm, 
run the vm from live cd and use dd to copy the first disk contents into the 
second one,
and finally remove the first disk and make sure that the new disk is configured 
as your system disk.
The second, riskier, option is to export the vm to an export domain,
resize the image volume size to the new larger size using qemu-img and also 
modify the vm's metadata in its ovf,
as you can see this option is more complicated and requires deeper 
understanding and altering of the metadata...
finally you'll need to import the vm back.
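
A minimal sketch of the first option's core copy step, assuming plain files as stand-ins for the guest's old and new block devices (the real devices would be whatever the live CD exposes; /dev/vda and /dev/vdb are hypothetical names, not anything oVirt guarantees):

```shell
#!/bin/sh
# Sketch of option 1: clone the old (smaller) disk onto a new, larger one.
# Plain files stand in for the guest's block devices so this is safe to run.
set -eu
work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT

old="$work/old_disk"; new="$work/new_disk"
printf 'bootloader+os+data' > "$old"   # 18 bytes of pretend disk content
truncate -s 64 "$new"                  # the replacement disk is larger

# Copy the old disk's contents onto the start of the new disk, keeping the
# new disk's full (larger) size intact.
dd if="$old" of="$new" conv=notrunc bs=1M status=none

# The leading bytes of the new disk now match the old disk; the extra space
# is free to be claimed by the guest's partitioning tools afterwards.
head -c 18 "$new" | cmp -s - "$old" && echo "contents copied"
```

On real devices the same dd invocation applies, with the partition/filesystem grown inside the guest afterwards.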



- Original Message -
> From: "Rocky" 
> To: "Yeela Kaplan" 
> Cc: Users@ovirt.org
> Sent: Tuesday, January 8, 2013 11:30:00 AM
> Subject: Re: [Users] Best practice to resize a WM disk image
> 
> Its just a theoretical question as I think the issue will come for us
> and other users.
> 
> I think there can be one or more snapshots in the WM over the time.
> But
> if that is an issue we can always collapse them I think.
> If its a base image it should be RAW, right?
> In this case its on file storage (NFS).
> 
> Regards //Ricky
> 
> On 2013-01-08 10:07, Yeela Kaplan wrote:
> > Hi Ricky,
> > In order to give you a detailed answer I need additional details
> > regarding the disk:
> > - Is the disk image composed as a chain of volumes or just a base
> > volume?
> > (if it's a chain it will be more complicated, you might want to
> > collapse the chain first to make it easier).
> > - Is the disk image raw? (you can use qemu-img info to check)
> > - Is the disk image on block or file storage?
> >
> > Regards,
> > Yeela
> >
> > - Original Message -
> >> From: "Ricky" 
> >> To: Users@ovirt.org
> >> Sent: Tuesday, January 8, 2013 10:40:27 AM
> >> Subject: [Users] Best practice to resize a WM disk image
> >>
> >> Hi,
> >>
> >> If I have a VM that has run out of disk space, how can I increase
> >> the
> >> space in best way? One way is to add a second bigger disk to the
> >> WM
> >> and then use dd or similar to copy. But is it possible to stretch
> >> the
> >> original disk inside or outside oVirt and get oVirt to know the
> >> bigger
> >> size?
> >>
> >> Regards //Ricky
> >> ___
> >> Users mailing list
> >> Users@ovirt.org
> >> http://lists.ovirt.org/mailman/listinfo/users
> >>
> 
> 


Re: [Users] Best practice to resize a WM disk image

2013-01-08 Thread Yeela Kaplan
Hi Ricky,
In order to give you a detailed answer I need additional details regarding the 
disk:
- Is the disk image composed as a chain of volumes or just a base volume? 
(if it's a chain it will be more complicated, you might want to collapse the 
chain first to make it easier).
- Is the disk image raw? (you can use qemu-img info to check)
- Is the disk image on block or file storage?

Regards, 
Yeela

- Original Message -
> From: "Ricky" 
> To: Users@ovirt.org
> Sent: Tuesday, January 8, 2013 10:40:27 AM
> Subject: [Users] Best practice to resize a WM disk image
> 
> Hi,
> 
> If I have a VM that has run out of disk space, how can I increase the
> space in best way? One way is to add a second bigger disk to the WM
> and then use dd or similar to copy. But is it possible to stretch the
> original disk inside or outside oVirt and get oVirt to know the
> bigger
> size?
> 
> Regards //Ricky
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 


Re: [Users] Storage domain issue iSCSI

2012-12-05 Thread Yeela Kaplan
Simon,
I need to see the logs at the same time from both engine and vdsm,
they are a few days apart.
Can you get me the right logs?
thanks.

- Original Message -
> From: "Simon Donnellan" 
> To: "Yeela Kaplan" 
> Cc: users@ovirt.org
> Sent: Tuesday, December 4, 2012 12:02:08 AM
> Subject: Re: [Users] Storage domain issue iSCSI
> 
> Hi Yeela,
> 
> Attached are the engine logs.
> 
> Kind Regards
> 
> Simon
> 
> 
> 
> 
> On Sun, Dec 2, 2012 at 4:46 PM, Yeela Kaplan < ykap...@redhat.com >
> wrote:
> 
> 
> Just a clarification, the UUID of the pool is
> f1b40ecc-b6a9-44e7-92cb-0fdf445e3175
> and the UUID of the msd it is looking for is
> 68d8b0e2-c348-4cfe-a896-08c62d491dfb (according to the logs you sent
> me).
> The problem really is in the msd version (Thanks Shu) but I need more
> details in order to solve the issue,
> can you also attach the engine logs?
> thanks.
> 
> 
> - Original Message -
> > From: "Simon Donnellan" < f...@baconwho.re >
> 
> 
> > To: "Shu Ming" < shum...@linux.vnet.ibm.com >
> > Cc: "Yeela Kaplan" < ykap...@redhat.com >, users@ovirt.org
> > Sent: Sunday, December 2, 2012 5:55:00 PM
> > Subject: Re: [Users] Storage domain issue iSCSI
> > 
> > Hi Yeela, Shu,
> > 
> > Many thanks for your replies.
> > 
> > I'm aware of the one type rule, the two NFS shares you noticed are
> > the ISO share and an export store. (I'm unable to find a way to
> > create these as iSCSI type)
> > 
> > The UUID of the master iSCSI domain is f1b40ecc-b6a9-44e7-
> > 92cb-0fdf445e3175
> > 
> > it's name in the gui is "512gb2"
> > 
> > I too believe there is a meta data corruption, is there any way to
> > get my systems back up and running?
> > 
> > Kind Regards
> > 
> > Simon
> > 
> > 
> > 
> > 
> > On Sun, Dec 2, 2012 at 3:13 PM, Shu Ming <
> > shum...@linux.vnet.ibm.com
> > > wrote:
> > 
> > 
> > 
> > 
> > I think the error is clear. Engine was expecting a master storage
> > domain metadata format version 3, while the master storage metadata
> > gave version 4. I am wondering if the master storage domain
> > metadata
> > was corrupted during the power off.
> > 
> > See: the error came from.
> > Thread-54::ERROR::2012-11-29
> > 20:06:02,491::sp::1532::Storage.StoragePool::(getMasterDomain)
> > Requested master domain 68d8b0e2-c348-4cfe-a896-08c62d491dfb does
> > not have expected version 3 it is version 4
> > 
> > See: 'MASTER_VERSION=4' below:
> > 
> > Thread-49::DEBUG::2012-11-29
> > 20:05:58,337::persistentDict::234::Storage.PersistentDict::(refresh)
> > read lines (VGTagMetadataRW)=['VERSION=2',
> > u'PV0=pv:36001405c2f5e9d2d3be7d41a8db27dd6,uuid:b62d1B-zFVl-LKrH-fekH-vprs-znOZ-IF6jJy,pestart:0,pecount:4093,mapoffset:0',
> > 'TYPE=ISCSI', 'LOGBLKSIZE=512',
> > 'SDUUID=68d8b0e2-c348-4cfe-a896-08c62d491dfb', 'LEASERETRIES=3',
> > 'LOCKRENEWALINTERVALSEC=5', 'LOCKPOLICY=', 'PHYBLKSIZE=512',
> > 'VGUUID=q7sGQ7-1G03-s9mh-dIyx-NSmo-P0zk-23c3Ad',
> > 'DESCRIPTION=512gb2', 'CLASS=Data',
> > 'POOL_UUID=f1b40ecc-b6a9-44e7-92cb-0fdf445e3175',
> > 'IOOPTIMEOUTSEC=10', 'LEASETIMESEC=60', 'MASTER_VERSION=4' ,
> > 'ROLE=Master', 'POOL_DESCRIPTION=UB1',
> > u'POOL_DOMAINS=afec8026-ccac-4366-bb4b-2150d8731e4c:Active,c2b01420-fc73-4ccc-a560-3e1c5aa28a9f:Active,45fa93a8-1761-4522-bafb-c5d3ab45f731:Attached,68d8b0e2-c348-4cfe-a896-08c62d491dfb:Active',
> > 'POOL_SPM_LVER=541',
> > '_SHA_CKSUM=d54ce32f30c8040449f2a91ccc6e115e35894a5e',
> > 'POOL_SPM_ID=-1']
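
As a hedged aside, the version mismatch is visible directly in such a metadata dump; a small sketch that pulls MASTER_VERSION out of an abridged stand-in metadata string (the string below is shortened from the log excerpt, and a real check would read the domain metadata instead):

```shell
#!/bin/sh
# Extract MASTER_VERSION from a vdsm metadata line like the one quoted above.
md="'SDUUID=68d8b0e2-c348-4cfe-a896-08c62d491dfb', 'MASTER_VERSION=4', 'ROLE=Master'"
ver=$(printf '%s\n' "$md" | grep -o "MASTER_VERSION=[0-9]*" | cut -d= -f2)
echo "master version: $ver"

expected=3   # what engine expected, per the getMasterDomain error
[ "$ver" -eq "$expected" ] || echo "mismatch: domain reports $ver, engine expects $expected"
```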
> > 
> > 
> > 2012-11-30 4:08, Simon Donnellan:
> > 
> > 
> > 
> > 
> > Hi Yeela,
> > 
> > Thanks for the reply, I've attached a vdsm.log file containing an
> > attempt to activate the host, then activate the iSCSI storage
> > domain.
> > 
> > Thanks
> > 
> > Simon
> > 
> > 
> > 
> > 
> > On Thu, Nov 29, 2012 at 6:03 PM, Yeela Kaplan < ykap...@redhat.com
> > >
> > wrote:
> > 
> > 
> > Hi Simon,
> > We could use some more information in order to understand the
> > problem,
> > could you please attach the vdsm logs?
> > Thanks,
> > Yeela

Re: [Users] VM has paused due to unknown storage error

2012-12-05 Thread Yeela Kaplan
Hi David,
You can also check the libvirt log.
Please send me the vdsm and libvirt logs so I can look at it too.
thanks.

- Original Message -
> From: "David Wilson" 
> To: "Dan Kenigsberg" 
> Cc: "Yeela Kaplan" , users@ovirt.org
> Sent: Monday, December 3, 2012 5:51:26 PM
> Subject: Re: [Users] VM has paused due to unknown storage error
> 
> 
> Hi everyone,
> 
> Thank you for your responses.
> 
> Yeela:
> >When oVirt started the vm it mounted the corrupt disk image that
> >seemed fine, but it couldn't find the OS because of the corrupted
> >fs, and the error caused it to pause the guest.
> 
> To clear things up it was the /var mount that had the corrupted
> filesystem, not the guest's root filesystem.
> oVirt still continued to pause the guest even when I had booted the
> guest off a CentOS DVD ISO, run rescue mode and manually activated
> the /var logical volume. In this case I did not activate or mount
> the guests root filesystem and only activated the guests /var
> filesystem so that I could fsck it. fsck would run for around 5-10
> minutes with the message "Deleting orphaned inode.." and then
> oVirt would simply pause the entire guest.
> The only information I could find was on the physical host's
> vdsm.log, which specified the following:
> libvirtEventLoop:: INFO::2012-12-02
> 09:05:50,296::libvirtvm::1965::vm.Vm::(_onAbnormalStop)
> vmId=`23b9212c-1e25-4003-aa18-b1e819bf6bb1`::abnormal vm stop device
> ide0-0-1 error eother
> 
> Perhaps there was another log I should have examined to see if more
> information was provided about why oVirt was pausing the guest?
> 
> 
> 
> Shu:
> This is what I did to dd the images off, and to work around the
> problem:
> 1.) On the physical host: Created an NFS mount to another
> temporary Linux system that had sufficient storage for the 500GB
> filesystem.
> 2.) On the physical host: Used 'dd' to dump the /var filesystem's
> logical volume to an image file via NFS on the temporary Linux
> system.
> 3.) On the temporary Linux system that now contained the filesystem
> image file, I ran "qemu-img info" and noticed that the filesystem
> image was qcow2 type and specified a backing file.
> 4.) On the physical host: Used 'dd' to dump the logical volume
> specified as a backing file, to an image file via NFS on the
> temporary Linux system.
> 5.) On the temporary Linux system: Used 'qemu-img rebase' to
> change the backing file to the local copy of the backing file
> image.
> 6.) On the temporary Linux system: Used 'qemu-img commit' to commit
> the changes stored in the filesystem image file to the backing file
> image.
> 7.) On the temporary Linux system: Used 'qemu-img convert' to
> convert the backing file image to raw format.
> 8.) On the temporary Linux system: Used 'losetup', 'kpartx' and
> 'fsck' to repair the backing file image. Fsck displayed the same
> 'Deleting orphaned inode' message but managed to continue and
> completed ok.
> 9.) On the temporary Linux system: Mounted the loop filesystem and
> confirmed that the data was intact and was current.
> 10.) In the oVirt GUI: Deactivated the faulty Virtual Disk
> attached to the guest.
> 11.) In the oVirt GUI: Created a new 'preallocated' Virtual Disk of
> sufficient size for the guest.
> 12.) On the physical host: Used 'dd' to upload the raw backing
> file image from (7) to the new logical volume.
> 13.) I then configured the guest to boot from the CentOS DVD ISO
> into rescue mode to confirm that the logical volume for the guest's
> /var filesystem was accessible and mountable.
> 14.) Reconfigured the guest to boot from its primary Virtual Disk
> and started up the guest.
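
The dd round-trip in the recovery steps quoted above can be sketched as a hedged outline; plain files in a temp directory stand in for the logical volumes and the NFS mount so the flow can actually run, and the qemu-img/losetup repair steps appear only as comments since they need the real images:

```shell
#!/bin/sh
# Hedged sketch of the dd export -> repair -> import round-trip. Real usage
# targets LV device paths and an NFS mount; stand-in files are used here.
set -eu
work=$(mktemp -d)
trap 'rm -rf "$work"' EXIT

lv="$work/var_lv"               # stand-in for the damaged /var logical volume
nfs="$work/nfs"; mkdir "$nfs"   # stand-in for the NFS-mounted scratch space
printf 'guest /var data' > "$lv"

# Steps 2/4: dump the volume(s) to image files "over NFS".
dd if="$lv" of="$nfs/var.img" bs=1M status=none

# Steps 3, 5-8 would operate on the copies, e.g. (not executed here):
#   qemu-img info var.img                 # shows qcow2 + backing file
#   qemu-img rebase -b base.img var.img   # point at the local backing copy
#   qemu-img commit var.img               # fold changes into the base
#   qemu-img convert -O raw base.img base.raw
#   losetup/kpartx/fsck on base.raw       # offline filesystem repair

# Step 12: write the (repaired) raw image back to the new logical volume.
newlv="$work/new_lv"
dd if="$nfs/var.img" of="$newlv" bs=1M status=none
cmp -s "$lv" "$newlv" && echo "round-trip ok"
```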
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> Get important Linux and industry-related news at: facebook.com/dcdata
> 
> Kind regards,
> 
> David Wilson
> CNS,CLS, LINUX+, CLA, DCTS, LPIC3
> LinuxTech CC t/a DcData
> CK number: 2001/058368/23
> 
>   Website:http://www.dcdata.co.za
>   Support:+27(0)860-1-LINUX
>   Mobile: +27(0)824147413
>   Tel:+27(0)333446100
>   Fax:+27(0)866878971
> 
> On 12/03/2012 01:07 PM, Dan Kenigsberg wrote:
> 
> 
> On Mon, Dec 03, 2012 at 04:37:01AM -0500, Yeela Kaplan wrote:
> 
> Glad to hear it worked out.
> 
> When oVirt started the vm it mounted the corrupt disk image that
> seemed fine,
> but it couldn't find the OS because of the corrupted fs,
> and the error cause

Re: [Users] VM has paused due to unknown storage error

2012-12-03 Thread Yeela Kaplan
Glad to hear it worked out.

When oVirt started the vm it mounted the corrupt disk image that seemed fine,  
but it couldn't find the OS because of the corrupted fs,
and the error caused it to pause the guest.

- Original Message -
> From: "David Wilson" 
> To: users@ovirt.org
> Sent: Monday, December 3, 2012 10:55:37 AM
> Subject: Re: [Users] VM has paused due to unknown storage error
> 
> 
> 
> Hi everyone,
> 
> Some feedback on this error, and a question I hope that someone
> could possibly assist with.
> I managed to get the guest running again by using dd and qemu-img to
> export the filesystem image and base image, convert to raw format
> and run a fsck on the looped partition.
> When mounted as a loop filesystem, fsck easily repaired it.
> 
> Any ideas why oVirt kept pausing the guest and did not allow fsck to
> repair the filesystem?
> 
> 
> 
> 
> 
> Get important Linux and industry-related news at: facebook.com/dcdata
> 
> Kind regards,
> 
> David Wilson
> CNS,CLS, LINUX+, CLA, DCTS, LPIC3
> LinuxTech CC t/a DcData
> CK number: 2001/058368/23
> 
>   Website:http://www.dcdata.co.za
>   Support:+27(0)860-1-LINUX
>   Mobile: +27(0)824147413
>   Tel:+27(0)333446100
>   Fax:+27(0)866878971
> 
> On 12/02/2012 11:17 AM, David Wilson wrote:
> 
> 
> Hi everyone,
> 
> Sunday fun :)
> 
> One of our critical Linux VMs ran out of disk space. I cleared up
> space by removing old application logs within the guest, rebooted
> the guest and was presented with the following information message
> during bootup when the filesystem ext3/4 scan runs:
> Clearing orphaned inode.
> Shortly after this message the guest is paused by oVirt and the
> "VM has paused due to unknown storage error" message is displayed in
> the oVirt web GUI.
> 
> On the physical node itself, I found the following error in
> /var/log/vdsm/vdsm.log:
> libvirtEventLoop:: INFO::2012-12-02
> 09:05:50,296::libvirtvm::1965::vm.Vm::(_onAbnormalStop)
> vmId=`23b9212c-1e25-4003-aa18-b1e819bf6bb1`::abnormal vm stop device
> ide0-0-1 error eother
> 
> Any ideas on how to get more information on what exactly the error
> is?
> 
> I've tried booting the guest into rescue mode, run a manual fsck
> on the logical volume but each time the guest is automatically
> paused due to the "unknown storage error".
> 
> We are using a FC SAN for storage, so what I've decided to try is
> export the disk using "dd" on the physical node and then try
> repairing it by mounting it as a loop filesystem on another
> physical server, and then use "dd" to import it back in again.
> 
> Is this the right approach to take to get the system back online?
> 
> 
> 
> 
> --
> 
> 
> Get important Linux and industry-related news at: facebook.com/dcdata
> 
> Kind regards,
> 
> David Wilson
> CNS,CLS, LINUX+, CLA, DCTS, LPIC3
> LinuxTech CC t/a DcData
> CK number: 2001/058368/23
> 
>   Website:http://www.dcdata.co.za
>   Support:+27(0)860-1-LINUX
>   Mobile: +27(0)824147413
>   Tel:+27(0)333446100
>   Fax:+27(0)866878971
> 
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
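
For anyone following the same recovery path later: the export / convert / loop-mount / fsck workflow David describes can be sketched roughly as below. Every path and the partition start sector are hypothetical placeholders (read the real start sector from `fdisk -l`), and the commands are only printed, not executed, so nothing touches live storage until you have reviewed them.

```shell
#!/bin/sh
# Dry-run sketch of the dd + qemu-img + loop-mount repair workflow.
# All device paths and the start sector below are placeholders.
LV=/dev/vg_storage/guest_disk     # LV (or LUN) backing the guest disk
BACKUP=/var/tmp/guest_disk.bak    # untouched safety copy -- keep it!
RAW=/var/tmp/guest_disk.raw
START_SECTOR=2048                 # first partition start, from `fdisk -l`
OFFSET=$((START_SECTOR * 512))    # byte offset for the loop mapping

echo "dd if=$LV of=$BACKUP bs=4M"                # 1. export a safety copy
echo "qemu-img convert -O raw $BACKUP $RAW"      # 2. convert to raw
echo "losetup -o $OFFSET /dev/loop0 $RAW"        # 3. map the partition
echo "fsck -f /dev/loop0"                        # 4. repair the filesystem
echo "losetup -d /dev/loop0"                     # 5. detach the loop device
echo "dd if=$RAW of=$LV bs=4M"                   # 6. import back (raw images only)
```

Running `fsck -n` first gives a read-only report before committing to any repair.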


Re: [Users] Lvm vgs failed on allinone setup

2012-12-03 Thread Yeela Kaplan
Hi Adrian,
Your domain ('033a2b87-def7-4c27-ab74-9860531f2ed4') is probably NFS.
Don't worry about the failed vgs errors, it is not the problem.
Please attach the full vdsm and engine logs so we can detect why the domain is
set to maintenance after reboot.

--
Yeela

- Original Message -
> From: "Adrian Gibanel" 
> To: users@ovirt.org
> Sent: Tuesday, November 27, 2012 6:53:50 PM
> Subject: Re: [Users] Lvm vgs failed on allinone setup
> 
> I've debugged the lvm vgs failed part. It seems to run this command:
> 
> [adrian@server ~]$ sudo vgs --noheadings --units b --nosuffix
> --separator \| -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> 033a2b87-def7-4c27-ab74-9860531f2ed4
> Volume group "033a2b87-def7-4c27-ab74-9860531f2ed4" not found
> 
> If I run the same commands without arguments I found out the current
> VGs:
> 
> [adrian@server ~]$ sudo vgs --noheadings --units b --nosuffix
> --separator \| -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> Na9Bdu-aELy-1G3m-JgcV-dBYM-PfaP-arsvUo|maquinasVG|wz--n-|1989115117568|406419668992|4194304|474242|96898||1048064|522240
> 
> So I try to run it with the uuid seen:
> [adrian@server ~]$ sudo vgs --noheadings --units b --nosuffix
> --separator \| -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> Na9Bdu-aELy-1G3m-JgcV-dBYM-PfaP-arsvUo Volume group
> "Na9Bdu-aELy-1G3m-JgcV-dBYM-PfaP-arsvUo" not found
> it says it cannot find it.
> 
> Now with the name:
> [adrian@server ~]$ sudo vgs --noheadings --units b --nosuffix
> --separator \| -o
> uuid,name,attr,size,free,extent_size,extent_count,free_count,tags,vg_mda_size,vg_mda_free
> maquinasVG
> Na9Bdu-aELy-1G3m-JgcV-dBYM-PfaP-arsvUo|maquinasVG|wz--n-|1989115117568|406419668992|4194304|474242|96898||1048064|522240
> 
> It works!
> 
> So that would maybe fix it. Although I'm not very sure because this
> is just a WARNING. And because I think it's not a good idea to
> rename current uuids for storage or iso domains (not sure which it's
> exactly) in the database just to match the LVM uuids.
> 
> Now I'm going to debug the other messages which are ERRORs and not
> warnings.
> 
> - Mensaje original -
> 
> > De: "Adrian Gibanel" 
> > Para: users@ovirt.org
> > Enviados: Martes, 27 de Noviembre 2012 17:16:50
> > Asunto: Re: [Users] Lvm vgs failed on allinone setup
> 
> > After seeing that I had many named errors I've disabled ipv6 by
> > adding -4 to OPTIONS line as described in:
> > http://www.hafizonline.net/blog/?p=164 . After I rebooted,
> > the logs regarding lvm changed a bit. Here you are:
> 
> > Nov 27 17:09:15 server vdsm Storage.LVM WARNING lvm vgs failed: 5
> > []
> > [' Volume group "c20da291-baf5-480a-b314-b775a6dde6e8" not found']
> 
> --
> 
> --
> Adrián Gibanel
> I.T. Manager
> 
> +34 675 683 301
> www.btactic.com
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
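
A note on what Adrián's experiments show: for oVirt block storage domains, vdsm creates the VG with the storage domain's UUID as the VG *name*, and `vgs` selects by name only, never by the LVM-internal VG UUID. That is why the lookup by `Na9Bdu-...` fails while the lookup by `maquinasVG` works, and why (as Yeela notes) an NFS domain UUID matches no VG at all. The two identifiers sit in the first two fields of the pipe-separated output:

```shell
# First two '|'-separated fields of the vgs output quoted above:
# the LVM-internal VG UUID, then the VG name that vgs actually selects by.
line='Na9Bdu-aELy-1G3m-JgcV-dBYM-PfaP-arsvUo|maquinasVG|wz--n-|1989115117568|406419668992|4194304|474242|96898||1048064|522240'
vg_uuid=$(printf '%s' "$line" | awk -F'|' '{print $1}')
vg_name=$(printf '%s' "$line" | awk -F'|' '{print $2}')
echo "works: vgs $vg_name"
echo "fails: vgs $vg_uuid"
```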


Re: [Users] VM has paused due to unknown storage error

2012-12-03 Thread Yeela Kaplan
It seems that your guest filesystem is corrupted.
The dd approach won't help because this is a virtual disk:
you need to fix the filesystem inside the virtual disk image, which is why 
operations on the LV itself won't help. 
You can try shutting down the vm (not paused, but literally down) and add the 
guest vm disk as
a second disk to another vm.
In the second vm try to run fsck on the disk and see if it helps.
In any case before trying to manipulate the disk image please save a backup!!!
let me know how it goes...
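
The procedure above can be sketched as a dry run. The commands are printed for review rather than executed, and every path is a hypothetical placeholder (the LV name, and the device node the disk gets inside the helper VM, depend on your setup):

```shell
#!/bin/sh
# Dry-run sketch of the backup + helper-VM fsck procedure.
# Paths below are placeholders; nothing here touches real storage.
LV=/dev/vg_storage/guest_disk     # LV holding the broken guest image
BACKUP=/var/tmp/guest_disk.bak
HELPER_DEV=/dev/vdb1              # how the partition may appear in the helper VM

echo "# 1. with the guest fully down, take a backup first:"
echo "dd if=$LV of=$BACKUP bs=4M"
echo "# 2. attach the disk as a second disk of a healthy helper VM, then inside it:"
echo "fsck -n $HELPER_DEV   # read-only check"
echo "fsck $HELPER_DEV      # actual repair"
```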

- Original Message -
> From: "David Wilson" 
> To: users@ovirt.org
> Sent: Sunday, December 2, 2012 11:17:43 AM
> Subject: [Users] VM has paused due to unknown storage error
> 
> 
> Hi everyone,
> 
> Sunday fun :)
> 
> One of our critical Linux VMs ran out of disk space. I cleared up
> space by removing old application logs within the guest, rebooted
> the guest and was presented with the following information message
> during bootup when the filesystem ext3/4 scan runs:
> Clearing orphaned inode ...
> Shortly after this message the guest is paused by oVirt and the
> "VM has paused due to unknown storage error" message is displayed in
> the oVirt web GUI.
> 
> On the physical node itself, I found the following error in
> /var/log/vdsm/vdsm.log:
> libvirtEventLoop::INFO::2012-12-02
> 09:05:50,296::libvirtvm::1965::vm.Vm::(_onAbnormalStop)
> vmId=`23b9212c-1e25-4003-aa18-b1e819bf6bb1`::abnormal vm stop device
> ide0-0-1 error eother
> 
> Any ideas on how to get more information on what exactly the error
> is?
> 
> I've tried booting the guest into rescue mode, run a manual fsck
> on the logical volume but each time the guest is automatically
> paused due to the "unknown storage error".
> 
> We are using a FC SAN for storage, so what I've decided to try is
> export the disk using "dd" on the physical node and then try
> repairing it by mounting it as a loop filesystem on another
> physical server, and then use "dd" to import it back in again.
> 
> Is this the right approach to take to get the system back online?
> 
> 
> 
> 
> --
> 
> 
> Get important Linux and industry-related news at: facebook.com/dcdata
> 
> Kind regards,
> 
> David Wilson
> CNS,CLS, LINUX+, CLA, DCTS, LPIC3
> LinuxTech CC t/a DcData
> CK number: 2001/058368/23
> 
>   Website:http://www.dcdata.co.za
>   Support:+27(0)860-1-LINUX
>   Mobile: +27(0)824147413
>   Tel:+27(0)333446100
>   Fax:+27(0)866878971
> 
> 
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Storage domain issue iSCSI

2012-12-02 Thread Yeela Kaplan
Just a clarification, the UUID of the pool is 
f1b40ecc-b6a9-44e7-92cb-0fdf445e3175
and the UUID of the msd it is looking for is 
68d8b0e2-c348-4cfe-a896-08c62d491dfb (according to the logs you sent me).
The problem really is in the msd version (Thanks Shu) but I need more details 
in order to solve the issue, 
can you also attach the engine logs?
thanks.

- Original Message -
> From: "Simon Donnellan" 
> To: "Shu Ming" 
> Cc: "Yeela Kaplan" , users@ovirt.org
> Sent: Sunday, December 2, 2012 5:55:00 PM
> Subject: Re: [Users] Storage domain issue iSCSI
> 
> Hi Yeela, Shu,
> 
> Many thanks for your replies.
> 
> I'm aware of the one type rule, the two NFS shares you noticed are
> the ISO share and an export store. (I'm unable to find a way to
> create these as iSCSI type)
> 
> The UUID of the master iSCSI domain is f1b40ecc-b6a9-44e7-
> 92cb-0fdf445e3175
> 
> it's name in the gui is "512gb2"
> 
> I too believe there is a meta data corruption, is there any way to
> get my systems back up and running?
> 
> Kind Regards
> 
> Simon
> 
> 
> 
> 
> On Sun, Dec 2, 2012 at 3:13 PM, Shu Ming < shum...@linux.vnet.ibm.com
> > wrote:
> 
> 
> 
> 
> I think the error is clear. Engine was expecting a master storage
> domain metadata format version 3, while the master storage metadata
> gave version 4. I am wondering if the master storage domain metadata
> was corrupted during the power off.
> 
> See where the error came from:
> Thread-54::ERROR::2012-11-29
> 20:06:02,491::sp::1532::Storage.StoragePool::(getMasterDomain)
> Requested master domain 68d8b0e2-c348-4cfe-a896-08c62d491dfb does
> not have expected version 3 it is version 4
> 
> See: 'MASTER_VERSION=4' below:
> 
> Thread-49::DEBUG::2012-11-29
> 20:05:58,337::persistentDict::234::Storage.PersistentDict::(refresh)
> read lines (VGTagMetadataRW)=['VERSION=2',
> u'PV0=pv:36001405c2f5e9d2d3be7d41a8db27dd6,uuid:b62d1B-zFVl-LKrH-fekH-vprs-znOZ-IF6jJy,pestart:0,pecount:4093,mapoffset:0',
> 'TYPE=ISCSI', 'LOGBLKSIZE=512',
> 'SDUUID=68d8b0e2-c348-4cfe-a896-08c62d491dfb', 'LEASERETRIES=3',
> 'LOCKRENEWALINTERVALSEC=5', 'LOCKPOLICY=', 'PHYBLKSIZE=512',
> 'VGUUID=q7sGQ7-1G03-s9mh-dIyx-NSmo-P0zk-23c3Ad',
> 'DESCRIPTION=512gb2', 'CLASS=Data',
> 'POOL_UUID=f1b40ecc-b6a9-44e7-92cb-0fdf445e3175',
> 'IOOPTIMEOUTSEC=10', 'LEASETIMESEC=60', 'MASTER_VERSION=4' ,
> 'ROLE=Master', 'POOL_DESCRIPTION=UB1',
> u'POOL_DOMAINS=afec8026-ccac-4366-bb4b-2150d8731e4c:Active,c2b01420-fc73-4ccc-a560-3e1c5aa28a9f:Active,45fa93a8-1761-4522-bafb-c5d3ab45f731:Attached,68d8b0e2-c348-4cfe-a896-08c62d491dfb:Active',
> 'POOL_SPM_LVER=541',
> '_SHA_CKSUM=d54ce32f30c8040449f2a91ccc6e115e35894a5e',
> 'POOL_SPM_ID=-1']
> 
> 
> 2012-11-30 4:08, Simon Donnellan:
> 
> 
> 
> 
> Hi Yeela,
> 
> Thanks for the reply, I've attached a vdsm.log file containing an
> attempt to activate the host, then activate the iSCSI storage
> domain.
> 
> Thanks
> 
> Simon
> 
> 
> 
> 
> On Thu, Nov 29, 2012 at 6:03 PM, Yeela Kaplan < ykap...@redhat.com >
> wrote:
> 
> 
> Hi Simon,
> We could use some more information in order to understand the
> problem,
> could you please attach the vdsm logs?
> Thanks,
> Yeela
> 
> 
> 
> - Original Message -
> > From: "Simon Donnellan" < f...@baconwho.re >
> > To: users@ovirt.org
> > Sent: Thursday, November 29, 2012 7:35:18 PM
> > Subject: [Users] Storage domain issue iSCSI
> > 
> > 
> > Hi Everyone,
> > 
> > I'm having an issue following a power cut, none of my 3 nodes (All
> > Fedora 17 / oVirt 3.1) are able to attach the master Domain.
> > 
> > in /var/log/messages I see the following on each node:
> > 
> > Nov 29 17:15:41 hades vdsm Storage.StoragePool ERROR Requested
> > master
> > domain 68d8b0e2-c348-4cfe-a896-08c62d491dfb does not have expected
> > version 3 it is version 4
> > Nov 29 17:15:41 hades vdsm TaskManager.Task ERROR
> > Task=`f06fd1bb-46d1-47d7-80ca-c2e01becdc51`::Unexpected error
> > Nov 29 17:15:41 hades vdsm Storage.Dispatcher.Protect ERROR
> > {'status': {'message': "Wrong Master domain or its version:
> > 'SD=68d8b0e2-c348-4cfe-a896-08c62d491dfb,
> > pool=f1b40ecc-b6a9-44e7-92cb-0fdf445e3175'", 'code': 324}}
> > 
> I've tried reboots/restarts/node re-installs
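
For anyone needing to confirm which master version the on-disk metadata actually records (engine expected 3, the domain reported 4), the value can be pulled straight out of the metadata fields. Below is a minimal sketch over the keys from the dump quoted above; the field names are the real vdsm metadata keys from that dump, while reading them from the domain's metadata LV on a live setup is left as an exercise for your environment:

```shell
# The vdsm storage-domain metadata keys from the dump above (shortened):
meta='SDUUID=68d8b0e2-c348-4cfe-a896-08c62d491dfb
POOL_UUID=f1b40ecc-b6a9-44e7-92cb-0fdf445e3175
MASTER_VERSION=4
ROLE=Master'
master_version=$(printf '%s\n' "$meta" | sed -n 's/^MASTER_VERSION=//p')
echo "metadata records MASTER_VERSION=$master_version"
```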

Re: [Users] Storage domain issue iSCSI

2012-12-02 Thread Yeela Kaplan
Your log indicates connection only to nfs storage:
/home/iso
/share/MD0_DATA/VMs
can you tell me their UUIDs?
Also a DC is allowed to contain only one storage type (NFS or block),
which does not fit with your question regarding the iSCSI SD, since you have 
NFS SDs attached to your DC.
Please check this again and return with more details...

- Original Message -
> From: "Simon Donnellan" 
> To: "Yeela Kaplan" 
> Cc: users@ovirt.org
> Sent: Thursday, November 29, 2012 10:08:17 PM
> Subject: Re: [Users] Storage domain issue iSCSI
> 
> Hi Yeela,
> 
> Thanks for the reply, I've attached a vdsm.log file containing an
> attempt to activate the host, then activate the iSCSI storage
> domain.
> 
> Thanks
> 
> Simon
> 
> 
> 
> 
> On Thu, Nov 29, 2012 at 6:03 PM, Yeela Kaplan < ykap...@redhat.com >
> wrote:
> 
> 
> Hi Simon,
> We could use some more information in order to understand the
> problem,
> could you please attach the vdsm logs?
> Thanks,
> Yeela
> 
> 
> 
> - Original Message -
> > From: "Simon Donnellan" < f...@baconwho.re >
> > To: users@ovirt.org
> > Sent: Thursday, November 29, 2012 7:35:18 PM
> > Subject: [Users] Storage domain issue iSCSI
> > 
> > 
> > Hi Everyone,
> > 
> > I'm having an issue following a power cut, none of my 3 nodes (All
> > Fedora 17 / oVirt 3.1) are able to attach the master Domain.
> > 
> > in /var/log/messages I see the following on each node:
> > 
> > Nov 29 17:15:41 hades vdsm Storage.StoragePool ERROR Requested
> > master
> > domain 68d8b0e2-c348-4cfe-a896-08c62d491dfb does not have expected
> > version 3 it is version 4
> > Nov 29 17:15:41 hades vdsm TaskManager.Task ERROR
> > Task=`f06fd1bb-46d1-47d7-80ca-c2e01becdc51`::Unexpected error
> > Nov 29 17:15:41 hades vdsm Storage.Dispatcher.Protect ERROR
> > {'status': {'message': "Wrong Master domain or its version:
> > 'SD=68d8b0e2-c348-4cfe-a896-08c62d491dfb,
> > pool=f1b40ecc-b6a9-44e7-92cb-0fdf445e3175'", 'code': 324}}
> > 
> > I've tried reboots/restarts/node re-installs
> > 
> > I can see the PV and the iSCSI sessions fine from the shell.
> > 
> > As this is the master, none of my nodes will start.
> > 
> > Any help would be great.
> > 
> > Kind Regards
> > 
> > Simon
> > 
> > ___
> > Users mailing list
> > Users@ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
> > 
> 
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users


Re: [Users] Storage domain issue iSCSI

2012-11-29 Thread Yeela Kaplan
Hi Simon,
We could use some more information in order to understand the problem,
could you please attach the vdsm logs?
Thanks,
Yeela 

- Original Message -
> From: "Simon Donnellan" 
> To: users@ovirt.org
> Sent: Thursday, November 29, 2012 7:35:18 PM
> Subject: [Users] Storage domain issue iSCSI
> 
> 
> Hi Everyone,
> 
> I'm having an issue following a power cut, none of my 3 nodes (All
> Fedora 17 / oVirt 3.1) are able to attach the master Domain.
> 
> in /var/log/messages I see the following on each node:
> 
> Nov 29 17:15:41 hades vdsm Storage.StoragePool ERROR Requested master
> domain 68d8b0e2-c348-4cfe-a896-08c62d491dfb does not have expected
> version 3 it is version 4
> Nov 29 17:15:41 hades vdsm TaskManager.Task ERROR
> Task=`f06fd1bb-46d1-47d7-80ca-c2e01becdc51`::Unexpected error
> Nov 29 17:15:41 hades vdsm Storage.Dispatcher.Protect ERROR
> {'status': {'message': "Wrong Master domain or its version:
> 'SD=68d8b0e2-c348-4cfe-a896-08c62d491dfb,
> pool=f1b40ecc-b6a9-44e7-92cb-0fdf445e3175'", 'code': 324}}
> 
> I've tried reboots/restarts/node re-installs
> 
> I can see the PV and the iSCSI sessions fine from the shell.
> 
> As this is the master, none of my nodes will start.
> 
> Any help would be great.
> 
> Kind Regards
> 
> Simon
> 
> ___
> Users mailing list
> Users@ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
> 
___
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users