Hi Nicolas,
Which DC level are you using?
iSCSI multipath is supported only in Data Centers with a compatibility
version of 3.4.
regards,
Maor
On 06/09/2014 01:06 PM, Nicolas Ecarnot wrote:
> Hi,
>
> Context here :
> - 2 setups (2 datacenters) in oVirt 3.4.1 with CentOS 6.4 and 6.5 hosts
> - connec
Hi John,
You are right: if you want to create many VMs from a Template, you can
create a pool.
I think the main difference between creating a single VM and creating a
pool is that in a pool you cannot create a VM with cloned disks.
regards,
Maor
On 06/09/2014 08:45 AM, John Xue wrote:
> Dear all,
Just to be sure, run the SELECT below before the command and check that
only one image is returned:
SELECT * FROM images where image_guid = (SELECT image_guid from
all_disks where disk_alias = 'proxy02_Disk0' and vm_names ='proxy02');
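As a hedged sketch (the `engine` database name and psql user are assumptions; adjust for your setup), the check can be scripted so that the verification query is reviewed before anything destructive runs:

```shell
# Sketch: build the verification query first. The psql invocation is left
# commented out because it needs access to the engine database, and the
# DB name/user ("engine"/"engine") are assumptions.
DISK_ALIAS='proxy02_Disk0'
VM_NAME='proxy02'
QUERY="SELECT * FROM images WHERE image_guid = (SELECT image_guid FROM all_disks WHERE disk_alias = '${DISK_ALIAS}' AND vm_names = '${VM_NAME}');"
echo "$QUERY"
# psql -U engine -d engine -c "$QUERY"   # review: exactly one row should come back
```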
>
> Thank you!
>
> Regard
Neil Wilson.
>
>
>
> On Wed, May 14, 2014 at 2:43 PM, Maor Lipchuk wrote:
>> Hi Neil,
>>
>> Can you please attach the engine and VDSM logs.
>> What is in the event log? Was any operation performed on
>> the disk before?
>>
>> re
ay in the SAN which
> consists of 4x3TB drives = 6TB(12x465MB) and each of the LUNS is
> assigned to the main storage domain.
>
> Sorry, perhaps i'm misunderstanding.
>
> Thank you.
>
> Regards.
>
> Neil Wilson.
>
>
>
>
> On Wed, May 14, 2014
Hi Neil,
The LUNs are shown as selectable since they are not being used by
the engine, although other setups, which we can't be aware of, might use them.
We can't be sure whether the LUNs are actively used, because there
could also be a VG which was not cleaned up properly (For example of an
Hi Neil,
Can you please attach the engine and VDSM logs.
What is in the event log? Was any operation performed on
the disk before?
regards,
Maor
On 05/14/2014 03:35 PM, Neil wrote:
> Hi guys,
>
> I'm trying to remove a VM and reclaim the space that the VM was using.
> This particu
Thanks
On 05/11/2014 09:24 PM, Maurice James wrote:
> Bug opened
> https://bugzilla.redhat.com/show_bug.cgi?id=1096529
>
> - Original Message -----
> From: "Maor Lipchuk"
> To: "Maurice James"
> Cc: "users"
> Sent: Sunday, May
6888dfc in /rhev/data-center.
Thanks,
Maor
On 05/11/2014 02:37 AM, Maurice James wrote:
> This fails for all VM disks
>
> - Original Message -
> From: "Maor Lipchuk"
> To: "Maurice James"
> Cc: "users"
> Sent: Saturday, May 10, 201
For now, it is not possible, but we are working on it (see
http://www.ovirt.org/Features/Live_Merge)
regards,
Maor
On 05/11/2014 05:53 AM, Yair Zaslavsky wrote:
> From what I see in the code of the remove snapshot command,
> the vm should be in DOWN state in order for the snapshot to be removed (
ues under
image 21484146-1a6c-4a31-896e-da1156888dfc in /rhev/data-center.
Does this fail only for a specific VM, or is it also failing for other VMs?
regards,
Maor
On 05/11/2014 12:43 AM, Maurice James wrote:
> VDSM logs from the source and destination are attached
>
>
>
>
>
Hi Maurice,
I was looking at your engine and VDSM logs. It looks like the live
storage migration was performed on a host called Staurn, but the
VDSM logs seem to be from the Beetlejuice host; can you check this, please?
regards,
Maor
On 05/10/2014 03:33 AM, Maurice James wrote:
> Live disk
You should only be able to extend the disk size, not shrink it.
There was an idea to integrate virt-sparsify for thin-provisioned disks
(see
http://www.google-melange.com/gsoc/proposal/review/org/google/gsoc2014/utkarshsins/5629499534213120),
although this is not supported yet.
regards,
Maor
On 04
----
>
>
> On 27 March 2014 16:02, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> Hi Gary,
>
> Please take a look at
> http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience
>
> Regards,
> Maor
&
Hi Gary,
Please take a look at
http://www.ovirt.org/Feature/iSCSI-Multipath#User_Experience
Regards,
Maor
On 03/27/2014 05:59 PM, Gary Lloyd wrote:
> Hello
>
> I have just deployed Ovirt 3.4 on our test environment. Does anyone know
> how the ISCSI multipath issue is resolved ? At the moment it
On 03/12/2014 05:57 PM, Alon Bar-Lev wrote:
>
>
> - Original Message -
>> From: "Itamar Heim"
>> To: "Eli Mesika" , users@ovirt.org
>> Cc: "Maor Lipchuk" , "Tomasz Kołek"
>> , "infra"
>> Sent:
ill be more
generic.
>
>
> BR,
> Tomek
>
> -Original Message-
> From: Maor Lipchuk [mailto:mlipc...@redhat.com]
> Sent: Tuesday, March 11, 2014 8:40 PM
> To: Meital Bourvine; Eyal Edri
> Cc: Tomasz Kołek; users@ovirt.org; infra
> Subject: Re: [Users] [G
On 03/11/2014 05:20 PM, Itamar Heim wrote:
> On 03/11/2014 05:14 PM, Eyal Edri wrote:
>>
>>
>> - Original Message -
>>> From: "Itamar Heim"
>>> To: "Eyal Edri" , "Tomasz Kołek"
>>> , users@ovirt.org, "infra"
>>> Sent: Tuesday, March 11, 2014 5:10:54 PM
>>> Subject: Re: [Users] [GSOC][Gerr
Hi Tomasz,
I'm very pleased to hear that you are interested in the GSOC project;
see my answers inline.
It's great to see that you know the material pretty well. I'm interested
to hear some more ideas and feedback.
I will start a thread soon (maybe also schedule a call) once I get
more feedback
Hi mad,
Since oVirt version 3.4 we support only two Data Center types, shared
and local.
Changing the Data Center between the two is supported, though the Data
Center should not contain any Storage Domains.
That means that if you have VMs with disks, you will need to export
them to an export do
Hi Boyan,
Generally we do not disconnect external LUN disks when we remove them
from oVirt management.
You can disconnect them manually from the host, or use a restart.
IIRC, one reason for that is that we might have Storage Domains which
use the same target;
another reason is that we kee
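A manual disconnect from the host could look roughly like this (a hedged sketch: the target IQN and portal are placeholders, and the actual iscsiadm calls are left commented out; first make sure no other storage domain uses the same target):

```shell
# Placeholders only: substitute your real target and portal before uncommenting.
TARGET="iqn.2014-01.com.example:target1"
PORTAL="192.0.2.10:3260"
LOGOUT_CMD="iscsiadm -m node -T $TARGET -p $PORTAL -u"
DELETE_CMD="iscsiadm -m node -T $TARGET -p $PORTAL -o delete"
echo "$LOGOUT_CMD"   # log out of the iSCSI session
echo "$DELETE_CMD"   # remove the node record so it is not re-logged in later
# $LOGOUT_CMD && $DELETE_CMD
```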
Hi Andreas,
Basically it means that the snapshot was created, but the QEMU process
is still writing to the original volume (the snapshot), so any
changes you make while this VM is running will be in the snapshot.
This can be fixed by restarting the VM (as described in the event),
a
On 02/03/2014 07:46 PM, Dafna Ron wrote:
> On 02/03/2014 05:34 PM, Maor Lipchuk wrote:
>> On 02/03/2014 07:18 PM, Dafna Ron wrote:
>>> Maor,
>>>
>>> If snapshotVDSCommand is for live snapshot, what is the offline create
>>> snapshot command?
>
ibvirt, if that
kind of behaviour could happen.
>
> Dafna
>
>
> On 02/03/2014 05:08 PM, Maor Lipchuk wrote:
>> From the engine logs it seems that indeed live snapshot is called (The
>> command is snapshotVDSCommand see [1]).
>> This is done right after the snapshot has b
From the engine logs it seems that indeed live snapshot is called (The
command is snapshotVDSCommand see [1]).
This is done right after the snapshot has been created in the VM, and it
signals the qemu process to start using the newly created volume.
When a live snapshot does not succeed, we should see i
On 02/02/2014 02:26 PM, Gianluca Cecchi wrote:
> On Sun, Feb 2, 2014 at 11:21 AM, Maor Lipchuk wrote:
>
>> That is correct, you can also see the size and the fields through the
>> API or ovirt-cli
>> (see
>> http://documentation-devel.engineering.redha
On 02/01/2014 06:52 PM, Itamar Heim wrote:
> On 01/29/2014 10:45 AM, Maor Lipchuk wrote:
>> Hi Sandy,
>>
>> virtual size is the size of the disk the VM knows, it is actually the
>> size you chose to create it with.
>> The true size is the summerise of all the true
Hi,
Please see inline response.
Regards,
Maor
On 01/29/2014 02:35 PM, Nicolas Ecarnot wrote:
> On 29/01/2014 13:29, Maor Lipchuk wrote:
>> Hi Nicolas,
>>
>> Can u please attach the VDSM logs of the problematic nodes and valid
>> nodes, the engine log and also the s
Hi Nicolas,
Can you please attach the VDSM logs of the problematic and valid
nodes, the engine log, and also the sanlock log.
You wrote that many nodes suddenly began to become
unresponsive.
Do you mean that the hosts switched to non-responsive status in the engine?
I'm asking that because non
which is
> still in place and was in place during the crash.
>
> Thanks
> - Trey
>
> On Tue, Jan 28, 2014 at 2:45 AM, Maor Lipchuk wrote:
>> Hi Trey,
>>
>> Can you please also attach the engine/vdsm logs.
>>
>> Thanks,
>> Maor
>>
>> On
Hi Sandy,
Virtual size is the size of the disk as the VM knows it; it is actually the
size you chose to create it with.
The true size is the sum of the true sizes of the volumes
related to the disk.
So, for example, suppose you have one disk of 20G and you occupied 18GB of it.
Then you created a snaps
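The distinction can be illustrated outside oVirt with a sparse file (a minimal sketch; the real numbers come from the storage domain's volumes, not from /tmp):

```shell
# A sparse file has a large "virtual" size but allocates almost no blocks,
# just like a thin-provisioned volume before it fills up.
IMG=$(mktemp)
truncate -s 20G "$IMG"                 # virtual size: what the guest would see
VIRTUAL_BYTES=$(stat -c %s "$IMG")
TRUE_KB=$(du -k "$IMG" | cut -f1)      # true size: blocks actually allocated
echo "virtual=${VIRTUAL_BYTES} bytes, true=${TRUE_KB} KB"
rm -f "$IMG"
```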
Hi Trey,
Can you please also attach the engine/vdsm logs.
Thanks,
Maor
On 01/27/2014 06:12 PM, Trey Dockendorf wrote:
> I setup my first oVirt instance since 3.0 a few days ago and it went
> very well, and I left the single host cluster running with 1 VM over
> the weekend. Today I come back an
Adding Sandro to the thread
On 01/12/2014 11:26 AM, Maor Lipchuk wrote:
> On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
>> Hi Maor, hi everyone,
>>
>> On 07/01/2014 04:09, Maor Lipchuk wrote:
>>> Looking at bugzilla, it could be related to
>>> https://bu
On 01/09/2014 12:29 PM, Nicolas Ecarnot wrote:
> Hi Maor, hi everyone,
>
> On 07/01/2014 04:09, Maor Lipchuk wrote:
>> Looking at bugzilla, it could be related to
>> https://bugzilla.redhat.com/1029069
>> (based on the exception described at
>> https://bu
Hi Nicolas,
I think that the initial problem started at 10:06 when VDSM tried to
clear records of the ancestor volume
c50561d9-c3ba-4366-b2bc-49bbfaa4cd23 (see [1])
Looking at bugzilla, it could be related to
https://bugzilla.redhat.com/1029069
(based on the exception described at
https://bugzilla
Hi Nicolas,
Can you please also add the VDSM log.
Thanks,
Maor
On 01/06/2014 11:25 AM, Nicolas Ecarnot wrote:
> Hi,
>
> With our oVirt 3.3, I created a snapshot 3 weeks ago on a VM I've
> properly shutdown.
> It ran so far.
> Today, after having shut it down properly, I'm trying to delete the
> sn
Hi
There is a patch which adds a validation to block detaching an ISO
domain while there are VMs with attached ISO files.
(see http://gerrit.ovirt.org/#/c/20331)
Regarding Ernest's question, IIRC the VM should run even if the ISO is
not reachable. Ernest, can you please attach the logs of VDSM and
Hi Sven,
What is the path of the third local storage domain you can't add?
Could it be the same as the other existing connections (/home/DATA,
/data/images/rhev)?
It could be an issue of this bug https://bugzilla.redhat.com/1023739 - -
Cannot create same Local SD path on different Hosts
Regards,
Hi Juan,
Based on the code, exporting a VM only exports the images of the VM/Template.
It will not export direct LUNs or shared disks.
Regards,
Maor
On 11/11/2013 05:31 PM, Juan Pablo Lorier wrote:
> Hi Liron,
>
> I was told the opposite prior in the list. Snapshots of direct-luns are
> not impleme
he 'select' returns 0 rows. Without the 'where' clause, it returns the
> three domains that I have.
>
> When I tried to remove a disk, it failed. The engine and SPM logs follow:
>
> ENGINE
> http://pastebin.com/aFMvC5tN
>
> SPM
> http://pastebin.com/KQ9mt
ted with a export storage domain, that was removed one year
> ago!
>
> So, from where SPM is getting this UUID and why it's trying to get
> information from it?
>
> Thanks.
>
> On 07/30/2013 04:25 AM, Maor Lipchuk wrote:
>> Hi Eduardo,
>> Can u please also add
Hi Joop, sounds great.
Can you please send an update with the BZ number once it is opened, just
to have it scrubbed more quickly.
Thanks,
Maor
On 07/30/2013 03:00 PM, noc wrote:
> Hi All,
>
> Because of discussion on IRC I'll make a RFE for the following:
> When you have lots (>25) ISCSI LUN's wh
Hi Eduardo,
Can you please also add the engine log and the full VDSM log (if you have
other hosts, then please add their vdsm.log as well).
Thanks.
Maor
On 07/29/2013 11:10 PM, Eduardo Ramos wrote:
> Hi all!
>
> My SPM is logging a strange message in vdsm.log. It tries to get
> information from
aved if the VM has no spare disks or space?
> Regards,
>
> On 06/02/2013 03:07 PM, Itamar Heim wrote:
>> On 06/02/2013 02:27 PM, Maor Lipchuk wrote:
>>> Hi Juan,
>>> Snapshot saves the VM at a particular state in time using qcow (Creates
>>> a new volume and
Hi Juan,
A snapshot saves the VM at a particular state in time using qcow (it creates
a new volume and only writes the changes from that particular time).
An export domain also saves the VM data, but it uses a copy process of the
volumes; therefore, a snapshot is a faster process but depends on the
VM disk
Hi Alessandro,
Please see inline comments
Regards,
Maor
On 06/02/2013 01:16 PM, Alessandro Bianchi wrote:
> On 02/06/2013 11:00, Maor Lipchuk wrote:
>> Hi Alessandro,
>> Please see inline comments
>>
>> Regards,
>> Maor
>>
>> On 05/31/2013 10:49
Hi Alessandro,
Please see inline comments
Regards,
Maor
On 05/31/2013 10:49 PM, Joop wrote:
> Alessandro Bianchi wrote:
>> Hi all
>>
>> I'm unable to activate a domain but if I mount it from the shell it
>> mounts with no problem at all
>>
> Could you add ovirt version and OS version?
>
> Joop
>
6 111 |E-Mail
> s.knohsa...@netbiscuits.com | Skype: netbiscuits.admin
> Netbiscuits GmbH | Europaallee 10 | 67657 | GERMANY
>
>
> -Original Message-
> From: Maor Lipchuk [mailto:mlipc...@redhat.com]
> Sent: Wednesday, May 29, 2013 15:32
> To: Sven Knohsalla
>
Hi Sven,
It looks like a bug.
It sounds like the disk can be unlocked from the DB.
Just to make sure, can you log into your SPM host and run
vdsClient -s 0 getAllTasksStatuses to check that no tasks are running?
Regards,
Maor
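A quick way to script that check (a sketch; the command string is echoed rather than executed here, since vdsClient only exists on the host):

```shell
# Run this on the SPM host; an empty result from getAllTasksStatuses means
# no SPM task is still holding the disk lock.
CHECK_CMD="vdsClient -s 0 getAllTasksStatuses"
echo "On the SPM host, run: $CHECK_CMD"
# $CHECK_CMD
```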
On 05/29/2013 03:27 PM, Sven Knohsalla wrote:
> Hi,
>
>
>
> I ran
hose VMs were slimmer than the 3.1 VM.
Did you manage to reproduce it on other VMs from 3.1?
Was it reproducible every time?
Regards,
Maor
On 05/27/2013 02:47 PM, ov...@qip.ru wrote:
> Hi, Maor
>
> + messages
>
> Thanks,
> Vadim
>
> Mon, 27 May 2013 12:56:50 +0400, M
Thanks,
> Vadim
>
> Sun, 26 May 2013 21:41:54 +0400, Maor Lipchuk wrote:
>> Hi,
>> Can u please also attach the engine log, and the full log of VDSM
>>
>> In the log you sent I see you got the message
>> supervdsm::190::SuperVdsmProxy::(_connect) Connect to sv
Hi,
Can you please also attach the engine log and the full VDSM log.
In the log you sent I see you got the message
supervdsm::190::SuperVdsmProxy::(_connect) Connect to svdsm failed
This behaviour could be related to https://bugzilla.redhat.com/910005
which was fixed in later version of VDSM.
But
...@logicworks.pt wrote:
> I'm using Version 3.2.1-, libvirt-0.10.2.4-1.fc18, vdsm-4.10.3-10.fc18
> is there an upgrade?
>
> Regards
> Jose
>
> --------
> *From: *"Maor Lipchuk"
> *To: *supo
8e279f5'", 'code': 268}}
>
> Can you please teach me how to apply the fix?
>
> Thanks
> Jose
>
>
> *From: *"Maor Lipchuk"
> *To: *supo...@logicworks.pt
> *Cc: *"users@oVirt.org"
> *Sent: *Segu
Hi suporte,
You probably encountered this bug: https://bugzilla.redhat.com/884635.
The fix for this bug should delete the disk from the engine even though
the image does not exist in the storage.
Can you please attach or check the VDSM and engine logs, just to be sure
that you encountered the same sc
013 17:10, "Maor Lipchuk" <mailto:mlipc...@redhat.com>> wrote:
>>
>> FYI, the patch is at http://gerrit.ovirt.org/#/c/13172/
>>
>> Thanks for your attention,
>> Maor
>>
>> On 03/19/2013 05:52 PM, Maor Lipchuk wrote:
>>
On 03/27/2013 10:37 AM, Dan Kenigsberg wrote:
> (dropping announce list. they only care about the finished product, not
> the road to get it done)
>
> On Tue, Mar 26, 2013 at 10:24:16AM +0100, Gianluca Cecchi wrote:
>> On Thu, Mar 21, 2013 at 2:36 PM, Mike Burns wrote:
>>
do they
>From the VDSM log, it seems that the master storage domain was not
responding.
Thread-23::DEBUG::2013-03-22
18:50:20,263::domainMonitor::216::Storage.DomainMonitorThread::(_monitorDomain)
Domain 1083422e-a5db-41b6-b667-b9ef1ef244f0 changed its status to Invalid
Traceback (most recent call la
Hi Gianluca,
Only the SPM host in the DC creates disks or performs any other
storage-related allocation operation.
Regards,
Maor
On 03/24/2013 10:30 AM, Gianluca Cecchi wrote:
> On Sun, Mar 24, 2013 at 9:05 AM, Maor Lipchuk wrote:
>> Hi Gianluca,
>> You can set the VM to ru
Hi Gianluca,
See my comments inline.
Regards,
Maor
On 03/22/2013 10:56 AM, Gianluca Cecchi wrote:
> Hello,
> this is my situation. Downtime of the VMs is not a problems as it is a
> test environment
> all based on 3.2.1 on f18 and ovirt stable repo
>
> DC1
> FC type with cluster1
> 2 hosts conne
Hi Gianluca,
You can set the VM to run on a specific host by editing the VM,
choosing the Host tab, and selecting "Run On" a specific host.
Generally, you can configure your cluster to use three different types
of selection policies: None, Evenly Distributed, and Power Saving.
This can be set in the Cluster Policy tab
On 03/21/2013 06:12 PM, Maor Lipchuk wrote:
> Did the VDSM restart make any difference?
>
> If your host is UP and running, you can try to re-initialize your data
> center (by right-clicking the Data Center) and pick a new storage domain to be
> the master.
You will first need to add a new stora
DB directly but I'm asking
> if it is possible to have some kind of workaround for these situations.
>
> Many thanks in avanced,
>
> Juanjo.
>
> On Thu, Mar 21, 2013 at 4:52 PM, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> thanks for the
t;
> Many thanks,
>
> Juanjo.
>
> On Wed, Mar 20, 2013 at 5:02 PM, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> On 03/20/2013 05:58 PM, Maor Lipchuk wrote:
> > Hi Juan,
> > I think you encountered this bug
> https://bugzi
On 03/20/2013 05:58 PM, Maor Lipchuk wrote:
> Hi Juan,
> I think you encountered this bug https://bugzilla.redhat.com/881941, the
> log there is quite the same.
> The auto recovery process should fix that after 15 minutes, but need to
> see if it is enabled in your environment.
Th
Hi Juan,
I think you encountered this bug: https://bugzilla.redhat.com/881941; the
log there is quite similar.
The auto-recovery process should fix that after 15 minutes, but you need to
check whether it is enabled in your environment.
On 03/20/2013 05:29 PM, Juan Jose wrote:
> I forgot the vdsm.log file,
>
Hi Jose, you first need to add the user.
After the user is added, you can select it, and then you should see the
Permissions sub-tab.
Regards,
Maor
On 03/20/2013 04:10 PM, supo...@logicworks.pt wrote:
> using the administrator portal, in the users tab -> add, I only have a
> search field.
> Don't f
Also adding Federico to the thread, could be a san lock issue:
SanlockException(-203, 'Sanlock lockspace add failure', 'Sanlock exception')
Regards,
Maor
On 03/20/2013 01:50 PM, Eli Mesika wrote:
>
>
> - Original Message -
>> From: "Limor Gavish"
>> To: "Eli Mesika"
>> Cc: users@ovirt
Adding Timothy to the thread.
Removing a live snapshot is not supported yet, but it's on the roadmap.
For now, the VM must be shut down before removing a snapshot.
On 03/19/2013 07:11 PM, Gianluca Cecchi wrote:
> On Tue, Mar 19, 2013 at 5:52 PM, wrote:
>> I can create a snapshot, but how can I revert it
FYI, the patch is at http://gerrit.ovirt.org/#/c/13172/
Thanks for your attention,
Maor
On 03/19/2013 05:52 PM, Maor Lipchuk wrote:
> Hi Gianluca, I checked the code of 3.2 and you are right, the patch was
> not pushed there.
> Although it seems that the origin fix exists in 3.2, I wi
Hi Gianluca, I checked the code of 3.2 and you are right, the patch was
not pushed there.
Although it seems that the origin fix exists in 3.2, I will send a patch
to fix that.
Regards,
Maor
On 03/19/2013 05:41 PM, Gianluca Cecchi wrote:
> Hello,
> in my opinion 3.2.1 for fedora 18 doesn't contain
Hi xianghuadu,
any chance it is iptables rejecting the communication from your
engine to your oVirt node?
Regards,
Maor
On 02/25/2013 04:07 PM, xianghuadu wrote:
> I am installing ovirt-engine + ovirt-node; when ovirt-engine adds the
> node, the host node is unable to activate. I throug
Hi Jason, can you please add the engine and VDSM logs.
There is an open bug which might reflect the issue you encountered:
https://bugzilla.redhat.com/884635.
I can verify it the moment you send the logs.
Regards,
Maor
On 02/05/2013 06:31 AM, Jason Lawer wrote:
> Hi,
>
> I appear t
o putting the host into Maint mode ? Or can I only do that
> when the host is in Maint mode ?
>
>
> On 17 January 2013 20:42, Maor Lipchuk <mailto:mlipc...@redhat.com>> wrote:
>
> Hi Alex,
> See my last respond, I add a comment i
Would changing the SPM priority in the host config have the desired
> effect w/o putting the host into Maint mode ? Or can I only do that when
> the host is in Maint mode ?
You can't change your SPM host while your host is up and running.
>
>
> On 17 January 2013 20:42, Maor L
storage domains, one of the reasons
we use the SPM is to avoid collisions when hosts write to the same
storage domain.
>
> Alex
>
>
> On 01/17/2013 07:39 PM, Maor Lipchuk wrote:
>> Hi Alex,
>> The storage domains are part of the Data Center, and should not be
>>
Hi Alex,
The storage domains are part of the Data Center and should not be
related to clusters.
Clusters are used for migrating VMs, which are the qemu processes,
between hosts.
Disks are created on the storage domains regardless of clusters.
The host which writes the data in the storage domain to
Hi Adrian,
Sorry for the late response.
Please see the inline comments,
and feel free to ask if there are any more questions or other issues you
want us to address.
P.S. Since some of the responses were in different threads, I gathered
all of them into this email.
> 3) About the attached logs the vi
Hi Alex,
Can you please open a bug on that issue.
It appears that since your template has dots in its name, it cannot be
imported.
Regards,
Maor
On 01/16/2013 05:55 PM, Omer Frenkel wrote:
> [adding Gilad]
> what engine version is this? (latest upstream looks different and i am
> able to import "co
Hi Jiri,
Perhaps you are referring to quota (http://www.ovirt.org/Features/Quota)
Regards,
Maor
On 01/09/2013 12:15 PM, Jiri Belka wrote:
> Hi,
>
> in vSphere you can create a resource pool[1] and define to it access control
> and delegation...
>
>
> Access control and delegation - When a top
Hi Alex, if the VMs that you created used thin provisioning from the
template, then the disks of the created VMs are based on the volumes of
the template images using qcow, and therefore they will be related to
this template.
If you don't want the VM to be based on the template images, you can use
t
Hi Adrián,
Does the template disk allocation policy indicate thin provisioning or
preallocated?
I also didn't find any bug related to it.
Can you please attach VDSM and engine logs.
Thanks,
Maor
On 12/20/2012 11:28 PM, Adrian Gibanel wrote:
> I'm going to describe the bug.
>
> You create a vir
Hi Erik,
How much free storage space do you have in your domains?
Can you please add the VDSM log.
Regards,
Maor
On 12/19/2012 05:58 AM, Erik Jacobs wrote:
> I attempted to take a snapshot of a machine while it was running. I
> noticed that the machine was paused, and then I attempted to resume it.
Hi Vince,
Live merge is not implemented yet, although it is in the roadmap.
Regards,
Maor
On 11/22/2012 03:29 PM, Vincent Miszczak wrote:
> Hi,
>
>
>
> I’ve tried live VM disks snapshot. Its ok. It appears I cannot delete
> those snapshots while the VM is running.
>
>
>
> Is it a bug? If
Hi Jörn,
If a disk failed to move to a destination storage domain because VDSM
failed to complete the task successfully, it can be fixed from the DB,
although it might be a bit complicated and risky.
In case none of the disks are shareable, you need to update the disk
status back to OK, and
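A hedged sketch of such a DB fix (the status values 1=OK and 2=LOCKED, and the psql parameters, are assumptions based on common engine DB conventions; back up the database and verify against your schema before running anything):

```shell
# Placeholder GUID; look up the real one (e.g. via the all_disks view) first.
IMAGE_GUID='00000000-0000-0000-0000-000000000000'
SQL="UPDATE images SET imagestatus = 1 WHERE image_guid = '${IMAGE_GUID}' AND imagestatus = 2;"
echo "$SQL"
# psql -U engine -d engine -c "$SQL"   # only after backing up the DB
```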
On 10/26/2012 04:20 PM, Greg Padgett wrote:
> On 10/26/2012 08:21 AM, Karli Sjöberg wrote:
>> Hi,
>>
>> today I had a VM that had a couple of hard drives created, that I wanted
>> destroyed, so I shift-clicked to mark all of them:
>>
>>
>> This screen showed up:
>>
>>
>> I clicked "OK", the disks w
Hi Changsen,
Can you please add the engine and VDSM logs.
Regards,
Maor
On 09/10/2012 11:36 AM, Changsen Xu wrote:
> Hi, all,
>
> I'm happy to get NFS domain added, but somehow, I destroyed
> one non-master domain (probably clicked twice seeing no immediately
> response).
>
> Then somehow, late
Hi Umarzuki,
can you please add the engine and VDSM logs.
Regards,
Maor
On 09/06/2012 06:22 AM, Umarzuki Mochlis wrote:
> Hi,
>
> I installed ovirt 3.1 from dre repo on centos 6 using allinone plugin
> from that repo
> it timed-out during adding local host
>
> when I logged into web management, I c
On 08/16/2012 12:29 AM, Darrell Budic wrote:
> I had a Thin Provision disk that was about 6Gb prior to moving it to a
> new storage domain. Now it's 40GB (the full size of the volume) even
> though it still says it's a Thin Provision allocation. Is this expected
> or is there any way to avoid it? T
m checking what caused the bug.
Regards,
Maor
On 08/14/2012 02:53 PM, Ricky Schneberger wrote:
> Hi
>
> I attached the logs.
>
> //Ricky
>
> On 2012-08-14 12:56, Maor Lipchuk wrote:
>> Hi Ricky can you please add the log of VDSM, also the full engine log.
>>
>
Hi Ricky, can you please add the VDSM log and also the full engine log.
Regards,
Maor
On 08/14/2012 01:22 PM, Ricky Schneberger wrote:
> After an normal "yum update" i am unable to get one of the storage
> domains "UP".
>
> My systems is running Centos 6.3.
>
> If I try to activate the domain i
On 05/28/2012 02:39 PM, Elad Tabak wrote:
> Hi,
> I'm trying to attach a virtual disk to a VM.
> Once I create a virtual disk for the VM, the VM attempts to start (sand
> clock icon), then after a minute goes into down state, the virtual disk
> disappear, and there's an alarm with the following err