Re: [ovirt-users] unreachable iso domain

2016-05-19 Thread Erik Brakke
Hey Fabrice,

I had this issue too (just goofing around on a home lab).  The problem for
me was that the NFS export was not configured.  I followed the howto (
http://www.ovirt.org/documentation/how-to/troubleshooting/troubleshooting-nfs-storage-issues/),
exported the ISO domain, and was then able to attach it in the GUI and use
the ISO uploader.  Not sure why engine-setup didn't export it for me, but
that was all that was missing.
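
For the archives, the export that fixed it for me was a single line in
/etc/exports - the path is engine-setup's default on my box, and the
wide-open ACL is lab-only, so treat this as a sketch:

/var/lib/exports/iso    *(rw)

followed by "exportfs -ra" and checking that the directory is owned by
36:36 (vdsm:kvm).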

-Erik

On Thu, May 19, 2016 at 12:45 PM Fabrice Bacchella <
fabrice.bacche...@icloud.com> wrote:

> I'm trying to attach an ISO domain but it keeps saying that it doesn't
> exist. But if I look in
> /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf, the directory
> given in OVESETUP_CONFIG/isoDomainStorageDir exists, and the logs say:
>
> 2016-05-19 19:23:23,900 ERROR
> [org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand]
> (org.ovirt.thread.pool-8-thread-19) [78115edf] Command
> 'AttachStorageDomainVDSCommand(
> AttachStorageDomainVDSCommandParameters:{runAsync='true',
> storagePoolId='17434f4e-8d1a-4a88-ae39-d2ddd46b3b9b',
> ignoreFailoverLimit='false',
> storageDomainId='2a9fe2d7-ea38-4ced-a274-32734b7b571b'})' execution failed:
> IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS,
> error = Storage domain does not exist:
> (u'2a9fe2d7-ea38-4ced-a274-32734b7b571b',), code = 358
>
> Does code = 358 mean something important?
>


[ovirt-users] unreachable iso domain

2016-05-19 Thread Fabrice Bacchella
I'm trying to attach an ISO domain but it keeps saying that it doesn't exist.
But if I look in /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf, the
directory given in OVESETUP_CONFIG/isoDomainStorageDir exists, and the logs
say:
 
2016-05-19 19:23:23,900 ERROR 
[org.ovirt.engine.core.vdsbroker.irsbroker.AttachStorageDomainVDSCommand] 
(org.ovirt.thread.pool-8-thread-19) [78115edf] Command 
'AttachStorageDomainVDSCommand( 
AttachStorageDomainVDSCommandParameters:{runAsync='true', 
storagePoolId='17434f4e-8d1a-4a88-ae39-d2ddd46b3b9b', 
ignoreFailoverLimit='false', 
storageDomainId='2a9fe2d7-ea38-4ced-a274-32734b7b571b'})' execution failed: 
IRSGenericException: IRSErrorException: Failed to AttachStorageDomainVDS, error 
= Storage domain does not exist: (u'2a9fe2d7-ea38-4ced-a274-32734b7b571b',), 
code = 358

Does code = 358 mean something important?
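
(From the error text itself, 358 looks like just vdsm's numeric id for the
StorageDomainDoesNotExist error named on the same line. If anyone wants to
verify, the codes are defined as "code = NNN" class attributes in vdsm's
storage exception module - a rough check, since the exact path varies by
vdsm version:

grep -rn "code = 358" /usr/share/vdsm/storage/)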



[ovirt-users] gluster VM disk permissions

2016-05-19 Thread Bill James
I tried posting this to the ovirt-users list but got no response, so I'll
try here too.



I just set up a new oVirt cluster with gluster & NFS data domains.

VMs on the NFS domain start up with no issues.
VMs on the gluster domains complain of "Permission denied" on startup.

2016-05-17 14:14:51,959 ERROR
[org.ovirt.engine.core.dal.dbbroker.auditloghandling.AuditLogDirector]
(ForkJoinPool-1-worker-11) [] Correlation
ID: null, Call Stack: null, Custom Event ID: -1, Message: VM 
billj7-2.j2noc.com is down with error. Exit message: internal error: 
process exited while connecting to monitor: 2016-05-17T21:14:51.162932Z 
qemu-kvm: -drive 
file=/rhev/data-center/0001-0001-0001-0001-02c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e,if=none,id=drive-virtio-disk0,format=raw,serial=2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25,cache=none,werror=stop,rerror=stop,aio=threads: 
Could not open 
'/rhev/data-center/0001-0001-0001-0001-02c5/22df0943-c131-4ed8-ba9c-05923afcf8e3/images/2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25/a2b0a04d-041f-4342-9687-142cc641b35e': 
Permission denied



I did set up the gluster permissions:
gluster volume set gv1 storage.owner-uid 36
gluster volume set gv1 storage.owner-gid 36
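
(To sanity-check that the volume actually picked these up - gluster 3.7
should support "volume get", though I'd verify on your build:

gluster volume get gv1 storage.owner-uid
gluster volume get gv1 storage.owner-gid)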

files look fine:
[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# ls -lah
total 2.0G
drwxr-xr-x  2 vdsm kvm 4.0K May 17 09:39 .
drwxr-xr-x 11 vdsm kvm 4.0K May 17 10:40 ..
-rw-rw----  1 vdsm kvm  20G May 17 10:33
a2b0a04d-041f-4342-9687-142cc641b35e
-rw-rw----  1 vdsm kvm 1.0M May 17 09:38
a2b0a04d-041f-4342-9687-142cc641b35e.lease
-rw-r--r--  1 vdsm kvm  259 May 17 09:39
a2b0a04d-041f-4342-9687-142cc641b35e.meta


I did check, and the vdsm user can read the file just fine.
*If I chmod the disk to 666, the VM starts up fine.*
Also, if I chgrp it to qemu, the VM starts up fine.

[root@ovirt2 prod a7af2477-4a19-4f01-9de1-c939c99e53ad]# ls -l 
253f9615-f111-45ca-bdce-cbc9e70406df
-rw-rw---- 1 vdsm qemu 21474836480 May 18 11:38
253f9615-f111-45ca-bdce-cbc9e70406df
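
(A quick way to see which uid/gid the qemu process itself runs with, since
that's what actually has to pass the permission check:

ps -o user,group,comm -C qemu-kvm)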



Seems similar to the issue here, but that suggests it was fixed:
https://bugzilla.redhat.com/show_bug.cgi?id=1052114



[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep 36 
/etc/passwd /etc/group

/etc/passwd:vdsm:x:36:36:Node Virtualization Manager:/:/bin/bash
/etc/group:kvm:x:36:qemu,sanlock


ovirt-engine-3.6.4.1-1.el7.centos.noarch
glusterfs-3.7.11-1.el7.x86_64
qemu-img-ev-2.3.0-31.el7_2.4.1.x86_64
qemu-kvm-ev-2.3.0-31.el7_2.4.1.x86_64
libvirt-daemon-1.2.17-13.el7_2.4.x86_64


I also set the libvirt qemu user to root, for the import-to-ovirt.pl script.

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# grep ^user 
/etc/libvirt/qemu.conf

user = "root"


[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume 
info gv1


Volume Name: gv1
Type: Replicate
Volume ID: 062aa1a5-91e8-420d-800e-b8bc4aff20d8
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick2: ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1
Brick3: ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1
Options Reconfigured:
performance.readdir-ahead: on
performance.quick-read: off
performance.read-ahead: off
performance.io-cache: off
performance.stat-prefetch: off
cluster.eager-lock: enable
network.remote-dio: enable
cluster.quorum-type: auto
cluster.server-quorum-type: server
features.shard: on
features.shard-block-size: 64MB
storage.owner-uid: 36
storage.owner-gid: 36

[root@ovirt1 prod 2ddf0d0e-6a7e-4eb9-b1d5-6d7792da0d25]# gluster volume 
status gv1

Status of volume: gv1
Gluster process                                     TCP Port  RDMA Port  Online  Pid
-------------------------------------------------------------------------------------
Brick ovirt1-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       2046
Brick ovirt2-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       22532
Brick ovirt3-gl.j2noc.com:/ovirt-store/brick1/gv1   49152     0          Y       59683
NFS Server on localhost                             2049      0          Y       2200
Self-heal Daemon on localhost                       N/A       N/A        Y       2232
NFS Server on ovirt3-gl.j2noc.com                   2049      0          Y       65363
Self-heal Daemon on ovirt3-gl.j2noc.com             N/A       N/A        Y       65371
NFS Server on ovirt2-gl.j2noc.com                   2049      0          Y       17621
Self-heal Daemon on ovirt2-gl.j2noc.com             N/A       N/A        Y       17629

Task Status of Volume gv1
-------------------------
There are no active volume tasks



Any ideas on why oVirt thinks it needs the qemu group?





Re: [ovirt-users] 100% memory usage on desktop environments

2016-05-19 Thread nicolas

El 2016-05-19 07:36, Yaniv Kaul escribió:

On Wed, May 18, 2016 at 6:52 PM, Nicolás wrote:


Hi,

Probably not an oVirt issue, but maybe someone can help. I've
deployed a pretty basic VM (Ubuntu 14.04 server, 4GB RAM, 4 CPUs,
15GB storage). Each time I install an additional desktop environment
(Gnome, KDE, whatever...), CPU usage rises to 100% all the time, to the
extreme that interacting with the machine becomes impossible (maybe
a mouse movement is propagated 3 minutes later or so...).


Is X properly configured with QXL? I.e., is the
xorg-x11-drv-qxl RPM (in Fedora / EL - I assume something similar in
Ubuntu) installed, and the x.org [2] conf file properly set up - something
like:

Section "Device"
 Identifier "qxl"
 Driver "qxl"
EndSection

Section "Screen"
 Identifier "screen"
 Device "qxl"
 DefaultDepth 24
 SubSection "Display"
 Depth 24
 Modes "1024x768"
 EndSubSection
EndSection
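
(A quick way to confirm which driver Xorg actually loaded, assuming the
default log location:

grep -E "Loading.*(qxl|cirrus)" /var/log/Xorg.0.log)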

Y.
 


Indeed, the package was installed and everything was configured as it
should be. I tried removing the qxl driver package from the OS and now
everything goes smooth as silk... Strange, as supposedly this driver
should optimize the graphics part.


Thanks all for the help!

Regards.




To debug this, I installed LXDE, which is a lightweight desktop
environment based on Xorg. I could see there is an Xorg process
consuming one of the CPUs, and the machine stops responding as far as
the desktop environment goes. I have not changed anything in the
configuration file.

I could also see this only happens when QXL is chosen as the
display driver. When CIRRUS is chosen, everything works smoothly and
the CPU is ~100% idle. The downside is that we want to use SPICE and
CIRRUS won't allow it.

Why does this happen? Is this an OS-side driver issue? Any hint on how
it can be fixed?

Thanks.

Nicolás




Links:
--
[1] http://lists.ovirt.org/mailman/listinfo/users
[2] http://x.org



Re: [ovirt-users] oVirt All-in-One upgrade path and requested improvements

2016-05-19 Thread Neal Gompa
On Tue, May 10, 2016 at 11:56 AM, Nir Soffer wrote:
>
> I agree that hosted engine is not a replacement for the all-in-one
> configuration. It adds a lot of unneeded complexity that is not useful
> for the single-host use case.
>
> Can you explain why you use oVirt for your single-host use case,
> and not a simpler solution like virt-manager?
>
> oVirt adds a lot of complexity and overhead that is not required for
> running a couple of VMs on a single machine with local storage.
>

My "single host" is a rather large and powerful server. Ideally, it
would be the start of an oVirt cluster, but it looks like I'd have to
rebuild it to grow it. That said, the all-in-one configuration makes
it easy to get started using and learning oVirt. And most certainly,
using virt-manager is fine for a couple of VMs, but when you're
running more than a dozen, it gets extremely annoying to manage in
virt-manager. oVirt makes that a pretty pleasurable experience. Maybe
that's not the case you guys designed it for, but I've appreciated
being able to stand up a super-powerful individual virtualization host
with oVirt and potentially have the flexibility to grow it into a
useful cluster.

On Tue, May 10, 2016 at 4:16 PM, Yedidyah Bar David wrote:
> On Tue, May 10, 2016 at 6:20 PM, Neal Gompa wrote:
>> Hello,
>>
>> I recently found out that oVirt "deprecated" the All-in-One
>> configuration option in oVirt 3.6 and removed it in oVirt 4.0. This is
>> a huge problem for me, as it means that my oVirt machines don't have
>> an upgrade path.
>>
>> My experiments with the self-hosted engine have ended in failure for a
>> couple of reasons:
>> * The hosted engine deploy expects that a FQDN is already paired with
>> an IP address. This is obviously false in most home environments, and
>> even in the work environment where I use oVirt regularly. There's no
>> workaround for this (except having a third machine to run the
>> engine!), and this utterly breaks the only way to use oVirt in a
>> DHCP-centric environment where I may not control the network
>> addressing.
>
> There is a simple workaround:
>
> 1. Pick an FQDN for the engine
> 2. Add a relevant line to /etc/hosts on the host, with a fake
> IP address
> 3. After the engine VM is up and you know its IP address, fix
> the line in /etc/hosts to map to the real address. Also add the
> same line to /etc/hosts on the engine VM (and all other hosts).
>
> I agree it's ugly, but I use this quite a lot when I have to.
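>
> Concretely, a sketch with placeholder names and addresses
> (engine.example.com, 192.0.2.99 as the fake, 10.1.2.3 as the real one):
>
> # step 2, before deploy - fake entry:
> echo "192.0.2.99 engine.example.com" >> /etc/hosts
> # step 3, once the engine VM is up - swap in its real address:
> sed -i "s/^192.0.2.99/10.1.2.3/" /etc/hosts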
>

I'll try that next time around. Thanks.

>>
>> * Other error states have caused the whole thing to break and just
>> leave the system a broken mess. With no way to clean up, I'm left
>> guessing how to undo everything, which is hellish and leads me to just
>> wipe the whole system and start over.
>
> Unlike engine-setup, which is also used for upgrades, hosted-engine
> --deploy is run only for a clean install, so reinstalling the host
> is much easier than implementing full rollback as in engine-setup.
>
> See also:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1001181
> http://www.ovirt.org/documentation/how-to/hosted-engine/#recoving-from-failed-install
>

Being able to clean up a hosted-engine --deploy would allow me to try
again if it fails. But I guess the script in the docs is sufficient...

>>
>> In addition, I was hoping that there would be improvements with the
>> single system case, rather than destruction of this capability. Some
>> of the improvements are things I think would be useful in even a
>> multi-node setup, too.
>>
>> For example, I would like to see live migration capabilities with
>> local storage across datacenters, as this capability in vMotion makes
>> deployments a lot more flexible. Sometimes, local storage is really
>> the only way to get the kind of speed needed for some workloads, and
>> being able to offer some kind of HA for VMs on local storage would be
>> excellent. In addition to being useful for all-in-one setups, it's
>> quite useful for self-hosted engine configurations, too.
>>
>> It's also rather irritating that there's no way to migrate stuff from
>> shared storage to local storage and back. On top of that, datacenters
>> that have local storage can't have shared storage or vice versa.
>>
>> On top of that, it looks like the all-in-one code is being kept around
>> anyway for the oVirt Live stuff, so why not just keep the capability
>> and improve it? oVirt should become the best virtualization solution
>> for everyone, not just people who have access to huge datacenters
>> where all the conditions are perfect.
>
> You mention several different issues, some unrelated or even irrelevant
> to all-in-one. I'd suggest opening a thread and/or RFE for each. I'll not
> personally comment because it's not my expertise.

I opened an RFE for the local storage live migration feature some
time ago[1], when I first found out about the all-in-one feature going
away. I've just filed an RFE for supporting local and shared storage
in the same data center[2]

[ovirt-users] [ANN] oVirt 4.0.0 First Beta Release is now available for testing

2016-05-19 Thread Sandro Bonazzola
The oVirt Project is pleased to announce the availability of the First
Beta Release of oVirt 4.0.0 for testing, as of May 19th, 2016.

This is pre-release software. Please take a look at our community page[1]
to learn how to ask questions and interact with developers and users.
All issues or bugs should be reported via oVirt Bugzilla[2].
This pre-release should not be used in production.

This release is available now for:
* Fedora 23
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later

This release supports Hypervisor Hosts running:
* Red Hat Enterprise Linux 7.2 or later
* CentOS Linux (or similar) 7.2 or later
* Fedora 23
* oVirt Next Generation Node 4.0

See the release notes draft [3] for installation / upgrade instructions and
a list of new features and bugs fixed.

Notes:
* A new oVirt Live ISO is already available [4].
* A new oVirt Next Generation Node is already available [4]
* A new oVirt Engine Appliance is already available [4]
* A new oVirt Guest Tools ISO is already available [4]
* Mirrors[5] might need up to one day to synchronize.

Additional Resources:
* Read more about the oVirt 4.0.0 beta release highlights:
http://www.ovirt.org/release/4.0.0/
* Get more oVirt Project updates on Twitter: https://twitter.com/ovirt
* Check out the latest project news on the oVirt blog:
http://www.ovirt.org/blog/

[1] https://www.ovirt.org/community/
[2] https://bugzilla.redhat.com/enter_bug.cgi?classification=oVirt
[3] http://www.ovirt.org/release/4.0.0/
[4] http://resources.ovirt.org/pub/ovirt-4.0_beta1/iso/
[5] http://www.ovirt.org/Repository_mirrors#Current_mirrors

-- 
Sandro Bonazzola
Better technology. Faster innovation. Powered by community collaboration.
See how it works at redhat.com


Re: [ovirt-users] How to change host networks?

2016-05-19 Thread Roderick Mooi
Hi!

Both. My immediate problem is in my 17 May email "Cannot sync networks"
(existing setup), with no answer as yet. But I need to do this again even
if I start over (see detail below - I tried, but it isn't working, so if
there's a better way I'd like to know)...

So essentially, the network I use for setup isn't the same as the final
production network, for various reasons - one of them being the installer's
requirements regarding reachability and connectivity; another that the test
lab has a different subnet and I need to get it working there first; the
third requirement is isolation of the management network once setup is
complete.

I've almost managed to achieve this by putting the engine into global
maintenance, shutting down the engine, manually changing the network config
on the hosts (incl. vdsm) and engine, rebooting all hosts (which eventually
brings the engine back up), re-configuring the logical networks and syncing
the networks - this all works except the sync: it goes on for a few minutes,
then reports lost connectivity to the host and, I presume, resets the
network config - i.e. the sync doesn't succeed. Besides the hosts showing
the networks as not synced, all functionality except migration seems to be
working (all hosts are visible and reachable and I can start a VM on any of
them).
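
(For concreteness, the manual change on each host was along these lines -
device names and paths are from my setup, and I'm not sure editing vdsm's
persisted copies directly is the sanctioned way:

vi /etc/sysconfig/network-scripts/ifcfg-ovirtmgmt   # IPADDR/GATEWAY etc.
ls /var/lib/vdsm/persistence/netconf/nets/          # vdsm's own copies

plus updating /etc/hosts on the hosts and the engine.)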

Thanks very much,

Roderick

Roderick Mooi

Senior Engineer: South African National Research Network (SANReN)
Meraka Institute, CSIR

roder...@sanren.ac.za | +27 12 841 4111 | www.sanren.ac.za

On Wed, May 18, 2016 at 3:25 PM, Shmuel Melamud wrote:

> Hi!
>
> Do you have an existing setup that you want to migrate, or are you only
> planning?
>
> Shmuel
>
> On Wed, May 18, 2016 at 2:04 PM, Roderick Mooi wrote:
>
>> Good day
>>
>> I would like to change the IP addresses of the host networks used by
>> oVirt, including the ovirtmgmt bridge. I need to move them to a different
>> subnet post-installation. What is the best/safest way to do this?
>>
>> Thanks,
>>
>> Roderick Mooi
>>
>> Senior Engineer: South African National Research Network (SANReN)
>> Meraka Institute, CSIR
>>
>> roder...@sanren.ac.za | +27 12 841 4111 | www.sanren.ac.za
>>


Re: [ovirt-users] 100% memory usage on desktop environments

2016-05-19 Thread David Jaša
On St, 2016-05-18 at 16:52 +0100, Nicolás wrote:
> Hi,
> 
> Probably not an oVirt issue, but maybe someone can help. I've deployed a
> pretty basic VM (Ubuntu 14.04 server, 4GB RAM, 4 CPUs, 15GB storage).
> Each time I install an additional desktop environment (Gnome, KDE,
> whatever...), CPU usage rises to 100% all the time, to the extreme that
> interacting with the machine becomes impossible (maybe a mouse movement
> is propagated 3 minutes later or so...).
>
> To debug this, I installed LXDE, which is a lightweight desktop
> environment based on Xorg. I could see there is an Xorg process
> consuming one of the CPUs, and the machine stops responding as far as the
> desktop environment goes. I have not changed anything in the
> configuration file.
>
> I could also see this only happens when QXL is chosen as the display
> driver. When CIRRUS is chosen, everything works smoothly and the CPU is
> ~100% idle. The downside is that we want to use SPICE and CIRRUS won't
> allow it.
>
> Why does this happen? Is this an OS-side driver issue? Any hint on how
> it can be fixed?
> 
> Thanks.
> 
> Nicolás

Try disabling KMS (by blacklisting the qxl kernel module). IIRC KMS was
still new around the 14.04 release, while UMS was already mature.
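
A minimal sketch of the blacklist route, assuming Ubuntu's usual modprobe.d
layout (the file name is arbitrary):

echo "blacklist qxl" > /etc/modprobe.d/blacklist-qxl.conf
update-initramfs -u    # then reboot

The idea is that without the kernel module, the X qxl driver falls back to
UMS.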

David



Re: [ovirt-users] oVirt hosted-engine setup stuck in final phase registering host to engine

2016-05-19 Thread Ralf Schenk
Hello,

don't waste time on it. I reinstalled ovirt-hosted-engine-ha.noarch and
then, after some time, the engine magically started. I'm now adding hosts
to the engine and will deploy two other instances of the engine on two
other hosts to get it highly available. So far my gluster seems usable
inside the engine and the hosts.
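
(For the archives, the reinstall was nothing special - assuming the
standard oVirt repos:

yum reinstall ovirt-hosted-engine-ha
systemctl restart ovirt-ha-broker ovirt-ha-agent)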

If it's interesting for you: I also set up HA nfs-ganesha on the hosts
to provide NFS shares to multiple VMs (they will be php-fpm backends to
Nginx) in an efficient way. I also tested and benchmarked (only
sysbench) using one host as MDS for pNFS with the gluster FSAL. So I'm
able to mount my gluster via "mount ... type nfs4 -o minorversion=1" and
am rewarded with pnfs=LAYOUT_NFSV4_1_FILES in /proc/self/mountstats. I
can see good network distribution and connections to multiple servers of
the cluster when benchmarking an NFS mount.
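
(Spelled out, the mount and the check look roughly like this - host and
volume names are from my lab:

mount -t nfs4 -o minorversion=1 microcloud21:/gv1 /mnt/gv1
grep pnfs /proc/self/mountstats)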

What I don't understand: the engine and also setup seem to have a problem
with my type 6 bond. That type proved to be best in glusterfs and NFS
performance and distribution over my 2 network interfaces. Additionally,
I'm losing my IPMI on shared LAN if I use a type 4 (802.3ad) bond.

That's what I have:

eth0__
      \__bond0__br0 (192.168.252.x) for VMs/Hosts
eth1__/     \__bond0.10__ovirtmgmt (172.16.252.x, VLAN 10) for
               Gluster, NFS, Migration, Management

Is this OK?
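
(In ifcfg terms the bond is nothing exotic - a sketch, assuming EL7-style
network scripts:

# /etc/sysconfig/network-scripts/ifcfg-bond0
BONDING_OPTS="mode=6 miimon=100"   # mode 6 = balance-alb; mode 4 = 802.3ad)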

Thanks a lot for your effort. I hope that I can give back something to
the community by actively using the mailing list.

Bye

Am 18.05.2016 um 16:36 schrieb Simone Tiraboschi:
> Really really strange,
> adding Martin here.
>
>
>
> On Wed, May 18, 2016 at 4:32 PM, Ralf Schenk wrote:
>
> Hello,
>
> When I restart (systemctl restart ovirt-ha-broker ovirt-ha-agent)
> the broker seems to fail (from journalctl -xe):
>
> -- Unit ovirt-ha-agent.service has begun starting up.
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: Traceback
> (most recent call last):
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/share/ovirt-hosted-engine-ha/ovirt-ha-broker", line 25, in
> 
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> broker.Broker().run()
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 56, in run
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> self._initialize_logging(options.daemon)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> 
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/broker/broker.py",
> line 131, in _initialize_logging
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> level=logging.DEBUG)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 1529, in basicConfig
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: hdlr =
> FileHandler(filename, mode)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]:
> StreamHandler.__init__(self, self._open())
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 925, in _open
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: stream =
> open(self.baseFilename, self.mode)
> May 18 16:26:19 microcloud21 ovirt-ha-broker[2429]: IOError:
> [Errno 6] No such device or address: '/dev/stdout'
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: Traceback (most
> recent call last):
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/share/ovirt-hosted-engine-ha/ovirt-ha-agent", line 25, in
> 
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: agent.Agent().run()
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 77, in run
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> self._initialize_logging(options.daemon)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib/python2.7/site-packages/ovirt_hosted_engine_ha/agent/agent.py",
> line 159, in _initialize_logging
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> level=logging.DEBUG)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 1529, in basicConfig
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: hdlr =
> FileHandler(filename, mode)
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 902, in __init__
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]:
> StreamHandler.__init__(self, self._open())
> May 18 16:26:19 microcloud21 ovirt-ha-agent[2430]: File
> "/usr/lib64/python2.7/logging/__init__.py", line 

[ovirt-users] thin provisioned vm suspend

2016-05-19 Thread Dobó László

Hi,

I have an annoying problem with thin provisioned VMs, which are on
iSCSI LVM.
When I'm copying big files, qemu often suspends the VM while vdsm is
extending the volume ("VM test2 has been paused due to no Storage space
error").
Can I tune this free-space detection behavior somehow? It would be good
to start the lvextend much earlier.
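
(Partially answering myself: /etc/vdsm/vdsm.conf has two [irs] options that
look related - I haven't confirmed this is the supported way to tune it, so
the values are only an illustration:

[irs]
# lower values make vdsm request an extension earlier (default 50)
volume_utilization_percent = 25
# how much to grow the LV on each extension, in MB (default 1024)
volume_utilization_chunk_mb = 2048

followed by restarting vdsmd on the hosts.)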


Regards,
enax

