Re: how to use "virsh shutdown domain --mode [initctl|signal|paravirt) ?

2022-06-01 Thread Lentes, Bernd


- On Jun 1, 2022, at 12:25 PM, Peter Krempa pkre...@redhat.com wrote:

> On Wed, Jun 01, 2022 at 12:05:58 +0200, Lentes, Bernd wrote:
>> Hi,
>> 
>> occasionally my virtual domains running on a pacemaker cluster don't shut down,
>> although being told to do so.
>> "virsh help shutdown" says:
>>  ...
>> --mode   shutdown mode: acpi|agent|initctl|signal|paravirt
>> 
>> How is it possible to use initctl or signal or paravirt ?
>> What do i have to do ? What are the prerequisites ?
> 
> I presume you use qemu/kvm as virt, right? 

yes.

> In such case only 'acpi' and
> 'agent' are available.
> 

OK. Interesting.

> 'initctl'/'signal' is meant for LXC containers, and
> 'paravirt' is a mode available for the XEN hypervisor
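
For qemu/kvm the two usable modes can be illustrated like this (a minimal
sketch; the domain name "dom" is a placeholder, and --mode agent assumes the
guest runs qemu-guest-agent and the domain has a guest-agent channel):

  virsh shutdown dom --mode acpi        # inject an ACPI power-button event; the guest OS must react to it
  virsh shutdown dom --mode agent       # ask qemu-guest-agent inside the guest to shut down
  virsh shutdown dom --mode agent,acpi  # should try the agent first and fall back to ACPI if it is not usable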

Thanks.

Bernd



how to use "virsh shutdown domain --mode [initctl|signal|paravirt) ?

2022-06-01 Thread Lentes, Bernd
Hi,

occasionally my virtual domains running on a pacemaker cluster don't shut down,
although being told to do so.
"virsh help shutdown" says:
 ...
--mode   shutdown mode: acpi|agent|initctl|signal|paravirt

How is it possible to use initctl or signal or paravirt ?
What do i have to do ? What are the prerequisites ?
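
As a rough sketch of the prerequisites for --mode agent (names are examples,
package and channel details differ per distribution): the guest needs the
qemu-guest-agent service running, and the domain XML needs a virtio-serial
channel with target name org.qemu.guest_agent.0. Whether the agent answers can
be checked from the host:

  virsh dumpxml mydomain | grep -A3 guest_agent                  # look for the agent channel
  virsh qemu-agent-command mydomain '{"execute":"guest-ping"}'   # returns {"return":{}} if the agent answers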

Bernd

-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 




how to reliably shutdown domains ?

2022-03-08 Thread Lentes, Bernd
Hey guys,

i have a two-node cluster with around 20 domains. Cluster-Software is pacemaker 
and corosync, OS is SLES 12 SP5.
The scripts for starting/stopping the domains use virsh. Is there a way to 
reliably shutdown the domains via virsh ?
I'm testing around, but sometimes the domains stop, sometimes they don't, and
sometimes it takes so long that the cluster times out
and fences the respective node.
I'm using "virsh shutdown domain --mode acpi,agent" for the Windows domains
(not reliable) and "virsh shutdown domain" for the Linux domains,
also not reliable.

What can i do ?
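
One pattern that can make the stop more deterministic is to chain the modes and
only destroy as a last resort (a sketch only; $DOM and the timeout are
placeholders and must stay well below the cluster's stop timeout):

  virsh shutdown "$DOM" --mode agent,acpi
  for i in $(seq 1 60); do
      sleep 2
      virsh domstate "$DOM" | grep -q "shut off" && break
  done
  virsh domstate "$DOM" | grep -q "shut off" || virsh destroy "$DOM"   # last resort, like pulling the power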

Bernd

-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 




how can i measure reliably the speed of a virtual disk ?

2021-10-25 Thread Lentes, Bernd
Hi,

we have some domains running on a two-node pacemaker cluster. The disks for the 
domains are raw files
which reside on a SAN.
I measured the speed of one domain inside the guest and got completely weird 
results:
between 18MB/s and 750MB/s !?!

I measured with hdparm -t.
I think this is the cause: from the host's point of view some data is on
the disk and some data is in the cache, and the -t i used for hdparm only
bypasses caching inside the guest, not on the host.

Measurements on the host give reliable and reasonable results of about 350 MByte/s.

How can i reliably measure the speed of a virtual disk ?
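
A way to take the caches out of the picture is to benchmark with direct I/O (a
sketch; device and file names are examples, and the numbers are only honest if
the virtual disk itself is configured with cache='none'):

  # inside the guest: sequential write and read with the page cache bypassed
  dd if=/dev/zero of=/root/testfile bs=1M count=1024 oflag=direct conv=fdatasync
  dd if=/dev/vda of=/dev/null bs=1M count=1024 iflag=direct

  # fio gives more repeatable numbers than dd or hdparm
  fio --name=seqread --filename=/root/testfile --rw=read --bs=1M --size=1G \
      --direct=1 --ioengine=libaio --runtime=30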

Bernd

-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 




is there a way to stop a domain in 'D' process state (uninterruptible) ?

2021-10-21 Thread Lentes, Bernd
Hi,

how can i stop/shutdown a domain which is in process state 'D' ?
'D' means uninterruptible, and a process in 'D' can't be terminated by kill,
not even with kill -9.
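
It can help to see what the qemu process is actually blocked on (a sketch; the
process name qemu-kvm matches the setup described elsewhere in these mails, on
other distributions it may be qemu-system-x86_64):

  ps -o pid,stat,wchan:32,args -C qemu-kvm   # wchan shows the kernel function the process sleeps in

A 'D' state usually points at stuck I/O (for example an unreachable SAN path),
and the process only becomes killable again once that I/O completes or fails.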

Bernd

-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 




virsh dommemstat doesn't update its information

2021-03-29 Thread Lentes, Bernd
Hi,

i'm playing a bit around with my domains and the balloon driver.
To get information about ballooning i use virsh dommemstat.
But i only get very little information:

virsh # dommemstat vm_idcc_devel
actual 1044480
last_update 0
rss 1030144

Also configuring "dommemstat --domain vm_idcc_devel --period 5 --live"
or "dommemstat --domain vm_idcc_devel --period 5 --current" does neither update 
nor extend the information.

In vm_idcc_devel virtio_balloon is loaded:
idcc-devel:~ # lsmod|grep balloon
virtio_balloon 22788  0

Guest OS is SLES 10 SP4. Is that too old ?
Host OS is SLES 12 SP5.
There are other domains in which the information is updated.
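
For comparison, on a guest whose balloon driver reports statistics the same
commands typically look like this (a sketch; the exact set of fields depends on
the guest kernel and the libvirt/qemu versions):

  virsh dommemstat vm_idcc_devel --period 5 --live   # ask the balloon driver to report every 5 s
  virsh dommemstat vm_idcc_devel
  # a stats-capable guest adds fields such as swap_in, swap_out, major_fault,
  # minor_fault, unused and available next to actual, rss and last_update

That only actual/rss/last_update show up suggests the guest's virtio_balloon
driver predates the memory-statistics feature.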
Here is the config from vm_idcc_devel:

virsh # dumpxml vm_idcc_devel

  [The XML markup of this dumpxml output was stripped by the archive; only the
  element text survives. Recoverable values: name vm_idcc_devel, uuid
  4993009b-42ff-45d9-b1e0-145b8c0c8f82, memory 2044928 KiB, currentMemory
  1044480 KiB, 1 vCPU, hvm machine type, lifecycle destroy/restart/destroy,
  emulator /usr/bin/qemu-kvm.]




Bernd


-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 




Re: how to check a virtual disk

2021-03-29 Thread Lentes, Bernd


- On Mar 29, 2021, at 2:09 PM, Peter Krempa pkre...@redhat.com wrote:

> On Mon, Mar 29, 2021 at 13:59:11 +0200, Lentes, Bernd wrote:
>> 
>> - On Mar 29, 2021, at 12:58 PM, Bernd Lentes
>> bernd.len...@helmholtz-muenchen.de wrote:
> 
> [...]
> 
>> 
>> > 
>> 
>> I forgot:
>> host is SLES 12 SP5, virtual domain too.
>> The image file is in raw format.
> 
> Please always attach the VM config XMLs, so that we don't have to guess
> how your disks are configured.




[The XML markup of this domain config was stripped by the archive; only the
element text survives. Recoverable values: name vm_geneious, uuid
7337ee89-1699-470f-95c4-05ee19203847, memory 8192000 KiB, currentMemory
8192000 KiB, 2 vCPUs, hvm machine type, lifecycle destroy/restart/destroy,
emulator /usr/bin/qemu-kvm, rng backend /dev/urandom.]





Re: how to check a virtual disk

2021-03-29 Thread Lentes, Bernd

- On Mar 29, 2021, at 12:58 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

> Hi,
> 
> we have a two-node cluster with pacemaker and a SAN.
> The resources are inside virtual domains.
> The images of the virtual disks reside on the SAN.
> On one domain i have errors from the hd in my log:
> 
> 2021-03-24T21:02:28.416504+01:00 geneious kernel: [2159685.909613] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:02:46.505323+01:00 geneious kernel: [2159704.012213] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:02:55.573149+01:00 geneious kernel: [2159713.078560] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:03:23.702946+01:00 geneious kernel: [2159741.202546] JBD2:
> Detected IO errors while flushing file data on dm-1-8
> 2021-03-24T21:03:30.289606+01:00 geneious kernel: [2159747.796192] 
> [
> cut here ]
> 2021-03-24T21:03:30.289635+01:00 geneious kernel: [2159747.796207] WARNING: 
> CPU:
> 0 PID: 457 at ../fs/buffer.c:1108 mark_buffer_dirty+0xe8/0x100
> 2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796208] Modules
> linked in: st sr_mod cdrom lp parport_pc ppdev parport xfrm_user xfrm_algo
> binfmt_misc uinput nf_log_ipv6 xt_comme
> nt nf_log_ipv4 nf_log_common xt_LOG xt_limit af_packet iscsi_ibft
> iscsi_boot_sysfs ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ipt_REJECT
> xt_pkttype xt_tcpudp iptable_filter ip6table_mangl
> e nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_ipv4
> nf_defrag_ipv4 ip_tables xt_conntrack nf_conntrack libcrc32c ip6table_filter
> ip6_tables x_tables joydev virtio_net net_fai
> lover failover virtio_balloon i2c_piix4 qemu_fw_cfg pcspkr button ext4 crc16
> jbd2 mbcache ata_generic hid_generic usbhid ata_piix sd_mod virtio_rng ahci
> floppy libahci serio_raw ehci_pci bo
> chs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm
> uhci_hcd ehci_hcd usbcore virtio_pci
> 2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796374]
> drm_panel_orientation_quirks libata dm_mirror dm_region_hash dm_log sg
> dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_
> dh_alua scsi_mod autofs4 [last unloaded: parport_pc]
> 2021-03-24T21:03:30.289643+01:00 geneious kernel: [2159747.796400] Supported:
> Yes
> 2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796405] CPU: 0 PID:
> 457 Comm: jbd2/dm-0-8 Not tainted 4.12.14-122.57-default #1 SLE12-SP5
> 2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796406] Hardware
> name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
> rel-1.12.0-0-ga698c89-rebuilt.suse.com 04/01/2014
> 2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796407] task:
> 8ba32766c380 task.stack: 99954124c000
> 2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796409] RIP:
> 0010:mark_buffer_dirty+0xe8/0x100
> 2021-03-24T21:03:30.289646+01:00 geneious kernel: [2159747.796409] RSP:
> 0018:99954124fcf0 EFLAGS: 00010246
> 2021-03-24T21:03:30.289650+01:00 geneious kernel: [2159747.796413] RAX:
> 00a20828 RBX: 8ba209a58d90 RCX: 8ba3292d7958
> 2021-03-24T21:03:30.289651+01:00 geneious kernel: [2159747.796413] RDX:
> 8ba209a585b0 RSI: 8ba24270b690 RDI: 8ba3292d7958
> 2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796414] RBP:
> 8ba3292d7958 R08: 8ba209a585b0 R09: 0001
> 2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796415] R10:
> 8ba328c1c0b0 R11: 8ba287805380 R12: 8ba3292d795a
> 2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796415] R13:
>  R14: 8ba3292d7958 R15: 8ba209a58d90
> 2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796417] FS:
> () GS:8ba333c0() knlGS:
> 2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796417] CS:  0010 
> DS:
>  ES:  CR0: 80050033
> 2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796418] CR2:
> 99bff000 CR3: 000101b06000 CR4: 06f0
> 2021-03-24T21:03:30.289655+01:00 geneious kernel: [2159747.796424] Call Trace:
> 2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796470]
> __jbd2_journal_refile_buffer+0xbb/0xe0 [jbd2]
> 2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796479]
> jbd2_journal_commit_transaction+0xf1a/0x1870 [jbd2]
> 2021-03-24T21:03:30.289657+01:00 geneious kernel: [2159747.796489]  ?
> __switch_to_asm+0x41/0x70
> 2021-03-24T21:03:30.289658+01:00 geneious kernel: [2159747.796490]  ?
> __switch_to_asm+0x35/0x70
> 2021-03-24T21:03:30.289662+01:00 geneious kernel: [2159747.796493]
> kjournald2+0xbb/0x230 [jbd2]
> 2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796499]  ?
> wait_woken+0x80/0x80
> 2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796503]
> kthread+0xf6/0x130
> 2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796508]  ?
> commit_time

how to check a virtual disk

2021-03-29 Thread Lentes, Bernd
Hi,

we have a two-node cluster with pacemaker and a SAN.
The resources are inside virtual domains.
The images of the virtual disks reside on the SAN.
On one domain i have errors from the hd in my log:

2021-03-24T21:02:28.416504+01:00 geneious kernel: [2159685.909613] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:02:46.505323+01:00 geneious kernel: [2159704.012213] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:02:55.573149+01:00 geneious kernel: [2159713.078560] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:03:23.702946+01:00 geneious kernel: [2159741.202546] JBD2: 
Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:03:30.289606+01:00 geneious kernel: [2159747.796192] 
[ cut here ]
2021-03-24T21:03:30.289635+01:00 geneious kernel: [2159747.796207] WARNING: 
CPU: 0 PID: 457 at ../fs/buffer.c:1108 mark_buffer_dirty+0xe8/0x100
2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796208] Modules 
linked in: st sr_mod cdrom lp parport_pc ppdev parport xfrm_user xfrm_algo 
binfmt_misc uinput nf_log_ipv6 xt_comme
nt nf_log_ipv4 nf_log_common xt_LOG xt_limit af_packet iscsi_ibft 
iscsi_boot_sysfs ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ipt_REJECT 
xt_pkttype xt_tcpudp iptable_filter ip6table_mangl
e nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_ipv4 
nf_defrag_ipv4 ip_tables xt_conntrack nf_conntrack libcrc32c ip6table_filter 
ip6_tables x_tables joydev virtio_net net_fai
lover failover virtio_balloon i2c_piix4 qemu_fw_cfg pcspkr button ext4 crc16 
jbd2 mbcache ata_generic hid_generic usbhid ata_piix sd_mod virtio_rng ahci 
floppy libahci serio_raw ehci_pci bo
chs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm 
uhci_hcd ehci_hcd usbcore virtio_pci
2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796374]  
drm_panel_orientation_quirks libata dm_mirror dm_region_hash dm_log sg 
dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_
dh_alua scsi_mod autofs4 [last unloaded: parport_pc]
2021-03-24T21:03:30.289643+01:00 geneious kernel: [2159747.796400] Supported: 
Yes
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796405] CPU: 0 PID: 
457 Comm: jbd2/dm-0-8 Not tainted 4.12.14-122.57-default #1 SLE12-SP5
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796406] Hardware 
name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
rel-1.12.0-0-ga698c89-rebuilt.suse.com 04/01/2014
2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796407] task: 
8ba32766c380 task.stack: 99954124c000
2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796409] RIP: 
0010:mark_buffer_dirty+0xe8/0x100
2021-03-24T21:03:30.289646+01:00 geneious kernel: [2159747.796409] RSP: 
0018:99954124fcf0 EFLAGS: 00010246
2021-03-24T21:03:30.289650+01:00 geneious kernel: [2159747.796413] RAX: 
00a20828 RBX: 8ba209a58d90 RCX: 8ba3292d7958
2021-03-24T21:03:30.289651+01:00 geneious kernel: [2159747.796413] RDX: 
8ba209a585b0 RSI: 8ba24270b690 RDI: 8ba3292d7958
2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796414] RBP: 
8ba3292d7958 R08: 8ba209a585b0 R09: 0001
2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796415] R10: 
8ba328c1c0b0 R11: 8ba287805380 R12: 8ba3292d795a
2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796415] R13: 
 R14: 8ba3292d7958 R15: 8ba209a58d90
2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796417] FS:  
() GS:8ba333c0() knlGS:
2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796417] CS:  0010 
DS:  ES:  CR0: 80050033
2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796418] CR2: 
99bff000 CR3: 000101b06000 CR4: 06f0
2021-03-24T21:03:30.289655+01:00 geneious kernel: [2159747.796424] Call Trace:
2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796470]  
__jbd2_journal_refile_buffer+0xbb/0xe0 [jbd2]
2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796479]  
jbd2_journal_commit_transaction+0xf1a/0x1870 [jbd2]
2021-03-24T21:03:30.289657+01:00 geneious kernel: [2159747.796489]  ? 
__switch_to_asm+0x41/0x70
2021-03-24T21:03:30.289658+01:00 geneious kernel: [2159747.796490]  ? 
__switch_to_asm+0x35/0x70
2021-03-24T21:03:30.289662+01:00 geneious kernel: [2159747.796493]  
kjournald2+0xbb/0x230 [jbd2]
2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796499]  ? 
wait_woken+0x80/0x80
2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796503]  
kthread+0xf6/0x130
2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796508]  ? 
commit_timeout+0x10/0x10 [jbd2]
2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796510]  ? 
kthread_bind+0x10/0x10
2021-03-24T21:03:30.289665+01:00 geneious kernel: [2159747.796511]  
ret_from_fork+0x35/0x40
2021-03-24T21:0
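
A safe way to check such an image from the host is to do it while the domain is
shut off and to map the partitions read-only (a sketch; the image path, domain
name and filesystem type are examples for this setup):

  virsh domstate geneious                             # make sure it reports "shut off" first
  kpartx -av /var/lib/libvirt/images/geneious.img     # maps the partitions to /dev/mapper/loop0p*
  fsck.ext4 -n /dev/mapper/loop0p1                    # -n: report only, change nothing
  kpartx -dv /var/lib/libvirt/images/geneious.img     # remove the mappings again

Running fsck against a disk that a running guest is still using would corrupt
it, which is why the shut-off check matters.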

Access to an OCFS2 Partition outside the VM

2021-03-18 Thread Lentes, Bernd
Hi,

i'm thinking about installing two Ubuntu versions on one pc, one as a host 
system with KVM, the other in a virtual domain.
I need access from both to several OCFS2 partitions with a big amount of data.
The partitions are on hard disks attached to that PC.
Is it possible to access an OCFS2 partition which is outside the domain ?
And if yes, how can i do that ?
I know that i have to configure OCFS2 and that i need DLM.
Both systems have network access, so that shouldn't be the problem.
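
In principle this works by giving the guest the same block device the host sees
and letting host and guest form an OCFS2/DLM cluster over the network. A sketch
of the disk part (domain and device names are examples; the cache must be
'none' and the disk marked shareable so that both sides can mount it):

  virsh attach-disk ubuntu-guest /dev/sdb vdb --cache none --mode shareable --persistent

The OCFS2 cluster configuration (o2cb or pacemaker/DLM) then has to list both
the host and the guest as nodes.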


Bernd

-- 

Bernd Lentes 
System Administrator 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd




Re: Is it possible that "virsh destroy" does not stop a domain ?

2020-10-08 Thread Lentes, Bernd



- On Oct 7, 2020, at 7:26 PM, Peter Crowther peter.crowt...@melandra.com 
wrote:

> Bernd, another option would be a mismatch between the message that "virsh
> destroy" issues and the message that force_stop() in the pacemaker agent
> expects to receive. Pacemaker is trying to determine the success or failure of
> the destroy based on the concatenation of the text of the exit code and the
> text output by virsh; if either of those have changed between virsh versions,
> and especially if virsh destroy ever exits with a status other than zero, then
> you'll get that OCF error.

> Do you know what $VIRSH_OPTIONS ends up as in your Pacemaker config,
> particularly whether --graceful is specified?

> Cheers,

> - Peter


Hi Peter,

that means in the end that with "virsh destroy" i can't be 100% sure that a 
domain is stopped.
Is there another way ?

Bernd




Is it possible that "virsh destroy" does not stop a domain ?

2020-10-07 Thread Lentes, Bernd
Hi,

Is it possible that "virsh destroy" does not stop a domain ?
I'm asking because i have some domains running in a two-node HA cluster
(pacemaker).
And sometimes one node gets fenced (killed) because it couldn't stop a domain.
That's very ugly.

This is also the reason why i asked before what "virsh destroy" really does.
IIRC a kill -9 can't terminate a process which is in "D" state (uninterruptible
sleep).
So if the process of the domain is in "D" state, it can't be killed. Right ?

Pacemaker tries to shut down or destroy a domain with a resource agent, which is
a shell script, similar
to an init script.

Here is an excerpt from the resource agent for virtual domains:

force_stop()
{
    local out ex translate
    local status=0

    ocf_log info "Issuing forced shutdown (destroy) request for domain ${DOMAIN_NAME}."
    out=$(LANG=C virsh $VIRSH_OPTIONS destroy ${DOMAIN_NAME} 2>&1)   # <--- this is where the domain gets destroyed
    ex=$?
    translate=$(echo $out|tr 'A-Z' 'a-z')
    echo >&2 "$translate"
    case $ex$translate in
        *"error:"*"domain is not running"*|*"error:"*"domain not found"*|\
        *"error:"*"failed to get domain"*)
            : ;; # unexpected path to the intended outcome, all is well -> success
        [!0]*)
            ocf_exit_reason "forced stop failed"   # <--- a failing destroy seems to be possible
            return $OCF_ERR_GENERIC ;;
        0*)
            while [ $status != $OCF_NOT_RUNNING ]; do
                VirtualDomain_status
                status=$?
            done ;;
    esac
    return $OCF_SUCCESS
}

The function force_stop is responsible for stopping/destroying the domain.
And it takes care of a non-working "virsh destroy".
Is there a developer who can explain what "virsh destroy" really does ?
Or is there another ML for the developers ?

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

stay healthy




Re: time in domain very unstable

2020-10-07 Thread Lentes, Bernd



- On Oct 6, 2020, at 10:52 PM, Jim Fehlig jfeh...@suse.com wrote:

> On 10/6/20 7:55 AM, Lentes, Bernd wrote:
>> Hi,
>> 
>> i have a domain (SLES 10 SP4) running with KVM.
> 
> Wow, that's old! I'm surprised time keeping is your only problem :-).

It is indeed the only problem.
> 
>> Time is very wrong when booting unless ntp synchronizes.
>> 
>> What can i do ?
> 
> What type of <timer> are you using? Have you tried the common ones? hpet, pit,
> rtc?
> 
> Regards,
> Jim

Hi Jim,

that's all i find in the config:

  [the relevant XML line (presumably the <clock> element) was stripped by the archive]


Bernd
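
If the guest has no explicit timer configuration, it can be worth trying the
common settings Jim mentions (a sketch only, to be adapted via "virsh edit";
the tickpolicy values are the usual choices for Linux guests, not something
taken from this domain):

  virsh edit <domain>
  # then, inside <domain> ... </domain>:
  #   <clock offset='utc'>
  #     <timer name='rtc' tickpolicy='catchup'/>
  #     <timer name='pit' tickpolicy='delay'/>
  #     <timer name='hpet' present='no'/>
  #   </clock>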



time in domain very unstable

2020-10-06 Thread Lentes, Bernd
Hi,

i have a domain (SLES 10 SP4) running with KVM.
Time is very wrong when booting unless ntp synchronizes.

What can i do ?

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

stay healthy




Re: what does "virsh destroy" really ?

2020-10-06 Thread Lentes, Bernd



- On Oct 6, 2020, at 1:12 AM, Digimer li...@alteeve.ca wrote:

> On 2020-10-05 6:04 p.m., Lentes, Bernd wrote:
>> Hi,
>> 
>> what does "virsh destroy" with the domain ? Send a kill -9 to the process ?
>> 
>> Bernd
>> 
> 
> It forces the guest off, like pulling the power on a hardware machine.
> Not sure of the exact mechanism behind the scenes. It does leave the
> server defined and you can reboot it again later (albeit like restoring
> power to a normal machine, so it might need to replay journals, etc).


Hi,

I know what it does, i'd like to know _how_ it does it.
Maybe i have to look in the source code, although i'm not a big code reader and 
much less a code developer.
Where can i find it ?
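
The qemu driver sources are the place to look (a sketch; the repository URL is
the current upstream location, and the function name reflects how recent
libvirt versions are organised, so it may differ slightly in 4.0.0):

  git clone https://gitlab.com/libvirt/libvirt.git
  cd libvirt
  git grep -n "DomainDestroyFlags" src/qemu/    # entry point of "virsh destroy" for qemu/kvm

From there the call chain ends in the process-kill helpers, which send SIGTERM
to the qemu process and escalate to SIGKILL if it does not exit in time; the
--graceful flag suppresses the SIGKILL escalation.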

Bernd



what does "virsh destroy" really ?

2020-10-05 Thread Lentes, Bernd
Hi,

what does "virsh destroy" with the domain ? Send a kill -9 to the process ?

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

stay healthy




Fwd: can't define domain - error: cannot open /dev/null: Operation not permitted

2020-09-21 Thread Lentes, Bernd


> Von: "Lentes, Bernd" 
> Datum: 21. September 2020 um 18:38:48 MESZ
> An: Martin Kletzander 
> Betreff: Aw:  can't define domain - error: cannot open /dev/null: Operation 
> not permitted
> 
> Hi Martin,
> 
> after configuring the logging and a restart of the service the problem
> disappeared !?!
> 
> Bernd
> 
> Bernd Lentes
> 
> > Am 21.09.2020 um 10:02 schrieb Martin Kletzander :
> > 
> > On Sun, Sep 20, 2020 at 01:09:51PM +0200, Lentes, Bernd wrote:
> >> Hi,
> >> 
> > 
> > Hi, I'll start with the usual...
> > 
> >> i have a two-node cluster running on SLES 12 with pacemaker.
> >> The cluster refused to start the domains on one node.
> >> So i took some of the domains out of the cluster and tried to start it 
> >> manually.
> >> This is what happened:
> >> 
> >> virsh # define /mnt/share/vm_documents-oo.xml
> >> error: Failed to define domain from /mnt/share/vm_documents-oo.xml
> >> error: cannot open /dev/null: Operation not permitted
> >> 
> >> Same with another domain.
> >> 
> > 
> > What does the XML look like? What do the logs[0] say?
> > 
> >> On the other node domains are defined and started without problems.
> > 
> > Are the configs (libvirtd.conf and qemu.conf) the same on both nodes?
> > 
> > Have a nice day,
> > Martin
> > 
> > [0] https://libvirt.org/kbase/debuglogs.html
> > 
> >> Permissions on /dev and /dev/null are the same:
> >> 
> >> ha-idg-1:/mnt/share # ll -d /dev
> >> drwxr-xr-x 25 root root 5420 Sep 17 20:47 /dev
> >> ha-idg-1:/mnt/share # ll /dev/null
> >> crw-rw-rw- 1 root root 1, 3 Aug 24 14:39 /dev/null
> >> 
> >> ha-idg-2:/mnt/share # ll -d /dev
> >> drwxr-xr-x 25 root root 5340 Sep 9 10:31 /dev
> >> ha-idg-2:/mnt/share # ll /dev/null
> >> crw-rw-rw- 1 root root 1, 3 Aug 24 15:48 /dev/null
> >> 
> >> ha-idg-1 is the one causing trouble.
> >> Both systems SLES 12 SP4, same patchlevel.
> >> libvirt is:
> >> ha-idg-1:/mnt/share # rpm -qa|grep -i libvirt
> >> libvirt-daemon-driver-storage-iscsi-4.0.0-8.15.2.x86_64
> >> libvirt-libs-4.0.0-8.15.2.x86_64
> >> python-libvirt-python-4.0.0-2.34.x86_64
> >> libvirt-daemon-driver-nwfilter-4.0.0-8.15.2.x86_64
> >> libvirt-glib-1_0-0-0.2.1-1.2.x86_64
> >> typelib-1_0-LibvirtGLib-1_0-0.2.1-1.2.x86_64
> >> libvirt-admin-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-core-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-scsi-4.0.0-8.15.2.x86_64
> >> libvirt-client-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-nodedev-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-logical-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-qemu-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-secret-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-rbd-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-interface-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-network-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-mpath-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-qemu-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-config-network-4.0.0-8.15.2.x86_64
> >> libvirt-daemon-driver-storage-disk-4.0.0-8.15.2.x86_64
> >> 
> >> Any ideas ?
> >> Maybe a restart would help, but it's Linux, not Windows ...
> >> I'd like to understand what's going wrong.
> >> 
> >> Thanks.
> >> 
> >> 
> >> Bernd
> >> -- 
> >> 
> >> Bernd Lentes
> >> Systemadministration
> >> Institute for Metabolism and Cell Death (MCD)
> >> Building 25 - office 122
> >> HelmholtzZentrum München
> >> bernd.len...@helmholtz-muenchen.de
> >> phone: +49 89 3187 1241
> >> phone: +49 89 3187 3827
> >> fax: +49 89 3187 2294
> >> http://www.helmholtz-muenchen.de/mcd
> >> 
> >> stay healthy
> >> 
> >> 


Fwd: can't define domain - error: cannot open /dev/null: Operation not permitted

2020-09-21 Thread Lentes, Bernd


> Von: "Lentes, Bernd" 
> Datum: 21. September 2020 um 18:38:48 MESZ
> An: Martin Kletzander 
> Betreff: Aw:  can't define domain - error: cannot open /dev/null: Operation 
> not permitted
> 




can't define domain - error: cannot open /dev/null: Operation not permitted

2020-09-20 Thread Lentes, Bernd
Hi,

i have a two-node cluster running on SLES 12 with pacemaker.
The cluster refused to start the domains on one node.
So i took some of the domains out of the cluster and tried to start them manually.
This is what happened:

virsh # define /mnt/share/vm_documents-oo.xml
error: Failed to define domain from /mnt/share/vm_documents-oo.xml
error: cannot open /dev/null: Operation not permitted

Same with another domain.

On the other node domains are defined and started without problems.
Permissions on /dev and /dev/null are the same:

ha-idg-1:/mnt/share # ll -d /dev
drwxr-xr-x 25 root root 5420 Sep 17 20:47 /dev
ha-idg-1:/mnt/share # ll /dev/null
crw-rw-rw- 1 root root 1, 3 Aug 24 14:39 /dev/null

ha-idg-2:/mnt/share # ll -d /dev
drwxr-xr-x 25 root root 5340 Sep  9 10:31 /dev
ha-idg-2:/mnt/share # ll /dev/null
crw-rw-rw- 1 root root 1, 3 Aug 24 15:48 /dev/null

ha-idg-1 is the one causing trouble.
Both systems SLES 12 SP4, same patchlevel.
libvirt is:
ha-idg-1:/mnt/share # rpm -qa|grep -i libvirt
libvirt-daemon-driver-storage-iscsi-4.0.0-8.15.2.x86_64
libvirt-libs-4.0.0-8.15.2.x86_64
python-libvirt-python-4.0.0-2.34.x86_64
libvirt-daemon-driver-nwfilter-4.0.0-8.15.2.x86_64
libvirt-glib-1_0-0-0.2.1-1.2.x86_64
typelib-1_0-LibvirtGLib-1_0-0.2.1-1.2.x86_64
libvirt-admin-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-core-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-scsi-4.0.0-8.15.2.x86_64
libvirt-client-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-nodedev-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-logical-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-qemu-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-secret-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-rbd-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-interface-4.0.0-8.15.2.x86_64
libvirt-daemon-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-network-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-mpath-4.0.0-8.15.2.x86_64
libvirt-daemon-qemu-4.0.0-8.15.2.x86_64
libvirt-daemon-config-network-4.0.0-8.15.2.x86_64
libvirt-daemon-driver-storage-disk-4.0.0-8.15.2.x86_64

Any ideas ?
Maybe a restart would help, but it's Linux, not Windows ...
I'd like to understand what's going wrong.
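
Turning on libvirt's debug log usually shows which open() call actually fails
(a sketch; the filter and output values are typical examples, see the debug-log
page at libvirt.org/kbase/debuglogs.html):

  # in /etc/libvirt/libvirtd.conf:
  #   log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util"
  #   log_outputs="1:file:/var/log/libvirt/libvirtd.log"
  systemctl restart libvirtd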

Thanks.


Bernd
-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 25 - office 122 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

stay healthy




changing memory size with virsh setmem - results only visible in domain, not on host

2020-03-06 Thread Lentes, Bernd
Hi,

i have a Linux domain (Ubuntu 14.04) where we would like to be able to change the
amount of usable memory.
We have a balloon device and its statistics are collected at a 5-second interval.
The domain shows the changes very quickly (according to top) when we change the memory
size with setmem, but the host does not show any change in the memory use
of the respective domain.
Is that expected behavior ?

Host is SLES 12 SP4, libvirt is 4.0.0-8.15.2
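
For reference, the balloon's view can be compared with the host's view of the
qemu process (a sketch; the domain name ubuntu1404 is a placeholder):

  virsh setmem ubuntu1404 2097152 --live   # set the balloon target to 2 GiB
  virsh dommemstat ubuntu1404              # "actual" follows the new target almost immediately
  grep VmRSS /proc/$(pgrep -f "guest=ubuntu1404")/status   # host-side RSS of the qemu process

Host-side RSS only moves when qemu actually touches or releases pages: raising
the target allocates nothing until the guest uses the memory, and lowering it
only shrinks RSS once the freed pages have been handed back to the host.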

Bernd
-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt






Re: Virtio-disk with driver from Microsoft from 2006 ?

2020-03-05 Thread Lentes, Bernd



- On Mar 4, 2020, at 7:50 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

> Hi,
> 
> i wanted to benchmark a windows guest, compare standard driver and virtio
> driver.
> I installed the domain first with an IDE disk.
> I followed
> https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows to
> install the virtIO driver.
> In the device manager my VirtIO disk is recognized as a VirtIO SCSI disk from
> RedHat which seems ok for me.
> But the driver is, according to the device manager, a driver from Microsoft from
> 21/06/2006, version 10.0.18362.1.
> That's strange in my eyes. This driver is outdated and not from RedHat or
> Fedora, what i expected.
> Also updating the driver and pointing to the CD, even the respective folder,
> didn't work.
> It says the driver is the most recent.
> I used the virtIO-ISO 0.1.173, Windows 10 64bit Edition 1903.
> 

Hi,

i found it out by myself. What matters is the driver for the SCSI controller, not
the one for the disk itself.
The one for the controller is from Red Hat and its date is 12/08/2019.

See here: 
https://forum.qnapclub.de/thread/48403-virtio-treiber-welchen-zeitstempel-hat-der-treiber/
 (unfortunately in german)
and here: 
https://www.reddit.com/r/VFIO/comments/9ci0y3/correct_virtio_disk_driver/

Bernd






Virtio-disk with driver from Microsoft from 2006 ?

2020-03-04 Thread Lentes, Bernd
Hi,

i wanted to benchmark a windows guest, compare standard driver and virtio 
driver.
I installed the domain first with an IDE disk.
I followed 
https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows to 
install the virtIO driver.
In the device manager my VirtIO disk is recognized as a VirtIO SCSI disk from 
RedHat which seems ok for me.
But the driver is, according to the device manager, a driver from Microsoft from
21/06/2006, version 10.0.18362.1.
That's strange in my eyes. This driver is outdated and not from RedHat or 
Fedora, what i expected.
Also updating the driver and pointing to the CD, even the respective folder, 
didn't work.
It says the driver is the most recent.
I used the virtIO-ISO 0.1.173, Windows 10 64bit Edition 1903.

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt






Re: can hotplug vcpus to running Windows 10 guest, but not unplug

2020-02-17 Thread Lentes, Bernd



- On Feb 15, 2020, at 12:47 AM, Marc Roos m.r...@f1-outsourcing.eu wrote:

> Would you mind sharing your xml? I have strange high host load on idle
> windows guest/domain


  [The XML markup of this domain config was stripped by the archive; only the
  element text survives. Recoverable values: name pathway, uuid
  8235e5ae-0756-4286-5407-9fa02d372046, title "Pathway Studio Dietrich",
  memory 16146944 KiB, currentMemory 4147456 KiB, 4 vCPUs, hvm machine type,
  lifecycle destroy/restart/destroy, emulator /usr/bin/qemu-kvm.]






Re: can hotplug vcpus to running Windows 10 guest, but not unplug

2020-02-14 Thread Lentes, Bernd



- On Feb 14, 2020, at 4:13 PM, Peter Krempa pkre...@redhat.com wrote:

> 
> Sounds like qemu doesn't support unplug of vcpus. Which version of qemu
> do you use?

ha-idg-2:~ # rpm -qa|grep qemu
qemu-seabios-1.11.0-5.18.1.noarch
qemu-ovmf-x86_64-2017+git1510945757.b2662641d5-3.16.1.noarch
qemu-block-curl-2.11.2-5.18.1.x86_64
qemu-sgabios-8-5.18.1.noarch
qemu-x86-2.11.2-5.18.1.x86_64
libvirt-daemon-qemu-4.0.0-8.15.2.x86_64
qemu-ipxe-1.0.0+-5.18.1.noarch
qemu-tools-2.11.2-5.18.1.x86_64
qemu-vgabios-1.11.0-5.18.1.noarch
qemu-block-ssh-2.11.2-5.18.1.x86_64
qemu-kvm-2.11.2-5.18.1.x86_64
qemu-2.11.2-5.18.1.x86_64
libvirt-daemon-driver-qemu-4.0.0-8.15.2.x86_64
qemu-block-rbd-2.11.2-5.18.1.x86_64

I found a table on 
https://access.redhat.com/documentation/en-us/red_hat_virtualization/4.3/html/virtual_machine_management_guide/cpu_hot_plug
saying that hotplugging is possible but not hot-unplugging.
But i don't know how recent this information is and whether RedHat uses libvirt/qemu there.

Bernd






can hotplug vcpus to running Windows 10 guest, but not unplug

2020-02-14 Thread Lentes, Bernd
Hi,

i'm playing a bit around with vcpus.
My guest is Windows 10 1903.
This is the excerpt from the config:
 ...
 [the <vcpu> and <vcpus> elements were stripped by the archive; only the value 4
 survives, presumably the maximum vcpu count]
 ...

I'm able to hotplug vcpus, but when i want to unplug them i get the following:

virsh # setvcpus pathway 3 --live
virsh # setvcpus pathway 4 --live
virsh # setvcpus pathway 2 --live
error: internal error: unable to execute QEMU command 'device_del': acpi: 
device unplug request for not supported device type: qemu64-x86_64-cpu

Does anyone know why it can't be unplugged ?
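
For comparison, with libvirt 4.0 the vcpus can also be addressed individually
(a sketch; whether unplug actually succeeds still depends on the vcpus having
been added as hotpluggable and on qemu supporting unplug for the chosen CPU
model):

  virsh setvcpus pathway 4 --live --hotpluggable   # vcpus added this way are marked hot(un)pluggable
  virsh vcpucount pathway                          # show the current/maximum counts
  virsh setvcpu pathway 3 --disable --live         # try to offline vcpu id 3 again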

Thanks.


Brtnf

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt






Re: problems with understanding of the memory parameters in the xml file

2020-02-13 Thread Lentes, Bernd
- On Feb 12, 2020, at 8:34 AM, Peter Krempa pkre...@redhat.com wrote:

> to briefly summarize what those three knobs do:
> 
> 1) memory - this is the initial memory size for a VM. qemu grants this
> amount of memory to the VM on start. This is also the memory the guest
> is able to use if the balloon driver is not loaded as the balloon driver
> voluntarily gives up memory from the guest OS to the host.
> 
> 2) currentmemory - in case the guest is using the balloon driver this is
> the actual memory size it's using. This field is dynamically updated to
> reflect the size reported by the balloon driver
> 
> 3) maxMemory - This knob controls the maximum size of the memory when
> memory hotplug is used. This basically sets the amount of address space
> and memory slots the VM has so that new memory can be plugged in later.

Aaaah.
 
> The above can also be added during runtime e.g. using virsh
> attach-device. Hence hotplug. It can also be unplugged during runtime
> but that requires guest cooperation and there are a few caveats of this.
> Namely to successfully unplug the memory the guest must not write any
> non-movable pages into it so that it can give up the memory later. On
> linux that means that no memory-mapped I/O regions can be created there
> which may lead to weird guest behaviour if the memory is onlined as
> movable. I'm not sure how windows behaves in this regard though, but
> AFAIK it supports memory hotplug just fine.
> 
>> What i find concerning ballooning is that it doesn't work automatically but 
>> has
>> to be adjusted
>> manually. Is that right ?
> 
> No, unfortunately none of this works automatically.
> 
>> Is my idea right, does that work basically ? If yes how do i have to set the
>> parameters ?
>> Is the memory released after the guest has e.g. finished his calculation ?
>> Does that work automatically or do i have to adjust that manually ?
> 
> When using the balloon driver you can set the 'currentMemory' size down
> to some reasonable value and the balloon driver will then return the
> memory to the host. There were some attempts to make this automatic, but
> I don't remember how they went. One other caveat is that any memory
> returned by the balloon driver to the host may be available to the guest
> again e.g. on reboot when the balloon driver is removed.
> 
> For a 1 NUMA node guest the memory hotplug an balloon can theoretically
> be combined but unplugging of the memory might not work while the ballon
> is inflated.
> 
> I hope this clarified it somehwat.

Yes it did.Thanks.
Balloon and memory hotplug are two different things, right ?
Which is better, what are the advantages and disadvantages ?
I played around a bit with ballooning and it worked like a charm.
If i try to use hotplugging and insert "maxMemory" and "memory model='dimm'"
in the config,
libvirt complains that i have to add a "numa" entry.
I don't know much about NUMA, so maybe it's better not to use hotplugging.
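
For reference, a minimal sketch of what the hotplug variant needs (all names
and sizes are examples): a maxMemory limit plus a one-node NUMA topology in the
domain XML, and then a DIMM attached at runtime:

  # in the domain XML (via "virsh edit"):
  #   <maxMemory slots='4' unit='KiB'>16777216</maxMemory>
  #   <cpu> ... <numa><cell id='0' cpus='0' memory='4194304' unit='KiB'/></numa> ... </cpu>

  # dimm.xml:
  #   <memory model='dimm'>
  #     <target>
  #       <size unit='KiB'>1048576</size>
  #       <node>0</node>
  #     </target>
  #   </memory>
  virsh attach-device guest dimm.xml --live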

While reading on the internet i stumbled across KSM for Linux, which is
recommended for the host if you have
several guests with the same OS.
What do you think about it ? 

Btw: is it also possible to add cpu's to guests during runtime ?

Thanks.


Bernd






problems with understanding of the memory parameters in the xml file

2020-02-11 Thread Lentes, Bernd
Hi guys,

despite reading for hours and hours on the internet i'm still struggling with
"memory", "currentmemory" and "maxMemory".

Maybe you can help me to sort it out.

My idea is that a guest has an initial value of memory (which "memory" seems to 
be) when booting.
We have some Windows 10 guests which calculate some stuff and i would like to 
increase memory during runtime
until it reaches a fixed maximum value.
My hope was that a higher "maxMemory" could solve this, that the guest claims 
more memory and gets it.
I didn't get it. Is my idea wrong ? Do i need a balloon driver for that ?
What i find concerning ballooning is that it doesn't work automatically but has 
to be adjusted
manually. Is that right ?
Balloon drivers for windows are available.

Is my idea right, does that work basically ? If yes how do i have to set the 
parameters ?
Is the memory released after the guest has e.g. finished his calculation ?
Does that work automatically or do i have to adjust that manually ?

Thanks.


Bernd

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt






Re: does the guest have a snapshot ?

2020-02-10 Thread Lentes, Bernd



- On Feb 10, 2020, at 8:29 AM, Peter Krempa pkre...@redhat.com wrote:
> virsh blockpull VM vda
> 

Hi Peter,

that did the job.

Thanks.


Bernd






Re: does the guest have a snapshot ?

2020-02-07 Thread Lentes, Bernd



- On Feb 7, 2020, at 3:43 PM, Peter Krempa pkre...@redhat.com wrote:

> On Fri, Feb 07, 2020 at 15:25:22 +0100, Lentes, Bernd wrote:
 ...
> 
> Libvirt is probably lacking the metadata for the snapshot. That is not a
> problem though, because since libvirt doesn't support deletion of
> external snapshots anyways currently you'd need to use the below
> approach anyways.
> 
> virsh blockcommit crispor_1604 vda --active --pivot
> 
> in the case above. that merges the
> file='/var/lib/libvirt/images/crispor_1604/crispor_1604.sn'/
> into file='/var/lib/libvirt/images/crispor_1604/crispor_1604.img'/ and
> finishes the job.
> 
> If you have more complex backing chain you might want to use the --top
> and --base arguments to control which portion to merge as the command
> I've suggested merges everything into the bottom-most image.

Hi Peter,

i'm not lucky:

virsh help blockcommit does not know --active or --pivot.

virsh # help blockcommit

  SYNOPSIS
    blockcommit <domain> <path> [<bandwidth>] [--base <string>] [--shallow]
    [--top <string>] [--delete] [--wait] [--verbose] [--timeout <number>] [--async]

  OPTIONS
    [--domain] <string>     domain name, id or uuid
    [--path] <string>       fully-qualified path of disk
    [--bandwidth] <number>  bandwidth limit in MiB/s
    [--base] <string>       path of base file to commit into (default bottom of
                            chain)
    --shallow               use backing file of top as base
    [--top] <string>        path of top file to commit from (default top of chain)
    --delete                delete files that were successfully committed
    --wait                  wait for job to complete
    --verbose               with --wait, display the progress
    --timeout <number>      with --wait, abort if copy exceeds timeout (in seconds)
    --async                 with --wait, don't wait for cancel to finish

It does not say anything about the device.

I tried:

virsh # blockcommit crispor_1604 /var/lib/libvirt/images/crispor_1604.sn vda 
--wait --verbose
error: bandwidth must be a number

virsh # blockcommit crispor_1604 /var/lib/libvirt/images/crispor_1604.sn --wait 
--verbose
error: invalid argument: No device found for specified path

virsh # blockcommit crispor_1604 crispor_1604.sn --wait --verbose
error: invalid argument: No device found for specified path

virsh # blockcommit crispor_1604 vda --wait --verbose
error: Operation not supported: committing the active layer not supported yet

virsh # blockcommit crispor_1604 crispor_1604.sn --wait --verbose
error: invalid argument: No device found for specified path

virsh # blockcommit crispor_1604 crispor_1604.sn --verbose
error: missing --wait option

virsh # blockcommit crispor_1604 crispor_1604.sn
error: invalid argument: No device found for specified path

virsh # blockcommit crispor_1604 crispor_1604.sn vda
error: bandwidth must be a number

Am i missing something ? Is there an error or is my libvirt version too old ?
If yes, would it be successful to copy the files to a host with a more recent
libvirt, define the domain there and then blockcommit ?

I have libvirt 1.2.5-15.3 (host is SLES 11 SP4).
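
With a libvirt that old the usual workaround is to flatten in the other
direction, i.e. pull the backing file into the active overlay instead of
committing the overlay down (a sketch; this copies the data of the backing
image into the .sn file, which therefore needs enough free space):

  virsh blockpull crispor_1604 vda --wait --verbose
  virsh blockjob crispor_1604 vda --info    # check that no block job is left running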

Bernd






does the guest have a snapshot ?

2020-02-07 Thread Lentes, Bernd
Hi,

i'm currently a bit confused whether a guest has a valid snapshot or not.
This is the xml:
 ...
 [the <disk> element was stripped by the archive; per the lsof output below it
 points at crispor_1604.sn, with crispor_1604.img as backing file]
 ...

both files are currently in access by the respective qemu process.
lsof:
qemu-kvm  19533   root   13u  REG  253,0  29761732608   
12091393 /var/lib/libvirt/images/crispor_1604/crispor_1604.sn
qemu-kvm  19533   root   14r  REG  253,0 111561775513   
44793857 /var/lib/libvirt/images/crispor_1604/crispor_1604.img

Here are both files:
pc60181:/var/lib/libvirt/images/crispor_1604 # ll
 ...
-rw--- 1 root root 111561775513 Oct 22 15:23 crispor_1604.img
-rw-r--r-- 1 root root  29761732608 Feb  7 15:13 crispor_1604.sn

crispor_1604.sn has a recent timestamp.

The snapshot is currently in use:
virsh # domblklist crispor_1604
Target Source

vda/var/lib/libvirt/images/crispor_1604/crispor_1604.sn


But virsh does not show any snapshot:

virsh # snapshot-list crispor_1604
 Name Creation Time State



So i'm a bit confused. Does it have a valid snapshot or not ? How can i find out,
and how can i get rid of it ?
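
The backing relationship can be verified independently of libvirt's snapshot
metadata (a sketch; --backing-chain needs a reasonably recent qemu-img,
otherwise plain "qemu-img info" on each file shows the same thing):

  qemu-img info --backing-chain /var/lib/libvirt/images/crispor_1604/crispor_1604.sn

If the .sn file lists crispor_1604.img as its backing file, the external
snapshot is real even though "snapshot-list" shows no metadata for it.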

Thanks.


Bernd

-- 

Bernd Lentes 
Systemadministration 
Institute for Metabolism and Cell Death (MCD) 
Building 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/mcd

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt






[libvirt-users] experience with balloon memory ?

2019-09-23 Thread Lentes, Bernd
Hi ML,

i'm thinking about using balloon memory for our domains. We have about 15
domains running concurrently,
and i think it might be nice if a domain that requires more RAM grabs it, and if
it doesn't need it anymore, releases it.
But i have no experience with it. So i have some questions:

- is live migration possible with balloon ?
- is it stable ?
- the domain needs an appropriate driver i think ?
- are there drivers for Windows 7 and 10 ?
- are there drivers for Linux, which kernel version do i need ?
- does someone have experience with it ? Is there a kind of "best practise" ?

Thanks.


Bernd


-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt
 


[libvirt-users] Live-Migration not possible: error: operation failed: guest CPU doesn't match specification

2019-09-18 Thread Lentes, Bernd
Hi,

i have a two-node HA-cluster with pacemaker, corosync, libvirt and KVM.
Recently i configured a new VirtualDomain which runs fine, but live migration
does not succeed.
This is the error:

VirtualDomain(vm_snipanalysis)[14322]:  2019/09/18_16:56:54 ERROR: 
snipanalysis: live migration to ha-idg-2 failed: 1
Sep 18 16:56:54 [6970] ha-idg-1   lrmd:   notice: operation_finished:   
vm_snipanalysis_migrate_to_0:14322:stderr [ error: operation failed: guest CPU 
doesn't match specification: missing features: 
fma,movbe,xsave,avx,f16c,rdrand,fsgsbase,bmi1,avx2,smep,bmi2,erms,invpcid,md-clear,xsaveopt,abm
 ]

The two servers are from HP; they are similar, but not identical. Their CPUs are
different: one is "Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz" according to
/proc/cpuinfo, the other is "Intel(R) Xeon(R) CPU X5675 @ 3.07GHz".
Which guest CPU model should I choose so that the guest runs smoothly on each
host ?
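
From what I've read, libvirt can compute the common denominator of both hosts; roughly
like this, as an untested sketch:

# collect the <cpu> elements of "virsh capabilities" from both hosts into one file, e.g. cpus.xml, then:
virsh cpu-baseline cpus.xml

The resulting <cpu> definition should then be usable as a custom CPU model in the domain
XML, so the guest only sees features that exist on both hosts.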

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Prof. Dr. Veronika von Messling
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

[libvirt-users] does virsh have a history with timestamps ?

2019-08-12 Thread Lentes, Bernd
Hi,

I know that virsh has its own history, but are the corresponding timestamps
logged somewhere ?
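
If not, a crude workaround I'm considering is a shell wrapper, so that at least
non-interactive invocations get logged with a timestamp; just an idea, not from the docs:

virsh() { echo "$(date '+%F %T') virsh $*" >> ~/virsh-commands.log; command virsh "$@"; }

Commands typed inside an interactive virsh shell would of course not be captured this way.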


Bernd

-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

Perfekt ist wer keine Fehler macht 
Also sind Tote perfekt
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Prof. Dr. Veronika von Messling
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] blockcommit of domain not successfull

2019-06-14 Thread Lentes, Bernd



- On Jun 14, 2019, at 9:14 AM, Peter Krempa pkre...@redhat.com wrote:

> On Thu, Jun 13, 2019 at 16:01:18 +0200, Lentes, Bernd wrote:
>> 
>> - On Jun 13, 2019, at 1:08 PM, Bernd Lentes
>> bernd.len...@helmholtz-muenchen.de wrote:
>> 
>> I found further information in /var/log/messages for both occurrences:
>> 
>> 2019-06-01T03:05:31.620725+02:00 ha-idg-2 systemd-coredump[14253]: Core 
>> Dumping
>> has been disabled for process 30590 (qemu-system-x86).
>> 2019-06-01T03:05:31.712673+02:00 ha-idg-2 systemd-coredump[14253]: Process 
>> 30590
>> (qemu-system-x86) of user 488 dumped core.
>> 2019-06-01T03:05:32.173272+02:00 ha-idg-2 kernel: [294682.387828] br0: port
>> 4(vnet2) entered disabled state
>> 2019-06-01T03:05:32.177111+02:00 ha-idg-2 kernel: [294682.388384] device 
>> vnet2
>> left promiscuous mode
>> 2019-06-01T03:05:32.177122+02:00 ha-idg-2 kernel: [294682.388391] br0: port
>> 4(vnet2) entered disabled state
>> 2019-06-01T03:05:32.208916+02:00 ha-idg-2 wickedd[2954]: error retrieving tap
>> attribute from sysfs
>> 2019-06-01T03:05:41.395685+02:00 ha-idg-2 systemd-machined[2824]: Machine
>> qemu-31-severin terminated.
>> 
>> 
>> 2019-06-08T05:59:17.502899+02:00 ha-idg-1 systemd-coredump[31089]: Core 
>> Dumping
>> has been disabled for process 19489 (qemu-system-x86).
>> 2019-06-08T05:59:17.523050+02:00 ha-idg-1 systemd-coredump[31089]: Process 
>> 19489
>> (qemu-system-x86) of user 489 dumped core.
>> 2019-06-08T05:59:17.650334+02:00 ha-idg-1 kernel: [999258.577132] br0: port
>> 9(vnet7) entered disabled state
>> 2019-06-08T05:59:17.650354+02:00 ha-idg-1 kernel: [999258.578103] device 
>> vnet7
>> left promiscuous mode
>> 2019-06-08T05:59:17.650355+02:00 ha-idg-1 kernel: [999258.578108] br0: port
>> 9(vnet7) entered disabled state
>> 2019-06-08T05:59:25.983702+02:00 ha-idg-1 systemd-machined[1383]: Machine
>> qemu-205-severin terminated.
>> 
>> Core Dumping is disabled, but nevertheless a core dump has been created ?
>> Where could i find it ?
>> Would it be useful to provide it ?
> 
> So this really hints to qemu crashing. It certainly will be beneficial
> to collect the backtrace, but you really should report this (including
> the error message from the vm log file) to the qemu team.
> 
> They might have even fixed it by now, so a plain update might help.

Hi Peter,

thanks for your help. I'll continue on the Qemu ML:
https://lists.nongnu.org/archive/html/qemu-discuss/2019-06/msg00014.html


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Prof. Dr. Veronika von Messling
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] blockcommit of domain not successfull

2019-06-13 Thread Lentes, Bernd


- On Jun 13, 2019, at 1:08 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

I found further information in /var/log/messages for both occurrences:

2019-06-01T03:05:31.620725+02:00 ha-idg-2 systemd-coredump[14253]: Core Dumping 
has been disabled for process 30590 (qemu-system-x86).
2019-06-01T03:05:31.712673+02:00 ha-idg-2 systemd-coredump[14253]: Process 
30590 (qemu-system-x86) of user 488 dumped core.
2019-06-01T03:05:32.173272+02:00 ha-idg-2 kernel: [294682.387828] br0: port 
4(vnet2) entered disabled state
2019-06-01T03:05:32.177111+02:00 ha-idg-2 kernel: [294682.388384] device vnet2 
left promiscuous mode
2019-06-01T03:05:32.177122+02:00 ha-idg-2 kernel: [294682.388391] br0: port 
4(vnet2) entered disabled state
2019-06-01T03:05:32.208916+02:00 ha-idg-2 wickedd[2954]: error retrieving tap 
attribute from sysfs
2019-06-01T03:05:41.395685+02:00 ha-idg-2 systemd-machined[2824]: Machine 
qemu-31-severin terminated.


2019-06-08T05:59:17.502899+02:00 ha-idg-1 systemd-coredump[31089]: Core Dumping 
has been disabled for process 19489 (qemu-system-x86).
2019-06-08T05:59:17.523050+02:00 ha-idg-1 systemd-coredump[31089]: Process 
19489 (qemu-system-x86) of user 489 dumped core.
2019-06-08T05:59:17.650334+02:00 ha-idg-1 kernel: [999258.577132] br0: port 
9(vnet7) entered disabled state
2019-06-08T05:59:17.650354+02:00 ha-idg-1 kernel: [999258.578103] device vnet7 
left promiscuous mode
2019-06-08T05:59:17.650355+02:00 ha-idg-1 kernel: [999258.578108] br0: port 
9(vnet7) entered disabled state
2019-06-08T05:59:25.983702+02:00 ha-idg-1 systemd-machined[1383]: Machine 
qemu-205-severin terminated.

Core dumping is disabled, but nevertheless a core dump was created ?
Where could I find it ?
Would it be useful to provide it ?


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Prof. Dr. Veronika von Messling
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] blockcommit of domain not successfull

2019-06-13 Thread Lentes, Bernd



- On Jun 13, 2019, at 9:56 AM, Peter Krempa pkre...@redhat.com wrote:


> 
> Thanks for comming back to me with the information.
> 
> Unfortunately this is not a full debug log but I can try to tell you
> what I see here:

I configured libvirtd that way:
ha-idg-1:~ # grep -Ev '^$|#' /etc/libvirt/libvirtd.conf

log_level = 1
log_filters="1:qemu 3:remote 4:event 3:util.json 3:rpc"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
keepalive_interval = -1

That's what I found on https://wiki.libvirt.org/page/DebugLogs .
Isn't that correct ? It should produce informative logfiles.
The other host has exactly the same configuration but produces much bigger
logfiles !?!
I have libvirt-daemon-4.0.0-8.12.1.x86_64.

> 
>> 2019-06-07 20:30:57.170+: 30299: error : qemuMonitorIO:719 : internal 
>> error:
>> End of file from qemu monitor
>> 2019-06-08 03:59:17.690+: 30299: error : qemuMonitorIO:719 : internal 
>> error:
>> End of file from qemu monitor
> 
> So this looks like qemu crashed. Or at least it's the usual symptom we
> get. Is there anything in /var/log/libvirt/qemu/$VMNAME.log?

That's all:
qemu-system-x86_64: block/mirror.c:864: mirror_run: Assertion 
`((&bs->tracked_requests)->lh_first == ((void *)0))' failed.


> 
>> 2019-06-08 03:59:26.145+: 30300: warning : qemuGetProcessInfo:1461 : 
>> cannot
>> parse process status data
>> 2019-06-08 03:59:26.191+: 30303: warning : qemuGetProcessInfo:1461 : 
>> cannot
>> parse process status data
>> 2019-06-08 03:59:56.095+: 27956: warning :
>> qemuDomainObjBeginJobInternal:4865 : Cannot start job (destroy, none) for
>> domain severin; current job is (modify, none) owned by (13061
>> remoteDispatchDomainBlockJobAbort, 0 ) for (38s,
>>  0s)
> 
> And this looks to me as if the Abort job can't be interrupted properly
> while waiting synchronously for the job to finish. This seems to be the
> problem. If the VM indeed crashed there's a problem in job waiting
> apparently.
> 
> I'd still really like to have debug logs in this case to really see what
> happened.

I configured logging as I found on https://wiki.libvirt.org/page/DebugLogs.
What else can I do ?

Bernd

 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Prof. Dr. Veronika von Messling
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] blockcommit of domain not successfull

2019-06-11 Thread Lentes, Bernd


- On Jun 5, 2019, at 4:49 PM, Peter Krempa pkre...@redhat.com wrote:

> On Wed, Jun 05, 2019 at 13:33:49 +0200, Lentes, Bernd wrote:
>> Hi Peter,
>> 
>> thanks for your help.
>> 
>> - On Jun 5, 2019, at 9:27 AM, Peter Krempa pkre...@redhat.com wrote:
> 
> [...]
> 
>> 
>> > 
>> > So that's interresting. Usually assertion failure in qemu leads to
>> > calling abort() and thus the vm would have crashed. Didn't you HA
>> > solution restart it?
>> 
>> No. As said the VM didn't crash. It kept running.
> 
> That's interresting. I hope you manage to reproduce it then.
> 
>>  
>> > At any rate it would be really beneficial if you could collect debug
>> > logs for libvirtd which also contain the monitor interactions with qemu:
>> > 
>> > https://wiki.libvirt.org/page/DebugLogs
>> > 
>> > The qemu assertion failure above should ideally be reported to qemu, but
>> > if you are able to reproduce the problem with libvirtd debug logs
>> > enabled I can extract more useful info from there which the qemu project
>> > would ask you anyways.
>> 
>> I can't reproduce it. It seems to happen accidentally. But i can collect the
>> logs. Do they get very large ?
>> I can contact you the next time it happen. Is that ok for you ?
> 
> Unfortunately they do get very large if there's some monitoring
> gathering stats through libvirt, but it's okay to nuke them prior
> to attempting the block commit, or daily or so.
> 
> Please do contact me if you gather anything interresting.

Hi,

it happened again.
According to the log of my script, the blockcommit of the domain started on the
8th of June at 5:59:09 (UTC+2).
These are the related lines in libvirtd.log:
===
2019-06-07 20:30:57.170+: 30299: error : qemuMonitorIO:719 : internal 
error: End of file from qemu monitor
2019-06-08 03:59:17.690+: 30299: error : qemuMonitorIO:719 : internal 
error: End of file from qemu monitor
2019-06-08 03:59:26.145+: 30300: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-08 03:59:26.191+: 30303: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-08 03:59:56.095+: 27956: warning : 
qemuDomainObjBeginJobInternal:4865 : Cannot start job (destroy, none) for 
domain severin; current job is (modify, none) owned by (13061 
remoteDispatchDomainBlockJobAbort, 0 ) for (38s,
 0s)
2019-06-08 03:59:56.095+: 27956: error : qemuDomainObjBeginJobInternal:4877 
: Timed out during operation: cannot acquire state change lock (held by 
remoteDispatchDomainBlockJobAbort)
2019-06-08 03:59:56.325+: 13060: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-08 03:59:56.372+: 30304: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-08 04:00:26.503+: 13060: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data


Since then the script is stuck.

Thanks for your help.


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Prof. Dr. Veronika von Messling
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] blockcommit of domain not successfull

2019-06-05 Thread Lentes, Bernd



- On Jun 5, 2019, at 4:49 PM, Peter Krempa pkre...@redhat.com wrote:

> On Wed, Jun 05, 2019 at 13:33:49 +0200, Lentes, Bernd wrote:
>> Hi Peter,
>> 
>> thanks for your help.
>> 
>> - On Jun 5, 2019, at 9:27 AM, Peter Krempa pkre...@redhat.com wrote:
> 
> [...]
> 
>> 
>> > 
>> > So that's interresting. Usually assertion failure in qemu leads to
>> > calling abort() and thus the vm would have crashed. Didn't you HA
>> > solution restart it?
>> 
>> No. As said the VM didn't crash. It kept running.
> 
> That's interresting. I hope you manage to reproduce it then.
> 

>> I can't reproduce it. It seems to happen accidentally. But i can collect the
>> logs. Do they get very large ?
>> I can contact you the next time it happen. Is that ok for you ?
> 
> Unfortunately they do get very large if there's some monitoring
> gathering stats through libvirt, but it's okay to nuke them prior
> to attempting the block commit, or daily or so.
> 
> Please do contact me if you gather anything interresting.

Hi,

I followed https://wiki.libvirt.org/page/DebugLogs.
Where do I have to set LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirt_client.log" ?
Also in /etc/libvirt/libvirtd.conf ?
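
My guess is that LIBVIRT_LOG_OUTPUTS is an environment variable for the client process,
not a libvirtd.conf setting, i.e. something like:

LIBVIRT_LOG_OUTPUTS="1:file:/tmp/libvirt_client.log" virsh blockcommit severin /mnt/snap/severin.sn --verbose --active --pivot

while libvirtd itself keeps its log_outputs setting in /etc/libvirt/libvirtd.conf.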

Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] blockcommit of domain not successfull

2019-06-05 Thread Lentes, Bernd
Hi Peter,

thanks for your help.

- On Jun 5, 2019, at 9:27 AM, Peter Krempa pkre...@redhat.com wrote:


>> =
>>  ...
>> 2019-05-31 20:31:34.481+: 4170: error : qemuMonitorIO:719 : internal 
>> error:
>> End of file from qemu monitor
>> 2019-06-01 01:05:32.233+: 4170: error : qemuMonitorIO:719 : internal 
>> error:
>> End of file from qemu monitor
> 
> This message is printed if qemu crashes for some reason and then closes
> the monitor socket unexpectedly.
> 
>> 2019-06-01 01:05:43.804+: 22605: warning : qemuGetProcessInfo:1461 : 
>> cannot
>> parse process status data
>> 2019-06-01 01:05:43.848+: 22596: warning : qemuGetProcessInfo:1461 : 
>> cannot
>> parse process status data
>> 2019-06-01 01:06:11.438+: 26112: warning :
>> qemuDomainObjBeginJobInternal:4865 : Cannot start job (destroy, none) for 
>> doma
>> in severin; current job is (modify, none) owned by (5372
>> remoteDispatchDomainBlockJobAbort, 0 ) for (39s, 0s)
>> 2019-06-01 01:06:11.438+: 26112: error : 
>> qemuDomainObjBeginJobInternal:4877
>> : Timed out during operation: cannot acquire
>> state change lock (held by remoteDispatchDomainBlockJobAbort)
> 
> So this means that the virDomainBlockJobAbort API which is also used for
> --pivot got stuck for some time.
> 
> This is kind of strange if the VM crashed, there might also be a bug in
> the synchronous block job handling, but it's hard to tell from this log.

The VM didn't crash. It kept running.
See "last":
root pts/49   ha-idg-2.scidom. Tue Jun  4 14:02 - 13:18  (23:16)
root pts/47   pc60337.scidom.d Mon Jun  3 15:13   still logged in
reboot   system boot  2.6.4-52-smp Wed May 15 20:19 (20+17:02)
reboot   system boot  2.6.4-52-smp Fri Mar 15 17:38 (81+18:44)
reboot   system boot  2.6.4-52-smp Wed Feb 27 20:29 (15+21:04)

>> The syslog from the domain itself didn't reveal anything, it just continues 
>> to
>> run.
>> The libvirt log from the domains just says:
>> qemu-system-x86_64: block/mirror.c:864: mirror_run: Assertion
>> `((&bs->tracked_requests)->lh_first == ((void *)0))' failed.
> 
> So that's interresting. Usually assertion failure in qemu leads to
> calling abort() and thus the vm would have crashed. Didn't you HA
> solution restart it?

No. As said the VM didn't crash. It kept running.
 
> At any rate it would be really beneficial if you could collect debug
> logs for libvirtd which also contain the monitor interactions with qemu:
> 
> https://wiki.libvirt.org/page/DebugLogs
> 
> The qemu assertion failure above should ideally be reported to qemu, but
> if you are able to reproduce the problem with libvirtd debug logs
> enabled I can extract more useful info from there which the qemu project
> would ask you anyways.

I can't reproduce it; it seems to happen randomly. But I can collect the
logs. Do they get very large ?
I can contact you the next time it happens. Is that OK for you ?


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


[libvirt-users] blockcommit of domain not successfull

2019-06-04 Thread Lentes, Bernd
Hi,

I have several domains running on a 2-node HA-cluster.
Each night I create snapshots of the domains; after copying the consistent raw
file to a CIFS server I blockcommit the changes back into the raw files.
That has been running quite well.
But recently the blockcommit didn't work for one domain.
I create a logfile of the whole procedure:
===
 ...
Sat Jun  1 03:05:24 CEST 2019
 Target   Source
------------------------------------------------
 vdb      /mnt/snap/severin.sn
 hdc      -

/usr/bin/virsh blockcommit severin /mnt/snap/severin.sn --verbose --active 
--pivot
Block commit: [  0 %]Block commit: [ 15 %]Block commit: [ 28 %]Block commit: [ 35 %]Block commit: [ 43 %]Block commit: [ 53 %]Block commit: [ 63 %]Block commit: [ 73 %]Block commit: [ 82 %]Block commit: [ 89 %]Block commit: [ 98 %]Block commit: [100 %]
 Target   Source
------------------------------------------------
 vdb      /mnt/snap/severin.sn
 ...
==

The libvirtd-log says (it's UTC IIRC):
=
 ...
2019-05-31 20:31:34.481+: 4170: error : qemuMonitorIO:719 : internal error: 
End of file from qemu monitor
2019-06-01 01:05:32.233+: 4170: error : qemuMonitorIO:719 : internal error: 
End of file from qemu monitor
2019-06-01 01:05:43.804+: 22605: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:05:43.848+: 22596: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:06:11.438+: 26112: warning : 
qemuDomainObjBeginJobInternal:4865 : Cannot start job (destroy, none) for doma
in severin; current job is (modify, none) owned by (5372 
remoteDispatchDomainBlockJobAbort, 0 ) for (39s, 0s)
2019-06-01 01:06:11.438+: 26112: error : qemuDomainObjBeginJobInternal:4877 
: Timed out during operation: cannot acquire
state change lock (held by remoteDispatchDomainBlockJobAbort)
2019-06-01 01:06:13.976+: 5369: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:06:14.028+: 22596: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:06:44.165+: 5371: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:06:44.218+: 22605: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:07:14.343+: 5369: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:07:14.387+: 22598: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
2019-06-01 01:07:44.495+: 22605: warning : qemuGetProcessInfo:1461 : cannot 
parse process status data
 ...
===
and "cannot parse process status data" continuously until the end of the 
logfile.

The syslog from the domain itself didn't reveal anything; the domain just
continues to run.
The libvirt log of the domain just says:
qemu-system-x86_64: block/mirror.c:864: mirror_run: Assertion 
`((&bs->tracked_requests)->lh_first == ((void *)0))' failed.

Hosts are SLES 12 SP4 with libvirt-daemon-4.0.0-8.9.1.x86_64.


Bernd




-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

[libvirt-users] logging of domains

2019-05-29 Thread Lentes, Bernd
Hi,

recently some of my domains stopped without any obvious reason, and
unfortunately I didn't find the cause.
I'd like to log more information about the domains, so that I have something to
go on the next time this happens.
In /etc/libvirt/libvirtd.conf I have:
log_level = 3
log_outputs="3:file:/var/log/libvirt/libvirtd.log"
which creates enormous log files, but with logrotate and xz I can manage that.
However, I think this only covers libvirtd itself.
The logs for the domains under /var/log/libvirt/qemu are sparse and very small,
with nearly no information.
Is there a way to be more verbose for the domains, so that I may find helpful
information in these logs the next time a domain stops ?
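
What I'm considering, instead of only a global log_level, is to narrow the noise with
filters, similar to what the DebugLogs wiki page suggests; a sketch for /etc/libvirt/libvirtd.conf:

log_level = 3
log_filters="1:qemu 3:remote 4:event 3:util.json 3:rpc"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"

That should at least capture the qemu monitor interaction of each domain in libvirtd.log,
even if the files under /var/log/libvirt/qemu stay small.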

Thanks.


Bernd
-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

[libvirt-users] domain still running although snapshot-file is deleted !?!

2019-05-15 Thread Lentes, Bernd
Hi,

I have a strange situation:
A domain is still running although domblklist points to a snapshot file, and
dumpxml also says the current disk is that snapshot file.
But the file was deleted hours ago, and the domain is still running. I can log
in via ssh, the database and the webserver are still running, and the domain is
performant.
How can that be ?
Also, lsof shows that the file is deleted:
qemu-syst 27007         qemu  15ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
qemu-syst 27007         qemu  16ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
qemu-syst 27007  27288  qemu  15ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
qemu-syst 27007  27288  qemu  16ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
CPU\x200/ 27007  27308  qemu  15ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
CPU\x200/ 27007  27308  qemu  16ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
CPU\x201/ 27007  27309  qemu  15ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
CPU\x201/ 27007  27309  qemu  16ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
vnc_worke 27007  27321  qemu  15ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
vnc_worke 27007  27321  qemu  16ur  REG  254,14  335609856  183599  /mnt/snap/sim.sn (deleted)
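
Since qemu apparently still holds the open file descriptors, I wonder whether an active
blockcommit would still work and fold everything back into the base image before the
process exits; an untested assumption (assuming vdb is the disk in question):

virsh blockcommit sim vdb --active --verbose --pivot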

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] domains paused without any obvious reason

2019-05-14 Thread Lentes, Bernd


- Am 14. Mai 2019 um 11:08 schrieb Daniel P. Berrangé berra...@redhat.com:

> 
> 'virsh domstate --reason $GUEST'
> 
> will tell you what event caused the guest to pause in the first place.
> 
> If you can resume successfully, this indicates the event was a transient
> problem.   Given the domblkerror message 'no space' I'm it looks that
> you had a problem running out of disk space temporarily which then
> resolved itself.
> 
> Regards,
> Daniel


Hi,

I have a clue what happened.
The script shuts down the domains, snapshots them, restarts them and then copies
the backing files to a CIFS server. After the copy is done (which takes several
hours), the domains are blockcommitted.
Finally the script deletes the local snap files. I think the snap files got too
big, because the logical volume for them has only 20 GB and I'm currently
snapshotting 8 domains.
The limit of the LV was reached, and because the script deletes the snapshot
files at the end, I didn't notice it.
I will now monitor the LV for the snap files in my script to see how big they
grow (see the sketch below).
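
Probably something as simple as this at the relevant points of the script; a sketch,
with the paths used above:

df -h /mnt/snap          # free space on the volume holding the overlays
du -sh /mnt/snap/*.sn    # size of each snapshot overlay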

Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] domains paused without any obvious reason

2019-05-13 Thread Lentes, Bernd



- On May 13, 2019, at 3:34 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

> Hi,
> 
> i have a two node HA-Cluster with several domains as resources.
> Currently it's running in test mode.
> Some domains (all on the same host) stopped running, virsh list shows them as
> "paused".
> All stopped at the same time (11th of may, 7:00 am), my monitoring system 
> began
> to yell.
> I don't have any clue why this happened.
> virsh domblkerror says for all the domains (5) "no space". The days before the
> domains were running fine and i know that all disks inside the domain should
> have enough space.
> Also the host is not running out of space.
> The logs don't say anything sensefully, unfortunately i didn't have a log for
> the libvirtd daemon, i just configured that now.
> The domains are stopped each day by cron at 10:30 pm for a short moment, a
> snapshot is taken, domains are started again, the backing file is copied to a
> CIFS server and if that is finished the snapshot is blockcommited into the
> backing file.
> That's working fine already for several days. This cronjob creates a log and
> it's looking fine.
> The domains reside in naked Logical Volumes, the respective Volume Group has
> enough space.
> 
> 

I resumed one of the guests and it continued without any problem.
The log doesn't indicate any problem, and df -h shows enough space on
all partitions.


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


[libvirt-users] domains paused without any obvious reason

2019-05-13 Thread Lentes, Bernd
Hi,

I have a two-node HA-cluster with several domains as resources.
Currently it's running in test mode.
Some domains (all on the same host) stopped running; virsh list shows them as
"paused".
All stopped at the same time (11th of May, 7:00 am), and my monitoring system
began to yell.
I don't have any clue why this happened.
virsh domblkerror says "no space" for all five domains. The days before, the
domains were running fine, and I know that all disks inside the domains should
have enough space.
The host is not running out of space either.
The logs don't say anything useful; unfortunately I didn't have a log for the
libvirtd daemon, I have just configured that now.
The domains are stopped each day by cron at 10:30 pm for a short moment, a
snapshot is taken, the domains are started again, the backing file is copied to
a CIFS server, and when that is finished the snapshot is blockcommitted into the
backing file.
That has already been working fine for several days. This cronjob creates a log,
and it looks fine.
The domains reside in bare logical volumes, and the respective volume group has
enough space.


Bernd


-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Stellv. Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] is it possible to create a snapshot from a guest residing in a plain partition ?

2019-04-04 Thread Lentes, Bernd



- On Apr 3, 2019, at 5:27 PM, Eric Blake ebl...@redhat.com wrote:



> It is possible to create an external snapshot (an internal one is not
> possible, unless you stored the guest disk as qcow2 format embedded
> inside the partition rather than directly as raw format).  Note that
> when you create an external snapshot, the partition becomes a read-only
> point in time (no further updates to that partition), and your new
> file.sn qcow2 wrapper file created by the snapshot operation stores all
> subsequent guest writes. (Well, unless you decide to do a commit
> operation to push the changes from the overlay back into the base file
> and get rid of the overlay)

Hi Eric,

thanks for your answer.
I tried it:

virsh # snapshot-create-as --disk-only --name sn --domain sim
error: unsupported configuration: source for disk 'vdb' is not a regular file; 
refusing to generate external snapshot name

This is the snippet from the config of the guest:

[disk XML stripped by the mail archive; it defined vdb as a raw disk backed
directly by a block device, i.e. the plain partition]

I'm running SLES 12 SP4 with libvirt 4.0.0-6.13.x86_64.

Any ideas ?


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDirig'in Petra Steiner-Hoffmann
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


[libvirt-users] is it possible to create a snapshot from a guest residing in a plain partition ?

2019-04-03 Thread Lentes, Bernd
Hi,

I can store the disk of a guest in a plain partition which isn't formatted.
That's no problem; I have already done it several times, although the promised
speed increase didn't materialize.

But is it possible to create a snapshot of such a guest in a .sn file using
virsh ?

Regards,


Bernd

-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
bernd.len...@helmholtz-muenchen.de 
phone: +49 89 3187 1241 
phone: +49 89 3187 3827 
fax: +49 89 3187 2294 
http://www.helmholtz-muenchen.de/idg 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDirig'in Petra Steiner-Hoffmann
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrung: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Kerstin Guenther
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] concurrent migration of several domains rarely fails

2018-12-10 Thread Lentes, Bernd


Jim wrote:
>> 
>> What is meant by the "admin interface" ? virsh ?
> 
> virsh-admin, which you can use to change some admin settings of libvirtd, e.g.
> log_level. You are interested in the keepalive settings above those ones in
> libvirtd.conf, specifically
> 
> #keepalive_interval = 5
> #keepalive_count = 5
> 
>> What is meant by "client" in libvirtd.conf ? virsh ?
> 
> Yes, virsh is a client, as is virt-manager or any application connecting to
> libvirtd.
> 
>> Why do i have regular timeouts although my two hosts are very performant ? 
>> 128GB
>> RAM, 16 cores, 2 1GBit/s network adapter on each host in bonding.
>> During migration i don't see much load, although nearly no waiting for IO.
> 
> I'd think concurrently migrating 3 VMs on a 1G network might cause some
> congestion :-).
> 
>> Should i set admin_keepalive_interval to -1 ?
> 
> You should try 'keepalive_interval = -1'. You can also avoid sending keepalive
> messages from virsh with the '-k' option, e.g. 'virsh -k 0 migrate ...'.
> 
> If this doesn't help, are you in a position to test a newer libvirt, 
> preferably
> master or the recent 4.10.0 release?

Hi Jim,

Unfortunately not.

I have some more questions, maybe you can help me a bit.
I found 
http://epic-alfa.kavli.tudelft.nl/share/doc/libvirt-devel-0.10.2/migration.html 
, which is
quite interesting.
When I migrate with virsh, I use:
virsh --connect=qemu:///system migrate --verbose --live  domain 
qemu+ssh://ha-idg-1/system

When pacemaker migrates, it creates this sequence:
virsh --connect=qemu:///system --quiet migrate --live  domain 
qemu+ssh://ha-idg-1/system
which is essentially the same.
Do I understand the webpage correctly that this is a "Native migration, client
to two libvirtd servers" ?

Furthermore the document says:
"To force migration over an alternate network interface the optional hypervisor 
specific URI must be provided".

I also have both hosts connected directly to each other with a bonding device
using round-robin and an internal IP (192.168.100.xx).
When I want to use this device, which is maybe a bit faster and more secure
(directly connected), how do I have to specify that ?
virsh --connect=qemu:///system --quiet migrate --live domain
qemu+ssh://ha-idg-1/system tcp://192.168.100.xx
Does it have to be the IP of the source or of the destination ? Does the source
then automatically use its own device with 192.168.100.xx as well ?
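
My guess from the document is that the URI names the destination as seen from the source,
i.e. something like the following (untested):

virsh --connect=qemu:///system migrate --verbose --live domain qemu+ssh://ha-idg-1/system tcp://192.168.100.xx/

where 192.168.100.xx would be the destination host's address on the direct link, so that
the migration traffic goes over that bond.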

Thanks.

Bernd

 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDirig.in Petra Steiner-Hoffmann
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrer: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Dr. rer. nat. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] concurrent migration of several domains rarely fails

2018-12-06 Thread Lentes, Bernd


Hi,

sorry, I forgot my setup:

SLES 12 SP3 64bit

ha-idg-1:~ # rpm -qa|grep -i libvirt
libvirt-daemon-driver-secret-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-mpath-3.3.0-5.22.1.x86_64
libvirt-glib-1_0-0-0.2.1-1.2.x86_64
typelib-1_0-LibvirtGLib-1_0-0.2.1-1.2.x86_64
libvirt-daemon-qemu-3.3.0-5.22.1.x86_64
libvirt-daemon-config-network-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-scsi-3.3.0-5.22.1.x86_64
libvirt-client-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-network-3.3.0-5.22.1.x86_64
libvirt-libs-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-disk-3.3.0-5.22.1.x86_64
libvirt-daemon-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-interface-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-qemu-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-nwfilter-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-logical-3.3.0-5.22.1.x86_64
libvirt-python-3.3.0-1.38.x86_64
libvirt-daemon-driver-nodedev-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-iscsi-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-rbd-3.3.0-5.22.1.x86_64
libvirt-daemon-driver-storage-core-3.3.0-5.22.1.x86_64


Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDirig.in Petra Steiner-Hoffmann
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrer: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Dr. rer. nat. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


Re: [libvirt-users] concurrent migration of several domains rarely fails

2018-12-06 Thread Lentes, Bernd


> Hi,
> 
> i have a two-node cluster with several domains as resources. During testing i
> tried several times to migrate some domains concurrently.
> Usually it suceeded, but rarely it failed. I found one clue in the log:
> 
> Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+: 3252:
> error : virKeepAliveTimerInternal:143 : internal error: connection closed due
> to keepalive timeout
> 
> The domains are configured similar:
> primitive vm_geneious VirtualDomain \
>params config="/mnt/san/share/config.xml" \
>params hypervisor="qemu:///system" \
>params migration_transport=ssh \
>op start interval=0 timeout=120 trace_ra=1 \
>op stop interval=0 timeout=130 trace_ra=1 \
>op monitor interval=30 timeout=25 trace_ra=1 \
>op migrate_from interval=0 timeout=300 trace_ra=1 \
>op migrate_to interval=0 timeout=300 trace_ra=1 \
>meta allow-migrate=true target-role=Started is-managed=true \
>utilization cpu=2 hv_memory=8000
> 
> What is the algorithm to discover the port used for live migration ?
> I have the impression that "params migration_transport=ssh" is worthless, port
> 22 isn't involved for live migration.
> My experience is that for the migration tcp ports > 49151 are used. But the
> exact procedure isn't clear for me.
> Does live migration uses first tcp port 49152 and for each following domain 
> one
> port higher ?
> E.g. for the concurrent live migration of three domains 49152, 49153 and 
> 49154.
> 
> Why does live migration for three domains usually succeed, although on both
> hosts just 49152 and 49153 is open ?
> Is the migration not really concurrent, but sometimes sequential ?
> 
> Bernd
> 
Hi,

I tried to narrow down the problem.
My first assumption was that something with the network between the hosts is
not OK.
I opened ports 49152 - 49172 in the firewall: the problem persisted.
So I deactivated the firewall on both nodes: the problem persisted.

Then I wanted to rule out the HA-cluster software (pacemaker).
I unmanaged the VirtualDomains in pacemaker and migrated them with virsh: the
problem persists.

I wrote a script to migrate three domains sequentially from host A to host B
and vice versa via virsh.
I raised the log level of libvirtd and found something in the log which may be
the culprit:

This is the output of my script:

Thu Dec  6 17:02:53 CET 2018
migrate sim
Migration: [100 %]
Thu Dec  6 17:03:07 CET 2018
migrate geneious
Migration: [100 %]
Thu Dec  6 17:03:16 CET 2018
migrate mausdb
Migration: [ 99 %]error: operation failed: migration job: unexpectedly failed   
 <= error !

Thu Dec  6 17:05:32 CET 2018  < time of error
Guests on ha-idg-1:
 Id   Name       State
----------------------------
 1    sim        running
 2    geneious   running
 -    mausdb     shut off

migrate to ha-idg-2
Thu Dec  6 17:05:32 CET 2018

This is what journalctl told:

Dec 06 17:05:32 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:32.481+: 12553: 
info : virKeepAliveTimerInternal:136 : RPC_KEEPALIVE_TIMEOUT: ka=0x55b2bb937740 
client=0x55b2bb930d50 countToDeath=0 idle=30
Dec 06 17:05:32 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:32.481+: 12553: 
error : virKeepAliveTimerInternal:143 : internal error: connection closed due 
to keepalive timeout
Dec 06 17:05:32 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:32.481+: 12553: 
info : virObjectUnref:259 : OBJECT_UNREF: obj=0x55b2bb937740

Dec 06 17:05:27 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:27.476+: 12553: 
info : virKeepAliveTimerInternal:136 : RPC_KEEPALIVE_TIMEOUT: ka=0x55b2bb937740 
client=0x55b2bb930d50 countToDeath=1 idle=25
Dec 06 17:05:27 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:27.476+: 12553: 
info : virKeepAliveMessage:107 : RPC_KEEPALIVE_SEND: ka=0x55b2bb937740 
client=0x55b2bb930d50 prog=1801807216 vers=1 proc=1

Dec 06 17:05:22 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:22.471+: 12553: 
info : virKeepAliveTimerInternal:136 : RPC_KEEPALIVE_TIMEOUT: ka=0x55b2bb937740 
client=0x55b2bb930d50 countToDeath=2 idle=20
Dec 06 17:05:22 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:22.471+: 12553: 
info : virKeepAliveMessage:107 : RPC_KEEPALIVE_SEND: ka=0x55b2bb937740 
client=0x55b2bb930d50 prog=1801807216 vers=1 proc=1

Dec 06 17:05:17 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:17.466+: 12553: 
info : virKeepAliveTimerInternal:136 : RPC_KEEPALIVE_TIMEOUT: ka=0x55b2bb937740 
client=0x55b2bb930d50 countToDeath=3 idle=15
Dec 06 17:05:17 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:17.466+: 12553: 
info : virKeepAliveMessage:107 : RPC_KEEPALIVE_SEND: ka=0x55b2bb937740 
client=0x55b2bb930d50 prog=1801807216 vers=1 proc=1

Dec 06 17:05:12 ha-idg-1 libvirtd[12553]: 2018-12-06 16:05:12.460+: 12553: 
info : virKeepAliveTimerInternal:136 : RPC_KEEPALIVE_TIMEOUT: ka=0x55b2bb937740 
client=0x55b2bb930d50 countToDeath

[libvirt-users] concurrent migration of several domains rarely fails

2018-12-04 Thread Lentes, Bernd
Hi,

I have a two-node cluster with several domains as resources. During testing I
tried several times to migrate some domains concurrently.
Usually it succeeded, but occasionally it failed. I found one clue in the log:

Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+: 3252: 
error : virKeepAliveTimerInternal:143 : internal error: connection closed due 
to keepalive timeout

The domains are configured similar:
primitive vm_geneious VirtualDomain \
params config="/mnt/san/share/config.xml" \
params hypervisor="qemu:///system" \
params migration_transport=ssh \
op start interval=0 timeout=120 trace_ra=1 \
op stop interval=0 timeout=130 trace_ra=1 \
op monitor interval=30 timeout=25 trace_ra=1 \
op migrate_from interval=0 timeout=300 trace_ra=1 \
op migrate_to interval=0 timeout=300 trace_ra=1 \
meta allow-migrate=true target-role=Started is-managed=true \
utilization cpu=2 hv_memory=8000

How is the port used for live migration determined ?
I have the impression that "params migration_transport=ssh" has no effect; port
22 isn't involved in the live migration.
My experience is that TCP ports > 49151 are used for the migration, but the
exact procedure isn't clear to me.
Does live migration use TCP port 49152 first and, for each following domain, one
port higher ?
E.g. 49152, 49153 and 49154 for the concurrent live migration of three domains.
Why does the live migration of three domains usually succeed, although only
49152 and 49153 are open on both hosts ?
Is the migration not really concurrent, but sometimes sequential ?

Bernd






-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
[ mailto:bernd.len...@helmholtz-muenchen.de | 
bernd.len...@helmholtz-muenchen.de ] 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ] 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDirig.in Petra Steiner-Hoffmann
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrer: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Dr. rer. nat. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] snapshots with virsh in a pacemaker cluster

2018-10-15 Thread Lentes, Bernd


- Am 15. Okt 2018 um 21:47 schrieb Peter Crowther 
peter.crowt...@melandra.com:

> Pacemaker always knows where its resources are running. Query it, stop the
> domain, then use the queried location as the host to which to issue the
> snapshot?

But can I be sure that the resource starts again on the node it was running on
before ?
IMHO no.
What if I create the snapshot on node A but the resource afterwards starts on
node B ?
Then libvirt on node B does not know it should perform a snapshot.
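
For the first part, finding out where a guest is currently running, I guess pacemaker
can be queried like this (a rough sketch, resource name assumed):

crm_resource --resource vm_geneious --locate

But that doesn't help with the second problem: after the stop the transient domain is
gone from virsh, and the guest may come up on the other node afterwards.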

> On Mon, 15 Oct 2018, 20:36 Lentes, Bernd, < [
> mailto:bernd.len...@helmholtz-muenchen.de | bernd.len...@helmholtz-muenchen.de
> ] > wrote:

>> Hi,

>> i have a two node cluster with virtual guests as resources.
>> I'd like to snapshot the guests once in the night and thought i had a 
>> procedure.
>> But i realize that things in a cluster are a bit more complicated than 
>> expected
>> :-))

>> I will shutdown the guests to have a clean snapshot.
>> I can shutdown the guests via pacemaker.
>> But then arises the first problem:

>> When i issue a "virsh snapshot-create-as" libvirt needs the domain name as a
>> parameter.
>> but libvirt does not know the domains any longer. When the guests are 
>> shutdown a
>> "virsh list --all"
>> on both nodes does not show any domain.

>> A look in the respective resource agent VirtualDomain explains why:
>> The domain is started with virsh create:

>> "# The 'create' command guarantees that the domain will be
>> # undefined on shutdown, ...

>> OK: Now i could of course define all domains with a virsh define.

>> But then i have immediately the next problem. Now i'd create the snapshots 
>> with
>> "virsh snapshot-create-as" and starts the domains afterwards via cluster.
>> But let's assume i issue that on node 1 and some guests are started 
>> afterwards
>> via pacemaker on node 2.
>> I can't predict on which node the guests are starting.

>> Then i don't get a snapshot, right ?

>> What to do ?

>> Bernd
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: NN
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrer: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Dr. rer. nat. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671

___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users


[libvirt-users] snapshots with virsh in a pacemaker cluster

2018-10-15 Thread Lentes, Bernd
Hi,

I have a two-node cluster with virtual guests as resources.
I'd like to snapshot the guests once per night and thought I had a procedure.
But I realize that things in a cluster are a bit more complicated than expected
:-))

I will shut down the guests to have a clean snapshot.
I can shut down the guests via pacemaker.
But then the first problem arises:

When I issue a "virsh snapshot-create-as", libvirt needs the domain name as a
parameter, but libvirt does not know the domains any longer. When the guests are
shut down, a "virsh list --all" on both nodes does not show any domain.

A look in the respective resource agent VirtualDomain explains why:
The domain is started with virsh create:

"# The 'create' command guarantees that the domain will be
# undefined on shutdown, ...

OK: Now I could of course define all domains with a virsh define.

But then I immediately have the next problem. Now I'd create the snapshots with
"virsh snapshot-create-as" and start the domains afterwards via the cluster.
But let's assume I issue that on node 1 and some guests are started afterwards
via pacemaker on node 2.
I can't predict on which node the guests will start.

Then I don't get a snapshot, right ?

What to do ?

Bernd



-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
[ mailto:bernd.len...@helmholtz-muenchen.de | 
bernd.len...@helmholtz-muenchen.de ] 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ] 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 

Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: NN
Stellv.Aufsichtsratsvorsitzender: MinDirig. Dr. Manfred Wolter
Geschaeftsfuehrer: Prof. Dr. med. Dr. h.c. Matthias Tschoep, Heinrich Bassler, 
Dr. rer. nat. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671


___
libvirt-users mailing list
libvirt-users@redhat.com
https://www.redhat.com/mailman/listinfo/libvirt-users

Re: [libvirt-users] how "safe" is blockcommit ?

2018-10-12 Thread Lentes, Bernd


> - On Sep 7, 2018, at 9:26 PM, Eric Blake ebl...@redhat.com wrote:
> 
>> On 09/07/2018 12:06 PM, Lentes, Bernd wrote:
>>> Hi,
>>> 
>>> currently i'm following
>>> https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit. I 'm
>>> playing around with it and it seems to be quite nice.
>>> What i want is a daily consistent backup of my image file of the guest.
>>> I have the idea of the following procedure:
>>> 
>>> - Shutdown the guest (i can live with a downtime of a few minutes, it will
>>> happen in the night).
>>>And i think it's the only way to have a real clean snapshot
>> 
>> Not the only way - you can also use qemu-guest-agent with a trusted
>> guest to quiesce all filesystem I/O after freezing database operations
>> at a consistent point, for a clean snapshot of a live guest.  But
>> shutting down is indeed safe, and easier to reason about than worrying
>> whether your qga interaction is properly hooked into all necessary
>> places for a live quiesce.
>> 
>>> - create a snapshot with snapshot-create-as: snapshot-create-as guest testsn
>>> --disk-only
>>> - start the guest again. Changes will now go into the overlay, as e.g. 
>>> inserts
>>> in a database
>>> - rsync the base file to a cifs server. With rsync not the complete, likely 
>>> big
>>> file is transferred but just the delta
>> 
>> We're also trying to add support for incremental backups into a future
>> version of libvirt on top of the qemu 3.0 feature of persistent bitmaps
>> in qcow2 images, which could indeed guarantee that you transfer only the
>> portions of the guest disk that were touched since the last backup.  But
>> as that's still something I'm trying to code up, your solution of using
>> rsync to pick out the deltas is as good as anything you can get right now.
>> 
>>> - blockcommit the overlay: blockcommit guest /path/to/testsn --active --wait
>>> --verbose --pivot
>>> - delete the snapshot: snapshot-delete guest --snapshotname testsn 
>>> --metadata
>>> - remove the overlay
>>> 
>>> Is that ok ? How "safe" is blockcommit on a running guest ?
>> 
>> Yep, that's the right way to do it!  It's perfectly safe - the guest
>> doesn't care whether it is reading/writing from the backing file or the
>> overlay, and even if the blockcommit action is aborted, you can restart
>> it for the same effects.
>> 
>> (Okay, if you want to get technical, you need to know that committing
>> from 'Base <- mid <- top' down to 'Base' leaves 'mid' in an inconsistent
>> state - but that's not something the guest can see through 'top'; and
>> your specific case is just committing to the immediate backing layer
>> rather than skipping layers like that).
>> 
>>> It's possible that during the rsync, when the guest is running, some 
>>> inserts are
>>> done in a database.
>> 
>> As long as the backing file is read-only during the rsync (which it is,
>> since all your guest writes are going to the overlay), then nothing the
>> guest can do will interfere with what rsync can see.  Just be sure that
>> you don't start the blockcommit until after rsync is done.
>> 
>>> Is it safe to copy the new sectors (i assume that's what blockcommit does) 
>>> under
>>> a running database ?
>>> Or is it only safe doing blockcommit on a stopped guest ?
>> 
>> Live blockcommit is safe, and exists so you don't have to stop the guest.

Hi,

I thought I had the procedure I needed, but a problem arose.
My guests are resources in a pacemaker cluster. I start/stop the guests through
the cluster.
But when I stop a guest via pacemaker, libvirt doesn't know the guest any
longer.
After a successful stop, "virsh list --all" doesn't show any guest, so I can't
take a snapshot with libvirt.
What about qemu-img ? Could I still use my procedure (roughly sketched after
this list) ?
- stop the guest via the cluster
- snapshot with qemu-img
- start the guest via the cluster
- rsync the snapshot to a file server
- commit the snapshot with qemu-img into a running guest (I think this will be
  the problem)
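
Roughly what I have in mind, as an untested sketch with assumed paths:

# guest stopped via the cluster; create an external overlay by hand:
qemu-img create -f qcow2 -o backing_file=/vm/guest.raw,backing_fmt=raw /vm/guest.overlay.qcow2
# repoint the domain config at the overlay, start the guest via the cluster,
# rsync the now read-only /vm/guest.raw to the file server, and later:
qemu-img commit /vm/guest.overlay.qcow2   # presumably only safe while the guest is stopped again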

Thanks for any thoughts.

Bernd
 



Re: [libvirt-users] how "safe" is blockcommit ?

2018-09-10 Thread Lentes, Bernd



- On Sep 7, 2018, at 9:26 PM, Eric Blake ebl...@redhat.com wrote:

> On 09/07/2018 12:06 PM, Lentes, Bernd wrote:
>> Hi,
>> 
>> currently i'm following
>> https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit. I 'm
>> playing around with it and it seems to be quite nice.
>> What i want is a daily consistent backup of my image file of the guest.
>> I have the idea of the following procedure:
>> 
>> - Shutdown the guest (i can live with a downtime of a few minutes, it will
>> happen in the night).
>>And i think it's the only way to have a real clean snapshot
> 
> Not the only way - you can also use qemu-guest-agent with a trusted
> guest to quiesce all filesystem I/O after freezing database operations
> at a consistent point, for a clean snapshot of a live guest.  But
> shutting down is indeed safe, and easier to reason about than worrying
> whether your qga interaction is properly hooked into all necessary
> places for a live quiesce.
> 
>> - create a snapshot with snapshot-create-as: snapshot-create-as guest testsn
>> --disk-only
>> - start the guest again. Changes will now go into the overlay, as e.g. 
>> inserts
>> in a database
>> - rsync the base file to a cifs server. With rsync not the complete, likely 
>> big
>> file is transferred but just the delta
> 
> We're also trying to add support for incremental backups into a future
> version of libvirt on top of the qemu 3.0 feature of persistent bitmaps
> in qcow2 images, which could indeed guarantee that you transfer only the
> portions of the guest disk that were touched since the last backup.  But
> as that's still something I'm trying to code up, your solution of using
> rsync to pick out the deltas is as good as anything you can get right now.
> 
>> - blockcommit the overlay: blockcommit guest /path/to/testsn --active --wait
>> --verbose --pivot
>> - delete the snapshot: snapshot-delete guest --snapshotname testsn --metadata
>> - remove the overlay
>> 
>> Is that ok ? How "safe" is blockcommit on a running guest ?
> 
> Yep, that's the right way to do it!  It's perfectly safe - the guest
> doesn't care whether it is reading/writing from the backing file or the
> overlay, and even if the blockcommit action is aborted, you can restart
> it for the same effects.
> 
> (Okay, if you want to get technical, you need to know that committing
> from 'Base <- mid <- top' down to 'Base' leaves 'mid' in an inconsistent
> state - but that's not something the guest can see through 'top'; and
> your specific case is just committing to the immediate backing layer
> rather than skipping layers like that).
> 
>> It's possible that during the rsync, when the guest is running, some inserts 
>> are
>> done in a database.
> 
> As long as the backing file is read-only during the rsync (which it is,
> since all your guest writes are going to the overlay), then nothing the
> guest can do will interfere with what rsync can see.  Just be sure that
> you don't start the blockcommit until after rsync is done.
> 
>> Is it safe to copy the new sectors (i assume that's what blockcommit does) 
>> under
>> a running database ?
>> Or is it only safe doing blockcommit on a stopped guest ?
> 
> Live blockcommit is safe, and exists so you don't have to stop the guest.
> 
> For a bit more insight into what is going on under the hood, the slides
> from my KVM Forum talk from a few years back may give some nice insights:
> http://events17.linuxfoundation.org/sites/events/files/slides/2015-qcow2-expanded.pdf
> 
>> 
>> Thanks for any answer.
>> 
>> Bernd
>> 
>> P.S. Is the same procedure possible when the guest disk(s) reside directly 
>> in a
>> plain logical volume, without a file system in-between ?
> 
> Live blockcommit works onto any host storage protocol (whether
> filesystem, block device via LVM, or even remote access such as NBD or
> ceph).  The key is that your overlay is a qcow2 file that is tracking
> the deltas during the time in which you are capturing your backup of the
> backing file, and then blockcommit safely writes those deltas back into
> the backing file prior to no longer needing the overlay.
> 
> --
> Eric Blake, Principal Software Engineer
> Red Hat, Inc.   +1-919-301-3266
> Virtualization:  qemu.org | libvirt.org

Hi Eric,

a big thanks for this outstandingly clear and thorough answer.
It's a pleasure to get such information from the developers themselves, and so 
quickly !

Bernd
 



[libvirt-users] how "safe" is blockcommit ?

2018-09-07 Thread Lentes, Bernd
Hi,

currently i'm following 
https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit. I 'm 
playing around with it and it seems to be quite nice.
What i want is a daily consistent backup of my image file of the guest.
I have the idea of the following procedure:

- Shutdown the guest (i can live with a downtime of a few minutes, it will 
happen in the night).
  And i think it's the only way to have a real clean snapshot
- create a snapshot with snapshot-create-as: snapshot-create-as guest testsn 
--disk-only
- start the guest again. Changes will now go into the overlay, as e.g. inserts 
in a database
- rsync the base file to a cifs server. With rsync not the complete, likely big 
file is transferred but just the delta
- blockcommit the overlay: blockcommit guest /path/to/testsn --active --wait 
--verbose --pivot
- delete the snapshot: snapshot-delete guest --snapshotname testsn --metadata
- remove the overlay
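
Put together, the whole cycle would look roughly like this (only a sketch; the
guest name and paths are examples, and the overlay name depends on how the
snapshot is created):

virsh shutdown guest
# wait until "virsh domstate guest" reports "shut off"
virsh snapshot-create-as guest testsn --disk-only --atomic
virsh start guest                      # guest writes now go into the overlay
rsync -av /path/to/base_image backupserver:/backup/guest/
# only after rsync has finished:
virsh blockcommit guest /path/to/overlay.testsn --active --wait --verbose --pivot
virsh snapshot-delete guest --snapshotname testsn --metadata
rm /path/to/overlay.testsn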

Is that ok ? How "safe" is blockcommit on a running guest ? It's possible that 
during the rsync, when the guest is running, some inserts are done in a 
database.
Is it safe to copy the new sectors (i assume that's what blockcommit does) 
under a running database ?
Or is it only safe doing blockcommit on a stopped guest ?

Thanks for any answer.

Bernd

P.S. Is the same procedure possible when the guest disk(s) reside directly in a 
plain logical volume, without a file system in-between ?



-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
[ mailto:bernd.len...@helmholtz-muenchen.de | 
bernd.len...@helmholtz-muenchen.de ] 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ] 

wer Fehler macht kann etwas lernen 
wer nichts macht kann auch nichts lernen
 


Re: [libvirt-users] snapshot with libvirt tools or with lvm tools ?

2018-03-09 Thread Lentes, Bernd


- On Mar 9, 2018, at 7:05 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:


> 
> It does not work as expected :-(
> My lv's are clustered, but snapshotting a clustered lv requires to activate 
> the
> source lv exclusively on one node, which is not possible when it's mounted and
> files on it are open.
> So i have to try it with libvirt and qemu.
> I'd like to create the snapshot while running the guest, take the backup, and
> merge (or commit) the changes after the copy procedure, still with a running
> guest.
> Is there  a way to do this ? I found
> https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit , but 
> my
> software seems to be too old:
> I have libvirt-1.2.5-23.3.1. But my virsh offers blockcommit:
> 
> virsh # help blockcommit
>  NAME
>blockcommit - Start a block commit operation.
> 
>  SYNOPSIS
>blockcommit   [] [] [--shallow] []
>[--delete] [--wait] [--verbose] [--timeout ] [--async]
> 
>  DESCRIPTION
>Commit changes from a snapshot down to its backing image.
> 
>  OPTIONS
>[--domain]   domain name, id or uuid
>[--path]   fully-qualified path of disk
>[--bandwidth]   bandwidth limit in MiB/s
>[--base]   path of base file to commit into (default bottom of 
> chain)
>--shallowuse backing file of top as base
>[--top]   path of top file to commit from (default top of chain)
>--delete delete files that were successfully committed
>--wait   wait for job to complete
>--verbosewith --wait, display the progress
>--timeout   with --wait, abort if copy exceeds timeout (in seconds)
>--async  with --wait, don't wait for cancel to finish
> 
> But it doesn't work ? Although help offers it ?
> 
> Bernd
> 

Hmm,

it seems my version really is too old. This is what I get:

virsh # blockcommit windows7x64 /cluster/guests/servers_alive/sa_snap.qcow2 
--wait --verbose
error: Operation not supported: committing the active layer not supported yet

Is there no way to achieve what i want ?


Bernd
 



Re: [libvirt-users] snapshot with libvirt tools or with lvm tools ?

2018-03-09 Thread Lentes, Bernd

- On Mar 9, 2018, at 4:39 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

> Hi,
> 
> i asked already a time ago about snapshots. I have some guests, each resides 
> in
> a raw file, placed on an ext3 fs which is on top of a logical volume, for each
> guest a dedicated lv.
> The raw is not a "must have", if there are obvious reasons i can convert them 
> to
> a qcow2.
> What i want is a consistent backup of each guest taken overnight. If possible 
> i
> won't have downtime.
> I can use the libvirt tools, but the lvm way seems to be more elegant.
> Before copying the file the guests resides in i take a snapshot from the
> respective lv. Then i mount the snapshot and transfer it via rsync on a CIFS
> share.
> Rsync seems to be the appropriate tool because i just transfer the changes in
> the file compared to the file from a day before. So i don't have to transfer
> complete and big raw files but just the difference.
> 
> The guest still can be running, and after the transfer of the snapshot file i
> just delete it, and the next night the same procedure.
> 
> What do you think ?
> 

It does not work as expected :-(
My lv's are clustered, but snapshotting a clustered lv requires activating the 
source lv exclusively on one node, which is not possible while it's mounted and 
files on it are open.
So I have to try it with libvirt and qemu.
I'd like to create the snapshot while the guest is running, take the backup, and 
merge (or commit) the changes after the copy procedure, still with a running 
guest.
Is there a way to do this ? I found 
https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit , but my 
software seems to be too old:

virsh # help blockcommit
  NAME
blockcommit - Start a block commit operation.

  SYNOPSIS
    blockcommit <domain> <path> [<bandwidth>] [<base>] [--shallow] [<top>]
    [--delete] [--wait] [--verbose] [--timeout <number>] [--async]

  DESCRIPTION
    Commit changes from a snapshot down to its backing image.

  OPTIONS
    [--domain] <string>     domain name, id or uuid
    [--path] <string>       fully-qualified path of disk
    [--bandwidth] <number>  bandwidth limit in MiB/s
    [--base] <string>       path of base file to commit into (default bottom of chain)
    --shallow               use backing file of top as base
    [--top] <string>        path of top file to commit from (default top of chain)
    --delete                delete files that were successfully committed
    --wait                  wait for job to complete
    --verbose               with --wait, display the progress
    --timeout <number>      with --wait, abort if copy exceeds timeout (in seconds)
    --async                 with --wait, don't wait for cancel to finish

But it doesn't work ? Although help offers it ?

Bernd
 



[libvirt-users] snapshot with libvirt tools or with lvm tools ?

2018-03-09 Thread Lentes, Bernd
Hi,

I asked about snapshots a while ago. I have some guests, each residing in 
a raw file, placed on an ext3 fs which is on top of a logical volume, with a 
dedicated lv for each guest.
The raw format is not a "must have"; if there are obvious reasons I can convert them 
to qcow2.
What I want is a consistent backup of each guest taken overnight. If possible 
without downtime.
I can use the libvirt tools, but the lvm way seems more elegant. 
Before copying the file the guest resides in, I take a snapshot of the 
respective lv. Then I mount the snapshot and transfer the file via rsync to a CIFS 
share.
Rsync seems to be the appropriate tool because it only transfers the changes in 
the file compared to the file from a day before. So I don't have to transfer 
complete and big raw files but just the difference.

The guest can keep running, and after the transfer I just delete the snapshot, 
and the next night the same procedure.
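
As a sketch, that idea would be (volume group, lv and mount point are just examples):

lvcreate -s -n guest1_snap -L 10G /dev/vg_guests/guest1_lv   # CoW snapshot of the lv holding the image
mount -o ro /dev/vg_guests/guest1_snap /mnt/snap
rsync -av /mnt/snap/guest1.raw /mnt/cifs_backup/guest1/      # only the delta goes over the wire
umount /mnt/snap
lvremove -f /dev/vg_guests/guest1_snap
# note: with the guest still writing, the copy is only crash-consistent,
# like a disk image after a power loss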

What do you think ?

Bernd

-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
[ mailto:bernd.len...@helmholtz-muenchen.de | 
bernd.len...@helmholtz-muenchen.de ] 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ] 

no backup - no mercy
 


[libvirt-users] snapshot of a guest with two disks

2018-02-20 Thread Lentes, Bernd
Hi,

I just realized that I have a guest with two disks. What would be the 
appropriate way to snapshot both of them ?

virsh snapshot-create-as --domain guest --diskspec 
vda,file=/path/to/snapshot/snapshot1.qcow2 --disk-only --atomic &&
virsh snapshot-create-as --domain guest --diskspec 
vdb,file=/path/to/snapshot/snapshot2.qcow2 --disk-only --atomic

or

virsh snapshot-create-as --domain guest --diskspec 
vda,file=/path/to/snapshot/snapshot1.qcow2 --diskspec 
vdb,file=/path/to/snapshot/snapshot2.qcow2 --disk-only --atomic



Bernd
 



Re: [libvirt-users] snapshot of a raw file - how to revert ?

2018-02-16 Thread Lentes, Bernd


- On Feb 16, 2018, at 3:01 PM, Kashyap Chamarthy kcham...@redhat.com wrote:

> On Fri, Feb 16, 2018 at 01:59:01PM +0100, Lentes, Bernd wrote:
> 
> [...]
> 
>> Hi Kashyap,
>> 
>> thanks for your quick and detailed answers. Just to be complete.
>> The procedure in the above mentioned link does work with my old software ?
> 
> It _should_ work; but please try on a test VM and see what works and
> what doesn't in your environment.


Hi,

again thanks for your quick answer. I had a look at 
https://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
and I liked it. I'm thinking of upgrading my systems to SLES 12 SP3. With that 
I get qemu 2.9.1 and libvirt 3.3.0, so it should work.
Does this procedure also work with raw files ?

Bernd

 


Re: [libvirt-users] snapshot of a raw file - how to revert ?

2018-02-16 Thread Lentes, Bernd


- On Feb 15, 2018, at 12:53 PM, Kashyap Chamarthy kcham...@redhat.com wrote:

> On Thu, Feb 15, 2018 at 11:41:37AM +0100, Lentes, Bernd wrote:
> 
> [...]
> 
>> Hi,
>> 
>> i found that:
>> https://dustymabe.com/2015/01/11/qemu-img-backing-files-a-poor-mans-snapshotrollback/
>> 
>> I tried it and it seemed to work, although my root fs was checked
>> after the commit, anything else seemed to work.  What do you think of
>> this procedure ?
> 
> Instead of 'qemu-img create', I'd suggest using `virsh
> snapshot-create-as` (as shown in my previous email).   This will tell
> libvirt to automatically use the just created QCOW2 overlay.
> 
> But yeah, one useful bit is to use the `virt-xml` tool to point to the
> desired disk image (instead of `virsh edit` that I mentioned in the
> previous email):
> 
>$ virt-xml F21server --edit target=vda \
>--disk driver_type=raw,path=./A.raw
> 
> 
> --

Hi Kashyap,

thanks for your quick and detailed answers. Just to be complete:
does the procedure in the above-mentioned link work with my old software ?

pc59093:~ # rpm -qa|grep -iE 'libvirt|kvm'
libvirt-cim-0.5.12-0.7.16
libvirt-python-1.2.5-1.102
libvirt-client-1.2.5-15.3
kvm-1.4.2-47.1
sles-kvm_en-pdf-11.4-0.33.1
libvirt-1.2.5-15.3

Bernd
 


Re: [libvirt-users] snapshot of a raw file - how to revert ?

2018-02-15 Thread Lentes, Bernd


- On Feb 13, 2018, at 1:38 PM, Bernd Lentes 
bernd.len...@helmholtz-muenchen.de wrote:

> Hi,
> 
> i have the following system:
> 
> pc59093:~ # cat /etc/os-release
> NAME="SLES"
> VERSION="11.4"
> VERSION_ID="11.4"
> PRETTY_NAME="SUSE Linux Enterprise Server 11 SP4"
> ID="sles"
> ANSI_COLOR="0;32"
> CPE_NAME="cpe:/o:suse:sles:11:4"
> 
> pc59093:~ # uname -a
> Linux pc59093 3.0.101-84-default #1 SMP Tue Oct 18 10:32:51 UTC 2016 (15251d6)
> x86_64 x86_64 x86_64 GNU/Linux
> 
> pc59093:~ # rpm -qa|grep -iE 'libvirt|kvm'
> libvirt-cim-0.5.12-0.7.16
> libvirt-python-1.2.5-1.102
> libvirt-client-1.2.5-15.3
> kvm-1.4.2-47.1
> sles-kvm_en-pdf-11.4-0.33.1
> libvirt-1.2.5-15.3
> 
> 
> I have several guests running with raw files, which is sufficent for me. Now 
> i'd
> like to snapshot one guest because i make heavy configuration changes on it.
>>From what i read in the net is that libvirt supports snapshoting of raw files
>>when the guest is shutdown and the file of the snapshot becomes a qcow2. Right
>>?
> I try to avoid converting my raw file to a qcow2 file. I can shutdown the 
> guest
> for a certain time, that's no problem. I don't need a live snapshot.
> But how can i revert to my previous state if my configuration changes go 
> wrong ?
> Can i do this with snapshot-revert or do i have to edit the xml file and point
> the hd again to the origin raw file ?
> What i found in the net wasn't complete clear.
> 
> Thanks.
> 

Hi,

i found that: 
https://dustymabe.com/2015/01/11/qemu-img-backing-files-a-poor-mans-snapshotrollback/

I tried it and it seemed to work; although my root fs was checked after the 
commit, everything else seemed fine.
What do you think of this procedure ?
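
For reference, the steps from that article boil down to something like this
(a sketch; file names are examples, and the guest has to be shut off whenever
the images are switched or merged):

# create an empty qcow2 overlay on top of the existing raw image
qemu-img create -f qcow2 -b /var/lib/kvm/images/guest/disk0.raw -F raw /var/lib/kvm/images/guest/overlay.qcow2
# point the domain at the overlay (driver type qcow2), boot it and do the risky changes
virt-xml guest --edit target=vda --disk driver_type=qcow2,path=/var/lib/kvm/images/guest/overlay.qcow2
# rollback: point the domain back at disk0.raw and delete the overlay
# keep the changes instead: shut the guest down and merge the overlay into the base
qemu-img commit /var/lib/kvm/images/guest/overlay.qcow2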

Bernd
 


[libvirt-users] snapshot of a raw file - how to revert ?

2018-02-13 Thread Lentes, Bernd
Hi,

i have the following system:

pc59093:~ # cat /etc/os-release
NAME="SLES"
VERSION="11.4"
VERSION_ID="11.4"
PRETTY_NAME="SUSE Linux Enterprise Server 11 SP4"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:11:4"

pc59093:~ # uname -a
Linux pc59093 3.0.101-84-default #1 SMP Tue Oct 18 10:32:51 UTC 2016 (15251d6) 
x86_64 x86_64 x86_64 GNU/Linux

pc59093:~ # rpm -qa|grep -iE 'libvirt|kvm'
libvirt-cim-0.5.12-0.7.16
libvirt-python-1.2.5-1.102
libvirt-client-1.2.5-15.3
kvm-1.4.2-47.1
sles-kvm_en-pdf-11.4-0.33.1
libvirt-1.2.5-15.3


I have several guests running with raw files, which is sufficient for me. Now 
I'd like to snapshot one guest because I am making heavy configuration changes on it.
From what I read on the net, libvirt supports snapshotting of raw files 
when the guest is shut down, and the snapshot file becomes a qcow2. Right ?
I try to avoid converting my raw file to a qcow2 file. I can shut down the guest 
for a certain time, that's no problem. I don't need a live snapshot.
But how can I revert to my previous state if my configuration changes go wrong ?
Can I do this with snapshot-revert or do I have to edit the xml file and point 
the hd back to the original raw file ?
What I found on the net wasn't completely clear.

Thanks.


Bernd
-- 

Bernd Lentes 
Systemadministration 
Institut für Entwicklungsgenetik 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum münchen 
[ mailto:bernd.len...@helmholtz-muenchen.de | 
bernd.len...@helmholtz-muenchen.de ] 
phone: +49 89 3187 1241 
fax: +49 89 3187 2294 
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ] 

no backup - no mercy
 


[libvirt-users] vm running slowly in powerful host

2017-02-17 Thread Lentes, Bernd
Hi,

I have a vm with poor performance.
E.g. top needs seconds to refresh its output on the console. Same with netstat.
The guest is hosting a MySQL DB with a web frontend, and its response is poor too.
I'm looking for the culprit.

Following top in the guest i get these hints:

Memory is free enough, system is not swapping.
System has 8GB RAM and two cpu's.
Cpu 0 is struggling with a lot of software interrupts, between 50% and 80%.
Cpu1 is often waiting for IO (wa), between 0% and 20%.

No application is consuming much cpu time.

Here is an example:

top - 11:19:18 up 18:19, 11 users,  load average: 1.44, 0.94, 0.66
Tasks:  95 total,   1 running,  94 sleeping,   0 stopped,   0 zombie
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni, 20.0%id,  0.0%wa,  0.0%hi, 80.0%si,  0.0%st
Cpu1  :  1.9%us, 13.8%sy,  0.0%ni, 73.8%id, 10.5%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   7995216k total,  6385176k used,  1610040k free,   12k buffers
Swap:  2104472k total,0k used,  2104472k free,  5940884k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 6470 root  16   0 12844 1464  804 S   12  0.0   2:17.13 screen
 6022 root  15   0 41032 3052 2340 S3  0.0   1:10.99 sshd
 8322 root   0 -20 10460 4976 2268 S3  0.1  19:20.38 atop
10806 root  16   0  5540 1216  880 R0  0.0   0:00.51 top
  126 root  15   0 000 S0  0.0   0:23.33 pdflush
 3531 postgres  15   0 68616 1600  792 S0  0.0   0:41.24 postmaster

The host in which the guest runs has 96GB RAM and 8 cores.

It does not seem to do much:

top - 11:21:19 up 15 days, 15:53, 14 users,  load average: 1.40, 1.39, 1.40
Tasks: 221 total,   2 running, 219 sleeping,   0 stopped,   0 zombie
Cpu0  : 15.9%us,  2.7%sy,  0.0%ni, 81.4%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu1  :  5.0%us,  3.0%sy,  0.0%ni, 92.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu2  :  2.0%us,  0.3%sy,  0.0%ni, 97.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu3  :  0.3%us,  1.0%sy,  0.0%ni, 98.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu4  :  1.3%us,  0.3%sy,  0.0%ni, 98.3%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu5  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu6  :  0.3%us,  0.0%sy,  0.0%ni, 99.7%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Cpu7  :  0.0%us,  0.0%sy,  0.0%ni,100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem: 96738M total,13466M used,83272M free,3M buffers
Swap: 2046M total,0M used, 2046M free, 3887M cached

  PID USER  PR  NI  VIRT  RES  SHR S   %CPU %MEMTIME+  COMMAND
21765 root  20   0  105m  15m 4244 S  5  0.0   0:00.15 crm
 3180 root  20   0 8572m 8.0g 8392 S  3  8.4  62:25.73 qemu-kvm
 8529 hacluste  10 -10 90820  14m 9400 S  0  0.0  29:52.48 cib
21329 root  20   0  9040 1364  940 R  0  0.0   0:00.16 top
28439 root  20   0 000 S  0  0.0   0:04.51 kworker/4:2
1 root  20   0 10560  828  692 S  0  0.0   0:07.67 init
2 root  20   0 000 S  0  0.0   0:00.28 kthreadd
3 root  20   0 000 S  0  0.0   3:03.23 ksoftirqd/0
6 root  RT   0 000 S  0  0.0   0:05.02 migration/0
7 root  RT   0 000 S  0  0.0   0:02.82 watchdog/0
8 root  RT   0 000 S  0  0.0   0:05.18 migration/1

I think the host is not the problem.

The vm resides on a SAN which is attached via FC. The whole system is a two-node 
cluster.
The vm resides in a raw partition without a FS, which I read should be good for 
performance.
It runs slowly on the other node too. Inside the vm I have logical volumes 
(it was a physical system I migrated to a vm). The partitions are formatted 
with reiserfs
(the system is already some years old; at that time reiserfs was popular ...).

I use iostat on the guest:
This is a typical snapshot:

Device:  rrqm/s  wrqm/s   r/s   w/s  rkB/s  wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
vda        0.00    3.05  0.00  2.05   0.00  20.40     19.90      0.09  44.59  31.22   6.40
dm-0       0.00    0.00  0.00  4.55   0.00  18.20      8.00      0.24  52.31   7.74   3.52
dm-1       0.00    0.00  0.00  0.00   0.00   0.00      0.00      0.00   0.00   0.00   0.00
dm-2       0.00    0.00  0.00  0.10   0.00   0.40      8.00      0.01  92.00  56.00   0.56
dm-3       0.00    0.00  0.00  0.00   0.00   0.00      0.00      0.00   0.00   0.00   0.00
dm-4       0.00    0.00  0.00  0.00   0.00   0.00      0.00      0.00   0.00   0.00   0.00
dm-5       0.00    0.00  0.00  0.35   0.00   1.40      8.00      0.03  90.29  65.71   2.30
dm-6       0.00    0.00  0.00  0.00   0.00   0.00      0.00      0.00   0.00   0.00   0.00

vda has several partitions, one for /, one for swap, and two physical volumes 
for LVM.

Following "man iostat", the columns await and svctm  seem to be import

Re: [libvirt-users] VM's in a HA-configuration - synchronising vm config files

2016-03-02 Thread Lentes, Bernd
- On Mar 2, 2016, at 3:15 PM, Dominique Ramaekers 
dominique.ramaek...@cometal.be wrote:

>>Van: libvirt-users-boun...@redhat.com 
>>[mailto:libvirt-users-boun...@redhat.com]
>>Namens Lentes, Bernd
>>Verzonden: woensdag 2 maart 2016 15:04
>>Aan: libvirt-ML
>>Onderwerp: [libvirt-users] VM's in a HA-configuration - synchronising vm 
>>config
>>files
>>
>>Hi,
>>
>>i'd like to establish a HA-Cluster with two nodes. My services will run inside
>>vm's, the vm's are stored on a FC SAN, so every host has access to the vm's.
>>But how can i keep the config files >(xml-files under /etc/libvirt/qemu)
>>synchronised ? Is there a possibility to store the config files somewhere else
>>? E.g. a partitition with ocfs2 on the SAN ?
>>If not, what would you do ? Otherweise i'm thinking of a cron-job who
>>synchronises the file each minute with rsync.
>>
>>
>>Bernd
>>
>>--
> 
> This looks simple enough...
> 
> On host 1, virsh dumpxml for every running vm to a shared location
> On host 2, virsh define for these vm's
> 
> Do the same the other way around...

Hi Dominique,

I don't see a way to make dumpxml write directly into a file.
Or do you mean by redirection ?
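
If redirection is what you mean, I guess something like this would already do it
(a sketch; the shared path is just an example):

# on host 1: dump the config of every defined guest to shared storage
for vm in $(virsh list --all --name); do
    virsh dumpxml "$vm" > /mnt/shared/libvirt-config/"$vm".xml
done
# on host 2: (re)define the guests from those files
for f in /mnt/shared/libvirt-config/*.xml; do
    virsh define "$f"
done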


Bernd
 



[libvirt-users] VM's in a HA-configuration - synchronising vm config files

2016-03-02 Thread Lentes, Bernd
Hi, 

I'd like to establish a HA cluster with two nodes. My services will run inside 
vm's; the vm's are stored on a FC SAN, so every host has access to them. 
But how can I keep the config files (xml files under /etc/libvirt/qemu) 
synchronised ? Is there a possibility to store the config files somewhere else, 
e.g. a partition with ocfs2 on the SAN ? 
If not, what would you do ? Otherwise I'm thinking of a cron job that 
synchronises the files every minute with rsync. 
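
The cron variant would be as simple as something like this (a sketch; the
destination is just an example, one directory per node):

# crontab entry on each node
* * * * *  rsync -a --delete /etc/libvirt/qemu/ /mnt/san/libvirt-config/node1/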


Bernd 

-- 
Bernd Lentes 

Systemadministration 
institute of developmental genetics 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 (0)89 3187 1241 
fax: +49 (0)89 3187 2294 

Wer Visionen hat soll zum Hausarzt gehen 
Helmut Schmidt 

 



Re: [libvirt-users] which is the config file for a vm ?

2016-03-02 Thread Lentes, Bernd
- On Mar 1, 2016, at 5:27 PM, Michal Privoznik mpriv...@redhat.com wrote:

> On 01.03.2016 14:57, Lentes, Bernd wrote:
>> Hi,
>> 

>> Pictures you find here: 
>> https://hmgubox.helmholtz-muenchen.de:8001/d/51feb02c02/
>> I thought the xml-file in /etc/libvirt/qemu ist the only responsable one. It 
>> is
>> that one which is configured when i issue a 'edit domain' in virsh. Or ?
> 
> Yes, it's the only location where libvirt keeps inactive domain
> configurations. However, in some cases domain configuration can be fed
> in from a different source, e.g. when restoring from a file. Moreover,
> if the file is managed by libvirt (so called managed save), doing 'virsh
> start' will run domain from there rather than from a fresh config kept
> under /etc/libvirt/qemu. You can check whether domain has a managed save
> by inspecting 'virsh dominfo' output.
> Or if you restore a domain from previously saved state use 'virsh
> save-image-edit' to check MAC address.
> 
>> Where does the VMM stores the configuration of the domains ?
> 

Hi, thanks for the information. I have some other questions:
what is the right way to copy (clone) a vm ? I have one vm which I want to use 
a second time, on the same host, just with another MAC address.

And how can I copy a vm to a plain partition (without a fs) ? I read this is 
faster than having an image file on a file system.
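
From what I've read so far I would probably try something like this (just a
sketch, names and paths are examples, both steps with the vm shut off), but I'm
not sure it's the right way:

# clone an existing guest; a new uuid and MAC address are generated automatically
virt-clone --original MausDB --name MausDB-copy --auto-clone

# copy a guest image onto a plain partition or logical volume
qemu-img convert -O raw /var/lib/kvm/images/MausDB/disk0.raw /dev/vg_guests/mausdb_copy
# afterwards the <disk> element of the new domain has to point to that block device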

Bernd
 



[libvirt-users] which is the config file for a vm ?

2016-03-01 Thread Lentes, Bernd
Hi, 

i have a weird problem. I have a vm (KVM) which seems to run fine. I believe 
the respective config file for this vm is /etc/libvirt/qemu/MausDB.xml. This is 
it:

=============================================================

<domain type='kvm'>
  <name>MausDB</name>
  <uuid>d4c7956c-b57f-967a-0454-99835a3a740b</uuid>
  <memory>2353792</memory>
  <currentMemory>2353792</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type>hvm</type>
  </os>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/kvm/images/MausDB/disk0.raw'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    ...
    <interface ...>
      <mac address='52:54:00:37:92:03'/>
      ...
    </interface>
    ...
  </devices>
</domain>

=============================================================

As you see, the vm has one NIC. Its MAC-Address is: '52:54:00:37:92:03'.
I also see that MAC when i edit the config via virsh.

But when i boot that vm, it has a nic with another MAC: '52:54:00:37:92:B2' ??? 
lspci shows me just one nic in the vm.
This MAC-Address is also visible in the Virtual Machine Manager.
Pictures you find here: 
https://hmgubox.helmholtz-muenchen.de:8001/d/51feb02c02/ 
I thought the xml file in /etc/libvirt/qemu is the only responsible one. It is 
the one which is changed when I issue 'edit domain' in virsh. Or ?
Where does the VMM store the configuration of the domains ?
I found another xml: /var/run/libvirt/qemu/MausDB.xml . Inside it there is the 
MAC the booted vm has. What is the purpose of this xml ?
Also ps inside the host shows the MAC which is in the booted vm:

root 28237  4.8  2.4 2886084 2416116 ? Sl   Feb29  55:16 
/usr/bin/qemu-kvm -name MausDB -S -machine pc-i440fx-1.4,accel=kvm,usb=off -m 
2299 -smp 2,sockets=2,cores=1,threads=1 -uuid 
d4c7956c-b57f-967a-0454-99835a3a740b -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/MausDB.monitor,server,nowait 
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive 
file=/var/lib/kvm/images/MausDB/disk0.raw,if=none,id=drive-virtio-disk0,format=raw
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -drive if=none,id=drive-ide0-0-0,readonly=on,format=raw -device 
ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 -netdev 
tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device 
virtio-net-pci,netdev=hostnet0,id=net0, 
mac=52:54:00:37:92:b2,bus=pci.0,addr=0x3 -vnc 127.0.0.1:0 -vga cirrus -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5


Can anyone help sorting this out ?


Bernd

-- 
Bernd Lentes 

Systemadministration 
institute of developmental genetics 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 (0)89 3187 1241 
fax: +49 (0)89 3187 2294 

Wer Visionen hat soll zum Hausarzt gehen 
Helmut Schmidt
 


Re: [libvirt-users] snapshot of running vm's

2015-12-04 Thread Lentes, Bernd
Dominique wrote:
> 
> Never had that problem. Can it be a setting of the guest agent on the
> guest? With me all the commands of the ga are enabled...
> 
Where can i change the settings of the agent ?

> You can check the commands by using this:
> virsh qemu-agent-command $VM '{"execute": "guest-info"}'
> 
> This way you can at least check that the command is really disabled.
You'll
> have to find a way to enable the command then...
> 
> I can't help you further with this...
> 
> >
> > I managed to update to libvirt 1.2.11. Guest Agent is running in the
vm.
> > I'm trying to snapshot:
> >
> > virsh # snapshot-create-as --domain sles11 --name sn_sles11 --atomic
> > --disk- only --quiesce
> > error: internal error: unable to execute QEMU agent command
> > 'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been
> > disabled for this instance
> >
> > What is now going on ?
> > Guest Agent in VM is qemu-guest-agent-2.0.2-1.35.
> >

virsh # qemu-agent-command sles11 '{"execute": "guest-info"}'

{"return":{"version":"2.0.2","supported_commands":[{"enabled":true,"name":
"guest-set-vcpus","success-response":true},{"enabled":true,"name":"guest-g
et-vcpus"
,"success-response":true},{"enabled":true,"name":"guest-network-get-interf
aces","success-response":true},{"enabled":true,"name":"guest-suspend-hybri
d","succe
ss-response":false},{"enabled":true,"name":"guest-suspend-ram","success-re
sponse":false},{"enabled":true,"name":"guest-suspend-disk","success-respon
se":false
},{"enabled":true,"name":"guest-fstrim","success-response":true},{"enabled
":true,"name":"guest-fsfreeze-thaw","success-response":true},
{"enabled":true,"name":"guest-fsfreeze-freeze","success-response":true}
,{"enabled":true,"name":"guest-fsfreeze-status","success-response":true},{
"enabled":true,"name":"guest-file-
flush","success-response":true},{"enabled":true,"name":"guest-file-seek","
success-response":true},{"enabled":true,"name":"guest-file-write","success
-response
":true},{"enabled":true,"name":"guest-file-read","success-response":true},
{"enabled":true,"name":"guest-file-close","success-response":true},{"enabl
ed":true,
"name":"guest-file-open","success-response":true},{"enabled":true,"name":"
guest-shutdown","success-response":false},{"enabled":true,"name":"guest-in
fo","succ
ess-response":true},{"enabled":true,"name":"guest-set-time","success-respo
nse":true},{"enabled":true,"name":"guest-get-time","success-response":true
},{"enabl
ed":true,"name":"guest-ping","success-response":true},{"enabled":true,"nam
e":"guest-sync","success-response":true},{"enabled":true,"name":"guest-syn
c-delimit
ed","success-response":true}]}}

It shows:
"...{"enabled":true,"name":"guest-fsfreeze-freeze","success-response":true}..."

I understand this to mean that it should work.


Bernd
   



Re: [libvirt-users] snapshot of running vm's

2015-12-03 Thread Lentes, Bernd
Dominique wrote:
> 
> As stated in http://wiki.libvirt.org/page/Live-disk-backup-with-active-
> blockcommit
> You'll need libvirt 1.2.9 ...
> 
> It seems even Suse enterprise 12 is still using libivrt 1.2.5...
> 
> Bad luck.
> 
> Before 1.2.9 I made my backups with a "virsh save $VM" but it cost a lot
> of time to save a VM...
> 

I managed to update to libvirt 1.2.11. Guest Agent is running in the vm.
I'm trying to snapshot:

virsh # snapshot-create-as --domain sles11 --name sn_sles11 --atomic
--disk-only --quiesce
error: internal error: unable to execute QEMU agent command
'guest-fsfreeze-freeze': The command guest-fsfreeze-freeze has been
disabled for this instance

What is now going on ?
Guest Agent in VM is qemu-guest-agent-2.0.2-1.35.


Bernd
   



Re: [libvirt-users] snapshot of running vm's

2015-12-03 Thread Lentes, Bernd
Dominique wrote:
> 
> 

Having this configuration in the xml it worked:

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/sles11.org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='2'/>
</channel>

The file exists now:

srwxrwxr-x 1 root root 0 Dec  3 14:15
/var/lib/libvirt/qemu/channel/target/sles11.org.qemu.guest_agent.0

I had to add the directories channel and target. VM is starting fine,
snapshot is possible.
But still some slight problems:

virsh # snapshot-create-as --domain sles11 --name sn_sles11 --atomic
--disk-only --live --quiesce
error: Operation not supported: live snapshot creation is supported only
with external checkpoints

I thought --disk-only would create external snapshots ?

virsh # snapshot-create-as --domain sles11 --name sn_sles11 --atomic
--disk-only --live --quiesce --diskspec vdb,snapshot=external
error: Operation not supported: live snapshot creation is supported only
with external checkpoints

Hm. Specifying external explicitely also does not help.

virsh # snapshot-create-as --domain sles11 --name sn_sles11 --atomic
--disk-only --quiesce --diskspec
vdb,snapshot=external,file=/var/lib/kvm/images/sles11/sn_disk0.qcow2
Domain snapshot sn_sles11 created

Omitting --live is the key. But I thought I need it because i'm
snapshotting a running vm ?


> > I forgot: I'm still running libvirt 1.2.5. Do I need to update also
> > for this problem ?
> I don't think so... Until you want to use active block commit, your good
> with 1.2.5
> 
I still have 1.2.5.

I tried to blockcommit:

virsh # blockcommit sles11 vdb --path
/var/lib/kvm/images/sles11/sn_disk0.qcow2 --wait --verbose --delete
error: option --path already seen

what does that mean ?
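
My guess: vdb already fills the positional <path> argument of blockcommit, so
--path is then seen a second time. Presumably virsh wants either the one or the
other, e.g.:

virsh blockcommit sles11 vdb --wait --verbose
# or
virsh blockcommit sles11 --path /var/lib/kvm/images/sles11/sn_disk0.qcow2 --wait --verbose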


virsh # blockcommit sles11 --path
/var/lib/kvm/images/sles11/sn_disk0.qcow2 --wait --verbose --delete
error: unsupported flags (0x2) in function qemuDomainBlockCommit

virsh # blockcommit sles11 --path
/var/lib/kvm/images/sles11/sn_disk0.qcow2 --wait --verbose
error: Operation not supported: committing the active layer not supported
yet

Ok. Omitting --delete removes one error, but now I have another one. What does
"not supported yet" mean ? Does that mean I have software which offers
blockcommit in the help
but does not implement it completely ?

virsh # blockcommit sles11 --path /var/lib/kvm/images/sles11/sn_disk0.raw
--wait --verbose
error: invalid argument: No device found for specified path

Which path do I have to provide ? The one to the base or the one to the
snapshot ? I tried both, but neither works.

Do I have all these problems because I'm using 1.2.5 ? That's the official
version of libvirt which is included by SuSE for SLES 11 SP4. But it's
lacking functionality which is offered in the help ?
Oh my god.


Bernd

   



Re: [libvirt-users] snapshot of running vm's

2015-12-03 Thread Lentes, Bernd
Dominique wrote:

> -Original Message-
> From: Dominique Ramaekers
> [mailto:dominique.ramaek...@cometal.be]
> Sent: Thursday, December 03, 2015 9:46 AM
> To: Lentes, Bernd
> Subject: RE: snapshot of running vm's
> 
> 
> 
> > -Oorspronkelijk bericht-
> > Van: Lentes, Bernd [mailto:bernd.len...@helmholtz-muenchen.de]
> > Verzonden: woensdag 2 december 2015 21:22
> > Aan: Dominique Ramaekers; libvirt-ML
> > Onderwerp: RE: snapshot of running vm's
> >
> ...
> >
> > Hi,
> >
> > i have inserted:
> >
> > 
> >>
> path='/var/lib/libvirt/qemu/channel/target/sles11.org.qemu.guest_age
> nt
> > .0'/
> > >
> >   
> >   
> > 
> >
> >
> > I didn't insert the path, it was added automatically, the same with
> > "".
> > I tried already port 1 and 2, but get this error:
> >
> > sunhb58820:~/libvirt_1.2.11 # virsh start sles11
> > error: Failed to start domain sles11
> > error: internal error: process exited while connecting to monitor:
> > qemu-kvm: -chardev
> >
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/sles1
> > 1.or
> > g.qemu.guest_agent.0,server,nowait: Failed to bind socket: No such
> > file or directory
> > chardev: opening backend "socket" failed
> 
> Does the path /var/lib/libvirt/qemu/channel/target/ exists?
> Is libvirt able to write to this path?
> 
Hi Dominique,

no, the path does not exist. But from where does libvirt derive it ?


Bernd

   



Re: [libvirt-users] snapshot of running vm's

2015-12-03 Thread Lentes, Bernd
Dominique wrote:

> -Original Message-
> From: Dominique Ramaekers
> [mailto:dominique.ramaek...@cometal.be]
> Sent: Thursday, December 03, 2015 9:46 AM
> To: Lentes, Bernd
> Subject: RE: snapshot of running vm's
> 
> 
> 
> > -Oorspronkelijk bericht-
> > Van: Lentes, Bernd [mailto:bernd.len...@helmholtz-muenchen.de]
> > Verzonden: woensdag 2 december 2015 21:22
> > Aan: Dominique Ramaekers; libvirt-ML
> > Onderwerp: RE: snapshot of running vm's
> >
> ...
> >
> > Hi,
> >
> > i have inserted:
> >
> > 
> >>
> path='/var/lib/libvirt/qemu/channel/target/sles11.org.qemu.guest_age
> nt
> > .0'/
> > >
> >   
> >   
> > 
> >
> >
> > I didn't insert the path, it was added automatically, the same with
> > "".
> > I tried already port 1 and 2, but get this error:
> >
> > sunhb58820:~/libvirt_1.2.11 # virsh start sles11
> > error: Failed to start domain sles11
> > error: internal error: process exited while connecting to monitor:
> > qemu-kvm: -chardev
> >
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/sles1
> > 1.or
> > g.qemu.guest_agent.0,server,nowait: Failed to bind socket: No such
> > file or directory
> > chardev: opening backend "socket" failed
> 
> Does the path /var/lib/libvirt/qemu/channel/target/ exists?
> Is libvirt able to write to this path?
> 
Hi Dominique,

no, the path does not exist. But from where does libvirt derive it ?

I forgot: I'm still running libvirt 1.2.5. Do I need to update also for
this problem ?


Bernd

   



Re: [libvirt-users] snapshot of running vm's

2015-12-02 Thread Lentes, Bernd
Dominique wrote:

> > virsh # snapshot-create --domain sles11 --atomic --disk-only --quiesce
> > error: argument unsupported: QEMU guest agent is not configured The
> > system I'm testing with is SLES11 SP4 (host and guest).  I installed
the
> guest agent:
> >
> > vm58820-8:~ # rpm -q qemu-guest-agent
> > qemu-guest-agent-2.0.2-1.35
> > Is there something I have to configure ?
> Yes, you have to update the guest xml with 'virsh edit $VM' and enter
> these lines in the devices section 
>
> 
> 
> After this, it's possible the qemu guest agent is still not working,
edit the
> xml again and change the port number in the line to a free port (you'll
> see that for example spice is using the same controller, bus and
port...)
> 

Hi,

i have inserted:

<channel type='unix'>
  <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/sles11.org.qemu.guest_agent.0'/>
  <target type='virtio' name='org.qemu.guest_agent.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>


I didn't insert the path, it was added automatically, the same with
"".
I tried already port 1 and 2, but get this error:

sunhb58820:~/libvirt_1.2.11 # virsh start sles11
error: Failed to start domain sles11
error: internal error: process exited while connecting to monitor:
qemu-kvm: -chardev
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/sles11.or
g.qemu.guest_agent.0,server,nowait: Failed to bind socket: No such file or
directory
chardev: opening backend "socket" failed


Bernd
   



Re: [libvirt-users] snapshot of running vm's

2015-12-02 Thread Lentes, Bernd
Dominique wrote:

> -Original Message-
> From: Dominique Ramaekers
> [mailto:dominique.ramaek...@cometal.be]
> Sent: Wednesday, December 02, 2015 1:34 PM
> To: Lentes, Bernd; libvirt-ML
> Subject: RE: snapshot of running vm's
> 
> 
> 
> > -Oorspronkelijk bericht-
> > Van: libvirt-users-boun...@redhat.com [mailto:libvirt-users-
> > boun...@redhat.com] Namens Lentes, Bernd
> > Verzonden: dinsdag 1 december 2015 16:31
> > Aan: libvirt-ML
> > Onderwerp: [libvirt-users] snapshot of running vm's
> >
> > Hi,
> >
> > i'd like to create snapshots of my running vm's. I have several hosts
> > with
> > SLES11 SP4 64bit. I use libvirt 1.2.5-7.1 . VM's are Windows 7, SLES,
> > Ubuntu, Opensuse.
> > I use raw files for the vm's.
> > I try to orientate myself by
> > http://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
> .
> > The hosts are backuped every night by a network based backup
> solution
> > (Legato).
> > My idea is:
> >
> > - delete any existing snapshot
> > - create a new snapshot
> > - of course before Legato arrives
> > - then Legato might come and saves my snapshot
> > - next night the same
> >
> >
> > Questions:
> > When I delete a snapshot, is it merged automatically with its base
> image ?
> No
> 
> > Or is it something I have to do explicitly (maybe via blockcommit) ?
> When using external snapshots, the procedure works like this:
> 1. Create external snapshot => a new file is created ex. 'image.qcow2'
> 2. Backup the original file ex. 'image.raw' (this file wil not change
over
> time. All writes are done on image.qcow2).
> 3. Active block commit (commits all writes done on image.qcow2 to
> image.raw and activate image.raw) 4. delete image.qcow2
> 
> If you do a test, you can check which file is in use by 'virsh
domblklist
> $VM'
> 
> > How can I get rid of the unused snapshot afterwards ? just rm ?
> > Because I'm doing a live backup do I need to create an external
> snapshot ?
> > To create an external snapshot do I have to provide --disk-only ?
> Yes. And it's advisible to also use --quiesce to flush the guests disk
cache
> (the guest agent needs to be installed and running on the guest) And
> optional use --no-metadata to make shure libvirt isn't following up on
> these snapshots. This way you can just remove the image.qcow2 with rm
> 
> > Do I have to provide --live as a parameter ?
> Yes, the snapshot is done on a live guest/disk
> 
> > I tried --quiesce but got that message:
> >
> > virsh # snapshot-create --domain sles11 --atomic --disk-only --quiesce
> > error: argument unsupported: QEMU guest agent is not configured The
> > system I'm testing with is SLES11 SP4 (host and guest).  I installed
the
> guest agent:
> >
> > vm58820-8:~ # rpm -q qemu-guest-agent
> > qemu-guest-agent-2.0.2-1.35
> > Is there something I have to configure ?
> Yes, you have to update the guest xml with 'virsh edit $VM' and enter
> these lines in the devices section 
>
> 
> 
> After this, it's possible the qemu guest agent is still not working,
edit the
> xml again and change the port number in the line to a free port (you'll
> see that for example spice is using the same controller, bus and
port...)
> 
> 
> 
> 
> >
> > Is this way of backup a good solution ? Is there something to improve,
> This way you backup disk state. Most of the times this will be great.
For
> some applications, this isn't sufficient.
> For instance, a Autodesk Vault Server has a SQL database and a separate
> file storage. Using Active block commit to backup will not guarantee
data
> concistancy. That's why Autodesk has its own 'best practice'. Here I
> initiate the Autodesk procedure through a powershell script (I connect
to
> the guest with telnet).
> 
> 
> > something I have to take care of ?
> > Is this backup consistent ?
> > Does that all work with libvirt 1.2.5-7.1 or do I need a more recent
> > version
> You'll need libvirt 1.2.9 or more and Qemu 2.1 (with higher qemu's I
have
> experienced some trouble...)
> 
> 
> > (what I don't like because I tried already to update my libvirt but
> > had tons of
> > dependencies) ?

Ok. I have to try.

> >
> > I know. A lot of questions. But it's a backup, and I wanted to be sure
> > that it operates properly.
> >
> > Thanks.
> >
> >

Hi Dominique,

thanks for your very clear and detailed answer.


Bernd
   



[libvirt-users] snapshot of running vm's

2015-12-01 Thread Lentes, Bernd
Hi,

i'd like to create snapshots of my running vm's. I have several hosts with
SLES11 SP4 64bit. I use libvirt 1.2.5-7.1 . VM's are Windows 7, SLES,
Ubuntu, Opensuse.
I use raw files for the vm's.
I try to orientate myself by
http://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit .
The hosts are backed up every night by a network-based backup solution
(Legato).
My idea is:

- delete any existing snapshot
- create a new snapshot
- of course before Legato arrives
- then Legato might come and saves my snapshot
- next night the same


Questions:
When I delete a snapshot, is it merged automatically with its base image ?
Or is it something I have to do explicitly (maybe via blockcommit) ?
How can I get rid of the unused snapshot afterwards ? just rm ?
Because I'm doing a live backup do I need to create an external snapshot ?
To create an external snapshot do I have to provide --disk-only ?
Do I have to provide --live as a parameter ?
I tried --quiesce but got that message:

virsh # snapshot-create --domain sles11 --atomic --disk-only --quiesce
error: argument unsupported: QEMU guest agent is not configured
The system I'm testing with is SLES11 SP4 (host and guest).  I installed
the guest agent:

vm58820-8:~ # rpm -q qemu-guest-agent
qemu-guest-agent-2.0.2-1.35
Is there something I have to configure ?

Is this way of backup a good solution ? Is there something to improve,
something I have to take care of ?
Is this backup consistent ?
Does that all work with libvirt 1.2.5-7.1 or do I need a more recent
version (which I'd rather avoid, because I already tried to update my libvirt
and ran into tons of dependencies) ?

I know. A lot of questions. But it's a backup, and I wanted to be sure
that it operates properly.

Thanks.


Bernd
--
Bernd Lentes

Systemadministration
institute of developmental genetics
Gebäude 35.34 - Raum 208
HelmholtzZentrum München
bernd.len...@helmholtz-muenchen.de
phone: +49 (0)89 3187 1241
fax: +49 (0)89 3187 2294

Wer Visionen hat soll zum Hausarzt gehen
Helmut Schmidt
   



[libvirt-users] virsh uses internally qemu-img ?

2015-11-29 Thread Lentes, Bernd
Hi,

i read that virsh internally uses qemu-img 
(http://serverfault.com/questions/692435/qemu-img-snapshot-on-live-vm).
Is that true ? So is snapshotting a running vm with virsh the same as doing it with qemu-img ?
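
As far as I understand it the two are not interchangeable: for a running vm
virsh goes through the qemu monitor, while qemu-img may only be pointed at an
image that no running guest is using. A small sketch of the difference (domain
and image names are made up; internal snapshots also require qcow2, not raw):

# running guest: the snapshot is taken through libvirt / the qemu monitor
virsh snapshot-create-as mydomain snap1
# shut-off guest: qemu-img writes an internal snapshot directly into the file
qemu-img snapshot -c snap1 /var/lib/libvirt/images/mydomain.qcow2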

Bernd
-- 
Bernd Lentes 

Systemadministration 
institute of developmental genetics 
Gebäude 35.34 - Raum 208 
HelmholtzZentrum München 
bernd.len...@helmholtz-muenchen.de 
phone: +49 (0)89 3187 1241 
fax: +49 (0)89 3187 2294 

Wer Visionen hat soll zum Hausarzt gehen 
Helmut Schmidt
   


[libvirt-users] shutdown windows 7 vm although someone is logged on via RemoteDesktop

2015-11-03 Thread Lentes, Bernd
Hi,

how can i shut down a Windows 7 vm although someone is logged on to that vm
via Remote Desktop ?
Currently libvirt waits 5 min for the vm to shut down and then switches it
off.
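
One idea, assuming the qemu guest agent service is installed inside the Windows
guest (it is not there by default): ask the agent instead of sending an ACPI
power-button event, since an agent-initiated shutdown is, as far as I know, not
blocked by a logged-on Remote Desktop session:

virsh shutdown win7 --mode agent   # "win7" is a placeholder domain name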


Bernd

--
Bernd Lentes

Systemadministration
institute of developmental genetics
Gebäude 35.34 - Raum 208
HelmholtzZentrum München
bernd.len...@helmholtz-muenchen.de
phone: +49 (0)89 3187 1241
fax: +49 (0)89 3187 2294

Wer Visionen hat soll zum Hausarzt gehen
Helmut Schmidt

   



Re: [libvirt-users] still possible to use traditional bridge network setup ?

2015-03-20 Thread Lentes, Bernd
Bernd wrote:


> -Original Message-
> From: libvirt-users-boun...@redhat.com [mailto:libvirt-users-
> boun...@redhat.com] On Behalf Of Lentes, Bernd
> Sent: Thursday, March 19, 2015 5:12 PM
> To: libvirt-users@redhat.com
> Subject: Re: [libvirt-users] still possible to use traditional bridge network
> setup ?
>
> Laine wrote:
>
>

...

>
> Hi Laine,
>
> the reason was the firewall. Thanks for your tip !
>
>

Hi,

now the more precise explanation:
I booted the host with a normal eth0 and nothing else. Firewall rules were 
evaluated. I created and configured the bridge. After that "systemctl restart 
network". Everything worked as expected.
I configured the vm to use the bridge and started it. The vm has an eth, but no 
ip, no route, no nameserver. "sysctl net.bridge.bridge-nf-call-iptables" returned 1. 
I didn't change it. Then I restarted the firewall ! After that I have a new 
rule (and the network is running):
" Chain FORWARD (policy DROP 0 packets, 0 bytes)
 pkts bytes target prot opt in out source   destination
34148 4651K ACCEPT all  --  *  *   0.0.0.0/00.0.0.0/0   
 PHYSDEV match --physdev-is-bridged
0 0 LOGall  --  *  *   0.0.0.0/00.0.0.0/0   
 limit: avg 3/min burst 5 LOG flags 6 level 4 prefix 
"SFW2-FWD-ILL-ROUTING"

man iptables-extensions says:
" physdev:  This module matches on the bridge port input and output devices 
enslaved to a bridge device. This module is a  part  of  the  infrastructure  
that
enables a transparent bridging IP firewall and is only useful for kernel 
versions above version 2.5.44."

and further more:
" --physdev-is-bridged: Matches if the packet is being bridged and therefore is 
not being routed.  This is only useful in the FORWARD and POSTROUTING chains."

When I booted the host for the 1st time, the bridge didn't exist, so there was no 
firewall rule for the bridge. After creating the bridge and restarting the 
firewall, it recognized the bridge and dynamically created this rule. I didn't 
change "net.bridge.bridge-nf-call-iptables". It is still 1.
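
So the two usual ways to deal with this boil down to the following sketch
(the second variant matches the rule the firewall generated here):

# either let bridged frames bypass iptables completely ...
sysctl -w net.bridge.bridge-nf-call-iptables=0
# ... or keep it at 1 and explicitly accept bridged frames in FORWARD:
iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT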

Bernd



Re: [libvirt-users] still possible to use traditional bridge network setup ?

2015-03-19 Thread Lentes, Bernd
Laine wrote:


> -Original Message-
> From: sendmail [mailto:justsendmailnothinge...@gmail.com] On Behalf
> Of Laine Stump
> Sent: Tuesday, March 17, 2015 3:57 PM
> To: libvirt-users@redhat.com
> Cc: Lentes, Bernd
> Subject: Re: [libvirt-users] still possible to use traditional bridge network
> setup ?
>
> On 03/16/2015 01:07 PM, Lentes, Bernd wrote:
> > Bernd wrote:
> >
> >> Laine wrote:
> >>
> >>> -Original Message-
> >>> From: sendmail [mailto:justsendmailnothinge...@gmail.com] On
> >> Behalf Of
> >>> Laine Stump
> >>> Sent: Monday, March 16, 2015 4:12 PM
> >>> To: libvirt-users@redhat.com
> >>> Cc: Lentes, Bernd
> >>> Subject: Re: [libvirt-users] still possible to use traditional
> >>> bridge network setup ?
> >>>
> >>> On 03/16/2015 10:08 AM, Lentes, Bernd wrote:
> >>>> Hi,
> >>>>
> >>>> i'm currently installing a SLES 12 64bit system.
> >>>> libvirt-client-1.2.5-
> >>> 13.3.x86_64 and libvirt-daemon-1.2.5-13.3.x86_64.
> >>>> Formerly I created my vm's (KVM) using a traditional bridge in my
> >>>> host
> >>> systems, mostly SLES 11 SP3.
> >>>> But with SLES 12 I don't succeed. I can use the macvtap device in
> >>>> the
> >>> host, but I like to be able to communicate between host and guest.
> >>>> Is the traditional bridge setup not any longer available ?
> >>> Nothing has been removed in libvirt. Traditional bridges work just
> fine.
> >>> What failure did you see?
> >> Hi Laine,
> >>
> >> thank you for your answer. Well, it simply does not work:
> >>
> >> this is my setup:
> >>
> >> pc63422:/etc/sysconfig/network # cat ifcfg-br0 BOOTPROTO='dhcp4'
> >> TYPE='Bridge'
> >> BRIDGE='yes'
> >> DEVICE='br0'
> >> BRIDGE_FORWARDDELAY='0'
> >> BRIDGE_PORTS='eth0'
> >> BRIDGE_STP='off'
> >> BROADCAST=''
> >> DHCLIENT_SET_DEFAULT_ROUTE='yes'
> >> ETHTOOL_OPTIONS=''
> >> IPADDR=''
> >> MTU=''
> >> NAME=''
> >> NETMASK=''
> >> NETWORK=''
> >> REMOTE_IPADDR=''
> >> STARTMODE='auto'
> >> USERCONTROL='no
> >>
> >> pc63422:/etc/sysconfig/network # cat ifcfg-eth0 #
> BOOTPROTO='dhcp'
> >> BROADCAST=''
> >> ETHTOOL_OPTIONS=''
> >> IPADDR=''
> >> MTU=''
> >> NAME=''
> >> NETMASK=''
> >> NETWORK=''
> >> REMOTE_IPADDR=''
> >> STARTMODE='auto'
> >> DHCLIENT_SET_DEFAULT_ROUTE='yes'
> >> PREFIXLEN=''
> >> BOOTPROTO='static'
> >> USERCONTROL='no'
> >> BRIDGE='br0'
> >>
> >>
> >> guest.xml:
> >> ...
> >> 
> >>   
> >>   
> >>   
> >>>> function='0x0'/>
> >> 
> >> ...
> >>
> >> pc63422:/etc/sysconfig/network # ip addr
> >> 1: lo:  mtu 65536 qdisc noqueue state
> UNKNOWN
> >> group default
> >> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >> inet 127.0.0.1/8 scope host lo
> >>valid_lft forever preferred_lft forever
> >> inet6 ::1/128 scope host
> >>valid_lft forever preferred_lft forever
> >> 2: eth0:  mtu 1500 qdisc
> pfifo_fast
> >> master br0 state UP group default qlen 1000
> >> link/ether 78:24:af:9c:bd:a6 brd ff:ff:ff:ff:ff:ff
> >> inet6 fe80::7a24:afff:fe9c:bda6/64 scope link
> >>valid_lft forever preferred_lft forever
> >> 27: br0:  mtu 1500 qdisc
> noqueue
> >> state UP group default
> >> link/ether 78:24:af:9c:bd:a6 brd ff:ff:ff:ff:ff:ff
> >> inet 10.35.34.115/24 brd 10.35.34.255 scope global br0
> >>valid_lft forever preferred_lft forever
> >> inet6 fe80::7a24:afff:fe9c:bda6/64 scope link
> >>valid_lft forever preferred_lft forever
> >> 28: vnet0:  mtu 1500 qdisc
> >> pfifo_fast master br0 state UNKNOWN group default qlen 500
> >> link/ether fe:54:00:37:92:b1 brd ff:ff:ff:ff:ff

Re: [libvirt-users] still possible to use traditional bridge network setup ?

2015-03-16 Thread Lentes, Bernd
Bernd wrote:

>
> Laine wrote:
>
> > -Original Message-
> > From: sendmail [mailto:justsendmailnothinge...@gmail.com] On
> Behalf Of
> > Laine Stump
> > Sent: Monday, March 16, 2015 4:12 PM
> > To: libvirt-users@redhat.com
> > Cc: Lentes, Bernd
> > Subject: Re: [libvirt-users] still possible to use traditional bridge
> > network setup ?
> >
> > On 03/16/2015 10:08 AM, Lentes, Bernd wrote:
> > > Hi,
> > >
> > > i'm currently installing a SLES 12 64bit system.
> > > libvirt-client-1.2.5-
> > 13.3.x86_64 and libvirt-daemon-1.2.5-13.3.x86_64.
> > > Formerly I created my vm's (KVM) using a traditional bridge in my
> > > host
> > systems, mostly SLES 11 SP3.
> > > But with SLES 12 I don't succeed. I can use the macvtap device in
> > > the
> > host, but I like to be able to communicate between host and guest.
> > > Is the traditional bridge setup not any longer available ?
> >
> > Nothing has been removed in libvirt. Traditional bridges work just fine.
> > What failure did you see?
>
> Hi Laine,
>
> thank you for your answer. Well, it simply does not work:
>
> this is my setup:
>
> pc63422:/etc/sysconfig/network # cat ifcfg-br0 BOOTPROTO='dhcp4'
> TYPE='Bridge'
> BRIDGE='yes'
> DEVICE='br0'
> BRIDGE_FORWARDDELAY='0'
> BRIDGE_PORTS='eth0'
> BRIDGE_STP='off'
> BROADCAST=''
> DHCLIENT_SET_DEFAULT_ROUTE='yes'
> ETHTOOL_OPTIONS=''
> IPADDR=''
> MTU=''
> NAME=''
> NETMASK=''
> NETWORK=''
> REMOTE_IPADDR=''
> STARTMODE='auto'
> USERCONTROL='no
>
> pc63422:/etc/sysconfig/network # cat ifcfg-eth0 # BOOTPROTO='dhcp'
> BROADCAST=''
> ETHTOOL_OPTIONS=''
> IPADDR=''
> MTU=''
> NAME=''
> NETMASK=''
> NETWORK=''
> REMOTE_IPADDR=''
> STARTMODE='auto'
> DHCLIENT_SET_DEFAULT_ROUTE='yes'
> PREFIXLEN=''
> BOOTPROTO='static'
> USERCONTROL='no'
> BRIDGE='br0'
>
>
> guest.xml:
> ...
> 
>   
>   
>   
>function='0x0'/>
> 
> ...
>
> pc63422:/etc/sysconfig/network # ip addr
> 1: lo:  mtu 65536 qdisc noqueue state
> UNKNOWN group default
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
>valid_lft forever preferred_lft forever
> inet6 ::1/128 scope host
>valid_lft forever preferred_lft forever
> 2: eth0:  mtu 1500 qdisc
> pfifo_fast master br0 state UP group default qlen 1000
> link/ether 78:24:af:9c:bd:a6 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::7a24:afff:fe9c:bda6/64 scope link
>valid_lft forever preferred_lft forever
> 27: br0:  mtu 1500 qdisc
> noqueue state UP group default
> link/ether 78:24:af:9c:bd:a6 brd ff:ff:ff:ff:ff:ff
> inet 10.35.34.115/24 brd 10.35.34.255 scope global br0
>valid_lft forever preferred_lft forever
> inet6 fe80::7a24:afff:fe9c:bda6/64 scope link
>valid_lft forever preferred_lft forever
> 28: vnet0:  mtu 1500 qdisc
> pfifo_fast master br0 state UNKNOWN group default qlen 500
> link/ether fe:54:00:37:92:b1 brd ff:ff:ff:ff:ff:ff
> inet6 fe80::fc54:ff:fe37:92b1/64 scope link
>valid_lft forever preferred_lft forever
>
>
> Attached is what I choose during creation of the vm. It's german,
> network source means something like "name of the shared device".
>
> I attached the host to a network hub to be able to sniff all packets. But I
> don't see any packet from the guest. E.g. I choosed "dhclient" after
> booting a knoppix cd in the guest, but no packet from the guest. Also
> using a windows 7 installation cd - no packet from the guest. But I see
> packets from the host.

Hi,

using the above mentioned setup, i booted the guest from a Knoppix cd. Inside 
the guest I configured the ip address statically. I can ping from host to guest 
and vice versa, but nothing else. Of course the host can reach everything, but 
the guest only reaches the host.


Bernd


Re: [libvirt-users] still possible to use traditional bridge network setup ?

2015-03-16 Thread Lentes, Bernd
Laine wrote:

> -Original Message-
> From: sendmail [mailto:justsendmailnothinge...@gmail.com] On Behalf
> Of Laine Stump
> Sent: Monday, March 16, 2015 4:12 PM
> To: libvirt-users@redhat.com
> Cc: Lentes, Bernd
> Subject: Re: [libvirt-users] still possible to use traditional bridge network
> setup ?
>
> On 03/16/2015 10:08 AM, Lentes, Bernd wrote:
> > Hi,
> >
> > i'm currently installing a SLES 12 64bit system. libvirt-client-1.2.5-
> 13.3.x86_64 and libvirt-daemon-1.2.5-13.3.x86_64.
> > Formerly I created my vm's (KVM) using a traditional bridge in my host
> systems, mostly SLES 11 SP3.
> > But with SLES 12 I don't succeed. I can use the macvtap device in the
> host, but I like to be able to communicate between host and guest.
> > Is the traditional bridge setup not any longer available ?
>
> Nothing has been removed in libvirt. Traditional bridges work just fine.
> What failure did you see?

Hi Laine,

thank you for your answer. Well, it simply does not work:

this is my setup:

pc63422:/etc/sysconfig/network # cat ifcfg-br0
BOOTPROTO='dhcp4'
TYPE='Bridge'
BRIDGE='yes'
DEVICE='br0'
BRIDGE_FORWARDDELAY='0'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
BROADCAST=''
DHCLIENT_SET_DEFAULT_ROUTE='yes'
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
USERCONTROL='no

pc63422:/etc/sysconfig/network # cat ifcfg-eth0
# BOOTPROTO='dhcp'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR=''
MTU=''
NAME=''
NETMASK=''
NETWORK=''
REMOTE_IPADDR=''
STARTMODE='auto'
DHCLIENT_SET_DEFAULT_ROUTE='yes'
PREFIXLEN=''
BOOTPROTO='static'
USERCONTROL='no'
BRIDGE='br0'


guest.xml:
...
[interface definition stripped by the list archive: an <interface type='bridge'>
element with mac, source bridge='br0', model and PCI address sub-elements]
...

pc63422:/etc/sysconfig/network # ip addr
1: lo:  mtu 65536 qdisc noqueue state UNKNOWN group 
default
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
   valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
   valid_lft forever preferred_lft forever
2: eth0:  mtu 1500 qdisc pfifo_fast master br0 
state UP group default qlen 1000
link/ether 78:24:af:9c:bd:a6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::7a24:afff:fe9c:bda6/64 scope link
   valid_lft forever preferred_lft forever
27: br0:  mtu 1500 qdisc noqueue state UP 
group default
link/ether 78:24:af:9c:bd:a6 brd ff:ff:ff:ff:ff:ff
inet 10.35.34.115/24 brd 10.35.34.255 scope global br0
   valid_lft forever preferred_lft forever
inet6 fe80::7a24:afff:fe9c:bda6/64 scope link
   valid_lft forever preferred_lft forever
28: vnet0:  mtu 1500 qdisc pfifo_fast master 
br0 state UNKNOWN group default qlen 500
link/ether fe:54:00:37:92:b1 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe37:92b1/64 scope link
   valid_lft forever preferred_lft forever


Attached is what I chose during creation of the vm. It's German; "network 
source" means something like "name of the shared device".

I attached the host to a network hub to be able to sniff all packets. But I 
don't see any packet from the guest. E.g. I ran "dhclient" after booting a 
Knoppix cd in the guest, but no packet from the guest. Also when using a 
Windows 7 installation cd - no packet from the guest. But I see packets from 
the host.
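
For debugging this the following sketch is usually enough to see whether the
guest's tap device is really enslaved to the bridge and whether its frames make
it to the physical NIC (vnet0 is the tap device from the "ip addr" output
above):

brctl show br0      # should list both eth0 and vnet0 as ports
tcpdump -ni vnet0   # frames leaving the guest
tcpdump -ni eth0    # frames actually reaching the wire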




Bernd


[libvirt-users] still possible to use traditional bridge network setup ?

2015-03-16 Thread Lentes, Bernd
Hi,

i'm currently installing a SLES 12 64bit system. 
libvirt-client-1.2.5-13.3.x86_64 and libvirt-daemon-1.2.5-13.3.x86_64.
Formerly I created my vm's (KVM) using a traditional bridge in my host systems, 
mostly SLES 11 SP3.
But with SLES 12 I don't succeed. I can use the macvtap device in the host, but 
I would like to be able to communicate between host and guest.
Is the traditional bridge setup no longer available ?
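
What I have in the guest XML is the classic definition; a sketch with a
placeholder MAC address and model:

<interface type='bridge'>
  <mac address='52:54:00:aa:bb:cc'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>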

Bernd
--
Bernd Lentes

Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax:   +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg

Je suis Charlie




Re: [libvirt-users] wrong time in guest logs

2014-10-28 Thread Lentes, Bernd
Eric wrote:


> -Ursprüngliche Nachricht-
> Von: Eric Blake [mailto:ebl...@redhat.com]
> Gesendet: Dienstag, 28. Oktober 2014 17:18
> An: Lentes, Bernd; libvirt-ML (libvirt-users@redhat.com)
> Betreff: Re: [libvirt-users] wrong time in guest logs
>
> On 10/28/2014 10:08 AM, Lentes, Bernd wrote:
> > Hi,
>
> [can you convince your mailer to wrap long lines?]
>
> >
> > i have several vm's running on KVM hosts. Recently i found out that the
> time in the log-files of the guests (/var/log/libvirt/qemu/xxx.log) is wrong.
> The time on the guest itself is right, just the time in the log-files is one 
> hour
> back. Windows and linux vm's are affected. Also on another host the same.
> Changing the clock offset does not have any influence, still the wrong time in
> the log file.
>
> The time in libvirt logs is ALWAYS tied to UTC, precisely because that is
> unambiguous (any log that outputs timestamps in local time risks being
> misinterpreted if the log is read from a different timezone, especially if the
> timezone name is not included as part of the timestamp).  Based on you
> email address, it looks like your problem is that Germany is coincident with
> UTC in the summer, but one hour off in the winter, and that your "problem"
> (which is not a bug) was made manifest because Germany left daylight
> savings this week.
>

Nearly. In summer we have UTC+2 in Germany, in winter UTC+1.
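
So the log timestamps are simply what the host prints for "date -u"; a quick
check on the host:

date      # local time (CET/CEST)
date -u   # UTC, which is what ends up in /var/log/libvirt/qemu/*.log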


Bernd


[libvirt-users] wrong time in guest logs

2014-10-28 Thread Lentes, Bernd
Hi,

i have several vm's running on KVM hosts. Recently i found out that the time in 
the log-files of the guests (/var/log/libvirt/qemu/xxx.log) is wrong. The time 
on the guest itself is right, just the time in the log-files is one hour behind. 
Windows and Linux vm's are affected. The same happens on another host. Changing 
the clock offset does not have any influence; the time in the log file is still 
wrong.
Any ideas ?


Bernd

--
Bernd Lentes

Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax:   +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg

Die Freiheit wird nicht durch weniger Freiheit verteidigt




Re: [libvirt-users] guests not shutting down when host shuts down - SOLVED

2013-07-10 Thread Lentes, Bernd

> >
> Hi,
>
> for the Ubuntu guest i found a solution:
>
> http://ubuntuforums.org/showthread.php?t=1972464
>
>
> Bernd
>

The Windows guest didn't shut down because i had a Remote Desktop session open on it. 
After ending the session, Windows shut down properly.


Bernd


Re: [libvirt-users] guests not shutting down when host shuts down

2013-07-10 Thread Lentes, Bernd

Bernd wrote:
> >
> > What's the LIBVIRTD_KVM_SHUTDOWN value (on my system it's in
> > /etc/conf.d/libvirtd)? You want it to be 'shutdown'.
> >
> > Michal
> >
>
> Hi Michal,
>
> i have neither this variable nor that file.
> But i have /etc/syconfig/libvirt-guests:
>
> 
> pc59093:/var/log/libvirt/qemu # cat /etc/sysconfig/libvirt-guests
> ## Path: System/Virtualization/libvirt
> ## Type: string
> ## Default: default
> # URIs to check for running guests
> # example: URIS='default xen:/// vbox+tcp://host/system lxc:///'
> URIS=default
>
> ## Type: string
> ## Default: start
> # action taken on host boot
> # - start   all guests which were running on shutdown are
> started on boot
> #   regardless on their autostart settings
> # - ignore  libvirt-guests init script won't start any guest
> on boot, however,
> #   guests marked as autostart will still be
> automatically started by
> #   libvirtd
> ON_BOOT=start
>
> ## Type: integer
> ## Default: 0
> # number of seconds to wait between each guest start
> START_DELAY=0
>
> ## Type: string
> ## Default: suspend
> # action taken on host shutdown
> # - suspend   all running guests are suspended using virsh managedsave
> # - shutdown  all running guests are asked to shutdown.
> Please be careful with
> # this settings since there is no way to
> distinguish between a
> # guest which is stuck or ignores shutdown
> requests and a guest
> # which just needs a long time to shutdown. When setting
> # ON_SHUTDOWN=shutdown, you must also set
> SHUTDOWN_TIMEOUT to a
> # value suitable for your guests.
> ON_SHUTDOWN=shutdown
> =
>
> I changed "ON_SHUTDOWN" already from suspend to shutdown. I
> think this should be the same.
>
>
> Bernd
>
Hi,

for the Ubuntu guest i found a solution:

http://ubuntuforums.org/showthread.php?t=1972464


Bernd


Re: [libvirt-users] guests not shutting down when host shuts down

2013-07-10 Thread Lentes, Bernd
Michal wrote:

> -Original Message-
> From: Michal Privoznik [mailto:mpriv...@redhat.com]
> Sent: Wednesday, July 10, 2013 12:45 PM
> To: Lentes, Bernd
> Cc: libvirt-ML (libvirt-users@redhat.com)
> Subject: Re: [libvirt-users] guests not shutting down when
> host shuts down
>
> On 10.07.2013 11:37, Lentes, Bernd wrote:
> > Hi,
> >
> > i have a SLES 11 SP2 64bit host with three guests:
> > - Windows XP 32
> > - Ubuntu 12.04 LTS 64bit
> > - SLES 11 SP2 64bit
> >
> > The SLES guest shuts down with the host shutdown. The
> others not. When i shutdown these two guests with the
> virt-manager, they shutdown fine.
> > ACPI is activated in virt-manager for both of them. Acpid
> is running in the Ubuntu Client.
> > When the host shuts down, the two guests get a signal
> (excerpt from the log of the host:)
> >
> > ===
> > 2013-07-07 16:39:51.674: starting up
> > LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/
> QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -S -M pc-0.15
> -enable-kvm -m 1025 -smp 1,sockets=1,cores=1,threads=1 -name
> greensql_2 -uuid 2cfbac9c-dbb2-c4bf-4aba-2d18dc49d18e
> -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/greensql_2.mo
> nitor,server,nowait -mon
> chardev=charmonitor,id=monitor,mode=control -rtc base=utc
> -no-shutdown -drive
> file=/var/lib/kvm/images/greensql_2/disk0.raw,if=none,id=drive
> -ide0-0-0,format=raw -device
> ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bo
> otindex=1 -drive
> if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw
> -device
> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=20 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:37:92:a9,b
> us=pci.0,addr=0x3 -usb -vnc 127.0.0.1:2 -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
> > Domain id=3 is tainted: high-privileges
> >
> > qemu: terminating on signal 15 from pid 24958
> >
> > 2013-07-08 13:58:29.651: starting up
> > ==
> >
> > I'm a bit astonished about "no-shutdown" in the
> commandline, but the sles guest also has it in its
> commandline, so it should not bother.
> >
> > I'm using kvm-0.15.1-0.23.1, libvirt-client-0.9.6-0.23.1,
> libvirt-0.9.6-0.23.1 and virt-manager-0.9.0-3.19.1 in the host.
> >
> > Thanks for any help.
> >
> >
> > Bernd
> >
>
> What's the LIBVIRTD_KVM_SHUTDOWN value (on my system it's in
> /etc/conf.d/libvirtd)? You want it to be 'shutdown'.
>
> Michal
>

Hi Michal,

i have neither this variable nor that file.
But i have /etc/syconfig/libvirt-guests:


pc59093:/var/log/libvirt/qemu # cat /etc/sysconfig/libvirt-guests
## Path: System/Virtualization/libvirt
## Type: string
## Default: default
# URIs to check for running guests
# example: URIS='default xen:/// vbox+tcp://host/system lxc:///'
URIS=default

## Type: string
## Default: start
# action taken on host boot
# - start   all guests which were running on shutdown are started on boot
#   regardless on their autostart settings
# - ignore  libvirt-guests init script won't start any guest on boot, however,
#   guests marked as autostart will still be automatically started by
#   libvirtd
ON_BOOT=start

## Type: integer
## Default: 0
# number of seconds to wait between each guest start
START_DELAY=0

## Type: string
## Default: suspend
# action taken on host shutdown
# - suspend   all running guests are suspended using virsh managedsave
# - shutdown  all running guests are asked to shutdown. Please be careful with
# this settings since there is no way to distinguish between a
# guest which is stuck or ignores shutdown requests and a guest
# which just needs a long time to shutdown. When setting
# ON_SHUTDOWN=shutdown, you must also set SHUTDOWN_TIMEOUT to a
# value suitable for your guests.
ON_SHUTDOWN=shutdown
=

I already changed "ON_SHUTDOWN" from suspend to shutdown. I think this file 
serves the same purpose.
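
Following the comment in the file itself, the relevant part should probably end
up looking like this sketch (the timeout value is just an example):

ON_SHUTDOWN=shutdown
SHUTDOWN_TIMEOUT=120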


Bernd




[libvirt-users] guests not shutting down when host shuts down

2013-07-10 Thread Lentes, Bernd
Hi,

i have a SLES 11 SP2 64bit host with three guests:
- Windows XP 32
- Ubuntu 12.04 LTS 64bit
- SLES 11 SP2 64bit

The SLES guest shuts down with the host shutdown. The others do not. When i 
shut down these two guests with virt-manager, they shut down fine.
ACPI is activated in virt-manager for both of them. Acpid is running in the 
Ubuntu client.
When the host shuts down, the two guests get a signal (excerpt from the log of 
the host):

===
2013-07-07 16:39:51.674: starting up
LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none 
/usr/bin/qemu-kvm -S -M pc-0.15 -enable-kvm -m 1025 -smp 
1,sockets=1,cores=1,threads=1 -name greensql_2 -uuid 
2cfbac9c-dbb2-c4bf-4aba-2d18dc49d18e -nodefconfig -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/greensql_2.monitor,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown 
-drive 
file=/var/lib/kvm/images/greensql_2/disk0.raw,if=none,id=drive-ide0-0-0,format=raw
 -device 
ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -drive 
if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device 
ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev 
tap,fd=17,id=hostnet0,vhost=on,vhostfd=20 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:37:92:a9,bus=pci.0,addr=0x3 
-usb -vnc 127.0.0.1:2 -vga cirrus -device 
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4
Domain id=3 is tainted: high-privileges

qemu: terminating on signal 15 from pid 24958

2013-07-08 13:58:29.651: starting up
==

I'm a bit surprised about "-no-shutdown" in the command line, but the SLES guest 
also has it in its command line, so it should not matter.

I'm using kvm-0.15.1-0.23.1, libvirt-client-0.9.6-0.23.1, libvirt-0.9.6-0.23.1 
and virt-manager-0.9.0-3.19.1 in the host.

Thanks for any help.


Bernd


--
Bernd Lentes

Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax:   +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg

Wer nichts verdient außer Geld verdient nichts außer Geld



[libvirt-users] how can i get rid of the password for accessing the console in virt-manager ?

2012-11-22 Thread Lentes, Bernd
Hi,

i'm using virt-manager to manage several guests on a SLES 11 host. For one 
guest, i configured a password for opening the console via virt-manager (not via 
VNC). How can i get rid of it ?
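
If it is the password virt-manager stores in the guest's graphics definition,
a sketch of how to remove it:

virsh edit $GUEST   # $GUEST is a placeholder for the domain name
# look for a line like
#   <graphics type='vnc' port='-1' autoport='yes' passwd='...'/>
# delete the passwd='...' attribute, save, and restart the guest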

Thanks in advance.


Bernd

--
Bernd Lentes

Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax:   +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg

Wir sollten nicht den Tod fürchten, sondern
das schlechte Leben



Re: [libvirt-users] problem starting virt-manager

2012-09-12 Thread Lentes, Bernd

Bernd wrote:

>
>
> Michal wrote:
>
> > On 11.09.2012 10:20, Lentes, Bernd wrote:
> > > Hi,
> > >
> > > i try to run virt-manager on a SLES 11 SP1 box. I'm using
> > kernel 2.6.32.12 and virt-manager 0.9.4-106.1.x86_64 .
> > > The system is a 64bit box.
> > >
> > > Here is the output:
> > > =
> > >
> > pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/vir
> > t_manager/sles_11_sp1 # virt-manager &
> > > [1] 9659
> > >
> > pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/vir
> > t_manager/sles_11_sp1 # Traceback (most recent call last):
> > >   File "/usr/share/virt-manager/virt-manager.py", line 386,
> > in 
> > > main()
> > >   File "/usr/share/virt-manager/virt-manager.py", line
> 247, in main
> > > from virtManager import cli
> > >   File "/usr/share/virt-manager/virtManager/cli.py", line
> > 29, in 
> > > import libvirt
> > >   File "/usr/lib64/python2.6/site-packages/libvirt.py",
> > line 25, in 
> > > raise lib_e
> > > ImportError: /usr/lib64/libvirt.so.0: undefined symbol:
> > selinux_virtual_domain_context_path
> > >
> >
> > Seems like a broken dependencies to me. This function is
> > supposed to be
> > in libselinux-utils package IIRC. Can you try installing it
> and if it
> > works, maybe we need to update our spec file.
> >
>
> Hi,
>
> i installed libselinux-utils-2.0.73-1.fc10.x86_64.rpm. I
> didn't find the exact version i have (2.0.71) and used a rpm
> for Fedora.
> Is that a problem ? Anyway, the error message is still the same :-(
> rpm -ql libselinux-utils-2.0.73-1.fc10 said that the package
> consists only of executables and manpages.
> Shouldn't there be some libraries ?
>

Hi,

Trying the same from the Python shell, I get the same error:
>>> import libvirt
Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 25, in 
raise lib_e
ImportError: /usr/lib64/libvirt.so.0: undefined symbol: 
selinux_virtual_domain_context_path
>>>

The lines in the above mentioned file around line 25 look like this:
# On cygwin, the DLL is called cygvirtmod.dll
try:
    import libvirtmod
except ImportError, lib_e:
    try:
        import cygvirtmod as libvirtmod
    except ImportError, cyg_e:
        if str(cyg_e).count("No module named"):
            raise lib_e

raise lib_e is line 25.

Does it try to find a file from cygwin ?
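
A quick sketch to see on which side the mismatch is, i.e. whether libvirt.so.0
really expects that symbol from libselinux and whether the installed libselinux
exports it (library paths may differ on SLES):

nm -D /usr/lib64/libvirt.so.0 | grep selinux_virtual_domain_context_path
ldd /usr/lib64/libvirt.so.0   | grep selinux
nm -D /lib64/libselinux.so.1  | grep virtual_domain_context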

Bernd


Re: [libvirt-users] problem starting virt-manager

2012-09-11 Thread Lentes, Bernd

Michal wrote:

> On 11.09.2012 10:20, Lentes, Bernd wrote:
> > Hi,
> >
> > i try to run virt-manager on a SLES 11 SP1 box. I'm using
> kernel 2.6.32.12 and virt-manager 0.9.4-106.1.x86_64 .
> > The system is a 64bit box.
> >
> > Here is the output:
> > =
> >
> pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/vir
> t_manager/sles_11_sp1 # virt-manager &
> > [1] 9659
> >
> pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/vir
> t_manager/sles_11_sp1 # Traceback (most recent call last):
> >   File "/usr/share/virt-manager/virt-manager.py", line 386,
> in 
> > main()
> >   File "/usr/share/virt-manager/virt-manager.py", line 247, in main
> > from virtManager import cli
> >   File "/usr/share/virt-manager/virtManager/cli.py", line
> 29, in 
> > import libvirt
> >   File "/usr/lib64/python2.6/site-packages/libvirt.py",
> line 25, in 
> > raise lib_e
> > ImportError: /usr/lib64/libvirt.so.0: undefined symbol:
> selinux_virtual_domain_context_path
> >
>
> Seems like a broken dependencies to me. This function is
> supposed to be
> in libselinux-utils package IIRC. Can you try installing it and if it
> works, maybe we need to update our spec file.
>

Hi,

i installed libselinux-utils-2.0.73-1.fc10.x86_64.rpm. I didn't find the exact 
version i have (2.0.71) and used an rpm for Fedora.
Is that a problem ? Anyway, the error message is still the same :-(
rpm -ql libselinux-utils-2.0.73-1.fc10 shows that the package consists only of 
executables and manpages.
Shouldn't there be some libraries ?

Bernd


Re: [libvirt-users] problem starting virt-manager

2012-09-11 Thread Lentes, Bernd

Michal wrote:
>
> On 11.09.2012 10:20, Lentes, Bernd wrote:
> > Hi,
> >
> > i try to run virt-manager on a SLES 11 SP1 box. I'm using
> kernel 2.6.32.12 and virt-manager 0.9.4-106.1.x86_64 .
> > The system is a 64bit box.
> >
> > Here is the output:
> > =
> >
> pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/vir
> t_manager/sles_11_sp1 # virt-manager &
> > [1] 9659
> >
> pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/vir
> t_manager/sles_11_sp1 # Traceback (most recent call last):
> >   File "/usr/share/virt-manager/virt-manager.py", line 386,
> in 
> > main()
> >   File "/usr/share/virt-manager/virt-manager.py", line 247, in main
> > from virtManager import cli
> >   File "/usr/share/virt-manager/virtManager/cli.py", line
> 29, in 
> > import libvirt
> >   File "/usr/lib64/python2.6/site-packages/libvirt.py",
> line 25, in 
> > raise lib_e
> > ImportError: /usr/lib64/libvirt.so.0: undefined symbol:
> selinux_virtual_domain_context_path
> >
>
> Seems like a broken dependencies to me. This function is
> supposed to be
> in libselinux-utils package IIRC. Can you try installing it and if it
> works, maybe we need to update our spec file.
>

Hi Michal,

i googled for several hours but didn't succeed in finding a binary or src rpm 
with libselinux-utils-2.0.71 (that's the version i need).
Do you have any idea where i can find it ? Btw: i have libselinux1 installed, 
not libselinux. Is it the same, or what is the difference ?


Bernd


[libvirt-users] problem starting virt-manager

2012-09-11 Thread Lentes, Bernd
Hi,

i try to run virt-manager on a SLES 11 SP1 box. I'm using kernel 2.6.32.12 and 
virt-manager 0.9.4-106.1.x86_64 .
The system is a 64bit box.

Here is the output:
=
pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/virt_manager/sles_11_sp1
 # virt-manager &
[1] 9659
pc56846:/media/idg2/SysAdmin_AG_Wurst/software_und_treiber/virt_manager/sles_11_sp1
 # Traceback (most recent call last):
  File "/usr/share/virt-manager/virt-manager.py", line 386, in 
main()
  File "/usr/share/virt-manager/virt-manager.py", line 247, in main
from virtManager import cli
  File "/usr/share/virt-manager/virtManager/cli.py", line 29, in 
import libvirt
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 25, in 
raise lib_e
ImportError: /usr/lib64/libvirt.so.0: undefined symbol: 
selinux_virtual_domain_context_path

[1]+  Exit 1  virt-manager
=

As you see, virt-manager does not start.

Thanks for any hint.


Bernd

--
Bernd Lentes

Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.len...@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax:   +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg

Wir sollten nicht den Tod fürchten, sondern
das schlechte Leben
