Re: [CentOS-virt] Xen CentOS 7.3 server + CentOS 7.3 VM fails to boot after CR updates (applied to VM)!

2017-09-02 Thread Fabian Arrotin
ble:0kB isolated(anon):0kB isolated(file):0kB present:15996kB
> managed:15912kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB
> slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB
> pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB
> free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
> [1.971217] lowmem_reserve[]: 0 4063 16028 16028
> [1.971226] Node 0 DMA32 free:4156584kB min:4104kB low:5128kB
> high:6156kB active_anon:952kB inactive_anon:1924kB active_file:0kB
> inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB
> present:4177920kB managed:4162956kB mlocked:0kB dirty:0kB writeback:0kB
> mapped:4kB shmem:1928kB slab_reclaimable:240kB slab_unreclaimable:504kB
> kernel_stack:32kB pagetables:592kB unstable:0kB bounce:0kB
> free_pcp:1760kB local_pcp:288kB free_cma:0kB writeback_tmp:0kB
> pages_scanned:0 all_unreclaimable? no
> [1.971264] lowmem_reserve[]: 0 0 11964 11964
> [1.971273] Node 0 Normal free:12091564kB min:12088kB low:15108kB
> high:18132kB active_anon:2352kB inactive_anon:6272kB active_file:3164kB
> inactive_file:35364kB unevictable:0kB isolated(anon):0kB
> isolated(file):0kB present:12591104kB managed:12251788kB mlocked:0kB
> dirty:0kB writeback:0kB mapped:5852kB shmem:6284kB
> slab_reclaimable:6688kB slab_unreclaimable:6012kB kernel_stack:880kB
> pagetables:1328kB unstable:0kB bounce:0kB free_pcp:1196kB
> local_pcp:152kB free_cma:0kB writeback_tmp:0kB pages_scanned:0
> all_unreclaimable? no
> [1.971309] lowmem_reserve[]: 0 0 0 0
> [1.971316] Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 1*32kB (U) 2*64kB (U)
> 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) =
> 15912kB
> [1.971343] Node 0 DMA32: 7*4kB (M) 18*8kB (UM) 7*16kB (EM) 3*32kB
> (EM) 1*64kB (E) 2*128kB (UM) 1*256kB (E) 4*512kB (UM) 4*1024kB (UEM)
> 4*2048kB (EM) 1011*4096kB (M) = 4156348kB
> [1.971377] Node 0 Normal: 64*4kB (UEM) 10*8kB (UEM) 6*16kB (EM)
> 3*32kB (EM) 3*64kB (UE) 3*128kB (UEM) 1*256kB (E) 2*512kB (UE) 0*1024kB
> 1*2048kB (M) 2951*4096kB (M) = 12091728kB
> [1.971413] Node 0 hugepages_total=0 hugepages_free=0
> hugepages_surp=0 hugepages_size=2048kB
> [1.971425] 11685 total pagecache pages
> [1.971430] 0 pages in swap cache
> [1.971437] Swap cache stats: add 0, delete 0, find 0/0
> [1.971444] Free swap  = 0kB
> [1.971451] Total swap = 0kB
> [1.971456] 4196255 pages RAM
> [1.971462] 0 pages HighMem/MovableOnly
> [1.971467] 88591 pages reserved
> 

Hi,

Just to confirm that from our internal 7.4.1708 QA provisioning tests,
the kernel panics directly when trying to deploy a PV guest, so 7.4.1708
isn't installable in that scenario (we tested it in QA and it fails).

-- 
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab



signature.asc
Description: OpenPGP digital signature
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] IBM GPFS filesystem

2010-12-03 Thread Fabian Arrotin
Pasi Kärkkäinen wrote:
 On Fri, Dec 03, 2010 at 12:45:22PM +0530, Rajagopal Swaminathan wrote:
snip
 
 You can also use normal LVM over shared iSCSI LUN,
 but you need to be (very) careful with running LVM management commands
 and getting all the nodes (dom0s) to be in sync :)
 
 (Citrix XenServer does this, but there the management toolstack
 takes care of the LVM command execution + state synchronization).
 

Yes, Citrix XenServer also uses LVM, but a different implementation 
though (with a VHD format in the LV itself).
It's also true that the management toolstack takes care of the state 
synchronization and the active/inactive state of the LVs.

-- 
--
Fabian Arrotin





Re: [CentOS-virt] IBM GPFS filesystem

2010-12-02 Thread Fabian Arrotin
Benjamin Franz wrote:
 On 12/02/2010 12:58 PM, compdoc wrote:
 []...live migration...?
snip
 
 No. You need a shared filesystem. Which pretty much leaves you with 
 either NFS or a clustered filesystem. 

Totally wrong! If you have never tested it, try it (and try to 
understand clvmd) before saying that it doesn't work!
If you've never tried it, that means you've never played with the RHCS 
stack, because even if you want to put GFS/GFS2 on top, you still need 
clvmd to have consistent logical volume management across all the 
nodes in the hypervisor cluster ...
It seems to me that most people wanting a cluster filesystem 
(GFS/GFS2/OCFS2/whatever) on top of shared storage want that just 
because they are used to the way VMware handles shared storage: 
VMFS on top of the shared storage and file-based containers (.vmdk) 
for the virtual machines.
I've installed several solutions based purely on LVM.

Please compare all the solutions and you'll easily find that, at the 
performance/IO level, you'll always be faster without an extra layer 
between the VM storage and the shared storage.
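For reference, the clvmd-based setup described above looks roughly like the following. This is only a sketch: it assumes the RHCS cluster itself (cman etc.) is already configured and running, and the VG name and shared-LUN path are hypothetical.

```shell
# On every node: switch LVM to cluster-wide locking
# (lvmconf rewrites locking_type in /etc/lvm/lvm.conf)
lvmconf --enable-cluster
service clvmd start

# On one node: create a clustered VG on the shared LUN;
# clvmd propagates LVM metadata changes to all nodes
pvcreate /dev/mapper/shared-lun
vgcreate -c y vg_shared /dev/mapper/shared-lun
lvcreate -L 20G -n vm1-disk vg_shared   # one raw LV per VM disk, no filesystem needed
```

Each domU then gets its LV as a block device directly, which is the "no extra layer" point made above.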

-- 
--
Fabian Arrotin





Re: [CentOS-virt] VirtIO with CentOS 5.4

2010-01-21 Thread Fabian Arrotin
Bill McGonigle wrote:
 Hi, all,
 
 I'm attempting to run a Windows 2003 (32-bit) VM under CentOS 5.4, 
 generally following:
 
http://wiki.centos.org/HowTos/KVM

ARGHH ... forget that page, as it was written during the first kvm tests 
and isn't current anymore.
See 
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/index.html
 
for accurate documentation (as http://www.centos.org/docs is outdated 
and doesn't even cover kvm).

 
 I've seen nice performance benefits with the VirtIO driver under Fedora, 
 so I'd like to get that running with CentOS as well.  I have the 
 September drivers build .iso.
 
 According to:
 
http://www.linux-kvm.org/page/Virtio
 
 I need a KVM version 60 or later - fair enough.  Back at the wiki 
 there's a note about later KVM in -testing, and sure enough there's a 
 -66 there, but it's only built for an old kernel.
 
 I got that SRPM and tried to build it against the current kernel, but 
 get kmod build errors, ala:
 
  
 /root/rpmbuild/BUILD/kvm-kmod-66/_kmod_build_/kernel/external-module-compat.h:421:
  
 error: redefinition of typedef 'bool'
...
  
 /root/rpmbuild/BUILD/kvm-kmod-66/_kmod_build_/kernel/external-module-compat.h:734:1:
  
 warning: __aligned redefined
 
 The wiki also has a note about -84 being in Levente Farkas's repo, but 
 those don't appear to be there any longer.
 
 So, questions:
 1) what are folks generally using for VirtIO-capable KVM on CentOS 5.4?

The standard kvm from 5.4 and not the *old* one from extras
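With the standard 5.4 kvm, exposing a disk over virtio comes down to the bus in the libvirt domain XML. A sketch only: the image path is hypothetical, and a Windows guest additionally needs the virtio block driver from the drivers .iso installed before the bus is switched, or it will not see its disk.

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/var/lib/libvirt/images/win2003.img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```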

 2) given that the upstream has Windows drivers available, I'm curious how 
 they're handling the issue, and if we're in sync.
 3) does anybody have the SRPM that was at Farkas's repo or know if it 
 went elsewhere?  I assume the build issues have been solved there already.
 
 I'd be happy to update the Wiki with info from responses here.
 
 Thanks,
 -Bill
 


-- 
--
Fabian Arrotin
test -e /dev/human/brain || ( echo 1 > /proc/sys/kernel/sysrq ; echo c > 
/proc/sysrq-trigger )





Re: [CentOS-virt] Move Windows within an LV to another pv safely

2009-10-25 Thread Fabian Arrotin
Ben M. wrote:
 Using CentOS Xen current with the 5.4 update applied.
 
 I need to move a Windows 2008 installation in LVM2 from one pv/vg/lv to 
 different disk pv/vg/lv.
 
 What are considered safe ways to move it on same machine and retain a
 copy until sure it reboots?
 
 Turn off (shutdown) in Xen create identical extents in target pv/vg/lv 
 and mount -t ntfs and cp? dd? rsync?
 
 Or pvmove (doesn't look like it retains a copy)?
 
 Is there an equivalent to AIX cplv?
 


I always use dd when I need to 'move' an LV from one host to another 
(it can of course also be used on the same host).
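A minimal sketch of that dd approach. The VG/LV names (vg0/win2008, vg1/win2008-copy) are hypothetical, and the runnable part below uses plain files as stand-ins for the block devices so the copy/verify steps can be exercised anywhere; on a real host you would dd between the /dev/<vg>/<lv> paths, with the domU shut down first.

```shell
# Real-world shape (hypothetical names; domU must be shut down):
#   lvcreate -L <size of source LV> -n win2008-copy vg1
#   dd if=/dev/vg0/win2008 of=/dev/vg1/win2008-copy bs=4M conv=fsync
#   cmp /dev/vg0/win2008 /dev/vg1/win2008-copy
# File-based stand-in, so the dd/cmp steps can run without LVM:
src=/tmp/src-lv.img
dst=/tmp/dst-lv.img
dd if=/dev/urandom of="$src" bs=1M count=8 2>/dev/null   # fake 8 MiB source "LV"
dd if="$src" of="$dst" bs=4M conv=fsync 2>/dev/null      # the actual copy step
cmp -s "$src" "$dst" && echo "copy verified"             # byte-for-byte check
```

The cmp step is what gives you the "retain a copy until sure it reboots" safety: the source LV is untouched and verified identical to the target.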

-- 
--
Fabian Arrotin
idea=`grep -i clue /dev/brain`
test -z "$idea" && echo sorry, init 6 in progress || sh ./answer.sh




Re: [CentOS-virt] Resizing disks for VMs

2009-09-28 Thread Fabian Arrotin
Dennis J. wrote:
 Hi,
 Is there a way to make a PV xen guest aware of a size change of the host 
 disk? In my case I'm talking about a Centos 5.3 host using logical volumes 
 as storage for the guests and the guests running Centos 5.3 and LVM too.
 What I'm trying to accomplish is to resize the logical volume for the guest 
 by adding a few gigs and then make the guest see this change without 
 requiring a reboot. Is this possible maybe using some kind of bus rescan in 
 the guest?
 

No, it's not possible unfortunately. On a traditional SCSI bus you can 
rescan the whole bus to see newer/added devices, or just one device to 
see its new size, but not on a Xen domU.
At least that's what I found when I blogged about it. See this thread 
on the Xen list: 
http://lists.xensource.com/archives/html/xen-users/2008-04/msg00246.html

So what I do since then is to use LVM in the domU as well: add a new 
xvd block device to the domU (i.e. a new LV on the dom0) and then the 
traditional pvcreate/vgextend/lvextend. Working correctly for all my 
domU's ..
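The workflow can be sketched as follows. All names (vg_dom0, vg_guest, the guest1 domain, the xvdc target, the data LV) are hypothetical, and these are privileged commands on the respective hosts, not a tested transcript:

```shell
# On the dom0: create a new LV and hot-attach it to the running PV domU
lvcreate -L 10G -n guest1-extra vg_dom0
virsh attach-disk guest1 /dev/vg_dom0/guest1-extra xvdc

# In the domU: turn the new xvd device into a PV, then grow the LV and filesystem
pvcreate /dev/xvdc
vgextend vg_guest /dev/xvdc
lvextend -L +10G /dev/vg_guest/data
resize2fs /dev/vg_guest/data   # online ext3 resize on CentOS 5-era kernels
```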


-- 
--
Fabian Arrotin
idea=`grep -i clue /dev/brain`
test -z "$idea" && echo sorry, init 6 in progress || sh ./answer.sh




Re: [CentOS-virt] Resizing disks for VMs

2009-09-28 Thread Fabian Arrotin
Karanbir Singh wrote:
 On 28/09/09 17:37, Fabian Arrotin wrote:
 So what i do since then is to use lvm in the domU as well and add a new
 xvd block device to the domU (aka a new LV on the dom0) and then the
 traditionnal pvcreate/vgextend/lvextend. Working correctly for all my
 domU's ..
 
 how are  you able to add a new disk without a reboot ? or is that 
 something that works with the xenblock drivers ?
 
Yes, I've only PV domU's ;-)

virsh attach-disk <domU name> /path/to/lv/on/the/dom0 xvd[letter as it 
appears in the domU]

-- 
--
Fabian Arrotin
idea=`grep -i clue /dev/brain`
test -z "$idea" && echo sorry, init 6 in progress || sh ./answer.sh




Re: [CentOS-virt] fully virt Xen DomU network question

2009-06-26 Thread Fabian Arrotin
Coert Waagmeester wrote:
 Hello all fellow CentOS users!
 
 I have a working xen setup with 3 paravirt domUs and one Windblows 2003
 fully virt domU.
 
 
 There are two virtual networks.
 
 As far as I can tell in the paravirt Linux DomUs I have gigabit
 networking, but not in the fully virt Windows 2003 domU
 
 Is there a setting for this, or is it not yet supported?


That's not on the dom0 side, but directly in the w2k3 domU: you'll 
get *bad* performance (at the IO and network level) if the xenpv drivers 
for Windows aren't installed. Unfortunately you will not be able to 
find them for CentOS (while upstream has them, of course).

-- 
--
Fabian Arrotin
  idea=`grep -i clue /dev/brain`
  test -z "$idea" && echo sorry, init 6 in progress || sh ./answer.sh


Re: [CentOS-virt] lfarkas Repository + KVM on centos Wiki

2009-04-23 Thread Fabian Arrotin
Rainer Traut wrote:
 Dear all,
 
 I've succesfully installed a F10 x86_64 KVM guest on C5 x86_64 with the 
 help of the wiki - running for 7 days now. :)
 
 Two questions;
 What's the state of the lfarkas's repository - can it be trusted - I 
 guess yes but google did not help much.
 It's only mentioned in the kvm howto - not under the third party repos. 
 And sadly http://www.lfarkas.org/ is empty.
 

I suppose you already know that kvm will be included by default in the 
upcoming 5.4 release ?
It will be provided only for x86_64 though (afaik ...)

-- 
--
Fabian Arrotin
  idea=`grep -i clue /dev/brain`
  test -z "$idea" && echo sorry, init 6 in progress || sh ./answer.sh


[CentOS-virt] tick_divider kernel parameter for guest vm

2008-01-02 Thread Fabian Arrotin
When upstream released 5.1, everybody wanted to test a new kernel
parameter that could adjust the system clock rate at boot time to
something other than the standard 1000Hz clock rate.
A lot of testing has been done (thanks to Akemi Yagi for her great
work) and you can see the results here:
http://bugs.centos.org/view.php?id=2189

As you can read at the bottom of the comments, it seems there was a typo
in the official RH Release Notes: you have to read divider= and *NOT*
tick_divider= ! (see
http://www.centos.org/docs/5/html/release-notes/as-x86/RELEASE-NOTES-U1-x86-en.html)

It does seem to work with the correct kernel parameter, so there is no
need to build a kernel-vm for CentOS 5.1 guests .. (it's still needed
for example for 4.x ..)
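Concretely, this means appending divider= to the guest's kernel line in /boot/grub/grub.conf, e.g. divider=10 to drop the effective tick rate from 1000Hz to 100Hz. A sketch only: the kernel version and root device below are illustrative.

```
title CentOS (2.6.18-53.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00 divider=10
        initrd /initrd-2.6.18-53.el5.img
```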

Keep reading the comments on bugs.centos.org for further
information ... I assume that the upstream release notes will be
corrected to reflect the real parameter.

-- 
Fabian Arrotin [EMAIL PROTECTED]
Solution ? 
echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc

