Re: [CentOS-virt] Xen on CentOS 6.4
On 6/21/2013 8:24 AM, Johnny Hughes wrote:
On 05/26/2013 04:19 PM, Nux! wrote:
On 26.05.2013 21:41, Luke S. Crawford wrote:

I heard talk of a CentOS-supported Xen dom0 for CentOS 6.4, but I haven't heard talk of such a thing lately, and I haven't seen where to download it, which could just be me being stupid.

http://dev.centos.org/centos/6/xen-c6/

Just to take this one step further, we have now released Xen4CentOS: http://lists.centos.org/pipermail/centos-announce/2013-June/019800.html

Excellent. Congratulations and thank you for your hard work.

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt
[CentOS-virt] Package updates and required reboots
I still operate under the assumption that glibc and kernel updates require a reboot to be prudent on a Linux OS. With CentOS Xen 5.6 (standard installation, SELinux enabled) is there an FAQ or general user consensus as to when to do a reboot after what updates?
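The usual rule of thumb the poster describes can be sketched in shell: a reboot is prudent when the running kernel no longer matches the newest installed one. On a live CentOS box `running` would come from `uname -r` and `newest` from `rpm -q --last kernel`; the version strings below are illustrative placeholders, not from this thread.

```shell
#!/bin/sh
# Sketch: recommend a reboot when the running kernel lags the newest install.
# On a real system: running=$(uname -r); newest from `rpm -q --last kernel`.
# The two version strings below are illustrative placeholders.
running="2.6.18-194.32.1.el5xen"
newest="2.6.18-238.9.1.el5xen"
if [ "$running" != "$newest" ]; then
    echo "reboot recommended"
else
    echo "kernel up to date"
fi
```

glibc updates are the other common reboot trigger; long-running daemons keep the old library mapped until they restart, so a reboot is the simple way to be sure everything picked it up.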
Re: [CentOS-virt] SPICE Benchmark
Same here. Thank you very much Alexey for sharing this.

Tom Bishop wrote: Very nice... Keep us up to date on future findings, very interesting read. Thanks.

On Tue, Nov 16, 2010 at 11:09 AM, Alexey Vasyukov vasyu...@gmail.com wrote:

Hi folks. We finally finished our work on benchmarking SPICE and would like to share the results. Detailed report in English: http://www.bureausolomatina.ru/sites/default/files/SPICE%20Benchmark%20-%202010-11-16.pdf

The report provides benchmark results of SPICE network load for different types of workload. SPICE operation over a limited low-speed network connection was also tested. The results were compared with similar tests for RDP. Testing was based on SPICE version 0.4.3. This version is not the latest one, but it is the one currently used in RHEV-D 2.2 and also in RHEL6, so we think the results are still useful. We are going to test SPICE 0.6 in the near future because it has very interesting WAN improvements. We welcome any comments, criticism, or advice.

Best regards, Alexey Vasyukov
Re: [CentOS-virt] Arp Flip Flops make machine inaccessible.
chaim.rie...@gmail.com wrote: Comment out the mac addy from eth1 and try that. Sent via BlackBerry from T-Mobile

Hmm. I had done that prior, but I didn't notice it popped back in when I switched it to dhcp. I have to look at my previous ifcfg's; I saved copies. Would yum updates redo the ifcfg-ethX's? It was working fine for days at the shop with the mac addys out, and then I updated a few hours before I brought the box into the office.
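For reference, the suggestion above amounts to a config fragment like this (device name, addresses, and MAC are illustrative, not taken from this thread). In my experience yum itself does not rewrite ifcfg files, but tools such as system-config-network, or toggling BOOTPROTO to dhcp through a GUI, can regenerate them and reinsert HWADDR:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1  (illustrative values)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
# HWADDR=00:16:36:41:76:ae   # commented out so the config is not pinned to one MAC
```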
[CentOS-virt] Lockup with (none) login
I had a CentOS 5.5 Xen standard virtualization install lock up on reboot after a battery-backup (apcupsd) orderly shutdown induced by a power outage. It may have been sitting with two kernel updates without a reboot. I have to head to the site (with a fractured ankle), but reports indicate that it is at

- (none) login:

which only returns back to itself after a user login at console, including root.

- The local user says (though the messages scrolled by too fast to be sure) that it is failing to find its mounts OR that the disk reported errors.

It is on a dmraid (I know, please don't flame me). There is some critical information on the drives that did NOT back up. I need a list of tools and ideas to have a checklist to try and resurrect this machine. Of course I will go with:

- Live CD
- CentOS 5.5 install
- Hard drives

I would appreciate any procedural methods to go about this and try to resurrect this machine.
[CentOS-virt] Slightly OT - Grub fallback option
Does anyone have a standard install of CentOS virtualization grub.conf working in a proven state with the FALLBACK option? If so, can you post your grub.conf? I mistakenly updated to 5.5 and am concerned about nvidia driver comments in the release notes. I did not have protectbase configured on this machine. It is a remote device that will reboot.
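A grub.conf using the fallback option generally looks like the sketch below (kernel versions, paths, and the root device are illustrative, not from any poster's machine): entry 0 is the default, and GRUB tries entry 1 if entry 0 fails.

```shell
# /boot/grub/grub.conf  (illustrative versions and paths)
default=0
fallback=1
timeout=5

title CentOS Xen (new kernel)
    root (hd0,0)
    kernel /xen.gz-2.6.18-194.el5
    module /vmlinuz-2.6.18-194.el5xen ro root=/dev/VolGroup00/LogVol00
    module /initrd-2.6.18-194.el5xen.img

title CentOS Xen (known-good kernel)
    root (hd0,0)
    kernel /xen.gz-2.6.18-164.el5
    module /vmlinuz-2.6.18-164.el5xen ro root=/dev/VolGroup00/LogVol00
    module /initrd-2.6.18-164.el5xen.img
```

One caveat: GRUB legacy's fallback fires when the default entry fails to load (e.g. a missing kernel image). A kernel that loads and then hangs or panics is not caught by fallback alone; for a remote unattended box it is commonly combined with a `panic=` timeout on the kernel line so the machine reboots rather than sitting at the panic.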
Re: [CentOS-virt] CentOS and Xen 3.0.3
Gilberto Nunes wrote: Yes. Tks

2010/4/4 Christopher G. Stach II c...@ldsys.net:

- Gilberto Nunes gilberto.nune...@gmail.com wrote: Hi... I already have the same VM running on Xen 3.4.1 and the 4 VCPUs appear on the Windows system... Now I downgraded to Xen 3.0.3, but even if I define VCPUS = 4 and the number of CPUs on the VM, the cores are not present.

Is it the Xen that comes stock with CentOS? -- Christopher G. Stach II

I don't think so; type xm dmesg and see what the Xen version says. I'm a completely stock CentOS 5.4 virt install and I have:

Xen version 3.1.2-164.15.1.el5 (mockbu...@centos.org) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-46)) Wed Mar 17 11:22:38 EDT 2010

Which indicates version 3.1.2 AFAIK.
[CentOS-virt] Xen Database vms
Are centralized database (SQL) servers best left out of virtualization and committed to their own hardware like the old days, or are there some guidelines one should consider in setting them up? I'm not talking mega-large recordsets, but large enough to handle multiple years of CRMs and intensive querying, accounting systems and so forth. A fundamental website CMS database is a no-sweat issue. This would be on plain vanilla CentOS/Xen stock. All usual SQL server flavors considered. I'm trying to get an idea of where I am ultimately heading in my LANs.
Re: [CentOS-virt] Xen Database vms
Neil Aggarwal wrote: Ben: Are centralized database (SQL) servers best left out of virtualization ... and intensive querying, Accounting Systems and so forth. It really depends on the hardware you allocate to the VM and how intensive the usage is. Personally, if I have an intensively queried database server, I want it directly on hardware. Neil

Neil: What if it were the only real active vm? I know that might sound a bit of a waste, but I am really enjoying the backup and duplication abilities of running in a Xen hypervisor as well as its other features. It seems to be saving me a lot of time in production settings. And there is also a comfort level in uniformity on a LAN. Would there still be a significant hit on resource performance by the hypervisor if running that database server alone in it, or alongside a few rarely used, lightweight or spurious vms? I am talking about the database activities running during the biz day and backups, batches and other maintenance in the off hours. Nothing urgent here, just trying to plan out the future, mull over the possibilities and where to head. - Ben
Re: [CentOS-virt] Xen Database vms
I think it could work well. Having a server in a vm makes it more portable. Many of my servers and services are running in vms on two CentOS 5.4 servers: openfiler, efw firewall, trixbox 2.8, SME Server (in server mode for email and spamassassin), Windows 2003 Server, Windows 2008 Server, Windows 7, and others that aren't running. I would suggest: If there are a lot of temp files or disk access to the OS, install the vm OS on a block device rather than to a file. The storage should be on a local block device as well. If there's a lot of lan traffic to/from the other vms, install a 3rd ethernet card in the server that is only used for db traffic. I also use a virtual network that the vms can use to reach each other. This is basically a private internal lan running across the host machine's buses, rather than through your network switch. I get native performance with my setup...

Thank you. Excellent thoughts all, and just the type of feedback I needed to think about. I would be going out a dedicated NIC for the data traffic, and I would do block devices in LVM for the disk IO; Windows 2008 32-bit seems to be very happy with that setup in a mixed Xen environment. Under no circumstances would I run these as file images. There is no comparison to dedicated hardware for this application using file-based vms. It has to be block devices. I think for my purposes putting the database servers in Xen is absolutely worth a shot. If it works, I have a lot of benefits from it. If it doesn't, I lost some hours but gained some knowledge, and maybe a backup server. A vpn (OpenVPN) server in Xen is part of my plan as well. I'm trying to sketch out all the lines of IO and see what I have to do to keep things snappy.
Re: [CentOS-virt] Xen box down
Ben M. wrote: Duh, fixed. PC Repair 101: Check the motherboard battery. I'm so peeved with myself for overlooking the first step of computer repair for 3 days. This motherboard snaps to a default of Virtualization OFF on a dead batt.
Re: [CentOS-virt] Xen box down
Christopher G. Stach II wrote:

- Ben M. cen...@rivint.com wrote: Error is: duplicate or bad block in use

It's probably just that fsck can't automatically fix some dirtiness, and not a big deal. If you aren't prompted for a password or to log in to fix manually, get to the grub menu, edit the grub command line, stick ``single'' and/or ``init=/bin/sh'' on the end, boot, and run fsck manually. If you just want the machine up in a possibly slightly fucked state, just answer yes to everything. If not, and you care a little bit about maybe getting some data back, see the next paragraph. (It's usually not that bad unless you have a skilled enemy or very bad luck.) You can probably have someone do all of it over the phone. If it doesn't even get that far or fsck can't fix it automatically, you're probably screwed. Whenever that has happened to me, I just do a block-level dump of the partition/disk and recover from that image. It's a lot easier. Anyway, it is probably fine. If it isn't, you can always try pulling each of the disks or setting it back to use a single disk to try and isolate the problem. Also, switch to MD RAID. :)

Have a touch of flu or something, sorry for the delay answering. Not a good way to approach the issue; not thinking as clearly as I would prefer. I am thinking of running SpinRite first. It tends to do a good job with SMART issues. Anyone see an issue with this?
Re: [CentOS-virt] Xen box down
Thanks, I have heard of that issue. I will put that on my checklist. Grub is accessible, so I can try a menu change from there.

Mr. X wrote:

--- On Mon, 1/4/10, Ben M. cen...@rivint.com wrote: From: Ben M. cen...@rivint.com Subject: [CentOS-virt] Xen box down To: centos-virt@centos.org Date: Monday, January 4, 2010, 12:36 PM

My stock CentOS 5.4 box won't come up after a reboot, as reported from my office. Error is: duplicate or bad block in use. Before rebooting, xm dmesg had printk suppressed messages. The box is remote, a 2-hour drive. Some advice on what hardware to bring with me and how to approach this via fsck would be welcome. It's an nvidia dmraid boot on WD VelociRaptors.

Was there a kernel update? Sometimes the new initrd does not support dmraid. If this is applicable, boot into the old working kernel and run mkinitrd against the new kernel, reboot, and cross fingers. This has worked for me in the past. My 18-year-old kid has an ich5 RAID0 dual boot of WinXP/Fedora going for almost 4 years. Once when updating the kernel it was necessary to rebuild the initrd.

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt
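The boot-the-old-kernel-and-rebuild-the-initrd recipe described above can be sketched like this; the kernel version string is an illustrative placeholder, and this is run from the still-working old kernel:

```shell
# Regenerate the new kernel's initrd so it picks up the dmraid modules
# present on the running (working) system. Version string is illustrative.
NEWKVER=2.6.18-194.el5xen
mv /boot/initrd-$NEWKVER.img /boot/initrd-$NEWKVER.img.bak   # keep the original
mkinitrd /boot/initrd-$NEWKVER.img $NEWKVER                  # rebuild for the new kernel
# then reboot into the new kernel and cross fingers
```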
Re: [CentOS-virt] xendomains not autostarting
Thanks for everyone's help here. I need to put a RAID on this anyhow, and I am just losing way too much time on this and can't resolve it. I am pretty sure I inadvertently hosed something by removing a service (or subsequent dependency) from dom0, or by playing with xm and virsh commands too much. One of the great things about a Xen environment is its ability to recover from a disaster in minimal time.

Kai Schaetzl wrote:

[r...@dom0 ~]# xenstored
[r...@dom0 ~]# FATAL: Failed to initialize dom0 state: Invalid argument

Should be already running, did you check with ps? I get this error as well when I try to run it while it is already running. Kai
[CentOS-virt] Slightly OT: FakeRaid or Software Raid
I have had great luck with nvidia fakeraid on RAID1, but I see there are preferences for software raid. I have very little hands-on with full Linux software RAID, and that was about 14 years ago. I am trying to determine which to use on a rebuild in a standard CentOS/Xen environment. It seems to me that while FakeRaid is/can be completely taken care of in dom0 dmraid, with software raid there *might* be an option to pass that role off more granularly to the domUs and not have it performed by dom0 at all. I have a small number of tiny domUs that rarely change (like an OpenVPN) that are well handled just by backups and firewall-based failover and don't need RAID1. Is there any feedback on where the performance and availability tweak is here?
Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid
Thanks. The portability bonus is a big one. Just two other questions, I think:

- Raid1 entirely in dom0?
- Will RE-type HDs be bad or good in this circumstance? I buy RE types but have recently become aware of the possibility that TLER (Time-Limited Error Recovery) can be an issue when run outside of a RAID, e.g. alone on a desktop machine. I do have a utility where I can change the HD's firmware setting to turn it off or on for either read or write delays.

Christopher G. Stach II wrote:

- Ben M. cen...@rivint.com wrote: I have had great luck with nvidia fakeraid on RAID1, but I see there are preferences for software raid. I have very little hands on with full Linux software RAID and that was about 14 years ago.

MD RAID. I'd even opt for MD RAID over a lot of hardware implementations. This writeup summarizes a bit of why: http://jeremy.zawodny.com/blog/archives/008696.html

Hardware RAID's performance is obviously going to be better, but it's only worth it if you *need* it (more than ~8 disks, parity). If you're just doing RAID 0, 1, or 10 in a single box and you're not pushing it to its limits as a DB server or benchmarking and going over it with a magnifying glass, you probably won't notice a difference in performance. I'll take fewer moving parts and portability. As someone already said, dmraid is done in software, too. Fakeraid is basically the same as MD RAID, but with an extra piece of hardware and extra logic bits to fail.
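For comparison, a two-disk MD RAID 1 of the kind recommended above is created along these lines (device names are illustrative, and the commands are destructive to the named partitions):

```shell
# Build a RAID1 mirror from two partitions (illustrative device names).
# WARNING: destroys existing data on /dev/sda1 and /dev/sdb1.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array definition
cat /proc/mdstat                           # watch the initial resync progress
```

Because the metadata lives on the disks rather than in a controller, the pair can be moved to a different box and assembled there with `mdadm --assemble`, which is the portability bonus discussed in this thread.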
Re: [CentOS-virt] Slightly OT: FakeRaid or Software Raid
Thanks for sharing, Grant. Your point about hardware raid is well taken. However, the discussion is about Fake-Raid vs. Software RAID1 and controller/chipset dependence and portability. The portability of a software RAID1 hard drive to an entirely different box is, I have learned, much higher and less time consuming.

Grant McWilliams wrote:

He had two drives in a RAID 1 and at least one of them failed, but he didn't have any notification software set up to let him know that it had failed. And since that's the case, he didn't know if both drives had failed or not. I wonder why he thinks software RAID would be a) more reliable, b) fix itself magically without telling him. He never did say if he was able to use the second disk. I have 75 machines with 3ware controllers, and on the very rare occasion that a controller fails you plug in another one and boot up. I don't use software RAID in any sort of production environment unless it's RAID 0 and I don't care about the data at all. I've also tested the speed between hardware and software RAID 5, and no matter how many CPUs you throw at it the hardware will win. Even in the case when a 3ware RAID controller only has one drive plugged in, it will beat a single drive plugged into the motherboard if applications are requesting dissimilar data. One stream from an MD RAID 0 will be as fast as one stream from a hardware RAID 0. Multiple streams of dissimilar data will be much faster on the hardware RAID controller due to controller caching.

Grant McWilliams
Re: [CentOS-virt] xendomains not autostarting
xenstore appears to be broken too. I'm hosed and lost. Other services/items are acting up too, including smartd and hotplug. Going to back up the dev'd domUs, reformat the drives, and reinstall base CentOS Xen virtualization.

[r...@dom0 ~]# xenstored
[r...@dom0 ~]# FATAL: Failed to initialize dom0 state: Invalid argument
full talloc report on 'null_context' (total 96 bytes in 3 blocks)
struct domain contains 96 bytes in 2 blocks (ref 0)
/local/domain/0 contains 16 bytes in 1 blocks (ref 0)
[CentOS-virt] xendomains not autostarting
I have been scratching my head on this for days. The xendomains service just doesn't want to start at boot, it seems, so I don't get my auto-domUs up without service xendomains start, and then they all start. chkconfig looks correct, I have checked xm dmesg, dmesg, turned off selinux, and the only clue I have is that the xend.log startup looks different than on a fairly similar machine, and I don't quite understand what it might be saying. Is dom0 crashing and restarting at machine bootup? I have only one domU in ../auto to keep this simpler; its name is v22c54. I have one other anomaly: smartd is also not starting on services boot up but apparently runs fine with a manual command.

=== xend.log boot up ===

[2009-11-30 08:40:53 xend 3466] INFO (SrvDaemon:283) Xend Daemon started
[2009-11-30 08:40:53 xend 3466] INFO (SrvDaemon:287) Xend changeset: unavailable.
[2009-11-30 08:40:53 xend.XendDomainInfo 3466] DEBUG (XendDomainInfo:228) XendDomainInfo.recreate({'paused': 0, 'cpu_time': 19493383087L, 'ssidref': 0, 'hvm': 0, 'shutdown_reason': 0, 'dying': 0, 'mem_kb': 1048576L, 'domid': 0, 'max_vcpu_id': 3, 'crashed': 0, 'running': 1, 'maxmem_kb': 17179869180L, 'shutdown': 0, 'online_vcpus': 4, 'handle': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'blocked': 0})
[2009-11-30 08:40:53 xend.XendDomainInfo 3466] INFO (XendDomainInfo:240) Recreating domain 0, UUID ----.
[2009-11-30 08:40:53 xend.XendDomainInfo 3466] WARNING (XendDomainInfo:262) No vm path in store for existing domain 0 [2009-11-30 08:40:53 xend.XendDomainInfo 3466] DEBUG (XendDomainInfo:992) Storing VM details: {'shadow_memory': '0', 'uuid': '----', 'on_reboot': 'restart', 'on_poweroff': 'destroy', 'name': 'Domain-0', 'xend/restart_count': '0', 'vcpus': '4', 'vcpu_avail': '15', 'memory': '1024', 'on_crash': 'restart', 'maxmem': '1024'} [2009-11-30 08:40:53 xend.XendDomainInfo 3466] DEBUG (XendDomainInfo:1027) Storing domain details: {'cpu/1/availability': 'online', 'cpu/3/availability': 'online', 'name': 'Domain-0', 'console/limit': '4194304', 'cpu/2/availability': 'online', 'vm': '/vm/----', 'domid': '0', 'cpu/0/availability': 'online', 'memory/target': '1048576'} [2009-11-30 08:40:53 xend 3466] DEBUG (XendDomain:163) number of vcpus to use is 2 [2009-11-30 08:40:53 xend 3466] INFO (SrvServer:116) unix path=/var/lib/xend/xend-socket [2009-11-30 08:40:53 xend.XendDomainInfo 3466] DEBUG (XendDomainInfo:1249) XendDomainInfo.handleShutdownWatch == after manual xendomains start == [r...@localhost ~]# service xendomains start Restoring Xen domains: v22c54. 
Starting auto Xen domains: v22c54(skip)[done] [ OK ] ~~ xend.log cont'd from above point ~~ [2009-11-30 11:17:15 xend.XendDomainInfo 3466] DEBUG (XendDomainInfo:287) XendDomainInfo.restore(['domain', ['domid', '3'], ['uuid', 'a3199faf-edb4-42e5-bea1-01f2df77a47f'], ['vcpus', '1'], ['vcpu_avail', '1'], ['cpu_cap', '0'], ['cpu_weight', '256.0'], ['memory', '512'], ['shadow_memory', '0'], ['maxmem', '512'], ['bootloader', '/usr/bin/pygrub'], ['features'], ['name', 'v22c54'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['ramdisk', '/var/lib/xen/boot_ramdisk.yFE7zn'], ['kernel', '/var/lib/xen/boot_kernel.bnNF6O'], ['args', 'ro root=/dev/vgcentos00/root']]], ['cpus', []], ['device', ['vif', ['backend', '0'], ['script', 'vif-bridge'], ['bridge', 'xenbr1'], ['mac', '00:16:36:41:76:ae']]], ['device', ['tap', ['backend', '0'], ['dev', 'xvda:disk'], ['uname', 'tap:aio:/var/lib/xen/images/vms/v22c54'], ['mode', 'w']]], ['device', ['vkbd', ['backend', '0']]], ['device', ['vfb', ['backend', '0'], ['type', 'vnc'], ['vncunused', '1'], ['xauthority', '/root/.Xauthority'], ['keymap', 'en-us']]], ['state', '-b'], ['shutdown_reason', 'poweroff'], ['cpu_time', '0.008262668'], ['online_vcpus', '1'], ['up_time', '305.694555044'], ['start_time', '1259414461.79'], ['store_mfn', '1875035'], ['console_mfn', '2193022']]) [2009-11-30 11:17:15 xend.XendDomainInfo 3466] DEBUG (XendDomainInfo:328) parseConfig: config is ['domain', ['domid', '3'], ['uuid', 'a3199faf-edb4-42e5-bea1-01f2df77a47f'], ['vcpus', '1'], ['vcpu_avail', '1'], ['cpu_cap', '0'], ['cpu_weight', '256.0'], ['memory', '512'], ['shadow_memory', '0'], ['maxmem', '512'], ['bootloader', '/usr/bin/pygrub'], ['features'], ['name', 'v22c54'], ['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash', 'restart'], ['image', ['linux', ['ramdisk', '/var/lib/xen/boot_ramdisk.yFE7zn'], ['kernel', '/var/lib/xen/boot_kernel.bnNF6O'], ['args', 'ro root=/dev/vgcentos00/root']]], 
['cpus', []], ['device', ['vif', ['backend', '0'], ['script', 'vif-bridge'], ['bridge', 'xenbr1'], ['mac', '00:16:36:41:76:ae']]], ['device', ['tap', ['backend', '0'], ['dev', 'xvda:disk'], ['uname', 'tap:aio:/var/lib/xen/images/vms/v22c54'], ['mode', 'w']]],
Re: [CentOS-virt] xendomains not autostarting
Thanks, you gave me some solid points to check that I hadn't fully, and I think I know a little more. My chkconfig run level 2 was on, but runlevel was at N 3. I toggled it off, rebooted, no difference. Toggled on and off for the rest of the checklist. Everything checked out except for XENDOMAINS_RESTORE=true, which is the default. I set it to false and toggled runlevel 2 for a couple of reboot checks. No joy, but ...

Oddly I am getting saves, even though destroy is explicitly set in the vm's conf for all circumstances:

name = 'v22c54'
uuid = 'a3199faf-edb4-42e5-bea1-01f2df77a47f'
maxmem = 512
memory = 512
vcpus = 1
bootloader = '/usr/bin/pygrub'
on_poweroff = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'
vfb = [ 'type=vnc,vncunused=1,keymap=en-us' ]
# note selinux is off now, but the privileges are set correctly
disk = [ 'tap:aio:/var/lib/xen/images/vms/v22c54,xvda,w' ]
vif = [ 'mac=00:16:36:41:76:ae,bridge=xenbr1' ]

I then slapped it around a bit and another quirk appeared. From a fresh boot, I manually started the xendomains service. v22c54 comes up. I did an xm shutdown and it reported it shut, nothing in the save folder. However, check this out:

[r...@river22 ~]# service xendomains start
Starting auto Xen domains: v22c54 [done] [ OK ]
[r...@river22 ~]# xm list
Name        ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0     0      1024      2  r-----    24.7
v22c54       1       511      1  r-----     9.0
[r...@river22 ~]# xm shutdown v22c54
(no echo)

(I then tried to bring it back up; it balks, it's not there, and I see a boot_kernel.random and a boot_ramdisk.random come up in /var/lib/xen)

[r...@river22 ~]# xm create v22c54
Using config file /etc/xen/v22c54.
Error: VM name 'v22c54' already in use by domain 1

(it isn't there)

[r...@river22 ~]# xm list
Name        ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0     0      1024      2  r-----    29.7
[r...@river22 ~]# xm shutdown v22c54
Error: Domain 'v22c54' does not exist.
Usage: xm shutdown Domain [-waRH]
Shutdown a domain.
[r...@river22 ~]# xm list
Name        ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0     0      1024      2  r-----    29.9

I certainly like to know why things glitch and don't mind seeing this through a little further, but I am beginning to wonder if I should just back up the domUs and try a fresh installation. Is it possible I am running into a naming convention issue on these domUs? My first 3 chars help me determine on which host the virtual machine was originally created.

Eric Searcy wrote:

On Nov 30, 2009, at 8:26 AM, Ben M. wrote: I have been scratching my head on this for days. The xendomains service just doesn't want to start at boot, it seems, so I don't get my auto-domUs up without service xendomains start, and then they all start. chkconfig looks correct, I have checked xm dmesg, dmesg, turned off selinux, and the only clue I have is that the xend.log startup looks different than on a fairly similar machine, and I don't quite understand what it might be saying. Is dom0 crashing and restarting at machine bootup? I have only one domU in ../auto to keep this simpler; its name is v22c54. I have one other anomaly: smartd is also not starting on services boot up but apparently runs fine with a manual command.

I'm guessing you covered this (chkconfig looks correct), but you didn't change to a different runlevel like 2, did you?

[r...@xen1 ~]# chkconfig --list xendomains
xendomains 0:off 1:off 2:off 3:on 4:on 5:on 6:off
[r...@xen1 ~]# grep :initdefault /etc/inittab
id:3:initdefault:
[r...@xen1 ~]# runlevel
N 3

Also, I'm not too familiar with it, but if you're not shutting your domains off before reboot there may be something awry with the save/restore functionality. Personally I have this disabled so I can't speak to whether it would create the symptom you have, but it might be something to try.
I have:

[r...@xen1 ~]# grep ^[^#] /etc/sysconfig/xendomains
XENDOMAINS_SYSRQ=
XENDOMAINS_USLEEP=10
XENDOMAINS_CREATE_USLEEP=500
XENDOMAINS_MIGRATE=
XENDOMAINS_SAVE=
XENDOMAINS_SHUTDOWN=--halt --wait
XENDOMAINS_SHUTDOWN_ALL=--all --halt --wait
XENDOMAINS_RESTORE=false
XENDOMAINS_AUTO=/etc/xen/auto
XENDOMAINS_AUTO_ONLY=false
XENDOMAINS_STOP_MAXWAIT=300

Eric
Re: [CentOS-virt] xendomains not autostarting
My apologies, I thought that clearing the subject line and body of all data would do that. All the other lists I have been on for the past 25 years perform that way just fine. I will check the FAQ.

Kai Schaetzl wrote: With or without scratching, please do not hit reply when you want to send a *new* message to the list! Use new message! Thanks, Kai
Re: [CentOS-virt] xendomains not autostarting
I apologize about that list etiquette breach. Was completely unaware a thread string was attached somewhere. Never knew that. I will observe that courtesy. I will go through yours and Christopher's points after I get some needed other work done. After that, I may just backup the domUs I developed and do a new install. I must have hosed something. Eric Searcy wrote: On Mon, Nov 30, 2009 at 11:18 AM, Ben M. cen...@rivint.com wrote: Thanks, you gave me some solid points to check that I hadn't fully and I think I know a little more. My chkconfig run level 2 was on, but runlevel was at N 3. I toggled off, rebooted, no difference. Toggled on and off for the rest of the checklist. (p.s. on next most recent post: it's not about the list, it's that your email client contains a reference to the thread which other mail clients use to present the thread in some manner--a tree or such. All other lists you have been on would have acted the same way, you just may not have realized as we all have different email clients.) I didn't go into detail about what I was trying to point out about the runlevels, which I think may have led you astray a bit. Being in runlevel 3 means it wouldn't matter whether xendomains is set to start when in 2. I only brought it up because by default xendomains doesn't start in 2, so *if* you were starting in 2 it wouldn't start then. As you're apparently running in 3 (the default), toggling the setting for 2 was a bit of a red herring. Everything checked out except for XENDOMAINS_RESTORE=true which is default. I set it to false, toggled the runlevel 2 for a couple of reboot checks. No joy, but ... Oddly I am getting Saves, even though DESTROY is explicitly set in the vm's conf to all circumstances: destroy in this context is your setting for what happens when the domain stops *on its own accord*. You still get saves if you shut down the dom0 and the xendomains script goes around and saves all the running domains (assuming it is configured to do that). 
name = 'v22c54'
uuid = 'a3199faf-edb4-42e5-bea1-01f2df77a47f'
maxmem = 512
memory = 512
vcpus = 1
bootloader = '/usr/bin/pygrub'
on_poweroff = 'destroy'
on_reboot = 'destroy'
on_crash = 'destroy'
vfb = [ 'type=vnc,vncunused=1,keymap=en-us' ]
# note selinux is off now, but the privileges are set correctly
disk = [ 'tap:aio:/var/lib/xen/images/vms/v22c54,xvda,w' ]
vif = [ 'mac=00:16:36:41:76:ae,bridge=xenbr1' ]

I then slapped it around a bit and another quirk appeared. From a fresh boot, I then manually started the xendomains service. v22c54 comes up. I did an xm shutdown and it reported it shut, nothing in the save folder. However, check this out:

[r...@river22 ~]# service xendomains start
Starting auto Xen domains: v22c54 [done] [ OK ]
[r...@river22 ~]# xm list
Name        ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0     0      1024      2  r-----    24.7
v22c54       1       511      1  r-----     9.0
[r...@river22 ~]# xm shutdown v22c54
(no echo)

(I then tried to bring it back up; it balks, it's not there, and I see a boot_kernel.random and a boot_ramdisk.random come up in /var/lib/xen)

[r...@river22 ~]# xm create v22c54
Using config file /etc/xen/v22c54.
Error: VM name 'v22c54' already in use by domain 1

(it isn't there)

[r...@river22 ~]# xm list
Name        ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0     0      1024      2  r-----    29.7
[r...@river22 ~]# xm shutdown v22c54
Error: Domain 'v22c54' does not exist.
Usage: xm shutdown Domain [-waRH]
Shutdown a domain.
[r...@river22 ~]# xm list
Name        ID  Mem(MiB)  VCPUs  State  Time(s)
Domain-0     0      1024      2  r-----    29.9

I certainly like to know why things glitch and don't mind seeing this through a little further, but I am beginning to wonder if I should just back up the domUs and try a fresh installation. Is it possible I am running into a naming convention issue on these domUs? My first 3 chars help me determine on which host the virtual machine was originally created.

Likely just a timing issue if that was the order you ran the commands in. xm shutdown tells the guest to shut down; it doesn't instantly destroy it.
This can take a while depending on what your guest needs to do. If xm create told you it was still in use, it probably was still shutting down. It then probably finished shutting down and was gone by the time you ran xm list. The only thing that would be alarming is if you ran xm list *first* and didn't see the domain, and then ran xm create and it told you it was in use. Typically if I need to hard-cycle the host (config file changes) I shut down a guest from the guest OS, watch xm list until it goes away, and then run xm create. The other thing I meant to suggest in my first email would
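The watch-xm-list-then-create habit described above can be scripted. A minimal sketch, assuming `xm list <name>` exits nonzero once the domain is gone; the function name and timeout are illustrative, not from the thread:

```shell
# Poll `xm list` until the named domain disappears, then it is safe
# to run `xm create` again. Returns 1 if it never goes away.
wait_for_shutdown() {
    name="$1"; tries="${2:-60}"
    while [ "$tries" -gt 0 ]; do
        if ! xm list "$name" >/dev/null 2>&1; then
            return 0                      # domain no longer listed: fully down
        fi
        sleep 1
        tries=$((tries - 1))
    done
    return 1                              # still listed after the timeout
}
```

Usage would be something like `xm shutdown v22c54 && wait_for_shutdown v22c54 && xm create v22c54`.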
Re: [CentOS-virt] xendomains not autostarting
Is it possible that you once used xm start for this domain? Your xm list at the end suggests you didn't, but, well ... Entirely possible. I think I may have issued that command by mistake while in a rush. For about a week I got an error, something akin to "Cannot find xen storage". That may coincide with when I noticed this issue. Is there a queue or a file I can purge? Kai Schaetzl wrote: Ben M. wrote on Mon, 30 Nov 2009 14:18:29 -0500: Oddly I am getting Saves, Is it possible that you once used xm start for this domain? Your xm list at the end suggests you didn't, but, well ... AFAIK you get saves only if you added the vm to xen storage with xm start. If you didn't, then it will shut down self-contained without a save. But if you did, then it will automatically start all VMs that were running on shutdown (and saved), and the auto symlink and this functionality will clash. Kai ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
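For reference, the save/restore behaviour argued about in this thread is controlled from /etc/sysconfig/xendomains on CentOS 5, which the xendomains init script sources as plain shell variables. A sketch of the relevant knobs, set the way the poster wants them (no save on dom0 shutdown, no restore at boot); the values shown are illustrative:

```shell
# /etc/sysconfig/xendomains (sketch of the relevant settings)
XENDOMAINS_AUTO=/etc/xen/auto   # domUs symlinked here are started at boot
XENDOMAINS_SAVE=""              # empty: shut guests down instead of saving them
XENDOMAINS_RESTORE=false        # do not restore saved images at boot
```

With XENDOMAINS_SAVE left at its default (/var/lib/xen/save), a dom0 shutdown saves every running domain regardless of the guest's own on_poweroff setting.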
Re: [CentOS-virt] img partitioning swap
There is no need for a second img to use as swap right? Yes and no. On physical machines I am very used to putting swap on separate controllers and drives for performance reasons, as I am sure many others here are. I have yet to see that pay off on Xen, but I really haven't had a hammered-on system in production yet. All are smaller and much, much more focused on precisely what they need to do, and not commingled as multipurpose machines (e.g. webserver + file server + vpn server). It is an engineering and use question and YMMV, but you can always add it later if the need arises, by really measuring where you are bottlenecking. Adam wrote: There is no need for a second img to use as swap right? -Adam On Sun, Nov 1, 2009 at 7:34 AM, Dennis J. denni...@conversis.de wrote: On 11/01/2009 10:51 AM, Manuel Wolfshant wrote: On 11/01/2009 08:37 AM, Brett Worth wrote: Christopher G. Stach II wrote: I'd recommend not using LVM inside the images because if you just have a raw disk image in there with regular partitions you can mount it on dom0 (with losetup) for maintenance. I don't think that would be possible with LVM. But it is. I guess that's informative, so why don't I feel informed? :-) OK. I'll bite. How? Using the procedure described at http://www.centos.org/docs/5/html/5.2/Virtualization/sect-Virtualization-How_To_troubleshoot_Red_Hat_Virtualization-Accessing_data_on_guest_disk_image.html It should be mentioned that it's important not to accept the default volume group name when using LVM, as that will lead to a collision in a case such as this where the VG name of both host and guest might end up being VolGroup00. I hope RHEL/CentOS 6 chooses better defaults, based on the hostname for example. 
Regards, Dennis ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
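The losetup trick mentioned above can be sketched as a small helper. This is an illustrative sketch, not the exact procedure from the linked Red Hat document; it assumes a raw (non-LVM) guest image, uses kpartx to map the partitions, and the device names are hypothetical:

```shell
# Expose a raw guest disk image's first partition in dom0.
# Needs root on a real system; kpartx maps the loop device's partitions
# as /dev/mapper/<loopname>p1, p2, ...
mount_guest_image() {
    img="$1"; mnt="$2"
    loopdev=$(losetup -f) || return 1      # find a free loop device
    losetup "$loopdev" "$img" || return 1  # attach the image to it
    kpartx -a "$loopdev" || return 1       # create partition mappings
    part="/dev/mapper/$(basename "$loopdev")p1"
    mount "$part" "$mnt"
}
```

Remember to umount, `kpartx -d`, and `losetup -d` afterwards, and never do this while the guest is running.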
Re: [CentOS-virt] Xen network problem
Not trying to take this off-topic, I think it is relevant. Christopher: When I booted up with dom0-cpus 1, Windows 2008 could not find its second cpu. However, once up, I can xm or virt-manager it down to 1 cpu on cpu 0 and everything is fine. Is this a unique issue, or do you have a tweak or hack to force smp with dom0 on one cpu? Christopher G. Stach II wrote: - Ken Bass kb...@kenbass.com wrote: It sounds like the domU is impacting the overall Xen performance. Is there anything I can do to tune this? It kinda defeats the whole Xen virtualization concept if a single CPU messes up the network. I am going to try to add a 'nice' to the crontab for the 4am job, but still, something doesn't seem correct. Any suggestions would be much appreciated. Thanks! In high-I/O environments or ones with a lot of unpredictable guests, it's a good idea to pin dom0's CPU(s) to physical cores and exclude those cores from the guests. I find that dom0 usually only needs one CPU (pinned to one core) in almost every environment. For example, on an 8 core box, dom0 gets CPU 0 (dom0-cpus 1) and all of the guests use 1-7. The occasional firewall/load balancer/something else high-I/O guest would also get its own core, also excluded from use by the other guests, but you should avoid that until you find that it's necessary. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
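The pinning Christopher describes lives in two places on a CentOS 5 Xen host. A hedged sketch for an 8-core box, as in his example; the exact values are illustrative:

```
# /etc/xen/xend-config.sxp -- dom0 uses a single vcpu
(dom0-cpus 1)

# per-guest config file -- keep this guest's vcpus off cpu 0
cpus = "1-7"
```

The guest-side `cpus` line is what keeps domUs from landing on the core reserved for dom0; setting dom0-cpus alone does not exclude guests from that core.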
[CentOS-virt] Move Windows within an LV to another pv safely
Using CentOS Xen current with the 5.4 update applied. I need to move a Windows 2008 installation in LVM2 from one pv/vg/lv to a different disk's pv/vg/lv. What are considered safe ways to move it on the same machine and retain a copy until I am sure it reboots? Turn off (shutdown) in Xen, create identical extents in the target pv/vg/lv, and mount -t ntfs and cp? dd? rsync? Or pvmove (doesn't look like it retains a copy)? Is there an equivalent to AIX cplv? ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS-virt] Move Windows within an LV to another pv safely
I read the man page on pvmove and it looks very cool, especially the auto-continue if there is some sort of system interruption. I plan to try this on a new, non-production machine I am building out, but need to do something right now on the Windows LV. BUT, according to 'man pvmove' it doesn't have a switch to leave a copy behind, or the old extents in place for a fallback. That makes me a little apprehensive about having something ready to roll back to in its most current data state. I don't think I am in the mood for this to be my first test case. haha. Feels like a Murphy's Law morning. RedShift wrote: Ben M. wrote: Using CentOS Xen current with the 5.4 update applied. I need to move a Windows 2008 installation in LVM2 from one pv/vg/lv to a different disk's pv/vg/lv. What are considered safe ways to move it on the same machine and retain a copy until sure it reboots? Turn off (shutdown) in Xen, create identical extents in target pv/vg/lv, and mount -t ntfs and cp? dd? rsync? Or pvmove (doesn't look like it retains a copy)? Is there an equivalent to AIX cplv? If you haven't created a volumegroup on the new target disks, add those disks to the old volume group, execute pvmove on the old logical volume, and when that's complete, execute vgsplit to create the new volume group. Pvmove is pretty robust: it can restart if it's been interrupted and can be aborted. Glenn ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS-virt] Move Windows within an LV to another pv safely
Ah, vgsplit. That is likely the answer. I was completely unfamiliar with that command. Going to try that now. Good stuff again, thanks for the simple straightforward procedures. Ben M. wrote: Oops, I pasted in the wrong notes of what I was sketching out... very sorry about that confusion. Totally botched up. My question about your method was whether a pvmove could be done to a snapshot, which I am going to answer myself by trying it right now. I thought that snapshots were inextricably connected to the source. If there is a problem I am sure I will see the errors. heh. Christopher G. Stach II wrote: - Ben M. cen...@rivint.com wrote: Does this appear to be a sound procedure? I have one inline question. I read your version of the procedure and it looks like you want to skip the pvmove. That's fine, but it means more downtime (an unreliable estimate is one minute per GB). In that case, you don't even need the snapshot. You won't need a point-in-time copy if you are copying from a stable source. 1. Shutdown domU source (source lvname = win2k8-source) which is never file mounted in Xen dom0, just lvm'd. Yeah, just turning off the guest and making sure it doesn't have the ``o'' flag set in the ``lvs'' output is enough. I hope that nothing else had it open (for writing) while your guest was running. :) 2. snapshot source win2k8-source to win2k8-snapshot [How long do I wait before bringing DomU source back up? Is there an indication when it is done? It is approx. 50 gig] A few milliseconds. It will return almost immediately. 3. Bring up domU (Is this necessary if seeking accurate data state? Would rather keep it offline on a weekend day rather than lose data entries.) The snapshot won't change. It's not necessary if you don't need your guest to be up. In fact, you can skip the whole snapshot bit if you don't care about downtime for your guest. Just dd from win2k8-source. You can't perform this step if you aren't going to use pvmove. 
Your source will change and your snapshot will be out of date. You would lose all of your changes between the snapshot and when you reboot the guest the second time. 4. Create identical lv extent space (win3k8-target) on target pv/vg Yes, but win2k8-target. :) Since you are copying to a new VG, you can just keep the LV name the same. 5. dd if=/dev/vgsnapshotsource/win2k8-snapshot of=/dev/vgtarget/win2k8-target Yes, but you can specify a larger block size and it will take less time. I personally just default to using bs=1048576 for most things, even if it's not ideal. 6. Shutdown DomU, change xen win2k8-source domU conf file phy: reference to win2k8-target Nope. Keep it the same. You don't want to run from the snapshot or the backup copy, unless you're skipping the pvmove. If you are, you want to change the VG and/or LV name to the non-backup copy. 6a. Drop snapshot, rename source lv to win2k8-old If you were going with pvmove, you would perform that after this step. 7. Start new domU. 8. Test extensively; if it works, run for a day or two. Keep *-old as a fallback for a week or so, then move it to an archive using dd.

So, we have two possible procedures intermingled here. The major differences are Procedure A (lots of downtime) and Procedure B (minimal downtime).

Procedure A
~~~
1. Create target LV with geometry identical to source LV geometry
2. Stop guest
3. dd
4. Modify guest configuration to point to target LV
5. Start guest

This is the procedure to use if simplicity is desired. As a perk, your source LV becomes your backup.

Procedure B
~~~
1. Create backup LV with geometry identical to source LV geometry
2. Stop guest
3. Create snapshot of source LV
4. Start guest
5. dd from snapshot of source LV to backup LV
6. Drop snapshot of source LV
7. vgextend source VG with additional PV
7. pvmove source LV to additional PV
8. (opt) vgsplit [source VG into additional VG with additional PV]
9. (opt) Modify guest configuration [to point to source LV on additional VG]

Procedure B can be different for Linux guests in that, depending upon your guest filesystem choices (ext3 journal, in particular) and site-specific caching issues, step 2 could be "Pause guest" and step 4 would then be "Resume guest". Depending upon how you handle your PVs and VGs in the optional 8 and 9 steps, you may need to shut down your guest(s). Your desire to have one VG per PV will probably necessitate that being done eventually. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
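The dd in step 5 of either procedure can be followed by a checksum comparison to confirm the copy before dropping anything. A minimal sketch; the function name is illustrative, and on the real system the arguments would be the /dev/vg*/ LV paths:

```shell
# Copy a source block device (or snapshot) to a target with a 1 MiB block
# size, then verify both ends produce the same checksum.
copy_and_verify() {
    src="$1"; dst="$2"
    dd if="$src" of="$dst" bs=1048576 2>/dev/null || return 1
    [ "$(md5sum < "$src")" = "$(md5sum < "$dst")" ]
}
```

Reading the full 50 gig twice for the checksum adds time, but it is cheap insurance before renaming the source to *-old.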
Re: [CentOS-virt] Move Windows within an LV to another pv safely
A little bit of knowledge is a dangerous thing. I just wrote to the wrong lv with a dd and lost 5 days of work on a project. Dumb, dumb, dumb. Ben M. wrote: Ah, vgsplit. That is likely the answer. I was completely unfamiliar with that command. Going to try that now. Good stuff again, thanks for the simple straightforward procedures. [snip -- full quote of the earlier procedure discussion trimmed] ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS-virt] Move Windows within an LV to another pv safely
What a mess that turned out to be. Hey, maybe it was your awesome numbering ducking and running. jk, probably because I forgot to take some meds today or something. I run into an occasional issue with dropping temp pv's assigned to vg's to move things around. I've learned to do pvck, vgck, lvscan, etc. to make sure everything is in order whenever I do anything related to lvm. Occasionally I get vg corruption wherein the UUID is still being sought and I have to vgreduce --removemissing vgname to make sure all is clean. If I reboot with that in the root-containing vg, I hang on boot and lose remote access (no serial hookup yet, planned for the next machine). Oh, the dd quit on me before complete. Is it okay making the target lv bigger than the source when I try it again, or does it have to be exact? I need some extra disk space in that xen vm too. Christopher G. Stach II wrote: - Ben M. cen...@rivint.com wrote: Ack. It wants to be in the same Volume Group. I want to keep my VGs segregated to PVs segregated to distinct harddrives (or devmapped raid1s). Mm hmm. pvmove only works within a VG. You would need to do the vgextend for it to at least be temporarily joined and then vgsplit later. I've found that it's usually worth the extra steps to be able to pvmove. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
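Christopher's vgextend -> pvmove -> vgsplit sequence, sketched as one hypothetical helper; `pvmove -n` restricts the move to a single LV, and all names are passed in (on a real system this needs root and should be preceded by the pvck/vgck/lvscan sanity checks mentioned above):

```shell
# Move one LV onto a new PV, then split that PV off into its own VG.
move_lv() {
    vg="$1"; lv="$2"; oldpv="$3"; newpv="$4"; newvg="$5"
    vgextend "$vg" "$newpv" || return 1           # temporarily join the new PV
    pvmove -n "$lv" "$oldpv" "$newpv" || return 1 # migrate just this LV
    vgsplit "$vg" "$newvg" "$newpv" || return 1   # give the new PV its own VG
}
```

For example, `move_lv vg00 win2k8 /dev/sdb /dev/sdc vg02` would keep the one-VG-per-PV layout the poster wants, at the cost of the temporary join.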
Re: [CentOS-virt] Move Windows within an LV to another pv safely
Did you allocate the new LV with the same number of extents on a VG with the same extent size? It should work perfectly, if so. Ah, no, the target VG was a bit larger. Will try that next. I'm toasted for the evening, time to back off. Been sweating the client's delivery of their project beta, but that is at least 6 months behind schedule ... just don't want to be a part of their cluster %^$#^, I have my own to worry about. Thanks, will pick up tomorrow after I get some billing time in. Christopher G. Stach II wrote: - Ben M. cen...@rivint.com wrote: What a mess that turned out to be. Hey, maybe it was your awesome numbering ducking and running. jk, probably because I forgot to take some meds today or something. Measure twice. Cut once. :) Oh, the dd quit on me before complete. Is it okay making the target lv bigger than the source when I try it again or does it have to be exact? I need some extra disk space in that xen vm too. You can make it larger if you want. The partition table copied from the source will only have the original size allocated. Did you allocate the new LV with the same number of extents on a VG with the same extent size? It should work perfectly, if so. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
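Christopher's "same number of extents" check can be approximated before rerunning the dd by comparing byte sizes. A sketch; the helper name is illustrative:

```shell
# Return success only if two block devices report identical byte sizes.
same_size() {
    s=$(blockdev --getsize64 "$1") || return 1
    t=$(blockdev --getsize64 "$2") || return 1
    [ "$s" = "$t" ]
}
```

If they differ, recreate the target with the same number of extents (lvcreate -l) before copying; growing the LV, and then the partition inside the guest, can wait until after the copy is verified.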
Re: [CentOS-virt] Error setting up bridge with static IP address
Are you using the leaked copy of 5.4 or is it showing on some of the mirrors now? Neil Aggarwal wrote: Actually, this worked. I am able to SSH to the box on the 192.168.2.200 IP address. I had a typo in my ssh command. Sorry for any confusion. Thanks, Neil -- Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com Will your e-commerce site go offline if you have a DB server failure, fiber cut, flood, fire, or other disaster? If so, ask about our geographically redundant database system. -Original Message- From: centos-virt-boun...@centos.org [mailto:centos-virt-boun...@centos.org] On Behalf Of Neil Aggarwal Sent: Tuesday, October 20, 2009 1:11 PM To: 'Discussion about the virtualization on CentOS' Subject: Re: [CentOS-virt] Error setting up bridge with static IP address I did some more reading on the Internet and it looks like I am supposed to set up the bridge on eth0 and configure the bridge with the IP address of the host. So, I removed ifcfg-eth0:1 and changed ifcfg-eth0 to this:

DEVICE=eth0
HWADDR=[The MAC address]
ONBOOT=yes
BRIDGE=br0

I removed ifcfg-br1 and created ifcfg-br0 with this content:

DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
BROADCAST=192.168.2.255
IPADDR=192.168.2.200
NETMASK=255.255.255.0
NETWORK=192.168.2.0
ONBOOT=yes
DELAY=0

I don't get any errors when I do service network restart, but now I can't ssh to the host using the 192.168.2.200 IP address. I also tried setting these values in /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
net.ipv4.ip_forward = 1

and rebooting the machine. That did not help. Any ideas? Thanks, Neil -- Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com 
___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
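When debugging a setup like this, two quick checks confirm that the bridge actually absorbed eth0 and carries the address. A sketch (requires the bridge to be up):

```
brctl show br0     # eth0 should appear in the interfaces column
ip addr show br0   # 192.168.2.200/24 should sit on br0, not on eth0
```

If the address is still on eth0, the ifcfg files were not both picked up by the network restart.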
[CentOS-virt] LVM Lockout
Gack. I added an additional Raid1 pair to my machine just before I planned to bring it over to the office, and I did something dumb and got locked out. I have the pv's, vg's and lv's cleared. All I need to do is get on root and remove a line from fstab, but I can't get it out of read-only mode to save the edit. My root login at the Repair Filesystem prompt seems unable to make the file writeable. I have done this in the past with Knoppix, but can't seem to find the utility to make the filesystem writable (same with CentOS and other live CDs, am downloading newer BackTrack now). ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
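For the read-only shell at the "Repair filesystem" prompt, the usual trick is to remount root read-write rather than reach for a live CD. A sketch; the -n avoids writing to mtab while things are fragile:

```
mount -n -o remount,rw /     # make / writable
vi /etc/fstab                # remove or 'noauto' the offending line
sync
mount -n -o remount,ro /     # optional: back to read-only before reboot
```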
Re: [CentOS-virt] LVM Lockout
That worked the first time (or second time, I still keep trying to add the additional storage). I couldn't mount it until I unplugged the second (new) raid1 this time. It might be connected to how dmraid is loading up. dmraid loads during the xen-kernel bootup, right? Jorick Astrego wrote: It's really simple: Boot the CentOS 5 CD and start rescue mode by typing linux rescue, then:

sudo -i
mkdir /mnt/root
mount -t ext3 /dev/whatever /mnt/root
nano /mnt/root/etc/fstab

Regards, Netbulae Jorick Astrego Netbulae B.V. Janninksweg 127 7513 DH Enschede Tel. +31 (0)6 - 34 15 20 76 Fax. +31 (0)53 - 88 00 326 Email: jor...@netbulae.com Site: http://www.netbulae.com [digest quote of the original "LVM Lockout" message trimmed]
___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS-virt] LVM Lockout
The metadata should get everything loaded for you out of the initrd, as long as it has all of the necessary RAID level drivers in it. I wish. This has been agonizing and it looks like I am going to be installing a kludge. I'm going to have to accelerate the build of the next ones. I don't have a warm and fuzzy on this machine anymore. Christopher G. Stach II wrote: - Ben M. cen...@rivint.com wrote: Are there any known issues with Xen/CentOS standard and adding new drives and dmraid automagically activating them? There are problems with things like snapshots of LVs in VMs on LVs on dmraid in dom0. :) I don't know of anything about adding RAID devices in dom0. The metadata should get everything loaded for you out of the initrd, as long as it has all of the necessary RAID level drivers in it. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
Re: [CentOS-virt] LVM Lockout SIMPLER
Given what I wrote before, after boot up I want this:

[r...@thisxen1 ~]# dmraid -s
*** Active Set
name   : nvidia_fbacacad
size   : 293046656
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0
*** Set
name   : nvidia_hfcehfah
size   : 1250263680
stride : 128
type   : mirror
status : ok
subsets: 0
devs   : 2
spares : 0

To have BOTH sets Active so that the LVs for my domUs are up. nvidia_hfcehfah is not active. If I type dmraid -ay, they will both be active. I can't find a grub.conf kernel argument to do that, which would seem safer than the fstab entry and would have a more graceful failure, likely limited to just the domUs. Ben M. wrote: It may be easier to help if you explain where you're at, where you want to be, and what you did to make it worse. :) The dmraid and LVM layouts would help, as well as the symptoms you're having. This is all a standard CentOS 5.3 install, with X Windows and XFCE4 loaded afterwards and initiated from the command line, not as a service. No repos other than 'stock'. I use virt-manager to start domUs quickly and then edit them to polish off. Scenario: Small machine, a test case, but I will be using pretty much the same for the next few builds, only twice the capacity. 4 cpu cores, 16 gig ram. - 2x AMD 2212s (2 cores by two) - Tyan board, 16 gig ECC memory - MCP55 nvidia fake raid (I have had good fortune with this chipset) - Pair of 160 gig WD VelociRaptors, RAID1 in the BIOS, dmraid on the OS - DVD plus a 160 gig pata drive on IDE for odds and ends - ACPI Manufacturer's Table set to Off in BIOS, was troublesome. Essentially everything is good with the exception of a couple of xm dmesg WRMSR and NMI warnings that are not fatal. It ran decently with the above. Fine on domUs with CentOS x86; not sure on x64 but I don't need it yet. Have Windows 2008 Web Server (x86 version, not x64) running okay with GPLPV. I say okay because I must be getting some sloppy shutdowns due to some Registry Hive and corruption errors. 
It could be that W2K8 is not the stable creature that I found in W2k3. It certainly is loaded with an incredible amount of crap that is useless, some of which seems to be serving DRM more than system needs. They should have called it Vista Server, heh. Then instability caused by the addition of another RAID1 set. Situation (Where I'm at) = I added in a pair of WD 640 gig Caviars, not RE types, which I modded a tiny bit with the WDTLER utility to make them more raid-ready. It's a minor trick, but gives you a little more assurance. They show up in /dev as sd(x)s, but not in /dev/mapper with the nvidia_ handle. I type dmraid -ay and there they are, but not auto-magically like the first pair do at boot. I don't see a conf file to make this happen. Mistake 1: == Never mount to the primary device in mapper, only the ones that end with p1, p2, etc. Mistake 2: == Never pvextend a disk into the same vggroup as your / (root). I wanted this so I could migrate the extents off of the test domUs that were good. That would have been the easy, and slick, way. Probably Mistake 3: === Don't pvextend hard devices at all. Keep VolumeGroups on dedicated PhysicalVolumes that don't cross over. My initial raidset was 'vg00' and the added-on pata drive 'vg01', and I was fine with that. The loss of a LogicalVolume that is on a dropped device is rather inelegant. I have not found out what happens if all is contained in a dedicated Volume Group (VG) on a dedicated device (PV), but if a LogicalVolume (LV) is on another device (PV) than the rest of the LVs within that VG, then Xen, dom0 and all the domUs are non-booting and inaccessible via IP protocols. An fstab entry for a /dev/mapper/raid_device(PartNo)/LVname kills the boot if the LV isn't there, whether by hardware drop or non-initialization by dmraid. All seems fine on a one-dmraid-device box. It is two Raid1s where I am hitting a wall. I'm going to try, right now, one more time, with the additional raid set on its own PV/VG setup (e.g. vg02). 
Where I was: Stable with 1 Raid1 set, running out of space. Where I am: Unstable after attempting to add additional Raid1 set. Where I want to be: 2 Raid1 sets 1 miscellaneous pata and stable. I am going to try one more time to add the additional RAID1 set with own PV and VG and figure out how to move existing domUs to it after I bring it to office (150 miles away). I hope I don't have to run back and forth to go on the console. Christopher G. Stach II wrote: - Ben M. cen...@rivint.com wrote: The metadata should get everything loaded for you out of the initrd, as long as it has all of the necessary RAID level drivers in it. I wish. This has been agonizing and it looks like I am going to be installing a kludge. I'm going to have to accelerate the build of the next ones. I don't have a warm and fuzzy
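One hedged workaround for the second set not activating at boot, since no grub.conf argument turned up: keep the second set's LVs out of fstab (or mark them noauto, so a missing set cannot hang the boot) and activate everything late from /etc/rc.local. This is only a sketch, with an assumed VG name of vg02 for the new set, and it assumes xendomains' own init-time start either failed harmlessly or was disabled with chkconfig:

```
# /etc/rc.local (sketch)
dmraid -ay                # activate any remaining fake-RAID sets
vgchange -ay vg02         # bring up the VG living on the second set
service xendomains start  # now the auto domUs can find their LVs
```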
Re: [CentOS-virt] Testing new Xen Version and rollback.
Thank you very much, that fills in a few gaps in my knowledge. Christopher G. Stach II wrote: - Ben M. cen...@rivint.com wrote: I do not have a comprehensive grasp on startup scripts, as well as what files are not rolled into the kernel itself. In other words, I don't understand yet, when a new kernel is installed, whether there are any support files that come with it, or whether everything that, for instance, the Xen kernel needs (hardware drivers) is entirely within that kernel file. Kernels in major distros are usually distributed with most drivers compiled as modules, in a package that contains those modules and an initrd, or a script that makes an initrd, that contains the drivers necessary to boot your system. This isn't always the case, as drivers may be compiled right into the kernel or they may be completely excluded for whatever reason (mini distros, appliances). After booting, depmod resolves the kernel module dependencies in /lib/modules/<kernel version> only for the kernel that is currently running. As long as you don't install a kernel package that has the same version string (e.g., 2.6.18-128.4.1.el5xen) as a kernel you care about, you have nothing to worry about. If someone is distributing third-party kernel packages that collide with a major distribution's without a really good reason, you should probably avoid using their packages altogether. If it is just a matter of having a section for it in grub.conf. Many kernel packages will set up grub.conf for you. If it's just a tarball, you will have to do this manually. You may also need to build a new initrd. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt
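Concretely, installing a second kernel side by side on CentOS 5 looks roughly like this; the version string 2.6.X-xen-custom is purely illustrative, and the stock kernel's /lib/modules tree is untouched as long as the version strings differ:

```
depmod -a 2.6.X-xen-custom     # resolve module deps for the new tree only
mkinitrd /boot/initrd-2.6.X-xen-custom.img 2.6.X-xen-custom
# then add a stanza to /boot/grub/grub.conf for the new kernel/initrd,
# keeping the old entries (and 'default') so rollback is a one-line edit
```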
[CentOS-virt] Testing new Xen Version and rollback.
I have a fairly stable Xen (CentOS 5.3 standard 3.1.x Xen) install that I want to put into production within the next two weeks or so. I have some small (so far non-fatal) issues and tweaks that Xen 3.4.x may address, e.g. AMD x64 IOMMU bios read, GPLPV PCI connection, HPET clock, better GPLPV handling, and some others. My question is: if I follow the directions at stacklet.com (http://stacklet.com/downloads/kernel) to load up Xen 3.4, can/will depmod overwrite dependencies needed for my standard Xen kernel, such that a simple edit of grub.conf would no longer restore the standard Xen kernel? I'm not familiar with depmod's actions. ___ CentOS-virt mailing list CentOS-virt@centos.org http://lists.centos.org/mailman/listinfo/centos-virt