Hi all,
one more question regarding KVM on IBM System z:
Is there a way to check whether KVM is using hardware virtualisation (the SIE
instruction)?
I installed SLES 11 and virt-host-validate is missing. In FC 17 it returns only
software virtualisation:
QEMU: Checking for hardware virtualization
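One way to check from the shell is to look for the "sie" feature flag, which newer s390x kernels expose in /proc/cpuinfo (an assumption on my part about the cpuinfo layout; virt-host-validate performs a similar check). A small sketch:

```shell
# has_sie FILE - succeed if a cpuinfo-style FILE lists the "sie"
# feature flag (assumption: s390x /proc/cpuinfo "features" layout)
has_sie() {
    grep -qw sie "$1"
}

# on a real host (hypothetical usage):
#   has_sie /proc/cpuinfo && [ -e /dev/kvm ] && echo "SIE available to KVM"
```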
So that might have been my problem (but not necessarily limited to that
one).
I was on a SUSE 10 system. I initially striped the LVM. When it got nearly
full, I tried to add a pack, but couldn't. So I went back and recreated
the LVM without striping, and then I could add a pack. I want to say that
On Wed, Oct 10, 2012 at 12:53 PM, Mark Post wrote:
>
> >>> On 10/10/2012 at 11:35 AM, "Duerbusch, Tom"
> >>> wrote:
>
> > Just speaking to LVM...
> >
> > Striping the data across multiple volumes (which in modern dasd is already
> > striped in the RAID array) would give you the best performance.
Tom,
I agree 100% with you. In many cases, using LVM (with stripes) and PAV, we
have improved I/O capacity by 100% for an I/O-intensive workload (e.g. an
Oracle database).
One critical point in some cases is that the LVM architecture is not fully
supported by vendors (e.g. Oracle RAC using LVM volumes in cl
And to add to the conversation, I've also used UDEV to help query disk
information for me in the past. Say your SCSI disk is "/dev/sdf", then you
can query all sorts of information with udev via:
udevadm info -a -p $(udevadm info -q path -n /dev/sdf) | less -Ip size
I choose "less" in my sample C
>>> On 10/10/2012 at 11:35 AM, "Duerbusch, Tom"
>>> wrote:
> Just speaking to LVM...
>
> Striping the data across multiple volumes (which in modern dasd is already
> striped in the RAID array) would give you the best performance.
> Especially if you can stripe across multiple DS8000 (or other dasd
> subsystems).
On Oct 9, 2012, at 7:32 PM, Thang Pham wrote:
> Hello List,
>
> Is there a way to find out the size of a native SCSI device attached via
> FCP channel? I do not see lszfcp or lsscsi having an option that lets you
> see the size of the disk you have attached to a VM.
>
I always liked "sfdisk -s".
>>> On 10/9/2012 at 07:32 PM, Thang Pham wrote:
> Is there a way to find out the size of a native SCSI device attached via
> FCP channel? I do not see lszfcp or lsscsi having an option that lets you
> see the size of the disk you have attached to a VM.
The simplest and most direct method is sim
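The reply above is cut off; one direct method (my assumption, not necessarily what the poster meant) is reading the sector count from sysfs. /sys/block/<dev>/size always counts 512-byte units, regardless of the device's logical block size:

```shell
# sectors_file_bytes FILE - FILE holds a 512-byte-sector count,
# as /sys/block/<dev>/size does on Linux
sectors_file_bytes() {
    echo $(( $(cat "$1") * 512 ))
}

# on a real host (hypothetical device name):
#   sectors_file_bytes /sys/block/sdf/size
```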
Just speaking to LVM...
Striping the data across multiple volumes (which in modern dasd is already
striped in the RAID array) would give you the best performance.
Especially if you can stripe across multiple DS8000 (or other dasd
subsystems).
But you can also use LVM as a pool of DASD, with no
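For reference, a striped LV of the kind described might be created like this (a sketch only; the device names, volume group name, size, and stripe size are all made-up examples):

```shell
# four hypothetical DASD partitions become physical volumes
pvcreate /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 /dev/dasde1
vgcreate datavg /dev/dasdb1 /dev/dasdc1 /dev/dasdd1 /dev/dasde1
# -i 4: stripe across all four PVs; -I 64: 64 KiB stripe size
lvcreate -i 4 -I 64 -L 20G -n stripedlv datavg
```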
On Wednesday, 10/10/2012 at 09:40 EDT, Steffen Maier
wrote:
> I'm not quite sure what you mean in your second sentence with regard to
> a disk attached to a VM.
> To a VM as a z/VM userid, i.e. attached to the virtual machine by the
> hypervisor? AFAIK this means EDEV under z/VM. I don't know of
I use /proc/partitions all the time, but stopped recommending it to others
when I was told (can't remember the source) that it was deprecated. I'd be
glad to be wrong here... as you say, it's quick and easy, with no running
through /dev or /sys structures.
Scott Rohling
On Wed, Oct 10, 2012 at 7:02
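For what it's worth, /proc/partitions reports sizes in 1 KiB blocks, so a quick lookup can be scripted (a sketch; the device name is an example):

```shell
# blocks_of DEV FILE - print the 1 KiB block count for DEV from a
# /proc/partitions-style FILE (fields: major minor #blocks name)
blocks_of() {
    awk -v d="$1" '$4 == d { print $3 }' "$2"
}

# on a real host: blocks_of sdf /proc/partitions
```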
On 10.10.2012 01:32, Thang Pham wrote:
Is there a way to find out the size of a native SCSI device attached via
FCP channel? I do not see lszfcp or lsscsi having an option that lets you
see the size of the disk you have attached to a VM.
You can view the usable size of any block device (not ju
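The sentence above is truncated; a likely candidate (my assumption) is blockdev, which asks the kernel for the size of any block device, not just SCSI. A helper for the GiB conversion:

```shell
# to_gib BYTES - whole GiB from a byte count
to_gib() {
    echo $(( $1 / 1073741824 ))
}

# on a real host (hypothetical device):
#   blockdev --getsize64 /dev/sdf              # size in bytes
#   to_gib "$(blockdev --getsize64 /dev/sdf)"  # size in whole GiB
```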
On 10/10/2012 02:00 AM, Thang Pham wrote:
That works, thanks.
From: Raymond Higgs/Poughkeepsie/IBM@IBMUS
Date: 10/09/2012 07:56 PM
It is in /var/log/messages:
Oct 9 19:43:21 4e1d-laplace-48 kernel: [23044.656933] scsi 1:0:26:1082146832:
Direct-Access IBM 2107900 36.5 PQ
fdisk can help.
Here is a little hack I named "lunsizes" that assumes up to 26 mounted
LUNs and "friendly names" (/dev/mapper/mpath):
#!/bin/bash
# find LUNs in /dev/mapper and list in bytes and GiB
ls /dev/mapper/mpath[a-z] >/dev/null 2>&1
if [ $? != 0 ]; then # no LUNs found
    echo "No LUNs found"; exit 1
fi
for lun in /dev/mapper/mpath[a-z]; do   # reconstructed past the truncation
    bytes=$(blockdev --getsize64 "$lun")
    echo "$lun: $bytes bytes, $((bytes / 1073741824)) GiB"
done