Re: KVM on IBM System z

2012-10-10 Thread Tobias Doerkes
Hi all,

one more question regarding KVM on IBM System z:
Is there a way to check whether KVM is using hardware virtualisation (the SIE
instruction)?

I installed SLES 11, but virt-host-validate is missing there. On FC 17 it reports
only software virtualisation:

  QEMU: Checking for hardware virtualization : WARN (Only emulated CPUs are available, performance will be significantly limited)
  QEMU: Checking for device /dev/vhost-net   : PASS
  QEMU: Checking for device /dev/net/tun     : PASS
   LXC: Checking for Linux >= 2.6.26         : PASS

But I think virt-host-validate in FC17 has no support for s390x, so I want to
check whether SIE is used or not.

Kind regards,

Tobias.

  
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: HyperPAV and LVM striping

2012-10-10 Thread Duerbusch, Tom
So that might have been my problem (but not necessarily limited to that
one).

I was on a SUSE 10 system.  I initially striped the LVM.  When it got nearly
full, I tried to add a pack.  Couldn't do it.  So I went back and recreated
the LVM without striping and I could add a pack.  I want to say that the
documentation at that time also said you couldn't add packs to a striped
LVM, but that was a while ago.

Anyway, it hasn't been a performance issue.  But that is due to us not
needing the I/O performance.

Thanks for the update.  I'm updating my notes.

Tom Duerbusch
THD Consulting

On Wed, Oct 10, 2012 at 12:53 PM, Mark Post  wrote:

> >>> On 10/10/2012 at 11:35 AM, "Duerbusch, Tom" 
> wrote:
>
> > Just speaking to LVM...
> >
> > Striping the data across multiple volumes (which in modern dasd is
> > already striped in the RAID array) would give you the best performance,
> > especially if you can stripe across multiple DS8000 (or other dasd
> > subsystems).
> >
> > But you can also use LVM as a pool of DASD, with no striping involved.
> >
> > In case 1, if you need to expand the LVM pool, it is a hassle.  It might
> > mean backing up, reformatting and reloading the data.  In any case, it
> > involves a knowledgeable person and most likely, downtime.
>
> This is simply not true.  Expanding a striped LV can be done dynamically
> with no downtime.  The only aspect that is different from a non-striped LV
> is that you have to have enough free space on as many different PVs as the
> number of stripes you have.  That is, if you did an "lvcreate -i 2" then
> when you do an lvextend/lvresize, you have to have free space available on
> 2 different PVs in the pool.  An "lvcreate -i 3" means you need free space
> on 3 PVs, etc.
>
> A lot of people tend to add space to a volume group one PV at a time.  If
> you're using striped LVs, that won't work unless you make sure that the
> existing PVs have enough free space on them to accommodate additional
> stripes being allocated.
>
>
> Mark Post
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: HyperPAV and LVM striping

2012-10-10 Thread Leland Lucius
On Wed, Oct 10, 2012 at 12:53 PM, Mark Post  wrote:
>
> >>> On 10/10/2012 at 11:35 AM, "Duerbusch, Tom"  
> >>> wrote:
>
> > Just speaking to LVM...
> >
> > Striping the data across multiple volumes (which in modern dasd is already
> > striped in the RAID array), would give you the best performance.
> >  Especially if you can stripe across multiple DS8000 (or other dasd
> > subsystems).
> >
> > But you can also use LVM as a pool of DASD, with no striping involved.
> >
> > In case 1, if you need to expand the LVM pool, it is a hassle.  It might
> > mean backing up, reformatting and reloading the data.  In any case, it
> > involves a knowledgeable person and most likely, downtime.
>
> This is simply not true.  Expanding a striped LV can be done dynamically with 
> no downtime.  The only aspect that is different from a non-striped LV is that 
> you have to have enough free space on as many different PVs as the number of 
> stripes you have.  That is, if you did an "lvcreate -i 2" then when you do an 
> lvextend/lvresize, you have to have free space available on 2 different PVs 
> in the pool.  An "lvcreate -i 3" means you need free space on 3 PVs, etc.
>

Not "really" true either.  Even if you originally stripe with 2 or 3
or whatever, you can always "lvextend -i 1" to add another segment.
That's because striping is done at the segment level and each segment
can be configured independently.

Mind you, you lose the performance benefit for that specific
segment, but that can be remedied later when you have more time or can
take the outage.
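
A minimal sketch of that (the VG, LV and PV names are just examples, not from
the thread):

vgextend vgdata /dev/dasdf1          # only one new PV is available
lvextend -i 1 -L +5G vgdata/lvdata   # add the space as a linear (unstriped) segment
lvs --segments vgdata/lvdata         # shows the original striped segment plus the new linear one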

Leland

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/



Re: HyperPAV and LVM striping

2012-10-10 Thread Fernando Gieseler
Tom,

I agree 100% with you. In many cases, using LVM (with stripes) and PAV, we
can improve the I/O capacity by 100% for an I/O-intensive workload (e.g. an
Oracle database).

One critical point in some cases is that the LVM architecture is not fully
supported by vendors (e.g. Oracle RAC using LVM volumes in a cluster
filesystem based on ASM technology is not supported by Red Hat, Oracle or
SUSE). In that situation, the best approach is to use only the PAV
technologies (HyperPAV, dynamic PAV, static PAV, etc.) with a large number
of alias volumes and MDISKs in the z/VM directory (to get parallel access
to those volumes).

But considering both I/O capacity and ease of management, I suggest using
SCSI/FCP volumes, if that option is open to you.

Regards,

Fernando


2012/10/10 Duerbusch, Tom 

> Just speaking to LVM...
>
> Striping the data across multiple volumes (which in modern dasd is already
> striped in the RAID array), would give you the best performance.
>  Especially if you can stripe across multiple DS8000 (or other dasd
> subsystems).
>
> But you can also use LVM as a pool of DASD, with no striping involved.
>
> In case 1, if you need to expand the LVM pool, it is a hassle.  It might
> mean backing up, reformatting and reloading the data.  In any case, it
> involves a knowledgeable person and most likely, downtime.
>
> In case 2, if you need to expand the LVM pool, you can just add disks to it
> on the fly (and even easier with VM).  No downtime.  I add dasd to my LVM
> pool in minutes.
>
> The trade-off is normally (well, isn't it always) performance vs. manpower.
>  If you find you really don't need the "BEST" performance, then make the
> job easier.
>
> But then, you know the requirements of your application.
>
> Tom Duerbusch
> THD Consulting
>
> On Tue, Oct 9, 2012 at 5:50 PM, Brad Hinson  wrote:
>
> > Hi folks,
> >
> > What are the best practices for HyperPAV and LVM striping?  I assumed
> that
> > if you have HyperPAV enabled, you don't need to stripe the data.  Is this
> > true, or if not, what is the best practice for optimum performance?
> >
> > I have lots of mod-9 ECKD with HyperPAV enabled, so I want to use LVM.
>  So
> > my two choices are standard LVM, or LVM striping.  If I stripe across the
> > disks I spread the I/O across the physical volumes, but my gut tells me I
> > shouldn't have to do this, since HyperPAV is moving around aliases
> > dynamically.  For example, say I have 2 PVs and 4 HyperPAV aliases.  If I
> > send some heavy I/O through the Linux (device-mapper) block device, then
> I
> > would assume:
> >
> > - #1, for the case with LVM striping enabled, LVM will spread the I/O to
> > both PVs, and HyperPAV will assign 2 aliases to each PV since I'm banging
> > on them both.
> > - #2, for the case without LVM striping, HyperPAV will assign 4 aliases
> to
> > the first PV since that's the only one in use.
> >
> > In either case, it seems I'm using all 4 aliases, so seems like I would
> > get the same performance.  Please correct me if I'm wrong.  And if so,
> > which of these configs is better?
> >
> > Lastly, is there a presentation or doc that talks about how to enable
> > HyperPAV in Linux, or is bringing the HyperPAV aliases online enough to
> > trigger the dasd driver to do the right thing?
> >
> > Thanks as always,
> > -Brad
> >
> > --
> > Brad Hinson
> > Solution Architect, Red Hat
> > +1 (919) 360-0443

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Jon Miller
And to add to the conversation, I've also used UDEV to help query disk
information for me in the past. Say your SCSI disk is "/dev/sdf", then you
can query all sorts of information with udev via:
udevadm info -a -p $(udevadm info -q path -n /dev/sdf) | less -Ip size

I chose "less" in my sample CLI invocation to help point out the "size"
attribute, but also to promote exploration of the other attributes available.

-- Jon Miller

On Wed, Oct 10, 2012 at 12:27 PM, Mark Post  wrote:

> >>> On 10/9/2012 at 07:32 PM, Thang Pham  wrote:
> > Is there a way to find out the size of a native SCSI device attached via
> > FCP channel?  I do not see lszfcp or lsscsi having an option that lets
> you
> > see the size of the disk you have attached to a VM.
>
> The simplest and most direct method is simply "fdisk -l /dev/sd?".  After
> all, it's just a SCSI disk.
>
>
> Mark Post
>

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: HyperPAV and LVM striping

2012-10-10 Thread Mark Post
>>> On 10/10/2012 at 11:35 AM, "Duerbusch, Tom"  
>>> wrote:

> Just speaking to LVM...
> 
> Striping the data across multiple volumes (which in modern dasd is already
> striped in the RAID array), would give you the best performance.
>  Especially if you can stripe across multiple DS8000 (or other dasd
> subsystems).
> 
> But you can also use LVM as a pool of DASD, with no striping involved.
> 
> In case 1, if you need to expand the LVM pool, it is a hassle.  It might
> mean backing up, reformatting and reloading the data.  In any case, it
> involves a knowledgeable person and most likely, downtime.

This is simply not true.  Expanding a striped LV can be done dynamically with 
no downtime.  The only aspect that is different from a non-striped LV is that 
you have to have enough free space on as many different PVs as the number of 
stripes you have.  That is, if you did an "lvcreate -i 2" then when you do an 
lvextend/lvresize, you have to have free space available on 2 different PVs in 
the pool.  An "lvcreate -i 3" means you need free space on 3 PVs, etc.

A lot of people tend to add space to a volume group one PV at a time.  If 
you're using striped LVs, that won't work unless you make sure that the 
existing PVs have enough free space on them to accommodate additional stripes 
being allocated.
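
To make that concrete, a minimal sketch (the device, VG and LV names are made
up for illustration):

# Create a 2-way striped LV, then grow it later.
vgcreate vgdata /dev/dasdb1 /dev/dasdc1
lvcreate -i 2 -L 10G -n lvdata vgdata    # stripes the LV across 2 PVs

# To extend it later, the VG needs free space on at least 2 PVs,
# so add PVs in pairs rather than one at a time.
vgextend vgdata /dev/dasdd1 /dev/dasde1
lvextend -L +10G vgdata/lvdata           # new extents are again allocated 2 stripes wide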


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Brad Hinson
On Oct 9, 2012, at 7:32 PM, Thang Pham wrote:

> Hello List,
> 
> Is there a way to find out the size of a native SCSI device attached via
> FCP channel?  I do not see lszfcp or lsscsi having an option that lets you
> see the size of the disk you have attached to a VM.
> 

I always liked "sfdisk -s ".  It just returns the size as one
number, which is easy to parse in a script.
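
For example, a quick sketch (the device name is just an example; sfdisk -s
reports the size in 1 KiB blocks):

size_kb=$(sfdisk -s /dev/sdf)
echo "/dev/sdf: ${size_kb} KiB ($(( size_kb / 1024 / 1024 )) GiB)"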

> Thanks,
> Thang Pham
> 
> --
> For LINUX-390 subscribe / signoff / archive access instructions,
> send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
> http://www.marist.edu/htbin/wlvindex?LINUX-390
> --
> For more information on Linux on System z, visit
> http://wiki.linuxvm.org/

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Mark Post
>>> On 10/9/2012 at 07:32 PM, Thang Pham  wrote: 
> Is there a way to find out the size of a native SCSI device attached via
> FCP channel?  I do not see lszfcp or lsscsi having an option that lets you
> see the size of the disk you have attached to a VM.

The simplest and most direct method is simply "fdisk -l /dev/sd?".  After all, 
it's just a SCSI disk.


Mark Post

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: HyperPAV and LVM striping

2012-10-10 Thread Duerbusch, Tom
Just speaking to LVM...

Striping the data across multiple volumes (which in modern dasd is already
striped in the RAID array), would give you the best performance.
 Especially if you can stripe across multiple DS8000 (or other dasd
subsystems).

But you can also use LVM as a pool of DASD, with no striping involved.

In case 1, if you need to expand the LVM pool, it is a hassle.  It might
mean backing up, reformatting and reloading the data.  In any case, it
involves a knowledgeable person and most likely, downtime.

In case 2, if you need to expand the LVM pool, you can just add disks to it
on the fly (and even easier with VM).  No downtime.  I add dasd to my LVM
pool in minutes.
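
For what it's worth, a minimal sketch of that kind of on-the-fly expansion
(hypothetical names; assumes a new DASD already online and low-level formatted,
visible as /dev/dasdd, a VG called vgdata and an ext3 LV mounted from it):

fdasd -a /dev/dasdd                  # auto-create one partition spanning the disk
pvcreate /dev/dasdd1                 # label it as an LVM physical volume
vgextend vgdata /dev/dasdd1          # add it to the volume group
lvextend -L +6G /dev/vgdata/lvdata   # grow the logical volume
resize2fs /dev/vgdata/lvdata         # grow the ext3 filesystem online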

The trade-off is normally (well, isn't it always) performance vs. manpower.
 If you find you really don't need the "BEST" performance, then make the
job easier.

But then, you know the requirements of your application.

Tom Duerbusch
THD Consulting

On Tue, Oct 9, 2012 at 5:50 PM, Brad Hinson  wrote:

> Hi folks,
>
> What are the best practices for HyperPAV and LVM striping?  I assumed that
> if you have HyperPAV enabled, you don't need to stripe the data.  Is this
> true, or if not, what is the best practice for optimum performance?
>
> I have lots of mod-9 ECKD with HyperPAV enabled, so I want to use LVM.  So
> my two choices are standard LVM, or LVM striping.  If I stripe across the
> disks I spread the I/O across the physical volumes, but my gut tells me I
> shouldn't have to do this, since HyperPAV is moving around aliases
> dynamically.  For example, say I have 2 PVs and 4 HyperPAV aliases.  If I
> send some heavy I/O through the Linux (device-mapper) block device, then I
> would assume:
>
> - #1, for the case with LVM striping enabled, LVM will spread the I/O to
> both PVs, and HyperPAV will assign 2 aliases to each PV since I'm banging
> on them both.
> - #2, for the case without LVM striping, HyperPAV will assign 4 aliases to
> the first PV since that's the only one in use.
>
> In either case, it seems I'm using all 4 aliases, so seems like I would
> get the same performance.  Please correct me if I'm wrong.  And if so,
> which of these configs is better?
>
> Lastly, is there a presentation or doc that talks about how to enable
> HyperPAV in Linux, or is bringing the HyperPAV aliases online enough to
> trigger the dasd driver to do the right thing?
>
> Thanks as always,
> -Brad
>
> --
> Brad Hinson
> Solution Architect, Red Hat
> +1 (919) 360-0443
--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Alan Altmark
On Wednesday, 10/10/2012 at 09:40 EDT, Steffen Maier
 wrote:

> I'm not quite sure what you mean in your second sentence with regard to
> a disk attached to a VM.
> To a VM as a z/VM userid, i.e. attached to the virtual machine by the
> hypervisor? AFAIK this means EDEV under z/VM. I don't know of any other
> way of attaching a scsi disk to a VM (usually users only attach the FCP
> host bus adapter to a VM).
> To a VM as in guest operating system? Then the previous paragraphs
apply.
> Or would you like to get the lun size without even attaching them to
> Linux or the VM? If so, then this depends on the storage type since
> you'd have to use out of band mechanisms (i.e. non-scsi) to query the
> storage target server.

Regrettably absent is the ability of CP to perform standard (and common
extension) inquiries of a remote LUN without having to ATTACH it to a
guest.  I.e. CP QUERY LUN 5 WWPN x  DETAILS.  Right now,
LUNs are very much like tape drives: they have to be attached to a guest
that knows how to talk to them in detail in order to get useful information
out of them.

Alan Altmark

Senior Managing z/VM and Linux Consultant
IBM System Lab Services and Training
ibm.com/systems/services/labservices
office: 607.429.3323
mobile: 607.321.7556
alan_altm...@us.ibm.com
IBM Endicott

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Scott Rohling
I use /proc/partitions all the time, but stopped recommending it to others
when I was told (I can't remember the source) that it was deprecated.  I'd be
glad to be wrong here; as you say, it's quick and easy, with no running
through /dev or /sys structures.

Scott Rohling

On Wed, Oct 10, 2012 at 7:02 AM, Rick Troth wrote:

> On 10/10/2012 09:27 AM, Peter Oberparleiter wrote:
> > # cat /proc/partitions
> > major minor  #blocks  name
> >    8        0   10485760 sda
> >    8        1   10485743 sda1
> >
> > Note: the #blocks is the size in 1k-blocks
>
> Yep.  I was going to suggest exactly what Peter suggested.  It's quick
> and easy.
>
> Note that a disk need not be "partitioned" to show up under
> /proc/partitions.  (I say, since I have a penchant for pointing it
> out.)  In fact, if you're going to use these volumes as "LVM fodder"
> (that is, let them be PVs in a volume group), then you will find it
> better to *not* partition them.  Just stamp the requisite LVM magic on
> "sda" (instead of "sda1") and proceed.
>
>
> --
>
> Rick Troth
> Senior Software Developer
>
> Velocity Software Inc.
> Mountain View, CA 94041
> Main: (877) 964-8867
> Direct: (614) 594-9768
> ri...@velocitysoftware.com 
>
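
Along the lines of what Rick describes, a minimal sketch of using a whole,
unpartitioned SCSI disk as a PV (device and VG names are just examples):

pvcreate /dev/sdb            # stamp the LVM label on the whole device, no partition
vgcreate vgscsi /dev/sdb     # and use it as LVM fodder
pvs                          # the whole disk now shows up as a PV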

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Rick Troth


Re: SCSI/FCP disk size

2012-10-10 Thread Peter Oberparleiter

On 10.10.2012 01:32, Thang Pham wrote:

Is there a way to find out the size of a native SCSI device attached via
FCP channel?  I do not see lszfcp or lsscsi having an option that lets you
see the size of the disk you have attached to a VM.


You can view the usable size of any block device (not just SCSI) using 
the following command:


# cat /proc/partitions
major minor  #blocks  name
   8        0   10485760 sda
   8        1   10485743 sda1

Note: the #blocks is the size in 1k-blocks

Newer distributions provide a tool called 'lsblk' which also shows disk 
and partition sizes.


# lsblk
NAME   MAJ:MIN RM  SIZE RO MOUNTPOINT
sda      8:0    0   10G  0
└─sda1   8:1    0   10G  0 /


Regards,
  Peter Oberparleiter

--
Peter Oberparleiter
Linux on System z Development - IBM Germany

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Steffen Maier

On 10/10/2012 02:00 AM, Thang Pham wrote:

That works, thanks.

From:   Raymond Higgs/Poughkeepsie/IBM@IBMUS
Date:   10/09/2012 07:56 PM

It is in /var/log/messages:

Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.656933] scsi 1:0:26:1082146832: 
Direct-Access IBM  2107900  36.5 PQ: 0 ANSI: 5
Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.657047] sd 1:0:26:1082146832: 
Attached scsi generic sg21 type 0
Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.659692] sd 1:0:26:1082146832: 
[sdv] 4194304 512-byte logical blocks: (2.14 GB/2.00 GiB)
Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.660473] sd 1:0:26:1082146832: 
[sdv] Write Protect is off
Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.660839] sd 1:0:26:1082146832: 
[sdv] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.663606]  sdv: unknown partition 
table
Oct  9 19:43:21 4e1d-laplace-48 kernel: [23044.665817] sd 1:0:26:1082146832: 
[sdv] Attached SCSI disk

Or send the SCSI read capacity command like this:

root@4e1d-laplace-48.1:sg_readcap /dev/sdv
Read Capacity results:
Last logical block address=4194303 (0x3fffff), Number of blocks=4194304
Logical block length=512 bytes
Hence:
Device size: 2147483648 bytes, 2048.0 MiB, 2.15 GB



Linux on 390 Port  wrote on 10/09/2012 07:32:19 PM:


From: Thang Pham/Poughkeepsie/IBM@IBMUS
Date: 10/09/2012 07:40 PM



Is there a way to find out the size of a native SCSI device attached via
FCP channel?  I do not see lszfcp or lsscsi having an option that lets you
see the size of the disk you have attached to a VM.


The size of a scsi (disk) device is the same as the block device size. 
This is not specific to Linux on System z nor zfcp nor SCSI. It's just 
common block subsystem code.


Depending on what kind of code you want to process this information with;
for scripting you could use /sys/block//size (and adhere to 
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=Documentation/sysfs-rules.txt;hb=HEAD) 
(or parse the somewhat older /proc/partitions);
for C code you could just use the ioctl BLKGETSIZE64 from 
include/linux/fs.h [see e.g. the source of 'blockdev --getsz ' as 
an example].
All other suggestions so far basically boil down to this in the end, so 
I'd probably prefer to use the user space interface directly.
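
As a small illustration of the scripting variant (assuming the disk from the
log above, sdv; /sys/block/<device>/size counts 512-byte sectors):

sectors=$(cat /sys/block/sdv/size)
echo "sdv: $(( sectors * 512 )) bytes"
# or, equivalently, the BLKGETSIZE64 ioctl as wrapped by blockdev:
blockdev --getsize64 /dev/sdv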


This requires that you attached the LUN to Linux previously. That already
included LUN probing in the kernel, which did INQUIRY and READ CAPACITY (if
the device is a disk, e.g.) among other things, so there is no need to resend
those SCSI commands explicitly.


I'm not quite sure what you mean in your second sentence with regard to 
a disk attached to a VM.
To a VM as a z/VM userid, i.e. attached to the virtual machine by the 
hypervisor? AFAIK this means EDEV under z/VM. I don't know of any other 
way of attaching a scsi disk to a VM (usually users only attach the FCP 
host bus adapter to a VM).

To a VM as in guest operating system? Then the previous paragraphs apply.
Or would you like to get the lun size without even attaching them to 
Linux or the VM? If so, then this depends on the storage type since 
you'd have to use out of band mechanisms (i.e. non-scsi) to query the 
storage target server.


Steffen Maier

Linux on System z Development

IBM Deutschland Research & Development GmbH
Chairwoman of the Supervisory Board: Martina Koederitz
Managing Director: Dirk Wittkopp
Registered office: Böblingen
Commercial register: Amtsgericht Stuttgart, HRB 243294

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/


Re: SCSI/FCP disk size

2012-10-10 Thread Michael MacIsaac
fdisk can help.

Here is a little hack I named "lunsizes" that assumes up to 26 mounted
LUNs and "friendly names" (/dev/mapper/mpath[a-z]):

#!/bin/bash
# find LUNs in /dev/mapper and list in bytes and GiB
ls /dev/mapper/mpath[a-z] >/dev/null 2>&1
if [ $? != 0 ]; then # no LUNs found
  echo "No LUNs found in /dev/mapper/mpath*"
  exit 1
fi

echo -e "LUN   \tBytes  \t~GiB"
for nextLUN in /dev/mapper/mpath[a-z]; do
  bytes=`fdisk -l $nextLUN 2>/dev/null | grep "Disk $nextLUN" | awk '{print $5}'`
  let GB=bytes/1024/1024/1024
  echo -e "$nextLUN \t$bytes \t$GB"
done

"Mike MacIsaac" 



From: Thang Pham/Poughkeepsie/IBM@IBMUS
To: LINUX-390@vm.marist.edu
Date: 10/09/2012 07:33 PM
Subject: [LINUX-390] SCSI/FCP disk size
Sent by: Linux on 390 Port 



Hello List,

Is there a way to find out the size of a native SCSI device attached via
FCP channel?  I do not see lszfcp or lsscsi having an option that lets you
see the size of the disk you have attached to a VM.

Thanks,
Thang Pham

--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or
visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/




--
For LINUX-390 subscribe / signoff / archive access instructions,
send email to lists...@vm.marist.edu with the message: INFO LINUX-390 or visit
http://www.marist.edu/htbin/wlvindex?LINUX-390
--
For more information on Linux on System z, visit
http://wiki.linuxvm.org/