Re: TSM and SATA Disk Pools

2004-11-25 Thread Henk ten Have
> The List administrator (Martha McConaghy) can be reached via the List's
> official web page, http://www2.marist.edu/htbin/wlvindex?adsm-l .

   Well, I knew it was a typical "Calling a Richard"

>Richard Sims   in the Pilgrims state (where they landed at Provincetown,
> on the tip of Cape Cod, only later moving on to Plymouth)

   Cheers,
   Henk ten Have  in the Lincoln state (where I landed at Chicago, on the tip 
of Lake Michigan, only later moved on to Champaign.)


Re: TSM and SATA Disk Pools

2004-11-25 Thread Richard Sims
>> If anyone here has an inroad to whomever administers this mailing list,
>> an intervention would be most welcome.
>
>Richard?
>
>Cheers,
>Henk ten Have (a typical "Calling a Richard"...;)

What?...it's not my fault...I was out of the room when it happened, whatever it
was.

The List administrator (Martha McConaghy) can be reached via the List's official
web page, http://www2.marist.edu/htbin/wlvindex?adsm-l .

Given the size of the Monthly FAQ, I would suggest either making it available
through a web site, or splitting the mailing up into several parts, as some
other FAQ distributions do.

   Richard Sims   in the Pilgrims state (where they landed at Provincetown, on
  the tip of Cape Cod, only later moving on to Plymouth)


Re: TSM and SATA Disk Pools

2004-11-25 Thread Henk ten Have
> If anyone here has an inroad to whomever administers this mailing list,
> an intervention would be most welcome.

Richard?

Cheers,
Henk ten Have (a typical "Calling a Richard"...;)


Re: TSM and SATA Disk Pools

2004-11-17 Thread Hart, Charles
Thank you again for the info, Mark.  My head is spinning a bit; I myself kind of
like the KISS mentality when it comes to storage pools, and would have
preferred to stay on FC disk (might still be able to).  Unfortunately, lately I
have not been able to be dedicated to just the TSM env, so I can see where
creating multiple vols wouldn't be that big of a deal if I had dedicated time.

Here are some responses to your questions.

The TSM server I am trying the SATA disks on has 4 disk pools:
Disk_Backup - Approx 200 sessions with 3-4 streams each
Disk Exchange Backup - 20 Exchange servers, single-threaded (need to work with
messaging team to multistream)
Disk Remote Backup - Approx 20 remote Win2k clients with 3-4 sessions apiece.


MaxSessions = 200
MaxSchedSessions = 250

Backup Schedules spread out over 10 schedules and 5 start times.

I will let you know what we end up with. I'm asking our SAN admin about doing
RAID 10 / 50 to see where that takes us... I got spoiled with Symmetrix disk...

Thanks again!



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Mark D. Rodriguez
Sent: Monday, November 15, 2004 4:50 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Charles,

There are some other limiting factors you must consider.  Although you
have 300+ clients, how many do you schedule to back up at the same time?
Even if it is all of them, what are your Maxsessions and MaxSchedsessions
values?  If I remember right you are running on a p630; that box can
probably handle (depending on the amount of memory, # of NICs and # of
processors) up to 200-300 concurrent sessions, but you probably have it
set much lower.  As with all things in life there are practical
limitations; 1200 volumes might seem to be a lot, but I have worked in
large environments with several hundred volumes because that is what the
environment needed!  Another question: are all of these clients going to
one disk pool, and/or are some going straight to tape?

On another note that I did not address in my previous post, the original
topic was about SATA drives and their viability in an ITSM disk pool.  A
couple things to consider here:

   1. They are cheap, so you can afford to have very large disk pools -
      That's a good thing!
   2. SATA drives are typically large capacity (250GB and above) when
      used by IBM, EMC, LSI etc. - This is not so good; see my previous
      post, more drives is better.
   3. SATA drives are usually slower drives, 7200 or 10K rpm, where FC
      drives can be 15K rpm - Another performance hit.
   4. The reliability, i.e. failure rate, is not as good, but this might
      not be as important in an ITSM server as it might be in a
      production DB server.
   5. In order to get good performance out of SATA you need to work a
      little harder, and you probably want to go with RAID 10 or 50 to
      get the best performance/reliability.
   6. If you have to move huge amounts of data on a daily basis with a
      minimal amount of time, i.e. you need the best possible
      performance, then SATA is not your answer!
   7. But if you need large disk pools with reasonable performance at a
      great price, then you're going to love SATA.

Good Luck and let us know how it turns out

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===



Hart, Charles wrote:

>Fantastic read - thank you very much for the info!  Just one of our TSM
>servers has 300+ clients; with collocation and a client Resourceutilization
>setting of 4, we could potentially have to create 1200 volumes on disk?
>
>Regards,
>
>Charles
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
>Mark D. Rodriguez
>Sent: Monday, November 15, 2004 1:58 PM
>To: [EMAIL PROTECTED]
>Subject: Re: TSM and SATA Disk Pools
>
>
>OK, so there seems to be some interest in how to layout disk pools on an
>AIX system using JFS2 instead of raw lv's.  I will try to keep this as
>general as possible so please remember you must make some choices based
>on your particular environment.
>
>* In general I would rather have more small disks than a few large
>  ones, as you will see.  However, this would not apply if the larger disks
>  were 15K rp

Re: TSM and SATA Disk Pools

2004-11-17 Thread Mark D. Rodriguez
Hi,
I have to apologize for the confusion.  I was trying to compile some
general guidelines from several notebooks I have developed over the
years and condense them into a simple email.  I did in fact jump from
disk volumes to file volumes.  ITSM does not lock the disk volume
as you stated, but it does limit the file volumes to a single
connection.  Again, I am sorry if this has caused you any discomfort.
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.
===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===
TSM_User wrote:
I went to IBM and opened a PMR on this today.  I just got off the phone with
support and they said that TSM "DOES NOT" put an exclusive lock on a disk pool
volume for each node session that is sending data.  They said that while a size
estimate of the data being sent is used to allocate an amount of space, it is
by no means a lock on an entire volume, unless of course the size estimate is
larger than the entire volume itself.
Richard Sims <[EMAIL PROTECTED]> wrote:
On Nov 17, 2004, at 2:43 PM, Curtis Stewart wrote:

"Calling a Richard": Definition: Name for a situation where the entire
list is stumped and needs to look to Richard for advice and/or
information
regarding some obscure ITSM function.

And we all look to Andy and Del. :-)
Well, there are some things I specifically know; some whose answers I know
how to look for; and some which are in situational categories. I do stay
out of areas where I have no expertise, such as virtual volumes and SANs.
But in so many questions, it is the information that so many on the List
contribute based upon their own areas of learning that help get answers
so quickly. Kind of like having a giant, networked mind. (Visions of
a weird 1950s sci-fi movie...) I think I honed a lot of skills in the
pre-Internet days, when I had to dig deep for my own answers. (Get the
kids away from the TV and into books.)
Richard Sims



Re: TSM and SATA Disk Pools

2004-11-17 Thread TSM_User
I went to IBM and opened a PMR on this today.  I just got off the phone with
support and they said that TSM "DOES NOT" put an exclusive lock on a disk pool
volume for each node session that is sending data.  They said that while a size
estimate of the data being sent is used to allocate an amount of space, it is
by no means a lock on an entire volume, unless of course the size estimate is
larger than the entire volume itself.


Richard Sims <[EMAIL PROTECTED]> wrote:
On Nov 17, 2004, at 2:43 PM, Curtis Stewart wrote:

> "Calling a Richard": Definition: Name for a situation where the entire
> list is stumped and needs to look to Richard for advice and/or
> information
> regarding some obscure ITSM function.

And we all look to Andy and Del. :-)
Well, there are some things I specifically know; some whose answers I know
how to look for; and some which are in situational categories. I do stay
out of areas where I have no expertise, such as virtual volumes and SANs.
But in so many questions, it is the information that so many on the List
contribute based upon their own areas of learning that help get answers
so quickly. Kind of like having a giant, networked mind. (Visions of
a weird 1950s sci-fi movie...) I think I honed a lot of skills in the
pre-Internet days, when I had to dig deep for my own answers. (Get the
kids away from the TV and into books.)

Richard Sims




Re: TSM and SATA Disk Pools

2004-11-17 Thread Richard Sims
On Nov 17, 2004, at 2:43 PM, Curtis Stewart wrote:
"Calling a Richard": Definition: Name for a situation where the entire
list is stumped and needs to look to Richard for advice and/or
information
regarding some obscure ITSM function.
And we all look to Andy and Del.  :-)
Well, there are some things I specifically know; some whose answers I
know
how to look for; and some which are in situational categories.  I do
stay
out of areas where I have no expertise, such as virtual volumes and
SANs.
But in so many questions, it is the information that so many on the List
contribute based upon their own areas of learning that help get answers
so quickly.  Kind of like having a giant, networked mind.  (Visions of
a weird 1950s sci-fi movie...)  I think I honed a lot of skills in the
pre-Internet days, when I had to dig deep for my own answers.  (Get the
kids away from the TV and into books.)
  Richard Sims


Re: TSM and SATA Disk Pools

2004-11-17 Thread Stapleton, Mark
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On 
Behalf Of Curtis Stewart
>On the lighter side of thingsMay I propose a new ADSM-L 
>mailing list phrase?
>
>"Calling a Richard": Definition: Name for a situation where 
>the entire list is stumped and needs to look to Richard for 
>advice and/or information regarding some obscure ITSM function.
>
> I can see the day when my new ADSM-L term is added 
>to the monthly FAQ. Finally, the fame I've always dreamed 
>about. 

Well, you know, I will add that phrase to the FAQ. Unfortunately, I've
not been able to post the FAQ this month because:

1. The FAQ has reached the 1000-line limit set by the ADSM-L
administrator(s), and
2. The ADSM-L administrator(s) haven't seen fit to respond to my
requests for a resolution to this problem.

If anyone here has an inroad to whomever administers this mailing list,
an intervention would be most welcome.

--
Mark Stapleton ([EMAIL PROTECTED])
Berbee Information Networks
Office 262.521.5627  


Re: TSM and SATA Disk Pools

2004-11-17 Thread Curtis Stewart
On the lighter side of thingsMay I propose a new ADSM-L mailing list
phrase?

"Calling a Richard": Definition: Name for a situation where the entire
list is stumped and needs to look to Richard for advice and/or information
regarding some obscure ITSM function.

 I can see the day when my new ADSM-L term is added to the
monthly FAQ. Finally, the fame I've always dreamed about. 

"Not to call out Richard on this one but I have to ask "Richard, have you
heard of this?"."



Curtis


Re: TSM and SATA Disk Pools

2004-11-17 Thread Rushforth, Tim
The TSM Admin guide has the following info which I assume applies to client
backups:

Concurrent volume access
Disk Volumes: A volume can be accessed concurrently by different operations.
File Volumes: Only one operation can access the volume at a time.

So it depends on whether you are using device type disk or device type file
for your disk volumes.
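The distinction above maps directly to how the pools are defined; a minimal sketch, with all pool names, paths, and sizes invented for illustration:

```
/* random-access DISK pool: volumes allow concurrent access */
DEFINE STGPOOL diskpool DISK
DEFINE VOLUME diskpool /tsm/disk1/vol01.dsm FORMATSIZE=2048

/* sequential FILE pool: one operation per volume at a time */
DEFINE DEVCLASS filedev DEVTYPE=FILE MAXCAPACITY=2G DIRECTORY=/tsm/file
DEFINE STGPOOL filepool filedev MAXSCRATCH=50
```

So "disk volumes" in this thread can mean either kind, and the concurrency behavior follows from which definition was used.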


-Original Message-
From: TSM_User [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 17, 2004 12:49 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools

Mark,
Where did you get the information about "each backup session
(remember a client could have multiple session) opens a volume for
its exclusive use".

I can't find any documentation anywhere to support this.  I have heard other
AIX admins make the same claim.  I have quite a few Windows TSM servers that I
am sure have more client sessions sending data (consumer threads) than I
have disk pool volumes. I can't say I've ever run a q sess to see if the
number of node sessions in a SEND state is greater than the number of
volumes I have, though.


Re: TSM and SATA Disk Pools

2004-11-17 Thread Ben Bullock
Ya,
I'm pretty sure that it is not an "exclusive lock" on AIX
systems either. I have one disk storagepool that only has 1 volume (SSA
disk) in it and it has multiple TSM clients writing to it at the same
time with no issues. Sure there might be some disk contention, but no
failures or long waits on any of those sessions.

Perhaps it's a "round-robin" or a "rotating" lock that migrates
between TSM clients so they all get a share of the I/O on the disk...

Ben


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
TSM_User
Sent: Wednesday, November 17, 2004 11:49 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Mark,
Where did you get the information about "each backup session (remember a
client could have multiple session) opens a volume for its exclusive
use".

I can't find any documentation anywhere to support this.  I have heard
other AIX admins make the same claim.  I have quite a few Windows TSM
servers that I am sure have more client sessions sending data (consumer
threads) than I have disk pool volumes. I can't say I've ever run a q
sess to see if the number of node sessions in a SEND state is greater
than the number of volumes I have, though.

Anyway, if this were true then I would have expected to find a
performance tuning doc discussing this issue.  That is unless this is an
AIX issue.

Not to call out Richard on this one but I have to ask "Richard, have you
heard of this?".

"Mark D. Rodriguez" <[EMAIL PROTECTED]> wrote:
OK, so there seems to be some interest in how to layout disk pools on an
AIX system using JFS2 instead of raw lv's. I will try to keep this as
general as possible so please remember you must make some choices based
on your particular environment.

* In general I would rather have more small disks than a few large ones,
as you will see. However, this would not apply if the larger disks were
15K rpm vs. smaller disks of 10K rpm.

* Creating your hdisks - there are several possibilities here depending
on your environment.
  o Small environments with only a few disks should use JBOD. Obviously
    you give up some safety over running RAID 1, 5 or 10, but small
    environments can't afford this anyway.
  o Mid size and above should use one of the following configs that fits
    their environment the best. If you will use RAID 5, then create
    several small arrays; 4 or 5 disks per array is good, and if you
    have lots of disks you can go as high as 8 per array. If you have a
    very large number of disks, you can use either RAID 0 or 10;
    obviously RAID 10 will give you some disk failure protection, but at
    the cost of 2 x actual space vs. usable space. Again, 4 or 5 disk
    arrays (8 or 10 if RAID 10) will work well, and as before you can go
    larger if you have a very large number of disks to work with.
  o The idea of using small arrays is so that you wind up with as many
    hdisks as possible. I like to have at least 4 or 5, but I have also
    worked in environments with over 50 hdisks, each of which was a RAID
    array.
  o NOTE: This section assumes you are not using any disk
    virtualization. In virtualized environments you could have logically
    created 4 disk arrays but physically they might all be on the same
    set of disks. That situation could cause some performance issues.
    Disk virtualization is way outside the scope of this note.

* Create a VG from all the hdisks above, nothing tricky here.

* Create a JFS2 large-file-enabled file system on each disk. Make sure
the file system consumes the entire hdisk and that it does not span
multiple disks. Any reasonably skilled AIX admin can do this for you. In
regard to the log file for these file systems: for absolute maximum
performance you could dedicate a separate disk to handle the logs, but
in most cases simply selecting "in line" log will do fine.

* NOTE: This is very important: make sure that you add the mount option
of RBRW to each of these file systems. Also, it would help to add this
mount option to the file systems that contain your ITSM DB and LOG. This
option increases the I/O performance and reduces the load on the system.
You will also see a radical reduction in system non-computational memory
usage, which means you can use more memory for DB and LOG pages as well
as for network performance. For a more in-depth discussion of this
option, please refer to the AIX Performance Management Guide.

* Now create the storage pool volumes. The size of these volumes is
somewhat up to you, but I like to make sure that I have at least as many
volumes as I might have backup sessions writing to this disk pool at any
given time. That is because each backup session (remember a client could
have multiple sessions) opens a volume for its exclusive use. Therefore
if I have enough volumes they can all run at once. NOTE: Again this is
very important, when you create your volumes for the storage pool make
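The filesystem and volume steps described above can be sketched as commands; all names, sizes, and the volume count below are illustrative assumptions, not values from the post:

```shell
# AIX side: one JFS2 filesystem per hdisk, inline log, and the
# release-behind read/write (rbrw) mount option; the LV is pinned
# to a single hdisk so the filesystem cannot span disks
mkvg -y tsmvg hdisk2 hdisk3
mklv -t jfs2 -y tsmlv1 tsmvg 512 hdisk2
crfs -v jfs2 -d tsmlv1 -m /tsm/disk1 -a logname=INLINE -a options=rbrw
mount /tsm/disk1

# ITSM side: preformat one disk pool volume per expected concurrent
# backup session (pool name and size hypothetical)
dsmadmc -id admin -password secret \
  "define volume backuppool /tsm/disk1/vol01.dsm formatsize=4096"
```

These are AIX- and TSM-server-specific administrative commands, shown only to make the sequence of steps concrete.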

Re: TSM and SATA Disk Pools

2004-11-17 Thread TSM_User
...if you don't, then get someone that does, because you can cripple a
system if you are not careful. Having said that, you should
definitely adjust the min/max read-ahead values
(j2_minPageReadAhead and j2_maxPageReadAhead) with the ioo command;
a good starting point is 16/128. If you use RBRW on the file
systems you won't need to make changes to minfree and maxfree,
despite what some of the literature says you need to do when you
increase the read-ahead value. The minperm and maxperm parameters
have been talked about a lot on this list, but again, if you are
using the RBRW mount option these values will have a marginal
effect, since most of your non-computational memory usage will be
released immediately (without the use of the LRU). However, it
won't hurt any if you lower maxperm to 60% with the vmo command, so
that you make sure that you have plenty of memory for
computational pages, i.e. ITSM DB and LOG pages as well as network
memory usage.
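The tuning in that paragraph, sketched as AIX commands (the tunable names come from the text; the values are the starting points suggested above, and these commands only exist on AIX):

```shell
# JFS2 sequential read-ahead, at the suggested 16/128 starting point
ioo -o j2_minPageReadAhead=16 -o j2_maxPageReadAhead=128

# cap non-computational (file cache) memory at 60%, leaving room
# for ITSM DB/LOG pages and network buffers
vmo -o maxperm%=60
```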
* One area of tuning that I can't cover here is tuning the path to
your disk and tape drives. There are just too many combinations
possible (SSA, SCSI, FC, iSCSI, etc.) to give any input. However,
it is important that you address the performance issues of these
various communication paths. I will mention a couple of common
problems. Make sure that you don't overload your particular bus
technology, i.e. you can't put 6 LTO1 drives on the same SCSI
bus. So make sure you know the bandwidth of your bus and don't
overload it! Another common mistake is in FC environments: don't
have disk I/O traffic and tape I/O running over the same HBA; this
just causes horrible performance and there is no amount of tuning
that can fix it! You must use separate HBAs and zone your switch
to make sure that traffic stays separate. SSA loops should have
at least 4 initiators, i.e. use at least 2 SSA cards on each loop,
and make sure that the SSA cards are connected as far away from each
other in the loop as possible.
* Some ITSM tunables: for the ITSM DB and LOG pages, make sure you
have set BufPoolSize and LogPoolSize large enough that you are
getting at least a 99% Cache Hit Pct. on the DB and that your Log
Pool Pct. Wait is 0. Your MoveBatchSize and MoveSizeThresh should
be set to the max values; this will help things like migration and
storage pool backups.
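In dsmserv.opt terms, that bullet might look like the fragment below. The two pool sizes are placeholder values to be raised until the 99% cache-hit and zero log-wait targets are met; the Move* values are what I understand the server maxima to be (verify against your own server's documentation):

```
* DB buffer pool, in KB - raise until Cache Hit Pct. >= 99
BUFPoolsize 262144
* Recovery log pool, in KB - raise until Log Pool Pct. Wait stays 0
LOGPoolsize 2048
* Batch-move tunables at their maximum values
MOVEBatchsize 1000
MOVESizethresh 2048
```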

This is a very general list of things you can do, but if you take these
guidelines and apply some common sense about your particular environment
I am sure that you can get very good performance out of your disk/tape
subsystems.

If you have any questions or comments on this, then post them and let's
keep this discussion going.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.

===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===



Wells, William wrote:

>I would be interested in your post.
>
>-Original Message-
>From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]
>Sent: Sunday, November 14, 2004 5:49 PM
>To: [EMAIL PROTECTED]
>Subject: Re: TSM and SATA Disk Pools
>
>
>Charles,
>
>I may be missing something here, but even your numbers out of the
>Symmetrix seem pretty bad. Are you sure you didn't drop a "0"
>somewhere? I have one customer that I set up using SSA drives with JFS2
>filesystems and LTO1 drives and we average between 35 and 40MB/sec. and
>some days as high as 45MB/sec (compression of data plays a large
>factor). Your Symmetrix at 40GB/hr is only 11.11MB/sec! BTW, this is
>with no unusual tuning to the system, since this was more than enough
>performance for their needs. With a little more tuning I could easily
>increase that by 50% and possibly double it if I really tried and that
>is ancient SSA technology. FC technology should be much faster.
>
>I know there are many people who prefer raw lvs for their disk pools,
>but on an AIX system I don't believe it is worth it. I have never had
>anyone show me raw lv numbers on AIX that I could not match (with far
>less hassle) with a good JFS2 configuration. If raw is the way you want
>to go then I wish you luck. However, if you are interested in switching
>to using a JFS2 approach I would be glad to post to the list some simple
>guidelines for configuring your environment to get much better
>performance than you are reporting in your post.
>
>--
>Regards,
>Mark D. Rodriguez
>President MDR Consulting, Inc.
>
>
>===
>MDR Consulting
>The ve

AW: TSM and SATA Disk Pools

2004-11-15 Thread Stefan Holzwarth
Thank you Charles, very good overview.
I have one more thing to mention (only my opinion):
you should try to set up a storage pool hierarchy within the SATA/FC disk pools.

As an idea:

FC disk pool for daily backup --->
(S)ATA disk pool for storing the files as long as the policy allows new
versions (migdelay ~ versions+1) --->
(S)ATA sequential pool for long term (small volumes and collocation per node
to avoid a lot of reclamation after deleting nodes) --->
Not necessarily a tape pool as the last link in the chain
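That chain could be expressed roughly as server commands. All pool, device class, and directory names and all parameter values below are invented for illustration; migdelay would be set from your own policy's versions+1:

```
/* long-term (S)ATA sequential pool: small volumes, collocated by node */
DEFINE DEVCLASS satafile DEVTYPE=FILE MAXCAPACITY=5G DIRECTORY=/tsm/sata
DEFINE STGPOOL sata_seq satafile MAXSCRATCH=200 COLLOCATE=YES

/* (S)ATA disk pool holding files as long as the policy allows */
DEFINE STGPOOL sata_disk DISK NEXTSTGPOOL=sata_seq MIGDELAY=8 MIGCONTINUE=YES

/* FC disk pool for the daily backup window */
DEFINE STGPOOL fc_disk DISK NEXTSTGPOOL=sata_disk
```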

Kind Regards 
Stefan Holzwarth


-Ursprüngliche Nachricht-
Von: Mark D. Rodriguez [mailto:[EMAIL PROTECTED] 
Gesendet: Montag, 15. November 2004 23:50
An: [EMAIL PROTECTED]
Betreff: Re: TSM and SATA Disk Pools


Charles,

There are some other limiting factors you must consider.  Although you
have 300+ clients, how many do you schedule to back up at the same time?
Even if it is all of them, what are your Maxsessions and MaxSchedsessions
values?  If I remember right you are running on a p630; that box can
probably handle (depending on the amount of memory, # of NICs and # of
processors) up to 200-300 concurrent sessions, but you probably have it
set much lower.  As with all things in life there are practical
limitations; 1200 volumes might seem to be a lot, but I have worked in
large environments with several hundred volumes because that is what the
environment needed!  Another question: are all of these clients going to
one disk pool, and/or are some going straight to tape?

On another note that I did not address in my previous post, the original
topic was about SATA drives and their viability in an ITSM disk pool.  A
couple things to consider here:

   1. They are cheap, so you can afford to have very large disk pools -
      That's a good thing!
   2. SATA drives are typically large capacity (250GB and above) when
      used by IBM, EMC, LSI etc. - This is not so good; see my previous
      post, more drives is better.
   3. SATA drives are usually slower drives, 7200 or 10K rpm, where FC
      drives can be 15K rpm - Another performance hit.
   4. The reliability, i.e. failure rate, is not as good, but this might
      not be as important in an ITSM server as it might be in a
      production DB server.
   5. In order to get good performance out of SATA you need to work a
      little harder, and you probably want to go with RAID 10 or 50 to
      get the best performance/reliability.
   6. If you have to move huge amounts of data on a daily basis with a
      minimal amount of time, i.e. you need the best possible
      performance, then SATA is not your answer!
   7. But if you need large disk pools with reasonable performance at a
      great price, then you're going to love SATA.

Good Luck and let us know how it turns out

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===



Hart, Charles wrote:

>Fantastic read - thank you very much for the info!  Just one of our TSM
>servers has 300+ clients; with collocation and a client Resourceutilization
>setting of 4, we could potentially have to create 1200 volumes on disk?
>
>Regards,
>
>Charles
>
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
>Mark D. Rodriguez
>Sent: Monday, November 15, 2004 1:58 PM
>To: [EMAIL PROTECTED]
>Subject: Re: TSM and SATA Disk Pools
>
>
>OK, so there seems to be some interest in how to layout disk pools on an
>AIX system using JFS2 instead of raw lv's.  I will try to keep this as
>general as possible so please remember you must make some choices based
>on your particular environment.
>
>* In general I would rather have more small disks than a few large
>  ones, as you will see.  However, this would not apply if the larger disks
>  were 15K rpm vs. smaller disks of 10K rpm.
>* Creating your hdisks - there are several possibilities here
>  depending on your environment.
>  o Small environments with only a few disks should use JBOD.
>Obviously you give up some safety over running RAID 1, 5 or
>10, but small environments can't afford this anyway.
>  o Mid size and above should use one of the following configs
>that fits their environment the best.  If you will use RAID
>5, then create several small arrays; 4 or 5 disks per array
>is good, and if you have lots of disks you can go as high as
>8 per array.  If you have a very large 

Re: TSM and SATA Disk Pools

2004-11-15 Thread Mark D. Rodriguez
Charles,
There are some other limiting factors you must consider.  Although you
have 300+ clients, how many do you schedule to back up at the same time?
Even if it is all of them, what are your Maxsessions and MaxSchedsessions
values?  If I remember right you are running on a p630; that box can
probably handle (depending on the amount of memory, # of NICs and # of
processors) up to 200-300 concurrent sessions, but you probably have it
set much lower.  As with all things in life there are practical
limitations; 1200 volumes might seem to be a lot, but I have worked in
large environments with several hundred volumes because that is what the
environment needed!  Another question: are all of these clients going to
one disk pool, and/or are some going straight to tape?
On another note that I did not address in my previous post, the original
topic was about SATA drives and their viability in an ITSM disk pool.  A
couple things to consider here:
  1. They are cheap, so you can afford to have very large disk pools -
     That's a good thing!
  2. SATA drives are typically large capacity (250GB and above) when
     used by IBM, EMC, LSI etc. - This is not so good; see my previous
     post, more drives is better.
  3. SATA drives are usually slower drives, 7200 or 10K rpm, where FC
     drives can be 15K rpm - Another performance hit.
  4. The reliability, i.e. failure rate, is not as good, but this might
     not be as important in an ITSM server as it might be in a
     production DB server.
  5. In order to get good performance out of SATA you need to work a
     little harder, and you probably want to go with RAID 10 or 50 to
     get the best performance/reliability.
  6. If you have to move huge amounts of data on a daily basis with a
     minimal amount of time, i.e. you need the best possible
     performance, then SATA is not your answer!
  7. But if you need large disk pools with reasonable performance at a
     great price, then you're going to love SATA.
Good Luck and let us know how it turns out
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.
===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===

Hart, Charles wrote:
Fantastic read - thank you very much for the info!  Just one of our TSM
servers has 300+ clients; with collocation and a client Resourceutilization
setting of 4, we could potentially have to create 1200 volumes on disk?
Regards,
Charles

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Mark D. Rodriguez
Sent: Monday, November 15, 2004 1:58 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools
OK, so there seems to be some interest in how to layout disk pools on an
AIX system using JFS2 instead of raw lv's.  I will try to keep this as
general as possible so please remember you must make some choices based
on your particular environment.
   * In general I would rather have more small disks than a few large
     ones, as you will see.  However, this would not apply if the
     larger disks were 15K rpm vs. smaller disks of 10K rpm.
   * Creating your hdisks - there are several possibilities here
     depending on your environment.
     o Small environments with only a few disks should use JBOD.
       Obviously you give up some safety over running RAID 1, 5 or
       10, but small environments can't afford this anyway.
     o Mid size and above should use one of the following configs
       that fits their environment the best.  If you will use RAID
       5, then create several small arrays; 4 or 5 disks per array
       is good, and if you have lots of disks you can go as high as
       8 per array.  If you have a very large number of disks, you
       can use either RAID 0 or 10; obviously RAID 10 will give
       you some disk failure protection, but at the cost of 2 x
       actual space vs. usable space.  Again, 4 or 5 disk arrays (8
       or 10 if RAID 10) will work well, and as before you can go
       larger if you have a very large number of disks to work with.
     o The idea of using small arrays is so that you wind up with
       as many hdisks as possible.  I like to have at least 4 or
       5, but I have also worked in environments with over 50
       hdisks, each of which was a RAID array.
 o NOTE: This section assumes you are not using any disk
   virtualization.  In virtualized environments you could have
   logically created 4 disk arrays but physically they might
   all be on the same set of disks.  That situation could cause

Re: TSM and SATA Disk Pools

2004-11-15 Thread Hart, Charles
Fantastic read, thank you very much for the info!  Just one of our TSM 
servers has 300+ clients; currently using collocation and a client 
resourceutilization setting of 4, we could potentially have to create 1,200 
volumes on disk?

Regards,

Charles



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Mark D. Rodriguez
Sent: Monday, November 15, 2004 1:58 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


OK, so there seems to be some interest in how to lay out disk pools on an
AIX system using JFS2 instead of raw lv's.  I will try to keep this as
general as possible, so please remember you must make some choices based
on your particular environment.

* In general I would rather have more small disks than a few large
  ones, as you will see.  However, this would not apply if the larger
  disks were 15K rpm vs. smaller disks of 10K rpm.
* Creating your hdisks - there are several possibilities here
  depending on your environment.
  o Small environments with only a few disks should use JBOD.
    Obviously you give up some safety over running RAID 1, 5 or
    10, but small environments can't afford this anyway.
  o Mid size and above should use one of the following configs
    that fits their environment the best.  If you will use RAID
    5, then create several small arrays; 4 or 5 disks per array
    is good, and if you have lots of disks you can go as high as
    8 per array.  If you have a very large number of disks, then
    you can use either RAID 0 or 10; obviously RAID 10 will give
    you some disk failure protection, but at the cost of 2 x
    actual space vs. usable space.  Again, 4 or 5 disk arrays (8
    or 10 if RAID 10) will work well, and as before you can go
    larger if you have a very large number of disks to work with.
  o The idea of using small arrays is so that you wind up with
    as many hdisks as possible.  I like to have at least 4 or
    5, but I have also worked in environments with over 50
    hdisks, each of which was a RAID array.
  o NOTE: This section assumes you are not using any disk
virtualization.  In virtualized environments you could have
logically created 4 disk arrays but physically they might
all be on the same set of disks.  That situation could cause
some performance issues.  Disk virtualization is way outside
the scope of this note.
* Create a VG from all the hdisks above, nothing tricky here.
* Create a JFS2 large-file-enabled file system on each disk.  Make
  sure the file system consumes the entire hdisk and that it does not
  span multiple disks.  Any reasonably skilled AIX admin can do this
  for you.  In regard to the log file for these file systems: for
  absolute maximum performance you could dedicate a separate disk to
  handle the logs, but in most cases simply selecting an inline log
  will do fine.
* NOTE: This is very important: make sure that you add the mount
  option of RBRW to each of these file systems.  Also, it would help
  to add this mount option to the file systems that contain your
  ITSM DB and LOG.  This option increases the I/O performance and
  reduces the load on the system.  You will also see a radical
  reduction in system non-computational memory usage, which means
  you can use more memory for DB and LOG pages as well as for
  network performance.  For a more in-depth discussion of this
  option please refer to the AIX Performance Management Guide.
* Now create the storage pool volumes.  The size of these volumes is
  somewhat up to you, but I like to make sure that I have at least
  as many volumes as I might have backup sessions writing to this
  disk pool at any given time.  That is because each backup session
  (remember a client could have multiple sessions) opens a volume for
  its exclusive use.  Therefore, if I have enough volumes they can
  all run at once.  NOTE: Again, this is very important: when you
  create your volumes for the storage pool, make sure you use a
  round robin approach to using the hdisks, i.e. if you have 10
  hdisks then create the first on hdisk1, the second on hdisk2, the
  third on hdisk3, and so on, so that the 11th would be back on
  hdisk1.  And you must create them in sequential order!  The reason
  for this is that ITSM appears (I have never seen the code nor have
  I had any developers confirm this, although they all agree it
  appears to do this) to use the volumes in the order they were
  created.  Therefore, I am sure that once a backup starts I will
  get all of my hdisks in the game, and the same thing will apply on
  migration.
* Some simple tunable system parameters, please note that w
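[Editor's note: the filesystem-per-hdisk and round-robin volume layout described above can be sketched as a dry-run script. Everything here is hypothetical: the hdisk names, volume group, mount points, counts, and sizes are examples, and the crfs/mklv/dsmadmc invocations are a sketch to be checked against your AIX and TSM levels, not the poster's actual commands. The script only echoes the commands it would run.]

```shell
#!/bin/sh
# Dry-run sketch: one JFS2 filesystem per hdisk (inline log, rbrw mount
# option), then storage pool volumes created round-robin across the
# filesystems, in sequential order.  All names/sizes are illustrative.

HDISKS="hdisk1 hdisk2 hdisk3 hdisk4"   # each assumed to be one RAID array
VOLUMES=12                             # total pool volumes to define
VOLSIZE=25600                          # formatsize in MB (example only)

gen_layout() {
  # One large-file-enabled JFS2 filesystem pinned to each hdisk.
  for d in $HDISKS; do
    echo "mklv -y lv_$d -t jfs2 tsmvg <all LPs of $d> $d"
    echo "crfs -v jfs2 -d lv_$d -m /tsmpool/$d -a logname=INLINE -a options=rbrw"
    echo "mount /tsmpool/$d"
  done

  # Round-robin: volume N lands on hdisk (N mod number-of-disks),
  # emitted in sequential order so ITSM opens them across all spindles.
  set -- $HDISKS
  ndisk=$#
  i=0
  while [ "$i" -lt "$VOLUMES" ]; do
    idx=$(( i % ndisk + 1 ))
    d=$(eval echo "\${$idx}")
    n=$(( i + 1 ))
    echo "dsmadmc define volume DISKPOOL /tsmpool/$d/vol$n.dsm formatsize=$VOLSIZE"
    i=$(( i + 1 ))
  done
}

gen_layout
```

With four hdisks, vol1 goes on hdisk1, vol2 on hdisk2, and vol5 wraps back to hdisk1, matching the ordering the post relies on.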

Re: TSM and SATA Disk Pools

2004-11-15 Thread Mark D. Rodriguez
iterature says you need do when you
 increase the page ahead value.  Minperm and maxperm parameters
 have been talked about a lot on this list, but again if you are
 using the RBRW mount option these values will have a marginal
 effect since most of your non-computational memory usage will be
 released immediately (without the use of the LRU).  However, it
 won't hurt any if you lower maxperm to 60% with the vmo command so
 that you make sure that you have plenty of memory for
 computational pages, i.e. ITSM DB and LOG pages as well as network
 memory usage.
   * One area of tuning that I can't cover here is tuning the path to
 your disks and tape drives.  There are just too many combinations
 possible (SSA, SCSI, FC, iSCSI, etc.) to give any input.  However,
 it is important that you address the performance issues of these
 various communication paths.  I will mention a couple of common
 problems.  Make sure that you don't overload your particular bus
 technology, i.e. you can't put 6 LTO1 drives on the same SCSI
 bus.  So make sure you know the bandwidth of your bus and don't
 overload it!  Another common mistake is in FC environments: don't
 have disk I/O traffic and tape I/O running over the same HBA; this
 just causes horrible performance, and there is no amount of tuning
 that can fix it!  You must use separate HBAs and zone your switch
 to make sure that traffic stays separate.  SSA loops should have
 at least 4 initiators, i.e. use at least 2 SSA cards on each loop,
 and make sure that the SSA cards are connected as far away from
 each other in the loop as possible.
   * Some ITSM tunables: for the ITSM DB and LOG pages, make sure you
 have set BufPoolSize and LogPoolSize large enough that you are
 getting at least a 99% Cache Hit Pct. on the DB and that your Log
 Pool Pct. Wait is 0.  Your MoveBatchSize and MoveSizeThresh should
 be set to the max values; this will help things like migration and
 storage pool backups.
This is a very general list of things you can do, but if you take these
guidelines and apply some common sense about your particular environment
I am sure that you can get very good performance out of your disk/tape
subsystems.
If you have any questions or comments on this, then post them and let's
keep this discussion going.
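[Editor's note: as a concrete starting point, the tunables above might look like the following. Every value here is an assumption for illustration, the option names are from TSM 5.x, and the right sizes depend on your own cache-hit and wait numbers; verify against your server and AIX documentation before applying anything.]

```shell
#!/bin/sh
# Illustrative tuning sketch only.  The vmo command is echoed rather than
# executed, and the server options are written to a sketch file, not to
# the live dsmserv.opt.

# AIX side: cap persistent file-cache memory so DB/LOG pages keep RAM.
echo "vmo -p -o maxperm%=60"

# ITSM server side: candidate dsmserv.opt entries.  Grow BufPoolSize
# until the DB Cache Hit Pct. reaches 99%, and LogPoolSize until
# Log Pool Pct. Wait is 0; the move values are the commonly cited
# TSM 5.x maximums, to be checked against your level.
cat > dsmserv.opt.sketch <<'EOF'
* Sketch of server options discussed above; sizes are examples only.
BUFPoolsize    262144
LOGPoolsize    2048
MOVEBatchsize  1000
MOVESizethresh 2048
EOF
```

After editing the real dsmserv.opt, the effective values can be confirmed with `query option` from an administrative client.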
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.
===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===


Re: SUSPECT: (MSW) Re: TSM and SATA Disk Pools

2004-11-15 Thread PAC Brion Arnaud
Mark,

As we plan to migrate part of our storage pools to SATA disks in the
coming weeks, I would be grateful to get a copy of your guidelines!

TIA.
Cheers.


Arnaud 


**
Panalpina Management Ltd., Basle, Switzerland, CIT Department
Viadukstrasse 42, P.O. Box 4002 Basel/CH
Phone:  +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
Direct: +41 (61) 226 19 78
e-mail: [EMAIL PROTECTED]

**

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Mark D. Rodriguez
Sent: Monday, 15 November, 2004 00:49
To: [EMAIL PROTECTED]
Subject: SUSPECT: (MSW) Re: TSM and SATA Disk Pools

Charles,

I may be missing something here, but even your numbers out of the
Symmetrix seem pretty bad.  Are you sure you didn't drop a "0"
somewhere?  I have one customer that I set up using SSA drives with JFS2
filesystems and LTO1 drives, and we average between 35 and 40MB/sec, and
some days as high as 45MB/sec (compression of data plays a large
factor).  Your Symmetrix at 40GB/hr is only 11.11MB/sec!  BTW, this is
with no unusual tuning to the system, since this was more than enough
performance for their needs.  With a little more tuning I could easily
increase that by 50%, and possibly double it if I really tried, and that
is ancient SSA technology.  FC technology should be much faster.

I know there are many people who prefer raw lv's for their disk pools,
but on an AIX system I don't believe it is worth it.  I have never had
anyone show me raw lv numbers on AIX that I could not match (with far
less hassle) with a good JFS2 configuration.  If raw is the way you want
to go, then I wish you luck.  However, if you are interested in switching
to a JFS2 approach, I would be glad to post to the list some simple
guidelines for configuring your environment to get much better
performance than you are reporting in your post.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education IBM Certified
Advanced Technical Expert, CATE AIX Support and Performance Tuning,
RS6000 SP, TSM/ADSM and Linux Red Hat Certified Engineer, RHCE

===


Hart, Charles wrote:

>Thanks you for the link.. Good info!
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of

>William F. Colwell
>Sent: Friday, November 12, 2004 10:52 AM
>To: [EMAIL PROTECTED]
>Subject: Re: TSM and SATA Disk Pools
>
>
>Charles,
>
>See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open
>
>This was in a recent IBM redbooks newsletter.  It discusses SATA 
>performance and to me it says that the tsm backup diskpool is not a
good use for SATA.
>Sequential volumes on SATA may be ok.
>
>Hope this helps,
>
>Bill
>At 10:21 AM 11/12/2004, you wrote:
>
>
>>Been asking lots of questions lately.  ;-)
>>
>>
>>We recently have put our TSM Disk Backup Pools on Clarrion SATA.
>>   The TSM Server is being presented as 600GB SATA Chunks
>>   Our Aix Admin has put a Raw logical over two 600GB Chunks to 
>>create a 1.2TB Raw Logical Volume
>>
>>Right now we are seeing Tape migrations @ about 4GB in 6hrs, where
before on EMC Symetrix disk we saw 29-40GB per hour.  If anyone would
like to share their TSM SATA Diskpool layout and or tips we would much
appreciate it!!!
>>
>>
>>TSM Env
>>AIX 5.2
>>TSM 5.2.4 (64bit)
>>p630 4x4
>>8x3592 FC Drives
>>
>>Regards,
>>
>>Charles
>>
>>
>
>--
>Bill Colwell
>C. S. Draper Lab
>Cambridge Ma.
>
>
>


Re: TSM and SATA Disk Pools

2004-11-14 Thread Wells, William
I would be interested in your post.

-Original Message-
From: Mark D. Rodriguez [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 14, 2004 5:49 PM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Charles,

I may be missing something here, but even your numbers out of the
Symmetrix seem pretty bad.  Are you sure you didn't drop a "0"
somewhere?  I have one customer that I set up using SSA drives with JFS2
filesystems and LTO1 drives, and we average between 35 and 40MB/sec, and
some days as high as 45MB/sec (compression of data plays a large
factor).  Your Symmetrix at 40GB/hr is only 11.11MB/sec!  BTW, this is
with no unusual tuning to the system, since this was more than enough
performance for their needs.  With a little more tuning I could easily
increase that by 50%, and possibly double it if I really tried, and that
is ancient SSA technology.  FC technology should be much faster.

I know there are many people who prefer raw lv's for their disk pools,
but on an AIX system I don't believe it is worth it.  I have never had
anyone show me raw lv numbers on AIX that I could not match (with far
less hassle) with a good JFS2 configuration.  If raw is the way you want
to go, then I wish you luck.  However, if you are interested in switching
to a JFS2 approach, I would be glad to post to the list some simple
guidelines for configuring your environment to get much better
performance than you are reporting in your post.

--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.


===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE

===


Hart, Charles wrote:

>Thanks you for the link.. Good info!
>
>
>-Original Message-
>From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
>William F. Colwell
>Sent: Friday, November 12, 2004 10:52 AM
>To: [EMAIL PROTECTED]
>Subject: Re: TSM and SATA Disk Pools
>
>
>Charles,
>
>See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open
>
>This was in a recent IBM redbooks newsletter.  It discusses SATA
performance
>and to me it says that the tsm backup diskpool is not a good use for SATA.
>Sequential volumes on SATA may be ok.
>
>Hope this helps,
>
>Bill
>At 10:21 AM 11/12/2004, you wrote:
>
>
>>Been asking lots of questions lately.  ;-)
>>
>>
>>We recently have put our TSM Disk Backup Pools on Clarrion SATA.
>>   The TSM Server is being presented as 600GB SATA Chunks
>>   Our Aix Admin has put a Raw logical over two 600GB Chunks to create
a 1.2TB Raw Logical Volume
>>
>>Right now we are seeing Tape migrations @ about 4GB in 6hrs, where before
on EMC Symetrix disk we saw 29-40GB per hour.  If anyone would like to share
their TSM SATA Diskpool layout and or tips we would much appreciate it!!!
>>
>>
>>TSM Env
>>AIX 5.2
>>TSM 5.2.4 (64bit)
>>p630 4x4
>>8x3592 FC Drives
>>
>>Regards,
>>
>>Charles
>>
>>
>
>--
>Bill Colwell
>C. S. Draper Lab
>Cambridge Ma.
>
>
>


Re: TSM and SATA Disk Pools

2004-11-14 Thread Mark D. Rodriguez
Charles,
I may be missing something here, but even your numbers out of the
Symmetrix seem pretty bad.  Are you sure you didn't drop a "0"
somewhere?  I have one customer that I set up using SSA drives with JFS2
filesystems and LTO1 drives, and we average between 35 and 40MB/sec, and
some days as high as 45MB/sec (compression of data plays a large
factor).  Your Symmetrix at 40GB/hr is only 11.11MB/sec!  BTW, this is
with no unusual tuning to the system, since this was more than enough
performance for their needs.  With a little more tuning I could easily
increase that by 50%, and possibly double it if I really tried, and that
is ancient SSA technology.  FC technology should be much faster.
I know there are many people who prefer raw lv's for their disk pools,
but on an AIX system I don't believe it is worth it.  I have never had
anyone show me raw lv numbers on AIX that I could not match (with far
less hassle) with a good JFS2 configuration.  If raw is the way you want
to go, then I wish you luck.  However, if you are interested in switching
to a JFS2 approach, I would be glad to post to the list some simple
guidelines for configuring your environment to get much better
performance than you are reporting in your post.
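[Editor's note: for reference, the 40GB/hr figure converts to MB/sec as follows, using decimal units (1 GB = 1000 MB) as the post does:]

```shell
# GB/hr -> MB/sec: 40 GB/hr * 1000 MB/GB / 3600 sec/hr
awk 'BEGIN { printf "%.2f MB/sec\n", 40 * 1000 / 3600 }'   # prints 11.11 MB/sec
```

Using binary units (1 GB = 1024 MB) instead would give about 11.38 MB/sec; either way, roughly a third of the ~35MB/sec SSA figure quoted above.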
--
Regards,
Mark D. Rodriguez
President MDR Consulting, Inc.
===
MDR Consulting
The very best in Technical Training and Consulting.
IBM Advanced Business Partner
SAIR Linux and GNU Authorized Center for Education
IBM Certified Advanced Technical Expert, CATE
AIX Support and Performance Tuning, RS6000 SP, TSM/ADSM and Linux
Red Hat Certified Engineer, RHCE
===
Hart, Charles wrote:
Thanks you for the link.. Good info!
-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
William F. Colwell
Sent: Friday, November 12, 2004 10:52 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools
Charles,
See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open
This was in a recent IBM redbooks newsletter.  It discusses SATA performance
and to me it says that the tsm backup diskpool is not a good use for SATA.
Sequential volumes on SATA may be ok.
Hope this helps,
Bill
At 10:21 AM 11/12/2004, you wrote:

Been asking lots of questions lately.  ;-)
We recently have put our TSM Disk Backup Pools on Clarrion SATA.
  The TSM Server is being presented as 600GB SATA Chunks
  Our Aix Admin has put a Raw logical over two 600GB Chunks to create a 
1.2TB Raw Logical Volume
Right now we are seeing Tape migrations @ about 4GB in 6hrs, where before on 
EMC Symetrix disk we saw 29-40GB per hour.  If anyone would like to share their 
TSM SATA Diskpool layout and or tips we would much appreciate it!!!
TSM Env
AIX 5.2
TSM 5.2.4 (64bit)
p630 4x4
8x3592 FC Drives
Regards,
Charles

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.



Re: TSM and SATA Disk Pools

2004-11-14 Thread Hart, Charles
Thank you for the link.  Good info! 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
William F. Colwell
Sent: Friday, November 12, 2004 10:52 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Charles,

See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open

This was in a recent IBM redbooks newsletter.  It discusses SATA performance
and to me it says that the tsm backup diskpool is not a good use for SATA.
Sequential volumes on SATA may be ok.

Hope this helps,

Bill
At 10:21 AM 11/12/2004, you wrote:
>Been asking lots of questions lately.  ;-)
>
>
>We recently have put our TSM Disk Backup Pools on Clarrion SATA.
>The TSM Server is being presented as 600GB SATA Chunks
>Our Aix Admin has put a Raw logical over two 600GB Chunks to create a 
> 1.2TB Raw Logical Volume
>
>Right now we are seeing Tape migrations @ about 4GB in 6hrs, where before on 
>EMC Symetrix disk we saw 29-40GB per hour.  If anyone would like to share 
>their TSM SATA Diskpool layout and or tips we would much appreciate it!!!
>
>
>TSM Env
>AIX 5.2
>TSM 5.2.4 (64bit)
>p630 4x4
>8x3592 FC Drives
>
>Regards,
>
>Charles

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM and SATA Disk Pools

2004-11-14 Thread Hart, Charles
Currently DISK... I've been reading about SATA and configuring a FILE device 
class, but unfortunately the disk was purchased without any thought to how TSM 
uses disk.

Charles 


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Sal Mangiapane
Sent: Friday, November 12, 2004 10:53 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Is that setup as DISK or FILE disk pools?

Sal


Hart, Charles wrote:
> Been asking lots of questions lately.  ;-)
>
>
> We recently have put our TSM Disk Backup Pools on Clarrion SATA.
> The TSM Server is being presented as 600GB SATA Chunks
> Our Aix Admin has put a Raw logical over two 600GB Chunks to create a 
> 1.2TB Raw Logical Volume
>
> Right now we are seeing Tape migrations @ about 4GB in 6hrs, where before on 
> EMC Symetrix disk we saw 29-40GB per hour.  If anyone would like to share 
> their TSM SATA Diskpool layout and or tips we would much appreciate it!!!
>
>
> TSM Env
> AIX 5.2
> TSM 5.2.4 (64bit)
> p630 4x4
> 8x3592 FC Drives
>
> Regards,
>
> Charles
>
>


Re: TSM and SATA Disk Pools

2004-11-12 Thread Sal Mangiapane
Is that setup as DISK or FILE disk pools?
Sal
Hart, Charles wrote:
Been asking lots of questions lately.  ;-)
We recently have put our TSM Disk Backup Pools on Clarrion SATA.
The TSM Server is being presented as 600GB SATA Chunks
Our Aix Admin has put a Raw logical over two 600GB Chunks to create a 
1.2TB Raw Logical Volume
Right now we are seeing Tape migrations @ about 4GB in 6hrs, where before on 
EMC Symetrix disk we saw 29-40GB per hour.  If anyone would like to share their 
TSM SATA Diskpool layout and or tips we would much appreciate it!!!
TSM Env
AIX 5.2
TSM 5.2.4 (64bit)
p630 4x4
8x3592 FC Drives
Regards,
Charles



Re: TSM and SATA Disk Pools

2004-11-12 Thread William F. Colwell
Charles,

See http://www.redbooks.ibm.com/abstracts/tips0458.html?Open

This was in a recent IBM redbooks newsletter.  It discusses SATA performance
and to me it says that the tsm backup diskpool is not a good use for SATA.
Sequential volumes on SATA may be ok.

Hope this helps,

Bill
At 10:21 AM 11/12/2004, you wrote:
>Been asking lots of questions lately.  ;-)
>
>
>We recently have put our TSM Disk Backup Pools on Clarrion SATA.
>The TSM Server is being presented as 600GB SATA Chunks
>Our Aix Admin has put a Raw logical over two 600GB Chunks to create a 
> 1.2TB Raw Logical Volume
>
>Right now we are seeing Tape migrations @ about 4GB in 6hrs, where before on 
>EMC Symetrix disk we saw 29-40GB per hour.  If anyone would like to share 
>their TSM SATA Diskpool layout and or tips we would much appreciate it!!!
>
>
>TSM Env
>AIX 5.2
>TSM 5.2.4 (64bit)
>p630 4x4
>8x3592 FC Drives
>
>Regards,
>
>Charles

--
Bill Colwell
C. S. Draper Lab
Cambridge Ma.


Re: TSM and SATA Disk Pools

2004-11-12 Thread Hart, Charles
Thank you very much; that's our next approach, as we did try 2x600GB LVs with 
better success.

Regards,

Charles 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] Behalf Of
Wheelock, Michael D
Sent: Friday, November 12, 2004 9:56 AM
To: [EMAIL PROTECTED]
Subject: Re: TSM and SATA Disk Pools


Hi,

I have had issues with Fibre disks when they were cut into huge raw lv's
like this.  I am no expert, but I think there is something weird about
TSM's handling of large raw lv's that introduces a serialized step (not
sure exactly, but it seemed to really bottleneck after I added two large
lv's...when I split them, things were fixed without changing any
hardware/raid settings).

Michael Wheelock
Integris Health

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Friday, November 12, 2004 9:22 AM
To: [EMAIL PROTECTED]
Subject: TSM and SATA Disk Pools

Been asking lots of questions lately.  ;-) 


We recently have put our TSM Disk Backup Pools on Clarrion SATA.  
The TSM Server is being presented as 600GB SATA Chunks
Our Aix Admin has put a Raw logical over two 600GB Chunks to
create a 1.2TB Raw Logical Volume

Right now we are seeing Tape migrations @ about 4GB in 6hrs, where
before on EMC Symetrix disk we saw 29-40GB per hour.  If anyone would
like to share their TSM SATA Diskpool layout and or tips we would much
appreciate it!!!


TSM Env
AIX 5.2
TSM 5.2.4 (64bit)
p630 4x4
8x3592 FC Drives

Regards,

Charles 
**
This e-mail may contain identifiable health information that is subject to 
protection 
under state and federal law. This information is intended to be for the use of 
the 
individual named above. If you are not the intended recipient, be aware that 
any 
disclosure, copying, distribution or use of the contents of this information is 
prohibited 
and may be punishable by law. If you have received this electronic transmission 
in 
error, please notify us immediately by electronic mail (reply).
**


Re: TSM and SATA Disk Pools

2004-11-12 Thread Wheelock, Michael D
Hi,

I have had issues with Fibre disks when they were cut into huge raw lv's
like this.  I am no expert, but I think there is something weird about
TSM's handling of large raw lv's that introduces a serialized step (not
sure exactly, but it seemed to really bottleneck after I added two large
lv's...when I split them, things were fixed without changing any
hardware/raid settings).

Michael Wheelock
Integris Health

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Hart, Charles
Sent: Friday, November 12, 2004 9:22 AM
To: [EMAIL PROTECTED]
Subject: TSM and SATA Disk Pools

Been asking lots of questions lately.  ;-) 


We recently have put our TSM Disk Backup Pools on Clarrion SATA.  
The TSM Server is being presented as 600GB SATA Chunks
Our Aix Admin has put a Raw logical over two 600GB Chunks to
create a 1.2TB Raw Logical Volume

Right now we are seeing Tape migrations @ about 4GB in 6hrs, where
before on EMC Symetrix disk we saw 29-40GB per hour.  If anyone would
like to share their TSM SATA Diskpool layout and or tips we would much
appreciate it!!!


TSM Env
AIX 5.2
TSM 5.2.4 (64bit)
p630 4x4
8x3592 FC Drives

Regards,

Charles 
**
This e-mail may contain identifiable health information that is subject to 
protection 
under state and federal law. This information is intended to be for the use of 
the 
individual named above. If you are not the intended recipient, be aware that 
any 
disclosure, copying, distribution or use of the contents of this information is 
prohibited 
and may be punishable by law. If you have received this electronic transmission 
in 
error, please notify us immediately by electronic mail (reply).
**


TSM and SATA Disk Pools

2004-11-12 Thread Hart, Charles
Been asking lots of questions lately.  ;-) 


We recently have put our TSM disk backup pools on CLARiiON SATA.  
The TSM server is being presented with 600GB SATA chunks.
Our AIX admin has put a raw logical volume over two 600GB chunks to create 
a 1.2TB raw logical volume.

Right now we are seeing tape migrations at about 4GB in 6hrs, whereas before on 
EMC Symmetrix disk we saw 29-40GB per hour.  If anyone would like to share their 
TSM SATA diskpool layout and/or tips, we would much appreciate it!!!


TSM Env
AIX 5.2
TSM 5.2.4 (64bit)
p630 4x4
8x3592 FC Drives

Regards,

Charles