Re: LTO5 Tuning

2014-10-30 Thread Steven Harris
Thanks for your interest, Mike.  The SAN guy tells me it's 380 microseconds,
which equates to a distance of about 76 km.
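
A quick sanity check on that conversion, assuming the 380 microseconds is a
one-way figure and the usual propagation speed of light in fibre of roughly
2 x 10^8 m/s (about 200 km per millisecond):

\( d = v\,t \approx 2\times10^{8}\ \mathrm{m/s} \times 380\times10^{-6}\ \mathrm{s} = 76\ \mathrm{km} \)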

Regards

Steve
On 30 Oct 2014 23:50, "Ryder, Michael S"  wrote:

> Steve:
>
> What is the latency on your intersite connection?
>
> Best regards,
>
> Mike, x7942
> RMD IT Client Services
>
> On Thu, Oct 30, 2014 at 8:35 AM, Steven Harris 
> wrote:
>
> > Hi All
> >
> > I have a TSM 6.3.4 server on newish P7 hardware and AIX V7.1. HBAs are
> > all 8Gb.  The SANs behind it are 8Gb or 4Gb, depending on which path they
> > take, as we are in the middle of a SAN upgrade and there is still an old
> > switch in the mix.
> >
> > Disk is XIV behind SVC.  Tape is TS3500 and LTO5.
> >
> > According to the LTO Wikipedia entry, I should be able to get 140MB/sec
> > raw out of the drive.  I have an internal company document that suggests
> > sustained 210MB/sec (compressed) is attainable in the real world.
> >
> > So far my server backs up 500GB per night of DB2 and Oracle databases on
> > to file pools, without deduplication.  Housekeeping then does a
> > single-streamed simultaneous migrate and copy to onsite and offsite
> > tapes.  Inter site bandwidth is 4Gb and I have most of that to myself.
> >
> > That process takes over 5 hours so I'm seeing less than 100MB/sec.
> >
> > Accordingly I started a tuning exercise.  I copied 50GB of my filepool
> > twice to give me a test dataset and started testing, of course when
> > there was no other activity on the TSM box.
> >
> > The data comes off disk at 500MB/sec to /dev/null, so that is not a
> > bottleneck.
> >
> > Copying using dd to tape runs at a peak of 120MB/sec with periods of
> > much lower than that, as measured using nmon's fc stats on the HBAs. I
> > presume some of that slowdown is where the tape reaches its end and has
> > to reverse direction.
> >
> > Elapsed time for 100GB is 18 min, with little variation so average speed
> > is 95MB/sec
> >
> > dd ibs and obs values were varied and ibs=256K obs=1024K seems to give
> > the best result.
> >
> > Elapsed time is very consistent.
> >
> > Copying to a local drive on the same switch blade as the tape HBA or
> > copying across blades made no difference.
> >
> > Copying to a drive at the remote site increased elapsed time by 2
> > minutes, as one would expect with more switches in the path and a longer
> > turnaround time.
> >
> > Tape to tape copy was not noticeably different to disk to tape.
> >
> > Reading from tape to /dev/null was no different.
> >
> > In all cases CPU time was about half of the elapsed time.
> >
> > lsattr on the drives shows that compression is on (this is also the
> > default)
> >
> > The tape FC adapters are set to use the large transfer size.
> >
> > The test was also run using 64KB pages and svmon was used to verify the
> > setting was effective. Again no difference.
> >
> > I'm running out of ideas here.  num_cmd_elements on the hbas is 500 (the
> > default)  I'm thinking of increasing that to 2000, but it will require
> > an outage and hence change control.
> >
> > Does anyone have any ideas, references I could look at or practical
> > advice as to how to get this to perform?
> >
> > Thanks
> >
> > Steve
> >
> > Steven Harris
> > TSM Admin
> > Canberra Australia
> >
>


Re: Monitoring Agent, 2 on a single TSM Server?

2014-10-30 Thread Lee Miller
You can do that on AIX and Linux by installing the code into different
directories.  You can't, however, do that on Windows.

But what issue are you dealing with between the versions?


Lee Miller
Tivoli Storage Manager for System Backup and Recovery Development
and IBM Tivoli Monitoring for Tivoli Storage Manager
Phone: 817-874-7484



From:   "Vandeventer, Harold [OITS]" 
To: ADSM-L@VM.MARIST.EDU
Date:   10/30/2014 03:58 PM
Subject:    [ADSM-L] Monitoring Agent, 2 on a single TSM Server?
Sent by:    "ADSM: Dist Stor Manager" 



Is it possible to install two Monitoring Agents on a single TSM server?

My hope: leave the current remote agent that pushes data from TSM 6.3.x to
the production Monitoring/Reporting/Admin Center server where production
reports are produced.

But, also install a second Remote Monitoring Agent (related to TSM 7.1.1)
on that same TSM 6.3 server and configure it to push data to a new
Monitoring/Reporting server.

This would let me work out the wrinkles in new reporting and continue to
use Remote Agents, not several local agents.

Alternative, I suppose: create a second local Agent on the new Monitoring
server and in time delete it, when I upgrade the Remote Agent to the new
release.

Thanks.


Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services
STE 751-S
910 SW Jackson
(785) 296-0631




Re: TSM 7.1 usage of volumes for dedupe

2014-10-30 Thread Remco Post
> On 30 Oct 2014, at 22:04, Colwell, William F.  wrote:
> 
> Hi Martha,
> 
> I am glad this was useful to you.
> 
> I have not reported this as a bug; I expect they would say working-as-designed,
> so try submitting an RFE.

I've never understood why "working as designed" is a reason to close a call. If
the design is bodged, then it should be fixed. I've been told that even TSM is
designed by people, and I'm sure they sometimes make mistakes.

> 
> - bill
> 
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Martha M McConaghy
> Sent: Thursday, October 30, 2014 10:09 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: TSM 7.1 usage of volumes for dedupe
> 
> Bill,
> 
> I just wanted to let you know how much this information helped.   I was
> able to clear out all the problem volumes and have removed the full LUNs
> from the devclass until there is enough space on them to be used again.
> 
> This situation really seems strange to me.  Why has TSM not been updated
> to handle the out of space condition better?  If it has a command that
> shows how much space is left on the LUN, why can't TSM understand it is
> time to stop allocating volumes on it?  Forcing admins to do manual
> clean up like this just to keep things healthy seems inconsistent with
> how the rest of TSM functions.
> 
> Has anyone ever reported this as a bug?
> 
> Martha
> 
> On 10/22/2014 2:38 PM, Colwell, William F. wrote:
>> Hi Martha,
>> 
>> I see this situation occur when a filesystem gets almost completely full.
>> 
>> Do 'q dirsp ' to check for nearly full filesystems.
>> 
>> The server doesn't fence off a filesystem like this, instead it keeps
>> hammering on it, allocating new volumes.  When it tries to write to a volume
>> and gets an immediate out-of-space error, it marks the volume full so it 
>> won't
>> try to use it again.
>> 
>> I run this sql to find such volumes and delete them -
>> 
>> select 'del v '||cast(volume_name as char(40)), cast(stgpool_name as char(30)), last_write_date -
>>   from volumes where upper(status) = 'FULL' and pct_utilized = 0 and pct_reclaim = 0 order by 2, 3
>> 
>> You should remove such filesystems from the devclass directory list until
>> reclaim has emptied them a little bit.
>> 
>> Hope this helps,
>> 
>> Bill Colwell
>> Draper Lab
>> 
>> 
>> 
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
>> Martha M McConaghy
>> Sent: Wednesday, October 22, 2014 2:23 PM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: Re: TSM 7.1 usage of volumes for dedupe
>> 
>> Interesting.  Seems very similar, except the status of these volumes is
>> "FULL", not "EMPTY".  However, the %reclaimable space is 0.0.
>> 
>> I think this is a bug.  I would expect the volume to leave the pool once
>> it is "reclaimed".  It would be OK with me if it did not. However, since
>> the status is "FULL", it will never be reused. That seems wrong.  If it
>> is going to remain attached to the dedupepool, the status should convert
>> to EMPTY so the file can be reused.  Or, go away altogether so the space
>> can be reclaimed and reused.
>> 
>> In looking at the filesystem on the Linux side (sorry I didn't mention
>> this is running on RHEL), the file exists on /data0, but with no size:
>> 
>> [urmm@tsmserver data0]$ ls -l *d57*
>> -rw--- 1 tsminst1 tsmsrvrs 0 Oct 10 20:22 0d57.bfs
>> 
>> /data0 is 100% utilized, so this file can never grow.  Seems like it
>> should get cleaned up rather than continue to exist.
>> 
>> Martha
>> 
>> On 10/22/2014 1:58 PM, Erwann SIMON wrote:
>>> hi Martha,
>>> 
>>> See if this can apply :
>>> www-01.ibm.com/support/docview.wss?uid=swg21685554
>>> 
>>> Note that I had a situation where Q CONT returned that the volume was empty,
>>> but in reality it wasn't, since it was impossible to delete it (without
>>> discarding data). A select statement against the contents showed some
>>> files. Unfortunately, I don't know how this story finished...
>>> 
>> --
>> Martha McConaghy
>> Marist: System Architect/Technical Lead
>> SHARE: Director of Operations
>> Marist College IT
>> Poughkeepsie, NY  12601
>> 
>> 
> 
> --
> Martha McConaghy
> Marist: System Architect/Technical Lead
> SHARE: Director of Operations
> Marist College IT
> Poughkeepsie, NY  12601
> 

Re: TSM 7.1 usage of volumes for dedupe

2014-10-30 Thread Colwell, William F.
Hi Martha,

I am glad this was useful to you.

I have not reported this as a bug; I expect they would say working-as-designed,
so try submitting an RFE.

- bill

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Martha 
M McConaghy
Sent: Thursday, October 30, 2014 10:09 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM 7.1 usage of volumes for dedupe

Bill,

I just wanted to let you know how much this information helped.   I was
able to clear out all the problem volumes and have removed the full LUNs
from the devclass until there is enough space on them to be used again.

This situation really seems strange to me.  Why has TSM not been updated
to handle the out of space condition better?  If it has a command that
shows how much space is left on the LUN, why can't TSM understand it is
time to stop allocating volumes on it?  Forcing admins to do manual
clean up like this just to keep things healthy seems inconsistent with
how the rest of TSM functions.

Has anyone ever reported this as a bug?

Martha

On 10/22/2014 2:38 PM, Colwell, William F. wrote:
> Hi Martha,
>
> I see this situation occur when a filesystem gets almost completely full.
>
> Do 'q dirsp ' to check for nearly full filesystems.
>
> The server doesn't fence off a filesystem like this, instead it keeps
> hammering on it, allocating new volumes.  When it tries to write to a volume
> and gets an immediate out-of-space error, it marks the volume full so it won't
> try to use it again.
>
> I run this sql to find such volumes and delete them -
>
> select 'del v '||cast(volume_name as char(40)), cast(stgpool_name as char(30)), last_write_date -
>   from volumes where upper(status) = 'FULL' and pct_utilized = 0 and pct_reclaim = 0 order by 2, 3
>
> You should remove such filesystems from the devclass directory list until
> reclaim has emptied them a little bit.
>
> Hope this helps,
>
> Bill Colwell
> Draper Lab
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Martha M McConaghy
> Sent: Wednesday, October 22, 2014 2:23 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: TSM 7.1 usage of volumes for dedupe
>
> Interesting.  Seems very similar, except the status of these volumes is
> "FULL", not "EMPTY".  However, the %reclaimable space is 0.0.
>
> I think this is a bug.  I would expect the volume to leave the pool once
> it is "reclaimed".  It would be OK with me if it did not. However, since
> the status is "FULL", it will never be reused. That seems wrong.  If it
> is going to remain attached to the dedupepool, the status should convert
> to EMPTY so the file can be reused.  Or, go away altogether so the space
> can be reclaimed and reused.
>
> In looking at the filesystem on the Linux side (sorry I didn't mention
> this is running on RHEL), the file exists on /data0, but with no size:
>
> [urmm@tsmserver data0]$ ls -l *d57*
> -rw--- 1 tsminst1 tsmsrvrs 0 Oct 10 20:22 0d57.bfs
>
> /data0 is 100% utilized, so this file can never grow.  Seems like it
> should get cleaned up rather than continue to exist.
>
> Martha
>
> On 10/22/2014 1:58 PM, Erwann SIMON wrote:
>> hi Martha,
>>
>> See if this can apply :
>> www-01.ibm.com/support/docview.wss?uid=swg21685554
>>
>> Note that I had a situation where Q CONT returned that the volume was empty,
>> but in reality it wasn't, since it was impossible to delete it (without
>> discarding data). A select statement against the contents showed some files.
>> Unfortunately, I don't know how this story finished...
>>
> --
> Martha McConaghy
> Marist: System Architect/Technical Lead
> SHARE: Director of Operations
> Marist College IT
> Poughkeepsie, NY  12601
> 
> 

--
Martha McConaghy
Marist: System Architect/Technical Lead
SHARE: Director of Operations
Marist College IT
Poughkeepsie, NY  12601




Monitoring Agent, 2 on a single TSM Server?

2014-10-30 Thread Vandeventer, Harold [OITS]
Is it possible to install two Monitoring Agents on a single TSM server?

My hope: leave the current remote agent that pushes data from TSM 6.3.x to the 
production Monitoring/Reporting/Admin Center server where production reports 
are produced.

But, also install a second Remote Monitoring Agent (related to TSM 7.1.1) on 
that same TSM 6.3 server and configure it to push data to a new 
Monitoring/Reporting server.

This would let me work out the wrinkles in new reporting and continue to use
Remote Agents, not several local agents.

An alternative, I suppose: create a second local Agent on the new Monitoring
server and eventually delete it, when I upgrade the Remote Agent to the new
release.

Thanks.


Harold Vandeventer
Systems Programmer
State of Kansas - Office of Information Technology Services
STE 751-S
910 SW Jackson
(785) 296-0631




Re: exchange 2010

2014-10-30 Thread Remco Post
Hi Del,

Thanks, that's what I'd expected. What I'll do is something along the lines of:

tdpexcc backup * incremental /excludedb="these_go_full_today" /preferdagpassive 
(and more stuff)
tdpexcc backup "these_go_full_today" full /preferdagpassive (and more stuff)

and have the script determine what "these_go_full_today" will be for each day
of the week, based on a simple array. It'll probably be PowerShell, because I
like it and it provides the functionality I'll need, like a simple way to get
the day of the week, plus arrays and so on. I'll look up the URL that you
provided.
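
Roughly, as a PowerShell sketch (the database names below are placeholders,
tdpexcc is assumed to be on the PATH, and the extra options are left out):

# Which databases get their full backup on which day; fill in real names.
$rotation = @{
    'Monday'    = 'DB01,DB02'
    'Tuesday'   = 'DB03,DB04'
    'Wednesday' = 'DB05,DB06'
    'Thursday'  = 'DB07,DB08'
    'Friday'    = 'DB09,DB10'
    'Saturday'  = 'DB11,DB12'
    'Sunday'    = 'DB13,DB14'
}

# Today's full set; everything else just gets an incremental.
$fullToday = $rotation[(Get-Date).DayOfWeek.ToString()]

tdpexcc backup * incremental "/excludedb=$fullToday" /preferdagpassive
tdpexcc backup $fullToday full /preferdagpassive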

Oh, and yes, what Erwann said.

On 30 Oct 2014, at 20:48, Del Hoobler  wrote:

> Hi Remco,
> 
> The PREFERDAGPASSIVE tells DP/Exchange to only back up an active database
> if there
> are no current healthy passive copies of the database. If no healthy
> passive copy
> is available, the backup is made from the active copy.
> 
> Most of the DAG backup best practices and methodology is in the Exchange
> book
> under "Database Availability Group (DAG) backups":
> 
> 
> http://www-01.ibm.com/support/knowledgecenter/SSTG2D_7.1.1/com.ibm.itsm.mail.exc.doc/c_dpfcm_bup_replica_exc.html?lang=en
> 
> 
> The key to "load balance" when setting up the scheduled backup script is
> to have
> a separate invocation of each database.
> 
> For example:
> 
>   TDPEXCC BACKUP DB1 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB2 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB3 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB4 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB5 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
> 
> Then, run this command from each of the Exchange servers at or about the
> same time.
> 
> 
> 
> Del
> 
> 
> 
> "ADSM: Dist Stor Manager"  wrote on 10/30/2014
> 02:42:01 PM:
> 
>> From: Remco Post 
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 10/30/2014 02:42 PM
>> Subject: exchange 2010
>> Sent by: "ADSM: Dist Stor Manager" 
>> 
>> Hi All,
>> 
>> we're sort of struggling to get the full backups of our exchange
>> environment completed. The good news is that the mail admins have
>> decided to split the exchange env. into many small databases, so I
>> was thinking to create a script that backups a number of databases
>> full and the rest incrementally and rotating the selection of
>> databases to be backed up fully each day.
>> 
>> I was just wondering about the preferdagpassive parameter. If I say
>> 'backup db1,db2,db3,db4 full /preferdagpasive' and db3 and db4 are
>> active databases on this DAG node, will they or will they not be
>> backed up? In other words, is there a difference in handling for *
>> vs. explicitly naming the databases to be backed up?
>> 
>> (of course I'll run the exact same command on the other DAG node at
>> more or less the same time to backup those databases that are
>> passive over there).
>> 
>> --
>> 
>> Met vriendelijke groeten/Kind Regards,
>> 
>> Remco Post
>> r.p...@plcs.nl
>> +31 6 248 21 622
>> 

-- 

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: exchange 2010

2014-10-30 Thread Erwann Simon
Hi Del,

It would be nice to have a feature like vmmaxparallel for easy parallel backups
of Exchange databases.


On 30 October 2014 at 20:48:33 CET, Del Hoobler  wrote:
>Hi Remco,
>
>The PREFERDAGPASSIVE tells DP/Exchange to only back up an active
>database
>if there
>are no current healthy passive copies of the database. If no healthy
>passive copy
>is available, the backup is made from the active copy.
>
>Most of the DAG backup best practices and methodology is in the
>Exchange
>book
>under "Database Availability Group (DAG) backups":
>
>
>http://www-01.ibm.com/support/knowledgecenter/SSTG2D_7.1.1/com.ibm.itsm.mail.exc.doc/c_dpfcm_bup_replica_exc.html?lang=en
>
>
>The key to "load balance" when setting up the scheduled backup script
>is
>to have
>a separate invocation of each database.
>
>For example:
>
>   TDPEXCC BACKUP DB1 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB2 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB3 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB4 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>   TDPEXCC BACKUP DB5 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
>
>Then, run this command from each of the Exchange servers at or about
>the
>same time.
>
>
>
>Del
>
>
>
>"ADSM: Dist Stor Manager"  wrote on 10/30/2014
>02:42:01 PM:
>
>> From: Remco Post 
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 10/30/2014 02:42 PM
>> Subject: exchange 2010
>> Sent by: "ADSM: Dist Stor Manager" 
>>
>> Hi All,
>>
>> we're sort of struggling to get the full backups of our exchange
>> environment completed. The good news is that the mail admins have
>> decided to split the exchange env. into many small databases, so I
>> was thinking to create a script that backups a number of databases
>> full and the rest incrementally and rotating the selection of
>> databases to be backed up fully each day.
>>
>> I was just wondering about the preferdagpassive parameter. If I say
>> 'backup db1,db2,db3,db4 full /preferdagpasive' and db3 and db4 are
>> active databases on this DAG node, will they or will they not be
>> backed up? In other words, is there a difference in handling for *
>> vs. explicitly naming the databases to be backed up?
>>
>> (of course I'll run the exact same command on the other DAG node at
>> more or less the same time to backup those databases that are
>> passive over there).
>>
>> --
>>
>>  Met vriendelijke groeten/Kind Regards,
>>
>> Remco Post
>> r.p...@plcs.nl
>> +31 6 248 21 622
>>

-- 
Erwann SIMON
Sent from my Android phone with K-9 Mail. Please excuse my brevity.


Re: exchange 2010

2014-10-30 Thread Del Hoobler
Hi Remco,

The PREFERDAGPASSIVE tells DP/Exchange to only back up an active database if
there are no current healthy passive copies of the database. If no healthy
passive copy is available, the backup is made from the active copy.

Most of the DAG backup best practices and methodology are in the Exchange book
under "Database Availability Group (DAG) backups":


http://www-01.ibm.com/support/knowledgecenter/SSTG2D_7.1.1/com.ibm.itsm.mail.exc.doc/c_dpfcm_bup_replica_exc.html?lang=en


The key to "load balance" when setting up the scheduled backup script is
to have
a separate invocation of each database.

For example:

   TDPEXCC BACKUP DB1 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
   TDPEXCC BACKUP DB2 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
   TDPEXCC BACKUP DB3 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
   TDPEXCC BACKUP DB4 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE
   TDPEXCC BACKUP DB5 FULL /MINIMUMBACKUPINTERVAL=720 /PREFERDAGPASSIVE

Then, run this command from each of the Exchange servers at or about the
same time.



Del



"ADSM: Dist Stor Manager"  wrote on 10/30/2014
02:42:01 PM:

> From: Remco Post 
> To: ADSM-L@VM.MARIST.EDU
> Date: 10/30/2014 02:42 PM
> Subject: exchange 2010
> Sent by: "ADSM: Dist Stor Manager" 
>
> Hi All,
>
> we're sort of struggling to get the full backups of our exchange
> environment completed. The good news is that the mail admins have
> decided to split the exchange env. into many small databases, so I
> was thinking to create a script that backups a number of databases
> full and the rest incrementally and rotating the selection of
> databases to be backed up fully each day.
>
> I was just wondering about the preferdagpassive parameter. If I say
> 'backup db1,db2,db3,db4 full /preferdagpasive' and db3 and db4 are
> active databases on this DAG node, will they or will they not be
> backed up? In other words, is there a difference in handling for *
> vs. explicitly naming the databases to be backed up?
>
> (of course I'll run the exact same command on the other DAG node at
> more or less the same time to backup those databases that are
> passive over there).
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


exchange 2010

2014-10-30 Thread Remco Post
Hi All,

we're sort of struggling to get the full backups of our Exchange environment
completed. The good news is that the mail admins have decided to split the
Exchange environment into many small databases, so I was thinking of creating a
script that backs up a number of databases in full and the rest incrementally,
rotating the selection of databases to be backed up fully each day.

I was just wondering about the preferdagpassive parameter. If I say 'backup 
db1,db2,db3,db4 full /preferdagpassive' and db3 and db4 are active databases on 
this DAG node, will they or will they not be backed up? In other words, is 
there a difference in handling for * vs. explicitly naming the databases to be 
backed up?

(of course I'll run the exact same command on the other DAG node at more or 
less the same time to back up those databases that are passive over there).

-- 

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Re: TSM 7.1 usage of volumes for dedupe

2014-10-30 Thread Martha M McConaghy

Bill,

I just wanted to let you know how much this information helped.   I was
able to clear out all the problem volumes and have removed the full LUNs
from the devclass until there is enough space on them to be used again.

This situation really seems strange to me.  Why has TSM not been updated
to handle the out of space condition better?  If it has a command that
shows how much space is left on the LUN, why can't TSM understand it is
time to stop allocating volumes on it?  Forcing admins to do manual
clean up like this just to keep things healthy seems inconsistent with
how the rest of TSM functions.

Has anyone ever reported this as a bug?

Martha

On 10/22/2014 2:38 PM, Colwell, William F. wrote:

Hi Martha,

I see this situation occur when a filesystem gets almost completely full.

Do 'q dirsp ' to check for nearly full filesystems.

The server doesn't fence off a filesystem like this; instead it keeps
hammering on it, allocating new volumes.  When it tries to write to a volume
and gets an immediate out-of-space error, it marks the volume full so it won't
try to use it again.

I run this sql to find such volumes and delete them -

select 'del v '||cast(volume_name as char(40)), cast(stgpool_name as char(30)), last_write_date -
  from volumes where upper(status) = 'FULL' and pct_utilized = 0 and pct_reclaim = 0 order by 2, 3

You should remove such filesystems from the devclass directory list until
reclaim has emptied them a little bit.

Hope this helps,

Bill Colwell
Draper Lab



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Martha 
M McConaghy
Sent: Wednesday, October 22, 2014 2:23 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM 7.1 usage of volumes for dedupe

Interesting.  Seems very similar, except the status of these volumes is
"FULL", not "EMPTY".  However, the %reclaimable space is 0.0.

I think this is a bug.  I would expect the volume to leave the pool once
it is "reclaimed".  It would be OK with me if it did not. However, since
the status is "FULL", it will never be reused. That seems wrong.  If it
is going to remain attached to the dedupepool, the status should convert
to EMPTY so the file can be reused.  Or, go away altogether so the space
can be reclaimed and reused.

In looking at the filesystem on the Linux side (sorry I didn't mention
this is running on RHEL), the file exists on /data0, but with no size:

[urmm@tsmserver data0]$ ls -l *d57*
-rw--- 1 tsminst1 tsmsrvrs 0 Oct 10 20:22 0d57.bfs

/data0 is 100% utilized, so this file can never grow.  Seems like it
should get cleaned up rather than continue to exist.

Martha

On 10/22/2014 1:58 PM, Erwann SIMON wrote:

Hi Martha,

See if this can apply:
www-01.ibm.com/support/docview.wss?uid=swg21685554

Note that I had a situation where Q CONT returned that the volume was empty,
but in reality it wasn't, since it was impossible to delete it (without
discarding data). A select statement against the contents showed some files.
Unfortunately, I don't know how this story finished...


--
Martha McConaghy
Marist: System Architect/Technical Lead
SHARE: Director of Operations
Marist College IT
Poughkeepsie, NY  12601




--
Martha McConaghy
Marist: System Architect/Technical Lead
SHARE: Director of Operations
Marist College IT
Poughkeepsie, NY  12601


Access to VE api

2014-10-30 Thread Steven Harris
Hi again.  Last one from me and then I'll shut up.

I've been toying for years with the idea of wrapping the TSM API so it
could be accessed from modern scripting languages.  One reason would
be to re-implement something like the old adsmpipe program. Others
would be to clean up leftovers of decommissioned nodes or trump Oracle
DBAs who unilaterally exceed their agreed retention periods.

Lately my thinking has been to do this in Java, as it would then be
available to anything using the JVM, such as JavaScript with Rhino,
JRuby, Jython, even Clojure if that suits. The Java FFI makes calling
the C API possible.

Unless you have been under a rock, you will have heard about Docker.
This technology is sweeping all before it, and I can see it becoming
ubiquitous very quickly.  One of the features of Docker is the Docker
volume, which is like a private file system for the container.

So I was considering how to back up a docker volume, and one way would
be to back it up as a single object from the host OS. That would be an
excellent use case for the TSM VE API... that is, break the docker
volume into 128MB chunks and run a subfile-style delta backup on each,
with full copies being taken once the amount of change or number of
deltas gets too large.
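
Just to illustrate the bookkeeping side of that idea, here is a rough Java
sketch. It deliberately makes no TSM or VE API calls (that is exactly what
isn't exposed) and assumes the docker volume has first been captured as a
single image file; the ChunkScan name, the .digests state file and the
half-changed threshold are all invented for the example.

import java.io.RandomAccessFile;
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: detects which 128MB chunks of a volume image
// have changed since the last run, i.e. which would get a delta backup.
public class ChunkScan {
    static final int CHUNK = 128 * 1024 * 1024;        // the 128MB chunk size

    public static void main(String[] args) throws Exception {
        Path image = Paths.get(args[0]);                // image file of the docker volume
        Path state = Paths.get(args[0] + ".digests");   // chunk digests from the previous run

        List<String> previous = Files.exists(state)
                ? Files.readAllLines(state, StandardCharsets.UTF_8)
                : new ArrayList<String>();
        List<String> current = new ArrayList<String>();
        int changed = 0;

        try (RandomAccessFile f = new RandomAccessFile(image.toFile(), "r")) {
            long remaining = f.length();
            byte[] buf = new byte[CHUNK];
            for (int i = 0; remaining > 0; i++) {
                int n = (int) Math.min(remaining, CHUNK);
                f.readFully(buf, 0, n);
                remaining -= n;

                MessageDigest md = MessageDigest.getInstance("SHA-256");
                md.update(buf, 0, n);
                String digest = new BigInteger(1, md.digest()).toString(16);
                current.add(digest);

                // A changed or brand new chunk is what would get a delta backup.
                if (i >= previous.size() || !previous.get(i).equals(digest)) {
                    changed++;
                    System.out.println("chunk " + i + " changed");
                }
            }
        }

        // Crude policy: once enough chunks have changed, a fresh full copy
        // of the whole volume beats accumulating more deltas.
        if (previous.isEmpty() || changed * 2 > current.size()) {
            System.out.println("change threshold exceeded: take a full copy");
        }

        Files.write(state, current, StandardCharsets.UTF_8);  // remember for next time
    }
}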

However, looking at the API Docs, I see no exposure of the VE API.

Anyone know of plans to make the VE API publicly available?


Regards

Steve

Steven Harris
Dreamer of programming way beyond his ability
Canberra Australia


multi-threaded Solaris backups

2014-10-30 Thread Steven Harris
Hi Again.

A new customer has been signed, and while I don't think I'll get handed
the job, the implications got me thinking.

The customer is a strong Solaris user, and our solution is to coalesce
the workload from multiple Solaris boxes onto two latest-generation
machines, one at each of two datacentres. There are many hundreds of
servers and over 2000 existing CPUs, and this is all going to run on just
two high-spec machines, in Solaris containers.

So the question is: how do we back this up?  The simple way would be to
install a client in every local zone, but that is unwieldy and requires
a lot of management.  It's really not intelligent to work that way today.

Backing up from the global zone would seem to be reasonable, but you'd
need a stack of client instances running to get through the workload,
possibly taking ZFS snapshots for consistency.  If you wanted one node
for each zone, then scheduling would be a problem because each schedule
would need ASNODE specified and that means one schedule per node.
Running multiple client processes would have us back in the bad place we
were with fullVM backups prior to VE, where one stuck backup would
derail a whole stream.

Another issue would be automatically detecting new zones and added
filesystems.

Pretty much what is needed is a re-implementation of VE and the
multi-threaded scheduler for the Solaris Cluster environment.

Now, down here in Oz we are nowhere near the bleeding edge, so someone
else in the world must also be having these issues.

Is anyone aware of any plans for such a beast?
Has anyone wrestled with the problem and is willing to share some insights?

Regards

Steve

Steven Harris
TSM Admin
Canberra Australia


Re: LTO5 Tuning

2014-10-30 Thread Ryder, Michael S
Steve:

What is the latency on your intersite connection?

Best regards,

Mike, x7942
RMD IT Client Services

On Thu, Oct 30, 2014 at 8:35 AM, Steven Harris 
wrote:

> Hi All
>
> I have a TSM 6.3.4 server on newish P7 hardware and AIX V7.1. HBAs are
> all 8Gb.  The SANs behind it are 8Gb or 4Gb, depending on which path they
> take, as we are in the middle of a SAN upgrade and there is still an old
> switch in the mix.
>
> Disk is XIV behind SVC.  Tape is TS3500 and LTO5.
>
> According to the LTO Wikipedia entry, I should be able to get 140MB/sec
> raw out of the drive.  I have an internal company document that suggests
> sustained 210MB/sec (compressed) is attainable in the real world.
>
> So far my server backs up 500GB per night of DB2 and Oracle databases on
> to file pools, without deduplication.  Housekeeping then does a
> single-streamed simultaneous migrate and copy to onsite and offsite
> tapes.  Inter site bandwidth is 4Gb and I have most of that to myself.
>
> That process takes over 5 hours so I'm seeing less than 100MB/sec.
>
> Accordingly I started a tuning exercise.  I copied 50GB of my filepool
> twice to give me a test dataset and started testing, of course when
> there was no other activity on the TSM box.
>
> The data comes off disk at 500MB/sec to /dev/null, so that is not a
> bottleneck.
>
> Copying using dd to tape runs at a peak of 120MB/sec with periods of
> much lower than that, as measured using nmon's fc stats on the HBAs. I
> presume some of that slowdown is where the tape reaches its end and has
> to reverse direction.
>
> Elapsed time for 100GB is 18 min, with little variation so average speed
> is 95MB/sec
>
> dd ibs and obs values were varied and ibs=256K obs=1024K seems to give
> the best result.
>
> Elapsed time is very consistent.
>
> Copying to a local drive on the same switch blade as the tape HBA or
> copying across blades made no difference.
>
> Copying to a drive at the remote site increased elapsed time by 2
> minutes, as one would expect with more switches in the path and a longer
> turnaround time.
>
> Tape to tape copy was not noticeably different to disk to tape.
>
> Reading from tape to /dev/null was no different.
>
> In all cases CPU time was about half of the elapsed time.
>
> lsattr on the drives shows that compression is on (this is also the
> default)
>
> The tape FC adapters are set to use the large transfer size.
>
> The test was also run using 64KB pages and svmon was used to verify the
> setting was effective. Again no difference.
>
> I'm running out of ideas here.  num_cmd_elements on the hbas is 500 (the
> default)  I'm thinking of increasing that to 2000, but it will require
> an outage and hence change control.
>
> Does anyone have any ideas, references I could look at or practical
> advice as to how to get this to perform?
>
> Thanks
>
> Steve
>
> Steven Harris
> TSM Admin
> Canberra Australia
>


LTO5 Tuning

2014-10-30 Thread Steven Harris
Hi All

I have a TSM 6.3.4 server on newish P7 hardware and AIX V7.1. HBAs are
all 8Gb.  The SANs behind it are 8Gb or 4Gb, depending on which path they
take, as we are in the middle of a SAN upgrade and there is still an old
switch in the mix.

Disk is XIV behind SVC.  Tape is TS3500 and LTO5.

According to the LTO Wikipedia entry, I should be able to get 140MB/sec
raw out of the drive.  I have an internal company document that suggests
sustained 210MB/sec (compressed) is attainable in the real world.

So far my server backs up 500GB per night of DB2 and Oracle databases on
to file pools, without deduplication.  Housekeeping then does a
single-streamed simultaneous migrate and copy to onsite and offsite
tapes.  Intersite bandwidth is 4Gb, and I have most of that to myself.

That process takes over 5 hours so I'm seeing less than 100MB/sec.

Accordingly I started a tuning exercise.  I copied 50GB of my filepool
twice to give me a test dataset and started testing, of course when
there was no other activity on the TSM box.

The data comes off disk at 500MB/sec to /dev/null, so that is not a
bottleneck.

Copying using dd to tape runs at a peak of 120MB/sec with periods of
much lower than that, as measured using nmon's fc stats on the HBAs. I
presume some of that slowdown is where the tape reaches its end and has
to reverse direction.

Elapsed time for 100GB is 18 min, with little variation, so the average speed
is 95MB/sec.

dd ibs and obs values were varied and ibs=256K obs=1024K seems to give
the best result.

Elapsed time is very consistent.

Copying to a local drive on the same switch blade as the tape HBA or
copying across blades made no difference.

Copying to a drive at the remote site increased elapsed time by 2
minutes, as one would expect with more switches in the path and a longer
turnaround time.

Tape to tape copy was not noticeably different to disk to tape.

Reading from tape to /dev/null was no different.

In all cases CPU time was about half of the elapsed time.

lsattr on the drives shows that compression is on (this is also the default).

The tape FC adapters are set to use the large transfer size.

The test was also run using 64KB pages and svmon was used to verify the
setting was effective. Again no difference.

I'm running out of ideas here.  num_cmd_elements on the HBAs is 500 (the
default).  I'm thinking of increasing that to 2000, but it will require
an outage and hence change control.

Does anyone have any ideas, references I could look at or practical
advice as to how to get this to perform?

Thanks

Steve

Steven Harris
TSM Admin
Canberra Australia