Expired directories with DIRMC and restore

2007-01-26 Thread L'Huillier, Denis (GTI)
I ran a test backing up a directory structure with files and
subdirectories, binding the files to a reto=20 management class and the
directories (via DIRMC) to a reto=0 management class.  After the backup I
deleted the data and re-ran the incremental backup to expire the objects
from TSM.  I confirmed the objects were now INACTIVE_VERSION in the
backups table, then ran expiration to purge them from the DB (reto=0).
Checking the backups table again, I could see that no directory objects
existed, just the files.

Next, I launched the GUI to test whether I could expand the directory tree
and restore the data.  Both worked.  The path was preserved on the restore
as well.

So my confusion is: what is the directory structure for?  I would have
expected not to be able to expand the tree in the GUI or restore the
data to its original location.

We are looking to mix long-term data with short-term data in our TSM
servers and want to prevent all directory objects from being bound to
the management class with the longest retention by using the DIRMC
option.  This test was to see the implications of not having a directory
structure, which surprisingly had no effect...  Am I missing something?
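
For anyone reproducing this, here is a minimal sketch of the kind of setup
being tested.  The domain, policy set, and class names are hypothetical,
and the copy group values are just one way to get reto=0 behavior:

```
* Server side: a short-term management class for directories (names are examples)
define mgmtclass MYDOMAIN MYPOLSET DIRSHORT
define copygroup MYDOMAIN MYPOLSET DIRSHORT type=backup verexists=1 verdeleted=0 retextra=0 retonly=0
activate policyset MYDOMAIN MYPOLSET

* Client side (dsm.sys server stanza on AIX): bind all directory objects to it
DIRMC DIRSHORT
```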

Thanks.


If you are not an intended recipient of this e-mail, please notify the sender, 
delete it and do not read, act upon, print, disclose, copy, retain or 
redistribute it. Click here for important additional terms relating to this 
e-mail. http://www.ml.com/email_terms/



Re: Domain ALL-LOCAL client option set

2006-11-07 Thread L'Huillier, Denis (GTI)
I found my problem.

It looks like it's an issue with my TSM client level:

http://www-1.ibm.com/support/docview.wss?rs=1019&context=SSSQWC&context=SSGSG7&q1=domain&q2=ALL-LOCAL&q3=option+set&q4=IC43694&uid=swg1IC43694&loc=en_US&cs=utf-8&lang=en


-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Richard Sims
Sent: Tuesday, November 07, 2006 2:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Fwd: Domain ALL-LOCAL client option set


My error... Cancel that - additive options are not affected by the
Force parameter.

Begin forwarded message:

> From: Richard Sims <[EMAIL PROTECTED]>
> Date: November 7, 2006 1:56:10 PM EST
> To: "ADSM: Dist Stor Manager" 
> Subject: Re: Domain ALL-LOCAL client option set
>
> On Nov 7, 2006, at 12:35 PM, L'Huillier, Denis (GTI) wrote:
>
>> When using ALL-LOCAL in the domain statement of a client option
>> set it
>> doesn't seem to work.
>
> Referring to topic "Using options with commands" in the client
> manual, what value did you use for the Force operand of DEFine
> CLIENTOpt?
>
>Richard Sims





Domain ALL-LOCAL client option set

2006-11-07 Thread L'Huillier, Denis (GTI)
When using ALL-LOCAL in the domain statement of a client option set it
doesn't seem to work.

When I issue "dsmc q opt" from the client the domain states "ALL-LOCAL"
instead of the translated local filesystems.

The system is AIX.  The local dsm.opt file contains "DOMAIN  /"  to back
up the root directory only.  
I want to use client option sets to control all other file systems
backed up.  Since domains are additive in a client option set I would
expect ALL-LOCAL to work.

Is this normal behavior or a bug?  I haven't been able to find any
documentation on using ALL-LOCAL in client option sets.
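
For context, here is a sketch of the server-side definitions involved; the
option set and node names are hypothetical.  Note that since DOMAIN is an
additive option, the Force parameter should not matter for it:

```
* Define an option set and add the domain entry (names are examples)
define cloptset AIX_STD description="Standard AIX backup options"
define clientopt AIX_STD domain "all-local"

* Attach the option set to the node
update node MYNODE cloptset=AIX_STD
```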

Thanks.





Re: BACKUP STGPOOL or MOVE DATA Error

2006-02-09 Thread L'Huillier, Denis (GTI)
IONOCOPY "on" is what you want.  The fact that you had this option "off"
at one time, as I did, exposes you to the data corruption problem.  This
is kind of good news, since there is a PTF that can be applied to recover
the data.  There is also a PTF available which includes a fix for the
ANR9999D errors which prompted the use of IONOCOPY in the options file in
the first place.

I recommend you contact IBM support immediately.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Gee, Norman
Sent: Thursday, February 09, 2006 11:59 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: BACKUP STGPOOL or MOVE DATA Error


I was first told to put IONOCOPY OFF in my server options file to
resolve one problem, and then later I was told to remove the option
because it caused other problems.  When I query the IONOCOPY option it
now shows ON.  Your statement is that if it is enabled, remove it from
the options file.  Is 'ON' enabled or disabled?  Sorry, the query shows
on or off, not enabled or disabled.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
L'Huillier, Denis (GTI)
Sent: Thursday, February 09, 2006 7:52 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: BACKUP STGPOOL or MOVE DATA Error

Contact support.  I had a similar situation.  Here's the synopsis:

The data is corrupt but may be recoverable via a PTF.  New storage pools
defined with 5.3 will verify the data before moving to a different pool.
If the data is corrupt it will not move the data so recovery efforts can
be initiated.

There is a known problem which you may be experiencing and is
recoverable.

All your disk pools should have the "VERIFYDATA=True" option setting,
which can only be viewed via "show sspool".
If "VERIFYDATA=False", then "upd stg x VERIFYDATA=YES" should be
issued.  This option will give you the error below instead of moving
corrupt data to another pool.

Issue "q opt ionocopy" from the server.  If it is enabled, remove it
immediately from your server options file.





Re: BACKUP STGPOOL or MOVE DATA Error

2006-02-09 Thread L'Huillier, Denis (GTI)
Contact support.  I had a similar situation.  Here's the synopsis:

The data is corrupt but may be recoverable via a PTF.  New storage pools
defined with 5.3 will verify the data before moving to a different pool.
If the data is corrupt it will not move the data so recovery efforts can
be initiated.

There is a known problem which you may be experiencing and is
recoverable.

All your disk pools should have the "VERIFYDATA=True" option setting,
which can only be viewed via "show sspool".
If "VERIFYDATA=False", then "upd stg x VERIFYDATA=YES" should be
issued.  This option will give you the error below instead of moving
corrupt data to another pool.

Issue "q opt ionocopy" from the server.  If it is enabled, remove it
immediately from your server options file.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
TSM User
Sent: Thursday, February 09, 2006 8:00 AM
To: ADSM-L@VM.MARIST.EDU
Subject: BACKUP STGPOOL or MOVE DATA Error


Hi all,

When executing the commands BACKUP STGPOOL or MOVE DATA from a disk pool
to tape, the error messages shown further down appear.

TSM has been working for a year; these processes run routinely and began
failing last week.

There are some similar cases on the support page, but they do not apply
to this one.

Does anybody know what could be happening?

Thanks


 TSM Server 5.2.4



02/08/2006 18:31:30   ANR1330E The server has detected possible corruption in an
                      object being restored or moved. The actual values for the
                      incorrect frame are: magic  hdr version  hdr length
                      sequence number  data length  server id  segment id
                      crc . (SESSION: 1954, PROCESS: 25)

02/08/2006 18:31:30   ANR1331E Invalid frame detected.  Expected magic 53454652
                      sequence number 0004 server id  segment id 01771066.
                      (SESSION: 1954, PROCESS: 25)

02/08/2006 18:31:30   ANR2017I Administrator ADMIN issued command: QUERY PROCESS
                      (SESSION: 1969)

02/08/2006 18:31:44   ANR1330E The server has detected possible corruption in an
                      object being restored or moved. The actual values for the
                      incorrect frame are: magic  hdr version  hdr length
                      sequence number  data length  server id  segment id




Re: Inclexcl for root file system

2006-02-08 Thread L'Huillier, Denis (GTI)
I was actually trying to avoid the use of a long domain statement.  This
adds complexity, and I'm looking to simplify.

In real life there are well over 20 file systems on this particular
server, of which many are required.  We have strict SLAs, and per our
clients we do not back up what they don't need.  We also have a
usage-based chargeback system in place, which gives us a lot of incentive
to minimize the amount of non-business data being backed up.  We run daily
comparison reports against dumps of the filespace table, which alert us if
a new filespace has been added to or removed from a client.  The "exclude
/.../*" allows TSM to recognize a new filespace without actually backing
up the data.

If domain is my only option then I may have to remove the exclude
/.../* statement.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of
Thomas Denier
Sent: Tuesday, February 07, 2006 4:42 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Inclexcl for root file system


>From: "L'Huillier, Denis (GTI)" <[EMAIL PROTECTED]>
>
>I'm trying to include the root "/" file system and exclude everything
>else not previously matched in my inclexcl file.
>
>Example...
>File systems:
>/
>/data
>/data1
>/data2
>/dump
>/junk
>
>Inclexcl:
>
>Exclude /.../*   <-- Here I want to exclude everything not explicitly
>stated below which would include "/junk" and any future file system.
>Include /   <-- Here I want the root file system and all
>subdirectories that are mounted on root.
>Include /data/.../* CLASS3
>Exclude /data/.../*.out
>Include /data1/.../* CLASS2
>Include /data2/.../* CLASS1
>
>The "Include /" line will not get all files and subdirectories that are
>part of the root file system. If I use "Include /.../*" then it negates
>the "Exclude /.../*" above it.

I really don't recommend making exclusion the rule and inclusion the
exception. If the system file population changes and you forget a
necessary update to the include/exclude file, you may fail to back up
data that should be backed up. If inclusion is the rule and exclusion
the exception, a similar mistake might cause you to back up unneeded
data. The latter is normally considered the less serious risk.

If you must go ahead with this approach, I would suggest using the
'domain' option to limit backup coverage to the four file systems
containing the data you care about. You could then have three
excludes like 'exclude /data/.../*' and more specific includes as
in your original plan.
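
A sketch of that suggestion, assuming the four wanted file systems are the
ones from the original post; exact option placement (dsm.sys vs. dsm.opt)
depends on the platform:

```
* Client options: limit backup coverage to the file systems that matter
DOMAIN / /data /data1 /data2

* Inclexcl: specific excludes and management-class bindings
Include /data/.../*  CLASS3
Exclude /data/.../*.out
Include /data1/.../* CLASS2
Include /data2/.../* CLASS1
```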





Inclexcl for root file system

2006-02-07 Thread L'Huillier, Denis (GTI)
I'm trying to include the root "/" file system and exclude everything
else not previously matched in my inclexcl file.

Example... 
File systems:
/
/data
/data1
/data2
/dump
/junk

Inclexcl:

Exclude /.../*   <-- Here I want to exclude everything not explicitly
stated below which would include "/junk" and any future file system.
Include /   <--Here I want the root file system and all
subdirectories that are mounted on root.
Include /data/.../* CLASS3
Exclude /data/.../*.out
Include /data1/.../* CLASS2
Include /data2/.../* CLASS1

The "Include /" line will not get all files and subdirectories that are
part of the root file system.  If I use "Include /.../*" then it negates
the "Exclude /.../*" above it.

Any help?





Backup Storage Pool Stats

2004-09-25 Thread L'Huillier, Denis (IDS ECCS)
When I look at this output:
tsm: TSMPC1>q pr

 Process  Process Description  Status
  Number
 -------  -------------------  ---------------------------------------------
     209  Backup Storage Pool  Primary Pool RMT_PRIM_LO, Copy Pool
                               RMT_COPY_DR, Files Backed Up: 1, Bytes Backed
                               Up: 15,966,520,559, Unreadable Files: 0,
                               Unreadable Bytes: 0. Current Physical File
                               (bytes): 95,210,102,579 Current input volume:
                               F00222. Current output volume: F10339.

It states that the current physical file is 95,210,102,579 bytes.
Does this correlate to a single physical client file, or is it a TSM
server aggregate file made up of many client files?

 

 


Re: Any SANergy users?

2004-03-02 Thread L'Huillier, Denis (IDS ECCS)
Hello,
I'm looking into SANergy as a possible configuration for doing LAN-free
backups to a FILE device class instead of having to go to a physical tape
drive. I notice that SANergy doesn't appear to be a very popular
product... I guess so much so that IBM removed the SANergy section from
the 5.1 Technical Guide redbook, where it was in the 4.1 Technical Guide
redbook.

I don't understand why this technology didn't take off better.  The
ability to do LAN-free backups to disk is a great feature and cost
savings.

The advantages I see for SANergy are:

1. Many more concurrent LAN-free backups, not restricted by the number of
physical tape drives.
2. Ability to specify the size of the FILE device type volume being
created in the FILE device class SANergy storage pool.
3. Cost - I can migrate my FILE SANergy pool to far fewer real physical
tape drives in a sequential access storage pool.
4. Sharing a client's HBA between disk and tape is no longer an issue,
since the FILE device class will use disk-type block sizes in its writes
(not 256K block sizes like 359X).
5. Tape mounts are instantaneous for backups.

In the 4.1 Technical Guide there are only 5 supported configurations.
The TSM server looks like it has to be a Win2K or Solaris server, and
supported clients look to be only AIX or Win2K.  Is this accurate today?
Is Solaris a supported client?  Anybody know where there is an updated
TSM/SANergy support matrix?

I would be very interested in your thoughts and opinions, as well as any
technical documentation you can point me to.

Thanks,
Denis



Schedule prompter not working?

2002-12-06 Thread L'Huillier, Denis
Env: TSM 4.2.2.9 z/OS 1.3 server
TSM 4.2 client Windows NT

I updated a node to a new domain.  Then associated the node with a new schedule in the 
new domain.
The client scheduler (NT) is in prompted mode..

The TSM Server never attempted to contact the node for the backup schedule. (ANR2561I)
I see this ANR2561I message for all other nodes within the schedule except for the one 
I just moved there.
I had to restart the schedule service on the client to get it to work.

Does anybody know why this is?  It doesn't happen every time, just sometimes.

Thanks.



Re: Select problems in 5.1.1 ... again

2002-08-21 Thread L'Huillier, Denis

I recently upgraded from 4.1.X to 4.2.2.0 and had the zero-bytes problem.
Per Tivoli I then went to 4.2.2.9.  The bytes problem is fixed, but the
examined and affected counters are not working now.

 START_TIME: 2002-08-19 12:41:48.00
 END_TIME: 2002-08-19 13:43:40.00
 ACTIVITY: BACKUP
   NUMBER: 107
   ENTITY: DENISL
 COMMMETH: BPX-Tcp/
  ADDRESS: 172.25.73.107:1073
SCHEDULE_NAME: DAILY_1900
 EXAMINED: 41
 AFFECTED: 39
   FAILED: 25
BYTES: 2199357869
 IDLE: 3703
   MEDIAW: 0
PROCESSES: 1
   SUCCESSFUL: YES
  VOLUME_NAME: 
   DRIVE_NAME: 
 LIBRARY_NAME: 
 LAST_USE: 


-Original Message-
From: Paul van Dongen [mailto:[EMAIL PROTECTED]] 
Sent: Wednesday, August 21, 2002 8:53 AM
To: [EMAIL PROTECTED]
Subject: Select problems in 5.1.1 ... again


   Hello all,

   I have just finished (about ten days ago now) an upgrade of TSM 4.2 to 5.1.
Being aware of the zero-bytes problem in the summary table, I upgraded to
5.1.1.2. Now, I don't get zero bytes in my summary entries for backups, but
all my files (examined and affected) counters are being divided by 1000.
Has anyone else seen this problem?

   Here is an example:

Summary table:

   START_TIME: 2002-08-09 19:29:20.00
 END_TIME: 2002-08-09 19:47:19.00
 ACTIVITY: BACKUP
   NUMBER: 8015
   ENTITY: 
 COMMMETH: Tcp/Ip
  ADDRESS: 10.131.64.29:54333
SCHEDULE_NAME: 
 EXAMINED: 40
 AFFECTED: 39
   FAILED: 0
BYTES: 3857289012
 IDLE: 1076
   MEDIAW: 0
PROCESSES: 1
   SUCCESSFUL: YES
  VOLUME_NAME: 
   DRIVE_NAME: 
 LIBRARY_NAME: 
 LAST_USE: 



Tivoli Storage Manager
Command Line Backup/Archive Client Interface - Version 5, Release 1, Level
1.0
(C) Copyright IBM Corporation 1990, 2002 All Rights Reserved.

Node Name: XX
Session established with server YY: AIX-RS/6000
  Server Version 5, Release 1, Level 1.2
  Server date/time: 08/09/02   19:29:20  Last access: 08/09/02   18:59:46


Total number of objects inspected:   40,769
Total number of objects backed up:   39,934
Total number of objects updated:  0
Total number of objects rebound:  0
Total number of objects deleted:  0
Total number of objects expired:  0
Total number of objects failed:   0
Total number of bytes transferred: 3.59 GB
Data transfer time:  221.48 sec
Network data transfer rate:17,012.47 KB/sec
Aggregate data transfer rate:  3,492.59 KB/sec
Objects compressed by:0%
Elapsed processing time:   00:17:58


Thank you all for your help

Paul van Dongen



SQL timestamp not working when upgraded to 4.2.2 for summary tabl e

2002-08-15 Thread L'Huillier, Denis

Hello -
Since I upgraded our 4.1 server to 4.2.2, my SQL query against the summary
table no longer works.
Has anyone run into this problem before?
Here's the query...

/* --- Query Summary Table  */
/* ---   Run as a macro   - */
select cast(entity as varchar(12)) as "Node Name", \
cast(activity as varchar(10)) as Type, \
cast(number as varchar(8)) "Filespace", \
cast(failed as varchar(3)) "Stg", \
cast(affected as decimal(7,0)) as files, \
cast(bytes/1024/1024 as decimal(12,4)) as "Phy_MB", \
cast(bytes/1024/1024 as decimal(12,4)) as "Log_MB" \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
and end_time<=timestamp(current_date, '08:59:59') \
and (activity='BACKUP' or activity='ARCHIVE') \
order by "Node Name"

Here's the output from a 4.1 server:

Node Name     TYPE    Filespace  Stg  FILES  Phy_MB      Log_MB
------------  ------  ---------  ---  -----  ----------  ----------
CENKRS        BACKUP  649        0    26     0.0222      0.0222
CENNTNFS      BACKUP  648        0    15     0.0072      0.0072
RSCH-AS1-P    BACKUP  615        0    90     7.3412      7.3412
RSCH-DB2-P    BACKUP  614        0    43     5.6337      5.6337
RSCH-DB3-P    BACKUP  608        1    0      0.          0.
RSCH-DB3-P    BACKUP  616        0    114    1477.5513   1477.5513
RSCH-FS1-P    BACKUP  611        1    0      0.          0.
RSCH-FS1-P    BACKUP  618        0    97     10.3834     10.3834
RSCH-WS5-P    BACKUP  667        0    29     2.5706      2.5706
RSCH-WS6-P    BACKUP  666        0    35     5.4812      5.4812
TPRSCHHOME01  BACKUP  624        2    0      0.          0.
TPRSCHHOME01  BACKUP  627        0    2467   16412.1675  16412.1675
TPRSCHHOME02  BACKUP  634        1    0      0.          0.
TPRSCHHOME02  BACKUP  637        0    3552   19135.1409  19135.1409

Here's the output from a 4.2 server:

Node Name TYPEFilespace  Stg  FILES  Phy_MB  Log_MB
  --  -  ---  -  --  --
REMEDY2W  BACKUP  3896   0   64  0.  0.

I only get one line back.. There should be one for each node (about 100 nodes on this 
server)

Now, for any of you who are wondering: 'Filespace' and 'Stg' are columns
put in just as placeholders.  We were using 'q occu' to generate
chargeback info.  I needed to generate an SQL query that would look just
like q occu (same columns) so the data could be fed into an existing
program which handled chargeback to the clients.

Regards,

Denis L. L'Huillier
212-647-2168



Re: SQL timestamp not working when upgraded to 4.2.2 for summary tabl e

2002-08-15 Thread L'Huillier, Denis

Hey look what I found...

*
$$4225 Interim fixes delivered by patch 4.2.2.5
$$Patches are cumulative, just like PTFs.  So Interim fixes
$$delivered as "4.2.2.5" include those delivered in previous patches
*
<@>
IC33455 SUMMARY TABLE NOT BEING FULLY UPDATED


And I'm querying the summary table..

Thanks..

-Original Message-
From: PAC Brion Arnaud [mailto:[EMAIL PROTECTED]]
Sent: Thursday, August 15, 2002 11:34 AM
To: [EMAIL PROTECTED]
Subject: Re: SQL timestamp not working when upgraded to 4.2.2 for summary tabl e


Hi Denis,

Just for information : I tested your query on my system (4.2.2.15) and
it worked like a charm (except I had to modify "Node Name" to
"Node_Name")

Did you apply the latest PTF's to get 4.2.2.15 ? Maybe it could help ...
Good luck anyway !

=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
| Arnaud Brion, Panalpina Management Ltd., IT Group |
| Viaduktstrasse 42, P.O. Box, 4002 Basel - Switzerland |
| Phone: +41 61 226 19 78 / Fax: +41 61 226 17 01   |
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=



-Original Message-
From: L'Huillier, Denis [mailto:[EMAIL PROTECTED]]
Sent: Thursday, 15 August, 2002 15:39
To: [EMAIL PROTECTED]
Subject: SQL timestamp not working when upgraded to 4.2.2 for summary
tabl e


Hello -
Since I upgraded our 4.1 server to 4.2.2 my sql query against the
summary table no longer works. Has anyone run into this problem before?
Here's the query...

/* --- Query Summary Table  */
/* ---   Run as a macro   - */
select cast(entity as varchar(12)) as "Node Name", \
cast(activity as varchar(10)) as Type, \
cast(number as varchar(8)) "Filespace", \
cast(failed as varchar(3)) "Stg", \
cast(affected as decimal(7,0)) as files, \
cast(bytes/1024/1024 as decimal(12,4)) as "Phy_MB", \
cast(bytes/1024/1024 as decimal(12,4)) as "Log_MB" \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
and end_time<=timestamp(current_date, '08:59:59') \
and (activity='BACKUP' or activity='ARCHIVE') \
order by "Node Name"

Here's the output from a 4.1 server:

Node Name     TYPE    Filespace  Stg  FILES  Phy_MB      Log_MB
------------  ------  ---------  ---  -----  ----------  ----------
CENKRS        BACKUP  649        0    26     0.0222      0.0222
CENNTNFS      BACKUP  648        0    15     0.0072      0.0072
RSCH-AS1-P    BACKUP  615        0    90     7.3412      7.3412
RSCH-DB2-P    BACKUP  614        0    43     5.6337      5.6337
RSCH-DB3-P    BACKUP  608        1    0      0.          0.
RSCH-DB3-P    BACKUP  616        0    114    1477.5513   1477.5513
RSCH-FS1-P    BACKUP  611        1    0      0.          0.
RSCH-FS1-P    BACKUP  618        0    97     10.3834     10.3834
RSCH-WS5-P    BACKUP  667        0    29     2.5706      2.5706
RSCH-WS6-P    BACKUP  666        0    35     5.4812      5.4812
TPRSCHHOME01  BACKUP  624        2    0      0.          0.
TPRSCHHOME01  BACKUP  627        0    2467   16412.1675  16412.1675
TPRSCHHOME02  BACKUP  634        1    0      0.          0.
TPRSCHHOME02  BACKUP  637        0    3552   19135.1409  19135.1409

Here's the output from a 4.2 server:

Node Name     TYPE    Filespace  Stg  FILES  Phy_MB      Log_MB
------------  ------  ---------  ---  -----  ----------  ----------
REMEDY2W  BACKUP  3896   0   64  0.  0.

I only get one line back.. There should be one for each node (about 100
nodes on this server)

Now, for any of you who are wondering: 'Filespace' and 'Stg' are
columns put in just as placeholders. We were using 'q occu' to
generate chargeback info. I needed to generate an SQL query that would
look just like q occu (same columns) so the data could be fed into an
existing program which handled chargeback to the clients.

Regards,

Denis L. L'Huillier
212-647-2168



Help with select statement

2002-07-18 Thread L'Huillier, Denis

Hello -

I wrote the following select statement (with a lot of plagiarism).

/* --- Query Summary Table  */
select cast(entity as varchar(12)) as "Node Name", \
cast(activity as varchar(10)) as Type, \
cast(affected as decimal(7,0)) as files, \
cast(bytes/1024/1024 as decimal(12,4)) as "Phy_MB" \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
and end_time<=timestamp(current_date, '08:59:59') \
and (activity='BACKUP' or activity='ARCHIVE') \
order by "Node Name"

The problem is, if a node performed 10 backups and 5 archives over the 24-hour period,
there are 15 lines for that node in the output: 10 for backup and 5 for archive.
Is there a way I can sum the affected and bytes columns for a node with
activity=BACKUP, and again for those with activity=ARCHIVE?
Basically, what I want is at most 2 lines per node: one line can be the sum of
affected files and bytes for all BACKUP activities,
and the other line for that node can be the sum of affected files and bytes for all
ARCHIVE activities.

I think I'm in over my head.
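
One possible approach, assuming the TSM server's SQL subset accepts GROUP
BY with SUM (the syntax may need minor adjustment for a given server
level):

```
/* --- Summary per node, at most one line per activity --- */
select cast(entity as varchar(12)) as "Node Name", \
cast(activity as varchar(10)) as Type, \
cast(sum(affected) as decimal(10,0)) as files, \
cast(sum(bytes)/1024/1024 as decimal(12,4)) as "Phy_MB" \
from summary where end_time>=timestamp(current_date -1 days, '09:00:00') \
and end_time<=timestamp(current_date, '08:59:59') \
and (activity='BACKUP' or activity='ARCHIVE') \
group by entity, activity \
order by "Node Name"
```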


Regards,

Denis L. L'Huillier
212-647-2168