Re: export question

2019-10-08 Thread Zoltan Forray
I see that all the time where there is a difference between the export and
import totals - sometimes 2-3 objects (or more) difference - sometimes
occupancy totals varying a little.  A long time ago, when I first saw
such a difference, I reran the export with mergefilespaces and it still
didn't change the final import totals.  I just accept that there is some
kind of variance in the counts, and as long as it isn't a HUGE
difference, I don't worry about it.

On Tue, Oct 8, 2019 at 1:16 PM Lee, Gary  wrote:

> I just finished exporting a Linux server, changing hardware and
> decommissioning a TSM server.
>
> The original node on server x has 4,000,798 objects.
> The import on server y has 4,000,797 objects.
> I calculated the number with the following SQL statement:
>
> Select sum(num_files) from occupancy where node_name='node' and
> stgpool_name='offsite'
>
> All storage pools are backed up to offsite.
> Export ended with success, and 0 errors.
>
> I am confused.
> Any ideas?
>


--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
VMware Administrator
Xymon Monitor Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/


export question

2019-10-08 Thread Lee, Gary
I just finished exporting a Linux server, changing hardware and
decommissioning a TSM server.

The original node on server x has 4,000,798 objects.
The import on server y has 4,000,797 objects.
I calculated the number with the following SQL statement:

Select sum(num_files) from occupancy where node_name='node' and
stgpool_name='offsite'

All storage pools are backed up to offsite.
Export ended with success, and 0 errors.

I am confused.
Any ideas?
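One way to narrow down where the missing object lives is to break the sum
down per filespace and object type and compare the two servers side by
side. This is only a sketch: it uses the standard OCCUPANCY table columns
(NUM_FILES, LOGICAL_MB), and 'node'/'offsite' are placeholders for your
node and pool names.

-- Run on both the source and the target server, then diff the output.
-- 'node' and 'offsite' are placeholders.
SELECT filespace_name, type, SUM(num_files) AS files,
       SUM(logical_mb) AS logical_mb
  FROM occupancy
 WHERE node_name = 'node'
   AND stgpool_name = 'offsite'
 GROUP BY filespace_name, type
 ORDER BY filespace_name, type

A single-object difference then usually shows up in exactly one filespace,
which can be inspected further on both sides.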


Re: Question on Replication: unsuccessful replication due to sessions terminated by admin

2019-05-14 Thread Bjørn Nachtwey

Hi,

OK, I need to share some additional information :-)

First, I observed a huge number of dropped packets on one server, so we
ran some tests, including iperf, which showed nearly the full physical
bandwidth. We also took a tcpdump and handed it to our network team, who
didn't see anything conclusive beyond some broadcast traffic that might
explain the dropped packets. Starting from an LACP trunk, we broke the
trunk and tried both NICs separately, swapped the SFP modules between
the ports, and also tested completely different ones.


Then we switched to a dedicated private network for the ISP/TSM
server-to-server traffic only, i.e. both servers and one dedicated
switch -- on this connection no packets were lost or dropped. For this
setup we also took traces, but got no helpful answer.


The last approach was to connect both servers directly by crossing the 
fiber cables, but the problem still remains.


By now, the ordinary client traffic is handled on each server using one
NIC, and the server-to-server traffic runs over this crosslink
connection on a second NIC.


I am puzzled because the export of the nodes ran as expected: it was
suspended when not enough drives were available or when staging was full
on the destination server, but it finished without any problems, for
small nodes as well as large ones (> 10 TB primary data and/or > 10
million files). Even stranger: some replications do work, including
replications of larger nodes. If we had a driver problem, the exports
shouldn't finish and all replications should run into errors, shouldn't
they?



My problem is that I have no idea why the replication fails; the error
messages are not clear to me.



APAR IC92088
(https://www-01.ibm.com/support/docview.wss?uid=swg1IC92088) says it's
caused by network timeouts, but that note is about TSM 6.3 and states
"This problem was fixed". Unfortunately the corresponding technote
(http://www-01.ibm.com/support/docview.wss?uid=swg1642715) isn't
available any more.


Well, we also increased several timeout settings on the target server:

CommTimeOut 600
AdminIdleTimeOut 180
AdminCommTimeOut 180

Now I will increase IdleTimeOut to 600, too.

But given the option "KeepAliveInterval 30", I expect idle connections
to be refreshed every 5 minutes, i.e. within the idle timeouts,
especially with "KeepAliveTime 300" -- on both servers.
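For reference, the options mentioned above sit in the server options file
(dsmserv.opt) on each server. The fragment below is only a sketch echoing
the values under discussion, not a recommendation; as an assumption worth
checking, COMMTIMEOUT and the KEEPALIVE* values are given in seconds while
the *IDLETIMEOUT values are in minutes.

* dsmserv.opt fragment -- sketch echoing the values discussed above.
* COMMTIMEOUT and the KEEPALIVE* values are in seconds,
* the *IDLETIMEOUT values in minutes (check your server level's docs).
COMMTIMEOUT        600
IDLETIMEOUT        600
ADMINIDLETIMEOUT   180
ADMINCOMMTIMEOUT   180
KEEPALIVE          YES
KEEPALIVETIME      300
KEEPALIVEINTERVAL  30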



thanks & best,

Bjørn



Stefan Folkerts wrote:

Did you use something like iperf with a long and heavy load? A bad NIC or
driver might cause this, so it might still be the network.

On Mon, May 13, 2019 at 4:15 PM Bjørn Nachtwey
wrote:


Hi all,

we planned to switch from COPYPOOL to Replication for having a second
copy of the data; therefore we bought a new server that should become
the primary TSM/ISP server and then have the old one hold the
replicates.

what we did:

we started by exporting the nodes, which worked well. But as the
"incremental" exports even took some time, we set up a replication from
old server "A" to the new one "B". For all nodes already exported we set
up the replication vice versa: TSM "B" replicates them to TSM "A".

well, the replication jobs did not finish, some data and files were
missing as long as we replicated using a node group. Now we use
replication for each single node and it works -- for most of them :-(

Replicating the "bad" nodes from "TSM A" to "TSM B", the sessions first
hang for many minutes, sometimes even hours, then they get "terminated -
forced by administrator" (ANR0483W), e.g.:

05/13/2019 15:23:16    ANR2017I Administrator GK issued command:
REPLICATE NODE vsbck  (SESSION: 26128)
05/13/2019 15:23:16    ANR1626I The previous message (message number
2017) was repeated 1 times.
05/13/2019 15:23:16    ANR0984I Process 494 for Replicate Node started
in the BACKGROUND at 15:23:16. (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR2110I REPLICATE NODE started as process 494.
(SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR0408I Session 26184 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR0408I Session 26185 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR0408I Session 26186 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26187 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26188 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26189 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26190 started for server SM283
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session

Re: Question on Replication: unsuccessful replication due to sessions terminated by admin

2019-05-13 Thread Stefan Folkerts
Did you use something like iperf with a long and heavy load? A bad NIC or
driver might cause this, so it might still be the network.

On Mon, May 13, 2019 at 4:15 PM Bjørn Nachtwey 
wrote:

> Hi all,
>
> we planned to switch from COPYPOOL to Replication for having a second
> copy of the data; therefore we bought a new server that should become
> the primary TSM/ISP server and then have the old one hold the
> replicates.
>
> what we did:
>
> we started by exporting the nodes, which worked well. But as the
> "incremental" exports even took some time, we set up a replication from
> old server "A" to the new one "B". For all nodes already exported we set
> up the replication vice versa: TSM "B" replicates them to TSM "A".
>
> well, the replication jobs did not finish; some data and files were
> missing as long as we replicated using a node group. Now we use
> replication for each single node and it works -- for most of them :-(
>
> Replicating the "bad" nodes from "TSM A" to "TSM B", the sessions first
> hang for many minutes, sometimes even hours, then they get "terminated -
> forced by administrator" (ANR0483W), e.g.:
>
> 05/13/2019 15:23:16    ANR2017I Administrator GK issued command:
> REPLICATE NODE vsbck  (SESSION: 26128)
> 05/13/2019 15:23:16    ANR1626I The previous message (message number
> 2017) was repeated 1 times.
> 05/13/2019 15:23:16    ANR0984I Process 494 for Replicate Node started
> in the BACKGROUND at 15:23:16. (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16    ANR2110I REPLICATE NODE started as process 494.
> (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16    ANR0408I Session 26184 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16    ANR0408I Session 26185 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16    ANR0408I Session 26186 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17    ANR0408I Session 26187 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17    ANR0408I Session 26188 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17    ANR0408I Session 26189 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17    ANR0408I Session 26190 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17    ANR0408I Session 26191 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
>
> 05/13/2019 15:24:57    ANR0483W Session 26187 for node SM283
> (Linux/x86_64) terminated - forced by administrator. (SESSION: 26128,
> PROCESS: 494)
>
> on the target server we observe at that time:
>
> 13.05.2019 15:25:51 ANR8213E Socket 34 aborted due to send error; error
> 104.
> 13.05.2019 15:25:51 ANR3178E A communication error occurred during
> session 65294 with replication server TSM.
> 13.05.2019 15:25:51 ANR0479W Session 65294 for server TSM (Windows)
> terminated - connection with server severed.
> 13.05.2019 15:25:51 ANR8213E Socket 34 aborted due to send error; error 32.
>
> => Any idea why this replication aborts?
>
> => why is there a "socket abortion error"?
>
>
> Well, we already opened an SR case and sent lots of logs and traces. As IBM
> suspects a network problem, both servers now use a crosslink connection
> with nothing but NICs/GBICs, plugs and wires.
>
> thanks & best
>
> Bjørn
>
> --
>
> --
> Bjørn Nachtwey
>
> Arbeitsgruppe "IT-Infrastruktur"
> Tel.: +49 551 201-2181, E-Mail:bjoern.nacht...@gwdg.de
>
> --
> Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
> Am Faßberg 11, 37077 Göttingen, URL:http://www.gwdg.de
> Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail:g...@gwdg.de
> Service-Hotline: Tel.: +49 551 201-1523, E-Mail:supp...@gwdg.de
> Geschäftsführer: Prof. Dr. Ramin Yahyapour
> Aufsichtsratsvorsitzender: Prof. Dr. Christian Griesinger
> Sitz der Gesellschaft: Göttingen
> Registergericht: Göttingen, Handelsregister-Nr. B 598
>
> --
> Zertifiziert nach ISO 9001
>
> --
>


Question on Replication: unsuccessful replication due to sessions terminated by admin

2019-05-13 Thread Bjørn Nachtwey

Hi all,

we planned to switch from COPYPOOL to Replication for having a second 
copy of the data; therefore we bought a new server that should become
the primary TSM/ISP server and then have the old one hold the replicates.


what we did:

we started by exporting the nodes, which worked well. But as the 
"incremental" exports even took some time, we set up a replication from 
old server "A" to the new one "B". For all nodes already exported we set 
up the replication vice versa: TSM "B" replicates them to TSM "A".


well, the replication jobs did not finish; some data and files were
missing as long as we replicated using a node group. Now we use 
replication for each single node and it works -- for most of them :-(


Replicating the "bad" nodes from "TSM A" to "TSM B", the sessions first
hang for many minutes, sometimes even hours, then they get "terminated -
forced by administrator" (ANR0483W), e.g.:


05/13/2019 15:23:16    ANR2017I Administrator GK issued command: 
REPLICATE NODE vsbck  (SESSION: 26128)
05/13/2019 15:23:16    ANR1626I The previous message (message number 
2017) was repeated 1 times.
05/13/2019 15:23:16    ANR0984I Process 494 for Replicate Node started 
in the BACKGROUND at 15:23:16. (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR2110I REPLICATE NODE started as process 494. 
(SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR0408I Session 26184 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR0408I Session 26185 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:16    ANR0408I Session 26186 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26187 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26188 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26189 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26190 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
05/13/2019 15:23:17    ANR0408I Session 26191 started for server SM283 
(Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)


05/13/2019 15:24:57    ANR0483W Session 26187 for node SM283 
(Linux/x86_64) terminated - forced by administrator. (SESSION: 26128, 
PROCESS: 494)


on the target server we observe at that time:

13.05.2019 15:25:51 ANR8213E Socket 34 aborted due to send error; error 104.
13.05.2019 15:25:51 ANR3178E A communication error occurred during 
session 65294 with replication server TSM.
13.05.2019 15:25:51 ANR0479W Session 65294 for server TSM (Windows) 
terminated - connection with server severed.

13.05.2019 15:25:51 ANR8213E Socket 34 aborted due to send error; error 32.

=> Any idea why this replication aborts?

=> why is there a "socket abortion error"?


Well, we already opened an SR case and sent lots of logs and traces. As IBM
suspects a network problem, both servers now use a crosslink connection
with nothing but NICs/GBICs, plugs and wires.


thanks & best

Bjørn

--
--
Bjørn Nachtwey

Arbeitsgruppe "IT-Infrastruktur"
Tel.: +49 551 201-2181, E-Mail:bjoern.nacht...@gwdg.de
--
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
Am Faßberg 11, 37077 Göttingen, URL:http://www.gwdg.de
Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail:g...@gwdg.de
Service-Hotline: Tel.: +49 551 201-1523, E-Mail:supp...@gwdg.de
Geschäftsführer: Prof. Dr. Ramin Yahyapour
Aufsichtsratsvorsitzender: Prof. Dr. Christian Griesinger
Sitz der Gesellschaft: Göttingen
Registergericht: Göttingen, Handelsregister-Nr. B 598
--
Zertifiziert nach ISO 9001
--


Re: Re: dsm.sys question

2019-02-27 Thread Bronder, David M
For a pure TSM/SP client solution, beyond what can be done with an inclexcl
file or server-side client option sets as you and Skylar have already noted,
I think you're out of luck.

You probably have several options from the Puppet side, though...

You could use the file_line resource in Puppet's stdlib (carefully, if you're
trying to manage multi-line content such as a server stanza) to manage a
subset of the content of the dsm.sys file.

You could also look at the concat module to generate the dsm.sys file from
component files, as long as you manage all the components in Puppet as well.
(If you don't, it can still work, but can also have unexpected behavior.  It
sounds like this is what you're actually looking for, but I would generally
recommend you manage the "local" content with Puppet as well.  In particular,
with a local-only component, I believe concat would only rebuild the
composite file if one of the Puppet-managed components changed.  But there
are a couple different ways to use concat.)
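As a rough sketch of the concat approach (the paths, the module name "tsm",
and the fragment names are all hypothetical; it assumes the puppetlabs/concat
module is available), the shape of such a setup could be:

# Sketch only: composite dsm.sys rebuilt from Puppet-managed fragments.
concat { '/opt/tivoli/tsm/client/ba/bin/dsm.sys':
  ensure => present,
  owner  => 'root',
  mode   => '0644',
}

# The standard stanzas from the shared template.
concat::fragment { 'dsm.sys-standard':
  target  => '/opt/tivoli/tsm/client/ba/bin/dsm.sys',
  content => template('tsm/dsm.sys.erb'),
  order   => '01',
}

# Host-specific options, also kept under Puppet control as recommended above.
concat::fragment { 'dsm.sys-local':
  target  => '/opt/tivoli/tsm/client/ba/bin/dsm.sys',
  source  => "puppet:///modules/tsm/local/${facts['networking']['hostname']}.sys",
  order   => '02',
}

Keeping the "local" fragment in the module tree (rather than only on the host)
avoids the rebuild-only-on-managed-change caveat mentioned above.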

Or if you're feeling ambitious, you could write an Augeas lens for TSM/SP
config files...  :-)  (If you do, please contribute back to the community!)

Depending on how your existing dsm.sys template is (or could be) defined, and
how you would manage "local" content in Puppet (if you're willing/able to do
so), it should also be possible to get the template to pull in the "local"
bits when and where you need.

(When using Puppet to manage dsm.sys in a SP replication setting, I would
expect things to get even more interesting, since the client wants to manage
some of the dsm.sys content itself.  You can end up with the SP client and
Puppet fighting over the file.)


On 2/27/19 5:11 PM, Skylar Thompson wrote:
> I don't know that this approach would work - Puppet would see that the file
> differs from the deployed file, and would just overwrite it the next time
> the agent runs. Puppet would need to manage dsm.sys completely[1] with
> Rick's desired changes, or those options would have to be taken out of
> dsm.sys and placed server-side.
> 
> [1] I think there might be a way to have Puppet only manage a block in a
> file, but this would be pretty complicated and prone to error.
> 
> On Wed, Feb 27, 2019 at 11:04:43PM +, Harris, Steven wrote:
>> Rick
>>
>> The m4 macro processor is a standard unix offering and can do anything from 
>> simple includes and variable substitutions to lisp-like processing that will 
>> boggle your mind. An m4 macro with some include files and a makefile with a 
>> cron job to build your dsm.sys might do the job.
>>
>> Cheers
>>
>> Steve
>>
>> -Original Message-
>> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
>> Skylar Thompson
>> Sent: Thursday, 28 February 2019 6:10 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: Re: [ADSM-L] dsm.sys question
>>
>> Hi Rick,
>>
>> I'm not aware of a mechanism that allows one to do that with dsmc/dsm.sys, 
>> but Puppet does have the ability to include arbitrary lines in a file, 
>> either via a template or directly in a rule definition.
>>
>> Another option would be to use server-side client option sets:
>>
>> https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/t_mgclinod_mkclioptsets.html
>>
>> These options mirror what can be set in dsm.sys, and can either be 
>> overridden by the client, or enforced by the server.
>>
>> On Wed, Feb 27, 2019 at 06:58:30PM +, Rhodes, Richard L. wrote:
>>> Hello,
>>>
> Our Unix team is implementing a management application named Puppet.
>>> They are running into a problem using Puppet to setup/maintain the TSM
>>> client dsm.sys files.  They create/maintain the dsm.sys as per a
>>> template of some kind.  If you change a dsm.sys with a unique option,
>>> it gets overwritten by the standard template when Puppet
>>> refreshes/checks the file.  The inclexcl option pulls include/excludes
>>> from a separate local file so this works fine for local specific needs.
>>> But some systems need other settings or whole servername stanzas that
>>> are unique.  I've looked through the BA client manual and see no way
>>> to include arbitrary lines from some other file into dsm.sys.
>>>
>>> Q) Is there a way to source options from another file into the dsm.sys, 
>>> kind of like the inclexcl option does?
>>>
>>>
>>> Thanks
>>>
>>> Rick
> 

-- 
Hello World.David Bronder - Systems Architect
Segmentation Fault  ITS-EI, Univ. of Iowa
Core dumped, disk trashed, quota filled, soda warm.   david-bron...@uiowa.edu


Re: dsm.sys question

2019-02-27 Thread Skylar Thompson
I don't know that this approach would work - Puppet would see that the file
differs from the deployed file, and would just overwrite it the next time
the agent runs. Puppet would need to manage dsm.sys completely[1] with
Rick's desired changes, or those options would have to be taken out of
dsm.sys and placed server-side.

[1] I think there might be a way to have Puppet only manage a block in a
file, but this would be pretty complicated and prone to error.

On Wed, Feb 27, 2019 at 11:04:43PM +, Harris, Steven wrote:
> Rick
>
> The m4 macro processor is a standard unix offering and can do anything from 
> simple includes and variable substitutions to lisp-like processing that will 
> boggle your mind. An m4 macro with some include files and a makefile with a 
> cron job to build your dsm.sys might do the job.
>
> Cheers
>
> Steve
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Skylar Thompson
> Sent: Thursday, 28 February 2019 6:10 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] dsm.sys question
>
> Hi Rick,
>
> I'm not aware of a mechanism that allows one to do that with dsmc/dsm.sys, 
> but Puppet does have the ability to include arbitrary lines in a file, either 
> via a template or directly in a rule definition.
>
> Another option would be to use server-side client option sets:
>
> https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/t_mgclinod_mkclioptsets.html
>
> These options mirror what can be set in dsm.sys, and can either be overridden 
> by the client, or enforced by the server.
>
> On Wed, Feb 27, 2019 at 06:58:30PM +, Rhodes, Richard L. wrote:
> > Hello,
> >
> > Our Unix team is implementing a management application named Puppet.
> > They are running into a problem using Puppet to setup/maintain the TSM
> > client dsm.sys files.  They create/maintain the dsm.sys as per a
> > template of some kind.  If you change a dsm.sys with a unique option,
> > it gets overwritten by the standard template when Puppet
> > refreshes/checks the file.  The inclexcl option pulls include/excludes
> > from a separate local file so this works fine for local specific needs.
> > But some systems need other settings or whole servername stanzas that
> > are unique.  I've looked through the BA client manual and see no way
> > to include arbitrary lines from some other file into dsm.sys.
> >
> > Q) Is there a way to source options from another file into the dsm.sys, 
> > kind of like the inclexcl option does?
> >
> >
> > Thanks
> >
> > Rick
> > --
> > 
> >
> > The information contained in this message is intended only for the personal 
> > and confidential use of the recipient(s) named above. If the reader of this 
> > message is not the intended recipient or an agent responsible for 
> > delivering it to the intended recipient, you are hereby notified that you 
> > have received this document in error and that any review, dissemination, 
> > distribution, or copying of this message is strictly prohibited. If you 
> > have received this communication in error, please notify us immediately, 
> > and delete the original message.
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354
> -- University of Washington School of Medicine
>
> This message and any attachment is confidential and may be privileged or 
> otherwise protected from disclosure. You should immediately delete the 
> message if you are not the intended recipient. If you have received this 
> email by mistake please delete it from your system; you should not copy the 
> message or disclose its content to anyone.
>
> This electronic communication may contain general financial product advice 
> but should not be relied upon or construed as a recommendation of any 
> financial product. The information has been prepared without taking into 
> account your objectives, financial situation or needs. You should consider 
> the Product Disclosure Statement relating to the financial product and 
> consult your financial adviser before making a decision about whether to 
> acquire, hold or dispose of a financial product.
>
> For further details on the financial product please go to http://www.bt.com.au
>
> Past performance is not a reliable indicator of future performance.

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: dsm.sys question

2019-02-27 Thread Harris, Steven
Rick

The m4 macro processor is a standard unix offering and can do anything from 
simple includes and variable substitutions to lisp-like processing that will 
boggle your mind. An m4 macro with some include files and a makefile with a 
cron job to build your dsm.sys might do the job.
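A minimal sketch of that approach (all file names invented): keep the shared
stanzas and the host-specific ones in separate files and let m4 splice them
together.

dnl dsm.sys.m4 -- hypothetical master file.
dnl Rebuild the client file with:  m4 dsm.sys.m4 > dsm.sys
include(`dsm.sys.common')dnl
include(`dsm.sys.local')dnl

A cron job (or the makefile mentioned above) then regenerates dsm.sys whenever
one of the pieces changes, so local additions live outside the Puppet-managed
template.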

Cheers

Steve 

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Thursday, 28 February 2019 6:10 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] dsm.sys question

Hi Rick,

I'm not aware of a mechanism that allows one to do that with dsmc/dsm.sys, but 
Puppet does have the ability to include arbitrary lines in a file, either via a 
template or directly in a rule definition.

Another option would be to use server-side client option sets:

https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/t_mgclinod_mkclioptsets.html

These options mirror what can be set in dsm.sys, and can either be overridden 
by the client, or enforced by the server.

On Wed, Feb 27, 2019 at 06:58:30PM +, Rhodes, Richard L. wrote:
> Hello,
>
> Our Unix team is implementing a management application named Puppet.
> They are running into a problem using Puppet to setup/maintain the TSM 
> client dsm.sys files.  They create/maintain the dsm.sys as per a 
> template of some kind.  If you change a dsm.sys with a unique option, 
> it gets overwritten by the standard template when Puppet 
> refreshes/checks the file.  The inclexcl option pulls include/excludes 
> from a separate local file so this works fine for local specific needs.
> But some systems need other settings or whole servername stanzas that 
> are unique.  I've looked through the BA client manual and see no way 
> to include arbitrary lines from some other file into dsm.sys.
>
> Q) Is there a way to source options from another file into the dsm.sys, kind 
> of like the inclexcl option does?
>
>
> Thanks
>
> Rick

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine



Re: dsm.sys question

2019-02-27 Thread Skylar Thompson
Hi Rick,

I'm not aware of a mechanism that allows one to do that with dsmc/dsm.sys,
but Puppet does have the ability to include arbitrary lines in a file,
either via a template or directly in a rule definition.

Another option would be to use server-side client option sets:

https://www.ibm.com/support/knowledgecenter/en/SSGSG7_7.1.1/com.ibm.itsm.srv.doc/t_mgclinod_mkclioptsets.html

These options mirror what can be set in dsm.sys, and can either be
overridden by the client, or enforced by the server.

On Wed, Feb 27, 2019 at 06:58:30PM +, Rhodes, Richard L. wrote:
> Hello,
>
> Our Unix team is implementing a management application named Puppet.
> They are running into a problem using Puppet to setup/maintain the
> TSM client dsm.sys files.  They create/maintain the dsm.sys as
> per a template of some kind.  If you change a dsm.sys with a unique
> option, it gets overwritten by the standard template when Puppet
> refreshes/checks the file.  The inclexcl option pulls include/excludes
> from a separate local file so this works fine for local specific needs.
> But some systems need other settings or whole servername stanzas that
> are unique.  I've looked through the BA client manual and see no way to
> include arbitrary lines from some other file into dsm.sys.
>
> Q) Is there a way to source options from another file into the dsm.sys, kind 
> of like the inclexcl option does?
>
>
> Thanks
>
> Rick

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


dsm.sys question

2019-02-27 Thread Rhodes, Richard L.
Hello,

Our Unix team is implementing a management application named Puppet.
They are running into a problem using Puppet to setup/maintain the 
TSM client dsm.sys files.  They create/maintain the dsm.sys as 
per a template of some kind.  If you change a dsm.sys with a unique 
option, it gets overwritten by the standard template when Puppet 
refreshes/checks the file.  The inclexcl option pulls include/excludes 
from a separate local file so this works fine for local specific needs.  
But some systems need other settings or whole servername stanzas that 
are unique.  I've looked through the BA client manual and see no way to 
include arbitrary lines from some other file into dsm.sys.

Q) Is there a way to source options from another file into the dsm.sys, kind of 
like the inclexcl option does?


Thanks

Rick


Re: Question on switching server roles in replication

2019-01-31 Thread Rick Adamson
Bjoern,
If I understand your situation correctly: on the server that you need to 
transfer data from, set the default replication server to the desired target; 
then, for the selected nodes, remove them from their current replication role 
and update their replication mode.

There may be a better way but this worked for me:

Example:
(Assuming the two servers are already defined to each other)
On the "new" target server, set the default replication server to the new 
source server:
SET REPLSERVER source_server_name

Remove the node you wish to replicate from its current replication role:
REMOVE REPLNODE node_name

Update the node to its replication state/mode:
UPDATE NODE node_name REPLSTATE=ENABLED REPLMODE=SEND

Repeat the REMOVE REPLNODE and UPDATE NODE commands for the same node on the 
server that will receive the data so it is configured to receive instead of 
send data:
REMOVE REPLNODE node_name
UPDATE NODE node_name REPLSTATE=ENABLED REPLMODE=RECEIVE

Then use the node replication command to replicate the data.
When complete, change the replication mode for each node back to its desired 
state: the sending node back to REPLMODE=SEND and the receiving node back to 
REPLMODE=RECEIVE.
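Collected into dsmadmc macro form, Rick's sequence looks roughly like this (node_name and source_server_name are placeholders; treat it as a sketch to adapt, not a tested procedure):

```
/* On the "new" target server: point default replication at the new source */
SET REPLSERVER source_server_name

/* On the server that will send the data */
REMOVE REPLNODE node_name
UPDATE NODE node_name REPLSTATE=ENABLED REPLMODE=SEND

/* On the server that will receive the data */
REMOVE REPLNODE node_name
UPDATE NODE node_name REPLSTATE=ENABLED REPLMODE=RECEIVE

/* Then replicate, and afterwards set the modes back as described above */
REPLICATE NODE node_name
```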

-Rick Adamson


-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Nachtwey, 
Bjoern
Sent: Thursday, January 31, 2019 5:07 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Question on switching server roles in replication

* This email originated outside of the organization. Use caution when opening 
attachments or clicking links. *

--
Dear all,

I set up a new server for one of our customers and will use the old server and 
library as target for replication. First I tried to export the data, but this 
takes too long due to the amount of data. So my next approach is to set up 
replication from the existing server to the new one and afterwards switch the 
roles: the former source server becomes the target server and vice versa; 
later on I switch both roles and replicate from the new server to the old one.

Unfortunately some data was backed up to the "former source server" but not 
replicated to the "former target server" when switching the roles for some 
nodes.
So now some backups have already run to the "new source server" (former 
target), and the "new target server" (former source) holds some data that the 
"new source server" does not have.

What's the best way to get both servers synchronized?
Perhaps I am making a wrong assumption: will that "additional" data on the 
"new target server" be deleted if I start the replication, or can I switch the 
roles as often as I like and synchronize by replicating the data first in one 
direction and then vice versa?

Thanks & best
Bjørn
--
Bjørn Nachtwey

Arbeitsgruppe "IT-Infrastruktur"
Tel.: +49 551 201-2181, E-Mail: bjoern.nacht...@gwdg.de
--
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG) Am 
Faßberg 11, 37077 Göttingen, URL: 
http://www.gwdg.de
Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail: g...@gwdg.de
Service-Hotline: Tel.: +49 551 201-1523, E-Mail: supp...@gwdg.de
Geschäftsführer: Prof. Dr. Ramin Yahyapour
Aufsichtsratsvorsitzender: Prof. Dr. Christian Griesinger Sitz der 
Gesellschaft: Göttingen
Registergericht: Göttingen, Handelsregister-Nr. B 598
--
Zertifiziert nach ISO 9001
--

**CONFIDENTIALITY NOTICE** This electronic message contains information from 
Southeastern Grocers, Inc and is intended only for the use of the addressee. 
This message may contain information that is privileged, confidential and/or 
exempt from disclosure under applicable Law. This message may not be read, 
used, distributed, forwarded, reproduced or stored by any other than the 
intended recipient. If you are not the intended recipient, please delete and 
notify the sender.


AW: [ADSM-L] Question on switching server roles in replication

2019-01-31 Thread Nachtwey, Bjoern
Hi Francisco,

great.
Thanks!

Bjørn


-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Francisco 
Molero
Sent: Thursday, 31 January 2019 11:43
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on switching server roles in replication

Hi,

you can check this doc: 

https://www-01.ibm.com/support/docview.wss?uid=swg21628618

On Thursday, 31 January 2019 at 11:06:59 CET, Nachtwey, Bjoern 
 wrote:
 
 Dear all,

I set up a new server for one of our customers and will use the old server and 
library as target for replication. First I tried to export the data, but this 
takes too long due to the amount of data. So my next approach is to set up 
replication from the existing server to the new one and afterwards switch the 
roles: the former source server becomes the target server and vice versa; 
later on I switch both roles and replicate from the new server to the old one.

Unfortunately some data was backed up to the "former source server" but not 
replicated to the "former target server" when switching the roles for some 
nodes. 
So now some backups have already run to the "new source server" (former 
target), and the "new target server" (former source) holds some data that the 
"new source server" does not have.

What's the best way to get both servers synchronized?
Perhaps I am making a wrong assumption: will that "additional" data on the 
"new target server" be deleted if I start the replication, or can I switch the 
roles as often as I like and synchronize by replicating the data first in one 
direction and then vice versa?

Thanks & best
Bjørn
--
Bjørn Nachtwey 

Arbeitsgruppe "IT-Infrastruktur" 
Tel.: +49 551 201-2181, E-Mail: bjoern.nacht...@gwdg.de
--
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG) Am 
Faßberg 11, 37077 Göttingen, URL: http://www.gwdg.de
Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail: g...@gwdg.de
Service-Hotline: Tel.: +49 551 201-1523, E-Mail: supp...@gwdg.de
Geschäftsführer: Prof. Dr. Ramin Yahyapour
Aufsichtsratsvorsitzender: Prof. Dr. Christian Griesinger Sitz der 
Gesellschaft: Göttingen
Registergericht: Göttingen, Handelsregister-Nr. B 598
--
Zertifiziert nach ISO 9001
--
  


Question on switching server roles in replication

2019-01-31 Thread Nachtwey, Bjoern
Dear all,

I set up a new server for one of our customers and will use the old server and 
library as target for replication. First I tried to export the data, but this 
takes too long due to the amount of data. So my next approach is to set up 
replication from the existing server to the new one and afterwards switch the 
roles: the former source server becomes the target server and vice versa; 
later on I switch both roles and replicate from the new server to the old one.

Unfortunately some data was backed up to the "former source server" but not 
replicated to the "former target server" when switching the roles for some 
nodes. 
So now some backups have already run to the "new source server" (former 
target), and the "new target server" (former source) holds some data that the 
"new source server" does not have.

What's the best way to get both servers synchronized?
Perhaps I am making a wrong assumption: will that "additional" data on the 
"new target server" be deleted if I start the replication, or can I switch the 
roles as often as I like and synchronize by replicating the data first in one 
direction and then vice versa?

Thanks & best
Bjørn
--
 
Bjørn Nachtwey 

Arbeitsgruppe "IT-Infrastruktur" 
Tel.: +49 551 201-2181, E-Mail: bjoern.nacht...@gwdg.de 
--
 
Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG) 
Am Faßberg 11, 37077 Göttingen, URL: http://www.gwdg.de 
Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail: g...@gwdg.de 
Service-Hotline: Tel.: +49 551 201-1523, E-Mail: supp...@gwdg.de 
Geschäftsführer: Prof. Dr. Ramin Yahyapour 
Aufsichtsratsvorsitzender: Prof. Dr. Christian Griesinger
Sitz der Gesellschaft: Göttingen 
Registergericht: Göttingen, Handelsregister-Nr. B 598 
--
 
Zertifiziert nach ISO 9001 
--


Re: archive / hsm for windows question

2018-10-08 Thread Lee, Gary
Got it.  Thanks. All are restored.


-Original Message-
From: ADSM: Dist Stor Manager  On Behalf Of Stefan Bender
Sent: Friday, September 28, 2018 10:38 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] archive / hsm for windows question

Hi all,

dsmclc is also in version 6.3 able to retrieve files to a different directory.

Here are the examples that were published in the admin guide of 6.3:

Retrieve the archived .xls files in the c:\big projects\2009\ directory to a
new path: c:\projects\spreadsheets\. The archive copies are in file space
def-hsm01.

Command: dsmclc retrieve -g def-hsm01 c: "\big projects\2009" *.xls
c:\projects\spreadsheets.

Spaces separate the three parts of the search_pattern: c: "\big projects\2009"
*.xls. Because the directory_pattern (\big projects\2009) contains a blank
space, it is enclosed in quotation marks.

Before running a retrieve I usually run the "dsmclc list" command with the
same pattern to validate that I get the right files:

List all .doc archives in the c:\big projects\2009\ directory. The archive
copies are in file space def-hsm01.

Command: dsmclc list -g def-hsm01 c: "\big projects\2009" *.doc

Mit freundlichen Grüßen / best regards

Stefan Bender

IBM Deutschland / Am Weiher 24 / 65451 Kelsterbach / Germany

IBM Software Development "IBM Spectrum Protect"
--
IBM Deutschland Research & Development GmbH / Vorsitzende des Aufsichtsrats:
Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart,
HRB 243294



From:   Efim 
To: ADSM-L@VM.MARIST.EDU
Date:   25/09/2018 19:51
Subject:    Re: [ADSM-L] archive / hsm for windows question
Sent by:"ADSM: Dist Stor Manager" 



Hi,

As far as I know you can recall HSM files only using the HSM client.

In version 7.1.x you can use the command dsmclc retrieve for it. I don't know
about 6.3.

Example: http://www-01.ibm.com/support/docview.wss?uid=swg21580766

Efim

> On 25 Sep 2018, at 20:29, Lee, Gary wrote:
> 
> I have a server which is running hsm for windows 6.3.
> 
> Folks have deleted the stub files for much of their data.
> 
> I need to restore, as we are terminating tsm services for that data center.
> 
> Is there a way to retrieve the archives avoiding the hsm client, just use
> the tsm client?
> This would let me retrieve to another workstation to put things on a stick.
> 
> If not, I need a little help with the hsm command line to craft a retrieval
> to a new directory tree.
> 
> Thanks for any assistance.

Re: archive / hsm for windows question

2018-09-28 Thread Stefan Bender
Hi all,
dsmclc is also in version 6.3 able to retrieve files to a different 
directory. 
Here are the examples that were published in the admin guide of 6.3: 
Retrieve the archived .xls files in the c:\big projects\2009\ directory to 
a new path: c:\projects\spreadsheets\. The archive copies are in file 
space def-hsm01.
Command: dsmclc retrieve -g def-hsm01 c: "\big projects\2009" *.xls 
c:\projects\spreadsheets.
Spaces separate the three parts of the search_pattern: c: "\big 
projects\2009" *.xls. Because the directory_pattern (\big projects\2009) 
contains a blank space, it is enclosed in quotation marks.


Before running a retrieve I usually run the "dsmclc list" command with the 
same pattern to validate that I get the right files: 
List all .doc archives in the c:\big projects\2009\ directory. The 
archive copies are in file space def-hsm01.
Command: dsmclc list -g def-hsm01 c: "\big projects\2009" *.doc


Mit freundlichen Grüßen / best regards

Stefan Bender

IBM Deutschland / Am Weiher 24 / 65451 Kelsterbach / Germany 

IBM Software Development "IBM Spectrum Protect" 
--
IBM Deutschland Research & Development GmbH / Vorsitzende des 
Aufsichtsrats: Martina Koederitz
Geschäftsführung: Dirk Wittkopp
Sitz der Gesellschaft: Böblingen / Registergericht: Amtsgericht Stuttgart, 
HRB 243294 



From:   Efim 
To: ADSM-L@VM.MARIST.EDU
Date:   25/09/2018 19:51
Subject:    Re: [ADSM-L] archive / hsm for windows question
Sent by:"ADSM: Dist Stor Manager" 



Hi,
As far as I know you can recall HSM files only using the HSM client.
In version 7.1.x you can use the command dsmclc retrieve for it. I don't know 
about 6.3.
Example: http://www-01.ibm.com/support/docview.wss?uid=swg21580766
Efim

> On 25 Sep 2018, at 20:29, Lee, Gary wrote:
> 
> I have a server which is running hsm for windows 6.3.
> 
> Folks have deleted the stub files for much of their data.
> 
> I need to restore, as we are terminating tsm services for that data 
center.
> 
> Is there a way to retrieve the archives avoiding the hsm client, just 
use the tsm client?
> This would let me retrieve to another workstation to put things on a 
stick.
> 
> If not, I need a little help with the hsm command line to craft a 
retrieval to a new directory tree.
> 
> Thanks for any assistance.







Re: archive / hsm for windows question

2018-09-25 Thread Efim
Hi,
As far as I know you can recall HSM files only using the HSM client.
In version 7.1.x you can use the command dsmclc retrieve for it. I don't know 
about 6.3.
Example: http://www-01.ibm.com/support/docview.wss?uid=swg21580766
Efim

> On 25 Sep 2018, at 20:29, Lee, Gary wrote:
> 
> I have a server which is running hsm for windows 6.3.
> 
> Folks have deleted the stub files for much of their data.
> 
> I need to restore, as we are terminating tsm services for that data center.
> 
> Is there a way to retrieve the archives avoiding the hsm client, just use the 
> tsm client?
> This would let me retrieve to another workstation to put things on a stick.
> 
> If not, I need a little help with the hsm command line to craft a retrieval 
> to a new directory tree.
> 
> Thanks for any assistance.


archive / hsm for windows question

2018-09-25 Thread Lee, Gary
I have a server which is running hsm for windows 6.3.

Folks have deleted the stub files for much of their data.

I need to restore, as we are terminating tsm services for that data center.

Is there a way to retrieve the archives avoiding the hsm client, just use the 
tsm client?
This would let me retrieve to another workstation to put things on a stick.

If not, I need a little help with the hsm command line to craft a retrieval to 
a new directory tree.

Thanks for any assistance.


SnapDiff threading question

2018-02-09 Thread Schaub, Steve
We have 6 NetApp fileshares that have a small number of large files but 
TB-scale daily change rates.
When SnapDiff runs, we are not seeing the number of sessions we expect after 
setting resourceutilization=10.
Is this due to how TSM receives the changed-file list from the NetApp?
Would we be better off forcing CreateNewBase=yes on every run to increase 
multi-threading?
Thanks,
-steve



--
Please see the following link for the BlueCross BlueShield of Tennessee E-mail 
disclaimer:  http://www.bcbst.com/email_disclaimer.shtm


Re: Question about moving data to another stg pool directory container

2018-02-01 Thread Marc Lanteigne

That "should" work.  As a safe measure, you can rename the original node on
the original server instead of deleting it.  Delete the renamed node once
you are comfortable the procedure worked.

-
Thanks,
Marc...

Marc Lanteigne

Follow me on: Twitter, developerWorks, LinkedIn


-Original Message-
From: Remco Post [mailto:r.p...@plcs.nl]
Sent: Thursday, February 1, 2018 9:27 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question about moving data to another stg pool
directory container

Hi Robert,

you can’t.

Maybe you could work around that:

setup replication to another (temp) server
replicate the node
delete the replstate from the node on both ends
del file on the first server
update the node/domain/mgmtclass to store the data in the desired pool
replicate back

untested, just a wild idea.

> On 31 Jan 2018, at 11:28, rou...@univ.haifa.ac.il
<rou...@univ.haifa.ac.il> wrote:
>
> Hi to all
>
> Previously Storage container , I used the process of move nodedata to
move nodenames from storage to another storage.
>
> With container storage pool  is not allowed , anybody can point to
> another command to move nodename to another storage pool ( both
> storage pools are Directory container)
>
> Best Regard in Advance
>
> Robert

--

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622



Re: Question about moving data to another stg pool directory container

2018-02-01 Thread Remco Post
Hi Robert,

you can’t.

Maybe you could work around that:

setup replication to another (temp) server
replicate the node
delete the replstate from the node on both ends
del file on the first server
update the node/domain/mgmtclass to store the data in the desired pool
replicate back

untested, just a wild idea.
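Mapped onto admin commands, Remco's outline might look like the sketch below: untested, as he says; TEMPSRV and NODE1 are placeholder names, and "del file" is read here as deleting the node's file spaces on the first server.

```
/* on the first server: replicate the node to a temporary server */
SET REPLSERVER TEMPSRV
UPDATE NODE NODE1 REPLSTATE=ENABLED REPLMODE=SEND
REPLICATE NODE NODE1

/* on both servers: drop the replication state */
REMOVE REPLNODE NODE1

/* on the first server: delete the data and rebind to the desired pool */
DELETE FILESPACE NODE1 *
UPDATE NODE NODE1 DOMAIN=new_domain

/* finally, replicate back from TEMPSRV to the first server */
```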

> On 31 Jan 2018, at 11:28, rou...@univ.haifa.ac.il  
> wrote:
> 
> Hi to all
> 
> Previously Storage container , I used the process of move nodedata to move 
> nodenames from storage to another storage.
> 
> With container storage pool  is not allowed , anybody can point to another 
> command to move nodename to another storage pool ( both storage pools are 
> Directory container)
> 
> Best Regard in Advance
> 
> Robert

-- 

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


Question about moving data to another stg pool directory container

2018-01-31 Thread rou...@univ.haifa.ac.il
Hi to all

Previously Storage container , I used the process of move nodedata to move 
nodenames from storage to another storage.

With container storage pool  is not allowed , anybody can point to another 
command to move nodename to another storage pool ( both storage pools are 
Directory container)

Best Regard in Advance

Robert


Re: Select question

2018-01-10 Thread Harris, Steven
Guys, all of those are far more complex than they need to be:

select node_name, lastacc_time from nodes where lastacc_time<(current 
timestamp - 90 days)

Cheers

Steve

TSM Admin/Consultant
Canberra Australia

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc 
Lanteigne
Sent: Wednesday, 10 January 2018 11:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Select question


Hello,

There's that exact query on Thobias:
http://thobias.org/tsm/sql/#toc17

-
Thanks,
Marc...

Marc Lanteigne
Accelerated Value Specialist for Spectrum Protect
416.478.0233 | marclantei...@ca.ibm.com
Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern

Follow me on: Twitter, developerWorks, LinkedIn


-Original Message-
From: rou...@univ.haifa.ac.il [mailto:rou...@univ.haifa.ac.il]
Sent: Wednesday, January 10, 2018 4:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Select question

Hi to all

Trying to figure out how to run a select that returns, for each node name, a 
“Days since last access” value greater than 3 months.

When I run q node ides I get:

Protect: ADSM2>q node ides

Node Name    Platform      Policy Domain  Days Since    Days Since     Locked?
                           Name           Last Access   Password Set
------------ ------------- -------------- ------------- -------------- -------
IDES         Linux x86-64  CC             178           1,497          No

select node_name , LASTACC_TIME from nodes where ???

T.I.A Regards


This message and any attachment is confidential and may be privileged or 
otherwise protected from disclosure. You should immediately delete the message 
if you are not the intended recipient. If you have received this email by 
mistake please delete it from your system; you should not copy the message or 
disclose its content to anyone. 

This electronic communication may contain general financial product advice but 
should not be relied upon or construed as a recommendation of any financial 
product. The information has been prepared without taking into account your 
objectives, financial situation or needs. You should consider the Product 
Disclosure Statement relating to the financial product and consult your 
financial adviser before making a decision about whether to acquire, hold or 
dispose of a financial product. 

For further details on the financial product please go to http://www.bt.com.au 

Past performance is not a reliable indicator of future performance.


Re: Select question

2018-01-10 Thread Marc Lanteigne

Hello,

There's that exact query on Thobias:
http://thobias.org/tsm/sql/#toc17

-
Thanks,
Marc...

Marc Lanteigne
Accelerated Value Specialist for Spectrum Protect
416.478.0233 | marclantei...@ca.ibm.com
Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern

Follow me on: Twitter, developerWorks, LinkedIn


-Original Message-
From: rou...@univ.haifa.ac.il [mailto:rou...@univ.haifa.ac.il]
Sent: Wednesday, January 10, 2018 4:16 AM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Select question

Hi to all

Trying to figure out how to run a select that returns, for each node name, a 
“Days since last access” value greater than 3 months.

When I run q node ides I get:

Protect: ADSM2>q node ides

Node Name    Platform      Policy Domain  Days Since    Days Since     Locked?
                           Name           Last Access   Password Set
------------ ------------- -------------- ------------- -------------- -------
IDES         Linux x86-64  CC             178           1,497          No

select node_name , LASTACC_TIME from nodes where ???

T.I.A Regards



Re: Select question

2018-01-10 Thread Sasa Drnjevic
On 10.1.2018. 9:11, rou...@univ.haifa.ac.il wrote:
> Hi to all
> 
> Trying to figure out how to run a select that returns, for each node name, a
> “Days since last access” value greater than 3 months.
> 
> When I run q node ides I get:
> 
> Protect: ADSM2>q node ides
> 
> Node Name    Platform      Policy Domain  Days Since    Days Since     Locked?
>                            Name           Last Access   Password Set
> ------------ ------------- -------------- ------------- -------------- -------
> IDES         Linux x86-64  CC             178           1,497          No
> 
> select node_name , LASTACC_TIME from nodes where ???
> 
> T.I.A Regards
> 



SELECT cast((NODES.NODE_NAME) as char(30)) as "NODE",
cast((NODES.LASTACC_TIME) as char(28)) as "LASTACC_TIME" from nodes
where LASTACC_TIME<'2017-10-01 00:15:00.00'


You may have to change the date format...
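To avoid hard-coding the cutoff date, the same filter can be written with a relative date; a sketch combining Sasa's casts with DB2 date arithmetic (column widths are arbitrary):

```sql
SELECT cast(nodes.node_name as char(30)) as "NODE",
       cast(nodes.lastacc_time as char(28)) as "LASTACC_TIME"
  FROM nodes
 WHERE lastacc_time < (current timestamp - 90 days)
```

The `- 90 days` labeled duration keeps the query valid regardless of when it is run.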

Regards

--
Sasa Drnjevic
www.srce.unizg.hr


Select question

2018-01-10 Thread rou...@univ.haifa.ac.il
Hi to all

Trying to figure out how to run a select that returns, for each node name, a 
“Days since last access” value greater than 3 months.

When I run q node ides I get:

Protect: ADSM2>q node ides

Node Name    Platform      Policy Domain  Days Since    Days Since     Locked?
                           Name           Last Access   Password Set
------------ ------------- -------------- ------------- -------------- -------
IDES         Linux x86-64  CC             178           1,497          No

select node_name , LASTACC_TIME from nodes where ???

T.I.A Regards


Re: DBS vault volume question

2017-11-12 Thread rou...@univ.haifa.ac.il
Hi Testa

The trick "force=yes" did it.

Thanks to all

Regards Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Giacomo Testa
Sent: Sunday, November 12, 2017 10:16 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DBS vault volume question

Hi Robert,

As Stefan said, DRM (Disaster Recovery Manager) should be able to handle all 
the media state changes automatically.

If, otherwise, you just want to stop using DRM and recover all the tapes, the 
problem is that it's not possible to delete the last db snapshot backup, at 
least not officially.
https://www-01.ibm.com/support/docview.wss?uid=swg21083952

There is the undocumented option force=yes for the delete volhistory command.
It's undocumented, so be sure that deleting this last db snapshot is really 
what you want to do, and double check before launching the command.
https://www.mail-archive.com/adsm-l@vm.marist.edu/msg85767.html

Steps:
1) Check that this is the last and only db snapshot backup q volh ty=dbs
2) Delete the volume from volhistory
del volh ty=dbs todate=today force=yes


Giacomo Testa

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Sunday, November 12, 2017 14:50
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DBS vault volume question

Hi Robert,

That's not the way you get vault tape back from the vault.
DRM will put vault tapes in vaultretrieve for you; you don't have to do that 
yourself. It will adhere to this setting ( 
https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.3/srv.reference/r_cmd_drmdbbackupexpiredays_set.html
).
So if you set that to 3 (I think the minimum value) the system will put the dbs 
you have offsite in vaultretrieve after that time, then you can place it in 
courierretrieve or go straight to onsiteretrieve.

Make sure you align the value above with the reuse delay of the copypool(s) you 
manage with drm.

Regards,
   Stefan


On Sun, Nov 12, 2017 at 10:56 AM, rou...@univ.haifa.ac.il < 
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> I run the query: q drmedia * source=dbs
>
> Got this output
>
> Volume Name    State    Last Update Date/Time    Automated LibName
> -----------    -----    ---------------------    -----------------
> 55L5           Vault    08/04/2017 19:37:13
>
> I want to delete this volume 00055L5, but with no success.
>
> I check the query: q drmstatus
>
> Protect: ADSM2>q drmstatus
>
>   Recovery Plan Prefix:
>   Plan Instructions Prefix:
> Replacement Volume Postfix: @
>  Primary Storage Pools:
> Copy Storage Pools:
>  Active-Data Storage Pools:
>   Container Copy Storage Pools:
>Not Mountable Location Name: Offsite DataBank
>   Courier Name: COURIER
>Vault Site Name: Databank
>   DB Backup Series Expiration Days: 0 Day(s)
> Recovery Plan File Expiration Days: 60 Day(s)
>   Check Label?: Yes
>  Process FILE Device Type?: No
>  Command File Name:
>
> I cannot succeed in bringing the state of this volume to VAULTRETRIEVE.
>
> I ran the command:  move drmedia  55L5 source=dbs
> wherestate=vaultretrieve tostate=onsiteretrieve
>
> But nothing happens!
>
> Any suggestion ….
>
> Best  Regards
>
> Robert
>
>
>
>
> Robert Ozen
> Head of Data Resilience and Availability
> University of Haifa
> Office: Main Building, Room 5015
> Phone: 04-8240345 (internal: 2345)
> Email: rou...@univ.haifa.ac.il
> _
> University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Zip: 3498838
> Computing and Information Systems Division website: http://computing.haifa.ac.il
>


Re: DBS vault volume question

2017-11-12 Thread Giacomo Testa
Hi Robert,

As Stefan said, DRM (Disaster Recovery Manager) should be able to handle all 
the media state changes automatically.

If, otherwise, you just want to stop using DRM and recover all the tapes, the 
problem is that it's not possible to delete the last db snapshot backup, at 
least not officially.
https://www-01.ibm.com/support/docview.wss?uid=swg21083952

There is the undocumented option force=yes for the delete volhistory command.
It's undocumented, so be sure that deleting this last db snapshot is really 
what you want to do, and double check before launching the command.
https://www.mail-archive.com/adsm-l@vm.marist.edu/msg85767.html

Steps:
1) Check that this is the last and only db snapshot backup
q volh ty=dbs
2) Delete the volume from volhistory
del volh ty=dbs todate=today force=yes


Giacomo Testa

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Sunday, November 12, 2017 14:50
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] DBS vault volume question

Hi Robert,

That's not the way you get vault tape back from the vault.
DRM will put vault tapes in vaultretrieve for you; you don't have to do that 
yourself. It will adhere to this setting ( 
https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.3/srv.reference/r_cmd_drmdbbackupexpiredays_set.html
).
So if you set that to 3 (I think the minimum value) the system will put the dbs 
you have offsite in vaultretrieve after that time, then you can place it in 
courierretrieve or go straight to onsiteretrieve.

Make sure you align the value above with the reuse delay of the copypool(s) you 
manage with drm.

Regards,
   Stefan


On Sun, Nov 12, 2017 at 10:56 AM, rou...@univ.haifa.ac.il < 
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> I run the query: q drmedia * source=dbs
>
> Got this output
>
> Volume Name    State    Last Update Date/Time    Automated LibName
> -----------    -----    ---------------------    -----------------
> 55L5           Vault    08/04/2017 19:37:13
>
> I want to delete this volume 00055L5, but with no success.
>
> I check the query: q drmstatus
>
> Protect: ADSM2>q drmstatus
>
>   Recovery Plan Prefix:
>   Plan Instructions Prefix:
> Replacement Volume Postfix: @
>  Primary Storage Pools:
> Copy Storage Pools:
>  Active-Data Storage Pools:
>   Container Copy Storage Pools:
>Not Mountable Location Name: Offsite DataBank
>   Courier Name: COURIER
>Vault Site Name: Databank
>   DB Backup Series Expiration Days: 0 Day(s)
> Recovery Plan File Expiration Days: 60 Day(s)
>   Check Label?: Yes
>  Process FILE Device Type?: No
>  Command File Name:
>
> I cannot succeed in bringing the state of this volume to VAULTRETRIEVE.
>
> I ran the command:  move drmedia  55L5 source=dbs
> wherestate=vaultretrieve tostate=onsiteretrieve
>
> But nothing happens!
>
> Any suggestion ….
>
> Best  Regards
>
> Robert
>
>
>
>
> Robert Ozen
> Head of Data Resilience and Availability
> University of Haifa
> Office: Main Building, Room 5015
> Phone: 04-8240345 (internal: 2345)
> Email: rou...@univ.haifa.ac.il
> _
> University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Zip: 3498838
> Computing and Information Systems Division website: http://computing.haifa.ac.il
>


Re: DBS vault volume question

2017-11-12 Thread Stefan Folkerts
Hi Robert,

That's not the way you get vault tape back from the vault.
DRM will put vault tapes in vaultretrieve for you; you don't have to do
that yourself. It will adhere to this setting (
https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.3/srv.reference/r_cmd_drmdbbackupexpiredays_set.html
).
So if you set that to 3 (I think the minimum value) the system will put the
dbs you have offsite in vaultretrieve after that time, then you can place
it in courierretrieve or go straight to onsiteretrieve.

Make sure you align the value above with the reuse delay of the copypool(s)
you manage with drm.
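As a concrete sketch of what Stefan describes (the value 3 follows his example, source=dbs matches the volume type from Robert's query, and everything should be checked against your own DRM setup and reuse delay before running):

```
/* let DB backup series expire after 3 days */
SET DRMDBBACKUPEXPIREDAYS 3

/* once expired, the volume should show up in vaultretrieve */
QUERY DRMEDIA * SOURCE=DBS WHERESTATE=VAULTRETRIEVE

/* then bring it onsite */
MOVE DRMEDIA * SOURCE=DBS WHERESTATE=VAULTRETRIEVE TOSTATE=ONSITERETRIEVE
```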

Regards,
   Stefan


On Sun, Nov 12, 2017 at 10:56 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> I run the query: q drmedia * source=dbs
>
> Got this output
>
> Volume Name    State    Last Update Date/Time    Automated LibName
> -----------    -----    ---------------------    -----------------
> 55L5           Vault    08/04/2017 19:37:13
>
> I want to delete this volume 00055L5, but with no success.
>
> I check the query: q drmstatus
>
> Protect: ADSM2>q drmstatus
>
>   Recovery Plan Prefix:
>   Plan Instructions Prefix:
> Replacement Volume Postfix: @
>  Primary Storage Pools:
> Copy Storage Pools:
>  Active-Data Storage Pools:
>   Container Copy Storage Pools:
>Not Mountable Location Name: Offsite DataBank
>   Courier Name: COURIER
>Vault Site Name: Databank
>   DB Backup Series Expiration Days: 0 Day(s)
> Recovery Plan File Expiration Days: 60 Day(s)
>   Check Label?: Yes
>  Process FILE Device Type?: No
>  Command File Name:
>
> I cannot succeed in bringing the state of this volume to VAULTRETRIEVE.
>
> I ran the command:  move drmedia  55L5 source=dbs
> wherestate=vaultretrieve tostate=onsiteretrieve
>
> But nothing happens!
>
> Any suggestion ….
>
> Best  Regards
>
> Robert
>
>
>
>
> Robert Ozen
> Head of Data Resilience and Availability
> University of Haifa
> Office: Main Building, Room 5015
> Phone: 04-8240345 (internal: 2345)
> Email: rou...@univ.haifa.ac.il
> _
> University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Zip: 3498838
> Computing and Information Systems Division website: http://computing.haifa.ac.il
>


DBS vault volume question

2017-11-12 Thread rou...@univ.haifa.ac.il
Hi to all

I ran the query: q drmedia * source=dbs

Got this output

Volume Name    State    Last Update Date/Time    Automated LibName
-----------    -----    ---------------------    -----------------
55L5           Vault    08/04/2017 19:37:13

I want to delete this volume 00055L5, but with no success.

I checked the output of: q drmstatus

Protect: ADSM2>q drmstatus

  Recovery Plan Prefix:
  Plan Instructions Prefix:
Replacement Volume Postfix: @
 Primary Storage Pools:
Copy Storage Pools:
 Active-Data Storage Pools:
  Container Copy Storage Pools:
   Not Mountable Location Name: Offsite DataBank
  Courier Name: COURIER
   Vault Site Name: Databank
  DB Backup Series Expiration Days: 0 Day(s)
Recovery Plan File Expiration Days: 60 Day(s)
  Check Label?: Yes
 Process FILE Device Type?: No
 Command File Name:

I cannot manage to bring the state of this volume to VAULTRETRIEVE.

I ran the command: move drmedia 55L5 source=dbs
wherestate=vaultretrieve tostate=onsiteretrieve

But nothing happens!

Any suggestions?

Best  Regards

Robert




Robert Ozen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_
University of Haifa | 199 Aba Khoushy Ave. | Mount Carmel, Haifa | Zip code: 3498838
Computing and Information Systems Division website: http://computing.haifa.ac.il


Re: Question on Collocation

2017-11-09 Thread Drammeh, Jennifer C
Skylar,

   Thanks! I am going to try this. I appreciate your input!


Jennifer

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Thursday, November 09, 2017 8:12 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation


Hi Jennifer,

While there is a COLLOCGROUP option to MOVE NODEDATA, it sounds like you just 
want to move one node's data.  Any kind of data movement operation will try to 
respect the collocation setting you provide at the node level, assuming you 
have the storage pool set to collocate by group as well.
Something like this would move the node's data to fresh tapes in the same 
storage pool:

MOVE NODEDATA some_node FROMSTGPOOL=some_pool

As Marc said, though, as long as collocation at the pool level is set to either 
NODE or GROUP, there is no difference between a node being in no collocation 
group, and a node being in a collocation group that has no other nodes.

As for starting a full backup, the TSM term is, somewhat confusingly, a
"selective backup". I'm not sure whether this will start a VSS backup, but it
will back up all the eligible files in your backup domain.

On Thu, Nov 09, 2017 at 03:57:43PM +, Drammeh, Jennifer C wrote:
> Marc,
>
>I removed the node from the collocation group and added it to the new 
> collocation group - where it resides by itself. Will the move nodedata 
> command move the data from the tapes in the old collocation group to tapes in 
> the new collocation group? Also, do you know how to initiate a FULL new 
> backup of the entire system.
>
>
> Thanks!
>
>
> Jennifer
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf 
> Of Marc Lanteigne
> Sent: Friday, November 03, 2017 3:07 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Question on Collocation
>
> Hi Jennifer,
>
> The backup always only applies to the filesystem backup, not systemstate. 
> You'll have to change the mode of the systemstate backup to get a full.
>
> Or use move nodedata to move it all together.
>
> BTW, you would just have needed to remove the node from the collocation 
> group. Nodes not in a collocation group are collocated by node.
>
> Marc...
>
> Sent from my iPhone using IBM Verse
>
> On Nov 3, 2017, 6:11:07 PM, jdram...@u.washington.edu wrote:
>
> From: jdram...@u.washington.edu
> To: ADSM-L@VM.MARIST.EDU
> Cc:
> Date: Nov 3, 2017, 6:11:07 PM
> Subject: [ADSM-L] Question on Collocation
>
>
>  I use collocation and I have a node where the System Admin has requested 
> that his data from a particular node be isolated on tapes by itself. I 
> created a collocation group and have associated this one node with the new 
> group. This node had previously been backing up to a different collocation 
> group. I created a new Policy Domain and associated the node with it as well. 
> This policy domain is set to send the data direct to tape instead of going to 
> our diskpool. I also modified the settings on the server to allow this system 
> to have 2 mount points.
>  I had the SA launch a "full" manual backup from the GUI - using the "Always 
> Backup" option. Here are the problems I am seeing.
>  1.   I can see that the system state data went to a tape that was
> already used by other node

Re: Question on Collocation

2017-11-09 Thread Drammeh, Jennifer C
Thanks Marc! Very helpful!!

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc 
Lanteigne
Sent: Thursday, November 09, 2017 8:15 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation


Hi Jennifer,

> I removed the node from the collocation group and added it to the new
> collocation group
The way you did it, or just not being part of a group, gives the same result. So
your way was good; I wasn't trying to say it wasn't, just that you could have
saved a few steps and gotten the same end result.

> Will the move nodedata command move the data from the tapes in the old
> collocation group to tapes in the new collocation group?
Yes, but not exactly the way you describe it. Keep in mind that the server only
uses collocation settings when writing data, not when reading it. When reading,
it doesn't look at collocation settings; it reads the data from wherever it is.

The move nodedata will read the client data and will move it onto new volumes 
following the collocation settings for the storage pool and the node.  Details 
here:
https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.2/srv.reference/r_cmd_nodedata_allfs_move.html



-
Thanks,
Marc...

Marc Lanteigne
Accelerated Value Specialist for Spectrum Protect
416.478.0233 | marclantei...@ca.ibm.com
Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern

Follow me on: Twitter, developerWorks, LinkedIn


-Original Message-
From: Drammeh, Jennifer C [mailto:jdram...@u.washington.edu]
Sent: Thursday, November 9, 2017 12:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation

Marc,

   I removed the node from the collocation group and added it to the new 
collocation group - where it resides by itself. Will the move nodedata command 
move the data from the tapes in the old collocation group to tapes in the new 
collocation group? Also, do you know how to initiate a FULL new backup of the 
entire system.


Thanks!


Jennifer

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc 
Lanteigne
Sent: Friday, November 03, 2017 3:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation

Hi Jennifer,

The backup always only applies to the filesystem backup, not systemstate.
You’ll have to change the mode of the systemstate backup to get a full.

Or use move nodedata to move it all together.

BTW, you would just have needed to remove the node from the collocation group. 
Nodes not in a collocation group are collocated by node.

Marc...

Sent from my iPhone using IBM Verse

On Nov 3, 2017, 6:11:07 PM, jdram...@u.washington.edu wrote:

From: jdram...@u.washington.edu
To: ADSM-L@VM.MARIST.EDU
Cc:
Date: Nov 3, 2017, 6:11:07 PM
Subject: [ADSM-L] Question on Collocation


 I use collocation and I have a node where the System Admin has requested that 
his data from a particular node be isolated on tapes by itself. I created a 
collocation group and have associated this one node with the new group. This 
node had previously been backing up to a different collocation group. I created 
a new Policy Domain and associated the node with it as well. This policy domain 
is set to send the data direct to tape instead of going to our diskpool. I also 
modified the settings on the server to allow this system to have 2 mount points.
 I had the SA launch a "full" manual backup from the GUI - using the "Always 
Backup" option. Here are the problems I am seeing.
 1.   I can see that the system state data went to a tape that was
already used by other nodes (which tells me the collocation is not working)
 2.   It only mounted 1 tape in a drive while performing the backup
 3.   The metrics at the end showed that it inspected 283 GB of data
and only transferred 175 GB which means it did not actually perform a full 
backup.
 Any ideas to help get a FULL backup of this data alone on a single 1 or 2 
tapes and ideally using multiple tape drives to speed things up? (for backup 
and restore)  Thanks!
 Jennifer


Re: Question on Collocation

2017-11-09 Thread Skylar Thompson

Hi Jennifer,

While there is a COLLOCGROUP option to MOVE NODEDATA, it sounds like you
just want to move one node's data.  Any kind of data movement operation
will try to respect the collocation setting you provide at the node level,
assuming you have the storage pool set to collocate by group as well.
Something like this would move the node's data to fresh tapes in the same
storage pool:

MOVE NODEDATA some_node FROMSTGPOOL=some_pool

As Marc said, though, as long as collocation at the pool level is set to
either NODE or GROUP, there is no difference between a node being in no
collocation group, and a node being in a collocation group that has no
other nodes.

As for starting a full backup, the TSM term is, somewhat confusingly, a
"selective backup". I'm not sure whether this will start a VSS backup, but it
will back up all the eligible files in your backup domain.
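For instance (a sketch; the path and options are illustrative, not from the thread), a selective backup from the command-line client looks like:

```
dsmc selective "C:\Users\*" -subdir=yes
```

On Windows, the system state has its own command (dsmc backup systemstate), which is separate from the file-level selective backup.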

On Thu, Nov 09, 2017 at 03:57:43PM +, Drammeh, Jennifer C wrote:
> Marc,
>
>I removed the node from the collocation group and added it to the new 
> collocation group - where it resides by itself. Will the move nodedata 
> command move the data from the tapes in the old collocation group to tapes in 
> the new collocation group? Also, do you know how to initiate a FULL new 
> backup of the entire system.
>
>
> Thanks!
>
>
> Jennifer
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc 
> Lanteigne
> Sent: Friday, November 03, 2017 3:07 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Question on Collocation
>
> Hi Jennifer,
>
> The backup always only applies to the filesystem backup, not systemstate. 
> You'll have to change the mode of the systemstate backup to get a full.
>
> Or use move nodedata to move it all together.
>
> BTW, you would just have needed to remove the node from the collocation 
> group. Nodes not in a collocation group are collocated by node.
>
> Marc...
>
> Sent from my iPhone using IBM Verse
>
> On Nov 3, 2017, 6:11:07 PM, jdram...@u.washington.edu wrote:
>
> From: jdram...@u.washington.edu
> To: ADSM-L@VM.MARIST.EDU
> Cc:
> Date: Nov 3, 2017, 6:11:07 PM
> Subject: [ADSM-L] Question on Collocation
>
>
>  I use collocation and I have a node where the System Admin has requested 
> that his data from a particular node be isolated on tapes by itself. I 
> created a collocation group and have associated this one node with the new 
> group. This node had previously been backing up to a different collocation 
> group. I created a new Policy Domain and associated the node with it as well. 
> This policy domain is set to send the data direct to tape instead of going to 
> our diskpool. I also modified the settings on the server to allow this system 
> to have 2 mount points.
>  I had the SA launch a "full" manual backup from the GUI - using the "Always 
> Backup" option. Here are the problems I am seeing.
>  1.   I can see that the system state data went to a tape that was
> already used by other nodes (which tells me the collocation is not working)
>  2.   It only mounted 1 tape in a drive while performing the backup
>  3.   The metrics at the end showed that it inspected 283 GB of data and 
> only transferred 175 GB which means it did not actually perform a full backup.
>  Any ideas to help get a FULL 

Re: Question on Collocation

2017-11-09 Thread Marc Lanteigne

Hi Jennifer,

> I removed the node from the collocation group and added it to the new
> collocation group
The way you did it, or just not being part of a group, gives the same result. So
your way was good; I wasn't trying to say it wasn't, just that you could have
saved a few steps and gotten the same end result.

> Will the move nodedata command move the data from the tapes in the old
> collocation group to tapes in the new collocation group?
Yes, but not exactly the way you describe it. Keep in mind that the server only
uses collocation settings when writing data, not when reading it. When reading,
it doesn't look at collocation settings; it reads the data from wherever it is.

The move nodedata will read the client data and will move it onto new
volumes following the collocation settings for the storage pool and the
node.  Details here:
https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.2/srv.reference/r_cmd_nodedata_allfs_move.html
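A minimal sketch of the command (node and pool names are placeholders):

```
/* consolidate the node's data onto new volumes in the same pool */
move nodedata some_node fromstgpool=tapepool

/* or move it into a different (e.g. collocated-by-group) pool */
move nodedata some_node fromstgpool=tapepool tostgpool=newtapepool
```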



-
Thanks,
Marc...

Marc Lanteigne
Accelerated Value Specialist for Spectrum Protect
416.478.0233 | marclantei...@ca.ibm.com
Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern

Follow me on: Twitter, developerWorks, LinkedIn


-Original Message-
From: Drammeh, Jennifer C [mailto:jdram...@u.washington.edu]
Sent: Thursday, November 9, 2017 12:00 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation

Marc,

   I removed the node from the collocation group and added it to the new
collocation group - where it resides by itself. Will the move nodedata
command move the data from the tapes in the old collocation group to tapes
in the new collocation group? Also, do you know how to initiate a FULL new
backup of the entire system.


Thanks!


Jennifer

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Marc Lanteigne
Sent: Friday, November 03, 2017 3:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation

Hi Jennifer,

The backup always only applies to the filesystem backup, not systemstate.
You’ll have to change the mode of the systemstate backup to get a full.

Or use move nodedata to move it all together.

BTW, you would just have needed to remove the node from the collocation
group. Nodes not in a collocation group are collocated by node.

Marc...

Sent from my iPhone using IBM Verse

On Nov 3, 2017, 6:11:07 PM, jdram...@u.washington.edu wrote:

From: jdram...@u.washington.edu
To: ADSM-L@VM.MARIST.EDU
Cc:
Date: Nov 3, 2017, 6:11:07 PM
Subject: [ADSM-L] Question on Collocation


 I use collocation and I have a node where the System Admin has requested
that his data from a particular node be isolated on tapes by itself. I
created a collocation group and have associated this one node with the new
group. This node had previously been backing up to a different collocation
group. I created a new Policy Domain and associated the node with it as
well. This policy domain is set to send the data direct to tape instead of
going to our diskpool. I also modified the settings on the server to allow
this system to have 2 mount points.
 I had the SA launch a "full" manual backup from the GUI - using the
"Always Backup" option. Here are the problems I am seeing.
 1.   I can see that the system state data went to a tape that was
already used by other nodes (which tells me the collocation is not working)
 2.   It only mounted 1 tape in a drive while performing the backup
 3.   The metrics at the end showed that it inspected 283 GB of data
and only transferred 175 GB which means it did not actually perform a full
backup.
 Any ideas to help get a FULL backup of this data alone on a single 1 or 2
tapes and ideally using multiple tape drives to speed things up? (for
backup and restore)  Thanks!
 Jennifer


Re: Question on Collocation

2017-11-09 Thread Drammeh, Jennifer C
Marc,

   I removed the node from the collocation group and added it to the new 
collocation group - where it resides by itself. Will the move nodedata command 
move the data from the tapes in the old collocation group to tapes in the new 
collocation group? Also, do you know how to initiate a FULL new backup of the 
entire system. 


Thanks!


Jennifer

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Marc 
Lanteigne
Sent: Friday, November 03, 2017 3:07 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation

Hi Jennifer,

The backup always only applies to the filesystem backup, not systemstate. 
You’ll have to change the mode of the systemstate backup to get a full. 

Or use move nodedata to move it all together. 

BTW, you would just have needed to remove the node from the collocation group. 
Nodes not in a collocation group are collocated by node. 

Marc...

Sent from my iPhone using IBM Verse

On Nov 3, 2017, 6:11:07 PM, jdram...@u.washington.edu wrote:

From: jdram...@u.washington.edu
To: ADSM-L@VM.MARIST.EDU
Cc: 
Date: Nov 3, 2017, 6:11:07 PM
Subject: [ADSM-L] Question on Collocation


 I use collocation and I have a node where the System Admin has requested that 
his data from a particular node be isolated on tapes by itself. I created a 
collocation group and have associated this one node with the new group. This 
node had previously been backing up to a different collocation group. I created 
a new Policy Domain and associated the node with it as well. This policy domain 
is set to send the data direct to tape instead of going to our diskpool. I also 
modified the settings on the server to allow this system to have 2 mount points.
 I had the SA launch a "full" manual backup from the GUI - using the "Always 
Backup" option. Here are the problems I am seeing.
 1.   I can see that the system state data went to a tape that was already
used by other nodes (which tells me the collocation is not working)
 2.   It only mounted 1 tape in a drive while performing the backup
 3.   The metrics at the end showed that it inspected 283 GB of data and 
only transferred 175 GB which means it did not actually perform a full backup.
 Any ideas to help get a FULL backup of this data alone on a single 1 or 2 
tapes and ideally using multiple tape drives to speed things up? (for backup 
and restore)  Thanks!
 Jennifer


Re: Question on Collocation

2017-11-09 Thread Skylar Thompson

Hmm... Another possibility I can think of is churn while you're running the
backup. If you have your management class's copy serialization set to
SHRDYNAMIC, SHRSTATIC, or STATIC, the client will try to determine if a
file is changing while the backup is happening (multiple times in the case
of SHRSTATIC) and either tell the server not to commit the data, or retry
the transfer. If there's a retry, I'm not sure how it impacts the bytes
transferred report at the end.
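Serialization is set on the backup copy group; a hedged sketch (the domain, policy set, and management class names are placeholders) of switching to SHRSTATIC:

```
/* retry files that change during backup, committing only a stable copy */
update copygroup standard_dom standard_ps standard_mc standard type=backup serialization=shrstatic
validate policyset standard_dom standard_ps
activate policyset standard_dom standard_ps
```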

On Thu, Nov 09, 2017 at 03:30:59PM +, Drammeh, Jennifer C wrote:
> Skylar,
>
>Sorry for the delayed response! No client side compression enabled.
>
>
> Jennifer
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
> Skylar Thompson
> Sent: Friday, November 03, 2017 2:49 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Question on Collocation
>
> Hi Jennifer,
>
> Do you have client-side compression enabled? I'm not familiar with what the 
> GUI reports, but dsmc will show this in the "Objects compressed by"
> line.
>
> On 11/03/2017 02:06 PM, Drammeh, Jennifer C wrote:
> > I use collocation and I have a node where the System Admin has requested 
> > that his data from a particular node be isolated on tapes by itself. I 
> > created a collocation group and have associated this one node with the new 
> > group. This node had previously been backing up to a different collocation 
> > group. I created a new Policy Domain and associated the node with it as 
> > well. This policy domain is set to send the data direct to tape instead of 
> > going to our diskpool. I also modified the settings on the server to allow 
> > this system to have 2 mount points.
> >
> > I had the SA launch a "full" manual backup from the GUI - using the "Always 
> > Backup" option. Here are the problems I am seeing.
> >
> >
> > 1.   I can see that the system state data went to a tape that was
> > already used by other nodes (which tells me the collocation is not working)
> >
> > 2.   It only mounted 1 tape in a drive while performing the backup
> >
> > 3.   The metrics at the end showed that it inspected 283 GB of data and 
> > only transferred 175 GB which means it did not actually perform a full 
> > backup.
> >
> > Any ideas to help get a FULL backup of this data alone on a single 1
> > or 2 tapes and ideally using multiple tape drives to speed things up?
> > (for backup and restore)
> >
> > Thanks!
> >
> >
> > Jennifer
>
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


Re: Question on Collocation

2017-11-09 Thread Drammeh, Jennifer C
Skylar,

   Sorry for the delayed response! No client side compression enabled. 


Jennifer



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Friday, November 03, 2017 2:49 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Question on Collocation

Hi Jennifer,

Do you have client-side compression enabled? I'm not familiar with what the GUI 
reports, but dsmc will show this in the "Objects compressed by"
line.

On 11/03/2017 02:06 PM, Drammeh, Jennifer C wrote:
> I use collocation and I have a node where the System Admin has requested that 
> his data from a particular node be isolated on tapes by itself. I created a 
> collocation group and have associated this one node with the new group. This 
> node had previously been backing up to a different collocation group. I 
> created a new Policy Domain and associated the node with it as well. This 
> policy domain is set to send the data direct to tape instead of going to our 
> diskpool. I also modified the settings on the server to allow this system to 
> have 2 mount points.
>
> I had the SA launch a "full" manual backup from the GUI - using the "Always 
> Backup" option. Here are the problems I am seeing.
>
>
> 1.   I can see that the system state data went to a tape that was already
> used by other nodes (which tells me the collocation is not working)
>
> 2.   It only mounted 1 tape in a drive while performing the backup
>
> 3.   The metrics at the end showed that it inspected 283 GB of data and 
> only transferred 175 GB which means it did not actually perform a full backup.
>
> Any ideas to help get a FULL backup of this data alone on a single 1 
> or 2 tapes and ideally using multiple tape drives to speed things up? 
> (for backup and restore)
>
> Thanks!
>
>
> Jennifer


--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354


Re: Question on Collocation

2017-11-03 Thread Marc Lanteigne
Hi Jennifer,

The backup always only applies to the filesystem backup, not systemstate. 
You’ll have to change the mode of the systemstate backup to get a full. 

Or use move nodedata to move it all together. 

BTW, you would just have needed to remove the node from the collocation group. 
Nodes not in a collocation group are collocated by node. 
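If you do want an explicit single-member group, a sketch (group, node, and pool names are hypothetical) would be:

```
define collocgroup solo_grp description="Isolate one node on its own tapes"
define collocmember solo_grp some_node
update stgpool tapepool collocate=group
```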

Marc...

Sent from my iPhone using IBM Verse

On Nov 3, 2017, 6:11:07 PM, jdram...@u.washington.edu wrote:

From: jdram...@u.washington.edu
To: ADSM-L@VM.MARIST.EDU
Cc: 
Date: Nov 3, 2017, 6:11:07 PM
Subject: [ADSM-L] Question on Collocation


 I use collocation and I have a node where the System Admin has requested that 
his data from a particular node be isolated on tapes by itself. I created a 
collocation group and have associated this one node with the new group. This 
node had previously been backing up to a different collocation group. I created 
a new Policy Domain and associated the node with it as well. This policy domain 
is set to send the data direct to tape instead of going to our diskpool. I also 
modified the settings on the server to allow this system to have 2 mount points.
 I had the SA launch a "full" manual backup from the GUI - using the "Always 
Backup" option. Here are the problems I am seeing.
 1.   I can see that the system state data went to a tape that was already
used by other nodes (which tells me the collocation is not working)
 2.   It only mounted 1 tape in a drive while performing the backup
 3.   The metrics at the end showed that it inspected 283 GB of data and 
only transferred 175 GB which means it did not actually perform a full backup.
 Any ideas to help get a FULL backup of this data alone on a single 1 or 2 
tapes and ideally using multiple tape drives to speed things up? (for backup 
and restore)
 Thanks!
 Jennifer


Re: Question on Collocation

2017-11-03 Thread Skylar Thompson
Hi Jennifer,

Do you have client-side compression enabled? I'm not familiar with what
the GUI reports, but dsmc will show this in the "Objects compressed by"
line.

On 11/03/2017 02:06 PM, Drammeh, Jennifer C wrote:
> I use collocation and I have a node where the System Admin has requested that 
> his data from a particular node be isolated on tapes by itself. I created a 
> collocation group and have associated this one node with the new group. This 
> node had previously been backing up to a different collocation group. I 
> created a new Policy Domain and associated the node with it as well. This 
> policy domain is set to send the data direct to tape instead of going to our 
> diskpool. I also modified the settings on the server to allow this system to 
> have 2 mount points.
>
> I had the SA launch a "full" manual backup from the GUI - using the "Always 
> Backup" option. Here are the problems I am seeing.
>
>
> 1.   I can see that the system state data went to a tape that was already
> used by other nodes (which tells me the collocation is not working)
>
> 2.   It only mounted 1 tape in a drive while performing the backup
>
> 3.   The metrics at the end showed that it inspected 283 GB of data and 
> only transferred 175 GB which means it did not actually perform a full backup.
>
> Any ideas to help get a FULL backup of this data alone on a single 1 or 2 
> tapes and ideally using multiple tape drives to speed things up? (for backup 
> and restore)
>
> Thanks!
>
>
> Jennifer


--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354


Question on Collocation

2017-11-03 Thread Drammeh, Jennifer C
I use collocation and I have a node where the System Admin has requested that 
his data from a particular node be isolated on tapes by itself. I created a 
collocation group and have associated this one node with the new group. This 
node had previously been backing up to a different collocation group. I created 
a new Policy Domain and associated the node with it as well. This policy domain 
is set to send the data direct to tape instead of going to our diskpool. I also 
modified the settings on the server to allow this system to have 2 mount points.

I had the SA launch a "full" manual backup from the GUI - using the "Always 
Backup" option. Here are the problems I am seeing.


1.   I can see that the system state data went to a tape that was already
used by other nodes (which tells me the collocation is not working)

2.   It only mounted 1 tape in a drive while performing the backup

3.   The metrics at the end showed that it inspected 283 GB of data and 
only transferred 175 GB, which means it did not actually perform a full backup.

Any ideas to help get a FULL backup of this data alone on a single 1 or 2 tapes 
and ideally using multiple tape drives to speed things up? (for backup and 
restore)

Thanks!


Jennifer
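
A hedged sketch of the server-side pieces that usually have to line up for this 
kind of isolation (the node, group, and pool names below are placeholders, not 
from the thread). Note in particular that group collocation only takes effect 
if the tape storage pool itself is set to COLLOCATE=GROUP:

```
/* Illustrative dsmadmc macro -- all names are hypothetical */
define collocgroup ISOLATED_GRP
define collocmember ISOLATED_GRP JENNIFER_NODE

/* Collocation is a property of the storage pool: without this,   */
/* group membership alone does not keep the data on its own tapes */
update stgpool TAPEPOOL collocate=group

/* Two mount points so the backup can use two drives concurrently */
update node JENNIFER_NODE maxnummp=2
```

If the target pool were still at COLLOCATE=NODE or COLLOCATE=NO, that would be 
consistent with symptom 1 above (system state landing on a shared tape).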


Re: tsm ops center 7.1.7 question

2017-09-28 Thread Lee, Gary
That got it.

Thanks. Wasn't clear from what I had read.



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of J. 
Pohlmann
Sent: Thursday, September 28, 2017 11:11 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm ops center 7.1.7 question

When you log on to the OC for the first time, or any other time subsequently, 
use your normal TSM Admin ID and password. Make sure you specify the IP of the 
hub server on the logon panel because by default it has localhost on the panel. 
The _OC_ Admin ID will be created by the OC when you are going through the 
initial configuration.

Best regards,
Joerg Pohlmann

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, September 28, 2017 08:06
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm ops center 7.1.7 question

Since I don't use the OPS center, not sure if this is correct.  From this
page:

https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.7/srv.install/c_oc_inst_admin_ids_and_passwords.html

*When you initially configure the hub server, an administrator ID named 
IBM-OC-server_name is registered with system authority on the hub server and is 
associated with the initial password that you specify.*

On Thu, Sep 28, 2017 at 10:54 AM, Lee, Gary <g...@bsu.edu> wrote:

> Just installed ops center 7.1.7.
>
> First question, what is the user id used to log in for the first time 
> to configure.
>
> Looked through the server install guide, don't see it listed.
> The install asked for a password, but never hinted at the user id to 
> go with it.
>
> Thanks for the assistance.
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be 
a phishing victim - VCU and other reputable organizations will never use email 
to request that you reply with your password, social security number or 
confidential personal information. For more details visit 
http://phishing.vcu.edu/


Re: tsm ops center 7.1.7 question

2017-09-28 Thread J. Pohlmann
When you log on to the OC for the first time, or any other time subsequently, 
use your normal TSM Admin ID and password. Make sure you specify the IP of the 
hub server on the logon panel because by default it has localhost on the panel. 
The _OC_ Admin ID will be created by the OC when you are going through the 
initial configuration.

Best regards,
Joerg Pohlmann

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Zoltan 
Forray
Sent: Thursday, September 28, 2017 08:06
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] tsm ops center 7.1.7 question

Since I don't use the OPS center, not sure if this is correct.  From this
page:

https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.7/srv.install/c_oc_inst_admin_ids_and_passwords.html

*When you initially configure the hub server, an administrator ID named 
IBM-OC-server_name is registered with system authority on the hub server and is 
associated with the initial password that you specify.*

On Thu, Sep 28, 2017 at 10:54 AM, Lee, Gary <g...@bsu.edu> wrote:

> Just installed ops center 7.1.7.
>
> First question, what is the user id used to log in for the first time 
> to configure.
>
> Looked through the server install guide, don't see it listed.
> The install asked for a password, but never hinted at the user id to 
> go with it.
>
> Thanks for the assistance.
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon Monitor 
Administrator VMware Administrator Virginia Commonwealth University UCC/Office 
of Technology Services www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be 
a phishing victim - VCU and other reputable organizations will never use email 
to request that you reply with your password, social security number or 
confidential personal information. For more details visit 
http://phishing.vcu.edu/


Re: tsm ops center 7.1.7 question

2017-09-28 Thread Zoltan Forray
Since I don't use the OPS center, not sure if this is correct.  From this
page:

https://www.ibm.com/support/knowledgecenter/SSGSG7_7.1.7/srv.install/c_oc_inst_admin_ids_and_passwords.html

*When you initially configure the hub server, an administrator ID named
IBM-OC-server_name is registered with system authority on the hub server
and is associated with the initial password that you specify.*

On Thu, Sep 28, 2017 at 10:54 AM, Lee, Gary <g...@bsu.edu> wrote:

> Just installed ops center 7.1.7.
>
> First question, what is the user id used to log in for the first time to
> configure.
>
> Looked through the server install guide, don't see it listed.
> The install asked for a password, but never hinted at the user id to go
> with it.
>
> Thanks for the assistance.
>



--
*Zoltan Forray*
Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
Xymon Monitor Administrator
VMware Administrator
Virginia Commonwealth University
UCC/Office of Technology Services
www.ucc.vcu.edu
zfor...@vcu.edu - 804-828-4807
Don't be a phishing victim - VCU and other reputable organizations will
never use email to request that you reply with your password, social
security number or confidential personal information. For more details
visit http://phishing.vcu.edu/


tsm ops center 7.1.7 question

2017-09-28 Thread Lee, Gary
Just installed ops center 7.1.7.

First question: what is the user ID used to log in for the first time to 
configure it?

I looked through the server install guide and don't see it listed.
The install asked for a password, but never hinted at the user ID to go with it.

Thanks for the assistance.


Re: Mountpoints question

2017-09-09 Thread Stefan Folkerts
Robert,

Nope, I would not enable client-side compression when backing up to an
ULTRIUM7C device class; the drive can compress much more efficiently without
putting extra CPU load on the client.
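
For completeness, the client-side knob being discussed is the compression 
option in the client's dsm.opt. A minimal illustrative fragment (the value 
shown follows the advice above; it is not a setting confirmed in the thread):

```
* dsm.opt fragment (illustrative)
* Leave client-side compression off and let the LTO7 (ULTRIUM7C)
* drive compress in hardware instead
COMPRESSION NO
```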


On Sat, Sep 9, 2017 at 11:42 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi Stefan 
>
> Great  Make sense !
>
> I made all the changes and works fine.
>
> The only thing I have to take care is the communication between the client
> to the TSM server, very slow !
>
> Meantime It will be wise to activate in the dsm.opt of the client the
> option:  compression YES 
>
> I remind that I backup directly to LTO7 tape (ULTRIUM7C).
>
> Best Regards
>
> Robert
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, September 7, 2017 5:28 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Mountpoints question
>
> Robert,
>
> The keep mount option is only applicable if you migrate the data to a
> sequential volume *after* you store it in the diskpool, so with your direct
> to tape setup here it doesn't do anything.
> The session will hold a single drive for the entire duration of its backup
> session.
>
> On Thu, Sep 7, 2017 at 3:23 PM, rou...@univ.haifa.ac.il <
> rou...@univ.haifa.ac.il> wrote:
>
> > Great Stefan
> >
> > What about the keep mount option need it ?
> >
> > Best regards
> >
> > Robert
> >
> >
> >
> > Envoyé depuis mon smartphone Samsung Galaxy.
> >
> >
> >  Message d'origine 
> > De : Stefan Folkerts <stefan.folke...@gmail.com> Date : 07/09/2017
> > 15:41 (GMT+02:00) À : ADSM-L@VM.MARIST.EDU Objet : Re: [ADSM-L]
> > Mountpoints question
> >
> > Robert,
> >
> > >09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > >>
> > mount points. (SESSION: 20900)
> >
> > You stated that the default resource util is 2, you also stated that
> > "maxnummp is 1" well, that's when you get this message.
> > The client is trying to get two mountpounts but you are restricting it
> > to one.
> > The maxnummp must be the same or higher as the resource util, now you
> > have it the other way around.
> > Just lower the default value of 2 for the resource util to 1.
> >
> >
> >
> >
> > On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda
> > <przy...@gmail.com>
> > wrote:
> >
> > > Hi Robert
> > > Number of possible mount points you control from server. In your
> > > case it will be:
> > > update node your_nodename MAXNUMMP=1
> > >
> > > Good luck and regards
> > > Krzysztof
> > >
> > > 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <
> > rou...@univ.haifa.ac.il
> > > >:
> > >
> > > > Hi Stefan
> > > >
> > > > First thanks for the input .
> > > >
> > > > O.K but how I avoid this message:
> > > >
> > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> 20900
> > > for
> > > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number
> > > > > of  >
> > > > mount points. (SESSION: 20900)
> > > >
> > > > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is
> > NO)
> > > >
> > > > Regards
> > > >
> > > > Robert
> > > >
> > > >
> > > >
> > > > -Original Message-
> > > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
> > > > Behalf
> > Of
> > > > Stefan Folkerts
> > > > Sent: Thursday, September 7, 2017 8:49 AM
> > > > To: ADSM-L@VM.MARIST.EDU
> > > > Subject: Re: [ADSM-L] Mountpoints question
> > > >
> > > > It might seem pretty obvious but you need to set
> > > > resourceutilization
> > to 1
> > > > and not 3.
> > > > Also, the amount of mountpoints doesn't equal the amount of tapes
> > > > a
> > > backup
> > > > can use, it limits the amount of concurrent mountpoints a backup
> > > > can
> > use.
> > > >
> > > >
> > > >
> > > > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > > > rou...@univ.haifa.ac.il> wrote:
> > > >
> > > > > Hi to all
> > > > >
> > > > > Want to backup  directly to LTO7 tapes and only in one tape till
> > > > > is
> > > full.
> > > > >
> > > > > When my maxnummp is 1 for the node name got a lot of warning as:
> > > > >
> > > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> > 20900
> > > > for
> > > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number
> > > > > of mount points. (SESSION: 20900)
> > > > >
> > > > > I made some research and find this article:
> > > > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > > > >
> > > > > No resourceutilization option specified so default:2
> > > > >
> > > > > As the article said  I increase the maxnummp to 3 , this night
> > > > > the backup took 3 tapes !
> > > > >
> > > > > In my q node f=d I see:Keep Mount Point?: No
> > > > >
> > > > > Did I have to change it to YES , to backup to only one tape ?
> > > > >
> > > > > Any suggestions ???
> > > > >
> > > > > Best Regards
> > > > >
> > > > > Robert
> > > > >
> > > >
> > >
> >
>


Re: Mountpoints question

2017-09-09 Thread rou...@univ.haifa.ac.il
Hi Stefan 

Great, makes sense!

I made all the changes and it works fine.

The only thing I still have to take care of is the communication between the 
client and the TSM server, which is very slow!

In the meantime, would it be wise to activate the option "compression yes" in 
the client's dsm.opt?

As a reminder, I back up directly to LTO7 tape (ULTRIUM7C).

Best Regards

Robert

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Thursday, September 7, 2017 5:28 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Mountpoints question

Robert,

The keep mount option is only applicable if you migrate the data to a 
sequential volume *after* you store it in the diskpool, so with your direct to 
tape setup here it doesn't do anything.
The session will hold a single drive for the entire duration of its backup 
session.

On Thu, Sep 7, 2017 at 3:23 PM, rou...@univ.haifa.ac.il < 
rou...@univ.haifa.ac.il> wrote:

> Great Stefan
>
> What about the keep mount option need it ?
>
> Best regards
>
> Robert
>
>
>
> Envoyé depuis mon smartphone Samsung Galaxy.
>
>
>  Message d'origine 
> De : Stefan Folkerts <stefan.folke...@gmail.com> Date : 07/09/2017 
> 15:41 (GMT+02:00) À : ADSM-L@VM.MARIST.EDU Objet : Re: [ADSM-L] 
> Mountpoints question
>
> Robert,
>
> >09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  
> >>
> mount points. (SESSION: 20900)
>
> You stated that the default resource util is 2, you also stated that 
> "maxnummp is 1" well, that's when you get this message.
> The client is trying to get two mountpounts but you are restricting it 
> to one.
> The maxnummp must be the same or higher as the resource util, now you 
> have it the other way around.
> Just lower the default value of 2 for the resource util to 1.
>
>
>
>
> On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda 
> <przy...@gmail.com>
> wrote:
>
> > Hi Robert
> > Number of possible mount points you control from server. In your 
> > case it will be:
> > update node your_nodename MAXNUMMP=1
> >
> > Good luck and regards
> > Krzysztof
> >
> > 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <
> rou...@univ.haifa.ac.il
> > >:
> >
> > > Hi Stefan
> > >
> > > First thanks for the input .
> > >
> > > O.K but how I avoid this message:
> > >
> > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> > for
> > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number 
> > > > of  >
> > > mount points. (SESSION: 20900)
> > >
> > > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is
> NO)
> > >
> > > Regards
> > >
> > > Robert
> > >
> > >
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On 
> > > Behalf
> Of
> > > Stefan Folkerts
> > > Sent: Thursday, September 7, 2017 8:49 AM
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: Re: [ADSM-L] Mountpoints question
> > >
> > > It might seem pretty obvious but you need to set 
> > > resourceutilization
> to 1
> > > and not 3.
> > > Also, the amount of mountpoints doesn't equal the amount of tapes 
> > > a
> > backup
> > > can use, it limits the amount of concurrent mountpoints a backup 
> > > can
> use.
> > >
> > >
> > >
> > > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il < 
> > > rou...@univ.haifa.ac.il> wrote:
> > >
> > > > Hi to all
> > > >
> > > > Want to backup  directly to LTO7 tapes and only in one tape till 
> > > > is
> > full.
> > > >
> > > > When my maxnummp is 1 for the node name got a lot of warning as:
> > > >
> > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> 20900
> > > for
> > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number 
> > > > of mount points. (SESSION: 20900)
> > > >
> > > > I made some research and find this article:
> > > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > > >
> > > > No resourceutilization option specified so default:2
> > > >
> > > > As the article said  I increase the maxnummp to 3 , this night 
> > > > the backup took 3 tapes !
> > > >
> > > > In my q node f=d I see:Keep Mount Point?: No
> > > >
> > > > Did I have to change it to YES , to backup to only one tape ?
> > > >
> > > > Any suggestions ???
> > > >
> > > > Best Regards
> > > >
> > > > Robert
> > > >
> > >
> >
>


Re: Mountpoints question

2017-09-07 Thread Stefan Folkerts
Robert,

The keep mount option is only applicable if you migrate the data to a
sequential volume *after* you store it in the diskpool, so with your direct
to tape setup here it doesn't do anything.
The session will hold a single drive for the entire duration of its backup
session.

On Thu, Sep 7, 2017 at 3:23 PM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Great Stefan
>
> What about the keep mount option need it ?
>
> Best regards
>
> Robert
>
>
>
> Envoyé depuis mon smartphone Samsung Galaxy.
>
>
>  Message d'origine 
> De : Stefan Folkerts <stefan.folke...@gmail.com>
> Date : 07/09/2017 15:41 (GMT+02:00)
> À : ADSM-L@VM.MARIST.EDU
> Objet : Re: [ADSM-L] Mountpoints question
>
> Robert,
>
> >09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> mount points. (SESSION: 20900)
>
> You stated that the default resource util is 2, you also stated that
> "maxnummp
> is 1" well, that's when you get this message.
> The client is trying to get two mountpounts but you are restricting it to
> one.
> The maxnummp must be the same or higher as the resource util, now you have
> it the other way around.
> Just lower the default value of 2 for the resource util to 1.
>
>
>
>
> On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda <przy...@gmail.com>
> wrote:
>
> > Hi Robert
> > Number of possible mount points you control from server. In your case it
> > will be:
> > update node your_nodename MAXNUMMP=1
> >
> > Good luck and regards
> > Krzysztof
> >
> > 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <
> rou...@univ.haifa.ac.il
> > >:
> >
> > > Hi Stefan
> > >
> > > First thanks for the input .
> > >
> > > O.K but how I avoid this message:
> > >
> > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> > for
> > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> > > mount points. (SESSION: 20900)
> > >
> > > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is
> NO)
> > >
> > > Regards
> > >
> > > Robert
> > >
> > >
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of
> > > Stefan Folkerts
> > > Sent: Thursday, September 7, 2017 8:49 AM
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: Re: [ADSM-L] Mountpoints question
> > >
> > > It might seem pretty obvious but you need to set resourceutilization
> to 1
> > > and not 3.
> > > Also, the amount of mountpoints doesn't equal the amount of tapes a
> > backup
> > > can use, it limits the amount of concurrent mountpoints a backup can
> use.
> > >
> > >
> > >
> > > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > > rou...@univ.haifa.ac.il> wrote:
> > >
> > > > Hi to all
> > > >
> > > > Want to backup  directly to LTO7 tapes and only in one tape till is
> > full.
> > > >
> > > > When my maxnummp is 1 for the node name got a lot of warning as:
> > > >
> > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> 20900
> > > for
> > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > > > mount points. (SESSION: 20900)
> > > >
> > > > I made some research and find this article:
> > > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > > >
> > > > No resourceutilization option specified so default:2
> > > >
> > > > As the article said  I increase the maxnummp to 3 , this night the
> > > > backup took 3 tapes !
> > > >
> > > > In my q node f=d I see:Keep Mount Point?: No
> > > >
> > > > Did I have to change it to YES , to backup to only one tape ?
> > > >
> > > > Any suggestions ???
> > > >
> > > > Best Regards
> > > >
> > > > Robert
> > > >
> > >
> >
>


Re: Mountpoints question

2017-09-07 Thread rou...@univ.haifa.ac.il
Great Stefan

What about the keep mount option, do I need it?

Best regards

Robert



Sent from my Samsung Galaxy smartphone.


 Original message 
From: Stefan Folkerts <stefan.folke...@gmail.com>
Date: 07/09/2017 15:41 (GMT+02:00)
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Mountpoints question

Robert,

>09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
mount points. (SESSION: 20900)

You stated that the default resource utilization is 2, and also that maxnummp
is 1; that is exactly when you get this message.
The client is trying to get two mount points but you are restricting it to
one.
maxnummp must be equal to or higher than resourceutilization; right now you
have it the other way around.
Just lower the resource utilization from its default of 2 to 1.




On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda <przy...@gmail.com>
wrote:

> Hi Robert
> Number of possible mount points you control from server. In your case it
> will be:
> update node your_nodename MAXNUMMP=1
>
> Good luck and regards
> Krzysztof
>
> 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <rou...@univ.haifa.ac.il
> >:
>
> > Hi Stefan
> >
> > First thanks for the input .
> >
> > O.K but how I avoid this message:
> >
> > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> > mount points. (SESSION: 20900)
> >
> > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is NO)
> >
> > Regards
> >
> > Robert
> >
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> > Stefan Folkerts
> > Sent: Thursday, September 7, 2017 8:49 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: [ADSM-L] Mountpoints question
> >
> > It might seem pretty obvious but you need to set resourceutilization to 1
> > and not 3.
> > Also, the amount of mountpoints doesn't equal the amount of tapes a
> backup
> > can use, it limits the amount of concurrent mountpoints a backup can use.
> >
> >
> >
> > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > rou...@univ.haifa.ac.il> wrote:
> >
> > > Hi to all
> > >
> > > Want to backup  directly to LTO7 tapes and only in one tape till is
> full.
> > >
> > > When my maxnummp is 1 for the node name got a lot of warning as:
> > >
> > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> > for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > > mount points. (SESSION: 20900)
> > >
> > > I made some research and find this article:
> > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > >
> > > No resourceutilization option specified so default:2
> > >
> > > As the article said  I increase the maxnummp to 3 , this night the
> > > backup took 3 tapes !
> > >
> > > In my q node f=d I see:Keep Mount Point?: No
> > >
> > > Did I have to change it to YES , to backup to only one tape ?
> > >
> > > Any suggestions ???
> > >
> > > Best Regards
> > >
> > > Robert
> > >
> >
>


Re: Mountpoints question

2017-09-07 Thread Stefan Folkerts
Robert,

>09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
mount points. (SESSION: 20900)

You stated that the default resource utilization is 2, and also that maxnummp
is 1; that is exactly when you get this message.
The client is trying to get two mount points but you are restricting it to
one.
maxnummp must be equal to or higher than resourceutilization; right now you
have it the other way around.
Just lower the resource utilization from its default of 2 to 1.




On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda <przy...@gmail.com>
wrote:

> Hi Robert
> Number of possible mount points you control from server. In your case it
> will be:
> update node your_nodename MAXNUMMP=1
>
> Good luck and regards
> Krzysztof
>
> 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <rou...@univ.haifa.ac.il
> >:
>
> > Hi Stefan
> >
> > First thanks for the input .
> >
> > O.K but how I avoid this message:
> >
> > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> > mount points. (SESSION: 20900)
> >
> > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is NO)
> >
> > Regards
> >
> > Robert
> >
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> > Stefan Folkerts
> > Sent: Thursday, September 7, 2017 8:49 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: [ADSM-L] Mountpoints question
> >
> > It might seem pretty obvious but you need to set resourceutilization to 1
> > and not 3.
> > Also, the amount of mountpoints doesn't equal the amount of tapes a
> backup
> > can use, it limits the amount of concurrent mountpoints a backup can use.
> >
> >
> >
> > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > rou...@univ.haifa.ac.il> wrote:
> >
> > > Hi to all
> > >
> > > Want to backup  directly to LTO7 tapes and only in one tape till is
> full.
> > >
> > > When my maxnummp is 1 for the node name got a lot of warning as:
> > >
> > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> > for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > > mount points. (SESSION: 20900)
> > >
> > > I made some research and find this article:
> > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > >
> > > No resourceutilization option specified so default:2
> > >
> > > As the article said  I increase the maxnummp to 3 , this night the
> > > backup took 3 tapes !
> > >
> > > In my q node f=d I see:Keep Mount Point?: No
> > >
> > > Did I have to change it to YES , to backup to only one tape ?
> > >
> > > Any suggestions ???
> > >
> > > Best Regards
> > >
> > > Robert
> > >
> >
>


Re: Mountpoints question

2017-09-07 Thread Krzysztof Przygoda
Hi Robert
You control the number of possible mount points from the server. In your case
it will be:
update node your_nodename MAXNUMMP=1

Good luck and regards
Krzysztof

2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <rou...@univ.haifa.ac.il>:

> Hi Stefan
>
> First thanks for the input .
>
> O.K but how I avoid this message:
>
> 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> mount points. (SESSION: 20900)
>
> Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is NO)
>
> Regards
>
> Robert
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, September 7, 2017 8:49 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Mountpoints question
>
> It might seem pretty obvious but you need to set resourceutilization to 1
> and not 3.
> Also, the amount of mountpoints doesn't equal the amount of tapes a backup
> can use, it limits the amount of concurrent mountpoints a backup can use.
>
>
>
> On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> rou...@univ.haifa.ac.il> wrote:
>
> > Hi to all
> >
> > Want to backup  directly to LTO7 tapes and only in one tape till is full.
> >
> > When my maxnummp is 1 for the node name got a lot of warning as:
> >
> > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> for
> > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > mount points. (SESSION: 20900)
> >
> > I made some research and find this article:
> > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> >
> > No resourceutilization option specified so default:2
> >
> > As the article said  I increase the maxnummp to 3 , this night the
> > backup took 3 tapes !
> >
> > In my q node f=d I see:Keep Mount Point?: No
> >
> > Did I have to change it to YES , to backup to only one tape ?
> >
> > Any suggestions ???
> >
> > Best Regards
> >
> > Robert
> >
>


Re: Mountpoints question

2017-09-07 Thread rou...@univ.haifa.ac.il
Hi Stefan

First, thanks for the input.

O.K., but how do I avoid this message:

09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for 
node IIP DM-LAB-FISH. This node has exceeded its maximum number of mount 
points. (SESSION: 20900)

Maybe by setting the node option Keep Mount Point? to YES (it is now NO)?

Regards

Robert



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Stefan 
Folkerts
Sent: Thursday, September 7, 2017 8:49 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Mountpoints question

It might seem pretty obvious but you need to set resourceutilization to 1 and 
not 3.
Also, the amount of mountpoints doesn't equal the amount of tapes a backup can 
use, it limits the amount of concurrent mountpoints a backup can use.



On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il < 
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> Want to backup  directly to LTO7 tapes and only in one tape till is full.
>
> When my maxnummp is 1 for the node name got a lot of warning as:
>
> 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> node IIP DM-LAB-FISH. This node has exceeded its maximum number of  
> mount points. (SESSION: 20900)
>
> I made some research and find this article:
> http://www-01.ibm.com/support/docview.wss?uid=swg21584672
>
> No resourceutilization option specified so default:2
>
> As the article said  I increase the maxnummp to 3 , this night the 
> backup took 3 tapes !
>
> In my q node f=d I see:Keep Mount Point?: No
>
> Did I have to change it to YES , to backup to only one tape ?
>
> Any suggestions ???
>
> Best Regards
>
> Robert
>


Re: Mountpoints question

2017-09-06 Thread Stefan Folkerts
It might seem pretty obvious, but you need to set resourceutilization to 1
and not 3.
Also, the number of mount points doesn't equal the number of tapes a backup
can use; it limits the number of concurrent mount points a backup can use.
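
Put together, the single-tape, single-drive setup discussed in this thread 
boils down to keeping maxnummp greater than or equal to resourceutilization, 
with both at 1. An illustrative client-side fragment:

```
* client dsm.opt (illustrative): limit the client to one session
RESOURCEUTILIZATION 1
```

paired on the server with "update node MYNODE maxnummp=1" (hypothetical node 
name), so the node's single session gets exactly one mount point.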



On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> Want to backup  directly to LTO7 tapes and only in one tape till is full.
>
> When my maxnummp is 1 for the node name got a lot of warning as:
>
> 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> node IIP DM-LAB-FISH. This node has exceeded its maximum number of  mount
> points. (SESSION: 20900)
>
> I made some research and find this article:
> http://www-01.ibm.com/support/docview.wss?uid=swg21584672
>
> No resourceutilization option specified so default:2
>
> As the article said  I increase the maxnummp to 3 , this night the backup
> took 3 tapes !
>
> In my q node f=d I see:Keep Mount Point?: No
>
> Did I have to change it to YES , to backup to only one tape ?
>
> Any suggestions ???
>
> Best Regards
>
> Robert
>


Mountpoints question

2017-09-06 Thread rou...@univ.haifa.ac.il
Hi to all

I want to back up directly to LTO7 tapes, and to only one tape until it is full.

When maxnummp is 1 for the node, I get a lot of warnings such as:

09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for node 
IIP DM-LAB-FISH. This node has exceeded its maximum number of mount points. 
(SESSION: 20900)

I did some research and found this article:
http://www-01.ibm.com/support/docview.wss?uid=swg21584672

No resourceutilization option is specified, so the default of 2 applies.

As the article suggested, I increased maxnummp to 3; that night the backup 
took 3 tapes!

In my q node f=d output I see: Keep Mount Point?: No

Do I have to change it to YES in order to back up to only one tape?

Any suggestions?

Best Regards

Robert


Question about Operation Center

2017-09-03 Thread rou...@univ.haifa.ac.il
Hi to all


To be on the safe side: can I upgrade my standalone Operations Center to 
version 8.1.1.100 while the TSM server stays at 8.1.1.0?



Best Regards



Robert


Re: question about eventlogging

2017-08-30 Thread Remco Post
That the receiver is active is not important; if all events are disabled for
the nteventlog receiver, nothing gets logged.

> On 30 Aug 2017, at 09:01, rou...@univ.haifa.ac.il <rou...@univ.haifa.ac.il> 
> wrote:
> 
> Hello
> 
> A quick question I run those  commands on my TSM server:
> end eventlogging nteventlog
> disable events ntevenlog info,warning
> 
> Check with q status: Active Receivers: CONSOLE ACTLOG
> 
> But after a reboot , I get:
> 
> Active Receivers: CONSOLE ACTLOG NTEVENTLOG
> 
> Maybe I can add an option in dsmserv.opt?
> 
> Best  Regards
> 
> Robert

-- 

 Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622


question about eventlogging

2017-08-30 Thread rou...@univ.haifa.ac.il
Hello

A quick question I run those  commands on my TSM server:
 end eventlogging nteventlog
 disable events ntevenlog info,warning

Check with q status: Active Receivers: CONSOLE ACTLOG

But after a reboot , I get:

Active Receivers: CONSOLE ACTLOG NTEVENTLOG

Maybe I can add an option in dsmserv.opt?

Best  Regards

Robert


Re: Question adding new scratch at library

2017-08-14 Thread Harris, Steven
Hi Robert

My suggestion is that overwrite=yes won't make much difference.

You still have to mount the tape to write the label, so an extra read and 
rewind is nothing.  Personally I always let it default to overwrite=no unless I 
know I need overwrite=yes.  Less chance of accidents that way - such as 
specifying the wrong volume range and overwriting something you really wanted.

Cheers

Steve

Steven Harris
TSM Admin/Consultant
Canberra Australia

Steve at stevenharris dot info

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
rou...@univ.haifa.ac.il
Sent: Tuesday, 15 August 2017 3:10 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Question adding new scratch at library

Hi to all

A quick question about adding new scratch tapes on a TS3200 library (SCSI). Which
would be the correct or more efficient command to run:

label libv LTO7LIB checkin=scratch search=bulk labelsource=barcode
volrange=$1,$2 overwrite=yes
or
checkin libv LTO7LIB status=scratch search=bulk checklabel=barcode
volrange=$1,$2

Best Regards

Robert

This message and any attachment is confidential and may be privileged or 
otherwise protected from disclosure. You should immediately delete the message 
if you are not the intended recipient. If you have received this email by 
mistake please delete it from your system; you should not copy the message or 
disclose its content to anyone. 

This electronic communication may contain general financial product advice but 
should not be relied upon or construed as a recommendation of any financial 
product. The information has been prepared without taking into account your 
objectives, financial situation or needs. You should consider the Product 
Disclosure Statement relating to the financial product and consult your 
financial adviser before making a decision about whether to acquire, hold or 
dispose of a financial product. 

For further details on the financial product please go to http://www.bt.com.au 

Past performance is not a reliable indicator of future performance.


Question adding new scratch at library

2017-08-14 Thread rou...@univ.haifa.ac.il
Hi to all

A quick question about adding new scratch tapes on a TS3200 library (SCSI). Which
would be the correct or more efficient command to run:

label libv LTO7LIB checkin=scratch search=bulk labelsource=barcode 
volrange=$1,$2 overwrite=yes
or
checkin libv LTO7LIB status=scratch  search=bulk checklabel=barcode 
volrange=$1,$2

Best Regards

Robert


Full VM Instant Restore question

2017-08-01 Thread rou...@univ.haifa.ac.il
Hi Guys

Just want to know: after a Full VM Instant Restore completes successfully,
should I see the machine waiting for dismount in the Instant Access / Restore
status window of the web GUI?

Tried to check via the GUI and via the console with the commands
q vm * -vmrest=all and q vm * -vmrest=instantr. No result.

In my case I didn't see it; I only see it when I run the Full VM Instant
Access process!

Best Regards

Robert


Re: restore backupset question forgive the repeat

2017-07-05 Thread Lee, Gary
Finally got the backupset restored.

Final syntax of batch file was as follows:

dsmc restore backupset f:\* e:\rcip-f\ 
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes 
-asnode=rcipserver >> restore-f.log

Thanks all.
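For anyone landing here later, the working setup can be sketched end to end: a server-side client schedule that runs the batch file as a command (not a macro), plus the batch file itself. The define/associate syntax below is a sketch reconstructed from this thread and worth checking against your server level:

```
/* server side: run the restore as a command action */
define schedule twoweeks restore-rcip action=command
  objects="c:\restore-it.bat" startdate=today starttime=now
define association twoweeks restore-rcip restore-temp

rem c:\restore-it.bat on the restore-temp client:
dsmc restore backupset f:\* e:\rcip-f\ -backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes -asnode=rcipserver >> restore-f.log
```

The key point from Giacomo's answer: -asnode goes inside the command the batch file runs, because a schedule with Options=-asnode only works with Action=incremental, restore, etc., not with Action=macro or Action=command.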

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Giacomo Testa
Sent: Wednesday, June 28, 2017 5:02 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] restore backupset question forgive the repeat

Correction: it should be .bat and not .mac

Obj=c:\restore-it.bat

restore-it.bat:
dsmc restore backupset {"\\rcipserver\c$"}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes
-asnode=rcipserver




Giacomo Testa

-Original Message-
From: Giacomo Testa [mailto:giacomo.te...@gmail.com]
Sent: Wednesday, June 28, 2017 10:52
To: 'ADSM: Dist Stor Manager' <ADSM-L@VM.MARIST.EDU>
Subject: RE: [ADSM-L] restore backupset question forgive the repeat

Hi,

I'm not sure, but I think that a schedule with Option=-asnode doesn't work
together with Action=macro or Action=command, but only with
Action=incremental, restore etc.

The following should work:
Act=command
Opt=""
Obj=c:\restore-it.mac

restore-it.mac:
dsmc restore backupset {"\\rcipserver\c$"}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes
-asnode=rcipserver


Giacomo Testa

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary
Sent: Tuesday, June 27, 2017 21:20
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] restore backupset question forgive the repeat

The screen reader did not tell me certain messages were highlighted, so an
answer to this was deleted so asking again with more information.

Tsm server 7.1.7.

Two TSM Windows clients: restore-temp 7.1.4, rcipserver 6.3.2.

Rcipserver has a backupset

Want to restore files to restore-temp.
Proxy granted with rcipserver as target, and restore-temp as agent.

Restore-rcip schedule defined as follows:

tsm: TSM01>q sch twoweeks restore-rcip f=d Policy Domain Name: TWOWEEKS
Schedule Name: RESTORE-RCIP
Description: restore filespace from backupset
Action: Macro
Subaction:
Options: -asnode=rcipserver
Objects: c:\restore-it.mac
Priority: 5
Start Date/Time: 06/27/2017 12:26:13
Duration: 5 Minute(s)
.  .  .

Restore-it.mac:

restore backupset {'\\rcipserver\c$'}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes


q backupset rcipserver f=d:

tsm: TSM01>q backupset rcipserver f=d
Node Name: RCIPSERVER
Backup Set Name: RCIP-RETAIN-6-MONTH.777843459 Data Type: File
Date/Time: 05/17/2017 08:33:06
Retention Period: 185
Device Class Name: TS1120-RB-ON-BACK
Description: rcipserver retain until 11/17/2017 Has Table of Contents
(TOC)?: No Filespace names: RCIPSERVER\SystemState\NULL\System
State\SystemState \\rcipserver\c$
\\ rcipserver\e$ \\rcipserver\f$
Volume names: 550025 227337


Errors:

06/27/2017 12:26:52  ANR0406I Session 38074 started for node
RESTORE-TEMP
  (WinNT) (Tcp/Ip
restore-temp.servers.bsu.edu(65421)).
  (SESSION: 38074)
06/27/2017 12:26:54  ANR3540E Object set
RESTORE-TEMP:RCIP-RETAIN-6-MONTH.7778-
  43459:File was not found for session 38074,
  RESTORE-TEMP. (SESSION: 38074)
06/27/2017 12:26:56  ANR1626I The previous message (message number 3540)
was
  repeated 2 times.


Probably overlooked something simple, but just don't see it.

Thanks for the help.

Hopefully things don't highlight spontaneously again.


Re: restore backupset question forgive the repeat

2017-06-28 Thread Giacomo Testa
Correction: it should be .bat and not .mac

Obj=c:\restore-it.bat

restore-it.bat:
dsmc restore backupset {"\\rcipserver\c$"}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes
-asnode=rcipserver




Giacomo Testa

-Original Message-
From: Giacomo Testa [mailto:giacomo.te...@gmail.com]
Sent: Wednesday, June 28, 2017 10:52
To: 'ADSM: Dist Stor Manager' <ADSM-L@VM.MARIST.EDU>
Subject: RE: [ADSM-L] restore backupset question forgive the repeat

Hi,

I'm not sure, but I think that a schedule with Option=-asnode doesn't work
together with Action=macro or Action=command, but only with
Action=incremental, restore etc.

The following should work:
Act=command
Opt=""
Obj=c:\restore-it.mac

restore-it.mac:
dsmc restore backupset {"\\rcipserver\c$"}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes
-asnode=rcipserver


Giacomo Testa

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary
Sent: Tuesday, June 27, 2017 21:20
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] restore backupset question forgive the repeat

The screen reader did not tell me certain messages were highlighted, so an
answer to this was deleted so asking again with more information.

Tsm server 7.1.7.

Two TSM Windows clients: restore-temp 7.1.4, rcipserver 6.3.2.

Rcipserver has a backupset

Want to restore files to restore-temp.
Proxy granted with rcipserver as target, and restore-temp as agent.

Restore-rcip schedule defined as follows:

tsm: TSM01>q sch twoweeks restore-rcip f=d Policy Domain Name: TWOWEEKS
Schedule Name: RESTORE-RCIP
Description: restore filespace from backupset
Action: Macro
Subaction:
Options: -asnode=rcipserver
Objects: c:\restore-it.mac
Priority: 5
Start Date/Time: 06/27/2017 12:26:13
Duration: 5 Minute(s)
.  .  .

Restore-it.mac:

restore backupset {'\\rcipserver\c$'}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes


q backupset rcipserver f=d:

tsm: TSM01>q backupset rcipserver f=d
Node Name: RCIPSERVER
Backup Set Name: RCIP-RETAIN-6-MONTH.777843459 Data Type: File
Date/Time: 05/17/2017 08:33:06
Retention Period: 185
Device Class Name: TS1120-RB-ON-BACK
Description: rcipserver retain until 11/17/2017 Has Table of Contents
(TOC)?: No Filespace names: RCIPSERVER\SystemState\NULL\System
State\SystemState \\rcipserver\c$
\\ rcipserver\e$ \\rcipserver\f$
Volume names: 550025 227337


Errors:

06/27/2017 12:26:52  ANR0406I Session 38074 started for node
RESTORE-TEMP
  (WinNT) (Tcp/Ip
restore-temp.servers.bsu.edu(65421)).
  (SESSION: 38074)
06/27/2017 12:26:54  ANR3540E Object set
RESTORE-TEMP:RCIP-RETAIN-6-MONTH.7778-
  43459:File was not found for session 38074,
  RESTORE-TEMP. (SESSION: 38074)
06/27/2017 12:26:56  ANR1626I The previous message (message number 3540)
was
  repeated 2 times.


Probably overlooked something simple, but just don't see it.

Thanks for the help.

Hopefully things don't highlight spontaneously again.


Re: restore backupset question forgive the repeat

2017-06-28 Thread Giacomo Testa
Hi,

I'm not sure, but I think that a schedule with Option=-asnode doesn't work
together with Action=macro or Action=command, but only with
Action=incremental, restore etc.

The following should work:
Act=command
Opt=""
Obj=c:\restore-it.mac

restore-it.mac:
dsmc restore backupset {"\\rcipserver\c$"}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes
-asnode=rcipserver


Giacomo Testa

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
Lee, Gary
Sent: Tuesday, June 27, 2017 21:20
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] restore backupset question forgive the repeat

The screen reader did not tell me certain messages were highlighted, so an
answer to this was deleted so asking again with more information.

Tsm server 7.1.7.

Two TSM Windows clients: restore-temp 7.1.4, rcipserver 6.3.2.

Rcipserver has a backupset

Want to restore files to restore-temp.
Proxy granted with rcipserver as target, and restore-temp as agent.

Restore-rcip schedule defined as follows:

tsm: TSM01>q sch twoweeks restore-rcip f=d Policy Domain Name: TWOWEEKS
Schedule Name: RESTORE-RCIP
Description: restore filespace from backupset
Action: Macro
Subaction:
Options: -asnode=rcipserver
Objects: c:\restore-it.mac
Priority: 5
Start Date/Time: 06/27/2017 12:26:13
Duration: 5 Minute(s)
.  .  .

Restore-it.mac:

restore backupset {'\\rcipserver\c$'}\* e:\rcip-c\
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes


q backupset rcipserver f=d:

tsm: TSM01>q backupset rcipserver f=d
Node Name: RCIPSERVER
Backup Set Name: RCIP-RETAIN-6-MONTH.777843459 Data Type: File
Date/Time: 05/17/2017 08:33:06
Retention Period: 185
Device Class Name: TS1120-RB-ON-BACK
Description: rcipserver retain until 11/17/2017 Has Table of Contents
(TOC)?: No Filespace names: RCIPSERVER\SystemState\NULL\System
State\SystemState \\rcipserver\c$
\\ rcipserver\e$ \\rcipserver\f$
Volume names: 550025 227337


Errors:

06/27/2017 12:26:52  ANR0406I Session 38074 started for node
RESTORE-TEMP
  (WinNT) (Tcp/Ip
restore-temp.servers.bsu.edu(65421)).
  (SESSION: 38074)
06/27/2017 12:26:54  ANR3540E Object set
RESTORE-TEMP:RCIP-RETAIN-6-MONTH.7778-
  43459:File was not found for session 38074,
  RESTORE-TEMP. (SESSION: 38074)
06/27/2017 12:26:56  ANR1626I The previous message (message number 3540)
was
  repeated 2 times.


Probably overlooked something simple, but just don't see it.

Thanks for the help.

Hopefully things don't highlight spontaneously again.


restore backupset question forgive the repeat

2017-06-27 Thread Lee, Gary
The screen reader did not tell me certain messages were highlighted, so an
answer to this was deleted; asking again with more information.

Tsm server 7.1.7.

Two TSM Windows clients: restore-temp 7.1.4,
rcipserver 6.3.2.

Rcipserver has a backupset

Want to restore files to restore-temp.
Proxy granted with rcipserver as target, and restore-temp as agent.

Restore-rcip schedule defined as follows:

tsm: TSM01>q sch twoweeks restore-rcip f=d
Policy Domain Name: TWOWEEKS
Schedule Name: RESTORE-RCIP
Description: restore filespace from backupset
Action: Macro
Subaction:
Options: -asnode=rcipserver
Objects: c:\restore-it.mac
Priority: 5
Start Date/Time: 06/27/2017 12:26:13
Duration: 5 Minute(s)
.  .  .

Restore-it.mac:

restore backupset {'\\rcipserver\c$'}\* e:\rcip-c\ 
-backupsetname=RCIP-RETAIN-6-MONTH.777843459 -loc=server -subdir=yes


q backupset rcipserver f=d:

tsm: TSM01>q backupset rcipserver f=d
Node Name: RCIPSERVER
Backup Set Name: RCIP-RETAIN-6-MONTH.777843459
Data Type: File
Date/Time: 05/17/2017 08:33:06
Retention Period: 185
Device Class Name: TS1120-RB-ON-BACK
Description: rcipserver retain until 11/17/2017
Has Table of Contents (TOC)?: No
Filespace names: RCIPSERVER\SystemState\NULL\System
State\SystemState \\rcipserver\c$
\\ rcipserver\e$ \\rcipserver\f$
Volume names: 550025 227337


Errors:

06/27/2017 12:26:52  ANR0406I Session 38074 started for node RESTORE-TEMP
  (WinNT) (Tcp/Ip restore-temp.servers.bsu.edu(65421)).
  (SESSION: 38074)
06/27/2017 12:26:54  ANR3540E Object set 
RESTORE-TEMP:RCIP-RETAIN-6-MONTH.7778-
  43459:File was not found for session 38074,
  RESTORE-TEMP. (SESSION: 38074)
06/27/2017 12:26:56  ANR1626I The previous message (message number 3540) was
  repeated 2 times.


Probably overlooked something simple, but just don't see it.

Thanks for the help.

Hopefully things don't highlight spontaneously again.


client side encryption question

2017-06-01 Thread Lee, Gary
We are being asked about encrypting our backups.

Looking at client side encryption, if they want to encrypt all contents of a 
drive under windows, is

Include.encrypt c:\...\*

The best way?

Also, what about a bare metal restore of an encrypted client?
Using encryptkey=generate

Will doing a systemstate restore first bring back the key so as to restore 
files?

Thank you all. This will get me started.
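A hedged dsm.opt sketch of what that setup might look like (option names follow the standard BA-client options reference; whether ENCRYPTKEY GENERATE lets a bare-metal restore recover the key is exactly the open question here, so verify against your client level's documentation before relying on it):

```
* dsm.opt -- encrypt everything under C: with client-side encryption
INCLUDE.ENCRYPT c:\...\*
ENCRYPTIONTYPE AES128
ENCRYPTKEY GENERATE
```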


Re: Script question

2017-05-17 Thread Heinz Flemming
According to rou...@univ.haifa.ac.il
> Hi All
>
> Need a little help with this script
>
> select cast(node_name as varchar(20)) as "Nodename", cast(TCP_ADDRESS as
> varchar(15)) as "IP", cast(CLIENT_OS_NAME as varchar(35)) as "OS Name",
> cast(CLIENT_SYSTEM_ARCHITECTURE as varchar(4)) as "BIT",
> cast(APPLICATION_VERSION as varchar(2)) as "Application Version",
> cast(APPLICATION_RELEASE as varchar(2)) as "Application Release",
> cast(APPLICATION_LEVEL as varchar(2)) as "Application Level",
> cast(APPLICATION_SUBLEVEL as varchar(2)) as "Application Sublevel" from nodes
> where (node_name like ('%%_DB') and node_name not like ('DRM%%')) order by 4
>

Try the following:

select node_name from nodes where node_name like '%\_DB' escape '\'

The character "_" matches exactly one character.
Therefore you must escape it.
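That behaviour can be checked outside the server; here is a small Python sketch (not TSM code, just an emulation of SQL LIKE semantics) showing why HPLDB slips through the unescaped pattern:

```python
import re

def sql_like(pattern, value, escape=None):
    """Emulate SQL LIKE: '%' matches any run of characters,
    '_' matches exactly one; the escape char protects the next char."""
    parts = []
    i = 0
    while i < len(pattern):
        ch = pattern[i]
        if escape and ch == escape and i + 1 < len(pattern):
            parts.append(re.escape(pattern[i + 1]))  # literal next char
            i += 2
            continue
        if ch == '%':
            parts.append('.*')
        elif ch == '_':
            parts.append('.')
        else:
            parts.append(re.escape(ch))
        i += 1
    return re.fullmatch(''.join(parts), value) is not None

# Unescaped '_' is a one-character wildcard: '%' eats 'HP', '_' eats 'L'.
print(sql_like('%_DB', 'HPLDB'))                        # True -- the surprise
# Escaped, only a literal '_DB' suffix matches.
print(sql_like(r'%\_DB', 'HPLDB', escape='\\'))         # False
print(sql_like(r'%\_DB', 'EXCHSRVAN_DB', escape='\\'))  # True
```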


Heinz

--
Karlsruher Institut fuer Technologie (KIT)
Steinbuch Centre for Computing (SCC)

Heinz Flemming
Scientific Data Management (SDM)

76131 Karlsruhe

KIT - Die Forschungsuniversitaet in der Helmholtz-Gemeinschaft
--


Script question

2017-05-17 Thread rou...@univ.haifa.ac.il
Hi All

Need a little help with this script

select cast(node_name as varchar(20)) as "Nodename", cast(TCP_ADDRESS as
varchar(15)) as "IP", cast(CLIENT_OS_NAME as varchar(35)) as "OS Name",
cast(CLIENT_SYSTEM_ARCHITECTURE as varchar(4)) as "BIT",
cast(APPLICATION_VERSION as varchar(2)) as "Application Version",
cast(APPLICATION_RELEASE as varchar(2)) as "Application Release",
cast(APPLICATION_LEVEL as varchar(2)) as "Application Level",
cast(APPLICATION_SUBLEVEL as varchar(2)) as "Application Sublevel" from nodes
where (node_name like ('%%_DB') and node_name not like ('DRM%%')) order by 4

Got output:

Nodename      IP            OS Name                     BIT  App Version  App Release  App Level  App Sublevel
------------  ------------  --------------------------  ---  -----------  -----------  ---------  ------------
HPLDB         132.74.1.210  AIX:AIX                     PPC
EXCHSRVAN_DB  132.74.1.226  WIN:Windows Server 2008 R2  x64  7            1            4          0
EXCHSRVBN_DB  132.74.1.227  WIN:Windows Server 2008 R2  x64  7            1            4          0

Why does the HPLDB line appear, when I specified node_name like ('%%_DB')?

Ideas?

Best Regards

Robert



Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | 3498838
Computing and Information Systems Division website:
http://computing.haifa.ac.il


Re: question about fecapacity field in filespaces table

2017-03-17 Thread Erwann SIMON
Hi Gary,

FE stands for Front End. It's used for Front End Capacity licensing evaluation.

-- 
Best regards / Cordialement / مع تحياتي
Erwann SIMON

- Mail original -
De: "Gary Lee" <g...@bsu.edu>
À: ADSM-L@VM.MARIST.EDU
Envoyé: Vendredi 17 Mars 2017 15:56:24
Objet: [ADSM-L] question about fecapacity field in filespaces table

I am not sure what this field is supposed to signify.

In the following extract, I did a
Select * from filespaces where node_name='PRODBANDB2'

At the bottom of the extract, I calculated the value by multiplying capacity
by pct_util (correcting for the fact that it is a percentage).
The answer bears no resemblance to the fecapacity field.
So, what does the fecapacity field signify?
Extract follows.


   NODE_NAME: PRODBANDB1
FILESPACE_NAME: /u04
  FILESPACE_ID: 2
FILESPACE_TYPE: EXT3
   CAPACITY: 403167.88
   PCT_UTIL: 78.32
  BACKUP_START: 2017-03-16 23:15:48.00
  BACKUP_END: 2017-03-17 00:40:42.00
DELETE_OCCURRED: 2016-09-21 20:53:58.00
UNICODE_FILESPACE: NO
FILESPACE_HEXNAME: 2F753034
LAST_REPL_START:
 LAST_REPL_COMP:
 BK_REPLRULE_NAME: DEFAULT
BK_REPLRULE_STATE: ENABLED
AR_REPLRULE_NAME: DEFAULT
AR_REPLRULE_STATE: ENABLED
SP_REPLRULE_NAME: DEFAULT
SP_REPLRULE_STATE: ENABLED
COLLOCGROUP_NAME:
 LAST_BACKUP_COMPLETE:
LAST_ARCHIVE_COMPLETE:
  ATRISK_TYPE:
 ATRISK_INTERVAL:
  ENTITYNAME:
  ENTITYTYPE:
  PARENTNAME:
  ENTITYINFO:
  PROTECTSIZE:
  FECAPACITY: 25593199616
   MACADDR:
  DECOMM_STATE: NO
  DECOMM_DATE:


calculated total: 315761.083616


question about fecapacity field in filespaces table

2017-03-17 Thread Lee, Gary
I am not sure what this field is supposed to signify.

In the following extract, I did a
Select * from filespaces where node_name='PRODBANDB2'

At the bottom of the extract, I calculated the value by multiplying capacity
by pct_util (correcting for the fact that it is a percentage).
The answer bears no resemblance to the fecapacity field.
So, what does the fecapacity field signify?
Extract follows.


   NODE_NAME: PRODBANDB1
FILESPACE_NAME: /u04
  FILESPACE_ID: 2
FILESPACE_TYPE: EXT3
   CAPACITY: 403167.88
   PCT_UTIL: 78.32
  BACKUP_START: 2017-03-16 23:15:48.00
  BACKUP_END: 2017-03-17 00:40:42.00
DELETE_OCCURRED: 2016-09-21 20:53:58.00
UNICODE_FILESPACE: NO
FILESPACE_HEXNAME: 2F753034
LAST_REPL_START:
 LAST_REPL_COMP:
 BK_REPLRULE_NAME: DEFAULT
BK_REPLRULE_STATE: ENABLED
AR_REPLRULE_NAME: DEFAULT
AR_REPLRULE_STATE: ENABLED
SP_REPLRULE_NAME: DEFAULT
SP_REPLRULE_STATE: ENABLED
COLLOCGROUP_NAME:
 LAST_BACKUP_COMPLETE:
LAST_ARCHIVE_COMPLETE:
  ATRISK_TYPE:
 ATRISK_INTERVAL:
  ENTITYNAME:
  ENTITYTYPE:
  PARENTNAME:
  ENTITYINFO:
  PROTECTSIZE:
  FECAPACITY: 25593199616
   MACADDR:
  DECOMM_STATE: NO
  DECOMM_DATE:


calculated total: 315761.083616
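For what it's worth, the hand calculation above checks out, and the mismatch is expected: per Erwann's reply in this thread, FECAPACITY is a front-end (client-side) measure used for capacity licensing, not capacity times utilization. A quick sketch (the assumption that FECAPACITY is reported in bytes while CAPACITY is in MB is mine, inferred from the magnitudes):

```python
# CAPACITY is reported in MB and PCT_UTIL is a percentage,
# so space in use = capacity * pct_util / 100.
capacity_mb = 403167.88
pct_util = 78.32
occupied_mb = capacity_mb * pct_util / 100
print(occupied_mb)  # ~315761.08 MB -- matches the calculated total above

# FECAPACITY is a separate front-end measure; assuming it is in bytes,
# it is only ~24 GB and has no arithmetic relation to the line above.
fecapacity_bytes = 25593199616
print(fecapacity_bytes / 1024 ** 2)  # ~24407.6 MiB
```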


Re: Question about Aspera FASP

2017-01-09 Thread Chavdar Cholev
Hi Robert,
yes both TSM srvs on Linux

have a good day
Chavdar

On Mon, Jan 9, 2017 at 11:42 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi All
>
> Thanks to all for the answers…..
>
> So this feature works only on a Linux TSM server, so the source must be
> Linux. Does the target have to be as well? Meaning, does Aspera FASP have to
> be installed on both servers (source/target) for it to work?
>
> Thanks Again
>
> Robert
>
>
> Hi All
>
> A quick question did anybody know if the Module IBM Spectrum Protect with
> Aspera FASP ( IBM Spectrum Protect High Speed Data Transfer), is also
> available for TSM Server Windows 2012 64B .
>
> I only find a package for TSM Server Linux …..
>
> Best Regards
>
> Robert Ouzen
> Haifa University
>
>
>


Question about Aspera FASP

2017-01-09 Thread rou...@univ.haifa.ac.il
Hi All

Thanks to all for the answers…..

So this feature works only on a Linux TSM server, so the source must be Linux.
Does the target have to be as well? Meaning, does Aspera FASP have to be
installed on both servers (source/target) for it to work?

Thanks Again

Robert


Hi All

A quick question: does anybody know whether the IBM Spectrum Protect with
Aspera FASP module (IBM Spectrum Protect High Speed Data Transfer) is also
available for TSM Server on Windows 2012 64-bit?

I only find a package for TSM Server on Linux.

Best Regards

Robert Ouzen
Haifa University




Re: Question about Aspera FASP

2017-01-08 Thread Gergana V Markova
Aspera is supported on Linux only. 
Thanks.

Regards/Pozdravi, Gergana

Gergana V Markova   gmark...@us.ibm.com
Storage and SW Defined Systems CTO Office
linkedin.com/in/gergana



From:   Chavdar Cholev <chavdar.cho...@gmail.com>
To: ADSM-L@VM.MARIST.EDU
Date:   01/08/2017 02:06 AM
Subject:Re: [ADSM-L] Question about Aspera FASP
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hi,
as far as I know... Linux only...
Chavdar

On Sunday, 8 January 2017, rou...@univ.haifa.ac.il 
<rou...@univ.haifa.ac.il>
wrote:

> Hi All
>
> A quick question did anybody know if the Module IBM Spectrum Protect 
with
> Aspera FASP ( IBM Spectrum Protect High Speed Data Transfer), is also
> available for TSM Server Windows 2012 64B .
>
> I only find a package for TSM Server Linux …..
>
> Best Regards
>
> Robert Ouzen
> Haifa University
>
>
>






Re: Question about Aspera FASP

2017-01-08 Thread Chavdar Cholev
Hi,
as far as I know... Linux only...
Chavdar

On Sunday, 8 January 2017, rou...@univ.haifa.ac.il <rou...@univ.haifa.ac.il>
wrote:

> Hi All
>
> A quick question did anybody know if the Module IBM Spectrum Protect with
> Aspera FASP ( IBM Spectrum Protect High Speed Data Transfer), is also
> available for TSM Server Windows 2012 64B .
>
> I only find a package for TSM Server Linux …..
>
> Best Regards
>
> Robert Ouzen
> Haifa University
>
>
>


Question about Aspera FASP

2017-01-08 Thread rou...@univ.haifa.ac.il
Hi All

A quick question: does anybody know whether the IBM Spectrum Protect with
Aspera FASP module (IBM Spectrum Protect High Speed Data Transfer) is also
available for TSM Server on Windows 2012 64-bit?

I only find a package for TSM Server on Linux.

Best Regards

Robert Ouzen
Haifa University




Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
The problem was corrected, but I have no idea how.

I stopped the migration and replication kicked off. I let the replication run 
for an hour or so while I was at lunch. When I got back I stopped replication 
and restarted the migration. I have no idea how but it used the correct volumes 
this time. 

If this makes any sense by all means please explain it to me.

I appreciate all the help.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Ron 
Delaware
Sent: Wednesday, September 21, 2016 12:55 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Ricky,

could you please send the output of the following commands:
1.   Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way that the stgpool would migrate back to itself would be if 
there was a loop, meaning your disk pool points to the tape pool as the next 
stgpool, and your tape pool points to the disk pool as the next stgpool. If 
your tape pool was to hit the high migration mark, it would start a migration 
to the disk pool



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert IBM Corporation | System Lab Services 
IBM Certified Solutions Advisor - Spectrum Storage IBM Certified Spectrum Scale 
4.1 & Spectrum Protect 7.1.3 IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" <rpl...@healthplan.com>
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Nothing out of the normal. The below is kind of odd, but I'm not sure it
has anything to do with my problem.

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   C

Re: TSM Migration Question

2016-09-21 Thread Ron Delaware
Ricky,

could you please send the output of the following commands:
1.   Q MOUNT
2. q act begint=-12 se=Migration

Also, the only way that the stgpool would migrate back to itself would be 
if there was a loop, meaning your disk pool points to the tape pool as the 
next stgpool, and your tape pool points to the disk pool as the next 
stgpool. If your tape pool was to hit the high migration mark, it would 
start a migration to the disk pool



_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | System Lab Services
IBM Certified Solutions Advisor - Spectrum Storage
IBM Certified Spectrum Scale 4.1 & Spectrum Protect 7.1.3
IBM Certified Cloud Object Storage (Cleversafe) 
Open Group IT Certified - Masters
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   "Plair, Ricky" <rpl...@healthplan.com>
To: ADSM-L@VM.MARIST.EDU
Date:   09/21/2016 08:57 AM
Subject:Re: [ADSM-L] TSM Migration Question
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Nothing out of the normal, the below is kind of odd but I'm not sure it 
has anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to 
migrate to the tapepool and now it's doing the same thing. Migrating to 
itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has 
been
   deleted from storage pool DDSTGPOOL. (SESSION: 
427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Skylar Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
>   

Re: TSM Migration Question

2016-09-21 Thread Gee, Norman
Is it migrating, or reclaiming at 70% as defined?

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Plair, 
Ricky
Sent: Wednesday, September 21, 2016 8:50 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: TSM Migration Question

Nothing out of the normal, the below is kind of odd but I'm not sure it has 
anything to do with my problem. 

Now here is something, I have another disk pool that is supposed to migrate to 
the tapepool and now it's doing the same thing. Migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
Nothing out of the ordinary. The output below is kind of odd, but I'm not sure 
it has anything to do with my problem.

Now here is something: I have another disk pool that is supposed to migrate to 
the tapepool, and now it's doing the same thing, migrating to itself.

What the heck.


09/21/2016 10:58:21   ANR1341I Scratch volume /ddstgpool/A7F7.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:58:24   ANR1341I Scratch volume /ddstgpool2/A7FA.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)
09/21/2016 10:59:24   ANR1341I Scratch volume /ddstgpool/A7BC.BFS has been
   deleted from storage pool DDSTGPOOL. (SESSION: 427155)

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:35 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get 
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for Volume Reuse: 0

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
The only oddity I see is that DDSTGPOOL4500 has a NEXTSTGPOOL of TAPEPOOL.
Shouldn't cause any problems now since utilization is 0% but would get
triggered once you hit the HIGHMIG threshold.

Is there anything in the activity log for the errant migration processes?

On Wed, Sep 21, 2016 at 03:28:53PM +, Plair, Ricky wrote:
> OLD STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool f=d
>
> Storage Pool Name: DDSTGPOOL
> Storage Pool Type: Primary
> Device Class Name: DDFILE
>Estimated Capacity: 402,224 G
>Space Trigger Util: 69.4
>  Pct Util: 70.4
>  Pct Migr: 70.4
>   Pct Logical: 95.9
>  High Mig Pct: 100
>   Low Mig Pct: 95
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 26
> Reclamation Processes: 10
> Next Storage Pool: DDSTGPOOL4500
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 2,947
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 4,560
>  Reclamation in Progress?: Yes
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 09:05:51
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
>
> NEW STORAGE POOL
>
> tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d
>
> Storage Pool Name: DDSTGPOOL4500
> Storage Pool Type: Primary
> Device Class Name: DDFILE1
>Estimated Capacity: 437,159 G
>Space Trigger Util: 21.4
>  Pct Util: 6.7
>  Pct Migr: 6.7
>   Pct Logical: 100.0
>  High Mig Pct: 90
>   Low Mig Pct: 70
>   Migration Delay: 0
>Migration Continue: Yes
>   Migration Processes: 1
> Reclamation Processes: 1
> Next Storage Pool: TAPEPOOL
>  Reclaim Storage Pool:
>Maximum Size Threshold: No Limit
>Access: Read/Write
>   Description:
> Overflow Location:
> Cache Migrated Files?:
>Collocate?: No
> Reclamation Threshold: 70
> Offsite Reclamation Limit:
>   Maximum Scratch Volumes Allowed: 3,000
>Number of Scratch Volumes Used: 0
> Delay Period for Volume Reuse: 0 Day(s)
>Migration in Progress?: No
>  Amount Migrated (MB): 0.00
>  Elapsed Migration Time (seconds): 0
>  Reclamation in Progress?: No
>Last Update by (administrator): RPLAIR
> Last Update Date/Time: 09/21/2016 08:38:58
>  Storage Pool Data Format: Native
>  Copy Storage Pool(s):
>   Active Data Pool(s):
>   Continue Copy on Error?: Yes
>  CRC Data: No
>  Reclamation Type: Threshold
>   Overwrite Data when Deleted:
> Deduplicate Data?: No
>  Processes For Identifying Duplicates:
> Duplicate Data Not Stored:
>Auto-copy Mode: Client
> Contains Data Deduplicated by Client?: No
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On

Re: TSM Migration Question

2016-09-21 Thread Plair, Ricky
OLD STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool f=d

Storage Pool Name: DDSTGPOOL
Storage Pool Type: Primary
Device Class Name: DDFILE
   Estimated Capacity: 402,224 G
   Space Trigger Util: 69.4
 Pct Util: 70.4
 Pct Migr: 70.4
  Pct Logical: 95.9
 High Mig Pct: 100
  Low Mig Pct: 95
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 26
Reclamation Processes: 10
Next Storage Pool: DDSTGPOOL4500
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 2,947
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 4,560
 Reclamation in Progress?: Yes
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 09:05:51
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No



NEW STORAGE POOL

tsm: PROD-TSM01-VM>q stg ddstgpool4500 f=d

Storage Pool Name: DDSTGPOOL4500
Storage Pool Type: Primary
Device Class Name: DDFILE1
   Estimated Capacity: 437,159 G
   Space Trigger Util: 21.4
 Pct Util: 6.7
 Pct Migr: 6.7
  Pct Logical: 100.0
 High Mig Pct: 90
  Low Mig Pct: 70
  Migration Delay: 0
   Migration Continue: Yes
  Migration Processes: 1
Reclamation Processes: 1
Next Storage Pool: TAPEPOOL
 Reclaim Storage Pool:
   Maximum Size Threshold: No Limit
   Access: Read/Write
  Description:
Overflow Location:
Cache Migrated Files?:
   Collocate?: No
Reclamation Threshold: 70
Offsite Reclamation Limit:
  Maximum Scratch Volumes Allowed: 3,000
   Number of Scratch Volumes Used: 0
Delay Period for Volume Reuse: 0 Day(s)
   Migration in Progress?: No
 Amount Migrated (MB): 0.00
 Elapsed Migration Time (seconds): 0
 Reclamation in Progress?: No
   Last Update by (administrator): RPLAIR
Last Update Date/Time: 09/21/2016 08:38:58
 Storage Pool Data Format: Native
 Copy Storage Pool(s):
  Active Data Pool(s):
  Continue Copy on Error?: Yes
 CRC Data: No
 Reclamation Type: Threshold
  Overwrite Data when Deleted:
Deduplicate Data?: No
 Processes For Identifying Duplicates:
Duplicate Data Not Stored:
   Auto-copy Mode: Client
Contains Data Deduplicated by Client?: No


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Skylar 
Thompson
Sent: Wednesday, September 21, 2016 11:19 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] TSM Migration Question

Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct and migration is hi=0 lo=0 and using 25 migration 
> process, but I had to stop it.
>
> Now when I restart it the migration process it is migrating to the old 
> storage volumes instead of the new storage volumes. Basically it's just 
> migrating from one disk volume inside the ddstgpool to another disk volume in 
> the ddstgpool.
>
> It is not us

Re: TSM Migration Question

2016-09-21 Thread Skylar Thompson
Can you post the output of "Q STG F=D" for each of those pools?

On Wed, Sep 21, 2016 at 02:33:42PM +, Plair, Ricky wrote:
> Within TSM I am migrating an old storage pool on a DD4200 to a new storage 
> pool on a DD4500.
>
> First of all, it worked fine yesterday.
>
> The nextpool is correct and migration is hi=0 lo=0 and using 25 migration 
> process, but I had to stop it.
>
> Now when I restart it the migration process it is migrating to the old 
> storage volumes instead of the new storage volumes. Basically it's just 
> migrating from one disk volume inside the ddstgpool to another disk volume in 
> the ddstgpool.
>
> It is not using the next pool parameter,  has anyone seen this problem before?
>
> I appreciate the help.

--
-- Skylar Thompson (skyl...@u.washington.edu)
-- Genome Sciences Department, System Administrator
-- Foege Building S046, (206)-685-7354
-- University of Washington School of Medicine


TSM Migration Question

2016-09-21 Thread Plair, Ricky
Within TSM I am migrating an old storage pool on a DD4200 to a new storage pool 
on a DD4500.

First of all, it worked fine yesterday.

The nextpool is correct, and migration is hi=0 lo=0 using 25 migration 
processes, but I had to stop it.

Now when I restart the migration process, it migrates to the old storage 
volumes instead of the new storage volumes. Basically it's just migrating from 
one disk volume inside the ddstgpool to another disk volume in the ddstgpool.

It is not using the next pool parameter. Has anyone seen this problem before?

I appreciate the help.








Ricky M. Plair
Storage Engineer
HealthPlan Services
Office: 813 289 1000 Ext 2273
Mobile: 813 357 9673



_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
CONFIDENTIALITY NOTICE: This email message, including any attachments, is for 
the sole use of the intended recipient(s) and may contain confidential and 
privileged information and/or Protected Health Information (PHI) subject to 
protection under the law, including the Health Insurance Portability and 
Accountability Act of 1996, as amended (HIPAA). If you are not the intended 
recipient or the person responsible for delivering the email to the intended 
recipient, be advised that you have received this email in error and that any 
use, disclosure, distribution, forwarding, printing, or copying of this email 
is strictly prohibited. If you have received this email in error, please notify 
the sender immediately and destroy all copies of the original message.


Backup image question

2016-09-13 Thread Robert Ouzen
Hi to all

I want to make a backup image of a drive we no longer need to back up.

My question is: is there a size limit on the image created?

Windows drive, backup image P:

tsm: ADSM2>q occ vod4

Node Name  Type  Filespace  FSID  Storage    Number of  Physical       Logical
                 Name             Pool Name  Files      Space          Space
                                                        Occupied (MB)  Occupied (MB)
---------  ----  ---------  ----  ---------  ---------  -------------  -------------
VOD4       Bkup  \\vod4\p$  4     LTO5       14,185     1,775,444.5    1,774,923.7

TSM  client 7.1.2.2
Tsm Server  7.1.6.0

Best Regards

Robert



Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_________
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Postal code: 3498838
Computing and Information Systems Division website: 
http://computing.haifa.ac.il<http://computing.haifa.ac.il/>


Re: VM snapshot question

2016-08-23 Thread James Thorne
Hi Robert.

No, you won't see the snapshots for the restored VM.  They are not backed up or 
restored by TSM for VE.

James.

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Robert 
Ouzen
Sent: 21 June 2016 13:26
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] VM snapshot question

HI to all

A quick question before I investigate , using  TSM for VE V7.1.4 to back up our 
VMware environment .

I  wonder if I have a VM  with on it several snapshot ,  if after a VE backup 
and restoring the VM in alternate place.

What will be the status of this machine meaning If I will  see as  at the 
original machine those snapshots ???

Best Regards

Robert



Robert Ouzen
Head of Data Resilience and Availability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_________
University of Haifa | 199 Abba Khoushy Ave. | Mount Carmel, Haifa | Postal code: 3498838
Computing and Information Systems Division website: http://computing.haifa.ac.il<http://computing.haifa.ac.il/>


Re: Journaling question

2016-08-13 Thread Stefan Folkerts
>I don't believe you can use journal-based-backup for network connected
file systems.

Very true; the journaling agent only works on supported local filesystems
because it is not aware of writes to remote filesystems, so it can't add
them to the journal of files to back up.
I would advise running a nojournal backup once a week/month alongside
journaling, for the same reason Paul uses the rebase backup on the NAS.
Journaling cannot be trusted 100% of the time; I have seen things go
wrong with several versions of the B/A client, and it is difficult to
explain.


On Fri, Aug 12, 2016 at 9:43 PM, Paul Zarnowski  wrote:

> Michael,
>
> I don't believe you can use journal-based-backup for network connected
> file systems.  If you want to avoid incremental backups walking a NAS
> filesystem looking for changed files, you can look into NetApp (which TSM
> supports with snapdiff incrementals) and IBM Sonas.  Snapdiff uses a NetApp
> API which compares two snapshots, producing a list of changed files which
> TSM then backs up.  This eliminates the time it takes to "walk" the
> filesystem looking for changes, and this can dramatically speed up
> incremental backups.  We use this, but we also do a monthly walk (called a
> rebase) just to catch any edge cases that snapdiff misses.
>
> I believe Isilon also has a similar unsupported API, but as it was not
> supported we opted not to go with Isilon.
>
> ..Paul
>
>
> At 11:42 AM 8/12/2016, Michael P Hizny wrote:
> >All,
> >
> >Does anyone know a way to use journaling in TSM on a networked file
> system.
> >If you go through the wizard, you can select to journal the local file
> >systems, but there is no option to journal network connected file systems.
> >We can back them up by specifying them in the opt file as:
> >
> >DOMAIN "\\xxx.xxx.xxx.xxx\folder1"
> >DOMAIN "\\xxx.xxx.xxx.xxx\folder2"But can you set up and use journaling
> on them?
> >
> >Thanks,
> >Mike
> >
> >Michael Hizny
> >Binghamton University
> >mhi...@binghamton.edu
>
>
> --
> Paul ZarnowskiPh: 607-255-4757
> IT at Cornell / InfrastructureFx: 607-255-8521
> 719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu
>


Re: Journaling question

2016-08-12 Thread Paul Zarnowski
Michael,

I don't believe you can use journal-based-backup for network connected file 
systems.  If you want to avoid incremental backups walking a NAS filesystem 
looking for changed files, you can look into NetApp (which TSM supports with 
snapdiff incrementals) and IBM Sonas.  Snapdiff uses a NetApp API which 
compares two snapshots, producing a list of changed files which TSM then backs 
up.  This eliminates the time it takes to "walk" the filesystem looking for 
changes, and this can dramatically speed up incremental backups.  We use this, 
but we also do a monthly walk (called a rebase) just to catch any edge cases 
that snapdiff misses.
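The snapshot-comparison idea can be illustrated generically: given two snapshot listings mapping path to (mtime, size), the changed set is whatever was added, removed, or modified between them. A rough sketch of the principle only (not the NetApp snapdiff API; the paths and tuples are made up):

```python
def snapshot_diff(old, new):
    """Compare two snapshot listings {path: (mtime, size)} and return
    the paths a backup would need to touch: (added, removed, modified)."""
    added    = [p for p in new if p not in old]
    removed  = [p for p in old if p not in new]
    modified = [p for p in new if p in old and new[p] != old[p]]
    return added, removed, modified

old = {"/a.txt": (100, 5), "/b.txt": (100, 9)}
new = {"/a.txt": (200, 7), "/c.txt": (150, 3)}
print(snapshot_diff(old, new))  # (['/c.txt'], ['/b.txt'], ['/a.txt'])
```

This is why the approach avoids the full filesystem walk: only the listings are compared, never the unchanged files themselves.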

I believe Isilon also has a similar unsupported API, but as it was not 
supported we opted not to go with Isilon.

..Paul


At 11:42 AM 8/12/2016, Michael P Hizny wrote:
>All,
>
>Does anyone know a way to use journaling in TSM on a networked file system.
>If you go through the wizard, you can select to journal the local file
>systems, but there is no option to journal network connected file systems.
>We can back them up by specifying them in the opt file as:
>
>DOMAIN "\\xxx.xxx.xxx.xxx\folder1"
>DOMAIN "\\xxx.xxx.xxx.xxx\folder2"But can you set up and use journaling on 
>them?
>
>Thanks,
>Mike
>
>Michael Hizny
>Binghamton University
>mhi...@binghamton.edu


--
Paul ZarnowskiPh: 607-255-4757
IT at Cornell / InfrastructureFx: 607-255-8521
719 Rhodes Hall, Ithaca, NY 14853-3801Em: p...@cornell.edu


Journaling question

2016-08-12 Thread Michael P Hizny
All,

Does anyone know a way to use journaling in TSM on a networked file system?
If you go through the wizard, you can select to journal the local file
systems, but there is no option to journal network connected file systems.
We can back them up by specifying them in the opt file as:

DOMAIN "\\xxx.xxx.xxx.xxx\folder1"
DOMAIN "\\xxx.xxx.xxx.xxx\folder2"
But can you set up and use journaling on them?

Thanks,
Mike

Michael Hizny
Binghamton University
mhi...@binghamton.edu


Re: query auditocc question

2016-08-10 Thread Paul_Dudley
To all,

It appears that I have solved this problem. The issue was that the "expire 
inventory" command was not running to completion: it was being run with a 
duration parameter that did not give it enough time to expire the inventory of 
all nodes.

Thanks & Regards
Paul



-Original Message-
From: Paul_Dudley [mailto:pdud...@anl.com.au]
Sent: Thursday, 28 July 2016 10:11 AM
To: 'ADSM: Dist Stor Manager'
Subject: RE: query auditocc question

Thanks. I ran this command and it ran successfully and provided me with a list 
of totals.
However they are a lot higher than what is reported by my query auditocc 
command.
Does it total just the primary storage or all storage pools?

I am just interested in primary storage to keep track of storage used as we 
have the unified terabyte licensing for TSM.

Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Wednesday, 27 July 2016 11:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Perhaps the following query would give you better data:

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy group by 
node_name


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Tuesday, July 26, 2016 9:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] query auditocc question

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7 TB.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2 TB.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au








ANL DISCLAIMER

This e-mail and any file attached is confidential, and intended solely to the 
named addressees. Any unauthorised dissemination or use is strictly 
prohibited. If you received this e-mail in error, please immediately notify the 
sender by return e-mail from your system. Please do not copy, use or make 
reference to it for any purpose, or disclose its contents to any person.









Re: Move Container question

2016-08-07 Thread Ron Delaware
Robert,

This is a housekeeping task, something like running a disk defrag. It takes 
place so that the admin task of reclamation is not needed. All of this is 
performed by the code without configuration; the same goes for deduplication. 
Container pools are dedup-enabled out of the box, and as data comes in, the 
code decides whether it should be deduped or not. There is nothing to 
configure; it's all done in the background.
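The defrag analogy can be made concrete: an automatic move-container pass effectively copies only the still-live extents of a container into a new one, dropping the space left behind by expired or deleted data. A rough illustration of the idea (the extent records are invented, not the actual container format):

```python
def compact(container):
    """Return a new container holding only the live extents of the old one,
    mimicking what an automatic 'move container' accomplishes: the space
    occupied by deleted extents is reclaimed without a separate admin task."""
    return [extent for extent in container if not extent["deleted"]]

old = [
    {"id": 1, "deleted": False},
    {"id": 2, "deleted": True},   # space freed by expiration
    {"id": 3, "deleted": False},
]
new = compact(old)
print([e["id"] for e in new])  # [1, 3]
```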
_
Ronald C. Delaware
IBM IT Plus Certified Specialist - Expert
IBM Corporation | Tivoli Software
IBM Certified Solutions Advisor - Tivoli Storage
IBM Certified Deployment Professional
916-458-5726 (Office)
925-457-9221 (cell phone)
email: ron.delaw...@us.ibm.com
Storage Services Offerings








From:   Robert Ouzen <rou...@univ.haifa.ac.il>
To: ADSM-L@VM.MARIST.EDU
Date:   08/06/2016 07:07 PM
Subject:[ADSM-L] Move Container question
Sent by:"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>



Hi to all

Can anybody give me more information of the process for Storage of type 
DIRECTORY

ANR0986I Process 263 for Move Container (Automatic)   running in the 
BACKGROUND processed 73,945 items for a
  total of 10,705,469,440 bytes with a completion 
state of  SUCCESS at 03:23:06. (PROCESS: 263)

It’s run automatic , where is the configuration about it ?

There with a way to create a script to get report of this running. When I 
tried the regular  Q ACT begind=-3 s= ANR0986I

Got also Reclamation and Move Container and Identify Duplicates and ….

Thanks Tobert







Move Container question

2016-08-06 Thread Robert Ouzen
Hi to all

Can anybody give me more information of the process for Storage of type 
DIRECTORY

ANR0986I Process 263 for Move Container (Automatic)   running in the BACKGROUND 
processed 73,945 items for a
  total of 10,705,469,440 bytes with a completion state 
of  SUCCESS at 03:23:06. (PROCESS: 263)

It runs automatically; where is it configured?

Is there a way to create a script to get a report of just this process? When I 
tried the regular Q ACT begind=-3 s=ANR0986I,

I also got Reclamation and Move Container and Identify Duplicates and so on.

Thanks, Robert


Re: query auditocc question

2016-08-01 Thread Paul_Dudley
Thanks David - I ran the second command which provided me with a report. 
However the figures in that report match what I already receive from the query 
auditocc command.
This makes me think that there is something wrong with the data that it is 
reporting on. There are two SQL database servers for which the reported data 
appears to be obviously wrong.

For one of them, the reported primary storage pool data has increased over the 
past 4 weeks from 2.3 TB, to 4.5 TB, to 7.0 TB, and now this week it is 9.1 TB.
We have not increased the number of backups kept, and we have not added any new 
SQL databases.
When I start the DP for SQL client and add up all the active and inactive 
databases which I can restore from, the total amount of data is 2.3 TB.

On another SQL server, the reported primary storage pool data has increased 
over the past 4 weeks from 310 GB, to 610 GB, then 950 GB, and now it is 1300 GB.
When I start the DP for SQL client and add up all the active and inactive 
databases which I can restore from, the total amount of data is around 310 GB.

It is as if each week the amount of primary storage pool data is being added to 
rather than replaced. However this is only happening for SQL database servers.
For other non-database servers the amount of reported primary storage pool data 
has been stable.

Where is the information for this reported primary storage pool data kept? Is
it on our TSMADMIN server somewhere?

Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Thursday, 28 July 2016 10:57 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Paul,

You are right.  All of my data is in primary storage pools so I no longer worry 
about copy pools.  Back when I had copy pools, my storage pool names would 
indicate whether it was primary or copy pool.  Can you mask on pool name to 
distinguish between primary and copy pool?

If so,

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy where 
stgpool_name like '%NAME MASK%' group by node_name

Alternatively,

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy where 
stgpool_name in (select stgpool_name from stgpools where POOLTYPE='PRIMARY' ) 
group by node_name

David

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Wednesday, July 27, 2016 8:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Thanks. I ran this command and it ran successfully and provided me with a list 
of totals.
However they are a lot higher than what is reported by my query auditocc 
command.
Does it total just the primary storage or all storage pools?

I am just interested in primary storage to keep track of storage used as we 
have the unified terabyte licensing for TSM.

Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Wednesday, 27 July 2016 11:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Perhaps the following query would give you better data:

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy group by 
node_name


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Tuesday, July 26, 2016 9:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] query auditocc question

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au








ANL DISCLAIMER

This e-mail and any file attached is confidential, and intended solely to the
named addressees. Any unauthorised dissemination or use is strictly prohibited.
If you received this e-mail in error, please immediately notify the sender by
return e-mail from your system. Please do not copy, use or make reference to it
for any purpose, or disclose its contents to any person.

Re: query auditocc question

2016-07-28 Thread David Ehresman
Paul,

You are right.  All of my data is in primary storage pools so I no longer worry 
about copy pools.  Back when I had copy pools, my storage pool names would 
indicate whether it was primary or copy pool.  Can you mask on pool name to 
distinguish between primary and copy pool?

If so,

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy where 
stgpool_name like '%NAME MASK%' group by node_name

Alternatively,

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy where 
stgpool_name in (select stgpool_name from stgpools where POOLTYPE='PRIMARY' ) 
group by node_name

David
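To illustrate the grouping semantics of the two occupancy queries above with a toy table, here is a sketch using Python's built-in sqlite3 in place of the server's DB2. The column names mirror the occupancy table, but every node, pool, and value is invented:

```python
import sqlite3

# Toy OCCUPANCY table to illustrate SUM(...) ... GROUP BY node_name.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE occupancy (node_name TEXT, stgpool_name TEXT, reporting_mb REAL)"
)
con.executemany(
    "INSERT INTO occupancy VALUES (?, ?, ?)",
    [
        ("SQLSRV1", "PRIMARYPOOL", 2300000.0),  # primary pool data
        ("SQLSRV1", "COPYPOOL",    2300000.0),  # copy pool data (excluded below)
        ("FILESRV", "PRIMARYPOOL",  500000.0),
    ],
)

# Total occupancy per node, restricted to primary pools by a name mask.
rows = con.execute(
    """SELECT node_name, SUM(reporting_mb) AS total_occupancy
       FROM occupancy
       WHERE stgpool_name LIKE 'PRIMARY%'
       GROUP BY node_name
       ORDER BY node_name"""
).fetchall()
print(rows)  # [('FILESRV', 500000.0), ('SQLSRV1', 2300000.0)]
```

On a real server the statements run under dsmadmc instead; the mask variant only works if pool names encode primary vs. copy, while the subselect on stgpools with POOLTYPE='PRIMARY' does not depend on naming conventions.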

-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Wednesday, July 27, 2016 8:11 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Thanks. I ran this command and it ran successfully and provided me with a list 
of totals.
However they are a lot higher than what is reported by my query auditocc 
command.
Does it total just the primary storage or all storage pools?

I am just interested in primary storage to keep track of storage used as we 
have the unified terabyte licensing for TSM.

Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Wednesday, 27 July 2016 11:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Perhaps the following query would give you better data:

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy group by 
node_name


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Tuesday, July 26, 2016 9:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] query auditocc question

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au










Re: query auditocc question

2016-07-27 Thread Paul_Dudley
Thanks. I ran this command and it ran successfully and provided me with a list 
of totals.
However they are a lot higher than what is reported by my query auditocc 
command.
Does it total just the primary storage or all storage pools?

I am just interested in primary storage to keep track of storage used as we 
have the unified terabyte licensing for TSM.

Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of David 
Ehresman
Sent: Wednesday, 27 July 2016 11:13 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Perhaps the following query would give you better data:

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy group by 
node_name


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Tuesday, July 26, 2016 9:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] query auditocc question

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au










Re: query auditocc question

2016-07-27 Thread Paul_Dudley
I checked and our audit license process runs at night every second day. It ran 
about 10 hours prior to the command below being run.

Thanks & Regards
Paul



-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of Paul 
Zarnowski
Sent: Wednesday, 27 July 2016 2:33 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] query auditocc question

Paul,

The metrics displayed by Q AUDITOCC are updated by AUDIT LICENSE; not in real 
time.

..Paul
(sent from my iPhone)

On Jul 26, 2016, at 9:37 PM, Paul_Dudley 
<pdud...@anl.com.au<mailto:pdud...@anl.com.au>> wrote:

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au<mailto:pdud...@anl.com.au>










Re: query auditocc question

2016-07-27 Thread David Ehresman
Perhaps the following query would give you better data:

select node_name,sum(reporting_mb) as "TOTAL_OCCUPANCY" from occupancy group by 
node_name


-Original Message-
From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of 
Paul_Dudley
Sent: Tuesday, July 26, 2016 9:35 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] query auditocc question

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au










Re: query auditocc question

2016-07-26 Thread Paul Zarnowski
Paul,

The metrics displayed by Q AUDITOCC are updated by AUDIT LICENSE; not in real 
time.

..Paul
(sent from my iPhone)

On Jul 26, 2016, at 9:37 PM, Paul_Dudley wrote:

To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au










query auditocc question

2016-07-26 Thread Paul_Dudley
To keep track of our primary storage usage, on a weekly basis I run the 
following command:



query auditocc pooltype=primary



For one of the nodes - an SQL server - it is reporting that the backup storage 
used is 7Tb.

However knowing the size of the databases on that server and the number of 
copies of each backup we keep, I believe that to be incorrect.

I started the DP for SQL client on that server, clicked on Restore Data, and 
then selected all backups (both active and inactive).

I then added up the size of each backup (both active and inactive) and it added 
up to 2.2Tb.



How could there be such a huge difference between each total?





Thanks & Regards

Paul Dudley

IT Operations

ANL Container Line

pdud...@anl.com.au










Re: Question about restore db

2016-07-09 Thread Francisco Javier
As I remember, you can do it both ways ...

Regards
On 9 Jul 2016 21:09, "Robert Ouzen" <rou...@univ.haifa.ac.il> wrote:

> Hello All
>
> A quick question about restore db . I f I made the backup db with  TSM
> server  version 7.1.5.200.
>
> Can I do a restore db if I previously install TSM server 7.1.6 ?
>
> Or I need first to install TSM server version 7.1.5.200 , made the restore
> db and after it update version to 7.1.6 ?
>
> T.I.A Regards
>
> Robert
>


Question about restore db

2016-07-09 Thread Robert Ouzen
Hello All

A quick question about restore db. I made the database backup with TSM server 
version 7.1.5.200.

Can I do a restore db if I have already installed TSM server 7.1.6?

Or do I need to first install TSM server version 7.1.5.200, do the restore db, 
and then upgrade to version 7.1.6?

T.I.A Regards

Robert


VM snapshot question

2016-06-21 Thread Robert Ouzen
HI to all

A quick question before I investigate. We use TSM for VE V7.1.4 to back up our 
VMware environment.

If a VM has several snapshots on it, and after a VE backup I restore that VM to 
an alternate place, what will be the status of the restored machine? Will I see 
the same snapshots on it as on the original machine?

Best Regards

Robert


Robert Ouzen
Head of Data Availability and Survivability
University of Haifa
Office: Main Building, Room 5015
Phone: 04-8240345 (internal: 2345)
Email: rou...@univ.haifa.ac.il
_
University of Haifa | 199 Aba Khoushy Ave. | Mount Carmel, Haifa | 3498838
Computing and Information Systems Division: http://computing.haifa.ac.il


Re: Aw: Re: [ADSM-L] postschedulecmd question

2016-06-20 Thread Andrew Raibeck
Hi John,

The idea behind PRESCHEDULECMD is that you have some task that must
complete successfully prior to the scheduled operation, else there is no
point in running the scheduled operation. Given that Spectrum Protect has
no knowledge of what your tasks do, but that POSTSCHEDULECMD operations are
typically based on the actions of the PRESCHEDULECMD, we deemed it
generally "safer" to not potentially compound run-time problems by running
the POSTSCHEDULECMD action if the PRESCHEDULECMD action was unsuccessful.

With that behavior being well defined, you can customize your
PRESCHEDULECMD and POSTSCHEDULECMD actions to accommodate most situations.
That is, your PRESCHEDULECMD operation could be a script that performs the
desired actions, then exits with a return code of its choosing to determine
whether it is safe for the scheduled operation to proceed.

For example, suppose on a Windows system you want to stop some business
critical application prior to running a backup, to help ensure that the
application's data files are backed up in an integral state. In this case,
you could do something like this:

1. Use PRESCHEDULECMD "net stop my_application" to stop the application
before the backup begins.

2. Perform the scheduled backup

3. Use POSTSCHEDULECMD "net start my_application" to restart the
application after the backup completes.

If the PRESCHEDULECMD is unsuccessful (return code is nonzero), then the
backup does not run. And since the application was not stopped, there is
little point in trying to restart it.

Now suppose that stopping the application is desired, but not required, to
run the backup. In this case,  you could wrap the "net stop" command in a
script that always exits with return code 0; then use PRESCHEDULECMD
"script_name".

Similarly, you could write a POSTSCHEDULECMD script that restarts the
application if it is not running, and exits with a return code of your
choosing (e.g., some nonzero return code if you want the scheduler to
report a warning-level return code to alert you to a potential problem).
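The "wrap the command in a script that always exits 0" pattern described above can be sketched as follows. This is a hedged illustration in Python rather than a batch file: the application name and stop command are placeholders, and the deliberately failing default stands in for a stop attempt that did not succeed:

```python
import subprocess
import sys

def stop_app(command=("false",)):
    """Try to stop the application; return True on success.

    The default command deliberately fails, simulating an app that
    was not running. Replace with e.g. ("net", "stop", "myapp").
    """
    return subprocess.call(command) == 0

def preschedule_wrapper():
    # Attempt the stop, but swallow any failure so the scheduled
    # backup still runs (the script's return code is 0 regardless).
    if not stop_app():
        print("WARNING: could not stop application; backup proceeds anyway",
              file=sys.stderr)
    return 0  # exit code reported to the scheduler

if __name__ == "__main__":
    sys.exit(preschedule_wrapper())
```

Configured as the PRESCHEDULECMD, a wrapper like this always hands the scheduler return code 0, so the backup proceeds; the warning on stderr is the only trace of the failed stop.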

Best regards,

Andy



Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com

IBM Tivoli Storage Manager links:
Product support:
https://www.ibm.com/support/entry/portal/product/tivoli/tivoli_storage_manager

Online documentation:
http://www.ibm.com/support/knowledgecenter/SSGSG7/landing/welcome_ssgsg7.html

Product Wiki:
https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%20Storage%20Manager

"ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2016-06-20
04:53:29:

> From: John Keyes <ra...@gmx.net>
> To: ADSM-L@VM.MARIST.EDU
> Date: 2016-06-20 05:17
> Subject: Aw: Re: [ADSM-L] postschedulecmd question
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> I see, thank you.
> It's a bummer really, since other backup solutions like bareos do
> provide a more elaborate control over post schedule events.
>
> Best regards,
> John
>
> > Gesendet: Donnerstag, 16. Juni 2016 um 15:46 Uhr
> > Von: "Pagnotta, Pamela (CONTR)" <pamela.pagno...@hq.doe.gov>
> > An: ADSM-L@VM.MARIST.EDU
> > Betreff: Re: [ADSM-L] postschedulecmd question
> >
> > Hi John,
> >
> > We use these commands sparingly so I am just reading the
> documentation. The way I read the description, it seems as if the
> postschedulecmd will run regardless of the return code of the actual
> scheduled backup.
> >
> > Notes:
> > If the postschedulecmd command does not complete with return code
> 0, the client will report that the scheduled event completed with
> return code 8 (unless the scheduled operation encounters a more
> severe error yielding a higher return code). If you do not want the
> postschedulecmd command to be governed by this rule, you can create
> a script or batch file that invokes the command and exits with
> return code 0. Then configure postschedulecmd to invoke the script
> or batch file. The return code for the postnschedulecmd command is
> not tracked, and does not influence the return code of the scheduled
event.
> >
> >
> > Also looking at the preschedulecmd notes, it says
> >
> > Note:
> > Successful completion of the preschedulecmd command is considered
> to be a prerequisite to running the scheduled operation. If the
> preschedulecmd command does not complete with return code 0, the
> scheduled operation and any postschedulecmd and postnschedulecmd
> commands will not run. The client reports that the scheduled event
> failed, and the return code is 12. If you do not want the
> preschedulecmd command to be governed by this rule, you can create a
> > script or batch file that invokes the command and exits with return code
> > 0. Then configure preschedulecmd to invoke the script or batch file. The
> > return code for the prenschedulecmd command is not tracked, and does not
> > influence the return code of the scheduled event.

Aw: Re: [ADSM-L] postschedulecmd question

2016-06-20 Thread John Keyes
I see, thank you.
It's a bummer really, since other backup solutions like bareos do provide a 
more elaborate control over post schedule events.

Best regards,
John

> Gesendet: Donnerstag, 16. Juni 2016 um 15:46 Uhr
> Von: "Pagnotta, Pamela (CONTR)" <pamela.pagno...@hq.doe.gov>
> An: ADSM-L@VM.MARIST.EDU
> Betreff: Re: [ADSM-L] postschedulecmd question
>
> Hi John,
>
> We use these commands sparingly so I am just reading the documentation. The 
> way I read the description, it seems as if the postschedulecmd will run 
> regardless of the return code of the actual scheduled backup.
>
> Notes:
> If the postschedulecmd command does not complete with return code 0, the 
> client will report that the scheduled event completed with return code 8 
> (unless the scheduled operation encounters a more severe error yielding a 
> higher return code). If you do not want the postschedulecmd command to be 
> governed by this rule, you can create a script or batch file that invokes the 
> command and exits with return code 0. Then configure postschedulecmd to 
> invoke the script or batch file. The return code for the postnschedulecmd 
> command is not tracked, and does not influence the return code of the 
> scheduled event.
>
>
> Also looking at the preschedulecmd notes, it says
>
> Note:
> Successful completion of the preschedulecmd command is considered to be a 
> prerequisite to running the scheduled operation. If the preschedulecmd 
> command does not complete with return code 0, the scheduled operation and any 
> postschedulecmd and postnschedulecmd commands will not run. The client 
> reports that the scheduled event failed, and the return code is 12. If you do 
> not want the preschedulecmd command to be governed by this rule, you can 
> create a script or batch file that invokes the command and exits with return 
> code 0. Then configure preschedulecmd to invoke the script or batch file. The 
> return code for the prenschedulecmd command is not tracked, and does not 
> influence the return code of the scheduled event.
>
>
> So it appears that the postschedulecmd would run regardless of the RC of the 
> scheduled backup, but not if there is a non-zero RC during the preschedulecmd.
>
>
> Pam
>
> Pam Pagnotta
> Sr. System Engineer
> Criterion Systems, Inc./ActioNet
> Contractor to US. Department of Energy
> Office of the CIO/IM-622
> Office: 301-903-5508
> Mobile: 301-335-8177
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of John 
> Keyes
> Sent: Thursday, June 16, 2016 9:28 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] postschedulecmd question
>
> Hi guys, I did not find an answer on the internet to the following question:
> Will a command defined as postschedulecmd be executed if the schedule itself 
> fails?
>
> Best regards,
> John
>

