Re: Good Bye

2023-05-27 Thread Stefan Folkerts
I've had a bit of a sneak peek into what is coming from IBM and I must say
it's the first time in a while that I've been excited about what is coming from
IBM in this domain. It's a major change and I think it was needed, but the
architecture means that it's very capable, at least in the datacenter, from
day 1.

Will this list cover everything within the IBM Storage Defender offering?

Regards,
Stefan

On Thu, May 25, 2023 at 1:17 AM Del Hoobler  wrote:

> I’d love to talk to all of you about what we are doing with IBM Storage
> Defender. It’s combining the power, scale and maturity of TSM with some of
> the new capabilities that Cohesity brings to the table.
>
> But, we are taking it to a higher level. Backups are critical, but
> bringing in storage snapshots, SIEM/SOAR, alerting, data classification,
> recovery orchestration, and Ransomware scanning takes it to the next level.
>
> I started with WDSF in 1991. This is the biggest thing I have seen in data
> protection since I started with IBM.
>
> Happy to meet with you and talk about what we are doing.
>
> Del
> —
> Del Hoobler
> Principal Storage SWAT Specialist, IBM Technology, Americas
>
>
> > On May 24, 2023, at 7:02 PM, Zoltan Forray  wrote:
> >
> > Marcel,
> >
> > I too thought it was very interesting that IBM would collaborate with
> > Cohesity (
> >
> https://www.cohesity.com/press/ibm-and-cohesity-announce-new-data-security-and-resiliency-collaboration-advancing-enterprises-ability-to-fight-the-impacts-of-breaches-and-cyberattacks/
> ).
> > I guess the saying "*If you can't beat them, join them*" might apply.
> >
> >> On Wed, May 24, 2023 at 5:41 PM Marcel Anthonijsz <
> >> marcel.anthoni...@gmail.com> wrote:
> >>
> >> Zoltan,
> >>
> >> "The Times They are A-Changin", I remember that saying well.. And now
> with
> >> the upcoming announcement for IBM Storage Defender I feel they
> definitely
> >> are!
> >> Thanks for your questions, your answers and the support you gave to this
> >> community.
> >>
> >> Live long and prosper!
> >>
> >> Op wo 24 mei 2023 om 22:01 schreef Zoltan Forray :
> >>
> >>> Folks,
> >>>
> >>> It has been a fun, wild, sometimes entertaining, exasperating,
> >> frustrating,
> >>> exhausting (insert your adjective/adverb) learning adventure working
> with
> >>> DSF / ADSM / TSM / Spectrum Protect / whatever-the-new-name-is, for the
> >>> past 30+ years!
> >>>
> >>> But, as they say, "*The Times They Are A-Changin*". After more than a
> >> year
> >>> of analysis, comparisons, discussions, demonstrations, meetings, etc,
> it
> >>> was decided to transition away from IBM/ISP to a new "Enterprise Data
> >>> Protection" solution.
> >>>
> >>> The transition is well underway and we expect to (must) be completely
> off
> >>> ISP by the end of 2023.
> >>>
> >>> Thank you for all the support I/we have received from this
> >>> mailing-list/forums contributors.  There are some truly stellar, gifted
> >>> individuals here!
> >>>
> >>> SIGNING OFF
> >>>
> >>> --
> >>> *Zoltan Forray*
> >>> Enterprise Data Protection Administrator
> >>> VMware Systems Administrator
> >>> Enterprise Compute & Storage Platforms Team
> >>> VCU Infrastructure Services
> >>> zfor...@vcu.edu - 804-828-4807
> >>>
> >>
> >>
> >> --
> >> Kind Regards, Groetje, 73,
> >>
> >> Marcel Anthonijsz
> >>
> >
> >
> > --
> > *Zoltan Forray*
> > Enterprise Data Protection Administrator
> > VMware Systems Administrator
> > Enterprise Compute & Storage Platforms Team
> > VCU Infrastructure Services
> > zfor...@vcu.edu - 804-828-4807
>


Re: Finding the right server when using replication (Windows)

2021-08-11 Thread Stefan Folkerts
Hi Eric, I am by no means a programmer but I have done some work with
PowerShell. I would loop through the servers and use something like
Start-Transcript (or plain output redirection) to capture the output of the
dsmc command to a temp file on disk after each attempt, and then use
Select-String (
https://www.thomasmaurer.ch/2011/03/powershell-search-for-string-or-grep-for-powershell/)
to search for the string you want to check on and start the correct
function based on the outcome. I'm 100% sure it can be done without these
writes and reads, but for this kind of thing I figure it won't make that
much of a difference.
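
Something like this, as a rough, untested sketch (the server list, option file
names and dsmc path are assumptions you would replace with your own; it exits
with 0 only for the server that reports it is configured for failover):

  $servers = @('TSMSRV1', 'TSMSRV2')   # hypothetical stanza/opt file names, one per SP server
  $dsmc = 'C:\Program Files\Tivoli\TSM\baclient\dsmc.exe'

  foreach ($srv in $servers) {
      # capture everything dsmc prints for this attempt in a temp file on disk
      $log = Join-Path $env:TEMP "dsmc_$srv.txt"
      & $dsmc query session -optfile="dsm_$srv.opt" *> $log

      if (Select-String -Path $log -Pattern 'Configured for failover to server' -Quiet) {
          Write-Output "Primary (replication source) server found: $srv"
          exit 0   # rc=0 only for the server that can be used for backups
      }
  }
  exit 8   # only replicas (or nothing) answered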

Regards,
  Stefan

On Tue, Jul 27, 2021 at 4:41 PM Loon, Eric van (ITOP NS) - KLM <
eric-van.l...@klm.com> wrote:

> Hi everybody,
>
> We recently switched to a new server design based on node replication. My
> Windows users always used scripts which went through all TSM/SP servers to
> find the right one by just trying to connect to them, one by one. As soon
> as the dsmc q sess command returned return code 0, it found the right
> server. This always worked, until we switched to node replication. Now all
> of a sudden, there are two servers which return rc=0, but only one of them
> can be used for backups, the primary replication server.
> I have been struggling to find a way how to determine which server is the
> primary and which is the replica server. I noted that if you connect to the
> correct server, the last line of the query session output is "Configured
> for failover to server ". If you connect to the replica, the
> last line is "Not configured for failover".
> Now I need to find a way (preferably through PowerShell) to read this last
> line and generate a rc=0 if it contains "Configured for failover to server"
> and a higher return code if it contains "Not configured for failover". Does
> anybody with Windows scripting knowledge have any idea how to do this?
> Thanks for any help in advance!
>
> Kind regards,
> Eric van Loon
> Air France/KLM Storage & Backup
>


Replication target server down causing client schedules to miss on replication source?

2020-11-30 Thread Stefan Folkerts
Hi all,

Quick question.
Has anybody ever witnessed client schedules going into a missed state
because the replication target is down?
I saw this happen last weekend on a very recent SP 8.1 version
running on Linux; no other changes were made, and missed schedules are very
rare in this fairly static environment.

Nothing in the server or client logs other than the source noticing the
target is no longer available and then finding it again.
No processes were running on the replication source or target when the target
went down gracefully to be moved.

Since I have nothing in the logs I can't really open a case with IBM, but I
figured I'd ask here since it was a strange thing to witness.
It could just be a coincidence.

Regards,
   Stefan


Re: AW: [ADSM-L] Move Container (Automatic) filling up the directories

2020-08-06 Thread Stefan Folkerts
That's great Del, thank you for letting me know.

Stefan

On Mon, 3 Aug 2020 at 15:54, Del Hoobler  wrote:

> APAR IT33478 was created for adding this capability. This is targeted for
> the next release.
>
> This APAR will introduce a built-in quota for the defrag engine to limit
> the # of containers processed in one day
>
>
> Del
>
> 
>
>
> "ADSM: Dist Stor Manager"  wrote on 08/01/2020
> 08:32:13 AM:
>
> > From: Stefan Folkerts 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 08/01/2020 08:32 AM
> > Subject: [EXTERNAL] Re: AW: [ADSM-L] Move Container (Automatic)
> > filling up the directories
> > Sent by: "ADSM: Dist Stor Manager" 
> >
> > This automatic defragmentation needs an additional parameter to limit
> the
> > number of containers it will defrag in a day or a max percentage of
> actual
> > space used that when reached will stop the defrag so it can't fill up
> the
> > disk to 100% anymore.
> > I see too many cases of Spectrum Protect filling the filesystems to 100%
> > with these processes.
> > It's Saturday and I am working on one right now.
> >
> > Please look into this IBM.
> >
> >
> >
> > On Fri, Dec 28, 2018 at 1:56 PM Erwann SIMON 
> wrote:
> >
> > > Hello,
> > >
> > > I've just discovered that the defaults have been changed in 8.1.6
> (I've a
> > > test system just upgraded to 8.1.6.100) :
> > >
> > >
> > > Protect : TSMSRV1>q opt defragCntrTrigger
> > >
> > > Server Option  Option Setting
> > > - ---
> > > defragCntrTrigger  90
> > >
> > > Protect : TSMSRV1>q opt defragFsTrigger
> > >
> > > Server Option  Option Setting
> > > - ---
> > >   defragFsTrigger  95
> > >
> > > I was running with 90 and 90 since my last post.
> > >
> > > --
> > > Best regards / Cordialement / مع تحياتي
> > > Erwann SIMON
> > >
> > > - Mail original -
> > > De: "Erwann SIMON" 
> > > À: "ADSM: Dist Stor Manager" 
> > > Envoyé: Jeudi 6 Décembre 2018 17:32:34
> > > Objet: Re: [ADSM-L] AW: [ADSM-L] Move Container (Automatic) filling up
> the
> > > directories
> > >
> > > Hello Michael,
> > >
> > > Sorry for the delay and thanks a lot for the advices. I'm trying to
> find
> > > the right numbers for those two parameters.
> > >
> > > --
> > > Best regards / Cordialement / مع تحياتي
> > > Erwann SIMON
> > >
> > > - Mail original -
> > > De: "Michael Prix" 
> > > À: ADSM-L@VM.MARIST.EDU
> > > Envoyé: Jeudi 15 Novembre 2018 16:05:59
> > > Objet: Re: [ADSM-L] AW: [ADSM-L] Move Container (Automatic) filling up
> the
> > > directories
> > >
> > > Hello Erwann,
> > >
> > >   it depends on what you want to achieve. These options were
> implemented to
> > > reduce the fragmentation of the containers, like the good old
> reclamation
> > > of
> > > sequential volumes.
> > >   Adding to this, it came to the attention of IBM that a container,
> which
> > > got
> > > moved, should be subject to the reusedelay of the container storage
> pool.
> > > This
> > > wasn't taken care of until 7.1.8 / 8.1.x.
> > >   Speaking of this and putting the pieces together, you may get
> massive
> > > amount
> > > of "reclamation", the cleaned up containers are subject to reusedelay
> and
> > > suddenly you are in urgent need for additional and unplanned storage
> space.
> > >
> > >   Speaking of such, setting DEFRAGCNTRTRIGGER and DEFRAGFSTRIGGER to
> 99
> > > disables the automatic move container, then you can start lowering
> both
> > > parameters down to a level which suits you best.
> > >
> > > --
> > > Michael Prix
> > >
> > > On Thu, 2018-11-15 at 07:06 +0100, Erwann SIMON wrote:
> > > > Hi Michael and Uwe
> > > >
> > > > Thanks *a lot* for your replies. This is excatly what I was looking
> for.
> > > >
> > > > Did you adjust those op

Re: AW: [ADSM-L] Move Container (Automatic) filling up the directories

2020-08-01 Thread Stefan Folkerts
This automatic defragmentation needs an additional parameter to limit the
number of containers it will defrag per day, or a maximum percentage of used
space at which the defrag stops, so it can no longer fill the disk up to
100%.
I see too many cases of Spectrum Protect filling the filesystems to 100%
with these processes.
It's Saturday and I am working on one right now.

Please look into this IBM.
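
For reference, checking and (if you must) disabling it looks roughly like this.
This is a hedged sketch: DEFRAGCNTRTRIGGER and DEFRAGFSTRIGGER are undocumented
server options, and the dsmserv.opt placement and restart requirement are my
assumptions, so verify with IBM support first.

  From dsmadmc:
    q opt defragCntrTrigger
    q opt defragFsTrigger

  In dsmserv.opt (per the advice quoted below, 99/99 disables the automatic
  move container; assumed to need a server restart to take effect):
    DEFRAGCNTRTRIGGER 99
    DEFRAGFSTRIGGER   99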



On Fri, Dec 28, 2018 at 1:56 PM Erwann SIMON  wrote:

> Hello,
>
> I've just discovered that the defaults have been changed in 8.1.6 (I've a
> test system just upgraded to 8.1.6.100) :
>
>
> Protect : TSMSRV1>q opt defragCntrTrigger
>
> Server Option  Option Setting
> - ---
> defragCntrTrigger  90
>
> Protect : TSMSRV1>q opt defragFsTrigger
>
> Server Option  Option Setting
> - ---
>   defragFsTrigger  95
>
> I was running with 90 and 90 since my last post.
>
> --
> Best regards / Cordialement / مع تحياتي
> Erwann SIMON
>
> - Mail original -
> De: "Erwann SIMON" 
> À: "ADSM: Dist Stor Manager" 
> Envoyé: Jeudi 6 Décembre 2018 17:32:34
> Objet: Re: [ADSM-L] AW: [ADSM-L] Move Container (Automatic) filling up the
> directories
>
> Hello Michael,
>
> Sorry for the delay and thanks a lot for the advices. I'm trying to find
> the right numbers for those two parameters.
>
> --
> Best regards / Cordialement / مع تحياتي
> Erwann SIMON
>
> - Mail original -
> De: "Michael Prix" 
> À: ADSM-L@VM.MARIST.EDU
> Envoyé: Jeudi 15 Novembre 2018 16:05:59
> Objet: Re: [ADSM-L] AW: [ADSM-L] Move Container (Automatic) filling up the
> directories
>
> Hello Erwann,
>
>   it depends on what you want to achieve. These options were implemented to
> reduce the fragmentation of the containers, like the good old reclamation
> of
> sequential volumes.
>   Adding to this, it came to the attention of IBM that a container, which
> got
> moved, should be subject to the reusedelay of the container storage pool.
> This
> wasn't taken care of until 7.1.8 / 8.1.x.
>   Speaking of this and putting the pieces together, you may get massive
> amount
> of "reclamation", the cleaned up containers are subject to reusedelay and
> suddenly you are in urgent need for additional and unplanned storage space.
>
>   Speaking of such, setting DEFRAGCNTRTRIGGER and DEFRAGFSTRIGGER to 99
> disables the automatic move container, then you can start lowering both
> parameters down to a level which suits you best.
>
> --
> Michael Prix
>
> On Thu, 2018-11-15 at 07:06 +0100, Erwann SIMON wrote:
> > Hi Michael and Uwe
> >
> > Thanks *a lot* for your replies. This is excatly what I was looking for.
> >
> > Did you adjust those options ? How ?
> >
> > I've was unable to find those server options in the Admin Reference Guide
> > and not visible with a standard Q OPT or with a generic Q OPT *DEFR*. I
> was
> > required to use Q OPT DEFRAGCNTRTRIGGER and Q OPT DEFRAGFSTRIGGER. Both
> > options are set to 50% (default).
> >
> >
> > I've found some words in a 8.1.5 presentation :
> >
> > 8.1.5+
> >
> > Defrag uses a new path in MOVE CONTAINER process to read in the chunks
> of a
> > given container and rewrite them via the standard ingest path,
> > redistributing them in available container space
> >
> > Move Container with defrag allows the user to achieve the goals of
> previous
> > workarounds much more easily
> > Container defrag can be invoked manually, with “MOVE CONTAINER <container name> DEFRAG=YES”
> >
> > Auto-defrag engine periodically checks for container fragmentation
> > If containers surpass DEFRAGCNTRTRIGGER  & DEFRAGFSTRIGGER thresholds :
> > Automatically runs defrags
> > If % of available filesystem space or the % of container space which is
> > usable/available falls below these thresholds : Auto-defrag will
> > periodically start defrag processes for fragmented containers
> >
> > @Uwe
> > For one of the servers (the smallest one, Est. Cap of 12,5TB, Pct. Util
> of
> > 45%), upgrade has been made at the end of august. The conversion had been
> > made monthes ago, when the server was still in 7.1.
> > For another one (the biggest, Est. Cap of 235TB, Pct. Util of 65%),
> upgrade
> > has been made at the end of august too, then I started conversion and it
> has
> > just finished.
> >
> >
> > --
> > Best regards / Cordialement / مع تحياتي
> > Erwann SIMON
> >
> > - Mail original -
> > De: "Michael Prix" 
> > À: ADSM-L@VM.MARIST.EDU
> > Envoyé: Mercredi 14 Novembre 2018 19:25:10
> > Objet: Re: [ADSM-L] AW: [ADSM-L] Move Container (Automatic) filling up
> the
> > directories
> >
> > Hello,
> >
> >   and, to tell the rest, as both server options are not documented until
> > now,
> > I quote my documentation from a PMR:
> >
> > DEFRAGFSTRIGGER   0-99   

Re: Ideal storage solution for SP and SP+ ???

2020-06-28 Thread Stefan Folkerts
The number of TBs is important of course, because the system must be able
to provide that amount of storage, but it's also about performance.
We use IBM V5030E systems, and under maximum load with 48 nearline drives
these systems sit at about 30% load, so they are overkill for 48-drive
systems; the nearline drives aren't fast enough to max out the CPUs on the
storage controllers.
But we also have a customer running a NetApp storage solution, not an ONTAP
system but a simple entry-level SAN solution with high-density storage that
has >100 nearline drives in it. That solution does in fact get very close
to being CPU bound under maximum Spectrum Protect load. So I would stick
with the number of nearline drives the blueprint suggests for the
performance you see in the blueprints, and ask the storage provider what
storage controller you need to be sure the drives don't overwhelm the
controllers, taking some growth into account.
You can compare random read I/O (check the block sizes) between
manufacturers to see how they stack up; it's never 100% accurate, but it
will give you a better indication than sequential figures, imho.
With most high-end storage systems using nearline drives you will
basically always have the drives as the bottleneck, because a 7.2K drive just
doesn't perform that well, and most high-end storage solutions are
built for flash and can do insane amounts of IOPS given the right
storage.
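
If you want to measure rather than rely on spec sheets, something like fio can
generate a repeatable random-read load against a test LUN. A rough sketch only;
the 256k block size, queue depth, job count and file size are assumptions, so
align them with the workload you actually care about:

  fio --name=randread-compare --filename=/mnt/testlun/fio.dat --size=50g \
      --rw=randread --bs=256k --direct=1 --ioengine=libaio --iodepth=16 \
      --numjobs=4 --time_based --runtime=300 --group_reporting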

I think Spectrum Protect has become very I/O efficient since they
introduced the containerpool, especially when comparing it to the old
filepool using dedup.



On Thu, Jun 4, 2020 at 11:32 PM Rick Adamson 
wrote:

> What storage solution is at the top of your list for Spectrum Protect and
> Spectrum Protect Plus?
>
> I am a former a Data Domain abuser who (regretfully) moved to EMC Isilon
> and now looking for the best possible replacement.
> Usable storage requirement of somewhere in the neighborhood of 500+ tb.
>
> Operation protects roughly 2,100 clients. About 75% VMware VMs and SQL
> with a scattering of AIX hosting DB2 & Oracle.
>
> Currently primarily protected using Spectrum Protect servers using BA,
> TDP, and SP4VE but looking to migrate all possible operations to Spectrum
> Protect Plus.
>
> At this time the plan would be to retain only the most recent (active)
> data at our production facility on high performing storage to provide quick
> recovery then using replicate inactive data to our DR facility to be
> offloaded to Spectrum Protect servers and/or cloud.
>
> All feedback and advice welcome. Thanks !
>
>


Re: What is the process to wipe LTO

2019-12-21 Thread Stefan Folkerts
I've created my own in the past using a Linux system, bash script and the
IBM tape tools.
I would create a loop to mount all slots to a drive and write until full,
then eject back to the slot and go to the next element number in the
library and do the same.
It would take a long time to do this and I'm sure there are better ways of
doing it but the script was simple, only took an hour or so to write but
this was 10 years ago, maybe the cli tools have changed, I do very little
with tape these days.


On Mon, Dec 16, 2019 at 9:30 PM Skylar Thompson  wrote:

> TSM does not, and my understanding is that most regulatory environments
> require some kind of physical destruction anyways as simply overwriting the
> tape (even multiple times) is not sufficient to guarantee that the data are
> unreadable.
>
> Note also that TSM can manage hardware encryption with LTO drives (the
> mechanism varies depending on library and generation of LTO) which might be
> sufficient, though you have to take care that your database backups are
> handled separately since it will be unencrypted to allow the encryption key
> to be read in a disaster.
>
> On Fri, Dec 13, 2019 at 05:31:41PM -0500, yoda woya wrote:
> > Does TSM offer a utility to wipe LTO tapes? I would like to
> > overwrite whatever data is there. Thanks in advance for your assistance.
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354
> -- University of Washington School of Medicine
>


Re: Converting Bareos to Spectrum Protect

2019-10-07 Thread Stefan Folkerts
You can't convert; you have to retrieve to disk and re-archive to Spectrum
Protect.

I've done this for other systems, and have also moved archives from Spectrum
Protect to other systems.
Scripting is what I used: inventory all the archives via a script, then
create a script that retrieves them into a logical directory structure and
re-archives that, with dynamically generated names, in a loop.
If the source doesn't support scripting it can be tedious; I've done that
before and it can take months, with Excel sheets to keep track of everything.
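
For the re-archive side, the loop can be as simple as something like this. A
hedged sketch, assuming one restored Bareos job per directory; the restage path,
management class name and description format are made up, and the retrieve side
depends entirely on what Bareos lets you script:

  # one directory per restored Bareos archive job, e.g. D:\restage\<dept>_<year>
  $jobs = Get-ChildItem -Path 'D:\restage' -Directory

  foreach ($job in $jobs) {
      # archive the restored tree with a description you can search on later
      & 'C:\Program Files\Tivoli\TSM\baclient\dsmc.exe' archive "$($job.FullName)\*" `
          -subdir=yes -archmc=LONGTERM -description="Bareos $($job.Name)"
  }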







On Sat, Oct 5, 2019 at 1:24 AM Deschner, Roger Douglas 
wrote:

> I have been asked to move a departmental system into our central Spectrum
> Protect backup system. The departmental system is running an open-source
> backup product called Bareos, which is a fork of Backula. Anybody done
> anything like this?
>
> I assume there is no practical way to simply import the Bareos database
> and backup tapes into Spectrum Protect. Nice if we could, but I doubt it
> would be possible.
>
> The biggest pitfall I see is that it is not just doing backups, but also
> archives. The backups are easy - we start backups to Spectrum Protect, and
> then keep Bareos going for restores only for the duration of the promised
> restore window, and then just stop running Bareos. The archives will be
> harder. Any ideas for moving archived data between backup systems? How have
> you dealt with file name-space collisions in a temporary restore strategy
> of archived data?
>
> They are, of course, in a hurry.
>
> Roger Deschner  University of Illinois at Chicago rog...@uic.edu
> I have not lost my mind -- it is backed up on tape somewhere.


Re: ISP server as a VM

2019-09-06 Thread Stefan Folkerts
I asked IBM at the SP Symposium in Germany two years ago what I should
expect performance-wise when running my SP server as a VM.
They gave me an indication of -20% versus the blueprints, based on blueprint
hardware plus vSphere 6.5 or higher.
I've been running a few SP servers in our cloud environments as VMs, and
for me it's more like -5 to -10% that I see, comparing blueprints to VMs on
blueprint hardware.
I've not done any special tuning, just the normal stuff, running on vSphere
6.5 with the storage attached via FC.
The SP server uses VMDKs; no RDMs are needed to get good performance. We use
two SCSI controllers per SP server, one for the OS plus database and active
log and one for the data disks.
I will admit that our systems are small and we don't come close to running
full medium workloads, but so far it runs just fine.
I like the fact that we now have HA based on vSphere.

On Thu, Sep 5, 2019 at 5:10 PM Rick Adamson 
wrote:

> Zoltan,
> As I understand it IBM does support virtual ISP servers but I recall
> seeing a document that stated they will not support their performance or
> scalability, so for a small application it is probably fine.
>
> My management team requested that I move our operation virtual (VMware)
> roughly 2 years ago and while it was an enormous struggle they perform
> adequately now.
> I am now of the opinion for larger installations it is just not worth the
> trouble.
> We now have three pairs of ISP backups servers, each consists of a source
> production server and a replication target server at our DR facility.
> They maintain a variety of client backup including; ISP 4  VE, Windows,
> AIX, Linux, DB2, Oracle, and SQL, as well as many physical servers
> performing traditional file system backups.
> A total of roughly 2,000 clients.
>
> I strongly recommend following the IBM blueprints with a few
> modifications, mostly on VMware.
> All of mine are currently running Windows 2012 R2 and now looking at
> possibly moving to 2016, they all use directory container pools, server
> side deduplication, and node replication.
>
> There were many times that I thought of going back to physical servers but
> am kind of hard-headed and was determined to make these things work if it
> was at all  possible.
>
> For me:
> Clinging to the thought process of not wanting to put too many eggs in one
> basket I followed the blueprints for a medium sized server, at the time not
> knowing how they would scale.
>
> It is my experience that these things are of the utmost importance if you
> want to minimize your headaches and have the servers run at an optimal
> level:
> - As we hear over and over again; DO NOT skimp on the performance of disks
> for the ISP server database and logs. Mine are all on EMC Powermax. Each
> server has 8 individual LUNS for the ISP database volumes and a dedicated
> LUN for each log (ACTIVE, ARCHIVE, ARCHFAILOVER).
>
> - Dedicate ESX/ESXi hosts to the ISP Servers and pin them to the host and
> disable VMotion. It was my experience that if VMware moved an ISP Server
> it would either crash or minimally all in progress sessions and processes
> would fail.
>
> - By design the VMware host will attempt to throttle the ISP server's
> access to all datastores, this is done to assure that one VM does not
> deprive other VMs from access to resources, kind of load balancing. Since
> your datastores are dedicated to a single VM (your ISP server) disable all
> VMware storage IO control. This will increase system performance
> exponentially. I have heard that IBM now recommends using RAW volumes which
> may mitigate this requirement.
>
> - Add additional SCSI controllers (up to 4) and change the default driver
> from LSI Logic driver to VMware Paravirtual for all disks except the
> operating system.
>
>
> In the end the servers perform acceptably at a relatively lower cost  but
> are still no match for physical servers.
>
> -Rick Adamson
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager  On Behalf Of Zoltan
> Forray
> Sent: Wednesday, September 4, 2019 8:05 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] ISP server as a VM
>
> * This email originated outside of the organization. Use caution when
> opening attachments or clicking links. *
>
> --
> Hi Eric,
>
>
>
> No inbound deduplication on the clients and right now for the existing 6
>
> nodes there is a total of 2TB of occupancy and we do not expect much if any
>
> growth.
>
>
>
> ---
>
> Zoltan Forray
>
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
>
> VMware Administrator
>
> Xymon Monitor Administrator
>
> Virginia Commonwealth University
>
> UCC/Office of Technology Services
>
>
> https://urldefense.proofpoint.com/v2/url?u=http-3A__www.ucc.vcu.edu=DwIBaQ=AzgFQeXLLKhxSQaoFCm29A=eqh5PzQPIsPArLoI_uV1mKvhIpcNP1MsClDPSJjFfxw=nTyF5koaHwvBqdd41KeQB3Db_rRDfX840uwwtltakA0=qErCVxpl9HVMO8Gjj_93IdyW1B7oSVf6776TEvz-vbY=
>
> zfor...@vcu.edu - 

Re: TSM server performance continuing

2019-09-03 Thread Stefan Folkerts
Very strange issue indeed, Eric. I have never heard of a specific client
causing slowdowns; I would understand if it were a massive number of
Oracle sessions starting, but if it's really platform related and not session
count related, it would have to be something that IBM needs to look into, I guess.
May I ask, is there a specific reason you are running fairly old server
code in the environment?


On Tue, Aug 27, 2019 at 1:01 PM Loon, Eric van (ITOP NS) - KLM <
eric-van.l...@klm.com> wrote:

> Hi Stefan,
>
> There is no noticable impact on the TSM performance when we use the
> tsmdiskperf tool on the database volumes.
> We recently discovered the issue seems to be related to the TDP for Oracle
> clients. When none are running the server response is OK, even during a
> reasonable load. But as soon as the TDP for Oracle clients start kicking
> in, the server becomes slow. We brought it to the attention of the
> developers, but no response yet...
> Thanks for your help!
>
> Kind regards,
> Eric van Loon
> Air France/KLM Storage & Backup
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: maandag 26 augustus 2019 10:39
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: TSM server performance continuing
>
> Eric,
>
> What happens when you benchmark the DB volumes using the tool provided
> with the blueprints on an idle system, does the system also slow down with
> commands such as q stgpool or does it stay fast when the benchmark is
> running on all volumes?
> Also, what kind of a result does the benchmark give you and what do you
> use for your database storage?
>
> Regards,
>Stefan
>
>
>
> On Thu, Aug 22, 2019 at 3:36 PM PAC Brion Arnaud <
> arnaud.br...@panalpina.com>
> wrote:
>
> > Hi Eric,
> >
> > These seem to be default values when installing Red Hat Enterprise
> > Linux Server 7.3.
> > I checked several other machines, all are having the same value.
> >
> > And following page
> > https://stackoverflow.com/questions/55428812/how-the-values-of-kernel-
> > parameters-are-defined
> > seems to confirm my statement.
> >
> > Cheers.
> >
> > Arnaud
> >
> >
> >
> > **
> > **
> > Backup and Recovery Systems Administrator Panalpina Management Ltd.,
> > Basle, Switzerland, CIT Department Viadukstrasse 42, P.O. Box 4002
> > Basel/CH
> > Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> > Direct: +41 (61) 226 19 78
> > e-mail: arnaud.br...@panalpina.com
> >
> > **
> > **
> > www.panalpina.com
> >
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Loon, Eric van (ITOP NS) - KLM
> > Sent: Thursday, August 22, 2019 2:31 PM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: TSM server performance continuing
> >
> > Hi Arnoud,
> >
> > I do not understand why your max seg size (kbytes) = 18014398509465599
> > and max total shared memory (kbytes) = 18014397435740096 are so huge.
> > If you have 256 GB RAM, like I do, I would expect max seg size
> > (kbytes) =
> > 268435456 and max total shared memory (kbytes) = 268435456.
> > Thanks for your help!
> >
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage & Backup
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of PAC Brion Arnaud
> > Sent: donderdag 22 augustus 2019 09:02
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: TSM server performance continuing
> >
> > Hi Eric,
> >
> > Running 4 TSM servers at 8.1.6.1 (PowerLinux servers with 256 GB RAM),
> > and making use of directory-container storage pools only.
> > No server performa

Re: TSM server performance continuing

2019-08-26 Thread Stefan Folkerts
Eric,

What happens when you benchmark the DB volumes on an idle system using the
tool provided with the blueprints? Does the system also slow down with
commands such as q stgpool, or does it stay fast while the benchmark is
running on all volumes?
Also, what kind of result does the benchmark give you, and what do you use
for your database storage?

Regards,
   Stefan



On Thu, Aug 22, 2019 at 3:36 PM PAC Brion Arnaud 
wrote:

> Hi Eric,
>
> These seem to be default values when installing Red Hat Enterprise Linux
> Server 7.3.
> I checked several other machines, all are having the same value.
>
> And following page
> https://stackoverflow.com/questions/55428812/how-the-values-of-kernel-parameters-are-defined
> seems to confirm my statement.
>
> Cheers.
>
> Arnaud
>
>
>
> 
> Backup and Recovery Systems Administrator
> Panalpina Management Ltd., Basle, Switzerland,
> CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> Direct: +41 (61) 226 19 78
> e-mail: arnaud.br...@panalpina.com
>
> 
> www.panalpina.com
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Loon, Eric van (ITOP NS) - KLM
> Sent: Thursday, August 22, 2019 2:31 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: TSM server performance continuing
>
> Hi Arnoud,
>
> I do not understand why your max seg size (kbytes) = 18014398509465599 and
> max total shared memory (kbytes) = 18014397435740096 are so huge. If you
> have 256 GB RAM, like I do, I would expect max seg size (kbytes) =
> 268435456 and max total shared memory (kbytes) = 268435456.
> Thanks for your help!
>
> Kind regards,
> Eric van Loon
> Air France/KLM Storage & Backup
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> PAC Brion Arnaud
> Sent: donderdag 22 augustus 2019 09:02
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: TSM server performance continuing
>
> Hi Eric,
>
> Running 4 TSM servers at 8.1.6.1 (PowerLinux servers with 256 GB RAM), and
> making use of directory-container storage pools only.
> No server performance or responsiveness  issues here anymore, since we
> changed our storage to make use of IBM Isilon.
>
> Here my values :
>
> vmstat 1
> procs ---memory-- ---swap-- -io -system--
> --cpu-
>  r  b   swpd   free   buff  cache   si   sobibo   in   cs us sy id
> wa st
> 13  0 1420032 1152064 1429248 24384806400  7500  376600
> 24  3 61 11  0
> 13  1 1420032 1046016 1428288 24394348800 61425680 22427 42342
> 50  4 39  7  0
> 12  0 1420032 1144640 1426688 24386195200 542840 0 19109 33340
> 49  2 43  6  0
> 12  4 1420032 883904 1427136 24413382400 577108 0 19777 34954
> 49  2 44  6  0
> 12  1 1420032 936576 1425344 24407276800 613152   372 21110 37559
> 49  2 43  6  0
> 11  5 1420032 1028096 1425600 24397734400 549024 0 24599 47296
> 48  2 42  8  0
> 12  0 1420032 1028032 1424832 24398342400 56618016 21693 39976
> 49  2 43  6  0
>
>
> ipcs -l
>
> -- Messages Limits 
> max queues system wide = 250880
> max size of message (bytes) = 65536
> default max size of queue (bytes) = 65536
>
> -- Shared Memory Limits 
> max number of segments = 62720
> max seg size (kbytes) = 18014398509465599 max total shared memory (kbytes)
> = 18014397435740096 min seg size (bytes) = 1
>
> -- Semaphore Limits 
> max number of arrays = 62720
> max semaphores per array = 250
> max semaphores system wide = 256000
> max ops per semop call = 32
> semaphore max value = 32767
>
>
> Specific tuning done : disabling CPU virtualization (led to sever hangs
> when active)
>
> ppc64_cpu --smt=off
>
>
> Cheers.
>
> Arnaud
>
>
> 
> Backup and Recovery Systems Administrator Panalpina Management Ltd.,
> Basle, Switzerland, CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> Direct: +41 (61) 226 19 78
> e-mail: arnaud.br...@panalpina.com
> This electronic 

Re: TDP for ERP (S4 Hana) questions about longer-term retention

2019-07-25 Thread Stefan Folkerts
I think One Protect only works for B/A clients and VMs, not for TDPs, but
I could be wrong or that information might be outdated (I think this was
the case for 8.1.7).
So I'm thinking this might require dumps to disk, archived and then removed
via the B/A client, for the monthlies and yearlies.
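
If it does come down to dumps plus the B/A client, one archive command per
retention tier would look something like this. A sketch only; the dump paths
and management class names are made up, and the retention itself would live in
the RETVER values of the archive copy groups behind those classes:

  dsmc archive "/hana/dumps/monthly/*" -subdir=yes -archmc=S4_MONTHLY_1YR -description="S4 monthly full 2019-07"
  dsmc archive "/hana/dumps/yearly/*"  -subdir=yes -archmc=S4_YEARLY_7YR  -description="S4 yearly full 2019"

Adding -deletefiles would also clean up the dump area once the archive has
completed.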


On Mon, Jul 22, 2019 at 6:20 PM Matthew McGeary 
wrote:

> Folks,
>
> We are currently in the process of building out a S4 Hana environment as
> an ERP replacement and I have a requirement from business to retain backups
> of the S4 databases for the following periods:
>
> Daily incrementals - 30 days
> Weekly fulls - 90 days
> Monthly fulls - 1 year
> Yearly fulls - 7 years
>
> As the TDP for Hana client sends data as archive only and appears to only
> have the ability to tweak either:
>
> - retention of archive objects on the SP server
> - versions retained on the config file local to the SAP S4 Database
>
> What is my best strategy for delivering on these requirements?  Is the new
> retention function (One Protect) my best bet for this requirement?
>
> Thanks
>
> __
> Matthew McGeary
> Service Delivery Manager/Solutions Architect, Compute & Storage
> Information Technology
> T: (306) 933-8921
> www.nutrien.com
>
>


Re: Spectrum Protect and The mystery of data reduction differences on the replication target

2019-07-09 Thread Stefan Folkerts
Well, I don't think that's correct, Rick. :-)
The data is deduplicated and compressed at the source and only the new
chunks are replicated to the target daily, but the data reduction
information on the target storage pool is based on the managed data figures,
just like at the source, and those figures are the same.
That's why we see percentages: they are the percentages based on the
managed data within that container pool.
If it worked the way you describe, I would see no deduplication or compression
on the target, because that machine holds only replicated data.
There is no data on the target machine that's not on the source machine.

On Tue, Jul 9, 2019 at 5:27 PM Rick Adamson 
wrote:

> Stefan,
> I understand it as Karel.
> The replicate (or protect stg) process data is deduplicated at the source.
> It would be inefficient to consume network bandwidth sending duplicate
> extents.
>
> -Rick Adamson
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager  On Behalf Of Stefan
> Folkerts
> Sent: Monday, July 8, 2019 3:23 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Spectrum Protect and The mystery of data reduction
> differences on the replication target
>
> * This email originated outside of the organization. Use caution when
> opening attachments or clicking links. *
>
> --
> I don't think that's how it works Karel, the values are based on managed
> data and are normally around the same on the source and on the target.
>
> On Mon, Jul 8, 2019 at 9:16 AM Karel Bos  wrote:
>
> > As the target server only receives already dedupped data it makes
> > sense their will be no or barely anything to be dedupped.
> >
> > Regards,
> >
> > Karel
> >
> > On Mon, 8 Jul 2019 at 08:16, Stefan Folkerts
> > 
> > wrote:
> >
> > > Hi all,
> > >
> > > I'm seeing something very strange at a customer site that has two
> > Spectrum
> > > Protect servers.
> > > One receives backups and is the replication source.
> > > The other is the replication target for that single source and
> > > doesn't receive any backup data.
> > > All data goes into a single containerpool on each server.
> > >
> > > Check this out:
> > >
> > > Containerpool on source:
> > >
> > >  Deduplication Savings: 35,173 G (52.50%)
> > >Compression Savings: 16,083 G (50.54%)
> > >  Total Space Saved: 51,255 G (76.51%)
> > >
> > > Containerpool on target:
> > >
> > >  Deduplication Savings: 3,962 G (6.19%)
> > >Compression Savings: 34,193 G (56.97%)
> > >  Total Space Saved: 38,155 G (59.63%)
> > >
> > > Deduplication is all but completely failing on the target.
> > > Everything is replicated (replicate node *) All filespaces and
> > > occupancy stats are equal on both sides.
> > > Has anybody seen this before.
> > > Running 8.1.7
> > >
> > > Regards,
> > > Stefan
> > >
> >
>
>


Re: Spectrum Protect and The mystery of data reduction differences on the replication target

2019-07-08 Thread Stefan Folkerts
I don't think that's how it works Karel, the values are based on managed
data and are normally around the same on the source and on the target.

On Mon, Jul 8, 2019 at 9:16 AM Karel Bos  wrote:

> As the target server only receives already dedupped data it makes sense
> their will be no or barely anything to be dedupped.
>
> Regards,
>
> Karel
>
> On Mon, 8 Jul 2019 at 08:16, Stefan Folkerts 
> wrote:
>
> > Hi all,
> >
> > I'm seeing something very strange at a customer site that has two
> Spectrum
> > Protect servers.
> > One receives backups and is the replication source.
> > The other is the replication target for that single source and doesn't
> > receive any backup data.
> > All data goes into a single containerpool on each server.
> >
> > Check this out:
> >
> > Containerpool on source:
> >
> >  Deduplication Savings: 35,173 G (52.50%)
> >Compression Savings: 16,083 G (50.54%)
> >  Total Space Saved: 51,255 G (76.51%)
> >
> > Containerpool on target:
> >
> >  Deduplication Savings: 3,962 G (6.19%)
> >Compression Savings: 34,193 G (56.97%)
> >  Total Space Saved: 38,155 G (59.63%)
> >
> > Deduplication is all but completely failing on the target.
> > Everything is replicated (replicate node *)
> > All filespaces and occupancy stats are equal on both sides.
> > Has anybody seen this before.
> > Running 8.1.7
> >
> > Regards,
> > Stefan
> >
>


Spectrum Protect and The mystery of data reduction differences on the replication target

2019-07-08 Thread Stefan Folkerts
Hi all,

I'm seeing something very strange at a customer site that has two Spectrum
Protect servers.
One receives backups and is the replication source.
The other is the replication target for that single source and doesn't
receive any backup data.
All data goes into a single containerpool on each server.

Check this out:

Containerpool on source:

 Deduplication Savings: 35,173 G (52.50%)
   Compression Savings: 16,083 G (50.54%)
 Total Space Saved: 51,255 G (76.51%)

Containerpool on target:

 Deduplication Savings: 3,962 G (6.19%)
   Compression Savings: 34,193 G (56.97%)
 Total Space Saved: 38,155 G (59.63%)

Deduplication is all but completely failing on the target.
Everything is replicated (replicate node *).
All filespaces and occupancy stats are equal on both sides.
Has anybody seen this before?
We are running 8.1.7.

Regards,
Stefan


Re: Living in the future, wanting to go back (date issue on Spectrum Protect server)

2019-06-21 Thread Stefan Folkerts
Thanks all, we will look into it.

On Thu, Jun 20, 2019 at 4:03 PM Michael Prix  wrote:

> Hello Stefan,
>
>   one more thought:
> I assume that the behaviour is because expiration ran on the instance
> while in
> the future. Thus, at least some CTL-files got expired and when switching
> the
> date back, there are some missing data.
> VMVERIFYIFLATEST YES in combination with VMVERIFYACTION FORCEFULL in the
> dsm.opt of the Datamover might solve your problem yet it will start a full
> backup in this described case. IC33026 addresses the same type of error
> for an
> SQL backup and shares some background to this type of error, yet you might
> have luck in this case with a fullbackup.
>
> --
> Michael Prix
>
> On Wed, 2019-06-19 at 11:25 +0200, Stefan Folkerts wrote:
> > Thanks Michael, we are opening a pmr @IBM to see what they will say about
> > it.
> >
> > On Tue, Jun 18, 2019 at 5:01 PM Michael Prix  wrote:
> >
> > > Hello Stefan,
> > >
> > >   as far as I know the only possibility is to restore the latest good
> > > backup.
> > > Until some versions ago it was not even possible to start the instance
> > > sucessfully until the time of the last shutdown was reached. I tried
> these
> > > tests with the 6.x and 7.x versions once or twice and always ended
> with a
> > > total unusable installation afterwards. ACCEPT DATE isn't what it has
> been
> > > on
> > > 5.x.
> > >
> > > --
> > > Michael Prix
> > >
> > > On Tue, 2019-06-18 at 08:51 +0200, Stefan Folkerts wrote:
> > > > Hi all,
> > > >
> > > > I've got a new one.
> > > > A customer of ours has set the date of the SP server to December
> 2019.
> > > > As you may know, it's currently not December. :-)
> > > > They accepted the date change in SP with an accept date.
> > > > Now the issue is when they move the date back to what it actually is
> > > > they
> > > > get this error:
> > > >
> > > > ERROR ANS4174E Full VM backup of VMware Virtual Machine ''
> > > > failed
> > > > with RC=45 mode=Incremental Forever - Full, target node name=' NODE
> > > > NAME>', data mover node name=''
> > > > ERROR ANS0328E (RC45) The specified objects failed the merge test.
> > > >
> > > > I fully understand why they get this error and it was fixed by
> > > going...back
> > > > to the future (sorry, had to do it).
> > > >
> > > > So i'm thinking, how do we fix this?
> > > > I was thinking about a script that would change the date each day
> but I
> > > > don't expect that this is the best solution to this issue, renaming
> all
> > > > filespaces in VE and then doing new full backups is another
> possibility.
> > > >
> > > > Any other thoughts?
> > > >
> > > > Regards,
> > > > Stefan
>


Re: Living in the future, wanting to go back (date issue on Spectrum Protect server)

2019-06-19 Thread Stefan Folkerts
Thanks Michael, we are opening a pmr @IBM to see what they will say about
it.

On Tue, Jun 18, 2019 at 5:01 PM Michael Prix  wrote:

> Hello Stefan,
>
>   as far as I know the only possibility is to restore the latest good
> backup.
> Until some versions ago it was not even possible to start the instance
> sucessfully until the time of the last shutdown was reached. I tried these
> tests with the 6.x and 7.x versions once or twice and always ended with a
> total unusable installation afterwards. ACCEPT DATE isn't what it has been
> on
> 5.x.
>
> --
> Michael Prix
>
> On Tue, 2019-06-18 at 08:51 +0200, Stefan Folkerts wrote:
> > Hi all,
> >
> > I've got a new one.
> > A customer of ours has set the date of the SP server to December 2019.
> > As you may know, it's currently not December. :-)
> > They accepted the date change in SP with an accept date.
> > Now the issue is when they move the date back to what it actually is they
> > get this error:
> >
> > ERROR ANS4174E Full VM backup of VMware Virtual Machine '' failed
> > with RC=45 mode=Incremental Forever - Full, target node name=' > NAME>', data mover node name=''
> > ERROR ANS0328E (RC45) The specified objects failed the merge test.
> >
> > I fully understand why they get this error and it was fixed by
> going...back
> > to the future (sorry, had to do it).
> >
> > So i'm thinking, how do we fix this?
> > I was thinking about a script that would change the date each day but I
> > don't expect that this is the best solution to this issue, renaming all
> > filespaces in VE and then doing new full backups is another possibility.
> >
> > Any other thoughts?
> >
> > Regards,
> > Stefan
>


Living in the future, wanting to go back (date issue on Spectrum Protect server)

2019-06-18 Thread Stefan Folkerts
Hi all,

I've got a new one.
A customer of ours has set the date of the SP server to December 2019.
As you may know, it's currently not December. :-)
They accepted the date change in SP with an ACCEPT DATE.
Now the issue is that when they move the date back to what it actually is,
they get this error:

ERROR ANS4174E Full VM backup of VMware Virtual Machine '<VM name>' failed
with RC=45 mode=Incremental Forever - Full, target node name='<target node name>', data mover node name='<data mover node name>'
ERROR ANS0328E (RC45) The specified objects failed the merge test.

I fully understand why they get this error, and it was fixed by going... back
to the future (sorry, had to do it).

So I'm wondering: how do we fix this?
I was thinking about a script that changes the date each day, but I
don't expect that to be the best solution to this issue; renaming all
filespaces in VE and then taking new full backups is another possibility.

Any other thoughts?

Regards,
Stefan


Trying to disable all the ANR3692W messages, getting weird database error

2019-06-03 Thread Stefan Folkerts
Hi all,

I'm wondering if this is a weird database error, or if my syntax is
incorrect and the resulting syntax error isn't handled correctly.

I have a customer that is seeing absurd amounts of these warnings in the
activity log and, of course, the daily reporting:

06/03/2019 13:20:47  ANR3692W A client backup anomaly was detected for
node NODENAME, session number 657715. The average number of backed up
bytes is 242325924, the actual number of backed up bytes was 76021771,
the average data deduplication is 0 percent, and the actual data
deduplication was 0 percent. (SESSION: 657715)

Now we know this and don't really care.
So what I did was run *disable events ALL ANR3692* to suppress these
messages.

That command gives me:
13:59:24   SERVERNAME: disable events ALL ANR3692
ANR1844I DISABLE EVENTS command processed.

But it doesn't work and I think I know why, at the same time I get this
error in the activity log:

03/06/2019 13:58:31 ANR0103E admevent.c(5260): Error 1114 updating row in
table "Server.Eventvectors".

Running SP version 8.1.7.0 on Linux.

Regards,
   Stefan


Re: SV: [ADSM-L] Restore new fation ISP v8.1.x

2019-05-27 Thread Stefan Folkerts
This is fantastic. Do you plan on maintaining these when new versions
are released?


On Fri, May 24, 2019 at 8:47 AM Leif Torstensen  wrote:

>
> Hi
>
> We have done a lot of work making WindowsPE boot ISOs for Hyper-V and VMware
> bare-metal recovery (probably also usable on physical servers) and have tried
> to automate the procedure by asking for node/password, SP server and IP
> address during boot.
>
> They are placed on this site.
>
> https://download.backuphosting.dk/Clients/restore%20iso/
>
> Best Regards
>
> Leif Torstensen
>
>
> Backup Engineer | Denmark
> leif.torsten...@sentia.dk| +45 63 14 40 19 | sentia.com/dk Munkerisvej 1,
> 5230 Odense M, Denmark
>
>
>
>
> -Oprindelig meddelelse-
> Fra: ADSM: Dist Stor Manager  På vegne af .Maurizio
> TERUZZI
> Sendt: Thursday, 23 May 2019 15.51
> Til: ADSM-L@VM.MARIST.EDU
> Emne: [ADSM-L] Restore new fation ISP v8.1.x
>
> Hello Guys/Girls,
>
> I discovered something "strange" during a full restore on production.
>
> On Windows Server 2016 (and Windows 10), since Spectrum Protect (TSM) version
> 8.1.x the SystemState restore is no longer possible (deprecated) on the
> online machine.
>
> Instead, everyone should build a customized WinPE to do the boot and
> bare-metal restore.
>
> I spent some hours playing with that, building images, but I still have no
> solution to recover the initial server.
>
> By chance I was able to move the server function to a temporary server.
>
>
> Does anyone have a standard build to boot up, or some experience with this?
>
>
> What I officially found is:
>
> https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Tivoli%20Storage%20Manager/page/Creating%20Bootable%20WinPE%20Media%20for%20Recovery%20of%20Microsoft%20Windows%20Server%202016%20and%20Microsoft%20Windows%2010
>
> but it is not as easy as described.
>
>
> Grazie
>
> Sincerely
> Maurizio
>


Re: Question on Replication: unsuccessful replication due to sessions terminated by admin

2019-05-13 Thread Stefan Folkerts
Did you test with something like iperf under a long and heavy load? A bad NIC
or driver might cause this, so it might still be the network.
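
Something along these lines, left running for a while in both directions
(iperf3 syntax; the stream count and runtime are arbitrary):

  iperf3 -s                            # on the replication target
  iperf3 -c <target> -P 8 -t 3600      # on the source: 8 parallel streams, one hour
  iperf3 -c <target> -P 8 -t 3600 -R   # same again, reversed direction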

On Mon, May 13, 2019 at 4:15 PM Bjørn Nachtwey 
wrote:

> Hi all,
>
> we planned to switch from COPYPOOL to Replication for having a second
> copy of the data, therefore we bought a new server that should become
> the primary TSM/ISP server and then make the old one holding the
> replicates.
>
> what we did:
>
> we started by exporting the nodes, which worked well. But as the
> "incremental" exports even took some time, we set up a replication from
> old server "A" to the new one "B". For all nodes already exported we set
> up the replication vice versa: TSM "B" replicates them to TSM "A".
>
> well, the replication jobs did not finish, some data and files were
> missing as long as we replicated using a node group. Now we use
> replication for each single node and it works -- for most of them :-(
>
> Replication the "bad" nodes from "TSM A" to "TSM B" first the sessions
> hang for many minutes, sometimes even hours, then they got "terminated -
> forced by administrator" (ANR0483W), e.g.:
>
> 05/13/2019 15:23:16ANR2017I Administrator GK issued command:
> REPLICATE NODE vsbck  (SESSION: 26128)
> 05/13/2019 15:23:16ANR1626I The previous message (message number
> 2017) was repeated 1 times.
> 05/13/2019 15:23:16ANR0984I Process 494 for Replicate Node started
> in the BACKGROUND at 15:23:16. (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16ANR2110I REPLICATE NODE started as process 494.
> (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16ANR0408I Session 26184 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16ANR0408I Session 26185 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:16ANR0408I Session 26186 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17ANR0408I Session 26187 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17ANR0408I Session 26188 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17ANR0408I Session 26189 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17ANR0408I Session 26190 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
> 05/13/2019 15:23:17ANR0408I Session 26191 started for server SM283
> (Linux/x86_64) (Tcp/Ip) for replication.  (SESSION: 26128, PROCESS: 494)
>
> 05/13/2019 15:24:57ANR0483W Session 26187 for node SM283
> (Linux/x86_64) terminated - forced by administrator. (SESSION: 26128,
> PROCESS: 494)
>
> on the target server we observe at that time:
>
> 13.05.2019 15:25:51 ANR8213E Socket 34 aborted due to send error; error
> 104.
> 13.05.2019 15:25:51 ANR3178E A communication error occurred during
> session 65294 with replication server TSM.
> 13.05.2019 15:25:51 ANR0479W Session 65294 for server TSM (Windows)
> terminated - connection with server severed.
> 13.05.2019 15:25:51 ANR8213E Socket 34 aborted due to send error; error 32.
>
> => Any idea why this replication aborts?
>
> => why is there a "socket abortion error"?
>
>
> well, we already opened a SR case, send lots of logs and traces. as IBM
> suspects a network problem, now both serves use a cross link connection
> without nothing but NIC/GBICs, plugs and wires.
>
> thanks & best
>
> Bjørn
>
> --
>
> --
> Bjørn Nachtwey
>
> Arbeitsgruppe "IT-Infrastruktur“
> Tel.: +49 551 201-2181, E-Mail:bjoern.nacht...@gwdg.de
>
> --
> Gesellschaft für wissenschaftliche Datenverarbeitung mbH Göttingen (GWDG)
> Am Faßberg 11, 37077 Göttingen, URL:http://www.gwdg.de
> Tel.: +49 551 201-1510, Fax: +49 551 201-2150, E-Mail:g...@gwdg.de
> Service-Hotline: Tel.: +49 551 201-1523, E-Mail:supp...@gwdg.de
> Geschäftsführer: Prof. Dr. Ramin Yahyapour
> Aufsichtsratsvorsitzender: Prof. Dr. Christian Griesinger
> Sitz der Gesellschaft: Göttingen
> Registergericht: Göttingen, Handelsregister-Nr. B 598
>
> --
> Zertifiziert nach ISO 9001
>
> --
>
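Regarding the per-node replication mentioned above: a minimal sketch of
driving it node by node with a shell loop around dsmadmc (the admin ID,
password and node list file are placeholders, not the actual setup) could
look like this:

  # nodes.txt holds one node name per line; credentials are placeholders
  for NODE in $(cat nodes.txt); do
      dsmadmc -id=admin -password=secret -noconfirm \
          "replicate node ${NODE} maxsessions=4 wait=yes"
  done

Running the nodes one at a time like this also makes it easier to see which
node the hanging sessions belong to.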


Re: Migration plans

2019-04-16 Thread Stefan Folkerts
I think it depends on the size/load on the server.
For smaller environments we even use the midrange read-intensive SSDs in
RAID 1 (two of them), or sometimes even RAID 5 with a spare, and they run for
years; this works just fine for S and M blueprints.
For very intensive environments we tend to use the write-intensive SSDs,
in RAID 1 (L blueprints).

We run on SUSE Linux, and this has been a very stable platform for many of
our customers and for our own cloud platform.
With internal SSDs, make sure you buy the best internal RAID controller you
can get, with the correct SSD upgrades; don't skimp on that part.

Size the active log correctly and place it on SSD as well. Make
everything a bit bigger than it needs to be; you want to have at least 10 GB
of free space on the active log filesystem.
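
As a rough illustration only (paths and sizes below are placeholders, not a
sizing recommendation), the relevant pieces on a Linux server could look
like this:

  * dsmserv.opt fragment: active log on SSD, sized generously
  ACTIVELOGSIZE      131072
  ACTIVELOGDIRECTORY /tsm/activelog
  ARCHLOGDIRECTORY   /tsm/archlog

  # quick check that the active log filesystem keeps at least ~10 GB free
  df -h /tsm/activelog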




On Tue, Apr 16, 2019 at 11:16 AM Michael Stahlberg <
michael.stahlb...@de.verizon.com> wrote:

> HI all,
>
> after working with ADSM/TSM for a long time and migrating from Solaris to
> AIX about 9 years ago, we are now asked to refresh our hardware again. It
> turned out that our old backup servers are the only AIX servers we still
> have, and we should think about migrating to Linux. I'm not really happy
> about it, but I think that it will work. I have some more doubts about the
> hardware we should use. I know that we should have a lot of disks for
> the database, but our databases aren't very big (the biggest is about
> 400 GByte) and getting small disks is difficult. At the moment we have an
> AIX server with external storage (DS3500) with a lot of disks. The
> plan is to migrate to an HP DL380 system with internal disks. The
> database should be on SSDs, so we are not sure whether we should still use
> a lot of disks, or whether it makes sense to use only 2 or 4 disks mirrored
> with a spare disk, or whether it might be useful to use RAID 5. Does
> anybody have experience with this, please? The documentation I found about
> this is, in my eyes, not up to date, so I'm asking this group now for their
> experiences.
> It would also be good to hear whether somebody has done such a migration
> already. I read about exporting and importing the database from AIX to
> Linux. Has somebody done this, please? How did it work?
>
> Many thanks in advance
>
> regards
>
> Michael Stahlberg
>


Re: AW: [ADSM-L] Confused about Spectrum Protect replication compatibility

2019-04-10 Thread Stefan Folkerts
Uwe,

This was 100% it.
The "q server" output showed transitional but the admin was strict on the
8.1 system.
I did not realize that this would cause issues with a ping server and an
export server command.

I created a new admin, set it to transitional, and used the old server's
dsmadmc to log in to the systems, and now the ping server, export and
replication all work just fine.

So thank you Uwe!
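
For the archives, a minimal sketch of the commands involved (the admin name
and password are placeholders):

  /* run against the 8.1.x server; repladmin and the password are placeholders */
  register admin repladmin SomePassw0rd
  grant authority repladmin classes=system
  update admin repladmin sessionsecurity=transitional
  /* verify the setting */
  query admin repladmin format=detailed

As far as I know the server moves such an admin back to strict automatically
once it authenticates with the newer protocol, so it only stays transitional
as long as it is used from down-level servers or clients.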



On Tue, Apr 9, 2019 at 11:16 AM Uwe Schreiber 
wrote:

> Hi Stefan,
>
> I assume the admin ID you are using during the "ping server" is running in
> "strict" mode on the 8.1.7 instance, but not on the 7.1.3 instance.
> This will cause the "down level" error message.
>
> Regards, Uwe
>
> -Ursprüngliche Nachricht-----
> Von: ADSM: Dist Stor Manager  Im Auftrag von Stefan
> Folkerts
> Gesendet: Montag, 8. April 2019 19:38
> An: ADSM-L@VM.MARIST.EDU
> Betreff: Re: [ADSM-L] Confused about Spectrum Protect replication
> compatibility
>
> No, I have not and it seems very valuable, thank you very much. I will
> look into this tomorrow.
>
> On Mon, 8 Apr 2019 at 18:41, Zoltan Forray  wrote:
>
> > Assume you saw this discussion:
> >
> >
> > https://adsm.org/forum/index.php?threads/tsm-replication-between-serve
> > rs.32369/
> >
> >
> > On Mon, Apr 8, 2019 at 12:30 PM Stefan Folkerts
> >  > >
> > wrote:
> >
> > > I don’t think so, the q server output says that it is transitional.
> > >
> > > On Mon, 8 Apr 2019 at 16:35, Zoltan Forray  wrote:
> > >
> > > > Probably due to the enforced-by-default TLS/SSL level of anything
> > > at/beyond
> > > > 7.1.8.x/8.1.2.x.  See this article:
> > > >
> > > > https://www-01.ibm.com/support/docview.wss?uid=swg22004844
> > > >
> > > > On Mon, Apr 8, 2019 at 3:06 AM Stefan Folkerts <
> > > stefan.folke...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > This page of the 8.1.7 knowledge center:
> > > > >
> > > > >
> > > > >
> > > >
> > >
> > https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.7/srv.admin/r_a
> > dm_repl_compat.html
> > > > >
> > > > > States:
> > > > >
> > > > > Before you set up replication operations with IBM Spectrum
> > > > > Protect™,
> > > you
> > > > > must ensure that the source and target replication servers are
> > > compatible
> > > > > for replication.
> > > > >
> > > > > In that table you will find
> > > > >
> > > > > Source   target
> > > > > 7.1.3  7.1.3 or later
> > > > >
> > > > > So, I have a 7.1.3 server with a 8.1.7 target, that should be
> > supported
> > > > > according to the documentation.
> > > > >
> > > > > However, it doesn't work, I can't even do a ping server:
> > > > >
> > > > > 09:00:30   713srv : ping server 817srv
> > > > > ANR0454E Session rejected by server  817srv  , reason: 7 - Down
> > Level.
> > > > > ANR1705W A ping request to server ' 817srv  ' was not able to
> > > establish a
> > > > > connection by using administrator credentials.
> > > > > ANS8001I Return code 4.
> > > > >
> > > > > So, I get a "reason 7 - Down Level" error.
> > > > > Export server gives the same error message.
> > > > >
> > > > > Does anybody know why this is happening?
> > > > >
> > > > > Regards,
> > > > >Stefan
> > > > >
> > > >
> > > >
> > > > --
> > > > *Zoltan Forray*
> > > > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > > > Xymon Monitor Administrator VMware Administrator Virginia
> > > > Commonwealth University UCC/Office of Technology Services
> > > > www.ucc.vcu.edu zfor...@vcu.edu - 804-828-4807 Don't be a phishing
> > > > victim - VCU and other reputable organizations will never use
> > > > email to request that you reply with your password, social
> > > > security number or confidential personal information. For more
> > > > details visit http://phishing.vcu.edu/
> > > >
> > >
> >
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon
> > Monitor Administrator VMware Administrator Virginia Commonwealth
> > University UCC/Office of Technology Services www.ucc.vcu.edu
> > zfor...@vcu.edu - 804-828-4807 Don't be a phishing victim - VCU and
> > other reputable organizations will never use email to request that you
> > reply with your password, social security number or confidential
> > personal information. For more details visit http://phishing.vcu.edu/
> >
>


Re: Confused about Spectrum Protect replication compatibility

2019-04-08 Thread Stefan Folkerts
No, I have not and it seems very valuable, thank you very much. I will look
into this tomorrow.

On Mon, 8 Apr 2019 at 18:41, Zoltan Forray  wrote:

> Assume you saw this discussion:
>
>
> https://adsm.org/forum/index.php?threads/tsm-replication-between-servers.32369/
>
>
> On Mon, Apr 8, 2019 at 12:30 PM Stefan Folkerts  >
> wrote:
>
> > I don’t think so, the q server output says that it is transitional.
> >
> > On Mon, 8 Apr 2019 at 16:35, Zoltan Forray  wrote:
> >
> > > Probably due to the enforced-by-default TLS/SSL level of anything
> > at/beyond
> > > 7.1.8.x/8.1.2.x.  See this article:
> > >
> > > https://www-01.ibm.com/support/docview.wss?uid=swg22004844
> > >
> > > On Mon, Apr 8, 2019 at 3:06 AM Stefan Folkerts <
> > stefan.folke...@gmail.com>
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > This page of the 8.1.7 knowledge center:
> > > >
> > > >
> > > >
> > >
> >
> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.7/srv.admin/r_adm_repl_compat.html
> > > >
> > > > States:
> > > >
> > > > Before you set up replication operations with IBM Spectrum Protect™,
> > you
> > > > must ensure that the source and target replication servers are
> > compatible
> > > > for replication.
> > > >
> > > > In that table you will find
> > > >
> > > > Source   target
> > > > 7.1.3  7.1.3 or later
> > > >
> > > > So, I have a 7.1.3 server with a 8.1.7 target, that should be
> supported
> > > > according to the documentation.
> > > >
> > > > However, it doesn't work, I can't even do a ping server:
> > > >
> > > > 09:00:30   713srv : ping server 817srv
> > > > ANR0454E Session rejected by server  817srv  , reason: 7 - Down
> Level.
> > > > ANR1705W A ping request to server ' 817srv  ' was not able to
> > establish a
> > > > connection by using administrator credentials.
> > > > ANS8001I Return code 4.
> > > >
> > > > So, I get a "reason 7 - Down Level" error.
> > > > Export server gives the same error message.
> > > >
> > > > Does anybody know why this is happening?
> > > >
> > > > Regards,
> > > >Stefan
> > > >
> > >
> > >
> > > --
> > > *Zoltan Forray*
> > > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > > Xymon Monitor Administrator
> > > VMware Administrator
> > > Virginia Commonwealth University
> > > UCC/Office of Technology Services
> > > www.ucc.vcu.edu
> > > zfor...@vcu.edu - 804-828-4807
> > > Don't be a phishing victim - VCU and other reputable organizations will
> > > never use email to request that you reply with your password, social
> > > security number or confidential personal information. For more details
> > > visit http://phishing.vcu.edu/
> > >
> >
>
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://phishing.vcu.edu/
>


Re: Confused about Spectrum Protect replication compatibility

2019-04-08 Thread Stefan Folkerts
I don’t think so, the q server output says that it is transitional.

On Mon, 8 Apr 2019 at 16:35, Zoltan Forray  wrote:

> Probably due to the enforced-by-default TLS/SSL level of anything at/beyond
> 7.1.8.x/8.1.2.x.  See this article:
>
> https://www-01.ibm.com/support/docview.wss?uid=swg22004844
>
> On Mon, Apr 8, 2019 at 3:06 AM Stefan Folkerts 
> wrote:
>
> > Hi all,
> >
> > This page of the 8.1.7 knowledge center:
> >
> >
> >
> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.7/srv.admin/r_adm_repl_compat.html
> >
> > States:
> >
> > Before you set up replication operations with IBM Spectrum Protect™, you
> > must ensure that the source and target replication servers are compatible
> > for replication.
> >
> > In that table you will find
> >
> > Source   target
> > 7.1.3  7.1.3 or later
> >
> > So, I have a 7.1.3 server with a 8.1.7 target, that should be supported
> > according to the documentation.
> >
> > However, it doesn't work, I can't even do a ping server:
> >
> > 09:00:30   713srv : ping server 817srv
> > ANR0454E Session rejected by server  817srv  , reason: 7 - Down Level.
> > ANR1705W A ping request to server ' 817srv  ' was not able to establish a
> > connection by using administrator credentials.
> > ANS8001I Return code 4.
> >
> > So, I get a "reason 7 - Down Level" error.
> > Export server gives the same error message.
> >
> > Does anybody know why this is happening?
> >
> > Regards,
> >Stefan
> >
>
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://phishing.vcu.edu/
>


Confused about Spectrum Protect replication compatibility

2019-04-08 Thread Stefan Folkerts
Hi all,

This page of the 8.1.7 knowledge center:

https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.7/srv.admin/r_adm_repl_compat.html

States:

Before you set up replication operations with IBM Spectrum Protect™, you
must ensure that the source and target replication servers are compatible
for replication.

In that table you will find

Source   target
7.1.3  7.1.3 or later

So, I have a 7.1.3 server with an 8.1.7 target, which should be supported
according to the documentation.

However, it doesn't work, I can't even do a ping server:

09:00:30   713srv : ping server 817srv
ANR0454E Session rejected by server  817srv  , reason: 7 - Down Level.
ANR1705W A ping request to server ' 817srv  ' was not able to establish a
connection by using administrator credentials.
ANS8001I Return code 4.

So, I get a "reason 7 - Down Level" error.
Export server gives the same error message.

Does anybody know why this is happening?

Regards,
   Stefan


Re: TSMWorks and ART product

2019-03-16 Thread Stefan Folkerts
Steven,

I can help you out if you are looking for a tool to validate Spectrum
Protect for Virtual Environments vSphere backups: it offers comprehensive
validation that includes a test against an SLA, a quick test to see whether
the VM actually boots after the restore, and many other features.
I'm not sure this public space is the best place to go into the details,
but if you want to know more, shoot me an email.

Regards,
   Stefan


On Mon, Feb 25, 2019 at 4:41 AM Harris, Steven <
steven.har...@btfinancialgroup.com> wrote:

> Hi all
>
> I'm trying to recommend an automated restore testing tool like TSMWorks'
> ART to management.
>
> TSMWorks web site appears to have been hijacked.
> Lindsay Morris' Linkedin page seems to be dead. Twitter feed too.
> Nothing from TSMWorks by Google search since about 2009
>
> Does anyone know what has happened to Lindsay, the company or the
> product?  If it's dead are there any alternatives?  This company uses
> Netbackup and Networker as well as Spectrum Protect.
>
> While I was looking I see that ADSM.ORG has no adsm-l posts since April
> 18. Is that dead too?
>
> Cheers
>
> Steve
>
> TSM Admin/Consultant
> Canberra Australia
>
>
>
> This message and any attachment is confidential and may be privileged or
> otherwise protected from disclosure. You should immediately delete the
> message if you are not the intended recipient. If you have received this
> email by mistake please delete it from your system; you should not copy the
> message or disclose its content to anyone.
>
> This electronic communication may contain general financial product advice
> but should not be relied upon or construed as a recommendation of any
> financial product. The information has been prepared without taking into
> account your objectives, financial situation or needs. You should consider
> the Product Disclosure Statement relating to the financial product and
> consult your financial adviser before making a decision about whether to
> acquire, hold or dispose of a financial product.
>
> For further details on the financial product please go to
> http://www.bt.com.au
>
> Past performance is not a reliable indicator of future performance.
>


Re: Repli server db usage is high than Primary server

2019-02-08 Thread Stefan Folkerts
Filespaces deleted on the primary that have not been deleted on the replica?
I have seen a lot of that going on: you end up with stale filespaces on the
replica that still hold metadata, and due to deduplication the impact is
largest on the database.
If that is not it, I would run the IBM database reorg perl script; you might
be able to do a few reduce max commands that will solve the problem.
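
A quick way to spot such stale filespaces is to dump the filespace list on
both servers and diff them; a rough sketch (server stanzas and credentials
are placeholders):

  # compare filespaces between the primary and the replica
  dsmadmc -se=primary -id=admin -password=secret -dataonly=yes \
    "select node_name, filespace_name from filespaces" | sort > primary.txt
  dsmadmc -se=replica -id=admin -password=secret -dataonly=yes \
    "select node_name, filespace_name from filespaces" | sort > replica.txt
  diff primary.txt replica.txt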


On Sat, Feb 2, 2019 at 7:55 PM Sasa Drnjevic  wrote:

> Hi,
> are you sure that all of the policies are the same?
>
> All retention periods are the same?
>
> And are you sure that the expiration process runs regularly on the target
> server? Check for errors in the activity log...
>
> Rgds,
>
> --
> Sasa Drnjevic
> www.srce.unizg.hr/en/
>
>
>
>
> On 2019-02-02 15:10, Saravanan Palanisamy wrote:
> > Hello Experts,
> >
> > In what cases would a replication server have more DB usage than the
> primary server? Here is one setup in that situation.
> > Primary server db usage - 67% (out of 6TB)
> > Replication Server db usage  - 96% (out of 6TB)
> >
> > The replication server is used only for replication; no native backups
> are configured over there.
> >
> > Suggestions, please.
> >
> > - Saravanan
> >
>


Re: I keep getting lan-free backup errors, but we have migrated away from lanfree backups

2018-10-23 Thread Stefan Folkerts
Hi Marc,

Thank you for your reply.

1 - the wrong stanza was edited
There was only one stanza using lanfree on the host; the other (B/A backup)
always used lan-based backups. I checked that at least three times, and my
colleague looked over my shoulder because I felt like I was going a bit
crazy. ;-)

2 - DSM_DIR is set and the wrong dsm.sys file is edited
I searched the entire machine; all dsm.sys files are links to the same file
that was edited, and all have * in front of all lanfree options.

3 - there's another machine also configured to use that nodename
This appears to be it, because the FQDN I see in the messages resolves to a
different server that I know nothing about. Maybe they have something on
standby that is connecting using the same nodename.
The other server also has an IP that is not on the server that I migrated.
:-)
Thanks for this, I was a bit stuck thinking it had to come from the same
machine!

Sometimes you just need a little nudge in the right direction, so thanks
again.
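
For completeness, the stanza now looks roughly like this (server name and
addresses are made up), with every lan-free option commented out:

  SErvername  TSMSRV1
     COMMMethod              TCPip
     TCPServeraddress        tsmsrv1.example.com
     TCPPort                 1500
     NODename                MYNODE
  *  ENABLELANFREE           YES
  *  LANFREECOMMMETHOD       TCPIP
  *  LANFREETCPSERVERADDRESS 127.0.0.1
  *  LANFREETCPPORT          1500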

On Tue, Oct 23, 2018 at 1:41 PM Marc Lanteigne 
wrote:

> Hi Stefan,
>
> It's likely one of those things:
> 1 - the wrong stanza was edited
> 2 - DSM_DIR is set and the wrong dsm.sys file is edited
> 3 - there's another machine also configured to use that nodename
> 4 - there's more than one dsmcad and the wrong one was restarted.  You
> should be able to see the IP in the activity log when the session is
> established, that will at least help you narrow down which machine it's
> coming from.
> 5 - if enablelanfree is specified more than once in a stanza, the last
> reference to it will be the one taking effect.
>
> -
> Thanks,
> Marc...
> 
> Marc Lanteigne
> Spectrum Protect Specialist AVP / SRT
> 416.478.0233 | marclantei...@ca.ibm.com
> Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern
>
> Latest Servermon:
> http://www-01.ibm.com/support/docview.wss?uid=swg21432937
> Performance Mustgather:
> http://www-01.ibm.com/support/docview.wss?uid=swg22013355
>
> Follow me on: Twitter, developerWorks, LinkedIn
>
>
>
> -Original Message-
> From: Stefan Folkerts 
> Sent: Tuesday, October 23, 2018 07:24 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] I keep getting lan-free backup errors, but we have
> migrated away from lanfree backups
>
> Hi,
>
> Quick question, I can't seem to solve this little problem.
>
> I've migrated a couple of nodes from making lanfree backups to making
> lan-based backups over 10Gb/s to another server (exported node definitions
> only).
> Everything works fine except for the fact that actlog is filled with these
> warnings for all nodes that made lanfree backups in the past:
>
> ANE4048W (Session: 755286, Node: )  LAN-Free connection failed.
> (SESSION: 755286)
>
> Now, I removed everything lanfree-related from the stanzas of these nodes
> and restarted the CAD (no scheduler active).
> The dsmsta isn't running.
> I still get these warnings and I'm a bit at a loss as to why.
> I thought changing enablelanfree from yes to no would do the trick, but now
> I have removed all lanfree entries and restarted the dsmcad as I said, and
> still the messages keep popping up.
>
> Haven't worked with this in a while, so there is a good chance I'm just
> forgetting something.
> Any suggestions?
>
> Regards,
>Stefan
>


I keep getting lan-free backup errors, but we have migrated away from lanfree backups

2018-10-23 Thread Stefan Folkerts
Hi,

Quick question, I can't seem to solve this little problem.

I've migrated a couple of nodes from making lanfree backups to making
lan-based backups over 10Gb/s to another server (exported node definitions
only).
Everything works fine except for the fact that actlog is filled with these
warnings for all nodes that made lanfree backups in the past:

ANE4048W (Session: 755286, Node: )  LAN-Free connection failed.
(SESSION: 755286)

Now, I removed everything lanfree-related from the stanzas of these nodes
and restarted the CAD (no scheduler active).
The dsmsta isn't running.
I still get these warnings and I'm a bit at a loss as to why.
I thought changing enablelanfree from yes to no would do the trick, but now
I have removed all lanfree entries and restarted the dsmcad as I said, and
still the messages keep popping up.

Haven't worked with this in a while, so there is a good chance I'm just
forgetting something.
Any suggestions?

Regards,
   Stefan


Re: 8.1.6.0 woes

2018-10-22 Thread Stefan Folkerts
Any clarity on this yet Hans?



On Tue, Oct 16, 2018 at 5:26 PM Hans Christian Riksheim 
wrote:

> Stefan.
>
> Not a known issue. The PMR was submitted last Thursday and it was
> transferred to L2 today.
>
>
> Hans Chr.
>
> On Tue, Oct 16, 2018 at 3:01 PM Stefan Folkerts  >
> wrote:
>
> > Hi Hans,
> >
> > Did IBM support report that this is a known issue?
> >
> > Regards,
> >Stefan
> >
> > On Mon, Oct 15, 2018 at 4:06 PM Hans Christian Riksheim <
> bull...@gmail.com
> > >
> > wrote:
> >
> > > Just putting it out there.
> > >
> > > We upgraded our SP servers to 8.1.6.0 and now all our file pools with
> > > deduplication are corrupt. Several systems all running AIX.
> > >
> > > Hans Chr.
> > >
> >
>


Re: SP DR to cloud

2018-10-16 Thread Stefan Folkerts
IMHO Spectrum Protect cloud pools are more for deep-archive/low-tier
options, not for secondary-server DR.
They are simply too slow for 90%+ of the DR cases, and I believe that's why
a cloud pool is not a traditional copypool option.
You tier data to the cloud because you don't want to keep buying frames for
your tape library, or expansion units for the disk storage subsystem that
holds data you don't need to restore at high speed.
It's not for operational protection; it's for expanding your capacity in the
cloud for long-term retention of rarely accessed items.
It's basically the exact opposite of the container pool.

Again, that's how I see it; IBM's view might be very different. :-)
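
That said, if you do decide to put a cloud-container pool on the secondary
server, the definition itself is only a couple of commands; a rough sketch
(pool name, URL, credentials, bucket and cache path are all placeholders):

  /* cloud-container pool on the secondary server; all values are placeholders */
  define stgpool spdcve_cloud stgtype=cloud cloudtype=s3 -
     cloudurl=https://s3.example.com identity=accesskey password=secretkey -
     bucketname=sp-offsite
  /* local disk used as accelerator cache for incoming data */
  define stgpooldirectory spdcve_cloud /tsm/cloudcache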



On Tue, Oct 16, 2018 at 11:58 AM adsm consulting  wrote:

> Hi everyone,
> I have 2 spectrum protect 8.1.4 servers spread over two locations.
> The server on the primary site is the largest one, it makes backups to
> directory container storage pool, and protects the storage pool locally to
> the Tape Library.
> The company plans to dispose of the tape library and tape DRM rotation, but
> there is not enough backend storage on the secondary site to be able to
> receive the primary SP server replication.
> At this point I wonder if it is possible, on the secondary server, to
> implement a cloud-container storage pool as the target of the primary SP server.
> Has anyone implemented a solution like this?
>   [IBM-SPA]   --->[IBM-SPB]
> (SPDCVE_DISK)   (SPDCVE_CLOUD)
>
>
> I wonder why IBM has not yet implemented protecting the storage pool directly
> to the cloud; it seems absurd that from 7.1.3 to 8.1.6 cloud storage
> pools have still not been considered as a native DR solution.
>
> Thank you.
> JP
>


Re: 8.1.6.0 woes

2018-10-16 Thread Stefan Folkerts
Hi Hans,

Did IBM support report that this is a known issue?

Regards,
   Stefan

On Mon, Oct 15, 2018 at 4:06 PM Hans Christian Riksheim 
wrote:

> Just putting it out there.
>
> We upgraded our SP servers to 8.1.6.0 and now all our file pools with
> deduplication are corrupt. Several systems all running AIX.
>
> Hans Chr.
>


Re: CONTAINER pool experiences

2018-09-28 Thread Stefan Folkerts
And with " Okay, having used traditional disks with the containerpool I
will say that that is not a good combination" I mean for the Spectrum
Protect database and activelog, not for the storage of actual data. :-)

On Fri, Sep 28, 2018 at 10:06 AM Stefan Folkerts 
wrote:

> Okay, having used traditional disks with the containerpool I will say that
> that is not a good combination since it will actually slow down the backup
> performance, with the filepool it can take more time to chunk and
> identify after the data has landed on disk.
>
> The thing is however that you need a lot more DB capacity with the
> filepool than with the containerpool and you can in fact use the newer
> generation read-intensive SSD's for the containerpool in most cases, we
> calculated it for our use cases and came to a 4-5 year lifespan. The
> Spectrum Protect does mostly reads. Anyway, switches to SSD's in a server
> (especially Intel systems) isn't expensive anymore and 2TB's worth of SSD
> database& activelog for the containerpool is worth about the same as 4TB
> worth for a fileclass type of dedup storagepool. I will even say that some
> acceptable database performance from 10K/10K drives is a LOT more expensive
> than blistering performance form a few SSD's.
>
> I don't understand what is happening with the database data that didn't
> want to enter the containerpool, I do remember certain types of data not
> being compatible, there was something (not any longer) with NDMP data I
> believe but just databases...never seen a problem with them pointing to the
> containerpool and I used the first release of Spectrum Protect that had the
> containerpool to backup/replicate mssql/oracle/db2 without issues and I
> can't think of a database that couldn't land in the containerpool, maybe
> somebody else knows.
>
>
>
> On Thu, Sep 27, 2018 at 9:14 PM Zoltan Forray  wrote:
>
>> We didn't do any kind of conversion since we don't have the free space.
>> Everything is tied up in a 500TB ISILON/NFS space that is 92% full and
>> nowhere to grow.
>>
>> We created a container and the directory pointing to a 15TB internal disk
>> space we had free.  Create a new PS/Domain/MC pointing it to the
>> container.
>>
>> Then we changed the MC of a Domain that is used for the DB backups from
>> the
>> production servers and started a new DB backups on one of the production
>> servers.  When that failed and we saw messages saying you couldn't use
>> containers/directories for DB backup storage, we switch gears and took an
>> MC for one of the small production servers/replicas and changed it to
>> point
>> to the container stgpool.  We started a replication on that production
>> server and again we saw all the incoming replica going to the nextstgpool
>> and nothing in the container/directory
>>
>> That was when we tried a regular client and it worked.
>>
>> Maybe it is the ISP server level?  Maybe it didn't like having some of the
>> replicas in non-container pools?
>>
>> The lack of SSD's for DB is another reason we can't afford to use
>> containers on production servers.  Only 1-production server has 1.9TB
>> SSD's
>> and with replication and dedup it has quickly consumed 86% of it and we
>> had
>> to stop dedup.
>>
>>
>>
>> On Thu, Sep 27, 2018 at 1:56 PM Stefan Folkerts <
>> stefan.folke...@gmail.com>
>> wrote:
>>
>> > Very strange, I can only say that I have done this sequence at least 5
>> > times.
>> > First, you upgrade/migrate the replication target to the containerpool,
>> > then the source.
>> > I've never had any issues with data not landing in the pool so I really
>> > expect there was some anomaly happening there or a misconfiguration of
>> > sorts.
>> >
>> > For me the containerpool has been one of the biggest improvement on
>> > Spectrum Protect on the server side since replication was introduced
>> and I
>> > can only think of Virtual Environments for VMware multi-session restores
>> > being more important that the containerpool for my customers.
>> >
>> > As long as your using SSD's for the database (I think that's as good as
>> a
>> > must-have) and have the compute power/memory to run the pools they are
>> > really worth the effort of looking at again because I'm sure you can do
>> > what you want, you can use the containerpool as a replication target
>> from
>> > other pools on other servers running replication-compatible versions of
>> > Spectrum Protect.
>> >
>> >
>> >
>> > On 

Re: CONTAINER pool experiences

2018-09-28 Thread Stefan Folkerts
Okay, having used traditional disks with the containerpool, I will say that
that is not a good combination, since it will actually slow down backup
performance; with the filepool it can take more time to chunk and
identify after the data has landed on disk.

The thing is, however, that you need a lot more DB capacity with the filepool
than with the containerpool, and you can in fact use the newer generation of
read-intensive SSDs for the containerpool in most cases; we calculated it
for our use cases and came to a 4-5 year lifespan. The Spectrum Protect
database does mostly reads. Anyway, switching to SSDs in a server (especially
Intel systems) isn't expensive anymore, and 2 TB of SSD database and active
log space for the containerpool costs about the same as the 4 TB you would
need for a fileclass type of dedup storagepool. I will even say that merely
acceptable database performance from 10K rpm drives is a LOT more expensive
than blistering performance from a few SSDs.

I don't understand what is happening with the database data that didn't
want to enter the containerpool. I do remember certain types of data not
being compatible (there was something, no longer an issue, with NDMP data I
believe), but plain databases... I have never seen a problem with them
pointing to the containerpool, and I used the first release of Spectrum
Protect that had the containerpool to back up and replicate
MSSQL/Oracle/DB2 without issues. I can't think of a database that couldn't
land in the containerpool; maybe somebody else knows.



On Thu, Sep 27, 2018 at 9:14 PM Zoltan Forray  wrote:

> We didn't do any kind of conversion since we don't have the free space.
> Everything is tied up in a 500TB ISILON/NFS space that is 92% full and
> nowhere to grow.
>
> We created a container and the directory pointing to a 15TB internal disk
> space we had free.  Create a new PS/Domain/MC pointing it to the container.
>
> Then we changed the MC of a Domain that is used for the DB backups from the
> production servers and started a new DB backups on one of the production
> servers.  When that failed and we saw messages saying you couldn't use
> containers/directories for DB backup storage, we switch gears and took an
> MC for one of the small production servers/replicas and changed it to point
> to the container stgpool.  We started a replication on that production
> server and again we saw all the incoming replica going to the nextstgpool
> and nothing in the container/directory
>
> That was when we tried a regular client and it worked.
>
> Maybe it is the ISP server level?  Maybe it didn't like having some of the
> replicas in non-container pools?
>
> The lack of SSD's for DB is another reason we can't afford to use
> containers on production servers.  Only 1-production server has 1.9TB SSD's
> and with replication and dedup it has quickly consumed 86% of it and we had
> to stop dedup.
>
>
>
> On Thu, Sep 27, 2018 at 1:56 PM Stefan Folkerts  >
> wrote:
>
> > Very strange, I can only say that I have done this sequence at least 5
> > times.
> > First, you upgrade/migrate the replication target to the containerpool,
> > then the source.
> > I've never had any issues with data not landing in the pool so I really
> > expect there was some anomaly happening there or a misconfiguration of
> > sorts.
> >
> > For me the containerpool has been one of the biggest improvement on
> > Spectrum Protect on the server side since replication was introduced and
> I
> > can only think of Virtual Environments for VMware multi-session restores
> > being more important that the containerpool for my customers.
> >
> > As long as your using SSD's for the database (I think that's as good as a
> > must-have) and have the compute power/memory to run the pools they are
> > really worth the effort of looking at again because I'm sure you can do
> > what you want, you can use the containerpool as a replication target from
> > other pools on other servers running replication-compatible versions of
> > Spectrum Protect.
> >
> >
> >
> > On Thu, Sep 27, 2018 at 4:27 PM Zoltan Forray  wrote:
> >
> > > We wondered about that too.  But then we setup a desktop as a
> client/node
> > > and it backed up into the container just fine. IMHO, getting rid of the
> > > node and its backups was unnecessarily complicated but we finally
> deleted
> > > it and the directory/container.
> > >
> > > On Thu, Sep 27, 2018 at 8:43 AM Stefan Folkerts <
> > stefan.folke...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Zoltan,
> > > >
> > > > That is very strange, I've used the containerpool as a replication
> > target
> > > > for filepools bef

Re: CONTAINER pool experiences

2018-09-27 Thread Stefan Folkerts
Very strange, I can only say that I have done this sequence at least 5
times.
First, you upgrade/migrate the replication target to the containerpool,
then the source.
I've never had any issues with data not landing in the pool so I really
expect there was some anomaly happening there or a misconfiguration of
sorts.

For me the containerpool has been one of the biggest improvements in
Spectrum Protect on the server side since replication was introduced, and I
can only think of the multi-session restores in Spectrum Protect for Virtual
Environments (VMware) as being more important than the containerpool for my
customers.

As long as you're using SSDs for the database (I think that's as good as a
must-have) and have the compute power and memory to run the pools, they are
really worth the effort of looking at again, because I'm sure you can do
what you want: you can use the containerpool as a replication target from
other pools on other servers running replication-compatible versions of
Spectrum Protect.
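
As a rough sketch of the target-side setup (pool name, directories, domain
and policy set names are placeholders), it comes down to defining the pool
and pointing the backup copy group of the domain the replicated nodes use at
it:

  /* on the replication target; names and paths are placeholders */
  define stgpool contpool stgtype=directory
  define stgpooldirectory contpool /tsm/cont01,/tsm/cont02
  update copygroup targetdom standard standard type=backup destination=contpool
  activate policyset targetdom standard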



On Thu, Sep 27, 2018 at 4:27 PM Zoltan Forray  wrote:

> We wondered about that too.  But then we setup a desktop as a client/node
> and it backed up into the container just fine. IMHO, getting rid of the
> node and its backups was unnecessarily complicated but we finally deleted
> it and the directory/container.
>
> On Thu, Sep 27, 2018 at 8:43 AM Stefan Folkerts  >
> wrote:
>
> > Zoltan,
> >
> > That is very strange, I've used the containerpool as a replication target
> > for filepools before with replication in place and this works fine.
> > It's does not work the other way around, you can't replicate a
> > containerpool to a filepool.
> > I would almost say that there was a mistake made, maybe no real storage
> > connected to the pool or something else that went wrong because what you
> > described should work.
> >
> > Regards,
> > Stefan
> >
> >
> > On Thu, Sep 27, 2018 at 2:35 PM Zoltan Forray  wrote:
> >
> > > Stefan,
> > >
> > > Thank you for the reply. You said *"but for backup data on disk I think
> > > nothing beats it, maybe even in any product I've ever used."*
> > >
> > > Unfortunately, due to the overhead/requirements/demands (as other
> replies
> > > have pointed out) we can NOT use it for client/node backups since none
> of
> > > our current production ISP servers have the CPU/storage (most are
> > 32-thread
> > > and NFS/ISILON storage)
> > >
> > > What we wanted to use it for was on our offsite replication TARGET
> server
> > > (which has 64-threads and 256GB RAM),  which is used primarily for
> > > replication of data from our onsite ISP servers as well as offsite
> > database
> > > backups for the onsite ISP servers (devclass server).  When we
> > > tested/created a small directory/container area on internal disk on
> this
> > > server, any management class we pointed to it was rejected and the
> > incoming
> > > replicas/DB backups were redirected to the NEXTSTGPOOL which was a
> > normal,
> > > devclass FILE storagepool.
> > >
> > > Perhaps next year when we replace one of our local ISP servers with a
> > much
> > > bigger/beefier (72-threads and 120TB internal disk storage).
> > >
> > > On Wed, Sep 26, 2018 at 6:17 AM Stefan Folkerts <
> > stefan.folke...@gmail.com
> > > >
> > > wrote:
> > >
> > > > Zoltan,
> > > >
> > > > I'm not sure I understand your issues, we use directory
> containerpools
> > > for
> > > > all but a few of our Spectrum Protect customers and it's miles ahead
> of
> > > > what the fileclass-based storagepool bring in terms of performance,
> > > > Spectrum Protect database impact (size wise). Yes, it isn't capable
> of
> > > > certain, until then standard Spectrum Protect storagepool functions
> but
> > > for
> > > > backup data on disk I think nothing beats it, maybe even in any
> product
> > > > I've ever used.
> > > >
> > > > Of course you can't write Spectrum Protect database backups to it, it
> > > > doesn't even have a device class and it's most certainly not
> sequential
> > > in
> > > > any way but normal database backups are very much able to land in the
> > > > directory containerpool and you will enjoy enormous deduplication and
> > > > compression benefits, much higher net savings then when using the
> > > filepool
> > > > at any customer site I've implemented it.
> > > >
> > > > So, could you pleas

Re: CONTAINER pool experiences

2018-09-27 Thread Stefan Folkerts
Zoltan,

That is very strange; I've used the containerpool as a replication target
for filepools before, with replication in place, and this works fine.
It does not work the other way around: you can't replicate a
containerpool to a filepool.
I would almost say that a mistake was made somewhere, maybe no real storage
connected to the pool or something else that went wrong, because what you
described should work.

Regards,
Stefan


On Thu, Sep 27, 2018 at 2:35 PM Zoltan Forray  wrote:

> Stefan,
>
> Thank you for the reply. You said *"but for backup data on disk I think
> nothing beats it, maybe even in any product I've ever used."*
>
> Unfortunately, due to the overhead/requirements/demands (as other replies
> have pointed out) we can NOT use it for client/node backups since none of
> our current production ISP servers have the CPU/storage (most are 32-thread
> and NFS/ISILON storage)
>
> What we wanted to use it for was on our offsite replication TARGET server
> (which has 64-threads and 256GB RAM),  which is used primarily for
> replication of data from our onsite ISP servers as well as offsite database
> backups for the onsite ISP servers (devclass server).  When we
> tested/created a small directory/container area on internal disk on this
> server, any management class we pointed to it was rejected and the incoming
> replicas/DB backups were redirected to the NEXTSTGPOOL which was a normal,
> devclass FILE storagepool.
>
> Perhaps next year when we replace one of our local ISP servers with a much
> bigger/beefier (72-threads and 120TB internal disk storage).
>
> On Wed, Sep 26, 2018 at 6:17 AM Stefan Folkerts  >
> wrote:
>
> > Zoltan,
> >
> > I'm not sure I understand your issues, we use directory containerpools
> for
> > all but a few of our Spectrum Protect customers and it's miles ahead of
> > what the fileclass-based storagepool bring in terms of performance,
> > Spectrum Protect database impact (size wise). Yes, it isn't capable of
> > certain, until then standard Spectrum Protect storagepool functions but
> for
> > backup data on disk I think nothing beats it, maybe even in any product
> > I've ever used.
> >
> > Of course you can't write Spectrum Protect database backups to it, it
> > doesn't even have a device class and it's most certainly not sequential
> in
> > any way but normal database backups are very much able to land in the
> > directory containerpool and you will enjoy enormous deduplication and
> > compression benefits, much higher net savings then when using the
> filepool
> > at any customer site I've implemented it.
> >
> > So, could you please explain what you are trying to do that doesn't work?
> >
> > Regards,
> >Stefan
> >
> >
> > On Mon, Sep 24, 2018 at 3:49 PM Zoltan Forray  wrote:
> >
> > > Thanks for all the comments/suggestions and a somewhat consensus to
> avoid
> > > directory/containers.
> > >
> > > We decided to at least get our "feet wet" and play with
> > > directory/containers on our offsite replica-target server (which has
> the
> > > horsepower) only to realize everything we tried to use it for was
> > > not-allowed (DB backups from production servers and replication target
> > > storage pools) and everything we directed to it was redirected to the
> > "next
> > > stgpool"?
> > >
> > > So we are confused - what can you use directory/containers for at the
> > > 7.1.7.400 server level?  Only real client backups?
> > >
> > >
> > >
> > > On Tue, Sep 18, 2018 at 11:40 AM PAC Brion Arnaud <
> > > arnaud.br...@panalpina.com> wrote:
> > >
> > > > Zoltan,
> > > >
> > > > If I understood well, your storage is Isilon based : in this case do
> > not
> > > > even think of using CONTAINER pools, as performance will be horrible.
> > > > Not much time to talk about this, but to make a very long story
> short,
> > we
> > > > are about to dump/trash /resell the brand new Isilon arrays we bought
> > 8
> > > > months ago, and to replace them with direct attached storage
> > (Storwize),
> > > as
> > > > we never reached sufficient performance levels. Cases have been
> opened
> > > with
> > > > IBM and EMC as well, to no result at all, beside a suspected block
> size
> > > > issue which would refrain the Isilons to work at expected speed.
> > > > If you plan to go for such a hardware configuration, my only advice
> is
> > :
> > >

Re: CONTAINER pool experiences

2018-09-26 Thread Stefan Folkerts
Zoltan,

I'm not sure I understand your issues; we use directory containerpools for
all but a few of our Spectrum Protect customers, and they are miles ahead of
what the fileclass-based storagepools bring in terms of performance and
Spectrum Protect database impact (size-wise). Yes, a containerpool isn't
capable of certain functions that until then were standard for Spectrum
Protect storagepools, but for backup data on disk I think nothing beats it,
maybe even in any product I've ever used.

Of course you can't write Spectrum Protect database backups to it (it
doesn't even have a device class, and it's most certainly not sequential in
any way), but normal client database backups are very much able to land in
the directory containerpool, and you will enjoy enormous deduplication and
compression benefits: much higher net savings than when using the filepool
at any customer site where I've implemented it.
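
Spectrum Protect database backups, on the other hand, just need a device
class; a minimal sketch with a FILE device class (directory and sizes are
placeholders):

  /* server database backups go to a device class, not to a containerpool */
  define devclass dbbackfile devtype=file directory=/tsm/dbback maxcapacity=50g
  set dbrecovery dbbackfile numstreams=2
  backup db devclass=dbbackfile type=full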

So, could you please explain what you are trying to do that doesn't work?

Regards,
   Stefan


On Mon, Sep 24, 2018 at 3:49 PM Zoltan Forray  wrote:

> Thanks for all the comments/suggestions and a somewhat consensus to avoid
> directory/containers.
>
> We decided to at least get our "feet wet" and play with
> directory/containers on our offsite replica-target server (which has the
> horsepower) only to realize everything we tried to use it for was
> not-allowed (DB backups from production servers and replication target
> storage pools) and everything we directed to it was redirected to the "next
> stgpool"?
>
> So we are confused - what can you use directory/containers for at the
> 7.1.7.400 server level?  Only real client backups?
>
>
>
> On Tue, Sep 18, 2018 at 11:40 AM PAC Brion Arnaud <
> arnaud.br...@panalpina.com> wrote:
>
> > Zoltan,
> >
> > If I understood well, your storage is Isilon based : in this case do not
> > even think of using CONTAINER pools, as performance will be horrible.
> > Not much time to talk about this, but to make a very long story short, we
> > are about to dump/trash /resell the brand new Isilon arrays we bought  8
> > months ago, and to replace them with direct attached storage (Storwize),
> as
> > we never reached sufficient performance levels. Cases have been opened
> with
> > IBM and EMC as well, to no result at all, beside a suspected block size
> > issue which would refrain the Isilons to work at expected speed.
> > If you plan to go for such a hardware configuration, my only advice is :
> > run away, as fast as you can !
> >
> > Cheers.
> >
> > Arnaud
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> > Zoltan Forray
> > Sent: Tuesday, September 18, 2018 3:37 PM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: CONTAINER pool experiences
> >
> > We are investigating using CONTAINER pools for our offsite replica server
> > vs the current FILE method which is killing us with the constant dedup,
> > reclaims, etc.
> >
> > So, what are the "gotchas' ?   We are still at V7.1.7.400 so I figure we
> > will have to do without any new features added in the V8 branch. But is
> it
> > problematic enough at V7 to avoid it?
> >
> > Your thoughts?  Experiences?
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > Xymon Monitor Administrator
> > VMware Administrator
> > Virginia Commonwealth University
> > UCC/Office of Technology Services
> > www.ucc.vcu.edu
> > zfor...@vcu.edu - 804-828-4807
> > Don't be a phishing victim - VCU and other reputable organizations will
> > never use email to request that you reply with your password, social
> > security number or confidential personal information. For more details
> > visit http://phishing.vcu.edu/
> >
>
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> Xymon Monitor Administrator
> VMware Administrator
> Virginia Commonwealth University
> UCC/Office of Technology Services
> www.ucc.vcu.edu
> zfor...@vcu.edu - 804-828-4807
> Don't be a phishing victim - VCU and other reputable organizations will
> never use email to request that you reply with your password, social
> security number or confidential personal information. For more details
> visit http://phishing.vcu.edu/
>


Re: MariaDB backups using modern MariaDB methods and high performance restores

2018-09-12 Thread Stefan Folkerts
Sorry for not getting back to this sooner, Remco & Skylar. We are looking
into the SAP protect solution from Repostor, which might be a good fit for
this customer's requirements; I will share the results, including the
features/limitations etc., once I have them.
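
In the meantime, the LVM snapshot route that Remco and Skylar describe below
would look roughly like this (volume group, sizes, mount points and script
names are placeholders, and the snapshot on its own is only crash-consistent,
so a restore would rely on InnoDB crash recovery):

  # pre-backup script (PRESCHEDULECMD): create and mount a read-only snapshot
  lvcreate --snapshot --size 20G --name mariadb_snap /dev/vg0/mariadb
  mount -o ro /dev/vg0/mariadb_snap /mnt/mariadb_snap

  # dsm.sys stanza additions (option names are real, values are placeholders):
  #   PRESCHEDULECMD  "/usr/local/bin/mariadb_snap_pre.sh"
  #   POSTSCHEDULECMD "/usr/local/bin/mariadb_snap_post.sh"
  #   DOMAIN          /mnt/mariadb_snap

  # post-backup script (POSTSCHEDULECMD): clean up the snapshot
  umount /mnt/mariadb_snap
  lvremove -f /dev/vg0/mariadb_snap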

On Wed, Sep 5, 2018 at 1:54 AM Skylar Thompson  wrote:

> We do this with PostgreSQL - take a snapshot and mount it with a
> preschedulecmd, run an incremental
> backup on the snapshot, and then unmount and destroy it with a
> postschedulecmd. The complication for Stefan would be that he would have
> to restore the entire snapshot in order to have a usable database, which
> might take too long for his restore objective. PostgreSQL also supports
> continuous backups of the WAL (journal)[1] which allow for more
> fine-grained point-in-time restores, but I'm not sure if MySQL/MariaDB have
> an equivalent solution.
>
> [1]
> https://www.postgresql.org/docs/current/static/continuous-archiving.html
>
> On Tue, Sep 04, 2018 at 05:29:18PM +0200, Remco Post wrote:
> > Just a thought. This is a linux server, right? So you have linux LVM. I
> think it should be possible to make a consistent snapshot using MariaDB and
> LVM. Then you can backup the snapshot and in case of a disaster restore
> that. Now, I’ve never attempted this, and I don’t know how to do it, but it
> seems to be the only viable acceptable solution.
> >
> > > Op 4 sep. 2018, om 09:49 heeft Stefan Folkerts <
> stefan.folke...@gmail.com> het volgende geschreven:
> > >
> > > Hi all,
> > >
> > > I'm currently looking for the best backup option for a large and
> extremely
> > > transaction-heavy MariaDB database environment. I'm talking about up to
> > > 100.000.000 transactions a year (payment industry).
> > >
> > > It needs to connect to Spectrum Protect to store it's database data,
> it is
> > > acceptable if this is a two stage backup solution but not for restores
> due
> > > to the duration of a two stage restore.
> > >
> > > We have looked at one option but that used the traditional mysqldump
> > > methods that have proven to be unusable for this customer because the
> > > restore is up to 8 times slower than the backup and during the backup
> all
> > > transactions are stored to be committed later, this is an issue with
> this
> > > many transactions.
> > >
> > > zmanda seems to offer newer backup mechanics for MariaDB, i'm
> wondering if
> > > anybody used this with Spectrum Protect that can share some experiences
> > > with this solution.
> > > Also, any other ideas for solutions that are officially supporting
> Spectrum
> > > Protect would be great.
> > >
> > > Thanks in advance,
> > >   Stefan
> >
> > --
> >
> >  Met vriendelijke groeten/Kind Regards,
> >
> > Remco Post
> > r.p...@plcs.nl
> > +31 6 248 21 622
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354
> -- University of Washington School of Medicine
>


Re: MariaDB backups using modern MariaDB methods and high performance restores

2018-09-04 Thread Stefan Folkerts
> 100.000.000 transactions/year is a little over 3 transactions per second.

By "transaction" I mean a payment, not a single database transaction; I don't
know how many database transactions a single payment creates. What I do
know is that they are running a high database load, especially during peak
hours.
I'm sure there are databases doing many, many more transactions than this,
but it's enough to rule out the mysqldump backup solution for them, knowing
what they know about how it operates.




On Tue, Sep 4, 2018 at 1:12 PM Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com> wrote:

> Hi Stefan,
> Just out of curiosity: 100.000.000 transactions/year is a little over 3
> transactions per second. I would not call that heavily used, are these
> figures correct?
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: dinsdag 4 september 2018 11:29
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: MariaDB backups using modern MariaDB methods and high
> performance restores
>
> Yes we did, Repostor uses (at least the version we tested) mysql tools to
> backup and restore the database, the backup-impact and restore performance
> of the tool doesn't suite this customer environment.
> Now, let me be clear, I don't want to be negative about Repostor data
> protector because it worked great and is very easy to setup and use, it's
> just that the very high amount of transactions that are done at this site
> eliminate this tool as a solution.
>
> I will for sure advise Repostor data protector solutions when the demands
> are not so extreme as they are at this site.
>
>
>
>
> On Tue, Sep 4, 2018 at 11:20 AM Uwe Schreiber  >
> wrote:
>
> > Hi Stefan,
> >
> > did you have a look on Repostor DATA Protector?
> >
> > Regards Uwe
> >
> > > Am 04.09.2018 um 09:49 schrieb Stefan Folkerts <
> > stefan.folke...@gmail.com>:
> > >
> > > Hi all,
> > >
> > > I'm currently looking for the best backup option for a large and
> > extremely
> > > transaction-heavy MariaDB database environment. I'm talking about up
> > > to
> > > 100.000.000 transactions a year (payment industry).
> > >
> > > It needs to connect to Spectrum Protect to store it's database data,
> > > it
> > is
> > > acceptable if this is a two stage backup solution but not for
> > > restores
> > due
> > > to the duration of a two stage restore.
> > >
> > > We have looked at one option but that used the traditional mysqldump
> > > methods that have proven to be unusable for this customer because
> > > the restore is up to 8 times slower than the backup and during the
> > > backup all transactions are stored to be committed later, this is an
> > > issue with this many transactions.
> > >
> > > zmanda seems to offer newer backup mechanics for MariaDB, i'm
> > > wondering
> > if
> > > anybody used this with Spectrum Protect that can share some
> > > experiences with this solution.
> > > Also, any other ideas for solutions that are officially supporting
> > Spectrum
> > > Protect would be great.
> > >
> > > Thanks in advance,
> > >   Stefan
> >
> 
> For information, services and offers, please visit our web site:
> http://www.klm.com. This e-mail and any attachment may contain
> confidential and privileged material intended for the addressee only. If
> you are not the addressee, you are notified that no part of the e-mail or
> any attachment may be disclosed, copied or distributed, and that any other
> action related to this e-mail or attachment is strictly prohibited, and may
> be unlawful. If you have received this e-mail by error, please notify the
> sender immediately by return e-mail, and delete this message.
>
> Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or its
> employees shall not be liable for the incorrect or incomplete transmission
> of this e-mail or any attachments, nor responsible for any delay in receipt.
> Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch
> Airlines) is registered in Amstelveen, The Netherlands, with registered
> number 33014286
> 
>


Re: MariaDB backups using modern MariaDB methods and high performance restores

2018-09-04 Thread Stefan Folkerts
Yes we did. Repostor uses (at least in the version we tested) mysql tools to
back up and restore the database, and the backup impact and restore
performance of the tool don't suit this customer's environment.
Now, let me be clear: I don't want to be negative about Repostor Data
Protector, because it worked great and is very easy to set up and use; it's
just that the very high number of transactions done at this site
eliminates this tool as a solution.

I will for sure recommend Repostor Data Protector when the demands
are not as extreme as they are at this site.




On Tue, Sep 4, 2018 at 11:20 AM Uwe Schreiber 
wrote:

> Hi Stefan,
>
> did you have a look on Repostor DATA Protector?
>
> Regards Uwe
>
> > Am 04.09.2018 um 09:49 schrieb Stefan Folkerts <
> stefan.folke...@gmail.com>:
> >
> > Hi all,
> >
> > I'm currently looking for the best backup option for a large and
> extremely
> > transaction-heavy MariaDB database environment. I'm talking about up to
> > 100.000.000 transactions a year (payment industry).
> >
> > It needs to connect to Spectrum Protect to store it's database data, it
> is
> > acceptable if this is a two stage backup solution but not for restores
> due
> > to the duration of a two stage restore.
> >
> > We have looked at one option but that used the traditional mysqldump
> > methods that have proven to be unusable for this customer because the
> > restore is up to 8 times slower than the backup and during the backup all
> > transactions are stored to be committed later, this is an issue with this
> > many transactions.
> >
> > zmanda seems to offer newer backup mechanics for MariaDB, i'm wondering
> if
> > anybody used this with Spectrum Protect that can share some experiences
> > with this solution.
> > Also, any other ideas for solutions that are officially supporting
> Spectrum
> > Protect would be great.
> >
> > Thanks in advance,
> >   Stefan
>


MariaDB backups using modern MariaDB methods and high performance restores

2018-09-04 Thread Stefan Folkerts
Hi all,

I'm currently looking for the best backup option for a large and extremely
transaction-heavy MariaDB database environment. I'm talking about up to
100.000.000 transactions a year (payment industry).

It needs to connect to Spectrum Protect to store its database data. It is
acceptable if this is a two-stage backup solution, but not for restores, due
to the duration of a two-stage restore.

We have looked at one option, but that used the traditional mysqldump
method, which has proven to be unusable for this customer: the
restore is up to 8 times slower than the backup, and during the backup all
transactions are stored to be committed later, which is an issue with this
many transactions.

Zmanda seems to offer newer backup mechanics for MariaDB; I'm wondering if
anybody has used this with Spectrum Protect and can share some experiences
with this solution.
Also, any other ideas for solutions that officially support Spectrum
Protect would be great.

Thanks in advance,
   Stefan


Re: Improving Replication performance

2018-04-27 Thread Stefan Folkerts
We have just built a setup at a large university that replicates (on
average) 6.9TB per hour, but that data is deduplicated on the source server.
If you have 10Gb/s+ bandwidth and no limitation on performance on the
source, you should be able to handle 7TB per day (mixed workload, not only
tiny files) on a target server if it's an M blueprint or faster model,
without any issue.
The specifications of the compute hardware at this customer are less than
what you specify memory-wise; we are spot-on the M blueprint, but with NVMe
drives for the database.
If your source server can deliver the data fast enough, it's all about
super-fast database and active log performance on the target; it needs to do
an insane number of IOPS to chunk, hash and check all the chunks you are
throwing at it. And yes, memory helps, but 256GB isn't as important as
database speed and raw CPU power to plow through that data.

Did you run benchmarks on the database and active log volumes on your target
server?
We reach about 110,000 IOPS on the database volumes using NVMe, and we have
found that to be the key to unlocking Spectrum Protect ludicrous-mode.

https://imgur.com/a/SAA7OAZ
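
For reference, the benchmark meant here is the disk workload generator perl
script that ships with the Spectrum Protect blueprints. The script name and
flags below are from memory and may differ per blueprint release, so treat
this as a sketch and check the blueprint's disk benchmarking appendix:

# small-random-I/O test against the database/active log file systems
perl sp_disk_load_gen.pl workload=db fslist=/tsminst1/TSMdbspace00,/tsminst1/TSMdbspace01
# large-sequential-I/O test against the storage pool file systems
perl sp_disk_load_gen.pl workload=stgpool fslist=/tsminst1/TSMfile00,/tsminst1/TSMfile01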


On Thu, Apr 26, 2018 at 10:37 PM, Skylar Thompson  wrote:

> Are you CPU or disk-bound on the source or target servers? Even if you have
> lots of CPUs, replication might be running on a single thread and just
> using
> one CPU.
>
> On Thu, Apr 26, 2018 at 02:46:24PM -0400, Zoltan Forray wrote:
> > As we get deeper into Replication and my boss wants to use it more and
> more
> > as an offsite recovery platform.
> >
> > As we try to reach "best practices" of replicating everything, we are
> > finding this desire to be difficult if not impossible to achieve due to
> the
> > resource demands.
> >
> > Total we want to eventually replicate is around 700TB from 5-source
> servers
> > to 1-target server which is dedicated to replication.
> >
> > So the big question is, can this be done?
> >
> > We recently rebuilt the offsite target server to as big as we could
> afford
> > ($38K).  It has 256GB of RAM.  64-threads of CPU. Storage is primarily
> > 500TB of ISILON/NFS. Connectivity is via quad 10G (2-for IP traffic from
> > source servers and 2-for ISILON/NFS).
> >
> > Yet we can only replicate around 3TB daily when we backup around 7TB.
> >
> > Looking for suggestions/thoughts/experiences?
> >
> > All boxes are RHEL Linux and 7.1.7.300
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > Xymon Monitor Administrator
> > VMware Administrator
> > Virginia Commonwealth University
> > UCC/Office of Technology Services
> > www.ucc.vcu.edu
> > zfor...@vcu.edu - 804-828-4807
> > Don't be a phishing victim - VCU and other reputable organizations will
> > never use email to request that you reply with your password, social
> > security number or confidential personal information. For more details
> > visit http://phishing.vcu.edu/
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354
> -- University of Washington School of Medicine
>


Re: convert stgpool

2018-04-19 Thread Stefan Folkerts
I'm afraid Remco is right: the server can't (or simply won't) uncompress
the older client-side compressed data, so it will convert it compressed into
the container pool and you will most likely get very poor deduplication on
that data.
What kind of data is it and for how long do you need to keep it?
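
For reference, the conversion itself is done with CONVERT STGPOOL, typically
run in time slices until the legacy pool is empty. Pool names and values
below are only an example; check HELP CONVERT STGPOOL on your server level:

convert stgpool OLD_FILEPOOL NEW_CONTAINERPOOL maxprocess=4 duration=120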

On Thu, Apr 19, 2018 at 11:25 AM, Remco Post  wrote:

> it’s not a perfect world, AFAIK.
>
> > On 19 Apr 2018, at 10:36, Michael Prix  wrote:
> >
> > Hello *SMers,
> >
> >  just out of curiosity:
> > Given you have a storagepool with a legacy, non-deduplicated,
> file-devclass,
> > inside which files are stored with "compression yes" on the clientside.
> This
> > storagepool is converted to a directory-containerpool. How are the
> compressed
> > files handled during conversion?
> > In a perfect world, I would assume the files will be decompressed before
> > stored in the directorypool, so that the data can make use of the
> > deduplication.
> >
> > --
> > Michael Prix
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


Re: disabling compression and/or deduplication for a client backing up against deduped/compressed directory-based storage pool

2018-04-13 Thread Stefan Folkerts
I didn't want to pull you into that, Arnaud; I was just interested in the
performance test results, that's all.
I hope it gets worked out and the performance improves. Good luck.


On Thu, Apr 12, 2018 at 5:15 PM, PAC Brion Arnaud <
arnaud.br...@panalpina.com> wrote:

> Stefan,
>
> I do not want to enter the details of a 6 months lasting story, but to
> summarize it, such performance tests have been successfully conducted
> against our very first setup, which in between time has been subject to
> countless changes (TSM version, O.S. version, endianness from Big to Little
> endian, extension of the FS900 capacity, redesign of the storage pools
> layout and so on), the whole under huge time pressure, having the result
> that the current setup could not be benchmarked anymore, as it was already
> productive ...
>
> Cheers.
>
> Arnaud
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, April 12, 2018 2:10 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: disabling compression and/or deduplication for a client
> backing up against deduped/compressed directory-based storage pool
>
> I understand, I didn't know you were that deep into the case already, I
> wouldn't presume to be able to solve this via a few emails if support is
> working on it.
>
> What I am interested in is did you run the blueprint benchmarks? the perl
> script that can benchmark your database and your containerpool volumes?
> The blueprints give values that you should be getting in order to expect
> blueprint performance results, this way you can quantify your performance
> issue in how many IOP/s or MB/s your behind where you need to be to run the
> load you need to run.
>
> Regards,
> Stefan
>
>
>
>
> On Wed, Apr 11, 2018 at 2:10 PM, PAC Brion Arnaud <
> arnaud.br...@panalpina.com> wrote:
>
> > Dear Stefan,
> >
> > Thanks a lot for your very kind offer !
> >
> > Without underrating the power of this list, I however doubt that we will
> > able to find a solution that easily : we opened a case with IBM and
> > involved EMC/Dell as well, so far without much success, even after 5
> months
> > intensive monitoring and tuning attempts at all levels (Linux kernel,
> > communication layer, TSM DB fixes, access mode change on Isilon etc ...)
> >
> > I must share as well that some of our partners voices raised when the
> > decision had been made to go with Isilon storage, warning us that the
> > offered solution would not be powerful enough to sustain the intended
> > workload. It might very well be that they were right, and that in this
> > particular case, budget considerations have ruled over pragmatism,
> leading
> > to that very uncomfortable situation ...
> >
> > To finally answer your question : both active log and database are stored
> > on a Flashsytem 900 array, dedicated to TSM server only.
> >
> > Cheers.
> >
> > Arnaud
> >
> > 
> > 
> > Backup and Recovery Systems Administrator
> > Panalpina Management Ltd., Basle, Switzerland,
> > CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> > Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> > Direct: +41 (61) 226 19 78
> > e-mail: arnaud.br...@panalpina.com
> > This electronic message transmission contains information from Panalpina
> > and is confidential or privileged.
> > This information is intended only for the person (s) named above. If you
> > are not the intended recipient, any disclosure, copying, distribution or
> > use or any other action based on the contents of this
> >  information is strictly prohibited.
> >
> > If you receive this electronic transmission in error, please notify the
> > sender by e-mail, telephone or fax at the numbers listed above. Thank
> you.
> > 
> > 
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> > Stefan Folkerts
> > Sent: Wednesday, April 11, 2018 7:43 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: disabling compression and/or deduplication for a client
> > backing up against deduped/compressed directory-based storage pool
> >
> > That's no fun, maybe we can help!
> > What storage are you using for your active log and database?
> >
> > Regards,
> >   

Re: disabling compression and/or deduplication for a client backing up against deduped/compressed directory-based storage pool

2018-04-12 Thread Stefan Folkerts
I understand; I didn't know you were that deep into the case already. I
wouldn't presume to be able to solve this via a few emails if support is
working on it.

What I am interested in is: did you run the blueprint benchmarks, the perl
script that can benchmark your database and your container pool volumes?
The blueprints give values that you should be getting in order to expect
blueprint performance results; this way you can quantify your performance
issue in terms of how many IOPS or MB/s you're behind where you need to be
to run the load you need to run.

Regards,
Stefan




On Wed, Apr 11, 2018 at 2:10 PM, PAC Brion Arnaud <
arnaud.br...@panalpina.com> wrote:

> Dear Stefan,
>
> Thanks a lot for your very kind offer !
>
> Without underrating the power of this list, I however doubt that we will
> able to find a solution that easily : we opened a case with IBM and
> involved EMC/Dell as well, so far without much success, even after 5 months
> intensive monitoring and tuning attempts at all levels (Linux kernel,
> communication layer, TSM DB fixes, access mode change on Isilon etc ...)
>
> I must share as well that some of our partners voices raised when the
> decision had been made to go with Isilon storage, warning us that the
> offered solution would not be powerful enough to sustain the intended
> workload. It might very well be that they were right, and that in this
> particular case, budget considerations have ruled over pragmatism, leading
> to that very uncomfortable situation ...
>
> To finally answer your question : both active log and database are stored
> on a Flashsytem 900 array, dedicated to TSM server only.
>
> Cheers.
>
> Arnaud
>
> 
> 
> Backup and Recovery Systems Administrator
> Panalpina Management Ltd., Basle, Switzerland,
> CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> Direct: +41 (61) 226 19 78
> e-mail: arnaud.br...@panalpina.com
> This electronic message transmission contains information from Panalpina
> and is confidential or privileged.
> This information is intended only for the person (s) named above. If you
> are not the intended recipient, any disclosure, copying, distribution or
> use or any other action based on the contents of this
>  information is strictly prohibited.
>
> If you receive this electronic transmission in error, please notify the
> sender by e-mail, telephone or fax at the numbers listed above. Thank you.
> 
> ********
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Wednesday, April 11, 2018 7:43 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: disabling compression and/or deduplication for a client
> backing up against deduped/compressed directory-based storage pool
>
> That's no fun, maybe we can help!
> What storage are you using for your active log and database?
>
> Regards,
>Stefan
>
> On Mon, Apr 9, 2018 at 6:06 PM, PAC Brion Arnaud <
> arnaud.br...@panalpina.com
> > wrote:
>
> > Hi Stefan,
> >
> > Thanks a lot for appreciated feedback !
> >
> > >> You can, however, disable compression on the storagepool-level.
> >
> > This is unfortunately what I intended to avoid : if I disable it, then
> > lots of clients will be impacted, and the server's performance will for
> > sure improve ...
> >
> > >> Are you using an IBM blueprint configuration for the Spectrum Protect
> >
> > I wish I could : my life would have been much easier ! Unfortunately
> > management took the (definitively bad) decision to invest in a EMC/Dell
> > Isilon array to be our Spectrum Scale server storage.
> > I'm now fighting since 6 months to have the whole working together, so
> far
> > without real success : performance is horrible  :-(
> >
> > Cheers.
> >
> > Arnaud
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> > Stefan Folkerts
> > Sent: Thursday, April 05, 2018 5:48 PM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: disabling compression and/or deduplication for a client
> > backing up against deduped/compressed directory-based storage pool
> >
> > Hi,
> >
> > With the directory containerpool you cannot, for as far as I know,
> disable
> > an attempt to deduplicate the data and if the data is able to deduplica

Re: disabling compression and/or deduplication for a client backing up against deduped/compressed directory-based storage pool

2018-04-10 Thread Stefan Folkerts
That's no fun, maybe we can help!
What storage are you using for your active log and database?

Regards,
   Stefan

On Mon, Apr 9, 2018 at 6:06 PM, PAC Brion Arnaud <arnaud.br...@panalpina.com
> wrote:

> Hi Stefan,
>
> Thanks a lot for appreciated feedback !
>
> >> You can, however, disable compression on the storagepool-level.
>
> This is unfortunately what I intended to avoid : if I disable it, then
> lots of clients will be impacted, and the server's performance will for
> sure improve ...
>
> >> Are you using an IBM blueprint configuration for the Spectrum Protect
>
> I wish I could : my life would have been much easier ! Unfortunately
> management took the (definitively bad) decision to invest in a EMC/Dell
> Isilon array to be our Spectrum Scale server storage.
> I'm now fighting since 6 months to have the whole working together, so far
> without real success : performance is horrible  :-(
>
> Cheers.
>
> Arnaud
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, April 05, 2018 5:48 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: disabling compression and/or deduplication for a client
> backing up against deduped/compressed directory-based storage pool
>
> Hi,
>
> With the directory containerpool you cannot, for as far as I know, disable
> an attempt to deduplicate the data and if the data is able to deduplicate
> it will be deduplicated.
> You can, however, disable compression on the storagepool-level. If you
> disable it on the containerpool client-side settings for deduplication will
> have no effect on compression within the pool.
>
> Are you using an IBM blueprint configuration for the Spectrum Protect
> environment?
>
> Regards,
>Stefan
>
>
> On Tue, Apr 3, 2018 at 6:06 PM, PAC Brion Arnaud <
> arnaud.br...@panalpina.com
> > wrote:
>
> > Hi All,
> >
> > Following to global client backup performance issues on some new TSM
> > server, which I suspect to be related to the workload induced on TSM
> > instance by deduplication/compression operations, I would like to do some
> > testing with a client, selectively disabling compression or
> deduplication,
> > possibly both of them on it.
> >
> > However, the TSM server has been configured to only make use of
> > directory-based storage pools, which have been defined having
> deduplication
> > and compression enabled.
> >
> > Thus my question : is there any mean to configure a client, so that its
> > data  will not be compressed or deduplicated ?
> >
> > From my understanding, setting up "compression no" in the client option
> > file will be of no use, as the server will still be compressing the data
> at
> > storage pool level.
> > Likewise, setting up "deduplication no" in the client option file will
> > refrain the client to proceed to deduplication, but the server still
> will.
> > The last remaining possibility  that I can think of, to disable
> > deduplication, would be to make use of some "exclude.dedup" statement on
> > client side, that would exclude anything subject to backup.
> >
> > What are your thoughts ? Am I condemned to define new storage pools not
> > enabled for deduplication and or compression to do such testing, or is
> > there some other mean ?
> >
> > Thanks a lot for appreciated feedback !
> >
> > Cheers.
> >
> > Arnaud
> >
> > 
> > 
> > Backup and Recovery Systems Administrator
> > Panalpina Management Ltd., Basle, Switzerland,
> > CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> > Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> > Direct: +41 (61) 226 19 78
> > e-mail: arnaud.br...@panalpina.com<mailto:arnaud.br...@panalpina.com>
> > This electronic message transmission contains information from Panalpina
> > and is confidential or privileged.
> > This information is intended only for the person (s) named above. If you
> > are not the intended recipient, any disclosure, copying, distribution or
> > use or any other action based on the contents of this
> > information is strictly prohibited.
> >
> > If you receive this electronic transmission in error, please notify the
> > sender by e-mail, telephone or fax at the numbers listed above. Thank
> you.
> > 
> > 
> >
>


Re: disabling compression and/or deduplication for a client backing up against deduped/compressed directory-based storage pool

2018-04-05 Thread Stefan Folkerts
Hi,

With the directory container pool you cannot, as far as I know, disable the
attempt to deduplicate the data; if the data can be deduplicated, it will be
deduplicated.
You can, however, disable compression at the storage pool level. If you
disable it on the container pool, client-side settings for deduplication
will have no effect on compression within the pool.

Are you using an IBM blueprint configuration for the Spectrum Protect
environment?

Regards,
   Stefan
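
To make the two knobs concrete, a minimal sketch. The pool name and path
pattern are examples, and the client-side exclude only matters where the
node itself does client-side deduplication:

* server side: keep deduplication, but switch off compression for the pool
update stgpool CONTAINERPOOL compression=no
* client side (dsm.sys / dsm.opt): skip client-side dedup for selected files
exclude.dedup /bigdata/.../*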


On Tue, Apr 3, 2018 at 6:06 PM, PAC Brion Arnaud  wrote:

> Hi All,
>
> Following to global client backup performance issues on some new TSM
> server, which I suspect to be related to the workload induced on TSM
> instance by deduplication/compression operations, I would like to do some
> testing with a client, selectively disabling compression or deduplication,
> possibly both of them on it.
>
> However, the TSM server has been configured to only make use of
> directory-based storage pools, which have been defined having deduplication
> and compression enabled.
>
> Thus my question : is there any mean to configure a client, so that its
> data  will not be compressed or deduplicated ?
>
> From my understanding, setting up "compression no" in the client option
> file will be of no use, as the server will still be compressing the data at
> storage pool level.
> Likewise, setting up "deduplication no" in the client option file will
> refrain the client to proceed to deduplication, but the server still will.
> The last remaining possibility  that I can think of, to disable
> deduplication, would be to make use of some "exclude.dedup" statement on
> client side, that would exclude anything subject to backup.
>
> What are your thoughts ? Am I condemned to define new storage pools not
> enabled for deduplication and or compression to do such testing, or is
> there some other mean ?
>
> Thanks a lot for appreciated feedback !
>
> Cheers.
>
> Arnaud
>
> 
> 
> Backup and Recovery Systems Administrator
> Panalpina Management Ltd., Basle, Switzerland,
> CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> Direct: +41 (61) 226 19 78
> e-mail: arnaud.br...@panalpina.com
> This electronic message transmission contains information from Panalpina
> and is confidential or privileged.
> This information is intended only for the person (s) named above. If you
> are not the intended recipient, any disclosure, copying, distribution or
> use or any other action based on the contents of this
> information is strictly prohibited.
>
> If you receive this electronic transmission in error, please notify the
> sender by e-mail, telephone or fax at the numbers listed above. Thank you.
> 
> 
>


Re: Why is is dedup cache file not excluded by default ?

2018-02-26 Thread Stefan Folkerts
Ha, I didn't know that! I guess there is one directory excluded by default.
Thanks for pointing that out.

On Mon, Feb 26, 2018 at 10:59 AM, Martin Janosik <martin.jano...@cz.ibm.com>
wrote:

> Hello,
>
> your presumtion doesn't seems accurate to me (Windows, Linux, AIX clients,
> mostly v7.1):
>
> tsm> q inclexc
> *** FILE INCLUDE/EXCLUDE ***
> Mode Function  Pattern (match from top down)  Source File
>  - -- -
> No exclude filespace statements defined.
> Excl Directory /.../.TsmCacheDir  TSM
> Include Snapshot Retry
> TSM500*/opt/tivoli/tsm/client/ba/bin/dsm.sys
> No DFS include/exclude statements defined.
>
> Any directory .TsmCacheDir is excluded by software itself
>
> Martin J.
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 2018-02-26
> 07:40:18:
>
> > From: Stefan Folkerts <stefan.folke...@gmail.com>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 2018-02-26 07:42
> > Subject: Re: [ADSM-L] Why is is dedup cache file not excluded by
> default ?
> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > For as far as I know nothing is excluded by the code itself and I think
> > this is a policy thing.
> >
> > Since you can install the software in different locations and you can use
> > different names for different files (ie log files) IBM can never be 100%
> > sure that the file it is hardcoded to exclude is the file we think it is.
> > There is always that chance. It would have to be hardcoded as a sort of
> > dynamic exclude like "I will exclude this is cache file that I am
> currently
> > actually using during the each backup".
> >
> > On Sat, Feb 24, 2018 at 5:26 PM, Erwann SIMON <erwann.si...@free.fr>
> wrote:
> >
> > > Hi Rick,
> > >
> > > Yes, this is what I always do. But I was wondering why it's not
> excluded
> > > by the code itself.
> > >
> > > --
> > > Best regards / Cordialement / مع تحياتي
> > > Erwann SIMON
> > >
> > > - Mail original -
> > > De: "Rick Adamson" <rickadam...@segrocers.com>
> > > À: ADSM-L@VM.MARIST.EDU
> > > Envoyé: Vendredi 23 Février 2018 12:31:17
> > > Objet: Re: [ADSM-L] Why is is dedup cache file not excluded by
> default ?
> > >
> > > Erwann,
> > > I have client option sets defined on the server that I use to globally
> > > exclude it from all file level backups.
> > >
> > > -Rick Adamson
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of
> > > Erwann SIMON
> > > Sent: Friday, February 23, 2018 1:14 AM
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: [ADSM-L] Why is is dedup cache file not excluded by default ?
> > >
> > > * This email originated outside of the organization. Use caution when
> > > opening attachments or clicking links. *
> > >
> > > --
> > > Hi all,
> > >
> > > I'm wondering why the dedup cache file
> (TSMDEDUPDB_servernamenodename.DB,
> > > located in the dedupcachepath directory) is not excluded by default ?
> > > I generally exclude it because it causes somes retries and do think
> that
> > > there's no need to back it up. But I may be wrong.
> > > So, must we really back it up ? And why ?
> > >
> > > --
> > > Best regards / Cordialement / مع تحياتي
> > > Erwann SIMON
> > >
> > > **CONFIDENTIALITY NOTICE** This electronic message contains information
> > > from Southeastern Grocers, LLC and is intended only for the use of the
> > > addressee. This message may contain information that is privileged,
> > > confidential and/or exempt from disclosure under applicable Law. This
> > > message may not be read, used, distributed, forwarded, reproduced or
> stored
> > > by any other than the intended recipient. If you are not the intended
> > > recipient, please delete and notify the sender
> > >
> >
>


Re: Why is is dedup cache file not excluded by default ?

2018-02-25 Thread Stefan Folkerts
As far as I know, nothing is excluded by the code itself, and I think this
is a policy decision.

Since you can install the software in different locations and you can use
different names for different files (e.g. log files), IBM can never be 100%
sure that the file it is hardcoded to exclude is the file we think it is.
There is always that chance. It would have to be hardcoded as a sort of
dynamic exclude, like "I will exclude the cache file that I am currently
actually using during each backup".
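
Rick's approach further down, a server-side client option set, would look
roughly like this. The cache file path/pattern is an example and depends on
where dedupcachepath points:

define cloptset GLOBAL_EXCL description="global excludes"
define clientopt GLOBAL_EXCL inclexcl "exclude /opt/tivoli/tsm/client/ba/bin/TSMDEDUPDB*" force=yes
update node SOMENODE cloptset=GLOBAL_EXCL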

On Sat, Feb 24, 2018 at 5:26 PM, Erwann SIMON  wrote:

> Hi Rick,
>
> Yes, this is what I always do. But I was wondering why it's not excluded
> by the code itself.
>
> --
> Best regards / Cordialement / مع تحياتي
> Erwann SIMON
>
> - Mail original -
> De: "Rick Adamson" 
> À: ADSM-L@VM.MARIST.EDU
> Envoyé: Vendredi 23 Février 2018 12:31:17
> Objet: Re: [ADSM-L] Why is is dedup cache file not excluded by default ?
>
> Erwann,
> I have client option sets defined on the server that I use to globally
> exclude it from all file level backups.
>
> -Rick Adamson
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Erwann SIMON
> Sent: Friday, February 23, 2018 1:14 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Why is is dedup cache file not excluded by default ?
>
> * This email originated outside of the organization. Use caution when
> opening attachments or clicking links. *
>
> --
> Hi all,
>
> I'm wondering why the dedup cache file (TSMDEDUPDB_servernamenodename.DB,
> located in the dedupcachepath directory) is not excluded by default ?
> I generally exclude it because it causes somes retries and do think that
> there's no need to back it up. But I may be wrong.
> So, must we really back it up ? And why ?
>
> --
> Best regards / Cordialement / مع تحياتي
> Erwann SIMON
>
> **CONFIDENTIALITY NOTICE** This electronic message contains information
> from Southeastern Grocers, LLC and is intended only for the use of the
> addressee. This message may contain information that is privileged,
> confidential and/or exempt from disclosure under applicable Law. This
> message may not be read, used, distributed, forwarded, reproduced or stored
> by any other than the intended recipient. If you are not the intended
> recipient, please delete and notify the sender
>


Re: empty containers?

2018-02-21 Thread Stefan Folkerts
I know 8.1.4.0 has a defrag option that moves the content into existing
containers that have free space instead of moving the container as a whole;
that might be a solution.
I do think it's new in 8.1.4.0, and I don't think there is a way to get rid
of them before that release.


On Wed, Feb 21, 2018 at 5:52 PM, Remco Post  wrote:

> I don’t know, nor do I care one bit. What I care about are those empty
> containers that I can’t get rid of.
>
> > On 21 Feb 2018, at 16:39, Loon, Eric van (ITOPT3) - KLM <
> eric-van.l...@klm.com> wrote:
> >
> > Hi Remco,
> > Could it be that your SQL Server admins are compressing the data which
> is send to the TSM server? If I'm correct you can turn on compression on
> the SQL server itself as well as on the TDP/API client.
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of Remco Post
> > Sent: woensdag 21 februari 2018 16:23
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: empty containers?
> >
> > we’re getting about 0% dedup savings (3 * full, plus diffful, plus log
> backups) and about maybe 20% compression savings.
> >
> >> On 21 Feb 2018, at 16:03, PAC Brion Arnaud 
> wrote:
> >>
> >> Hi Remco,
> >>
> >> No direct answer to your question, but your statement " dedup on SQL is
> non-existant" is kind of astonishing me ...
> >>
> >> Here an extract of the output for "q stg DIR_SQL f=d" on my server (you
> guessed it, this stgpool is dedicated to SQL backups ...) :
> >>
> >>   Deduplication Savings: 34,513 G (60.21%)
> >>  Compression Savings: 17,517 G (76.81%)
> >>Total Space Saved: 52,030 G (90.77%)
> >>
> >> Altogether 90,7 % data reduction !
> >>
> >> I would be curious to hear what our TSM fellows in this list are
> achieving ...
> >>
> >> Cheers.
> >>
> >> Arnaud
> >>
> >>
> >> -Original Message-
> >> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of Remco Post
> >> Sent: Wednesday, February 21, 2018 3:06 PM
> >> To: ADSM-L@VM.MARIST.EDU
> >> Subject: empty containers?
> >>
> >> Hi All,
> >>
> >> today I ran into something new: an empty directory container. No, not
> an empty directory, and empty container. We’re moving SQL backups from
> directories to traditional tape, dedup on SQL is non-existant, so tape is
> cheaper, and after the expiration period the container pool should be
> empty. Well this being the real world and all, of course I have a few
> containers left. Move container bla bla to reduce disk usage. All nice,
> exceprt for some containers that are now empty, thus can’t be moved… How do
> I get rid of those? TSM server level 8.1.1.100
> >>
> >> --
> >>
> >> Met vriendelijke groeten/Kind Regards,
> >>
> >> Remco Post
> >> r.p...@plcs.nl
> >> +31 6 248 21 622
> >
> > --
> >
> > Met vriendelijke groeten/Kind Regards,
> >
> > Remco Post
> > r.p...@plcs.nl
> > +31 6 248 21 622
> > 
> > For information, services and offers, please visit our web site:
> http://www.klm.com. This e-mail and any attachment may contain
> confidential and privileged material intended for the addressee only. If
> you are not the addressee, you are notified that no part of the e-mail or
> any attachment may be disclosed, copied or distributed, and that any other
> action related to this e-mail or attachment is strictly prohibited, and may
> be unlawful. If you have received this e-mail by error, please notify the
> sender immediately by return e-mail, and delete this message.
> >
> > Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or
> its employees shall not be liable for the incorrect or incomplete
> transmission of this e-mail or any attachments, nor responsible for any
> delay in receipt.
> > Koninklijke Luchtvaart Maatschappij N.V. (also known as KLM Royal Dutch
> Airlines) is registered in Amstelveen, The Netherlands, with registered
> number 33014286
> > 
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


Re: Exchange backup speed

2018-02-19 Thread Stefan Folkerts
Wow, that's a huge environment.
I was asking because the best way to improve Exchange backup and restore
performance is (in my opinion) to use the Virtual Environments method of
Exchange backup and restore: every backup is essentially incremental, and a
restore only restores the data you actually want instead of an entire
mailstore for that single 27KB mail.
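
For the archive, since the question comes up: the VE method means backing
the Exchange servers up as VMs with application protection enabled in the
data mover options, roughly as below. VM and vCenter names are invented, and
it obviously only applies when the mailbox servers are virtual, which they
are not in Tom's case:

* data mover dsm.opt snippet (names are examples)
VMCHOST             vcenter01.example.com
DOMAIN.VMFULL       "VM=exch01,exch02"
* request an application-consistent (VSS) snapshot inside the guest
INCLUDE.VMTSMVSS    exch01
INCLUDE.VMTSMVSS    exch02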



On Sun, Feb 18, 2018 at 8:38 PM, Tom Alverson <tom.alver...@gmail.com>
wrote:

> No, they tried VM's once and the performance was poor.  They had to switch
> back to physical servers which have 4 cores (32 processors) and 384GB of
> ram each.
>
> On Sun, Feb 18, 2018 at 10:37 AM, Stefan Folkerts <
> stefan.folke...@gmail.com
> > wrote:
>
> > Hi Tom, are the Exchange servers virtualized on vSphere?
> >
> > On Sat, Feb 17, 2018 at 12:55 AM, Tom Alverson <tom.alver...@gmail.com>
> > wrote:
> >
> > > >
> > > >
> > > > We are trying to speed up our Exchange backups that are currently
> only
> > > using about 15% of the network bandwidth.  Our servers are running
> > Windows
> > > 2012R2 and Exchange 2013 CU15 with TSM 7.1.0.1 and TDPEXC 7.1.0.1.
> > > Currently we are backing up 15 DAGS per Exchange server (we have
> multiple
> > > exchange servers) and we are only backing up on servers that are
> standby
> > > replicas.  Currently we are trying a 14 day schedule were we do a full
> > > backup of a different DAG per day, and incrementals on the rest.  Even
> > > doing this we are having trouble completing them in 24 hours (before
> the
> > > next day's backup is supposed to start).
> > >
> > > I saw an old posting from Del saying to increase RESOURCEUTILIZATION on
> > the
> > > DSMAGENT.  Does that mean the DSM.OPT in the BACLIENT folder?  It was
> set
> > > at 2.  Do either the buffers or buffrsize options make any difference?
> > >
> > > Also if we want to "parallelize" the backups does that mean separate
> > > scheduler services for each one?  We currently use 14 different batch
> > files
> > > (for the 14 days of the cycle) with something like this:
> > >
> > > [day1.bat]
> > >
> > > tdpexcc.exe backup dag1 full
> > > tdpexcc.exe backup dag2,dag3,dag4,dag5 incr
> > > tdpexcc.exe backup dag6,dag7,dag8,dag9 incr
> > > tdpexcc.exe backup dag10,dag11,dag12,dag13 incr
> > > tcpexcc.exe backup dag14,dag15 incr
> > > exit
> > >
> >
>


Re: Exchange backup speed

2018-02-18 Thread Stefan Folkerts
Hi Tom, are the Exchange servers virtualized on vSphere?

On Sat, Feb 17, 2018 at 12:55 AM, Tom Alverson 
wrote:

> >
> >
> > We are trying to speed up our Exchange backups that are currently only
> using about 15% of the network bandwidth.  Our servers are running Windows
> 2012R2 and Exchange 2013 CU15 with TSM 7.1.0.1 and TDPEXC 7.1.0.1.
> Currently we are backing up 15 DAGS per Exchange server (we have multiple
> exchange servers) and we are only backing up on servers that are standby
> replicas.  Currently we are trying a 14 day schedule were we do a full
> backup of a different DAG per day, and incrementals on the rest.  Even
> doing this we are having trouble completing them in 24 hours (before the
> next day's backup is supposed to start).
>
> I saw an old posting from Del saying to increase RESOURCEUTILIZATION on the
> DSMAGENT.  Does that mean the DSM.OPT in the BACLIENT folder?  It was set
> at 2.  Do either the buffers or buffrsize options make any difference?
>
> Also if we want to "parallelize" the backups does that mean separate
> scheduler services for each one?  We currently use 14 different batch files
> (for the 14 days of the cycle) with something like this:
>
> [day1.bat]
>
> tdpexcc.exe backup dag1 full
> tdpexcc.exe backup dag2,dag3,dag4,dag5 incr
> tdpexcc.exe backup dag6,dag7,dag8,dag9 incr
> tdpexcc.exe backup dag10,dag11,dag12,dag13 incr
> tcpexcc.exe backup dag14,dag15 incr
> exit
>


Re: VMCTLMC space for a full backup.

2018-02-07 Thread Stefan Folkerts
I understand Remco's point, but I think TSM can handle 7 years of retention
just fine and that the format is the bigger challenge here... full VM.
I'm sure there are rules that require you to do this, but 7 years is crazy
long for full VM backups if it's about a lot of source data.
So far I've always been able to convince the customer that a regular
incremental backup of the specific data with a B/A node, with 7 years
retention, will be just fine and a lot smaller than the whole VM. It also
gives you a better guarantee that you will be able to restore, because a VM
that's 7 years old might have compatibility issues with the then-current
vSphere stack.

I'm sorry I can't help you with your sizing; I would share the data if I had
it, but I don't. I give VM backups a maximum of 3 months retention, and
that's incremental forever only.
Good luck.


On Wed, Feb 7, 2018 at 9:07 AM, Remco Post  wrote:

> > Op 7 feb. 2018, om 03:16 heeft Harris, Steven  BTFINANCIALGROUP.COM> het volgende geschreven:
> >
> > Hi Guys
> >
> > There is a really good explanation of VMCTLMC Sizing for incremental VM
> backups at  http://adsm.se/?p=546
> > I run a monthly full for compliance reasons on all the Prod Vms, and am
> trying to understand the implications for   VMCTLMC Sizing.  So far I
> suppose its 8000 megablocks  as well as 8000 control files per TB so
> 8000*(128+73) KB ~= 1.5 GB/TB.
> >
> > As there is a 7 year retention for these I see why I'm running out of
> space.
> >
> > The only option I can see would be a 1 year retention and a monthly
> export to tape for this data.  What do others do?
> >
>
> I just keep on insisting: backups are for disasters (big and small)
> archives are for lawyers (and other nitwits that make up rules). So, 7 year
> retention goes into an archive solution, not TSM. And I can make people
> understand quite simply: so the e-mail I get on the 5th of the month and
> delete the next day is less important to rule makers  than the one I get on
> the day of the month that we happen to take our full backups? I know what
> I’ll do, I won’t do the illegal stuff at the end of the month!
>
> Yes, that doesn’t solve your problem….
>
> > Cheers
> >
> > Steve
> >
> > TSM Admin/Consultant
> > Canberra/Australia
> >
> > This message and any attachment is confidential and may be privileged or
> otherwise protected from disclosure. You should immediately delete the
> message if you are not the intended recipient. If you have received this
> email by mistake please delete it from your system; you should not copy the
> message or disclose its content to anyone.
> >
> > This electronic communication may contain general financial product
> advice but should not be relied upon or construed as a recommendation of
> any financial product. The information has been prepared without taking
> into account your objectives, financial situation or needs. You should
> consider the Product Disclosure Statement relating to the financial product
> and consult your financial adviser before making a decision about whether
> to acquire, hold or dispose of a financial product.
> >
> > For further details on the financial product please go to
> http://www.bt.com.au
> >
> > Past performance is not a reliable indicator of future performance.
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


Re: Need guidance on how to protect stgpool to cloud

2018-02-02 Thread Stefan Folkerts
>I don't you can use cloud storage as a copypool
That was supposed to be "I don't think you can use cloud storage as a
copypool..."


On Fri, Feb 2, 2018 at 12:40 PM, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:

>
> I don't you can use cloud storage as a copypool so that would mean you are
> placing your primarypool in the cloud and limiting (in an extreme way) your
> restore performance.
>
> It is fast enough for archival purposes (if you set it up correctly and
> have enough of a local buffering pool and bandwith) and you can but backup
> data in but getting backup data out fast enough to meet any serious SLA?
> Hmm...I would look closely into that and start with a very small POC,
> that's the thing with cloud storage, you can actually do that fairly easy.
>
> I wouldn't use cloud based storage for TSM for anything other than a
> replacement for archive like data with a low retrieve rate.
>
> If you want to get rid of tape and not invest in setting up your own
> offsite setup I'm sure there are plenty of electronic-vaulting solutions
> like the ones we offer you can simply plug into. Just make sure your DR
> plan is compatible with these solutions. It's can get complicated, it's all
> connected.
>
>
>
> On Thu, Feb 1, 2018 at 3:27 PM, Michaud, Luc [Analyste principal -
> environnement AIX] <luc.micha...@stm.info> wrote:
>
>> Greetings to you all,
>>
>> We've setup a blueprint replicated environment with directory container
>> pools on both sides.
>>
>> We protect the primary node stgpools to tape as well, with offsite
>> movements.
>>
>> Now we want to get rid of the tapes, most likely by leveraging AWS
>> Glacier.
>>
>> We have been exposed to a limitation of containers only being able to
>> have 1 container protection and 1 tape protection.
>>
>> How have you guys done it ?  Any trick or caveat that we should be aware
>> of ?
>>
>> Regards,
>>
>> Luc Michaud
>> Société de Transport de Montréal
>>
>>
>


Re: Need guidance on how to protect stgpool to cloud

2018-02-02 Thread Stefan Folkerts
I don't you can use cloud storage as a copypool so that would mean you are
placing your primarypool in the cloud and limiting (in an extreme way) your
restore performance.

It is fast enough for archival purposes (if you set it up correctly and
have enough of a local buffering pool and bandwidth) and you can put backup
data in, but getting backup data out fast enough to meet any serious SLA?
Hmm... I would look closely into that and start with a very small POC;
that's the thing with cloud storage, you can actually do that fairly easily.

I wouldn't use cloud-based storage for TSM for anything other than a
replacement for archive-like data with a low retrieve rate.

If you want to get rid of tape and not invest in setting up your own
offsite setup, I'm sure there are plenty of electronic-vaulting solutions,
like the ones we offer, that you can simply plug into. Just make sure your
DR plan is compatible with these solutions. It can get complicated; it's all
connected.
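
For completeness: on the server side, putting the primary pool in the cloud
means a cloud-container storage pool, something like the sketch below.
Endpoint, bucket and credentials are placeholders, and note that archive
tiers with retrieval delays, such as Glacier, behave very differently from
plain S3:

define stgpool CLOUDPOOL stgtype=cloud cloudtype=s3 cloudurl=https://s3.eu-west-1.amazonaws.com identity=ACCESSKEY password=SECRETKEY bucketname=sp-offsite
* local disk buffer (accelerator cache) in front of the object storage
define stgpooldirectory CLOUDPOOL /sp/cloudcache01,/sp/cloudcache02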



On Thu, Feb 1, 2018 at 3:27 PM, Michaud, Luc [Analyste principal -
environnement AIX]  wrote:

> Greetings to you all,
>
> We've setup a blueprint replicated environment with directory container
> pools on both sides.
>
> We protect the primary node stgpools to tape as well, with offsite
> movements.
>
> Now we want to get rid of the tapes, most likely by leveraging AWS Glacier.
>
> We have been exposed to a limitation of containers only being able to have
> 1 container protection and 1 tape protection.
>
> How have you guys done it ?  Any trick or caveat that we should be aware
> of ?
>
> Regards,
>
> Luc Michaud
> Société de Transport de Montréal
>
>


Spectre & Meltdown patching in relation to Spectrum Protect

2018-01-22 Thread Stefan Folkerts
Hi,

Has anybody seen any information from IBM in relation to Spectre & Meltdown
patching for Spectrum Protect servers?
We have a customer who has found that on systems that do lots of small
I/Os, performance can drop 50% on Intel systems; this was seen with
synthetic benchmarks, not actual load.

So this was certainly not a Spectrum Protect load but I was thinking that
Spectrum Protect might fall into the category that can experience some sort
of performance drop when patched for Spectre and Meltdown.

Does anybody know what is the impact of these patches on Spectrum Protect
systems?

Will the blueprints need to be adjusted, for instance, or is there some
percentage of lower performance we should expect after patching, and does
it vary per platform?

Regards,
   Stefan


Re: Looking for a solution for DFS backups - a.k.a. how do you do it

2018-01-18 Thread Stefan Folkerts
I agree with Steven: the only solutions I can think of are build it or buy
it (if it's even available; I have never heard of anything like this).
I would design and build a low-level solution myself and test that with a
few users; if that works, try to find somebody who can actually create a
pretty GUI around it and build it up from there.
As long as you are still using the BA client underneath it all and you only
use it for restores, there isn't that much risk in it, I think. I would go
with PowerShell, since you are doing the backups on Windows and it's capable
stuff, and build something (very flexible) from there to later have somebody
code a web GUI for it.
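
As a very rough illustration of the low-level layer meant here: one option
file per department node and a wrapper that just shells out to the BA client
for restores. Node, path and option file names below are invented:

rem restore a department's folder from its own node, without the web client
dsmc restore "\\isilon\dept01\projects\*" -subdir=yes -pick -optfile="C:\tsm\dept01\dsm.opt"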

On Thu, Jan 11, 2018 at 11:49 PM, Harris, Steven <
steven.har...@btfinancialgroup.com> wrote:

> I feel for you Zoltan
>
> My users are demanding, but at least the corporate management structure
> means I have some measure of control over their demands.
>
> There is a TSM client REST API that came out at 7.1.3.  This can be used
> to run backups and restores although the API guide explicitly states that
> it is not supposed to be used for long-running tasks.  I don’t know if
> later versions have lifted that restriction.
>
> Since you are a university, you might have some smart coders available. I
> envisage a web-based service that at the back end runs this API to select
> what to restore and then run it.  That could all reside on the one box,
> with it effectively being a reverse proxy to your Windows backup servers.
>
> As you aren’t the only one in this particular bind, the result might be
> commercially viable as a paid product.  If it gets that far I claim 5% OK?
>
> Cheers
>
> Steve
>
> Steven Harris
> TSM Admin/Consultant
> Canberra Australia
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Zoltan Forray
> Sent: Friday, 12 January 2018 7:39 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Looking for a solution for DFS backups - a.k.a. how do
> you do it
>
> With the demise of the B/A web-client in 7.1.8+, we are in desperate need
> of an alternative solution to handling our DFS/ISILON backups.
>
> Being a university, the big issue is that everyone wants control over
> backups to be able to perform restores by themselves!
>
> Our current (soon to be unusable) solution is 3-dedicated physical Windows
> servers with 25-configurations/services (each) of the B/A client (each with
> unique ports for the web-client).  The backup schedules contain the
> specific filesystem/mount it backs up.  So, department level folks can use
> a web browser to connect to the correct port on the backup servers to
> manage their restores.
>
> The volume of backups makes it almost impossible to shift everything to
> one node (380TB / 225M objects) and if one server was able to handle it, it
> would shift restore responsibility to some sort of "help desk"!
>
> So, how do you handle this kind of scenario in your
> institution/organization?
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon
> Monitor Administrator VMware Administrator Virginia Commonwealth University
> UCC/Office of Technology Services www.ucc.vcu.edu zfor...@vcu.edu -
> 804-828-4807 Don't be a phishing victim - VCU and other reputable
> organizations will never use email to request that you reply with your
> password, social security number or confidential personal information. For
> more details visit http://phishing.vcu.edu/
>
>
> This message and any attachment is confidential and may be privileged or
> otherwise protected from disclosure. You should immediately delete the
> message if you are not the intended recipient. If you have received this
> email by mistake please delete it from your system; you should not copy the
> message or disclose its content to anyone.
>
> This electronic communication may contain general financial product advice
> but should not be relied upon or construed as a recommendation of any
> financial product. The information has been prepared without taking into
> account your objectives, financial situation or needs. You should consider
> the Product Disclosure Statement relating to the financial product and
> consult your financial adviser before making a decision about whether to
> acquire, hold or dispose of a financial product.
>
> For further details on the financial product please go to
> http://www.bt.com.au
>
> Past performance is not a reliable indicator of future performance.
>


Re: Hyper-V VM backup and restore sessions (single VM restore performance)

2017-12-17 Thread Stefan Folkerts
Thanks Del.

On Fri, Dec 15, 2017 at 5:43 PM, Del Hoobler <hoob...@us.ibm.com> wrote:

> Hi Stefan,
>
> Data Protection for Hyper-V does not currently have the same
> parallelization that is there for VMware (except for an entire VM in
> parallel).
>
> It is under consideration for a future release.
>
>
> Del
>
> 
>
>
> "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 12/15/2017
> 02:29:29 AM:
>
> > From: Stefan Folkerts <stefan.folke...@gmail.com>
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 12/15/2017 02:32 AM
> > Subject: Hyper-V VM backup and restore sessions (single VM restore
> > performance)
> > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> >
> > Hi all,
> >
> > I have very little experience with Hyper-V backups up until now but we
> have
> > a customer who is interested in this functionality using Spectrum
> Protect.
> >
> > My question is, can Hyper-V backups but especially restores utilize
> > multiple session to and from the server per backup and restore
> operation?
> > It’s important because a single stream restore like VMware used to have
> > before 8.1 will not meet the customers RTO and the documentation states
> > that the maxrestoresessions parameter is only for VMware, but there
> might
> > be something else ging on with Hyper-V restores.
> >
> > Thanks,
> >Stefan
> >
>
>
>


Hyper-V VM backup and restore sessions (single VM restore performance)

2017-12-14 Thread Stefan Folkerts
Hi all,

I have very little experience with Hyper-V backups up until now but we have
a customer who is interested in this functionality using Spectrum Protect.

My question is: can Hyper-V backups, and especially restores, utilize
multiple sessions to and from the server per backup and restore operation?
It's important because a single-stream restore, like VMware used to have
before 8.1, will not meet the customer's RTO, and the documentation states
that the maxrestoresessions parameter is only for VMware, but there might
be something else going on with Hyper-V restores.

Thanks,
   Stefan


Re: Moving archive data to a directory container

2017-12-10 Thread Stefan Folkerts
Yup, that's the trick, it seems Remco and I wear the same t-shirt. :-)

On Thu, Dec 7, 2017 at 3:04 PM, Remco Post  wrote:

> Hi Eric,
>
> 1- on the target copy the domain, change copygroup destinations to
> filepool, act poli
> 2- on the target, reg node on the copied dom
> 3- export node filed=whatever replacedefs=n merge=yes
> 4- on the target, upd n to desired dom
> 5- on the target convert stg
>
> been there, done that, got the t-shirt. Thanks for hiring a specialist ;-)
>
> > On 7 Dec 2017, at 11:27, Loon, Eric van (ITOPT3) - KLM <
> eric-van.l...@klm.com> wrote:
> >
> > Hi guys,
> > Just to let you know: the trick to export to a file pool and convert it
> afterwards won't work... You cannot specify a different storagepool in the
> export node command, so the export will go to the storagepool defined in
> the copygroup of the node's domain, which is the containerpool and thus the
> export fails. I can't change the nodes domain either, because it's already
> backing up to the containerpool and I only wanted to move the archive data
> created in the past on the source server.
> > I will have to use replication instead. Thanks for all your help!
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> >
> > -Original Message-
> > From: Loon, Eric van (ITOPT3) - KLM
> > Sent: donderdag 30 november 2017 12:15
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: RE: Moving archive data to a directory container
> >
> > Hi Marc,
> > I didn't know you can convert a storagepool into an already existing
> containerpool, thanks. I think I will remove some directories from the
> containerpool to make room for a new filepool. I will export all archives
> in there and convert it to the existing containerpool afterwards. As soon
> as all nodes are moved, I will delete the filepool and add the disks back
> to the containerpool.
> > Thanks for your help!
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of Marc Lanteigne
> > Sent: woensdag 29 november 2017 17:42
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: Moving archive data to a directory container
> >
> > Hi,
> >
> > You might be able to use node replication to move the data from the old
> server into the new server.
> >
> > If you go with the import into a filepool method, you don't need two
> directory container pools, you can convert the filepool into the existing
> container pool.
> > -
> > Thanks,
> > Marc...
> > 
> > Marc Lanteigne
> > Accelerated Value Specialist for Spectrum Protect
> > 416.478.0233 | marclantei...@ca.ibm.com
> > Office Hours:  Monday to Friday, 7:00 to 16:00 Eastern
> >
> > Follow me on: Twitter, developerWorks, LinkedIn
> >
> >
> > -Original Message-
> > From: Loon, Eric van (ITOPT3) - KLM [mailto:eric-van.l...@klm.com]
> > Sent: Wednesday, November 29, 2017 12:18 PM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Moving archive data to a directory container
> >
> > Hi guys!
> > We are migrating our clients from a version 6 server to a new 7.1.7
> server with a directory container pool. Switching clients is easy, just
> updating the option files and they will start a new backup cycle on the new
> TSM server. But a lot of clients also store long term archives. I can't
> think of a way to move the archives from the v6 server to the v7 server
> since import is not supported to a directory pool. The only trick I can
> come up with is defining a file pool on the v7 server, moving all archives
> in here and converting it to a directory container afterwards, but I need
> extra storage for it and I end up with two directory pools (at least until
> all archives are gone) and that is not what I want...
> > Does anybody else know some trick to move these archives?
> > Thanks for any help in advance!
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> >
> > 
> > For information, services and offers, please visit our web site:
> > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.klm.
> com=DwIFAg=jf_iaSHvJObTbx-siA1ZOg=hMBqtRSV0jXgOdXEmlNk_-
> O9LHkPCGSh9PJBRSlL8Q4=MHcroyMTlfnvImqh8Xh27U_KSsiOrHx1DbJnIMM3kLo=
> 7tfcdgEk2ax4u5oKCXjTNUIys5evvzyXrdwJ2_hzr0o=.
> > This e-mail and any attachment may contain confidential and privileged
> material intended for the addressee only. If you are not the addressee, you
> are notified that no part of the e-mail or any attachment may be disclosed,
> copied or distributed, and that any other action related to this e-mail or
> attachment is strictly prohibited, and may be unlawful. If you have
> received this e-mail by error, please notify the sender immediately by
> return e-mail, and delete this message.
> >
> > Koninklijke Luchtvaart Maatschappij NV (KLM), its subsidiaries and/or
> its 

Re: Moving archive data to a directory container

2017-11-29 Thread Stefan Folkerts
That's a way to do it; you do need to reduce max and do an offline reorg of
the database after the conversion to get everything running at 100% speed
without wasted database space.
I prefer to upgrade the old server to 7.1.7 and replicate the data from
tape/VTL/whatever to the new server running the container pool; it works
much better than an export in my opinion, you have more control and it's
much faster.
If an upgrade isn't an option I would export to a filepool as Matthew
suggested and convert that to a container pool. Be sure you have enough
database space to hold the filepool metadata, since it's about twice as much
as the container pool metadata, and when you convert you need space for both
types of metadata until you do your reorg and reduce max. Oh, and you can
reduce max at certain points during the conversion to reclaim some slack
space in the database.
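
In command terms the two routes are roughly as below; server, node and pool
names are placeholders and the exact parameters should be checked against
your server level. For the export route, make sure the export actually lands
in the FILE pool (copy group destination on the target):

* route 1: upgrade the old server, then replicate into the container pool
set replserver NEWSERVER
update node ARCHNODE1 replstate=enabled
replicate node ARCHNODE1 maxsessions=10
* route 2: server-to-server export into a FILE pool on the target, convert it
export node ARCHNODE1 filedata=archive toserver=NEWSERVER mergefilespaces=yes
convert stgpool TARGET_FILEPOOL TARGET_CONTAINERPOOL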




On Wed, Nov 29, 2017 at 6:40 PM, Matthew McGeary <
matthew.mcge...@potashcorp.com> wrote:

> Node replication would be the only real way to go directly to the
> container pool.
>
> You could define the file-class pool on the same storage directories as
> the new container pool, so that once you land everything and convert, you
> have two pools but don't waste any storage.
>
> __
> Matthew McGeary
> Senior Technical Specialist - Infrastructure Management Services
> PotashCorp
> T: (306) 933-8921
> www.potashcorp.com
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Loon, Eric van (ITOPT3) - KLM
> Sent: Wednesday, November 29, 2017 10:01 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [EXT SENDER] [ADSM-L] Moving archive data to a directory container
>
> Hi guys!
> We are migrating our clients from a version 6 server to a new 7.1.7 server
> with a directory container pool. Switching clients is easy, just updating
> the option files and they will start a new backup cycle on the new TSM
> server. But a lot of clients also store long term archives. I can't think
> of a way to move the archives from the v6 server to the v7 server since
> import is not supported to a directory pool. The only trick I can come up
> with is defining a file pool on the v7 server, moving all archives in here
> and converting it to a directory container afterwards, but I need extra
> storage for it and I end up with two directory pools (at least until all
> archives are gone) and that is not what I want...
> Does anybody else know some trick to move these archives?
> Thanks for any help in advance!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
>


Re: Monthly backups of VMs

2017-11-18 Thread Stefan Folkerts
Steven,

It's probably not the reply you are looking for, but Spectrum Protect Plus
can do this: you can have multiple SLAs attached to a VM and have one SLA
do a backup every day and retain it for, say, a month, and have another SLA
create a backup every year and retain it for 7 years.

With Spectrum Protect I don't think it's a problem to make multiple
incremental backups of a VM to different datacenter nodes; I've seen people
do incremental-forever backups via VE and also use other solutions on the
same VM to do exactly the same, without issues during backup or restore.

I haven't implemented it myself, but given the way CBT works with change IDs
on blocks it shouldn't be a problem.
So you can just "hack" the datamovers: create extra .opt files and schedulers
and schedule these backups from the Spectrum Protect server directly (not
using the VE web GUI), along the lines of the sketch below.
That should open up the solution to create two incremental-forever
backups using VE, just be sure you don't use tape. ;-)
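
Just as a hedged sketch of that hack (node, domain, and schedule names are
made up): point a second option file on the datamover at its own nodename
and let the server drive it, for example

   * extra option file on the datamover, e.g. dsm_monthly.opt
   nodename   DATAMOVER1_MONTHLY
   vmchost    vcenter.example.com

and on the server:

   define schedule VMWARE MONTHLY_VM action=backup subaction=vm -
     objects="BIGSQL01,BIGSQL02" schedstyle=enhanced dayofmonth=1 starttime=22:00
   define association VMWARE MONTHLY_VM DATAMOVER1_MONTHLY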

Regards,
   Stefan (the Dutch Steven I guess :-) )


On Fri, Nov 17, 2017 at 1:20 PM, Lee, Gary  wrote:

> I assume that the data stored in the SQL databases is the primary
> retention target.
> If that is the case, how about a flat file dump to a central storage, then
> use another client to scoop that up monthly.
> Use resourceutilization to give it several sessions, and back up to the
> VTL.
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Harris, Steven
> Sent: Thursday, November 16, 2017 5:43 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Monthly backups of VMs
>
> Thanks for the reply Richard
>
> Backupsets apply only to BA client data.  Theoretically exports are
> possible. I've had issues with backupsets in the past and even if they were
> an option I would be loath to go there again (e.g. a backupset is
> essentially a restore, so it would take a drive to start, but then not have
> priority to take another drive to write its data, and fail; so I didn't get
> a good backupset and whatever was interrupted also failed).
>
> Management of exports is also less than ideal. And they are slow, hmmm,
> unless an active pool was used.
>
> The problem with mixing monthlies and dailies is that they both use the
> same-named snapshots and so if one is running and the other starts it
> causes the existing snapshot to be deleted and the running backup fails.
> If there were a way to alter the snapshot name for the monthlies, that
> might help, but afaik there is not.  Without that then we need to
> manipulate the domain.vmfull (or any alternatives) on a daily basis to
> exclude that day's monthlies from daily backups and include into that day's
> monthlies.  Not simple.
>
> Thanks for making me explain this.  Active pool and exports may be the way
> to go.  Define the export volumes explicitly with a name that identifies
> their contents, then back them up with the BA client.
>
> Cheers
>
> Steve
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Richard Cowen
> Sent: Friday, 17 November 2017 9:06 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Monthly backups of VMs
>
> Can you use backupsets or export nodes to real tape (no client impact.) Or
> full restores to a dummy node and then archive those to real tape  (once a
> month), again no direct client impact.
> Can the "monthlles" be spread over 30 days?
>
> Richard
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Harris, Steven
> Sent: Thursday, November 16, 2017 4:51 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Monthly backups of VMs
>
> HI All
>
> Environment is
> TSM 7.1.1 server on AIX. 7.1.1 Storage agents on Linux,  7.1.1  BA
> clients, 7.1.1 VE clients,  VMWare 5.5.  The VMware backups are via the SAN
> to a Protectier VTL.
>
> My client is an international financial organization so we have lots of
> regulatory requirements including SARBOX.  All of these require a monthly
> backup retained 7 years.  Recent trends in application design have resulted
> in multiple large MSSQL databases - up to 10 TB that never delete their
> data.  Never mind the logic, the hard requirement is that these be backed
> up monthly and kept for 7 years, and that no variation will be made to the
> application design.
>
> Standard process has been a daily VE incremental backup to a daily node
> and monthly full to a separate node.  The fulls are becoming untenable on
> several grounds.  The VBS Servers need to run a scsi rescan on weekdays to
> pick up any changed disk allocations, and this interrupts any running
> backups.  The individual throughput of the Virtual tape drives is limited
> so sessions run for a long time and there is not enough real tape to use
> that.   Long running backups cause issues with the storage on the back end
> because the snapshots are held so long.
>
> Does anyone have any practical 

Re: DBS vault volume question

2017-11-12 Thread Stefan Folkerts
Hi Robert,

That's not the way you get a vault tape back from the vault.
DRM will put vault tapes in vaultretrieve for you, you don't have to do
that yourself; it will adhere to this setting (
https://www.ibm.com/support/knowledgecenter/en/SSEQVQ_8.1.3/srv.reference/r_cmd_drmdbbackupexpiredays_set.html
).
So if you set that to 3 (I think that's the minimum value), the system will
put the DB backups you have offsite in vaultretrieve after that time; then
you can place them in courierretrieve or go straight to onsiteretrieve.

Make sure you align the value above with the reuse delay of the copypool(s)
you manage with DRM.
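
A minimal sketch of the commands involved (the volume name and values below
are just placeholders):

   set drmdbbackupexpiredays 3
   update stgpool MYCOPYPOOL reusedelay=3
   /* once DRM has put the DB backup volume in vaultretrieve */
   move drmedia VOL001L5 wherestate=vaultretrieve tostate=onsiteretrieve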

Regards,
   Stefan


On Sun, Nov 12, 2017 at 10:56 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> I run the query: q drmedia * source=dbs
>
> Got this output
>
> Volume Name  State Last Update Date/Time
> Automated LibName
>  - ---
>
> 55L5 Vault  08/04/2017 19:37:13
>
> Want to delete this volume 00055L5   but NO success ….
>
> I check the query: q drmstatus
>
> Protect: ADSM2>q drmstatus
>
>   Recovery Plan Prefix:
>   Plan Instructions Prefix:
> Replacement Volume Postfix: @
>  Primary Storage Pools:
> Copy Storage Pools:
>  Active-Data Storage Pools:
>   Container Copy Storage Pools:
>Not Mountable Location Name: Offsite DataBank
>   Courier Name: COURIER
>Vault Site Name: Databank
>   DB Backup Series Expiration Days: 0 Day(s)
> Recovery Plan File Expiration Days: 60 Day(s)
>   Check Label?: Yes
>  Process FILE Device Type?: No
>  Command File Name:
>
> I cannot succeed to bring the state of this volume to  VAULTRETREIVE
>
> Run the command:  move drmedia  55L5 source=dbs
> wherestate=vaultretrieve tostate=onsiteretrieve
>
> But nothing happen !
>
> Any suggestion ….
>
> Best  Regards
>
> Robert
>
>
>
>
>
> Robert Ozen
> Head of Data Resilience and Availability
> University of Haifa
> Office: Main Building, Room 5015
> Phone: 04-8240345 (internal: 2345)
> Email: rou...@univ.haifa.ac.il
> _
> University of Haifa | 199 Aba Khoushy Ave. | Mount Carmel, Haifa | 3498838
> Computing and Information Systems Division website: http://computing.haifa.ac.il
>


Re: Invoking backup from powercli

2017-10-21 Thread Stefan Folkerts
Nice little challenge this one.

I would build something where they place a trigger file somewhere, and that
triggers a fixed macro, on another system, that they can't modify or read.
The macro runs a clientaction within Spectrum Protect that starts the
backup.

So they only place the file somewhere; that location is looked at by a
script every minute, and as soon as the file is found the script removes it
and starts a client action. A rough sketch is below.
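
A minimal sketch of such a watcher, with made-up paths, admin, node and macro
names (the macro itself would hold nothing more than a define clientaction):

   #!/bin/sh
   # watch for the trigger file and kick off a clientaction when it appears
   TRIGGER=/shared/triggers/run_vm_backup
   while true; do
     if [ -f "$TRIGGER" ]; then
       rm -f "$TRIGGER"
       # start_vm_backup.mac contains for example:
       #   define clientaction DATAMOVER1 action=backup subaction=vm objects="PGNODE*"
       dsmadmc -id=trigger_admin -password=secret macro start_vm_backup.mac
     fi
     sleep 60
   done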

They don’t get the access you don’t want to give them but it is still an
action you can monitor.

Regards,
   Stefan

On Wed, 18 Oct 2017 at 00:24, Harris, Steven <
steven.har...@btfinancialgroup.com> wrote:

> Hello again.
>
> TSM Server 7.1.1 AIX,  TSM VE 7.1.1 linux X64
>
> I have an application that needs to co-ordinate the taking of their TSM
> for VE backups with their housekeeping.  It's a large distributed postgres
> database and they want to shut down the database on all members of the
> cluster and trigger a snapshot backup across them all once a week to get a
> consistent restore point.
>
> vSphere web client is not an option, and neither is running a dsmc command
> on the vbs. Neither do we want to give them access to run a clientaction on
> the TSM Server.
>
> Is there any known way to trigger a TSM for VE backup on demand? I'm
> thinking that VMware  PowerCLI could be the mechanism here, but would be
> pleased to hear of any other.
>
> Thanks
>
> Steve
>
> Steven Harris
> TSM Admin/Consultant
> Canberra Australia
>
> This message and any attachment is confidential and may be privileged or
> otherwise protected from disclosure. You should immediately delete the
> message if you are not the intended recipient. If you have received this
> email by mistake please delete it from your system; you should not copy the
> message or disclose its content to anyone.
>
> This electronic communication may contain general financial product advice
> but should not be relied upon or construed as a recommendation of any
> financial product. The information has been prepared without taking into
> account your objectives, financial situation or needs. You should consider
> the Product Disclosure Statement relating to the financial product and
> consult your financial adviser before making a decision about whether to
> acquire, hold or dispose of a financial product.
>
> For further details on the financial product please go to
> http://www.bt.com.au
>
> Past performance is not a reliable indicator of future performance.
>


Re: 7.1.8/8.1.3 Security Upgrade Install Issues

2017-10-06 Thread Stefan Folkerts
Roger,

There has been a discussion about a few things you are asking questions
about just a day or so ago, I gave my view on the client and admin
situation.
I will use the same old and new definitions as you did.

It basically boils down to this for client and admin sessions

Once a node uses the "new" software, that node goes from transitional to
strict on that server; the node can no longer be logged in with the old
software from, let's say, a different server.
Once an administrator uses the "new" software, that admin goes from
transitional to strict on that server; the admin can no longer be logged in
with the old software from, let's say, a different server, or from an
outdated tool such as TSM Manager. For Servergraph you must contact their
support organisation to ask whether your version is capable of using the
strict sessions; TSM Manager has a new version out that is capable of this.
You can manually set these objects to strict to ensure encryption on parts
of the session (metadata and authentication), but you can't change them from
strict to transitional.
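
To check and (one-way) change the setting, it is something along these lines
(node and admin names are placeholders):

   /* Session Security shows in the detailed output */
   query node MYNODE format=detailed
   query admin MYADMIN format=detailed
   update node MYNODE sessionsecurity=strict
   update admin MYADMIN sessionsecurity=strict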

So the lockdown is on a per admin and per node basis and happens with the
first admin and/or client login from the new software.

There is no way around the way strict mode turns on: nodes logging in
using new clients go to strict, and the same goes for admins.
Really have a look at the environment, especially where you run dsmadmc
from, what tooling and scripting you use and from where, and which clients
share a node in ISP that might run into issues.

At the same time I see customers that don't even read the readme, upgrade
thinking it's just another patch, and don't notice anything
because of the way they run their shop.






On Fri, Oct 6, 2017 at 4:07 AM, Roger Deschner  wrote:

> Versions 7.1.8 and 8.1.3 of WDSF/ADSM/TSM/SP have now been made
> available containing substantial security upgrades. A bunch of security
> advisories were sent this week containing details of the vulnerabilities
> patched. Some are serious; our security folks are pushing to get patches
> applied.
>
> For the sake of discussion, I will simply call versions 7.1.7 and before
> and 8.1.1 "Old", and I'll call 7.1.8 and 8.1.3 "New". (Not really sure
> where 8.1.2 falls, because some of the security issues are only fixed in
> 8.1.3.)
>
> There are some totally unclear details outlined in
> http://www-01.ibm.com/support/docview.wss?uid=swg22004844. What's most
> unclear is how to upgrade a complex, multi-server, library-manager
> configuration. It appears from this document, that you must jump all in
> at once, and upgrade all servers and clients from Old to New at the same
> time. That is simply impractical. There is extensive discussion of the
> new SESSIONSECURITY parameter, but no discussion of what happens when
> connecting to an Old client or server that does not even have the
> SESSIONSECURITY parameter.
>
> We have 4 TSM servers. One is a library manager. Two of them are clients
> of the manager. The 4th server manages its tapes by itself, though it
> still communicates with all the other servers. That 4th server, the
> independent one, is what I'm going to upgrade first, because it is the
> easiest. All our clients are Old.
>
> The question is, what's going to happen next? Will this one New server
> still be able communicate with the other Old servers?
>
> Once my administrator id connects to a New server, this document says
> that my admin id can no longer connect to Old servers. (SESSIONSECURITY
> is automatically changed to STRICT.) Or does that restriction only apply
> if I connect from a New client? This could be an issue since I regularly
> connect to all servers in a normal day's work. We also have automated
> scripts driven by cron that fetch information from each of the servers.
> The bypass of creating another administrator ID is also not practical,
> because that would involve tracking down and changing all of these
> cron-driven scripts. So, the question here is, at the intermediate phase
> where some servers are Old and some New, can I circumvent this Old/New
> administrator ID issue by only connecting using dsmadmc on Old clients?
>
> This has also got to have an impact on users of software like
> Servergraph.
>
> There's also the issue of having to manually configure certificates
> between our library managers and library clients, but at least the steps
> to do that are listed in that document. (Comments? Circumventions?)
>
> We're plunging ahead regardless, because of a general policy to apply
> patches quickly for all published security issues. (Like Equifax didn't
> do for Apache.) I'm trying to figure this out fast, because we're doing
> it this coming weekend. I'm sure there are parts of this I don't
> understand. I'm trying to figure out how ugly it's going to be.
>
> Roger Deschner  University of Illinois at Chicago rog...@uic.edu
> ==I have not lost my mind -- it is backed up on tape somewhere.=
>


Re: SharePoint

2017-10-04 Thread Stefan Folkerts
Object-level restores for SharePoint from Spectrum Protect are often done
using DocAve's solution for SharePoint; it works really well and is
fairly simple to implement.

On Wed, Oct 4, 2017 at 2:41 PM, Kizzire, Chris 
wrote:

> Is anyone using SP 8.1.x to backup & restore SharePoint?
> If so, are you having any issues? My understanding is SP will back it up, but
> cannot do a file level restore. We would like to get rid of our 3rd party
> backup solution for SharePoint & only use SP.
>
> Chris Kizzire
> Backup Administrator (Network Engineer II)
>
> BROOKWOOD BAPTIST HEALTH
> Information Systems
> O:   205.820.5973
>
> chris.kizz...@bhsala.com
> BROOKWOODBAPTISTHEALTH.COM
>
>
>
> --- Confidentiality Notice: The
> information contained in this email message is privileged and confidential
> information and intended only for the use of the individual or entity named
> in the address. If you are not the intended recipient, you are hereby
> notified that any dissemination, distribution, or copying of this
> information is strictly prohibited. If you received this information in
> error, please notify the sender and delete this information from your
> computer and retain no copies of any of this information.


Re: 8.1.2 client and 7.1.7 servers

2017-10-03 Thread Stefan Folkerts
Zoltan,
I'm pretty sure IBM just wants to cover all the things that could
go wrong with that statement; as long as a node that downlevel clients use
isn't also used with an 8.1.2+ client version, there shouldn't be any
interruption in the client activity.

There are also potential issues with tooling and scripting of course, such
as TSM Manager, which needs to be upgraded to the current (as of now)
release, otherwise it can't connect once your admin goes strict.

People build really crazy things with clients sharing nodes all over the
place. Upgrading part of that, or just already having part of it on 8.1.2
and then upgrading the server, will move that node into strict once
the 8.1.2 client connects again; after that the older clients won't connect.

It's basically impossible for IBM to write out every possible scenario, so
they don't. It wasn't clear to me for a while, but the ISP Symposium of
last week sorted that out nicely.





On Tue, Oct 3, 2017 at 2:29 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> Thanks for all the feedback. Lots of good information. I did not mention
> that all of my servers are 7.1.7.300 (soon to look at 7.1.8.000 just
> release).  I have no plans or desires to jump to 8.1.2/3 if it is going to
> cause such disruption, especially if I have to run a utility on clients to
> install certs.
>
> That being said, with a big PCI project on the horizon, I am probably going
> to be forced to turn on SSL/TLS communications between SP servers, as a
> minimum.  However, if I install 8.1.2 on the PCI SP server, I won't be able
> to perform replication to my target replica server until I upgrade it to
> 8.1.2/3, which would then disrupt the 5-other SP servers doing replication
>  - What a mess!
>
> So, back to IBM's statement of "*Upgrade your IBM Spectrum Protect™ servers
> to Version 8.1.2 before you upgrade the backup-archive clients. If you do
> not upgrade your servers first, communication between servers and clients
> might be interrupted.*"  Since in many cases, this has not been true, what
> is the mix of client/server/config that might cause communications to be
> disrupted?
>
>
> On Tue, Oct 3, 2017 at 4:41 AM, Stefan Folkerts <stefan.folke...@gmail.com
> >
> wrote:
>
> > A 8.1.2.0 server should work with older clients as long as the node is in
> > transitional mode (q node f=d), when they jump to strict because, for
> > example, a 8.1.2 (or higher) client version restored something from that
> > node it jumps to strict and pre 8.1.2 clients will no longer be able to
> > connect using that nodename.
> > The same goes for admins.
> > The moment an admin uses the 8.1.2 (or higher) dsmadmc it jumps to strict
> > and that specific admin won't be able to connect to the server using a
> pre
> > 8.1.2 dsmadmc anymore.
> >
> > As far as I have seen there are no other reasons why older clients would
> > have issues, just keep them in transitional and don't connect to that
> > nodename with a 8.1.2 or higher client.
> > You can also manually set the nodes to strict using update node but that
> > can't be reversed.
> >
> > That is what I understand about the situation at this point, it's all
> > related to TLS strict mode that is used on a per node and per admin
> basis.
> >
> >
> >
> >
> > On Tue, Oct 3, 2017 at 4:26 AM, J. Pohlmann <jpohlm...@shaw.ca> wrote:
> >
> > > I have tried an 8.1.2 client on an 8.1.1.0 server, and as in your case,
> > it
> > > worked fine. Then, when I upgraded the server to 8.1.2, the client
> > stopped
> > > working until I ran dsmcert.exe on my Windows client machine to install
> > the
> > > certificate. After that, the client worked. I  have not tried this
> > scenario
> > > with UNIX. If anyone has any experience, please let us know.
> > >
> > > I am also concerned as to what needs to be done with "really old"
> > clients.
> > > I have some installations that are running v5 and v6 clients due to old
> > OS
> > > levels. Again, if anyone has any experience, I would appreciate
> knowing.
> > >
> > > Best regards,
> > > Joerg Pohlmann
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of
> > > Zoltan Forray
> > > Sent: Monday, October 02, 2017 13:21
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: Re: [ADSM-L] 8.1.2 client and 7.1.7 servers
> > >
> > > I want to revisit this thread.  Unbeknownst to me, many of my
> co-workers
> > > have installed 8.1.2.0 (both Windows and Linux systems), totally
> ignoring
>

Re: 8.1.2 client and 7.1.7 servers

2017-10-03 Thread Stefan Folkerts
An 8.1.2.0 server should work with older clients as long as the node is in
transitional mode (check with q node f=d). When a node jumps to strict
because, for example, an 8.1.2 (or higher) client version restored something
from that node, pre-8.1.2 clients will no longer be able to
connect using that nodename.
The same goes for admins.
The moment an admin uses the 8.1.2 (or higher) dsmadmc, that admin jumps to
strict and won't be able to connect to the server using a pre-8.1.2 dsmadmc
anymore.

As far as I have seen there are no other reasons why older clients would
have issues; just keep them in transitional and don't connect to that
nodename with an 8.1.2 or higher client.
You can also manually set the nodes to strict using update node, but that
can't be reversed.

That is what I understand about the situation at this point, it's all
related to TLS strict mode that is used on a per node and per admin basis.




On Tue, Oct 3, 2017 at 4:26 AM, J. Pohlmann  wrote:

> I have tried an 8.1.2 client on an 8.1.1.0 server, and as in your case, it
> worked fine. Then, when I upgraded the server to 8.1.2, the client stopped
> working until I ran dsmcert.exe on my Windows client machine to install the
> certificate. After that, the client worked. I  have not tried this scenario
> with UNIX. If anyone has any experience, please let us know.
>
> I am also concerned as to what needs to be done with "really old" clients.
> I have some installations that are running v5 and v6 clients due to old OS
> levels. Again, if anyone has any experience, I would appreciate knowing.
>
> Best regards,
> Joerg Pohlmann
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Zoltan Forray
> Sent: Monday, October 02, 2017 13:21
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] 8.1.2 client and 7.1.7 servers
>
> I want to revisit this thread.  Unbeknownst to me, many of my co-workers
> have installed 8.1.2.0 (both Windows and Linux systems), totally ignoring
> my email that said not to.   However, we have not experienced any
> disruptions in communications, in contradiction to IBM's dire warning that
> said not to upgrade clients to 8.1.2.0 before first upgrading the servers
> to 8.1.2.0 (now 8.1.3.0).
>
> Can someone from IBM explain the details behind the dire warning and
> perhaps explain what combination of server/client will NOT work if someone
> installs 8.1.2.x client?
>
> We aren't running any kind of TLS between our servers, although moving to
> TLS is on the horizon due to a big PCI projects that will totally isolate
> such servers and even require me to stand up a lone, isolated SP server
> within the PCI walled-garden/network.
>
> On Tue, Aug 29, 2017 at 6:08 PM, Remco Post  wrote:
>
> > What is totally clear to me is that the entire transition to TLS1.2
> > all the way is potentially messy. We possibly have to remove server
> > definitions in an enterprise setup, communications might (or might
> > not) break, and in any case an extra server restart is required after
> > everything has been upgraded.
> >
> > What is not clear to me is what will and will not be encrypted by the
> > TLS once it is in place? Will that be everything, all server 2 server
> > and client 2 server comms? And if so, what can we expect the impact on
> > the CPU load to be? Our servers move a substantial amount of data
> > every night ( 50
> > - 100 TB each ) how many CPU’s should we be adding?
> >
> > And then the administrators… really, is there no way to guarantee that
> > an admin can connect to the server using a downlevel client once he
> > has used TLS? At least in my world the server and OC get upgraded by
> > one team, while the client is managed by a different team, each at their
> discretion.
> >
> > > On 29 Aug 2017, at 15:24, Mikhail Tolkonyuk 
> > wrote:
> > >
> > > You must update server certificate to SHA-256 before upgrading
> > > clients
> > or disable SSL in dsm.opt on all of them.
> > >
> > > BAC 8.1.2 remembers server certificate and uses TLS by default, it
> > > will
> > work with old 7.1.x SHA-1 (or MD5) certificate until you upgrade
> > server and OC to 8.1.2. During upgrade server generates new SHA-256
> > certificate and clients no more able to connect to "untrusted server"
> with new certificate.
> > > As workaround you can remove dsmcert.idx, dsmcert.kdb, dsmcert.sth
> > > files
> > from client folder and reset transport method for node after server
> > update, but it's much easier to solve the issue in advance.
> > >
> > > Check the default cert with the following command:
> > > gsk8capicmd_64 -cert -list -db C:\tsminst1\cert.kdb -stashed
> > >
> > > For more details watch Tricia's video about TLS 1.2:
> > > https://youtu.be/QVPrxjmo_aU
> > >
> > > And see technote 2004844:
> > > https://www-01.ibm.com/support/docview.wss?uid=swg22004844
> > >
> > >
> > > -Original Message-
> > > From: ADSM: 

Re: ANS1357S after upgrade to V8.1.2 Server

2017-09-21 Thread Stefan Folkerts
Andrew,

Just to be clear, because there is a bit of confusion here (and in other
places) about the new security restrictions of the 8.1.2 release with regard
to older clients.

When we upgrade a Spectrum Protect server to 8.1.2 and use a mix of older
and newer client versions, plus the Operations Center and dsmadmc at new
and/or older versions, and we run into issues connecting with older
clients: we can "update node" and set SESSIONSECURITY back to TRANSITIONAL
for those nodes to allow the older clients to keep functioning on the newer
server without client upgrades?
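
In other words, something like this per affected node (the node name is just
a placeholder):

   update node OLDCLIENTNODE sessionsecurity=transitional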



On Wed, Sep 20, 2017 at 6:46 PM, Andrew Raibeck  wrote:

> Hi Bill,
>
> This document describes client/server compatibility:
>
> http://www.ibm.com/support/docview.wss?uid=swg21053218
>
> In your case, is it possible that the node or admin had, at some time,
> authenticated using an 8.1.2 client? For affected nodes, check the
> SESSIONSECURITY setting, see if it is set to STRICT. If so, try setting
> back to TRANSITIONAL.
>
> https://www.ibm.com/support/knowledgecenter/SSEQVQ_8.1.2/
> srv.reference/r_cmd_node_update.html
>
> Regards,
>
> Andy
>
> 
> 
>
> Andrew Raibeck | IBM Spectrum Protect Level 3 | stor...@us.ibm.com
>
> IBM Tivoli Storage Manager links:
> Product support:
> https://www.ibm.com/support/entry/portal/product/tivoli/
> tivoli_storage_manager
>
> Online documentation:
> http://www.ibm.com/support/knowledgecenter/SSGSG7/
> landing/welcome_ssgsg7.html
>
> Product Wiki:
> https://www.ibm.com/developerworks/community/wikis/home/wiki/Tivoli%
> 20Storage%20Manager
>
> "ADSM: Dist Stor Manager"  wrote on 2017-09-19
> 15:54:59:
>
> > From: Bill Boyer 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 2017-09-19 15:55
> > Subject: ANS1357S after upgrade to V8.1.2 Server
> > Sent by: "ADSM: Dist Stor Manager" 
> >
> > After upgrading my (test) TSM server to V8.1.2 my clients are now getting
> > ANS1357S with back-level client version. Even a V7.1.2 client. Is there a
> > page that lists the client versions supported by V8.1.2 Server? We have
> some
> > older OS's that are running old client versions. Been trying to find a
> > compatibility matrix just for server V8.1.2.
> >
> >
> >
> > Tia,
> >
> >
> >
> > Bill Boyer
> > "Enjoy life. It has an expiration date." - ??
> >
>


Re: Mountpoints question

2017-09-09 Thread Stefan Folkerts
Robert,

Nope, I would not enable client-side compression when backing up with the
ULTRIUM7C format; the drive can do the compression much more efficiently
without putting extra CPU load on the client.


On Sat, Sep 9, 2017 at 11:42 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi Stefan,
>
> Great, makes sense!
>
> I made all the changes and it works fine.
>
> The only thing I still have to take care of is the communication between
> the client and the TSM server, which is very slow!
>
> In the meantime, would it be wise to activate the option "compression yes"
> in the client's dsm.opt?
>
> As a reminder, I back up directly to LTO7 tape (ULTRIUM7C).
>
> Best Regards
>
> Robert
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, September 7, 2017 5:28 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Mountpoints question
>
> Robert,
>
> The keep mount option is only applicable if you migrate the data to a
> sequential volume *after* you store it in the diskpool, so with your direct
> to tape setup here it doesn't do anything.
> The session will hold a single drive for the entire duration of its backup
> session.
>
> On Thu, Sep 7, 2017 at 3:23 PM, rou...@univ.haifa.ac.il <
> rou...@univ.haifa.ac.il> wrote:
>
> > Great Stefan
> >
> > What about the keep mount option need it ?
> >
> > Best regards
> >
> > Robert
> >
> >
> >
> Sent from my Samsung Galaxy smartphone.
>
>
> -------- Original message --------
> From: Stefan Folkerts <stefan.folke...@gmail.com> Date: 07/09/2017
> 15:41 (GMT+02:00) To: ADSM-L@VM.MARIST.EDU Subject: Re: [ADSM-L]
> Mountpoints question
> >
> > Robert,
> >
> > >09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > >>
> > mount points. (SESSION: 20900)
> >
> > You stated that the default resource util is 2, you also stated that
> > "maxnummp is 1" well, that's when you get this message.
> > The client is trying to get two mountpounts but you are restricting it
> > to one.
> > The maxnummp must be the same or higher as the resource util, now you
> > have it the other way around.
> > Just lower the default value of 2 for the resource util to 1.
> >
> >
> >
> >
> > On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda
> > <przy...@gmail.com>
> > wrote:
> >
> > > Hi Robert
> > > Number of possible mount points you control from server. In your
> > > case it will be:
> > > update node your_nodename MAXNUMMP=1
> > >
> > > Good luck and regards
> > > Krzysztof
> > >
> > > 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <
> > rou...@univ.haifa.ac.il
> > > >:
> > >
> > > > Hi Stefan
> > > >
> > > > First thanks for the input .
> > > >
> > > > O.K but how I avoid this message:
> > > >
> > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> 20900
> > > for
> > > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number
> > > > > of  >
> > > > mount points. (SESSION: 20900)
> > > >
> > > > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is
> > NO)
> > > >
> > > > Regards
> > > >
> > > > Robert
> > > >
> > > >
> > > >
> > > > -Original Message-
> > > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
> > > > Behalf
> > Of
> > > > Stefan Folkerts
> > > > Sent: Thursday, September 7, 2017 8:49 AM
> > > > To: ADSM-L@VM.MARIST.EDU
> > > > Subject: Re: [ADSM-L] Mountpoints question
> > > >
> > > > It might seem pretty obvious but you need to set
> > > > resourceutilization
> > to 1
> > > > and not 3.
> > > > Also, the amount of mountpoints doesn't equal the amount of tapes
> > > > a
> > > backup
> > > > can use, it limits the amount of concurrent mountpoints a backup
> > > > can
> > use.
> > > >
> > > >
> > > >
> > > > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > > > rou...@univ.haifa.ac.il> wrote:
> > > >
> > > > > Hi to all
> > > > >
> > > > > Want to backup  directly to LTO7 tapes and only in one tape till
> > > > > is
> > > full.
> > > > >
> > > > > When my maxnummp is 1 for the node name got a lot of warning as:
> > > > >
> > > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> > 20900
> > > > for
> > > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number
> > > > > of mount points. (SESSION: 20900)
> > > > >
> > > > > I made some research and find this article:
> > > > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > > > >
> > > > > No resourceutilization option specified so default:2
> > > > >
> > > > > As the article said  I increase the maxnummp to 3 , this night
> > > > > the backup took 3 tapes !
> > > > >
> > > > > In my q node f=d I see:Keep Mount Point?: No
> > > > >
> > > > > Did I have to change it to YES , to backup to only one tape ?
> > > > >
> > > > > Any suggestions ???
> > > > >
> > > > > Best Regards
> > > > >
> > > > > Robert
> > > > >
> > > >
> > >
> >
>


Re: Mountpoints question

2017-09-07 Thread Stefan Folkerts
Robert,

The keep mount option is only applicable if you migrate the data to a
sequential volume *after* you store it in the diskpool, so with your direct
to tape setup here it doesn't do anything.
The session will hold a single drive for the entire duration of its backup
session.

On Thu, Sep 7, 2017 at 3:23 PM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Great Stefan
>
> What about the keep mount option need it ?
>
> Best regards
>
> Robert
>
>
>
> Sent from my Samsung Galaxy smartphone.
>
>
> -------- Original message --------
> From: Stefan Folkerts <stefan.folke...@gmail.com>
> Date: 07/09/2017 15:41 (GMT+02:00)
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Mountpoints question
>
> Robert,
>
> >09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> mount points. (SESSION: 20900)
>
> You stated that the default resource util is 2, you also stated that
> "maxnummp
> is 1" well, that's when you get this message.
> The client is trying to get two mountpounts but you are restricting it to
> one.
> The maxnummp must be the same or higher as the resource util, now you have
> it the other way around.
> Just lower the default value of 2 for the resource util to 1.
>
>
>
>
> On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda <przy...@gmail.com>
> wrote:
>
> > Hi Robert
> > Number of possible mount points you control from server. In your case it
> > will be:
> > update node your_nodename MAXNUMMP=1
> >
> > Good luck and regards
> > Krzysztof
> >
> > 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <
> rou...@univ.haifa.ac.il
> > >:
> >
> > > Hi Stefan
> > >
> > > First thanks for the input .
> > >
> > > O.K but how I avoid this message:
> > >
> > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> > for
> > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> > > mount points. (SESSION: 20900)
> > >
> > > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is
> NO)
> > >
> > > Regards
> > >
> > > Robert
> > >
> > >
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of
> > > Stefan Folkerts
> > > Sent: Thursday, September 7, 2017 8:49 AM
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: Re: [ADSM-L] Mountpoints question
> > >
> > > It might seem pretty obvious but you need to set resourceutilization
> to 1
> > > and not 3.
> > > Also, the amount of mountpoints doesn't equal the amount of tapes a
> > backup
> > > can use, it limits the amount of concurrent mountpoints a backup can
> use.
> > >
> > >
> > >
> > > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > > rou...@univ.haifa.ac.il> wrote:
> > >
> > > > Hi to all
> > > >
> > > > Want to backup  directly to LTO7 tapes and only in one tape till is
> > full.
> > > >
> > > > When my maxnummp is 1 for the node name got a lot of warning as:
> > > >
> > > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session
> 20900
> > > for
> > > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > > > mount points. (SESSION: 20900)
> > > >
> > > > I made some research and find this article:
> > > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > > >
> > > > No resourceutilization option specified so default:2
> > > >
> > > > As the article said  I increase the maxnummp to 3 , this night the
> > > > backup took 3 tapes !
> > > >
> > > > In my q node f=d I see:Keep Mount Point?: No
> > > >
> > > > Did I have to change it to YES , to backup to only one tape ?
> > > >
> > > > Any suggestions ???
> > > >
> > > > Best Regards
> > > >
> > > > Robert
> > > >
> > >
> >
>


Re: Mountpoints question

2017-09-07 Thread Stefan Folkerts
Robert,

>09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
mount points. (SESSION: 20900)

You stated that the default resourceutilization is 2, and you also stated
that maxnummp is 1; well, that's when you get this message.
The client is trying to get two mount points but you are restricting it to
one.
The maxnummp must be the same as or higher than the resourceutilization;
right now you have it the other way around.
Just lower the resourceutilization from the default value of 2 to 1, for
example as in the sketch below.
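
So either of these, roughly (the node name is just an example):

   * in the client's dsm.opt (or the dsm.sys stanza on UNIX)
   resourceutilization 1

or, if you do want the parallel sessions, allow the mount points on the
server instead:

   update node YOURNODE maxnummp=2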




On Thu, Sep 7, 2017 at 10:29 AM, Krzysztof Przygoda <przy...@gmail.com>
wrote:

> Hi Robert
> Number of possible mount points you control from server. In your case it
> will be:
> update node your_nodename MAXNUMMP=1
>
> Good luck and regards
> Krzysztof
>
> 2017-09-07 9:36 GMT+02:00 rou...@univ.haifa.ac.il <rou...@univ.haifa.ac.il
> >:
>
> > Hi Stefan
> >
> > First thanks for the input .
> >
> > O.K but how I avoid this message:
> >
> > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of  >
> > mount points. (SESSION: 20900)
> >
> > Maybe the option in node :  Keep Mount Point?: TO YES   ( Now is NO)
> >
> > Regards
> >
> > Robert
> >
> >
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> > Stefan Folkerts
> > Sent: Thursday, September 7, 2017 8:49 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: [ADSM-L] Mountpoints question
> >
> > It might seem pretty obvious but you need to set resourceutilization to 1
> > and not 3.
> > Also, the amount of mountpoints doesn't equal the amount of tapes a
> backup
> > can use, it limits the amount of concurrent mountpoints a backup can use.
> >
> >
> >
> > On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
> > rou...@univ.haifa.ac.il> wrote:
> >
> > > Hi to all
> > >
> > > Want to backup  directly to LTO7 tapes and only in one tape till is
> full.
> > >
> > > When my maxnummp is 1 for the node name got a lot of warning as:
> > >
> > > 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900
> > for
> > > node IIP DM-LAB-FISH. This node has exceeded its maximum number of
> > > mount points. (SESSION: 20900)
> > >
> > > I made some research and find this article:
> > > http://www-01.ibm.com/support/docview.wss?uid=swg21584672
> > >
> > > No resourceutilization option specified so default:2
> > >
> > > As the article said  I increase the maxnummp to 3 , this night the
> > > backup took 3 tapes !
> > >
> > > In my q node f=d I see:Keep Mount Point?: No
> > >
> > > Did I have to change it to YES , to backup to only one tape ?
> > >
> > > Any suggestions ???
> > >
> > > Best Regards
> > >
> > > Robert
> > >
> >
>


Re: Mountpoints question

2017-09-06 Thread Stefan Folkerts
It might seem pretty obvious, but you need to set resourceutilization to 1
and not 3.
Also, the number of mount points doesn't equal the number of tapes a backup
can use; it limits the number of concurrent mount points a backup can use.



On Thu, Sep 7, 2017 at 7:00 AM, rou...@univ.haifa.ac.il <
rou...@univ.haifa.ac.il> wrote:

> Hi to all
>
> Want to backup  directly to LTO7 tapes and only in one tape till is full.
>
> When my maxnummp is 1 for the node name got a lot of warning as:
>
> 09/05/2017 16:44:17  ANR0539W Transaction failed for session 20900 for
> node IIP DM-LAB-FISH. This node has exceeded its maximum number of  mount
> points. (SESSION: 20900)
>
> I made some research and find this article:
> http://www-01.ibm.com/support/docview.wss?uid=swg21584672
>
> No resourceutilization option specified so default:2
>
> As the article said  I increase the maxnummp to 3 , this night the backup
> took 3 tapes !
>
> In my q node f=d I see:Keep Mount Point?: No
>
> Did I have to change it to YES , to backup to only one tape ?
>
> Any suggestions ???
>
> Best Regards
>
> Robert
>


Re: Spectrum Protect for VE - how to get started

2017-09-04 Thread Stefan Folkerts
Eric,

Yes, I've seen that information. I'm not saying it can't integrate, I'm
saying that if you have Spectrum Protect running for an enterprise
environment, I wouldn't wait for Spectrum Protect Plus before
implementing Virtual Environments, based on what I know. It seems easier to
deploy and is based on a totally different server
and storage architecture; we shouldn't expect things from it that aren't
documented just because it shares the Spectrum Protect name.

It seems Spectrum Protect Plus has its own storage solution but can *also*
send the data to Spectrum Protect.
We don't know much about application support within VMs, other technical
requirements, or limitations.
It's not Spectrum Protect as we know it, so I'm careful about advising
somebody on it until more information becomes available.
I'm not saying it's not a better solution for Rick, I'm saying that we know
too little to say whether it might be. :-)




On Mon, Sep 4, 2017 at 10:50 AM, Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com> wrote:

> Hi Stefan,
> From what I have been reading SP Plus will be a software appliance,
> delivered as an OVA file, which can be used as a stand-alone solution, but
> which also should fit into an existing TSM/SP environment. From
> https://www.ibm.com/us-en/marketplace/ibm-spectrum-protect-plus: "It can
> be implemented as a stand-alone solution or integrate with your IBM
> Spectrum Protect environment to off-load copies for long term storage and
> data governance with scale and efficiency".
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: zondag 3 september 2017 8:04
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Spectrum Protect for VE - how to get started
>
> Steve,
>
> I don't think you should look at Spectrum Protect Plus as an improved
> Spectrum Protect.
> It's a totally different solution for a, for the most part, different
> environment.
> Spectrum Protect is an enterprise class catch-all backup & recovery
> solution, Spectrum Protect Plus looks more to me like an entry-level
> vSphere/Hyper-V point solution.
>
> That doesn't mean it doesn't have its place in the market, it does, but
> that does mean that if you have a Spectrum Protect solution in place it
> would seem a bit silly maybe, but then again, there are very few details
> released at this point.
>
>
>
> On Sat, Sep 2, 2017 at 6:59 PM, Schaub, Steve <steve_sch...@bcbst.com>
> wrote:
>
> > Just got back from VMWorld where IBM announced their new & improved SP
> > product, I would hold off until you can check it out, much easier to
> > deploy & manage.
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Stefan Folkerts
> > Sent: Thursday, August 31, 2017 1:06 PM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: [ADSM-L] Spectrum Protect for VE - how to get started
> >
> > Rick,
> >
> > I would run the 8.1.1 VE version if your vSphere stack supports it,
> > the biggest improvements in my eyes above the 7.1 version are the
> > restore performance that can go 5x in the right configuration (in my
> > experience) and the tagging support (if you vShpere folder structure
> > will work with it, but if not, you can always go for the classic
> scheduling method.
> > The improved restore performance of 8.1.0+ has proven very important
> > and it was good it came when it did because in previous versions it
> > became a pretty big issue.
> > 8.1+ VE on a 7.1 server should not be a problem in my experience.
> >
> >
> > On Thu, Aug 31, 2017 at 5:52 PM, Ehresman,David E. <
> > david.ehres...@louisville.edu> wrote:
> >
> > > It's been reported here that the restore time in the v8.1 CLIENT is
> > > much improved over v7 so I would recommend that one for the client.
> > > I am not sure the server version makes a difference for this situation.
> > >
> > > David Ehresman
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
> > > Behalf Of Rhodes, Richard L.
> > > Sent: Thursday, August 31, 2017 10:51 AM
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: [ADSM-L] Spectrum Protect for VE - how to get started
> > >
> > > Hello,
> > >
> > > Up till now we have used BA clients in VM's.  We would like to look
> > > into implementing VE.  I would appreciate any wisdom on bringing it
> > > up first time.  A specif

Re: Spectrum Protect for VE - how to get started

2017-09-03 Thread Stefan Folkerts
Steve,

I don't think you should look at Spectrum Protect Plus as an improved
Spectrum Protect.
It's a totally different solution for a, for the most part, different
environment.
Spectrum Protect is an enterprise-class catch-all backup & recovery
solution; Spectrum Protect Plus looks more to me like an entry-level
vSphere/Hyper-V point solution.

That doesn't mean it doesn't have its place in the market, it does, but it
does mean that if you already have a Spectrum Protect solution in place it
would maybe seem a bit silly to hold off for it. But then again, there are
very few details released at this point.



On Sat, Sep 2, 2017 at 6:59 PM, Schaub, Steve <steve_sch...@bcbst.com>
wrote:

> Just got back from VMWorld where IBM announced their new & improved SP
> product, I would hold off until you can check it out, much easier to deploy
> & manage.
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: Thursday, August 31, 2017 1:06 PM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] Spectrum Protect for VE - how to get started
>
> Rick,
>
> I would run the 8.1.1 VE version if your vSphere stack supports it, the
> biggest improvements in my eyes above the 7.1 version are the restore
> performance that can go 5x in the right configuration (in my experience)
> and the tagging support (if you vShpere folder structure will work with it,
> but if not, you can always go for the classic scheduling method.
> The improved restore performance of 8.1.0+ has proven very important and
> it was good it came when it did because in previous versions it became a
> pretty big issue.
> 8.1+ VE on a 7.1 server should not be a problem in my experience.
>
>
> On Thu, Aug 31, 2017 at 5:52 PM, Ehresman,David E. <
> david.ehres...@louisville.edu> wrote:
>
> > It's been reported here that the restore time in the v8.1 CLIENT is
> > much improved over v7 so I would recommend that one for the client.  I
> > am not sure the server version makes a difference for this situation.
> >
> > David Ehresman
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Rhodes, Richard L.
> > Sent: Thursday, August 31, 2017 10:51 AM
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] Spectrum Protect for VE - how to get started
> >
> > Hello,
> >
> > Up till now we have used BA clients in VM's.  We would like to look
> > into implementing VE.  I would appreciate any wisdom on bringing it up
> > first time.  A specific question would be which version.  Our current
> > TSM servers are v7.1.5, but I'm wondering if bringing up a v8.1 server
> > and VE would be best.
> >
> >
> > Rick
> >
> >
> > 
> > --
> >
> > The information contained in this message is intended only for the
> > personal and confidential use of the recipient(s) named above. If the
> > reader of this message is not the intended recipient or an agent
> > responsible for delivering it to the intended recipient, you are
> > hereby notified that you have received this document in error and that
> > any review, dissemination, distribution, or copying of this message is
> > strictly prohibited. If you have received this communication in error,
> > please notify us immediately, and delete the original message.
> >
>
> 
> --
> Please see the following link for the BlueCross BlueShield of Tennessee
> E-mail disclaimer:  http://www.bcbst.com/email_disclaimer.shtm
>


Re: Spectrum Protect for VE - how to get started

2017-08-31 Thread Stefan Folkerts
Rick,

I would run the 8.1.1 VE version if your vSphere stack supports it. The
biggest improvements in my eyes over the 7.1 version are the restore
performance, which can go 5x in the right configuration (in my experience),
and the tagging support (if your vSphere folder structure will work with it;
if not, you can always go for the classic scheduling method, see the sketch
below).
The improved restore performance of 8.1.0+ has proven very important, and it
was good it came when it did, because in previous versions it had become a
pretty big issue.
8.1+ VE on a 7.1 server should not be a problem in my experience.
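
If you do end up on the classic scheduling method, the VM selection simply
sits in the datamover's option file, roughly like this (names are made up):

   * in the datamover's dsm.opt / dsm.sys stanza
   vmchost        vcenter.example.com
   domain.vmfull  "VMFOLDER=Production;-VM=*_test"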


On Thu, Aug 31, 2017 at 5:52 PM, Ehresman,David E. <
david.ehres...@louisville.edu> wrote:

> It's been reported here that the restore time in the v8.1 CLIENT is much
> improved over v7 so I would recommend that one for the client.  I am not
> sure the server version makes a difference for this situation.
>
> David Ehresman
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Rhodes, Richard L.
> Sent: Thursday, August 31, 2017 10:51 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Spectrum Protect for VE - how to get started
>
> Hello,
>
> Up till now we have used BA clients in VM's.  We would like to look into
> implementing VE.  I would appreciate any wisdom on bringing it up first
> time.  A specific question would be which version.  Our current TSM servers
> are v7.1.5, but I'm wondering if bringing up a v8.1 server and VE would be
> best.
>
>
> Rick
>
>
> 
> --
>
> The information contained in this message is intended only for the
> personal and confidential use of the recipient(s) named above. If the
> reader of this message is not the intended recipient or an agent
> responsible for delivering it to the intended recipient, you are hereby
> notified that you have received this document in error and that any review,
> dissemination, distribution, or copying of this message is strictly
> prohibited. If you have received this communication in error, please notify
> us immediately, and delete the original message.
>


Re: 8.1.2 client and 7.1.7 servers

2017-08-23 Thread Stefan Folkerts
From what I understand so far (pre 8.1), the only thing that totally stopped
working on basically prehistoric clients is the scheduling service on the
client. So people that need to back up, for instance, an old NT 4.0 server
or something need to use some other scheduler (a sketch below), but other
than that the basics still work with version 3 clients from what I
heard... no guarantees from me! :-P
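
As a sketch of that workaround on an old UNIX box: skip the scheduler
service and run the backup straight from cron, for example

   # crontab entry: nightly incremental at 22:30, bypassing the TSM scheduler
   30 22 * * * /usr/bin/dsmc incremental -quiet >> /var/log/dsmc_cron.log 2>&1

(the binary path and log location depend on whatever that ancient client
shipped with).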


On Wed, Aug 23, 2017 at 2:46 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> My other concern is really old, unsupported clients.  So far my OS400/BRMS
> 5.5 client is still working with 7.1.7 and we have a smattering of 6.3 and
> older unsupported clients (old Solaris, RHEL x32, Windows 2003 and until
> recently an IRIX box) so 8.1.x is not on any current schedule.
>
> On Wed, Aug 23, 2017 at 8:14 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > I agree Zoltan, it's very specific and I too would like for IBM to
> explain
> > what the reasoning behind the warning is.
> > Personally, I suspect it's a cover your *** type of deal, with that many
> > releases of so many different kind of clients floating around I would
> kind
> > of understand the reasoning behind something like that.
> >
> > On Wed, Aug 23, 2017 at 2:04 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
> >
> > > I agree with both of your responses that it should not be an issue and
> > the
> > > "compatibility" page says it should be good as far as server/client
> > levels
> > > go.  But since the docs have the very specific warning about upgrading
> > the
> > > servers to 8.1.2 before clients or *"If you do not upgrade your servers
> > > first, communication between servers and clients might be
> > interrupted."*, I
> > > felt I should ask for empirical evidence that it wont cause problems.
> The
> > > warning was not specific on what kinds of communications problems to
> > expect
> > > or if/how they can be addresses if someone decides to download and
> > upgrade
> > > their client simply based on the "yes, 7.1 servers are compatible with
> > 8.1
> > > clients" document and didn't see the warning I read.
> > >
> > > Perhaps it is due to TLS 1.2 now turned ON by default.  Either way,
> more
> > > details/specifics from IBM would be helpful.
> > >
> > > On Wed, Aug 23, 2017 at 4:21 AM, Stefan Folkerts <
> > > stefan.folke...@gmail.com>
> > > wrote:
> > >
> > > > I agree with Eric and I have 8.2 VE setups connected to 7.1 servers
> at
> > > > multiple sites, i've never seen any issues. I do find the
> documentation
> > > > strange if it say's you need to upgrade your clients before upgrading
> > the
> > > > server. The only thing that makes sense to me is client-side dedup
> with
> > > > compression, for that you want your clients on the latest level
> > otherwise
> > > > the compression isn't compatible with the server compression and you
> > > don't
> > > > get optimal results.
> > > >
> > > > On Wed, Aug 23, 2017 at 9:17 AM, Loon, Eric van (ITOPT3) - KLM <
> > > > eric-van.l...@klm.com> wrote:
> > > >
> > > > > Nor with up-leveled clients by the way, which is the case here.
> > > > > Kind regards,
> > > > > Eric van Loon
> > > > > Air France/KLM Storage Engineering
> > > > >
> > > > >
> > > > > -Original Message-
> > > > > From: Loon, Eric van (ITOPT3) - KLM
> > > > > Sent: woensdag 23 augustus 2017 9:15
> > > > > To: ADSM-L@VM.MARIST.EDU
> > > > > Subject: RE: 8.1.2 client and 7.1.7 servers
> > > > >
> > > > > Hi Zoltan!
> > > > > The page http://www-01.ibm.com/support/docview.wss?uid=swg21053218
> > > shows
> > > > > that the SP 8.1.2 client is supported with all 8.1 and 7.1
> servers. I
> > > > have
> > > > > never encountered issues with down-leveled clients in my 20+ years
> of
> > > TSM
> > > > > history.
> > > > > Kind regards,
> > > > > Eric van Loon
> > > > > Air France/KLM Storage Engineering
> > > > >
> > > > > -Original Message-
> > > > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On
> > Behalf
> > > Of
> > > > > Zoltan Forray
> > > > > Sent: dinsdag 22 augustus 2017 15:03
> >

Re: 8.1.2 client and 7.1.7 servers

2017-08-23 Thread Stefan Folkerts
I agree Zoltan, it's very specific, and I too would like IBM to explain
what the reasoning behind the warning is.
Personally, I suspect it's a cover-your-*** type of deal; with that many
releases of so many different kinds of clients floating around I would kind
of understand the reasoning behind something like that.

On Wed, Aug 23, 2017 at 2:04 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> I agree with both of your responses that it should not be an issue and the
> "compatibility" page says it should be good as far as server/client levels
> go.  But since the docs have the very specific warning about upgrading the
> servers to 8.1.2 before clients or *"If you do not upgrade your servers
> first, communication between servers and clients might be interrupted."*, I
> felt I should ask for empirical evidence that it wont cause problems. The
> warning was not specific on what kinds of communications problems to expect
> or if/how they can be addresses if someone decides to download and upgrade
> their client simply based on the "yes, 7.1 servers are compatible with 8.1
> clients" document and didn't see the warning I read.
>
> Perhaps it is due to TLS 1.2 now turned ON by default.  Either way, more
> details/specifics from IBM would be helpful.
>
> On Wed, Aug 23, 2017 at 4:21 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > I agree with Eric and I have 8.2 VE setups connected to 7.1 servers at
> > multiple sites, i've never seen any issues. I do find the documentation
> > strange if it say's you need to upgrade your clients before upgrading the
> > server. The only thing that makes sense to me is client-side dedup with
> > compression, for that you want your clients on the latest level otherwise
> > the compression isn't compatible with the server compression and you
> don't
> > get optimal results.
> >
> > On Wed, Aug 23, 2017 at 9:17 AM, Loon, Eric van (ITOPT3) - KLM <
> > eric-van.l...@klm.com> wrote:
> >
> > > Nor with up-leveled clients by the way, which is the case here.
> > > Kind regards,
> > > Eric van Loon
> > > Air France/KLM Storage Engineering
> > >
> > >
> > > -Original Message-
> > > From: Loon, Eric van (ITOPT3) - KLM
> > > Sent: woensdag 23 augustus 2017 9:15
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: RE: 8.1.2 client and 7.1.7 servers
> > >
> > > Hi Zoltan!
> > > The page http://www-01.ibm.com/support/docview.wss?uid=swg21053218
> shows
> > > that the SP 8.1.2 client is supported with all 8.1 and 7.1 servers. I
> > have
> > > never encountered issues with down-leveled clients in my 20+ years of
> TSM
> > > history.
> > > Kind regards,
> > > Eric van Loon
> > > Air France/KLM Storage Engineering
> > >
> > > -Original Message-
> > > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of
> > > Zoltan Forray
> > > Sent: dinsdag 22 augustus 2017 15:03
> > > To: ADSM-L@VM.MARIST.EDU
> > > Subject: 8.1.2 client and 7.1.7 servers
> > >
> > > Has anyone tried using the latest 8.1.2 clients with 7.1.7 servers?  I
> > > haven't had the chance to test such a configuration (since my lone test
> > > server is at 8.1.1) and with the dire-warnings in the readme docs, I
> made
> > > sure everyone on my staff knows to NOT install 8.1.2 clients.
> > >
> > > From the readme/docs:
> > >
> > > Upgrade your IBM Spectrum Protect™ servers to Version 8.1.2 before you
> > > upgrade the backup-archive clients.
> > >
> > >
> > >
> > > If you do not upgrade your servers first, communication between servers
> > > and clients might be interrupted.
> > >
> > >
> > > --
> > > *Zoltan Forray*
> > > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon
> > > Monitor Administrator VMware Administrator Virginia Commonwealth
> > University
> > > UCC/Office of Technology Services www.ucc.vcu.edu zfor...@vcu.edu -
> > > 804-828-4807 Don't be a phishing victim - VCU and other reputable
> > > organizations will never use email to request that you reply with your
> > > password, social security number or confidential personal information.
> > For
> > > more details visit http://infosecurity.vcu.edu/phishing.html

Re: 8.1.2 client and 7.1.7 servers

2017-08-23 Thread Stefan Folkerts
I agree with Eric and I have 8.2 VE setups connected to 7.1 servers at
multiple sites; I've never seen any issues. I do find the documentation
strange if it says you need to upgrade your clients before upgrading the
server. The only thing that makes sense to me is client-side dedup with
compression: for that you want your clients on the latest level, otherwise
the compression isn't compatible with the server compression and you don't
get optimal results.

On Wed, Aug 23, 2017 at 9:17 AM, Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com> wrote:

> Nor with up-leveled clients by the way, which is the case here.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
>
> -Original Message-
> From: Loon, Eric van (ITOPT3) - KLM
> Sent: woensdag 23 augustus 2017 9:15
> To: ADSM-L@VM.MARIST.EDU
> Subject: RE: 8.1.2 client and 7.1.7 servers
>
> Hi Zoltan!
> The page http://www-01.ibm.com/support/docview.wss?uid=swg21053218 shows
> that the SP 8.1.2 client is supported with all 8.1 and 7.1 servers. I have
> never encountered issues with down-leveled clients in my 20+ years of TSM
> history.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Zoltan Forray
> Sent: dinsdag 22 augustus 2017 15:03
> To: ADSM-L@VM.MARIST.EDU
> Subject: 8.1.2 client and 7.1.7 servers
>
> Has anyone tried using the latest 8.1.2 clients with 7.1.7 servers?  I
> haven't had the chance to test such a configuration (since my lone test
> server is at 8.1.1) and with the dire-warnings in the readme docs, I made
> sure everyone on my staff knows to NOT install 8.1.2 clients.
>
> From the readme/docs:
>
> Upgrade your IBM Spectrum Protect™ servers to Version 8.1.2 before you
> upgrade the backup-archive clients.
>
>
>
> If you do not upgrade your servers first, communication between servers
> and clients might be interrupted.
>
>
> --
> *Zoltan Forray*
> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator Xymon
> Monitor Administrator VMware Administrator Virginia Commonwealth University
> UCC/Office of Technology Services www.ucc.vcu.edu zfor...@vcu.edu -
> 804-828-4807 Don't be a phishing victim - VCU and other reputable
> organizations will never use email to request that you reply with your
> password, social security number or confidential personal information. For
> more details visit http://infosecurity.vcu.edu/phishing.html


Re: manually set repltcpserveraddress for clients from server

2017-08-08 Thread Stefan Folkerts
Okay, that worked differently than I thought. Thanks for the quick reply
again Anders, that fixed it!
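
For anyone who finds this thread later, a minimal sketch of what Anders
describes below (the address is an example, and the client-side check is
from memory):

   /* on the replication target server: the address clients should use for failover */
   set failoverhladdress replica.example.com

After the next replication the client should pick this up in its
REPLSERVERNAME stanza; dsmc query session should then show the new
secondary server address.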

On Tue, Aug 8, 2017 at 3:02 PM, Anders Räntilä <and...@rantila.com> wrote:

>  You should use the SET FAILOVERHLADDRESS  on the target server.   Then
> the primary server will pick this up and distribute the new address to the
> clients (after the first replication I guess).
>
> If you have symmetric replication you set it on both servers, but each
> server should specify its own public address.
>
> /Anders
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: den 8 augusti 2017 14:54
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: [ADSM-L] manually set repltcpserveraddress for clients from
> server
>
> Great, there is a solution!
> Strange thing is that the client still places the old IP in it's
> configuration file..very strange.
>
> On Tue, Aug 8, 2017 at 2:31 PM, Anders Räntilä <and...@rantila.com> wrote:
>
> > Hi,
> >
> > SET FAILOVERHLADDRESS  xxx
> >
> > /Anders Räntilä
> >
> >
> > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> > RÄNKAB - Räntilä Konsult AB
> > Klippingvägen 23
> > SE-196 32 Kungsängen
> > Sweden
> >
> > Email: and...@rantila.com
> > Phone: +46 701 489 431
> > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> >
> >
> >
> >
>


Re: manually set repltcpserveraddress for clients from server

2017-08-08 Thread Stefan Folkerts
Great, there is a solution!
Strange thing is that the client still places the old IP in its
configuration file... very strange.

On Tue, Aug 8, 2017 at 2:31 PM, Anders Räntilä  wrote:

> Hi,
>
> SET FAILOVERHLADDRESS  xxx
>
> /Anders Räntilä
>
>
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> RÄNKAB - Räntilä Konsult AB
> Klippingvägen 23
> SE-196 32 Kungsängen
> Sweden
>
> Email: and...@rantila.com
> Phone: +46 701 489 431
> =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
>
>
>
>


manually set repltcpserveraddress for clients from server

2017-08-08 Thread Stefan Folkerts
Hi all,

I've been looking and reading but I can't find it so it might not exist
yet! :-)
In our current site the clients can't reach the replication server because
the Spectrum Protect servers replicate via a separate network that the
clients can't reach.

I would like to be able to setopt something that would adjust the IP/DNS
name of the replication server for the clients, so the REPLSERVERNAME stanza
is adjusted to this without adjusting the server-to-server connection.

Does that make sense or did I miss something and is it already in place?

Regards,
   Stefan


Re: Database restore and containerpools

2017-08-03 Thread Stefan Folkerts
Hi Eric,

I've been in this situation a while back and what I did was about the
same: I ran a query container in a for loop based on the output of the find
command on the filesystems, and every container that was not found in
Spectrum Protect (rc != 0 on the dsmadmc q container) was deleted on the
filesystem by hand. I double-checked everything.
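
Something along these lines, roughly (admin credentials and the container
path are just examples; review the list by hand before deleting anything):

   #!/bin/bash
   # list containers on disk that the server no longer knows about
   for c in $(find /tech/tsm/server/container* -type f); do
       if ! dsmadmc -id=admin -password=secret -dataonly=yes \
            "query container $c" >/dev/null 2>&1; then
           echo "not in the server database: $c"
       fi
   done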





On Thu, Aug 3, 2017 at 3:14 PM, Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com> wrote:

> Hi all,
> I'm working on a procedure on how to handle a TSM server (with a container
> pool) when a database is restored to the latest backup. When you do this,
> you might end up with containers which were created after the last backup
> and which are not known to the TSM server because they were not yet created
> at the time of the last backup.
> In my case all containers are located in subdirectories in
> /tech/tsm/server, I use the following command to count them:
>
> find /tech/tsm/server/container* -type f|wc -l
>
> and then I use this command to count the amount of containers in TSM:
>
> select count(*) from containers
>
> The amount should be the same. If there are more files on the local
> filesystem than in TSM, these are obsolete and should be removed. The SP
> manual (chapter Recovery from data loss outages) suggests to use the audit
> container  action=removedamaged to delete the container,
> but that doesn't seem to work.  To test this I copied a container to a temp
> file on the local filesystem, moved the source container to a new one in
> TSM (temporarily set reusedelay=0) and as soon as the source file was gone,
> renamed the temp file to the same name as the source file. As soon as I
> audit it, it is not being removed:
>
> ANR2017I Administrator ADMIN command: AUDIT CONTAINER
> /tech/tsm/server/container00/01/01c7.dcf action=removedamaged
> ANR3710I This command will delete container 
> /tech/tsm/server/container00/01/01c7.dcf
> from the file system.
> ANR3711E Container /tech/tsm/server/container00/01/01c7.dcf
> will not be removed from the file system.
> ANR2017I Administrator ADMIN issued command: ROLLBACK
>
> The message ANR3711E seems to indicate that the header could not be
> validated...
>
> What should be the right procedure to handle such a situation then? Just
> delete the obsolete containers manually?
> Thanks for any help in advance!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering


Re: Backing out of a container storagepool

2017-08-03 Thread Stefan Folkerts
Remco, all this information about NDMP and the current restrictions of the
containerpool is well documented. I believe it's (at least in part) due to
the agile development process: we get our new pools and stuff quicker, but
it will take a bit of time for them to get all the bells and whistles.

I don't think you can replicate data from a containerpool to any type of
pool either (if it is compressed in the containerpool); I don't think you
will be able to replicate it to a filepool, for instance, but I'm not 100%
sure about that. You can replicate a filepool to a containerpool, but I
don't believe it works the other way around.
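
As a sketch of the replicate-out-and-back route Anders suggests (server and
node names are examples; check the documentation for your exact levels):

   /* on the source server; DRSERVER and ARCHIVENODE1 are example names */
   define server DRSERVER serverpassword=xxxxx hladdress=drserver.example.com lladdress=1500
   set replserver DRSERVER
   update node ARCHIVENODE1 replstate=enabled
   replicate node ARCHIVENODE1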

On Wed, Aug 2, 2017 at 10:53 PM, Remco Post  wrote:

> > On 2 Aug 2017, at 22:33, Anders Räntilä  wrote:
> >
> >> export & import the node will do I guess.
> >
> > No!  Export/import doesn't work with container pools.
> >
>
> f*ck, you must be kidding! (no you’re not). So I have tons of archive data
> to move from one TSM server to another, and basically I can’t? Really IBM?
> Do you guys actually want us to implement this new technology, or what? How
> hard can it be to do things right?
>
> > You can replicate the data to another server - to any type of pool - and
> then replicate it back.
> >
>
> that might work for what we are planning. Thanks for the heads up!
>
> > /Anders Räntilä
> >
> > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> > RÄNKAB - Räntilä Konsult AB
> > Klippingvägen 23
> > SE-196 32 Kungsängen
> > Sweden
> >
> > Email: and...@rantila.com
> > Phone: +46 701 489 431
> > =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
> >
> >
> >
>
> --
>
>  Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622
>


Re: ndmp

2017-08-02 Thread Stefan Folkerts
Same here, we have more than a few clients running NAS solutions going up
to I believe about 200TB, and the backups are done via a Windows server
connected to CIFS shares.
File attributes might become an issue if the NAS is sharing via both CIFS
and NFS, but other than that it seems to work okay. It's a pity (but
logical) that you can't run journalled backups, but at least you can back
up 10+ shares at once. Still, if you have many small files, backups can
take a long time to complete, and don't undersize the server that will be
running the backups either.
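
Very roughly, the dsm.opt on such a Windows proxy looks something like this
(node and share names are made up; tune resourceutilization to what the NAS
and the proxy can handle):

   * dsm.opt sketch for a CIFS proxy node, example names only
   NODENAME            NASPROXY1
   DOMAIN              \\nas01\projects \\nas01\homedirs \\nas01\scratch
   RESOURCEUTILIZATION 10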


On Wed, Aug 2, 2017 at 12:04 AM, Skylar Thompson 
wrote:

> I agree, we use this approach as well. NDMP has scaling issues, doesn't
> play nicely with TSM incremental backups or include/exclude lists, and ties
> you into a single storage vendor for both backups and restores. That last
> point is particularly scary for anyone writing DR plans, since who knows
> what storage you'll end up with after a real disaster.
>
> On Tue, Aug 01, 2017 at 09:27:06PM +, Thomas Denier wrote:
> > You might be better off having proxy systems access the NAS contents
> using CIFS and/or NFS, and having the proxy systems use the backup/archive
> client to back up the NAS contents.
> >
> > My department supports Commvault as well as TSM (the result of a merger
> of previously separate IT organizations). The Commvault workload includes a
> NAS server on the same scale as yours. Our Commvault representative advised
> us to forget about Commvault's NDMP support and use the Commvault analog of
> the approach described in the previous paragraph.
> >
> > The subject of NAS backup coverage arose at an IBM training/marketing
> event for the Spectrum family of products. The IBM representative who
> responded was not as bluntly dismissive of NDMP as our Commvault
> representative, but he sounded decidedly unenthusiastic when he mentioned
> NDMP as a possible approach to NAS backups.
> >
> > Thomas Denier,
> > Thomas Jefferson University
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> Of Remco Post
> > Sent: Monday, July 31, 2017 16:41
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: [ADSM-L] ndmp
> >
> > Hi all,
> >
> > I???m working on a large TSM implementation for a customer who also has
> HDS NAS systems, and quite some data in those systems, more than 100 TB
> that needs to be backed up. We were planning to go 100% directory container
> for the new environment, but alas IBM???s ???best of both worlds" (DISK &
> FILE) doesn???t support NDMP and I don???t like FILE with deduplication
> (too much of a hassle), so is it really true, are we really stuck with
> tape? ISn???t it about time after so many years that IBM finally gives us a
> decent solution to backup NAS systems?
> >
> > --
> >
> >  Met vriendelijke groeten/Kind Regards,
> >
> > Remco Post
> > r.p...@plcs.nl
> > +31 6 248 21 622
> > The information contained in this transmission contains privileged and
> confidential information. It is intended only for the use of the person
> named above. If you are not the intended recipient, you are hereby notified
> that any review, dissemination, distribution or duplication of this
> communication is strictly prohibited. If you are not the intended
> recipient, please contact the sender by reply email and destroy all copies
> of the original message.
> >
> > CAUTION: Intended recipients should NOT use email communication for
> emergent or urgent health care matters.
> >
>
> --
> -- Skylar Thompson (skyl...@u.washington.edu)
> -- Genome Sciences Department, System Administrator
> -- Foege Building S046, (206)-685-7354
> -- University of Washington School of Medicine
>


Re: tsm node replication fails no reason why

2017-07-27 Thread Stefan Folkerts
The reason is usually given just before the first line of your log excerpt
on the source server, i.e. the line above the one with the ANR0986I message.
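
If it has scrolled off the console, the activity log on the source server
still has it; something like this, using the process number and times from
your example:

   /* 385 is the failing process from the log above; adjust the window as needed */
   query actlog begindate=07/26/2017 begintime=13:00 search="PROCESS: 385"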


On Wed, Jul 26, 2017 at 8:07 PM, Tim Brown  wrote:

> TSM node replication fails , no indication why on source or target server?
>
> Source server
>
> 07/26/2017 13:25:49  ANR0986I Process 385 for Replicate Node running
> in the
>   BACKGROUND processed 3,782,768 items for a total
> of
>   777,525,123,113 bytes with a completion state of
> FAILURE
>   at 13:25:49. (SESSION: 21075, PROCESS: 385)
> 07/26/2017 13:25:49  ANR1893E Process 385 for Replicate Node completed
> with a
>   completion state of FAILURE. (SESSION: 21075,
> PROCESS:
>   385)
>
> Target server
>
> 07/26/2017 13:25:51  ANR0514I Session 486 closed volume
>   N:\ECM_PRIM2\01BC04E0.BFS. (SESSION: 486)
> 07/26/2017 13:26:22  ANR0408I Session 605 started for server
> TSMPOK_SERVER1
>   (Windows) (Tcp/Ip) for replication.  (SESSION:
> 605)
> 07/26/2017 13:26:22  ANR0409I Session 486 ended for server
> TSMPOK_SERVER1
>   (Windows). (SESSION: 486)
> 07/26/2017 13:26:22  ANR0409I Session 605 ended for server
> TSMPOK_SERVER1
>   (Windows). (SESSION: 605)
> 07/26/2017 13:26:41  ANR0985I Process 51 for Replicate Node ( As
> Secondary )
>   running in the BACKGROUND completed with
> completion state
>   FAILURE at 13:26:41. (SESSION: 410, PROCESS: 51)
> 07/26/2017 13:26:41  ANR1893E Process 51 for Replicate Node ( As
> Secondary )
>   completed with a completion state of FAILURE.
> (SESSION:
>   410, PROCESS: 51)
>
> Tim
>


Re: Sloooow deletion of objects on Replication target server

2017-07-26 Thread Stefan Folkerts
Oh, and one more thing.
About not putting SSD's in the replication server, I think that might not
be a smart place to save money. I understand the reasoning behind it but
I've seen enough trouble with spinning disks in replicating and
deduplicating setups to want to try and warn people and explain what we do
to remedy the issue without spending an insane amount.

The database performance of the replication server is, in my opinion,
almost as important as that of the source server. When running replication
there is an insane amount of transactions on the active log and the
database volumes on the target server. And if you ever use the target
server in a DR (test) and it has a slow database, the restores will not be
at their best. Again, a lot of sites could get away with a newer generation
of read-intensive internal SSD's on a good internal controller; that really
isn't that expensive and it really gets things moving.

The read-intensive SSD's can be the enterprise value type from Lenovo; they
have a long enough life span to last five years or so when I calculate it
based on honest RAID 5 write figures. I've even seen people use the Samsung
Pro SSD's made for consumers in servers and unleash great performance.
Those M.2 read-intensive SSD's are crazy fast if your system supports them;
I know some Dell systems do.

I would stick with supported SSD's, but there are a lot of creative people
out there that get great performance at a good price point.

Just for your consideration of course!


On Thu, 27 Jul 2017 at 06:25, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:

>
> The 2TB archive log has never been completely full in my case no, it's IBM
> Blueprint spec and it gives you some time when the database backup breaks
> for whatever reason, also, it's just 2TB of slow nearline storage so it
> doesn't cost much at all.
>
> Have you done something like run a dd on the NFS archive storage to
> isolate it's performance when creating a large file? It's a simple test buy
> when that's fast a db backup should be fast to on the writing side if
> things and the issue will be on the reading and/or compute side if the
> backup load.
>
> If the dd is slow I was and it's (and least in part) your NFS storage that
> is causing the slow backups my guess was wrong. A db backup is just a
> sequential stream of data to a disk on the target. You probably run a few
> streams correct? I have found compression to make the db backup a lot
> slower as well but save a lot of space so that's always a duration vs
> capacity question if you ask me.
>
> Keep us posted!
>
>
> On Wed, 26 Jul 2017 at 22:22, Zoltan Forray <zfor...@vcu.edu> wrote:
>
>> 2TB archlog?  I have never had more that 400GB on any of my systems and
>> have never filled up any of them, until now.  You must have a huge amount
>> of backups.
>>
>> Per your suggestion, we are running nmon for a 24-hour period to see what
>> it comes up with.  I am finding that running the DBBackup locally (from
>> the
>> internal 15K disk to the ISILON/NFS mount), is taking considerably longer
>> than what I was doing, which is sending it upstream via 10G to one of my
>> other TSM servers, 2-miles away. Last DBBackup to NFS took 9.5Hours for
>> 1.5TB while the last upstream backup ran 7-hours.  Doesn't make any sense
>> at all.
>>
>> I will ask for SSD but the chance of getting 2TB of SSD for a backup
>> replication server, is highly unlikely.  There has to be a less expensive
>> way to boost performance. Obviously getting more CPU threads is important.
>>
>> Thank you for all your help/knowledge. It is greatly appreciated!
>>
>> On Wed, Jul 26, 2017 at 3:40 PM, Stefan Folkerts <
>> stefan.folke...@gmail.com>
>> wrote:
>>
>> > Yes, a 300GB archivelog is tiny, that won't work for anything but the
>> > smallest of environments, a believe a medium sized server has a 2TB
>> archive
>> > log.
>> > database backups take a lot of extra time when reorgs and/or (for
>> example)
>> > dereference processes are running on 15K database disks, the system
>> simply
>> > doesn't have the time on the drives to create a speedy database backup
>> > anymore.
>> > Database backups achieve a more consistent and lower duration time when
>> the
>> > database is on SSD's because there is so much performance potential that
>> > doing multiple things no longer bothers the system as much.
>> >
>> > It would surprise me a lot if reducing the memory in the server would
>> fix
>> > the problems, I've never seen anything like that with Spectrum Protect
>> but
>> > I guess there is a first time for everything. :-)
>> >
>> >
&

Re: Sloooow deletion of objects on Replication target server

2017-07-26 Thread Stefan Folkerts
The 2TB archive log has never been completely full in my case, no. It's the
IBM Blueprint spec and it gives you some time when the database backup
breaks for whatever reason; also, it's just 2TB of slow nearline storage,
so it doesn't cost much at all.

Have you done something like running a dd on the NFS archive storage to
isolate its performance when creating a large file? It's a simple test, but
when that's fast, a db backup should be fast too on the writing side of
things, and the issue will be on the reading and/or compute side of the
backup load.
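
Something as crude as this is enough for a baseline (mount point and size
are examples; oflag=direct keeps the page cache out of the measurement):

   # sequential write test against the NFS mount used for the db backups
   dd if=/dev/zero of=/isilon/dbbackup/ddtest.tmp bs=1M count=102400 oflag=direct
   rm /isilon/dbbackup/ddtest.tmp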

If the dd is slow, then it's (at least in part) your NFS storage that is
causing the slow backups and my guess was wrong. A db backup is just a
sequential stream of data to a disk on the target. You probably run a few
streams, correct? I have found compression to make the db backup a lot
slower as well, but it saves a lot of space, so that's always a duration
vs. capacity question if you ask me.
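
For reference, the kind of thing I mean by a few streams (device class name
and stream count are examples; compress=yes trades duration for space):

   /* DBBACK_FILEDEV is an example device class name */
   backup db devclass=DBBACK_FILEDEV type=full numstreams=4 compress=no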

Keep us posted!


On Wed, 26 Jul 2017 at 22:22, Zoltan Forray <zfor...@vcu.edu> wrote:

> 2TB archlog?  I have never had more that 400GB on any of my systems and
> have never filled up any of them, until now.  You must have a huge amount
> of backups.
>
> Per your suggestion, we are running nmon for a 24-hour period to see what
> it comes up with.  I am finding that running the DBBackup locally (from the
> internal 15K disk to the ISILON/NFS mount), is taking considerably longer
> than what I was doing, which is sending it upstream via 10G to one of my
> other TSM servers, 2-miles away. Last DBBackup to NFS took 9.5Hours for
> 1.5TB while the last upstream backup ran 7-hours.  Doesn't make any sense
> at all.
>
> I will ask for SSD but the chance of getting 2TB of SSD for a backup
> replication server, is highly unlikely.  There has to be a less expensive
> way to boost performance. Obviously getting more CPU threads is important.
>
> Thank you for all your help/knowledge. It is greatly appreciated!
>
> On Wed, Jul 26, 2017 at 3:40 PM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > Yes, a 300GB archivelog is tiny, that won't work for anything but the
> > smallest of environments, a believe a medium sized server has a 2TB
> archive
> > log.
> > database backups take a lot of extra time when reorgs and/or (for
> example)
> > dereference processes are running on 15K database disks, the system
> simply
> > doesn't have the time on the drives to create a speedy database backup
> > anymore.
> > Database backups achieve a more consistent and lower duration time when
> the
> > database is on SSD's because there is so much performance potential that
> > doing multiple things no longer bothers the system as much.
> >
> > It would surprise me a lot if reducing the memory in the server would fix
> > the problems, I've never seen anything like that with Spectrum Protect
> but
> > I guess there is a first time for everything. :-)
> >
> >
> >
> > On Wed, Jul 26, 2017 at 4:04 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
> >
> > > Another point of interest is the archlog filesystem.  We originally had
> > it
> > > at 300GB but kept constantly overflowing & crashing since the DB
> backups
> > > that trigger at 80% wouldn't finish (>5-hours) before it reached 100%.
> > So
> > > we recently increased it to 1TB.  Now, the last DBbackup has been
> running
> > > for >24-hours and I have been sitting here watching the archlog
> > filesystem
> > > %used go from 80% to now 38%.  It is taking a long, long time to empty
> > it,
> > > even with nothing running but the DBBackup. With nothing but the
> DBBackup
> > > (and archlog flushing) running, the load average is still >25.
> > >
> > > I really think the additional memory is killing this box.  It was never
> > > this slow or overloaded before!
> > >
> > > On Wed, Jul 26, 2017 at 8:26 AM, Stefan Folkerts <
> > > stefan.folke...@gmail.com>
> > > wrote:
> > >
> > > > Oh, I just now read the 16 threads correctly, I was thinking you
> wrote
> > 16
> > > > cores!
> > > > 8 cores is far below specification if your running M-size blueprint
> > > ingest
> > > > figures.
> > > > I've seen 16 core intel servers (2016 spec xeon CPU's) go up to 70%
> > > > utilization so that kind of load would never work on 8 cores, but
> > again,
> > > I
> > > > don't know how much managed data you have and what your ingest
> figures
> > > are.
> > > >
> > > >
> > > > On Wed, Jul 26, 2017 at 2:02 PM, Zol

Re: Sloooow deletion of objects on Replication target server

2017-07-26 Thread Stefan Folkerts
Yes, a 300GB archivelog is tiny; that won't work for anything but the
smallest of environments. I believe a medium-sized server has a 2TB archive
log.
Database backups take a lot of extra time when reorgs and/or (for example)
dereference processes are running on 15K database disks; the system simply
doesn't have the time on the drives to create a speedy database backup
anymore.
Database backups achieve a more consistent and lower duration when the
database is on SSD's, because there is so much performance potential that
doing multiple things no longer bothers the system as much.

It would surprise me a lot if reducing the memory in the server would fix
the problems; I've never seen anything like that with Spectrum Protect, but
I guess there is a first time for everything. :-)
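
A quick way to watch this while the backup runs, to see whether the log
drain or the database side is the bottleneck (plain admin commands, nothing
fancy):

   query db format=detailed      /* database size and buffer pool hit ratio */
   query log format=detailed     /* active log space used vs. free */
   query process                 /* what else is competing for the database */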



On Wed, Jul 26, 2017 at 4:04 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> Another point of interest is the archlog filesystem.  We originally had it
> at 300GB but kept constantly overflowing & crashing since the DB backups
> that trigger at 80% wouldn't finish (>5-hours) before it reached 100%.  So
> we recently increased it to 1TB.  Now, the last DBbackup has been running
> for >24-hours and I have been sitting here watching the archlog filesystem
> %used go from 80% to now 38%.  It is taking a long, long time to empty it,
> even with nothing running but the DBBackup. With nothing but the DBBackup
> (and archlog flushing) running, the load average is still >25.
>
> I really think the additional memory is killing this box.  It was never
> this slow or overloaded before!
>
> On Wed, Jul 26, 2017 at 8:26 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > Oh, I just now read the 16 threads correctly, I was thinking you wrote 16
> > cores!
> > 8 cores is far below specification if your running M-size blueprint
> ingest
> > figures.
> > I've seen 16 core intel servers (2016 spec xeon CPU's) go up to 70%
> > utilization so that kind of load would never work on 8 cores, but again,
> I
> > don't know how much managed data you have and what your ingest figures
> are.
> >
> >
> > On Wed, Jul 26, 2017 at 2:02 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
> >
> > > I kinda feel the same way since my networking folks say it isn't the
> 10G
> > > links (Xymon shows peaks of 2Gb), eventhough at it's peak processing
> load
> > > it would be handling 5-TSM servers sending replications across the same
> > 10G
> > > links also used for the NFS.
> > >
> > > If the current processes ever finish (delete of 9M objects is now into
> > > 48-hours, I will let the server sit for a day-or-two to see if it
> > > improves.  I have noticed that even with the server idle (no processes
> or
> > > sessions), the CPU load-average was still higher than the 16-threads
> > > available.  I am seriously thinking about going back to the original
> 96GB
> > > of RAM since it seems a lot of this slowdown started after bumping to
> > > 192GB.
> > >
> > > On Wed, Jul 26, 2017 at 3:16 AM, Stefan Folkerts <
> > > stefan.folke...@gmail.com>
> > > wrote:
> > >
> > > > Interesting, why would NFS be the problem if the deletion of objects
> > > > doesn't really touch the storagepools?
> > > >
> > > > I would wager that a straight up dd on the system to create a large
> > file
> > > > via 10Gb/s on NFS would be blazing fast but the database backup is
> slow
> > > > because it's almost never idle, it's always behind it's intern
> > processes
> > > > such as reorgs.
> > > >
> > > > place your bets! :-)
> > > >
> > > > http://www.strawpoll.me/13536369
> > > >
> > > >
> > > > On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic <
> sasa.drnje...@srce.hr>
> > > > wrote:
> > > >
> > > > > Not sure of course...But, I would blame NFS
> > > > >
> > > > > Did you check the negotiated speed of your NFS eth 10G ifaces?
> > > > > And that network?
> > > > >
> > > > > Regards,
> > > > >
> > > > > --
> > > > > Sasa Drnjevic
> > > > > www.srce.unizg.hr
> > > > >
> > > > >
> > > > > On 24.7.2017. 15:49, Zoltan Forray wrote:
> > > > > > 8-cores/16-threads.  It wasn't bad when it was replicating from
> > > > 4-SP/TSM
> > > > > > servers.  We had to stop all replication due to running out of
> > space
> &

Re: Sloooow deletion of objects on Replication target server

2017-07-26 Thread Stefan Folkerts
Oh, I just now read the 16 threads correctly; I was thinking you wrote 16
cores!
8 cores is far below specification if you're running M-size blueprint
ingest figures.
I've seen 16-core Intel servers (2016-spec Xeon CPU's) go up to 70%
utilization, so that kind of load would never work on 8 cores. But again, I
don't know how much managed data you have and what your ingest figures are.


On Wed, Jul 26, 2017 at 2:02 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> I kinda feel the same way since my networking folks say it isn't the 10G
> links (Xymon shows peaks of 2Gb), eventhough at it's peak processing load
> it would be handling 5-TSM servers sending replications across the same 10G
> links also used for the NFS.
>
> If the current processes ever finish (delete of 9M objects is now into
> 48-hours, I will let the server sit for a day-or-two to see if it
> improves.  I have noticed that even with the server idle (no processes or
> sessions), the CPU load-average was still higher than the 16-threads
> available.  I am seriously thinking about going back to the original 96GB
> of RAM since it seems a lot of this slowdown started after bumping to
> 192GB.
>
> On Wed, Jul 26, 2017 at 3:16 AM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > Interesting, why would NFS be the problem if the deletion of objects
> > doesn't really touch the storagepools?
> >
> > I would wager that a straight up dd on the system to create a large file
> > via 10Gb/s on NFS would be blazing fast but the database backup is slow
> > because it's almost never idle, it's always behind it's intern processes
> > such as reorgs.
> >
> > place your bets! :-)
> >
> > http://www.strawpoll.me/13536369
> >
> >
> > On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic <sasa.drnje...@srce.hr>
> > wrote:
> >
> > > Not sure of course...But, I would blame NFS
> > >
> > > Did you check the negotiated speed of your NFS eth 10G ifaces?
> > > And that network?
> > >
> > > Regards,
> > >
> > > --
> > > Sasa Drnjevic
> > > www.srce.unizg.hr
> > >
> > >
> > > On 24.7.2017. 15:49, Zoltan Forray wrote:
> > > > 8-cores/16-threads.  It wasn't bad when it was replicating from
> > 4-SP/TSM
> > > > servers.  We had to stop all replication due to running out of space
> > and
> > > > until I finish this cleanup, I have been holding off replication.
> So,
> > > the
> > > > deletion has been running standalone.
> > > >
> > > > I forgot to mention that DB backups are also running very long.
> 1.5TB
> > DB
> > > > backup runs 8+hours to NFS storage.  These are connected via 10G.
> > > >
> > > > On Mon, Jul 24, 2017 at 9:41 AM, Sasa Drnjevic <
> sasa.drnje...@srce.hr>
> > > > wrote:
> > > >
> > > >> On 24.7.2017. 15:25, Zoltan Forray wrote:
> > > >>> Due to lack of resources, we have had to stop replication on one of
> > our
> > > >> SP
> > > >>> servers. The replication target server is 7.1.6.3 RHEL 7, Dell T710
> > > with
> > > >>> 192GB RAM.  NFS/ISILON storage.
> > > >>>
> > > >>> After removing replication from the nodes on source server, I have
> > been
> > > >>> cleaning up the replication server by deleting the filespaces for
> the
> > > >> nodes
> > > >>> we are no longer replicating.
> > > >>>
> > > >>> My issue is the delete filespaces on the replication server is
> taking
> > > >>> forever.  It took over a week to delete one filespace with
> 31-million
> > > >>> objects?
> > > >>
> > > >>
> > > >> That is definitely to lng :-(
> > > >>
> > > >> It would take 6-8 hrs max, in my environment even under "standard"
> > > load...
> > > >>
> > > >> How many CPU cores does it have?
> > > >>
> > > >> And how is/was it performing the role of a target repl. server
> > > >> performance wise?
> > > >>
> > > >> Regards,
> > > >>
> > > >> --
> > > >> Sasa Drnjevic
> > > >> www.srce.unizg.hr
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>>

Re: Sloooow deletion of objects on Replication target server

2017-07-26 Thread Stefan Folkerts
Interesting, why would NFS be the problem if the deletion of objects
doesn't really touch the storagepools?

I would wager that a straight-up dd on the system to create a large file
via 10Gb/s on NFS would be blazing fast, but the database backup is slow
because it's almost never idle; it's always behind its internal processes
such as reorgs.

place your bets! :-)

http://www.strawpoll.me/13536369


On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic 
wrote:

> Not sure of course...But, I would blame NFS
>
> Did you check the negotiated speed of your NFS eth 10G ifaces?
> And that network?
>
> Regards,
>
> --
> Sasa Drnjevic
> www.srce.unizg.hr
>
>
> On 24.7.2017. 15:49, Zoltan Forray wrote:
> > 8-cores/16-threads.  It wasn't bad when it was replicating from 4-SP/TSM
> > servers.  We had to stop all replication due to running out of space and
> > until I finish this cleanup, I have been holding off replication.  So,
> the
> > deletion has been running standalone.
> >
> > I forgot to mention that DB backups are also running very long.  1.5TB DB
> > backup runs 8+hours to NFS storage.  These are connected via 10G.
> >
> > On Mon, Jul 24, 2017 at 9:41 AM, Sasa Drnjevic 
> > wrote:
> >
> >> On 24.7.2017. 15:25, Zoltan Forray wrote:
> >>> Due to lack of resources, we have had to stop replication on one of our
> >> SP
> >>> servers. The replication target server is 7.1.6.3 RHEL 7, Dell T710
> with
> >>> 192GB RAM.  NFS/ISILON storage.
> >>>
> >>> After removing replication from the nodes on source server, I have been
> >>> cleaning up the replication server by deleting the filespaces for the
> >> nodes
> >>> we are no longer replicating.
> >>>
> >>> My issue is the delete filespaces on the replication server is taking
> >>> forever.  It took over a week to delete one filespace with 31-million
> >>> objects?
> >>
> >>
> >> That is definitely to lng :-(
> >>
> >> It would take 6-8 hrs max, in my environment even under "standard"
> load...
> >>
> >> How many CPU cores does it have?
> >>
> >> And how is/was it performing the role of a target repl. server
> >> performance wise?
> >>
> >> Regards,
> >>
> >> --
> >> Sasa Drnjevic
> >> www.srce.unizg.hr
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>>
> >>> To me it is highly unusual to take this long. Your thoughts on this?
> >>>
> >>> --
> >>> *Zoltan Forray*
> >>> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> >>> Xymon Monitor Administrator
> >>> VMware Administrator
> >>> Virginia Commonwealth University
> >>> UCC/Office of Technology Services
> >>> www.ucc.vcu.edu
> >>> zfor...@vcu.edu - 804-828-4807
> >>> Don't be a phishing victim - VCU and other reputable organizations will
> >>> never use email to request that you reply with your password, social
> >>> security number or confidential personal information. For more details
> >>> visit http://infosecurity.vcu.edu/phishing.html
> >>>
> >>
> >
> >
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > Xymon Monitor Administrator
> > VMware Administrator
> > Virginia Commonwealth University
> > UCC/Office of Technology Services
> > www.ucc.vcu.edu
> > zfor...@vcu.edu - 804-828-4807
> > Don't be a phishing victim - VCU and other reputable organizations will
> > never use email to request that you reply with your password, social
> > security number or confidential personal information. For more details
> > visit http://infosecurity.vcu.edu/phishing.html
> >
>


Re: Sloooow deletion of objects on Replication target server

2017-07-25 Thread Stefan Folkerts
You're welcome, happy to help.
Deleting objects is very database and active log intensive, but it also
hits the CPU. That said, I've never seen a 16-core machine really struggle
on CPU within the blueprint specs; even with compression enabled on the
containerpool and running maximum backup performance, there was still some
room at 10Gb/s line-speed backup performance.
The biggest and most noticeable upgrade I have given Spectrum Protect
servers is a swap to SSD. It changes everything; even admin commands become
more responsive.

Most of the time it starts out fine, but then, as the system gets more load
and the database grows, it slowly gets slower, until later (it seems pretty
suddenly) it becomes unworkable.
I think the memory upgrade is most likely just a coincidence timing-wise.
But again, run the benchmarks and nmon and check the data. :-)



On Tue, Jul 25, 2017 at 9:20 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> Thanks for the suggestions.  I am looking at the "blueprint" stuff and it
> looks pretty heavy-duty.  I will look into running nmon.  Things seem to
> have gotten worse since I upgraded the memory.  DB backups to NFS/ISILON
> are now running 15+ hours with very little load (stopped all replications
> since they were becoming never-ending).  Deleting of lots of objects
> (20-million is one example) is running into many days if not a week.
>
> On Tue, Jul 25, 2017 at 2:57 PM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > Another thing you could do besides the benchmark is run nmon in batch
> mode
> > on the Linux Spectrum Protect servers and analyze that, run it for an
> hour
> > or 2 when the load is heavy, I could help you with the output if you
> could
> > use some help, no problem.
> >
> > On Tue, Jul 25, 2017 at 8:49 PM, Stefan Folkerts <
> > stefan.folke...@gmail.com>
> > wrote:
> >
> > > How many drives in what kind of a raid setup? what kinds of performance
> > > are you getting from them, do you have any idea?
> > >
> > > I have had nothing but issues with performance on deduplicating setups,
> > > especially with replication in play when you use 15K disks.
> > > I've had setups with 24x15K drives in raid 10 in a V7000 and I still
> was
> > > having a hard time getting all the work done, as soon as you drop in
> some
> > > SSD's you set, but I suppose they can work on small setups.
> > > I would never use 15K drives again for any kinds of disk based
> > > deduplicating setup however, SSD's are not that expensive anymore, you
> > can
> > > use read intensive SSD's in most cases and be fine.
> > >
> > > Really, the blueprint benchmark tool works fine and gives a good
> > > indication of your base disk performance, if it's far below what's
> > > described for the blueprint that's probably the problem.
> > > I would put my money on the 15K drives for the database and my
> suggestion
> > > (in the case of insufficient database performance) would be place the
> > > database and the active log on SSD's
> > >
> > >
> > >
> > > On Tue, Jul 25, 2017 at 7:58 PM, Zoltan Forray <zfor...@vcu.edu>
> wrote:
> > >
> > >> The two database filesystems (1TB each) are on internal, 15K SAS
> drives.
> > >>
> > >> On Tue, Jul 25, 2017 at 1:34 PM, Stefan Folkerts <
> > >> stefan.folke...@gmail.com>
> > >> wrote:
> > >>
> > >> > My question would be on what type of storage is the Spectrum Protect
> > >> > database located.
> > >> > Second question, have you run the IBM blueprint benchmark tool on
> the
> > >> > storagepool and database storage, and if so, what were the results?
> > >> >
> > >> > On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic <
> sasa.drnje...@srce.hr
> > >
> > >> > wrote:
> > >> >
> > >> > > Not sure of course...But, I would blame NFS
> > >> > >
> > >> > > Did you check the negotiated speed of your NFS eth 10G ifaces?
> > >> > > And that network?
> > >> > >
> > >> > > Regards,
> > >> > >
> > >> > > --
> > >> > > Sasa Drnjevic
> > >> > > www.srce.unizg.hr
> > >> > >
> > >> > >
> > >> > > On 24.7.2017. 15:49, Zoltan Forray wrote:
> > >> > > > 8-cores/16-threads.  It wasn't bad when it was replicating from
> > >> > 4-SP/TSM
> > >

Re: Sloooow deletion of objects on Replication target server

2017-07-25 Thread Stefan Folkerts
Another thing you could do besides the benchmark is run nmon in batch mode
on the Linux Spectrum Protect servers and analyze that. Run it for an hour
or two when the load is heavy; I could help you with the output if you
could use some help, no problem.
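
Something like this captures a two-hour window in a format the nmon
analyser spreadsheet can read (interval, count and output directory are
examples):

   # one sample every 30 seconds, 240 samples = 2 hours, output to /tmp
   nmon -f -s 30 -c 240 -m /tmp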

On Tue, Jul 25, 2017 at 8:49 PM, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:

> How many drives in what kind of a raid setup? what kinds of performance
> are you getting from them, do you have any idea?
>
> I have had nothing but issues with performance on deduplicating setups,
> especially with replication in play when you use 15K disks.
> I've had setups with 24x15K drives in raid 10 in a V7000 and I still was
> having a hard time getting all the work done, as soon as you drop in some
> SSD's you set, but I suppose they can work on small setups.
> I would never use 15K drives again for any kinds of disk based
> deduplicating setup however, SSD's are not that expensive anymore, you can
> use read intensive SSD's in most cases and be fine.
>
> Really, the blueprint benchmark tool works fine and gives a good
> indication of your base disk performance, if it's far below what's
> described for the blueprint that's probably the problem.
> I would put my money on the 15K drives for the database and my suggestion
> (in the case of insufficient database performance) would be place the
> database and the active log on SSD's
>
>
>
> On Tue, Jul 25, 2017 at 7:58 PM, Zoltan Forray <zfor...@vcu.edu> wrote:
>
>> The two database filesystems (1TB each) are on internal, 15K SAS drives.
>>
>> On Tue, Jul 25, 2017 at 1:34 PM, Stefan Folkerts <
>> stefan.folke...@gmail.com>
>> wrote:
>>
>> > My question would be on what type of storage is the Spectrum Protect
>> > database located.
>> > Second question, have you run the IBM blueprint benchmark tool on the
>> > storagepool and database storage, and if so, what were the results?
>> >
>> > On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic <sasa.drnje...@srce.hr>
>> > wrote:
>> >
>> > > Not sure of course...But, I would blame NFS
>> > >
>> > > Did you check the negotiated speed of your NFS eth 10G ifaces?
>> > > And that network?
>> > >
>> > > Regards,
>> > >
>> > > --
>> > > Sasa Drnjevic
>> > > www.srce.unizg.hr
>> > >
>> > >
>> > > On 24.7.2017. 15:49, Zoltan Forray wrote:
>> > > > 8-cores/16-threads.  It wasn't bad when it was replicating from
>> > 4-SP/TSM
>> > > > servers.  We had to stop all replication due to running out of space
>> > and
>> > > > until I finish this cleanup, I have been holding off replication.
>> So,
>> > > the
>> > > > deletion has been running standalone.
>> > > >
>> > > > I forgot to mention that DB backups are also running very long.
>> 1.5TB
>> > DB
>> > > > backup runs 8+hours to NFS storage.  These are connected via 10G.
>> > > >
>> > > > On Mon, Jul 24, 2017 at 9:41 AM, Sasa Drnjevic <
>> sasa.drnje...@srce.hr>
>> > > > wrote:
>> > > >
>> > > >> On 24.7.2017. 15:25, Zoltan Forray wrote:
>> > > >>> Due to lack of resources, we have had to stop replication on one
>> of
>> > our
>> > > >> SP
>> > > >>> servers. The replication target server is 7.1.6.3 RHEL 7, Dell
>> T710
>> > > with
>> > > >>> 192GB RAM.  NFS/ISILON storage.
>> > > >>>
>> > > >>> After removing replication from the nodes on source server, I have
>> > been
>> > > >>> cleaning up the replication server by deleting the filespaces for
>> the
>> > > >> nodes
>> > > >>> we are no longer replicating.
>> > > >>>
>> > > >>> My issue is the delete filespaces on the replication server is
>> taking
>> > > >>> forever.  It took over a week to delete one filespace with
>> 31-million
>> > > >>> objects?
>> > > >>
>> > > >>
>> > > >> That is definitely to lng :-(
>> > > >>
>> > > >> It would take 6-8 hrs max, in my environment even under "standard"
>> > > load...
>> > > >>
>> > > >> How many CPU cores does it have?
>> > > >>
>> > > >> A

Re: Sloooow deletion of objects on Replication target server

2017-07-25 Thread Stefan Folkerts
How many drives in what kind of a RAID setup? What kind of performance are
you getting from them, do you have any idea?

I have had nothing but issues with performance on deduplicating setups,
especially with replication in play, when you use 15K disks.
I've had setups with 24x15K drives in RAID 10 in a V7000 and I was still
having a hard time getting all the work done; as soon as you drop in some
SSD's you're set, but I suppose 15K drives can work on small setups.
I would never use 15K drives again for any kind of disk-based deduplicating
setup, however; SSD's are not that expensive anymore, and you can use
read-intensive SSD's in most cases and be fine.

Really, the blueprint benchmark tool works fine and gives a good indication
of your base disk performance; if it's far below what's described for the
blueprint, that's probably the problem.
I would put my money on the 15K drives for the database, and my suggestion
(in the case of insufficient database performance) would be to place the
database and the active log on SSD's.
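
If I remember the benchmark kit correctly it's just a Perl script you point
at the filesystems, roughly like this (script name and paths are as I
recall them from the blueprint documentation, so double-check there):

   # storagepool and database file systems are example paths
   perl tsmdiskperf.pl workload=stgpool fslist=/tsm/stg00,/tsm/stg01
   perl tsmdiskperf.pl workload=db fslist=/tsm/db00,/tsm/db01,/tsm/db02,/tsm/db03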



On Tue, Jul 25, 2017 at 7:58 PM, Zoltan Forray <zfor...@vcu.edu> wrote:

> The two database filesystems (1TB each) are on internal, 15K SAS drives.
>
> On Tue, Jul 25, 2017 at 1:34 PM, Stefan Folkerts <
> stefan.folke...@gmail.com>
> wrote:
>
> > My question would be on what type of storage is the Spectrum Protect
> > database located.
> > Second question, have you run the IBM blueprint benchmark tool on the
> > storagepool and database storage, and if so, what were the results?
> >
> > On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic <sasa.drnje...@srce.hr>
> > wrote:
> >
> > > Not sure of course...But, I would blame NFS
> > >
> > > Did you check the negotiated speed of your NFS eth 10G ifaces?
> > > And that network?
> > >
> > > Regards,
> > >
> > > --
> > > Sasa Drnjevic
> > > www.srce.unizg.hr
> > >
> > >
> > > On 24.7.2017. 15:49, Zoltan Forray wrote:
> > > > 8-cores/16-threads.  It wasn't bad when it was replicating from
> > 4-SP/TSM
> > > > servers.  We had to stop all replication due to running out of space
> > and
> > > > until I finish this cleanup, I have been holding off replication.
> So,
> > > the
> > > > deletion has been running standalone.
> > > >
> > > > I forgot to mention that DB backups are also running very long.
> 1.5TB
> > DB
> > > > backup runs 8+hours to NFS storage.  These are connected via 10G.
> > > >
> > > > On Mon, Jul 24, 2017 at 9:41 AM, Sasa Drnjevic <
> sasa.drnje...@srce.hr>
> > > > wrote:
> > > >
> > > >> On 24.7.2017. 15:25, Zoltan Forray wrote:
> > > >>> Due to lack of resources, we have had to stop replication on one of
> > our
> > > >> SP
> > > >>> servers. The replication target server is 7.1.6.3 RHEL 7, Dell T710
> > > with
> > > >>> 192GB RAM.  NFS/ISILON storage.
> > > >>>
> > > >>> After removing replication from the nodes on source server, I have
> > been
> > > >>> cleaning up the replication server by deleting the filespaces for
> the
> > > >> nodes
> > > >>> we are no longer replicating.
> > > >>>
> > > >>> My issue is the delete filespaces on the replication server is
> taking
> > > >>> forever.  It took over a week to delete one filespace with
> 31-million
> > > >>> objects?
> > > >>
> > > >>
> > > >> That is definitely to lng :-(
> > > >>
> > > >> It would take 6-8 hrs max, in my environment even under "standard"
> > > load...
> > > >>
> > > >> How many CPU cores does it have?
> > > >>
> > > >> And how is/was it performing the role of a target repl. server
> > > >> performance wise?
> > > >>
> > > >> Regards,
> > > >>
> > > >> --
> > > >> Sasa Drnjevic
> > > >> www.srce.unizg.hr
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>
> > > >>>
> > > >>> To me it is highly unusual to take this long. Your thoughts on
> this?
> > > >>>
> > > >>> --
> > > >>> *Zoltan Forray*
> > > >>> Spectrum Protect (p.k.a. TSM) Software & Hardware 

Re: Sloooow deletion of objects on Replication target server

2017-07-25 Thread Stefan Folkerts
My question would be: on what type of storage is the Spectrum Protect
database located?
Second question: have you run the IBM blueprint benchmark tool on the
storagepool and database storage, and if so, what were the results?

On Mon, Jul 24, 2017 at 3:55 PM, Sasa Drnjevic 
wrote:

> Not sure of course...But, I would blame NFS
>
> Did you check the negotiated speed of your NFS eth 10G ifaces?
> And that network?
>
> Regards,
>
> --
> Sasa Drnjevic
> www.srce.unizg.hr
>
>
> On 24.7.2017. 15:49, Zoltan Forray wrote:
> > 8-cores/16-threads.  It wasn't bad when it was replicating from 4-SP/TSM
> > servers.  We had to stop all replication due to running out of space and
> > until I finish this cleanup, I have been holding off replication.  So,
> the
> > deletion has been running standalone.
> >
> > I forgot to mention that DB backups are also running very long.  1.5TB DB
> > backup runs 8+hours to NFS storage.  These are connected via 10G.
> >
> > On Mon, Jul 24, 2017 at 9:41 AM, Sasa Drnjevic 
> > wrote:
> >
> >> On 24.7.2017. 15:25, Zoltan Forray wrote:
> >>> Due to lack of resources, we have had to stop replication on one of our
> >> SP
> >>> servers. The replication target server is 7.1.6.3 RHEL 7, Dell T710
> with
> >>> 192GB RAM.  NFS/ISILON storage.
> >>>
> >>> After removing replication from the nodes on source server, I have been
> >>> cleaning up the replication server by deleting the filespaces for the
> >> nodes
> >>> we are no longer replicating.
> >>>
> >>> My issue is the delete filespaces on the replication server is taking
> >>> forever.  It took over a week to delete one filespace with 31-million
> >>> objects?
> >>
> >>
> >> That is definitely to lng :-(
> >>
> >> It would take 6-8 hrs max, in my environment even under "standard"
> load...
> >>
> >> How many CPU cores does it have?
> >>
> >> And how is/was it performing the role of a target repl. server
> >> performance wise?
> >>
> >> Regards,
> >>
> >> --
> >> Sasa Drnjevic
> >> www.srce.unizg.hr
> >>
> >>
> >>
> >>
> >>
> >>
> >>
> >>>
> >>> To me it is highly unusual to take this long. Your thoughts on this?
> >>>
> >>> --
> >>> *Zoltan Forray*
> >>> Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> >>> Xymon Monitor Administrator
> >>> VMware Administrator
> >>> Virginia Commonwealth University
> >>> UCC/Office of Technology Services
> >>> www.ucc.vcu.edu
> >>> zfor...@vcu.edu - 804-828-4807
> >>> Don't be a phishing victim - VCU and other reputable organizations will
> >>> never use email to request that you reply with your password, social
> >>> security number or confidential personal information. For more details
> >>> visit http://infosecurity.vcu.edu/phishing.html
> >>>
> >>
> >
> >
> >
> > --
> > *Zoltan Forray*
> > Spectrum Protect (p.k.a. TSM) Software & Hardware Administrator
> > Xymon Monitor Administrator
> > VMware Administrator
> > Virginia Commonwealth University
> > UCC/Office of Technology Services
> > www.ucc.vcu.edu
> > zfor...@vcu.edu - 804-828-4807
> > Don't be a phishing victim - VCU and other reputable organizations will
> > never use email to request that you reply with your password, social
> > security number or confidential personal information. For more details
> > visit http://infosecurity.vcu.edu/phishing.html
> >
>


Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?

2017-07-06 Thread Stefan Folkerts
Great, and you're very welcome Eric!

On Thu, Jul 6, 2017 at 12:15 PM, Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com> wrote:

> Hi Stefan!
> I just discovered that raising the sysctl values net.core.rmem_max and
> net.core.wmem_max to 124928 fixes the error! Thanks for pointing me in the
> right direction.
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Stefan Folkerts
> Sent: donderdag 6 juli 2017 9:13
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?
>
> Hi Eric,
>
> I think some Linux sysctl tuning might be required to raise the Linux OS
> TCP window limit from 244.
> With window scaling enabled, the system can adjust it as needed; that might
> work for the TSM client as well.
>
> net.ipv4.tcp_window_scaling = 1
>
> Regards,
>   Stefan
>
>
> On Wed, Jul 5, 2017 at 4:30 PM, Loon, Eric van (ITOPT3) - KLM <
> eric-van.l...@klm.com> wrote:
>
> > Hi Del!
> > Because of your mail down below I implemented TCPWINDOWSIZE 512 on the
> > (LINUX) clients on our new TSM server with a directory containerpool,
> > but the client logs are now filled with the following messages:
> >
> > ANS5246W TCPWINDOWSIZE 512 is specified, but exceeds the maximum value
> > allowed by the operating system. TCPWINDOWSIZE 244 will be used instead.
> >
> > What is your recommendation, from a performance perspective, in this
> > situation?
> > Thanks again for your help!
> > Kind regards,
> > Eric van Loon
> > Air France/KLM Storage Engineering
> >
> > -Original Message-
> > From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf
> > Of Del Hoobler
> > Sent: dinsdag 21 maart 2017 2:11
> > To: ADSM-L@VM.MARIST.EDU
> > Subject: Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?
> >
> > Hi Ben,
> >
> > Here are some items to get you started:
> >
> >
> > Backup-Archive client with limited, high latency network (WAN backups):
> > ===
> > TCPWINDOWSIZE   512
> > RESOURCEUTILIZATION 4
> > COMPRESSION Yes
> > DEDUPLICATION   Yes
> > ENABLEDEDUPCACHEYes
> >
> > Tip:  Do not use the client deduplication caching for applications
> > that use the IBM Spectrum Protect API.  Refer to section 1.2.3.2.1 for
> > additional details.
> >
> >
> > Backup/Archive client or Client API with limited network (Gigabit LAN
> > backups):
> > ===
> > TCPWINDOWSIZE   512
> > RESOURCEUTILIZATION 10
> > COMPRESSION Yes
> > DEDUPLICATION   Yes
> > ENABLEDEDUPCACHENo
> >
> >
> > Backup/Archive client or Client API with high speed network (10
> > Gigabit + LAN backups) ===
> > TCPWINDOWSIZE   512
> > RESOURCEUTILIZATION 10
> > COMPRESSION No
> > DEDUPLICATION   No
> > ENABLEDEDUPCACHENo
> >
> >
> > Tip:  For optimal data reduction, avoid the following client option
> > combination:
> >
> > COMPRESSION Yes
> > DEDUPLICATION   No
> >
> >
> >
> >
> > Del
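As a hedged illustration only: on a Linux or AIX client, the WAN profile above would
typically be placed in the server stanza of dsm.sys, roughly like this (the stanza name
and server address are placeholders; the option names are the ones listed above):

  SErvername          sp_server1
    COMMMethod          TCPip
    TCPServeraddress    sp-server.example.com
    TCPWINDOWSIZE       512
    RESOURCEUTILIZATION 4
    COMPRESSION         Yes
    DEDUPLICATION       Yes
    ENABLEDEDUPCACHE    Yes

A Windows client carries the same options in dsm.opt instead.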
> >
> > 
> >
> > "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU> wrote on 03/15/2017
> > 02:39:04 PM:
> >
> > > From: "Alford, Ben" <balf...@utk.edu>
> > > To: ADSM-L@VM.MARIST.EDU
> > > Date: 03/15/2017 02:39 PM
> > > Subject: Best Practices/Best Performance SP/TSM B/A Client Settings ?
> > > Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
> > >
> > > I've looked at the IBM Blueprint documents but may have missed what
> > > I was looking for - the Best Practices for Best Performance for TSM
> > > B/A client settings.  As we move from 6.4 to 7.x or 8.x clients
> > > before the 6.4 EOL, we are looking to test with the current client
> > > settings optimized for things like TCPBUFFSIZE, TCPWINDOWSIZE,
> > > TXNBYTELIMIT, etc., etc.
> > > Thanks!
> > >
> > > Ben Alford
> > > IT Manager, Office of Information Technology
> > > Systems: Shared Services
> > >
> > > The University of Tennessee
> > >
> > 
> > For information, 

Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?

2017-07-06 Thread Stefan Folkerts
Hi Eric,

I think some Linux sysctl tuning might be required to raise the Linux OS
TCP window limit from 244.
With window scaling enabled, the system can adjust it as needed; that might
work for the TSM client as well.

net.ipv4.tcp_window_scaling = 1

Regards,
  Stefan
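A minimal sketch of the sysctl settings being discussed in this thread (the buffer
values are illustrative only, sized to roughly match a 512 KB TCPWINDOWSIZE; check the
limits and file locations for your distribution):

  # example /etc/sysctl.d/99-sp-client.conf
  net.ipv4.tcp_window_scaling = 1
  net.core.rmem_max = 524288
  net.core.wmem_max = 524288

  # load the file without a reboot
  sysctl -p /etc/sysctl.d/99-sp-client.conf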


On Wed, Jul 5, 2017 at 4:30 PM, Loon, Eric van (ITOPT3) - KLM <
eric-van.l...@klm.com> wrote:

> Hi Del!
> Because of your mail down below I implemented TCPWINDOWSIZE 512 on the
> (LINUX) clients on our new TSM server with a directory containerpool, but
> the client logs are now filled with the following messages:
>
> ANS5246W TCPWINDOWSIZE 512 is specified, but exceeds the maximum value
> allowed by the operating system. TCPWINDOWSIZE 244 will be used instead.
>
> What is your recommendation, from a performance perspective, in this
> situation?
> Thanks again for your help!
> Kind regards,
> Eric van Loon
> Air France/KLM Storage Engineering
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Del Hoobler
> Sent: dinsdag 21 maart 2017 2:11
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Best Practices/Best Performance SP/TSM B/A Client Settings ?
>
> Hi Ben,
>
> Here are some items to get you started:
>
>
> Backup-Archive client with limited, high latency network (WAN backups):
> ===
> TCPWINDOWSIZE   512
> RESOURCEUTILIZATION 4
> COMPRESSION Yes
> DEDUPLICATION   Yes
> ENABLEDEDUPCACHEYes
>
> Tip:  Do not use the client deduplication caching for applications that
> use the IBM Spectrum Protect API.  Refer to section 1.2.3.2.1 for
> additional details.
>
>
> Backup/Archive client or Client API with limited network (Gigabit LAN
> backups):
> ===
> TCPWINDOWSIZE   512
> RESOURCEUTILIZATION 10
> COMPRESSION Yes
> DEDUPLICATION   Yes
> ENABLEDEDUPCACHENo
>
>
> Backup/Archive client or Client API with high speed network (10 Gigabit +
> LAN backups)
> ===
> TCPWINDOWSIZE   512
> RESOURCEUTILIZATION 10
> COMPRESSION No
> DEDUPLICATION   No
> ENABLEDEDUPCACHENo
>
>
> Tip:  For optimal data reduction, avoid the following client option
> combination:
>
> COMPRESSION Yes
> DEDUPLICATION   No
>
>
>
>
> Del
>
> 
>
> "ADSM: Dist Stor Manager"  wrote on 03/15/2017
> 02:39:04 PM:
>
> > From: "Alford, Ben" 
> > To: ADSM-L@VM.MARIST.EDU
> > Date: 03/15/2017 02:39 PM
> > Subject: Best Practices/Best Performance SP/TSM B/A Client Settings ?
> > Sent by: "ADSM: Dist Stor Manager" 
> >
> > I've looked at the IBM Blueprint documents but may have missed what
> > I was looking for - the Best Practices for Best Performance for TSM
> > B/A client settings.  As we move from 6.4 to 7.x or 8.x clients
> > before the 6.4 EOL, we are looking to test with the current client
> > settings optimized for things like TCPBUFFSIZE, TCPWINDOWSIZE,
> > TXNBYTELIMIT, etc., etc.
> > Thanks!
> >
> > Ben Alford
> > IT Manager, Office of Information Technology
> > Systems: Shared Services
> >
> > The University of Tennessee
> >
> 
> 
>


Re: Delete filespace operation not replicated ?

2017-07-05 Thread Stefan Folkerts
It's a safety mechanism, I believe. You can use decommission vm to
"decommission" a filespace; if you use that, the data in the filespace will
expire according to policy settings on the replica as well.
There is no way to actually delete data on the replication target with a
delete filespace on the source.
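For illustration only (node, VM and filespace names are placeholders, and the exact
syntax should be checked against the server documentation), the difference looks
roughly like this from an administrative client:

  decommission vm DATACENTER_NODE MYVM
      (the data then ages out on both the source and the replica through normal expiration)

  delete filespace MYNODE /my/filespace
      (removes data on the source only; the same command has to be issued on the
       replication target if the replica copy must go as well)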

On Tue, Jul 4, 2017 at 11:38 AM, PAC Brion Arnaud <
arnaud.br...@panalpina.com> wrote:

> Hi Paul,
>
> Thanks a lot for the appreciated feedback!
>
> I discussed this in the meantime with a consultant, who is not of the same
> mind as IBM support: this might be working "as designed", but in that case
> the design is clearly flawed!
> I could accept that the deletion on the replication server would not be
> immediate, due to settings like the "Delay Period for Container Reuse"
> parameter, but the fact that the deletion does not take place at all is
> definitely wrong in my eyes.
>
> Regarding your kind offer of scripts to help in such a situation: thanks
> a lot, but fortunately such "forced" deletion of file spaces in TSM is not
> very common in our shop, and I do believe that, despite my old age, I will
> still be able to type an additional "del fi" command on the replication
> server when needed ;-)
>
> Cheers.
>
> Arnaud
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> Paul van Dongen
> Sent: Tuesday, July 04, 2017 9:20 AM
> To: ADSM-L@VM.MARIST.EDU
> Subject: Re: Delete filespace operation not replicated ?
>
> Hi Arnaud,
>
> I posted a similar question a few months ago. Got no answers from the
> list, but after opening a PMR I got the "works as designed" answer.
> Since we use a script to delete filespaces in our environment (to check
> last backup dates, customer IDs, etc.) we inserted some extra code to check
> if the node is involved in replication, and, if so, check which server is
> the replication partner in order to delete the correct filespace on the
> "target" server.
>
> Please send me a private message if you need more details.
>
>
> Kind regards/Met vriendelijke groet,
>
> Paul van Dongen
> System Expert
>
>
>
>
> T:  +31 (0)20 560 6600  |  M: +31 (0)6 41 81 44 46   |  E:
> paul.vandon...@vancis.nl
> Science Park 402 | 1098 XH Amsterdam  |  www.vancis.nl
>
>
>
>
> -Original Message-
> From: ADSM: Dist Stor Manager [mailto:ADSM-L@VM.MARIST.EDU] On Behalf Of
> PAC Brion Arnaud
> Sent: dinsdag 20 juni 2017 12:46
> To: ADSM-L@VM.MARIST.EDU
> Subject: [ADSM-L] Delete filespace operation not replicated ?
>
> Hi all,
>
> A question regarding replicated data, and how a deleted filespace should
> be treated by the process...
> First of all, the environment: two TSM servers at 7.1.7.2, making use only
> of directory pools and configured to perform cross replication (A
> replicates to B, and B replicates to A).
> I recently deleted a filespace from a node on its primary server.
> Since this deletion, the following processes took place on the primary
> server: protect stgpool, replicate node, and expire inventory.
> However, I can still see the deleted filespace on the replication server
> for the involved replicated node.
> Is this normal?
>
> Thanks for any advice !
>
> Cheers.
>
> Arnaud
>
> 
> **
> Backup and Recovery Systems Administrator Panalpina Management Ltd.,
> Basle, Switzerland, CIT Department Viadukstrasse 42, P.O. Box 4002 Basel/CH
> Phone: +41 (61) 226 11 11, FAX: +41 (61) 226 17 01
> Direct: +41 (61) 226 19 78
> e-mail: arnaud.br...@panalpina.com
> 
> **
>


Re: vCenter certificate change with VE

2017-06-30 Thread Stefan Folkerts
I'm guessing the certificate is only for the browser connection to the
vCenter server, but does anybody know if this is really the case? I can't
find anything on certificates in the VE documentation other than the one for
the VE web interface; I'm talking about the certificate on the vCenter server.

On Wed, Jun 28, 2017 at 4:43 PM, Stefan Folkerts <stefan.folke...@gmail.com>
wrote:

>
>
> Does anybody know what happens to Spectrum Protect for VE when the vCenter
> switches to a certificate signed by the customer's root CA?
>
> Regards,
>Stefan
>
>

