Re: Data Deduplication
That sounds about right. Data Domain's a good product with a lot of happy customers, but TSM customers who are only backing up files and only keeping 3 versions aren't going to be among them. ;) You've got to back up database/app/email-type data that does recurring full backups, and/or keep a whole lot more than 3 versions, for de-dupe to make sense for you. That's not a Data Domain thing; that's just how de-dupe works.

In addition, it won't work if you use it as you would normally use a disk pool (1-2 days of backups and then move to tape). There won't be anything to de-dupe against, and you'll get close to nothing. You need to leave your onsite backups permanently on it for de-dupe to work.

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Dirk Kastens
Sent: Tuesday, August 28, 2007 9:18 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Deduplication

Hi,

Jon Evans wrote:
> Dirk
>
> I also tried Data Domain and was not impressed. I now use Diligent's
> ProtecTier and it's far more impressive. It's scalable, reasonably priced,
> achieves throughput of 200 MB per second and better, and factoring ratios
> of over 10 to 1.

We mainly back up normal files and only use 3 backup versions, so that the compression will not be more than 3:1 or 5:1. The best results can be achieved with databases and application data like Exchange. That's what the people from DataDomain said. I'm just running another test with MySQL and Domino data. Let's wait and see :-)

--
Regards,

Dirk Kastens
Universitaet Osnabrueck, Rechenzentrum (Computer Center)
Albrechtstr. 28, 49069 Osnabrueck, Germany
Tel.: +49-541-969-2347, FAX: -2470
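Curtis's point, that de-dupe only pays off when the same chunks keep arriving, can be sketched with a toy fixed-size chunk-hash de-duplicator. This is hypothetical illustration code, not how Data Domain actually chunks (real appliances use smarter, variable-size chunking):

```python
import hashlib, os

def dedupe_ratio(stream: bytes, chunk_size: int = 4096) -> float:
    """Total chunks divided by unique chunks in a backup stream."""
    chunks = [stream[i:i + chunk_size] for i in range(0, len(stream), chunk_size)]
    unique = {hashlib.sha256(c).digest() for c in chunks}
    return len(chunks) / len(unique)

full = os.urandom(256 * 1024)                    # one "full backup" image
recurring_fulls = full * 4                       # the same full sent four times
changed_only = os.urandom(len(recurring_fulls))  # incremental-style, all-new data

print(dedupe_ratio(recurring_fulls))   # 4.0: recurring fulls de-dupe well
print(dedupe_ratio(changed_only))      # 1.0: nothing to de-dupe against
```

This is also why treating the box like a short-lived disk pool defeats it: if yesterday's data has already moved off to tape, today's backup arrives with no prior chunks on the box to match against.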
Re: Looking for suggestions to speed up restore for a Windows server
Tom,

It wasn't a study, but a series of observations based on some testing that I did. I've maintained that one can create somewhere between 50K and 100K files/hour on Windows. Looking at your restore time of 60 hours, I'm thinking you're in the 50K range. I think that the latest version of Windows 2003 R2 may well run faster than that on fast hardware, but I haven't really done enough new testing to know. But your example furthers my assertion.

Image restore would help. You could probably get more than 200 files from the image and maybe 100K from traditional backup. And all the directories (or at least most) will be created. And we do know that the directory does not have to exist before a file can be restored. If the client has a file to restore and the directory does not exist, the client will create the directory. If there is any specific directory information, such as an ACL, then that will get fixed up when the client finds the backup for that directory entry.

Kelly J. Lipp
VP Manufacturing & CTO
STORServer, Inc.
485-B Elkton Drive
Colorado Springs, CO 80907
719-266-8777
[EMAIL PROTECTED]

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kauffman, Tom
Sent: Tuesday, August 28, 2007 11:57 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Looking for suggestions to speed up restore for a Windows server

Oh, the bottleneck is definitely file create. The top three directories (drive letters):

Z -- userhome -- 764,184 files, 60,281 directories
Y -- 'data' -- 636,514 files, 47,144 directories
W -- 'engineering' -- 745,976 files, 134,863 directories

The TSM server is spending all its time in SENDW, except for the roughly 2 hours (over the course of 60) that it was in mediaw waiting to get to the directory structure on the other tape pool. And I've got some ideas from Richard that will cut that right out.

I seem to recall someone actually running a study on restore performance vs file count; I'm trying to find it in the mail archives.

Maybe an image backup would help -- this is an active/passive Windows cluster and 'offline' is not an available backup option. Can I get away with an online image backup?

Also -- we restore to unlike hardware at the hot site (install Win2003 server, install TSM client, restore) -- would this be an issue for an image restore?

Thanks -- Tom

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kelly Lipp
Sent: Monday, August 27, 2007 5:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Looking for suggestions to speed up restore for a Windows server

How about periodic image backups of the file server volumes? Couple that with daily traditional TSM backups and perhaps you have something that works out better at the DR site. The problem is as you described it: lots of files to create. Did you observe that you were pecking through tapes, or was the bottleneck at the file create level on the Windows box? Or could you really tell?

Even if you create another pool for the directory data (which is easy to implement), you would still have that stuff on many different tapes. What about a completely new storage pool hierarchy for that one client? And then aggressively reclaim the DR pool to keep the number of tapes at a very small number.

I'd really like to know where the bottleneck really was. If it's file create time on the client itself, speeding up other things won't help. If that's the case, then I like the periodic image backup notion. Even if you did this once a month, the number of files that you would restore would be fairly small compared to the overall file server. And the TSM client does this for you automagically, so the restore isn't hard. And this also brings up the fact that a restore of this nature in a non-DR situation probably isn't much better!

Thanks,

Kelly

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kauffman, Tom
Sent: Monday, August 27, 2007 12:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: [ADSM-L] Looking for suggestions to speed up restore for a Windows server

We had our fall D/R hotsite test last week and all went well -- except for the recovery of our primary Windows 2003 file sharing system. It just takes WAY too long. Part of the problem is the sheer number of files/directories per drive -- I'm working with the Intel/Windows admin group to try some changes when we swap this system out in November. Part of the problem is that the directory structure is scattered over a mass of other backups.

I'm looking for suggestions on this. The system is co-located by drive, but only for five of the nine logical drives on the system. I may have to bite the bullet and run all nine logical drives through co-location.

Is there any way to force the directory structure for a given drive to the same management class/storage pool as the data? I'm thinking I may have finally come up with a use for a second domain, with the default management class being the one that does co-location by drive. If I go this route -- how do I migrate all of the current data? Export/Import? How do I clean up the off-site copies? Delete volume/backup storage pool?

I'm on TSM Server 5.3.2.0, with a 5.3 (not sure of exact level) client.

TIA

Tom Kauffman
NIBCO, Inc
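Kelly's 50K files/hour ballpark can be checked against Tom's numbers. Summing just the three drives Tom listed (the other six logical drives would push the actual rate somewhat higher):

```python
# Files and directories on the three largest drives, from Tom's message above
drives = {
    "Z (userhome)":    (764_184,  60_281),
    "Y (data)":        (636_514,  47_144),
    "W (engineering)": (745_976, 134_863),
}

objects = sum(files + dirs for files, dirs in drives.values())
rate = objects / 60  # the restore took roughly 60 hours
print(f"{objects:,} objects, ~{rate:,.0f} objects/hour")
# 2,388,962 objects, ~39,816 objects/hour -- consistent with the 50K range
```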
DP for Mail/Snapmanager
We have just started on a conversion of Novell GroupWise to Microsoft Exchange 2007 (don't ask why 2007) and are trying to figure out how to implement a backup solution using DP for Mail and TSM. The Exchange system will be running on NetApp, and the consultant has proposed using SnapManager for Exchange to produce snapshots every 4 hours. The proposed solution has the last of these being 'verified' and mounted to a second server, which will do the backup to TSM daily. Apparently this 'verification' also involves resetting the logs.

I pointed out to them that this would essentially be a full backup to TSM daily, eventually amounting to 3-4TB of data daily, which is completely unacceptable in our environment. We currently are doing incrementals of GWCopied POs in GroupWise, and the volume is only about 50-100GB/day. It seems there should be some way to do, say, weekly fulls with logs in between, which they were going to try to come up with.

My limited understanding of DP for Mail was that it could manage the snapshots of databases/logs and allow a full/incremental setup. Does anyone have a similar setup (Exchange on NetApp) and/or a simple explanation of how SnapManager for Exchange and Data Protection for Mail work together?

Thanks

Sam Sheppard
San Diego Data Processing Corp.
(858)-581-9668
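The volume argument is simple arithmetic. Using the figures above, 3-4 TB per 'verified' snapshot, and assuming the Exchange log volume lands near the current 50-100 GB/day incremental rate (an assumption, since the actual Exchange log volume hasn't been measured):

```python
full_tb = 3.5   # midpoint of the 3-4 TB daily "verified" snapshot
logs_tb = 0.075 # assumed midpoint of 50-100 GB/day of logs

daily_fulls_per_week = 7 * full_tb            # the consultant's proposal
weekly_full_plus_logs = full_tb + 6 * logs_tb # one full, six days of logs

print(daily_fulls_per_week, weekly_full_plus_logs)  # 24.5 vs 3.95 TB/week
```

Roughly a 6:1 difference in weekly data sent to TSM, which is why a weekly-full-plus-logs scheme is worth pushing the consultant on.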
Re: Looking for suggestions to speed up restore for a Windows server
FWIW, I had a problem the other year where the TSM server was in SENDW for 30-40 seconds of every minute during a restore, and it had to do with the filesystem attempting to check quotas for the 125k users in the filesystem. Disabling quota checks helped that one go much faster.

-Jonathan

Kauffman, Tom said the following on 08/28/2007 01:57 PM:

> Oh the bottleneck is definitely file create. The top three directories
> (drive letters):
>
> Z -- userhome -- 764,184 files, 60,281 directories
> Y -- 'data' -- 636,514 files, 47,144 directories
> W -- 'engineering' -- 745,976 files, 134,863 directories
>
> The TSM server is spending all its time in SENDW, except for the roughly
> 2 hours (over the course of 60) that it was in mediaw waiting to get to
> the directory structure on the other tape pool. And I've got some ideas
> from Richard that will cut that right out.
Re: Packaging issues with TDP for Oracle on Linux
I also had this problem, and as I remember, I renamed the .bin file to .jar and then used an archiver to look into the file. Somewhere deep in a strangely named directory I found... two familiar TDPO RPMs.

Regards,
Tomasz Hubicki

-----Original Message-----
Subject: [ADSM-L] Packaging issues with TDP for Oracle on Linux
From: Zoltan Forray/AC/VCU <[EMAIL PROTECTED]>
To: ADSM-L@VM.MARIST.EDU
Date: 2007-08-28 17:26

> This is mostly aimed at Del, the IBM TDP guru, but all help is
> appreciated.
>
> We are having fits trying to install the 5.4 release of the TDP for Oracle
> on Linux, due to it coming in a ".bin", Java-based installer rather than a
> standard rpm.
>
> The boxes we are trying to install this on do not have Java installed.
> Even when we installed Java, we still couldn't get it to run. It keeps
> hanging.
>
> We have had to drop back to the older version of the TDP.
>
> How can we request and/or get a version of the Oracle TDP package (both
> x86 and x86_64) in rpm format?
>
> Or do you have some way we can extract the needed parts from the .bin
> file?
>
> Zoltan Forray
> Virginia Commonwealth University
> Office of Technology Services
> University Computing Center
> e-mail: [EMAIL PROTECTED]
> voice: 804-828-4807

--
Tomasz Hubicki
Centrum Komputerowe ZETO S.A., TSS Technical Support
Narutowicza 136, 90-146 Łódź
tel: +48 42 6756356, mobile: +48 508002020XX
http://www.ckzeto.com.pl
Re: Looking for suggestions to speed up restore for a Windows server
Oh, the bottleneck is definitely file create. The top three directories (drive letters):

Z -- userhome -- 764,184 files, 60,281 directories
Y -- 'data' -- 636,514 files, 47,144 directories
W -- 'engineering' -- 745,976 files, 134,863 directories

The TSM server is spending all its time in SENDW, except for the roughly 2 hours (over the course of 60) that it was in mediaw waiting to get to the directory structure on the other tape pool. And I've got some ideas from Richard that will cut that right out.

I seem to recall someone actually running a study on restore performance vs file count; I'm trying to find it in the mail archives.

Maybe an image backup would help -- this is an active/passive Windows cluster and 'offline' is not an available backup option. Can I get away with an online image backup?

Also -- we restore to unlike hardware at the hot site (install Win2003 server, install TSM client, restore) -- would this be an issue for an image restore?

Thanks -- Tom

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kelly Lipp
Sent: Monday, August 27, 2007 5:40 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Looking for suggestions to speed up restore for a Windows server

CONFIDENTIALITY NOTICE: This email and any attachments are for the exclusive and confidential use of the intended recipient. If you are not the intended recipient, please do not read, distribute or take action in reliance upon this message. If you have received this in error, please notify us immediately by return email and promptly delete this message and its attachments from your computer system. We do not waive attorney-client or work product privilege by the transmission of this message.
Re: Data Deduplication
Hi,

Jon Evans wrote:
> Dirk
>
> I also tried Data Domain and was not impressed. I now use Diligent's
> ProtecTier and it's far more impressive. It's scalable, reasonably priced,
> achieves throughput of 200 MB per second and better, and factoring ratios
> of over 10 to 1.

We mainly back up normal files and only use 3 backup versions, so that the compression will not be more than 3:1 or 5:1. The best results can be achieved with databases and application data like Exchange. That's what the people from DataDomain said. I'm just running another test with MySQL and Domino data. Let's wait and see :-)

--
Regards,

Dirk Kastens
Universitaet Osnabrueck, Rechenzentrum (Computer Center)
Albrechtstr. 28, 49069 Osnabrueck, Germany
Tel.: +49-541-969-2347, FAX: -2470
Re: delete volhist dbb
> The switch seems to have created dbb series orphans.
> Try adding DEVclass=__ to the delete.

Yeah, I tried this yesterday but am getting an "Invalid device class" message. The weird thing is I can mount those tapes and move the data to the new tape device, so I'm a bit confused as to the message.

Geoff Gill
TSM Administrator
PeopleSoft Sr. Systems Administrator
SAIC
M/S-G1b
(858)826-4062
Email: [EMAIL PROTECTED]
Packaging issues with TDP for Oracle on Linux
This is mostly aimed at Del, the IBM TDP guru, but all help is appreciated.

We are having fits trying to install the 5.4 release of the TDP for Oracle on Linux, due to it coming in a ".bin", Java-based installer rather than a standard rpm.

The boxes we are trying to install this on do not have Java installed. Even when we installed Java, we still couldn't get it to run. It keeps hanging.

We have had to drop back to the older version of the TDP.

How can we request and/or get a version of the Oracle TDP package (both x86 and x86_64) in rpm format?

Or do you have some way we can extract the needed parts from the .bin file?

Zoltan Forray
Virginia Commonwealth University
Office of Technology Services
University Computing Center
e-mail: [EMAIL PROTECTED]
voice: 804-828-4807
Re: Data Deduplication
I have now verified with two other de-dupe vendors (FalconStor & SEPATON) that Oracle multiplexing is not an issue for them. (I also have a question in to Diligent to verify that this accurately reflects the way their product works. They said they would get back to me today.)

Having said that, I think this does present something to test with any de-dupe product you are considering. I knew that multiplexing in NW and NBU might be an issue for some de-dupe products, but I never thought about multiplexing in Oracle being an issue. I wonder what other apps might munge data together like this in their backup streams...

---
W. Curtis Preston
Backup Blog @ www.backupcentral.com
VP Data Protection, GlassHouse Technologies

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Charles A Hart
Sent: Monday, August 27, 2007 10:53 AM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Deduplication

According to Diligent, when RMAN uses multiplexing, it intermingles the data from each RMAN channel, so the data blocks will be different every time; the blocks differ, similar to multiplexing with NetBackup... I'm not an RMAN expert, just trusting what the vendor is stating. The following link seems to match what we are being told (look for the multiplexing section):

http://download.oracle.com/docs/cd/B19306_01/backup.102/b14191/rcmconc1002.htm

Is there an RMAN expert in the house? Can someone confirm this info?

Charles Hart
UHT - Data Protection
(763)744-2263
Sharepoint: http://unitedteams.uhc.com/uht/EnterpriseStorage/DataProtection/default.aspx

Curtis Preston <[EMAIL PROTECTED]> wrote on 08/27/2007 11:40 AM:

> 3) Oracle Specific
> Do not use RMAN's multiplexing (RMAN will combine 4 channels together and
> the backup data then will be unique every time, thus not allowing for
> de-duping)
> Use the File Seq=1 (then run multiple channels)

I don't see how this would affect de-duplication if your de-dupe product knows what it's doing. Every block coming into the device should be compared to every other block ever seen by the device. So combining multiple files together using Oracle multiplexing shouldn't affect de-dupe. Did you test this, or see it in the docs somewhere? Was this true for multiple de-dupe vendors, or just the one you chose?

This e-mail, including attachments, may include confidential and/or proprietary information, and may be used only by the person or entity to which it is addressed. If the reader of this e-mail is not the intended recipient or his or her authorized agent, the reader is hereby notified that any dissemination, distribution or copying of this e-mail is prohibited. If you have received this e-mail in error, please notify the sender by replying to this message and delete this e-mail immediately.
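The behavior Charles describes is easy to simulate: if the interleave order of channel blocks varies between runs, a fixed-size chunker sees almost entirely new chunks each run, even though the underlying datafiles are identical. This is a toy sketch, not any vendor's actual algorithm; real appliances use content-defined chunking, which is presumably how FalconStor and SEPATON cope:

```python
import hashlib, random

CHUNK = 4096  # de-dupe chunk size
BLOCK = 1024  # multiplexing interleave granularity

def chunk_hashes(stream: bytes) -> set:
    return {hashlib.sha256(stream[i:i + CHUNK]).digest()
            for i in range(0, len(stream), CHUNK)}

data_rng = random.Random(0)
channels = [data_rng.randbytes(64 * BLOCK) for _ in range(4)]  # 4 RMAN-style channels

def multiplexed_backup(run_seed: int) -> bytes:
    """Interleave blocks from all channels; the order varies from run to run."""
    order = [(c, off) for c in range(4) for off in range(0, 64 * BLOCK, BLOCK)]
    random.Random(run_seed).shuffle(order)
    return b"".join(channels[c][off:off + BLOCK] for c, off in order)

run1 = chunk_hashes(multiplexed_backup(1))
run2 = chunk_hashes(multiplexed_backup(2))
print(len(run1 & run2), len(run2))  # near-zero overlap despite identical source data

# With one file per stream (no interleaving), the byte stream is identical
# every run, so every chunk repeats:
sequential = b"".join(channels)
print(len(chunk_hashes(sequential) & chunk_hashes(sequential)))  # full overlap
```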
AW: Looking for suggestions to speed up restore for a Windows server
GREAT bird's-eye view of approaching a problem! I *do* know that prior to solving a problem, the problem per se has to be checked, but I often forget to do that. You did not.

Juraj

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Schaub, Steve
Sent: Tuesday, August 28, 2007 1:20 PM
To: ADSM-L@VM.MARIST.EDU
Subject: Re: Looking for suggestions to speed up restore for a Windows server

> Tom,
>
> Having just gone through a similar scenario 2 weeks ago, here was my very
> non-technical fix:
>
> (me) "Hello, end-user? I'm not going to be able to get your 800GB of data
> restored in 2 hours like you want. Care to narrow down the restore to what
> you really need?"
> (end-user) "Oh. Well, we really only need the 10GB of data in the and
> directories to run the important stuff."
> (me) "Ok, done."
>
> Maybe this won't apply to you, in which case the monthly image backup
> seems like a good suggestion.
>
> Good luck,
> Steve Schaub

This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden.
Re: Data Deduplication
Dirk,

I also tried Data Domain and was not impressed. I now use Diligent's ProtecTier and it's far more impressive. It's scalable, reasonably priced, achieves throughput of 200 MB per second and better, and factoring ratios of over 10 to 1.

Regards,
Jon

-----Original Message-----
From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Dirk Kastens
Sent: 27 August 2007 08:31
To: ADSM-L@VM.MARIST.EDU
Subject: Re: [ADSM-L] Data Deduplication

Hi,

> Writing a de-dupe backup product isn't easy. EMC bought Avamar and
> Symantec bought Data Center Technologies to get their respective
> products. I don't know of any other de-dupe companies for IBM to
> acquire, so they'll have to write their own. That may take them a bit
> longer.

We're just testing a deduplication disk array from DataDomain with TSM. The compression ratio is much less than promised by the sales people. During the last 10 days of incremental backups we only achieved a ratio of 2.6:1. The disk array is very expensive, and for the money you can buy more disks than you need without compression.

--
Regards,

Dirk Kastens
Universitaet Osnabrueck, Rechenzentrum (Computer Center)
Albrechtstr. 28, 49069 Osnabrueck, Germany
Tel.: +49-541-969-2347, FAX: -2470

This e-mail, including any attached files, may contain confidential and privileged information for the sole use of the intended recipient. Any review, use, distribution, or disclosure by others is strictly prohibited. If you are not the intended recipient (or authorized to receive information for the intended recipient), please contact the sender by reply e-mail and delete all copies of this message.
Re: delete volhist dbb
On Aug 27, 2007, at 12:22 PM, Gill, Geoffrey L. wrote:

> About a month ago I changed the db backup to use a new device class/tape
> type and those are expiring as they should, but the db backups that were
> done on a different device class before the change are still there (looks
> like 5 of them). Not sure why that would be an issue or if there is
> something I need to specifically code in the command. Any suggestion
> would be appreciated. We run this daily.
>
> del volhistory todate=today-5 type=dbb

Geoff - The switch seems to have created dbb series orphans. Try adding DEVclass=__ to the delete.

Richard Sims
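Richard's suggestion spelled out as admin commands: the device class name below is a placeholder for whatever class the old DBB tapes were written with. It's worth confirming that class still exists before the delete, since the "Invalid device class" error Geoff reports elsewhere in this thread suggests the old class may already have been removed.

```
query volhist type=dbb
query devclass
delete volhist todate=today-5 type=dbb devclass=OLD_DBB_CLASS
```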
Re: Looking for suggestions to speed up restore for a Windows server
Tom, Having just gone through a similar scenario 2 weeks ago, here was my very non-technical fix: (me) "Hello, end-user? I'm not going to be able to get your 800GB of data restored in 2 hours like you want. Care to narrow down the restore to what you really need?" (end-user) "Oh. Well, we really only need the 10GB of data in the and directories to run the important stuff." (me) "Ok, done." Maybe this wont apply to you, in which case the monthly image backup seems like a good suggestion. Good luck, Steve Schaub Systems Engineer, WNI BlueCross BlueShield of Tennessee 423-535-6574 (desk) 423-785-7347 (cell) ***public*** -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kauffman, Tom Sent: Monday, August 27, 2007 2:40 PM To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Looking for suggestions to speed up restore for a Windows server We had our fall D/R hotsite test last week and all went well -- except for the recovery of our primary Windows 2003 file sharing system. It just takes WAY too long. Part of the problem is the sheer number of files/directories per drive -- I'm working with the Intel/Windows admin group to try some changes when we swap this system out in November. Part of the problem is that the directory structure is scattered over a mass of other backups. I'm looking for suggestions on this. The system is co-located by drive, but only for five of the nine logical drives on the system. I may have to bite the bullet and run all nine logical drives through co-location. Is there any way to force the directory structure for a given drive to the same management class/storage pool as the data? I'm thinking I may have finally come up with a use for a second domain, with the default management class being the one that does co-location by drive. If I go this route -- how do I migrate all of the current data? Export/Import? How do I clean up the off-site copies? Delete volume/backup storage pool? 
I'm on TSM Server 5.3.2.0, with a 5.3 (not sure of exact level) client. TIA Tom Kauffman NIBCO, Inc CONFIDENTIALITY NOTICE: This email and any attachments are for the exclusive and confidential use of the intended recipient. If you are not the intended recipient, please do not read, distribute or take action in reliance upon this message. If you have received this in error, please notify us immediately by return email and promptly delete this message and its attachments from your computer system. We do not waive attorney-client or work product privilege by the transmission of this message. Please see the following link for the BlueCross BlueShield of Tennessee E-mail disclaimer: http://www.bcbst.com/email_disclaimer.shtm
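On the question of forcing directory entries into the same management class/storage pool as the data: the TSM client's DIRMC option binds all directory objects to a named management class, whose backup copy group can point at a disk-based pool so directory metadata never ends up scattered across other clients' tapes. A minimal sketch of that approach; the class and pool names (DIRCLASS, DIRPOOL) and the STANDARD domain/policy set are placeholders, and the exact syntax should be checked against the 5.3 manuals:

```
* dsm.opt on the client: bind all directory objects to a dedicated class
DIRMC DIRCLASS

* dsmadmc on the server: define the class, point its backup copy group
* at a disk-based pool, and re-activate the policy set
DEFINE MGMTCLASS STANDARD STANDARD DIRCLASS
DEFINE COPYGROUP STANDARD STANDARD DIRCLASS TYPE=BACKUP DESTINATION=DIRPOOL
ACTIVATE POLICYSET STANDARD STANDARD
```

This sidesteps the second-domain workaround; note it only affects directory objects backed up from then on, so existing directory backups stay where they are until they expire.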
Re: TSM install on ESX Linux help?
Thanks to all that responded.
Re: Looking for suggestions to speed up restore for a Windows server
I can second Kelly. On my old file server I had slow restores because of the many files to create, even though my directories were kept in a TSM disk storage pool. The bottleneck was the file creation rate on the file server. Monthly image backups helped greatly. Best Juraj -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kelly Lipp Sent: Monday, August 27, 2007 23:40 To: ADSM-L@VM.MARIST.EDU Subject: Re: Looking for suggestions to speed up restore for a Windows server How about periodic image backups of the file server volumes? Couple that with daily traditional TSM backups and perhaps you have something that works out better at the DR site. The problem is as you described it: lots of files to create. Did you observe that you were pecking through tapes, or was the bottleneck at the file create level on the Windows box? Or could you really tell? Even if you create another pool for the directory data (which is easy to implement), you would still have that stuff on many different tapes. What about a completely new storage pool hierarchy for that one client? And then aggressively reclaim the DR pool to keep the number of tapes at a very small number. I'd really like to know where the bottleneck really was. If it's file create time on the client itself, speeding up other things won't help. If that's the case, then I like the image backup notion periodically. Even if you did this once/month, the number of files that you would restore would be fairly small compared to the overall file server. And the TSM client does this for you automagically, so the restore isn't hard. And this also brings up the fact that a restore of this nature in a non-DR situation probably isn't much better! Thanks, Kelly Kelly J. Lipp VP Manufacturing & CTO STORServer, Inc. 
485-B Elkton Drive Colorado Springs, CO 80907 719-266-8777 [EMAIL PROTECTED] -Original Message- From: ADSM: Dist Stor Manager [mailto:[EMAIL PROTECTED] On Behalf Of Kauffman, Tom Sent: Monday, August 27, 2007 12:40 PM To: ADSM-L@VM.MARIST.EDU Subject: [ADSM-L] Looking for suggestions to speed up restore for a Windows server
This e-mail may contain confidential and/or privileged information. If you are not the intended recipient (or have received this e-mail in error) please notify the sender immediately and destroy this e-mail. Any unauthorised copying, disclosure or distribution of the material in this e-mail is strictly forbidden.
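Kelly's question about whether the bottleneck was file creation can be sanity-checked with quick arithmetic: using the file and directory counts Tom posted for the Z, Y, and W drives, and the 50K-100K objects/hour Windows create rate cited earlier in the thread, the predicted wall-clock time lands right around the roughly 60 hours observed. A rough back-of-envelope sketch (the rate range is Kelly's estimate, not a measured figure for this server):

```python
# Estimate restore time for a file-create-bound restore.
files = 764_184 + 636_514 + 745_976   # files on drives Z, Y, W (Tom's counts)
dirs = 60_281 + 47_144 + 134_863      # directories must be created too
objects = files + dirs                # 2,388,962 objects in total

for rate in (50_000, 100_000):        # objects created per hour (Kelly's range)
    print(f"{rate:>7}/hr -> {objects / rate:.1f} hours")
# ->  50000/hr -> 47.8 hours
# -> 100000/hr -> 23.9 hours
```

At the slow end of the range that is most of the 60 hours before any tape mount or media waits are counted, which supports the view that speeding up the server side alone won't help much.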