Restore OS of AIX 4.3.3
Recently we lost both mirror disks on one of our RS/6000 AIX 4.3.3 systems. I installed the base OS and the TSM client, then restored selected files onto the new system. How can I restore the complete OS? Our TSM server is at level 4.1.4.

I have done a complete restore of an HP system by installing the base OS on the first disk, then installing the OS on the second disk. I then rebooted from the first disk, mounted the second disk, and restored the OS onto it; once the restore was complete, I rebooted from the second disk. Will this same procedure work on the AIX platform? I am told that on AIX, after restoring the OS onto a different hdisk, the system will not boot.

Thanks - Yahya Ilyas
Systems Programmer Sr, Systems Integration Management
Information Technology
Arizona State University, Tempe, AZ 85287-0101
[EMAIL PROTECTED]
Phone: (480) 965-4467
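For what it may be worth, the usual reason a rootvg restored onto a different hdisk will not boot on AIX is that the boot image and the firmware boot list still point at the old disk. A minimal sketch of the post-restore steps, assuming the restored disk is hdisk1 (the disk name is illustrative, not from the post):

```shell
# After restoring the OS files onto the second disk (assumed here to be
# hdisk1), and while still booted from the first disk or in maintenance
# mode, rebuild the boot image on the restored disk and point the
# firmware boot list at it.

bosboot -ad /dev/hdisk1      # recreate the boot logical volume on hdisk1
bootlist -m normal hdisk1    # make hdisk1 the normal-mode boot device
```

Without the bosboot/bootlist step, the restored disk holds the files but no usable boot image, which matches the "will not boot" behavior you were warned about.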
Re: TSM Server 4.2.1.9 on Sun Solaris 8 and more
I have a SUN 880 connected to an HDS 7700E disk array, using VxVM and VxFS on the db and log volumes. I use the mount parameter mincache=direct or unbuffered, and I must say the performance is good. I will test with raw soon to see how it works. Does anyone know how TSM handles the disk storage pool volumes? Are they cached by TSM, as the db and log are, or do they depend on the filesystem buffers?

-----Original Message-----
From: Seay, Paul [mailto:[EMAIL PROTECTED]]
Sent: 15 March 2002 22:28
To: [EMAIL PROTECTED]
Subject: Re: TSM Server 4.2.1.9 on Sun Solaris 8

I am not going to tell you which way to do it. But, basically, I would think you would want to use TSM buffers rather than Veritas FS caching, so I would go with raw if it will work. There is also the question of mirroring: your log and database will be faster or slower depending on your disk solution. If you are using a high-function disk subsystem with cache that has mirrored or RAID-5 disk in it, and you are comfortable that these do not fail from a hardware error, then you can avoid mirroring your data altogether. We use Shark. If you are using raw D1000 or A5000 disk, then you should spread the DB and log across the disks. In no case should you put the log and DB in the same file system or on the same raw disk. I would recommend keeping them off the same physical disk even in a high-function disk subsystem, if you can.

-----Original Message-----
From: Bruce Lowrie [mailto:[EMAIL PROTECTED]]
Sent: Friday, March 15, 2002 1:10 PM
To: [EMAIL PROTECTED]
Subject: TSM Server 4.2.1.9 on Sun Solaris 8

Hello all,

Running TSM 4.2.1.9 on a SUN E6000 with Solaris 8 in 64-bit mode. We run Veritas Volume Manager and File System as a disk manager.
We have two options for our TSM DB and LOG:

a) Use raw partitions (Veritas VM volumes)
b) Use file systems (Veritas FS)

FYI, our DB is about 20 GB in size and our LOG 1 GB. I believe that the benefit of using raw partitions is that it is faster (no FS overhead); the downside is that we are then unable to use the Unix filesystem caching in RAM. I would like to know which is the best solution (speed and reliability).

Bruce E. Lowrie
Sr. Systems Analyst
Information Technology Services - Storage, Output, Legacy
E-Mail: [EMAIL PROTECTED]
Voice: (989) 496-6404  Fax: (989) 496-6437
Post: 2200 W. Salzburg Rd., Mail: CO2111, Midland, MI 48686-0994
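For reference, a sketch of what the two options look like as TSM 4.x server commands. Everything here is illustrative, not from the posts: the VxVM device path, the file path, the admin credentials, and the 2000 MB size.

```shell
# Option (a): a raw VxVM volume as a TSM database volume.
# TSM does its own buffering here; no filesystem cache is involved.
dsmadmc -id=admin -password=secret \
  "define dbvolume /dev/vx/rdsk/tsmdg/tsmdb01"

# Option (b): a pre-formatted file on a Veritas filesystem.
dsmfmt -m -db /tsm/db/db01.dsm 2000        # format a 2000 MB file volume
dsmadmc -id=admin -password=secret \
  "define dbvolume /tsm/db/db01.dsm"

# Either way, new volumes add usable capacity only after an EXTEND DB.
```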
Filespace name is blank
Does anyone know why the filespace name is displayed as "..."?

Node Name   Filespace   FSID   Platform   Filespace   Is Filespace   Capacity   Pct
            Name                          Type        Unicode?           (MB)   Util
---------   ---------   ----   --------   ---------   ------------   --------   ----
ELVISDB     ...            1   WinNT      NTFS        Yes             4 060,1   66,6
ELVISDB     ...            2   WinNT      NTFS        Yes             8 667,9   10,0
ELVISDB     ...            3   WinNT      NTFS        Yes            21 930,0   10,8
ELVISDB     ...            4   WinNT      SYSTEM      Yes                 0,0    0,0

Christian Pallinder
Storage Specialist
Wineasy AB, Dalénum, Hus 112, SE-181 70 Lidingö
Phone: +46 8 563 110 00  Direct: +46 8 563 110 44
Cell: +46 701 880 044  Fax: +46 8 563 110 10
[EMAIL PROTECTED]
Re: Filespace name is blank
Hi Christian,

There is an APAR for this problem. Actually, there are several APARs (PQ56327, IC32554, IC32553, and IC32237), but I'd guess this one would do it. The problem has to do with Unicode conversion of filespace names.

APAR= PQ56327  SER= IN  INCORROUT  UNICODE FILESPACE NAME DISPLAYED AS ...
Status: CLOSED  Closed: 01/04/02

Apar Information:
RCOMP= 5698TSMVSTIV 390 STORAGE  RREL= R421
FCOMP= 5698TSMVSTIV 390 STORAGE  PFREL= F999  TREL= T
SRLS: NONE
Return Codes:
Applicable Component Level/SU: R421 PSY UP

Error Description:
The TSM server for Solaris may display "..." for some filespace names that should be converted to the current code page.

Note: It may not be possible to convert every filespace name to the current code page. If the filespace name is Unicode enabled, the name is converted to the server's code page for display. The result of the conversion for characters not supported by the current code page depends on the operating system. For names that TSM is able to partially convert, you may see question marks (??), blanks, or unprintable characters; these characters indicate to the administrator that files do exist. If the conversion is not successful, the name is displayed as "...". Conversion can fail if the string includes characters that are not available in the server code page, or if the server has a problem accessing system conversion routines.

The following steps can help to verify that the system conversion routines are functioning properly. A server trace during a query filespace with the unicode trace flag enabled may indicate problems with the conversion:

14:38:56.389 25psxpg.c(1231): Failure opening string conversion - source UTF-8 and target 8859-1.

The customer can verify that the system conversion routines are able to process the conversion with the following command:

iconv -f UTF-8 -t 8859-1 any_textfile

If the iconv command succeeds and the TSM server is not able to convert the filesystem name, the fix for this APAR should be applied.
This APAR is for Solaris only.

Local Fix: Apply patch 4.2.1.8 or higher, available through the Tivoli web page, to resolve this problem, or the fixing PTF when available.

Problem Summary:
* USERS AFFECTED: All TSM 4.2 servers
* PROBLEM DESCRIPTION: The TSM server may display "..." for some filespace names that should be converted to the current code page.
* RECOMMENDATION: Filespace names could not be converted to the current code page.

Temporary Fix:
Comments:
MODULES/MACROS: NONE
Problem Conclusion: Modified code so filespace names can be converted.

Best Regards

Daniel Sparrman
---
Daniel Sparrman
Exist i Stockholm AB
Bergkällavägen 31D
192 79 SOLLENTUNA
Switchboard: 08 - 754 98 00
Mobile: 070 - 399 27 51

Christian Pallinder
Sent by: ADSM: Dist Stor Manager [EMAIL PROTECTED]
2002-03-16 11:03
To: [EMAIL PROTECTED]
Subject: Filespace name is blank
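As a quick illustration of the conversion the APAR describes, the iconv check it suggests can be run against sample names. The ISO 8859-1 target matches the trace excerpt above; the two sample strings ("Lidingö" and a CJK name) are my own, chosen to show both outcomes:

```shell
# A name representable in the server code page (ISO 8859-1) converts cleanly:
printf 'Liding\303\266\n' | iconv -f UTF-8 -t ISO-8859-1        # succeeds

# A name with characters outside the code page makes iconv fail --
# the case where the TSM server falls back to displaying "...":
printf '\346\235\261\344\272\254\n' | iconv -f UTF-8 -t ISO-8859-1 \
  || echo 'conversion failed - TSM would show "..."'
```

If even the first command fails on your server, the system conversion routines themselves are broken; if it succeeds while TSM still shows "...", that is the situation the APAR fix addresses.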
Re: War stories: Restores 200GB?
There are several keys to speed in restoring a large number of files with TSM:

1. If using Windows NT/2000 or AIX, be sure to use DIRMC, storing the primary pool on disk, migrating to FILE on disk, then copy-pooling both (this avoids tape mounts for the directories not stored in the TSM db due to ACLs). I've seen *two* centralized ways to implement DIRMC: (1) using a client option set, or (2) establishing the DIRMC management class as the one with the longest retention (in each affected policy domain).

2. Restore the directories first, using -DIRSONLY (this minimizes NTFS db-insert thrashing).

3. Consider multiple, parallel restores of high-level directories. Despite potential contention for tapes in common, you want to keep the data flowing on at least one session to maximize restore speed.

4. Consider using classic restore rather than no-query restore. This will minimize tape mounts, as classic restore analyzes which files to request and has the server sort the tapes needed, though tape mounts may not be an issue with your high-performance configuration.

5. If you must use RAID-5, realize that you will spend TWO write cycles for every write; if using EMC RAID-S (or ESS), you may want to increase write cache to as large as allowed (or turn it off altogether). Using 9 or 15 physical disks will help.

A client of mine had a server disk failure just last weekend; it had local disk configured with RAID-5 (a hardware RAID controller attached to a Dell Win2000 server). After addressing items 1 to 3 above, we were able to saturate the 100 Mbps network, achieving 10-15 GB/hr for the entire restore. The only delays incurred were attributable to tape mounts: this customer had an over-committed silo, so tapes not in the silo had to be checked in on demand. 316 GB were restored in approximately 30 hours. Their data was stored under 10 high-level directories, so we ran two restore sessions in parallel (we only had two tape drives) and disabled other client schedules during this exercise.
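For item 1, a sketch of the client-option-set route. Every name here is invented for illustration (the DIRPOOL pool, DIRCLASS class, WINCLIENTS option set, and the STANDARD domain/policy set), and the copy-group retention values would need to match site policy:

```shell
# Server side: a management class whose backup copy group lands in a
# disk-based storage pool, so directory entries never wait on a tape mount.
dsmadmc -id=admin -password=secret "define stgpool dirpool disk"
dsmadmc -id=admin -password=secret \
  "define mgmtclass standard standard dirclass"
dsmadmc -id=admin -password=secret \
  "define copygroup standard standard dirclass type=backup \
   destination=dirpool verexists=nolimit retextra=nolimit retonly=nolimit"
dsmadmc -id=admin -password=secret "activate policyset standard standard"

# Push the DIRMC option to the clients centrally through an option set:
dsmadmc -id=admin -password=secret "define cloptset winclients"
dsmadmc -id=admin -password=secret \
  "define clientopt winclients dirmc dirclass"
```

The alternative in the post, giving the DIRMC class the longest retention in the domain, needs no client-side change at all, since the client binds directories to the longest-retention class by default.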
For your situation, 250 GB and millions of files, and assuming DIRMC (item 1 above), you should be able to see 5-10 GB/hr: 50 hours at 5 GB/hr, 25 hours at 10 GB/hr. So you are looking at two or three days, typically.

Large numbers of small files are the Achilles' heel of any file-based backup/restore operation, and restore is the slowest direction, since you are fighting the file system of the client OS. Because of the way file systems traverse directories and reorganize branches on the fly, it is important to minimize the re-org processing (in NTFS, by populating the branches with leaves AFTER first creating all the branches). We did some benchmarks and compared notes with IBM; at another client, we developed the basic expectation that 2-7 GB/hr was the standard for comparison purposes. You can exceed that number by observing the first three recommended configuration items above.

How to mitigate this: (a) use image backup (now available for Unix, soon to be available on Win2000) in concert with file-level progressive incremental; and (b) limit your file-server file systems to either 100 GB or X million files, then start a separate file system or server upon reaching that threshold. You need to test in your environment to determine what is the acceptable standard to implement.

Hope this helps.

Don France
Technical Architect - Tivoli Certified Consultant
Professional Association of Contract Employees (P.A.C.E.)
San Jose, CA
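The time estimate above is simple arithmetic; as a sketch, using the figures from the post:

```shell
# Back-of-envelope restore time: 250 GB at the 5-10 GB/hr range
# typical for millions of small files (with DIRMC and dirs-first in place).
total_gb=250
echo "at  5 GB/hr: $((total_gb / 5)) hours"    # 50 hours
echo "at 10 GB/hr: $((total_gb / 10)) hours"   # 25 hours
```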
Re: HELP! Miracle: Netware changed to NT, node can't access server anymore!
On Tue, 29 Jan 2002 15:43:16 +0100, it was written:

I have a weird problem: a Win2000 node accessed the TSM 4.2 server using the nodename of a NetWare 5 client. Now the NetWare client can't access the server anymore. The message "ANS1357S Session rejected: Downlevel client code version" is displayed. The Platform field in a q node changed from NetWare to WinNT. I think a similar problem is described in IC32139 on the Tivoli support page, but there is no fix for this. I have already opened a PMR - but maybe YOU can help me faster?

Here's what I bet happened. You had a node defined on your TSM server for a NetWare 5 box. (Call it GEORGE.) At some later time, you defined a Windows 2000 box as a TSM node using the same name (GEORGE). The Windows 2000 client accessed the TSM server using a version of the Windows client with a higher version number than the TSM client version used on the NetWare 5 box. If this is the problem, you will need to upgrade your NetWare box's TSM client to an equal or higher version number than the client on the Windows 2000 box. (We had this problem crop up at our place of business recently.)

--
Mark Stapleton ([EMAIL PROTECTED])
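One way to confirm this diagnosis before upgrading anything is to ask the server what platform and client level it last recorded for the node. GEORGE and the admin credentials are placeholders from the example above:

```shell
# Show the detailed node record, including the Platform and
# Client Version fields. If the Windows client last connected as
# GEORGE, both fields will reflect the Windows box, not the NetWare one.
dsmadmc -id=admin -password=secret "query node george format=detailed"
```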
Help with Netware Client
Hi to all,

I have a NetWare 5.1 client running Tivoli 4.2.1.29. When I load dsmc and run q mgm, I get:

Node Name: NOVADMINEW
Please enter your user id NOVADMINEW

I have to type NOVADMINEW to get the information. After quitting and coming back, same scenario. I have in my dsm.opt:

NODENAME NOVADMINEW
PASSWORDACCESS GENERATE
NWPWFILE YES

What is amazing, too, is that I have this client version on a similar NetWare box without any problem! Help will be very appreciated.

T.I.A. Regards,

Robert Ouzen
[EMAIL PROTECTED]
Re: Stop with DRM
On Tue, 29 Jan 2002 10:03:27 +0100, it was written:

If I make a backup of the TSM database, the database tape is moved to a DRM off-site state. Now I want to get rid of that feature.

Easy. If you run Q DRM, the default flag for source is DBB (full backups). If you don't want to send your full backups to off-site vaulting, run

Q DRM SOURCE=DBS

The only database backups that would then go to off-site vaulting would be database snapshots.

--
Mark Stapleton ([EMAIL PROTECTED])
Magstar 3494 Tape Library
Hi, I hope somebody can help me. I have a 3494 TL with four 3590B1A drives. Is it SAN-ready? Can I connect these tape drives (3590B1A) to the IBM SAN, or do I need to upgrade the drives to 3590E1A to provide SAN connectivity? Also, is this library usable only with ADSM/TSM software?

Thanks in advance,

Zosi Noriega
ADNOC-UAE