Dear all,

We are experiencing performance problems with the TSM Storage Agent for Solaris, regardless of whether we are doing restores or backups. The problem manifests itself mainly when restoring or backing up data with DB2, but I get the same poor performance when sending gigabyte-sized files from the file system.

Performance seems to be CPU bound, and each restore/backup session takes 100% of one CPU. So, on a 400 MHz machine I get around 10-15 MB/s LAN-free, and on the faster machines with 1200 MHz CPUs we're seeing speeds of around 20 MB/s. When we configure parallelism in the DB2 databases to use multiple sessions, we get 2 x 10-15 MB/s and two CPUs at 100%. truss says that almost all of this CPU time is spent in userland.

The native speed of the 9840C drives is 35 MB/s, and on AIX machines and Slowlaris machines with Oracle we see speeds of about 60 MB/s per session over the SAN.

At first I thought it could be the loopback interface, but I didn't see any performance gain when switching to shared memory. I have also tried all the performance recommendations from IBM. I am going to trace the storage agent tomorrow to see if I can shed some light on where all the CPU time is going.

On to my questions:

- Has anyone experienced the same extreme CPU load when using the storage agent on Solaris?
- Could it possibly be a patch-related problem, since the Solaris Oracle machines are more heavily patched than their DB2 counterparts?

The environment:

Server side:
- TSM server 5.2.3.2 on AIX 5.2.
- 16 StorageTek 9840C tape drives in PowderHorn libraries, using ACSLS.
- Everything is SAN-connected with Cisco directors.

Client side:
- Solaris 5.8, 64-bit kernel.
- Gresham EDT 6.4.3.0 used to connect to the ACSLS.
- Storage Agent 5.2.3.5 on Solaris 5.8.
- TSM client 5.2.3.5.
- A range of different Sun hardware: different machines, different HBAs (both SBus and PCI).

-David

--
David Hendén
Exist AB, Sweden
+46 70 3992759
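P.S. For anyone who wants to follow along with the tracing step: the sketch below is roughly what I plan to run tomorrow. It assumes the storage agent runs as a process named `dsmsta` (adjust to match your install) and uses only stock Solaris tools, so nothing here is TSM-specific.

```shell
#!/bin/sh
# Profile the storage agent's CPU usage on Solaris.
# Assumes the agent process is named "dsmsta"; adjust if yours differs.
AGENT=dsmsta
PID=`pgrep "$AGENT"`

if [ -n "$PID" ]; then
    # Attach and collect per-syscall counts plus usr/sys CPU totals
    # for roughly 30 seconds.
    truss -c -p "$PID" &
    TRUSS_PID=$!
    sleep 30
    kill "$TRUSS_PID"

    # Microstate accounting per LWP: the USR column shows how much of
    # the time really is userland, confirming (or not) the truss picture.
    prstat -mL -p "$PID" 5 6
else
    echo "no $AGENT process found"
fi
```

If the USR column sits near 100% while SYS stays low, the bottleneck is inside the agent itself rather than in the kernel or the HBA path, which would match what truss is already suggesting.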