Thanks for the hint. Using Google is great (and I do), but it doesn't help much if you don't have a clue what to look for. It's like a dictionary: you need some general idea of how a word is spelled before you can look it up ;-)

I did find a /sys/block/emcpowerb/queue/nr_requests file with 128 in it.
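In case it helps anyone following along, here is a minimal sketch of how one might inspect and tentatively raise that value. The emcpowerb device name is the one from this thread; the value 512 is only an illustrative guess, not a tested recommendation:

    # Show the current request-queue depth for the PowerPath pseudo-device
    cat /sys/block/emcpowerb/queue/nr_requests

    # Also worth a look: which I/O scheduler the device is using
    cat /sys/block/emcpowerb/queue/scheduler

    # Tentatively raise the queue depth (as root); 512 is just an example
    echo 512 > /sys/block/emcpowerb/queue/nr_requests

The change does not survive a reboot; on RedHat 5 one could add the echo line to /etc/rc.local once a value has proven itself.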
From: Remco Post <r.p...@plcs.nl>
To: ADSM-L@VM.MARIST.EDU
Date: 10/21/2010 02:47 PM
Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage
Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>

Hi all,

YMMV, I've really never played with these, but check
/sys/block/sdX/queue/nr_requests and some other files in that dir.

Oh, BTW, Google is your friend ;-)

On 21 okt 2010, at 19:38, Zoltan Forray/AC/VCU wrote:

> This is RedHat Linux 5.5
>
> From: Remco Post <r.p...@plcs.nl>
> To: ADSM-L@VM.MARIST.EDU
> Date: 10/21/2010 01:24 PM
> Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage
> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>
> For AIX:
>
> lsattr -El hdiskX -> look for the queue_depth field; values between 20 and
> 64 make sense, 128 in some extreme cases.
> chdev -l hdiskX -a queue_depth=Y (but only if the vg is off-line)
>
> I found that some drives only support a queue_depth of 1... in that case,
> you've found your bottleneck.
>
> On 21 okt 2010, at 18:57, Zoltan Forray/AC/VCU wrote:
>
>> In reference to these recommendations, this is what one of my SAN folks
>> said:
>>
>> If "increasing the queue depth for the individual disks" is something you
>> can do on a CLARiiON, it's not something I'm familiar with. On the HBA
>> (and if you can), you would do that from the host side (like with
>> SANsurfer for QLogic HBAs).
>>
>> I have no idea what he might be referring to with "EMC voodoo
>> application".
>>
>> "iostat/vmstat" are unix host utilities.
>>
>> Each of the two LUNs is spread out over 7 disks. The 2 RAID groups and
>> the enclosure they are in are dedicated to Tivoli.
>>
>> I've seen some references to using lots of smaller LUNs rather than a few
>> big ones. You have 2 5.5TB LUNs now. We can try splitting each of those
>> into 10-12 smaller LUNs.
>>
>> Zoltan Forray
>> TSM Software & Hardware Administrator
>> Virginia Commonwealth University
>> UCC/Office of Technology Services
>> zfor...@vcu.edu - 804-828-4807
>>
>> From: "Strand, Neil B." <nbstr...@leggmason.com>
>> To: ADSM-L@VM.MARIST.EDU
>> Date: 10/19/2010 01:50 PM
>> Subject: Re: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage
>> Sent by: "ADSM: Dist Stor Manager" <ADSM-L@VM.MARIST.EDU>
>>
>> Zoltan,
>> You may need to increase the queue depth for the individual disks
>> and/or the HBA attached to the disks.
>> Monitor both the server (iostat/vmstat) and the storage (EMC voodoo
>> application) for latency and compare the results for consistency. You
>> may need to adjust the striping of your logical LUNs on the storage. I
>> have observed serious performance degradation on an older IBM ESS simply
>> because the logical volumes were created on a single SSA loop rather than
>> spread across the entire set of disks.
>>
>> Cheers,
>> Neil Strand
>> Storage Engineer - Legg Mason
>> Baltimore, MD.
>> (410) 580-7491
>> Whatever you can do or believe you can, begin it.
>> Boldness has genius, power and magic.
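To make the AIX steps Remco describes above concrete, here is a rough sketch. The disk name hdisk3 and volume group datavg are hypothetical, and queue_depth=32 is just an example within the 20-64 range he mentions:

    # Current queue depth, and the range of values the drive supports
    lsattr -El hdisk3 | grep queue_depth
    lsattr -Rl hdisk3 -a queue_depth

    # The vg must be off-line: unmount its filesystems, then vary it off
    varyoffvg datavg

    # Set the new queue depth (32 is an example, not a recommendation)
    chdev -l hdisk3 -a queue_depth=32

    # Bring the vg back online and verify the new setting took
    varyonvg datavg
    lsattr -El hdisk3 | grep queue_depth

If lsattr -Rl reports that only a queue_depth of 1 is supported, then, as Remco says, you've found your bottleneck.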
>> -----Original Message-----
>> From: ADSM: Dist Stor Manager [mailto:ads...@vm.marist.edu] On Behalf Of Zoltan Forray/AC/VCU
>> Sent: Tuesday, October 19, 2010 9:15 AM
>> To: ADSM-L@VM.MARIST.EDU
>> Subject: [ADSM-L] Lousy performance on new 6.2.1.1 server with SAN/FILEDEVCLASS storage
>>
>> Now that I have ventured into new territory with this new server (Linux
>> 6.2.1.1), I am experiencing terrible performance when it comes to moving
>> data from disk (FILEDEVCLASS on EMC/SAN storage) compared with my other
>> 6.1 and 5.5 servers.
>>
>> With the server doing nothing but migrating data from this SAN-based
>> stgpool to TS1130 tape, I am seeing roughly 700GB moved in a 12-hour
>> period. On my other, internal-disk-based TSM servers, I move multiple
>> terabytes per day/24 hours.
>>
>> So, where should I focus on why this is so slow? Is it because I am
>> using SAN storage? How about the FILEDEVCLASS vs. fixed, pre-formatted
>> volumes (like every other server is using)?
>>
>> Or is this normal? If it is, I am in for some serious problems. One of
>> these servers is expected to replace an existing 5.5 server that
>> processes 20TB+ of backups per week (no, I cannot go straight to tape
>> due to the type of backups being performed).
>>
>> Suggestions? Thoughts?
>>
>> Zoltan Forray
>> TSM Software & Hardware Administrator
>> Virginia Commonwealth University
>> UCC/Office of Technology Services
>> zfor...@vcu.edu - 804-828-4807
>
> --
> Met vriendelijke groeten/Kind Regards,
>
> Remco Post
> r.p...@plcs.nl
> +31 6 248 21 622

--
Met vriendelijke groeten/Kind Regards,

Remco Post
r.p...@plcs.nl
+31 6 248 21 622
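For anyone who lands on this thread with the same symptoms, here is a minimal sketch of the host-side monitoring Neil suggests, using the standard sysstat tools. Device names will vary; emcpowerb is the one from this thread, and if the PowerPath pseudo-device doesn't report statistics on your kernel, watch its underlying sdX paths instead:

    # Extended per-device stats every 5 seconds during a migration;
    # watch await (average I/O latency in ms) and %util (device saturation)
    iostat -xk emcpowerb 5

    # System-wide view: blocked processes (b column) and I/O wait (wa)
    vmstat 5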