UNCLASSIFIED Performance tuning UV on Windows 2003

2004-04-27 Thread HENDERSON MICHAEL MR
Folks,


I notice that when I do two concurrent processes (like ANALYZE.FILE from one
window and SELECT from another window) on the same large (3GB,64-bit) file,
the 'Pages/sec' count in Win2K3 goes through the roof, even though the
'memory commit charge' is only 176MB out of 2465MB.

Maybe the 'Pages/sec' count is not measuring what I expect it to, but it
seems odd to me for the system to be paging (i.e. swapping program code
segments in and out of memory) in this situation.  I'd expect the disk
system to be getting a thrashing as the two processes want to read different
bits of the same file, but not paging.
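
(One thing worth checking before tuning anything: on Windows the Memory\Pages/sec
counter includes hard faults taken while reading cached or memory-mapped file data,
not just program code being swapped, so two processes scanning a 3GB file can push it
very high while the commit charge stays low. A quick way to see which it is - the
counter names are as on Windows Server 2003, the output file name is just an example:

   typeperf "\Memory\Pages/sec" "\Memory\Cache Faults/sec" "\Memory\Available MBytes" -si 5 -sc 120 -o uv_paging.csv

If Cache Faults/sec rises and falls with Pages/sec while Available MBytes stays high,
the counter is mostly reflecting file-cache reads rather than genuine memory pressure.)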

Can anyone tell me
1) Do I understand correctly what is happening?
   If not, what _is_ going on?

2) Are there any UV configuration tunables which could 
   improve the situation?  If so, which one(s)?

3) Are there any Windows configuration tunables which 
   could improve the situation?  If so, which one(s)?


Oh, it's UV 10.0.15, but I don't think that's terribly relevant


Thanks


Mike

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-20 Thread Foo Chia Teck
 
Hi all,

We have a 180-user licence; some users connect through a VB application (about 15 UVCS
connections), plus 20 UV/Net connections, about 60 telnet connections and 20 fixed tty dumb
terminal users.

Some users log in to UniVerse with the SAME user id - could that be causing the issue?

I have doubled the swap space from 4GB to 8GB, but I am still having the performance
degradation issue. Below is the swap space usage before and after the increase. The
users report that the system starts to slow down after 48 hours, so the MIS guys have to
restart UniVerse every 48 hours.

[ServerX1]:/ lsps -a
Page Space   Physical Volume   Volume Group   Size     %Used   Active   Auto   Type
hd6          hdisk0            rootvg         4096MB       1   yes      yes    lv


[ServerX1]:/ lsps -a
Page Space   Physical Volume   Volume Group   Size     %Used   Active   Auto   Type
paging00     hdisk0            rootvg         4096MB       1   yes      yes    lv
hd6          hdisk0            rootvg         4096MB       1   yes      yes    lv


The output above shows that the system is only using 1% of the swap space even though I
have increased it to 8GB. Is there any uvconfig parameter I need to set to tell UniVerse
that I now have 8GB of swap space?
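
(As far as I know there is no uvconfig parameter for total swap - AIX hands out paging
space to processes as it is needed. A quick way to see whether the box is actually
paging, as opposed to merely having the space allocated, is to watch the pi/po columns;
these are standard AIX commands, and the interval and count are arbitrary:

   lsps -s        # summary of paging space allocation and %used
   vmstat 5 12    # non-zero pi/po means real paging activity

If pi and po stay at or near zero, the slowdown is probably not swap-related.)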


tty:       tin       tout    avg-cpu:   % user   % sys   % idle   % iowait
          53.7     5451.7               24.9     75.1     0.0       0.0

Disks:        % tm_act     Kbps      tps     Kb_read   Kb_wrtn
hdisk1             0.0      0.0      0.0           0         0
hdisk0             0.0      0.0      0.0           0         0
hdisk3             0.0      0.0      0.0           0         0
hdisk2            27.9    116.4     35.8           4       113
hdisk5             0.0      0.0      0.0           0         0
hdisk4            28.9    112.4     34.8           0       113
cd0                0.0      0.0      0.0           0         0

 
Above is the output of iostat. hdisk2 and hdisk4 are configured as RAID5 with a 64K
stripe size and are mirrored to each other.

Please advise,
thank you






--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-19 Thread Karl L Pearson
Hmmm. Are you saying 'Ogres' are like onions?

On Fri, 2004-04-16 at 07:05, Scott Richardson wrote:
 Performance of UV applications on various Operating Systems
 is not rocket science. Perhaps better described as large, nasty
 tight onions that need peeling, one layer at a time, and
 understanding what each peeled layer is doing and why.
 Once this knowledge is acquired and understood, a plan can
 be built and executed to attack/resolve the problem.
 
 Are users logging out/off when they're done using the system,
 or when they've completed some large tasks or operations?
 How often is the system rebooted?
 RAID 5 file systems can slow down IO.
 We'll need specifics on file system setup and parameters.
 How many users? What are these users doing?
 Have you got everyone and their siblings all running SELECT and
 SORT operations all the time? Data Entry out the wazoo?
 
 How big are the files, and how are they sized? How frequently
 does data change in the files, (grow, shrink, etc...)
 
 How big is your /tmp file system, and what kind of file system,
 and where is it physically located?? Provide it it's own file system,
 on it's own disk or disk set, (i.e. not the same disks where other
 activity is going on).
 
 4GB of RAM, yet only 4 GB paging/swap space?
 Where is this swap paging space, (i.e. what disks?)
 
 topas may be fine for quick and dirty analysis and understanding,
 but using it extensively can help contribute to performance problems.
 
 You need to configure and tune the platform, the OS, the UV DB,
 the IO sub-system, the applications, the users, and the
 administration/operations, and then ensure they're all coordinated
 with each other, to maximize platform performance.
 
 To find, (and therefore address and resolve), the root causes of what
 is happening here, you need to profile the platform using something
 such as the DPMonitor, (extremely low-overhead monitoring Agent)
 and display/crunch the performance metrics on another platform,
 (i.e. a Windows Performance Explorer Console). Using this method,
 you'll be able to completely profile the entire platform, (OS and
 applications),
 around the clock, and then easily dial into specific timeframes where
 problems are occurring, and fully understand exactly what is happening
 and learn why it is happening, so it can be addressed and resolved,
 and measure the progress along the entire way.
 
 The DPMonitor is available with a free 10 day evaluation license where it
 will track system-wide performance metrics. Fully licensed version will
 track individual processes that you select, or all processes if you so
 desire. When you monitor all of the processes, you can quickly and
 easily identify processes deserving further analysis, and stop tracking
 processes that are not causing any problems. More information on the
 DPMonitor can be found at http://www.deltek.us and the DPMonitor
 can be downloaded right off the website. If you're short on memory,
 DPMonitor will allow you to see how much memory you will need to
 allow the system to run as fast as it can, given how you're running it.
 If you need tuning of OS or UV parameters, or other things that may be
 playing a contributing role, the DPMonitor will clearly point this out,
 graphically, so that anyone can plainly see what is happening.
 
 Once you make any changes, you'll be able to monitor, and measure,
 any differences, consistently, and prove whether or not you have
 improved, or harmed, your cause. Best of all, you'll be able to
 show, prove, and justify to management what you're doing, and
 why, and show them what it will take to get the problems addressed
 and resolved, positively, without question.
 
 Hope this helps.  I know the DPMonitor can and will help.
 I have used it personally, numerous times, to peel many a complex onion,
 understand what is exactly going on, find out why, and then put together
 and executed plans that have successfully addressed and resolved similar
 problems and streamlined operations moving forward saving many a
 business significant time, frustration, and money, and then ensured that
 any and all operations moving forward were done in a pro-active,
 knowing ahead of time manner, rather than fire-fighting problems on a
 continual basis. If you want something done, why not do it right, once?
 Stop beating your head against the onion wall! Work smarter!
 Let the DPMonitor be your detailed, EKG-like instrument to cut to
 the heart of your complex application server performance problems,
 identify them, and help you to resolve them, quickly and easily.
 
 
 Been there, done that.
 Many times over.
 
 Sincere Regards,
 Scott Richardson
 Senior Systems Engineer / Consultant
 Marlborough, MA 01752
 Email: [EMAIL PROTECTED]
 Web: http://home.comcast.net/~CheetahFTL/CC/CheetahFTL_1.htm
 eFax: 208-445-1259
 
 - Original Message - 
 From: Foo Chia Teck [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Friday, April 16, 2004 2:22 AM
 Subject: Performance

Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-16 Thread Foo Chia Teck
Hi,

We are facing performance degradation when running UniVerse 10.0.0 on AIX 5L
5.2.

A brief intro to the hardware: we are using a pSeries 650 SMP with 2
Power4 processors, 4GB of RAM, 4GB of paging space, and RAID5 SSA hard disks.

My UniVerse configuration is as below:

Current tunable parameter settings:
 MFILES =   300
 T30FILE=   200
 OPENCHK=   1
 WIDE0  =   0x3dc0
 UVSPOOL=   /uvspool1
 UVTEMP =   /uvtmp1
 SCRMIN =   3
 SCRMAX =   5
 SCRSIZE=   512
 QDEPTH =   16
 HISTSTK=   99
 QSRUNSZ=   2000
 QSBRNCH=   4
 QSDEPTH=   8
 QSMXKEY=   32
 TXMODE =   0
 LOGBLSZ=   512
 LOGBLNUM   =   8
 LOGSYCNT   =   0
 LOGSYINT   =   0
 TXMEM  =   32
 OPTMEM =   64
 SELBUF =   4
 ULIMIT =   128000
 FSEMNUM=   23
 GSEMNUM=   97
 PSEMNUM=   64
 FLTABSZ=   11
 GLTABSZ=   75
 RLTABSZ=   75
 RLOWNER=   300
 PAKTIME=   5
 NETTIME=   5
 QBREAK =   1
 VDIVDEF=   1
 UVSYNC =   1
 BLKMAX =   131072
 PICKNULL   =   0
 SYNCALOC   =   1
 MAXRLOCK   =   74
 ISOMODE=   1
 PKRJUST=   0
 PROCACMD   =   0
 PROCRCMD   =   0
 PROCPRMT   =   0
 ALLOWNFS   =   0
 CSHDISPATCH=   /bin/csh
 SHDISPATCH =   /bin/sh
 DOSDISPATCH=   NOT_SUPPORTED
 LAYERSEL   =   0
 OCVDATE=   0
 MODFPTRS   =   1
 THDR512=   0
 UDRMODE=   0
 UDRBLKS=   0
 MAXERRLOGENT   =   100
 JOINBUF=   4095
 64BIT_FILES=   0
 TSTIMEOUT  =   60
 PIOPENDEFAULT  =   0
 MAXKEYSIZE =   255
 SMISDATA   =   0
 EXACTNUMERIC   =   15
 MALLOCTRACING  =   0
 CENTURYPIVOT   =   1930
 SPINTRIES  =   0
 SPINSLEEP  =   1
 CONVERT_EURO   =   0
 SYSTEM_EURO=   164
 TERM_EURO  =   164
 SQLNULL=   128

When UniVerse is restarted it runs fine for about a day before it uses up all the CPU
and memory resources. A quick check with 'topas' shows the CPU fully consumed by
Kernel and User time, with nothing left in Wait or Idle: around 70% of
the CPU is used for User and 30% for Kernel.

On the memory side, it seems all the physical memory has been consumed, and even
some paging space has been used. A quick snapshot of memory from 'topas' is
below:

MEMORY
 Real,MB    4095
 % Comp 22.4
 % Noncomp  76.2
 % Client   75.1

 PAGING SPACE
 Size,MB    4096
 % Used  1.4
 % Free 98.5

When all the physical memory is fully occupied, UniVerse processing
becomes slow. Is restarting UniVerse whenever performance degrades really the
only thing I can do?
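
(For what it's worth, the topas numbers above show only about 22% computational memory
but 76% non-computational pages, i.e. file pages - AIX using otherwise free memory as
file cache - rather than process memory. A couple of standard commands show the split
and the file-cache tunables on AIX 5.2; the grep pattern is just for convenience:

   svmon -G                                        # work (computational) vs file (non-comp) pages
   vmo -a | grep -E 'minperm|maxperm|maxclient'    # current VMM file-cache limits

With the default limits, most of RAM can legitimately end up holding file pages.)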

Is there any performance tuning we need to do on the OS to prevent this
issue? Or is there any known issue with this version of UniVerse?

Please assist me to solve this problem.

Regards,

Foo




--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-16 Thread djordan
Hi Foo

Could it be an application problem?  It sounds like an application may be
adding data to something like a type 1 file - a log, a como file or even a
print job - and it is growing to an enormous size, consuming resources.
There might be some other application that has gone rogue.  Can you
identify the time this problem started and relate it to a program that
was updated?  Can you identify a user who is using a lot of the
resources and find out what they are doing?
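
One quick way to look for such a runaway file is to search the account
directories for anything unusually large that has grown recently; a rough
sketch only - the path and thresholds are examples, and find's -size is in
512-byte blocks here:

   find /u2/accounts -type f -size +2000000 -mtime -2 -ls

(roughly: files over 1GB touched in the last two days).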

Regards

David Jordan

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Foo Chia Teck
Sent: Friday, 16 April 2004 4:22 PM
To: [EMAIL PROTECTED]
Subject: Performance Degraded running u10.0.0 in Aix 5.2 ML 2


-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: UniVerse vs Progress Performance

2004-04-16 Thread Scott Richardson
Sounds like something is not tuned properly somewhere.
Another Onion that needs a damn good peeling!

Download the DPMonitor on both of these puppies,
and then you can realistically compare volumes of I/O,
volumes of CPU, volumes of memory, etc... in an apples to apples
sort of comparison of sorts.

Once you have it peeled and profiled, you will then have the
technology required to put it all back together properly, so that
it will really scream, as they say.

Not only that - you can monitor what ever changes you make
along the way and clearly see if they help, or hurt your cause, and why.

See my other reply to the Performance Degraded... thread.

When you peel all the layers off these tight, nasty onions, and
understand what's going on at all the different levels - it makes it easy
to identify, address and resolve these problems - and monitor them
proactively going forward as changes occur, growth/shrinkage happens,
or additional processes / users come into the mix.

UV applications, properly tuned and configured on their platform, should
run extremely well, price/performance-wise.

Been there, done that.
Many times over.

Sincere Regards,
Scott Richardson
Senior Systems Engineer / Consultant
Marlborough, MA 01752
Email: [EMAIL PROTECTED]
Web: http://home.comcast.net/~CheetahFTL/CC/CheetahFTL_1.htm
eFax: 208-445-1259



- Original Message - 
From: Ross Ferris [EMAIL PROTECTED]
To: U2 Users Discussion List [EMAIL PROTECTED]
Sent: Friday, April 16, 2004 12:48 AM
Subject: RE: UniVerse vs Progress Performance



-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-16 Thread Anthony Youngman
Okay, it's AIX not linux, but I've just noticed that RAM = swap.

You are an ABSOLUTE FOOL if you do that on linux. Maybe (or maybe not)
the same applies to AIX - quite likely since they are both nixen and
probably manage memory similarly.

Double swap space to 8Gb and see if that improves matters.
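
For reference, paging space can be grown on AIX without a reboot, assuming the
volume group has free partitions - the names and sizes below are examples only,
and the number of partitions to add depends on the volume group's partition size:

   chps -s 32 hd6                   # grow the existing hd6 paging space by 32 LPs
   mkps -s 32 -n -a rootvg hdisk1   # or add a new paging space on another disk
   lsps -a                          # confirm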

Oh - and if you don't believe me, a swap = ram configuration will
CRASH the early vanilla 2.4 kernels and that's 2002 vintage.

Cheers,
Wol 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Scott Richardson
Sent: 16 April 2004 14:06
To: U2 Users Discussion List
Subject: Re: Performance Degraded running u10.0.0 in Aix 5.2 ML 2

Performance of UV applications on various Operating Systems
is not rocket science. Perhaps better described as large, nasty
tight onions that need peeling, one layer at a time, and
understanding what each peeled layer is doing and why.
Once this knowledge is acquired and understood, a plan can
be built and executed to attack/resolve the problem.

Are users logging out/off when they're done using the system,
or when they've completed some large tasks or operations?
How often is the system rebooted?
RAID 5 file systems can slow down IO.
We'll need specifics on file system setup and parameters.
How many users? What are these users doing?
Have you got everyone and their siblings all running SELECT and
SORT operations all the time? Data Entry out the wazoo?

How big are the files, and how are they sized? How frequently
does data change in the files, (grow, shrink, etc...)

How big is your /tmp file system, and what kind of file system,
and where is it physically located?? Provide it it's own file system,
on it's own disk or disk set, (i.e. not the same disks where other
activity is going on).

4GB of RAM, yet only 4 GB paging/swap space?
Where is this swap paging space, (i.e. what disks?)

topas may be fine for quick and dirty analysis and understanding,
but using it extensively can help contribute to performance problems.

You need to configure and tune the platform, the OS, the UV DB,
the IO sub-system,  the applications, the users, and the
administration/operations, and thenensure they're all coordinated
with each other, to maximize platform performance.

To find, (and therefore address  resolve), the root causes of what
is happening here, you need to profile the platform using something
such as the DPMonitor, (extremely low-overhead monitoring Agent)
and display/crunch the performance metrics on another platform,
(i..e. a Windows Performance Explorer Console). Using this method,
you'll be able completely profile the entire platform, (OS and
applications),
around the clock, and then easily dial into specific timeframes where
problems are occurring, and fully understand exactly what is happening
and learn why it is happening, so it can be addressed and resolved,
and measure the progress along the entire way.

The DPMonitor is available with a free 10 day evaluation license where
it
will track system-wide performance metrics. Fully licensed version will
track individual processes that you select, or all processes if you so
desire. When you monitor all of the processes, you can quickly and
easily identify processes deserving further analysis, and stop tracking
processes that are not casuing any problems. More information on the
DPMonitor can be found at http://www.deltek.us and the DPMonitor
can be downloaded right off the website. If you're short on memory,
DPMonitor will allow you to see how much memory you will need to
allow the system to run as fast as it can, given how you're running it.
If you need tuning of OS or UV parameters, or other things that ay be
playing contributing factor/roles, the DPMonitor will clearly point this
out,
grahically, so that anyone can plainly see what is happening.

Once you make any changes, you'll be able to monitor, and measure,
any differences, consistently, and prove whether or not you have
improved, or detrimented, your cause. Best of all, you'll be able to
show, prove, and justify to  management what you're doing, and
why, and show them what it will take to get the problems addressed
and resolved, positively, without question.

Hope this helps.  I know the DPMonitor can  will help.
I have used it personally, numerous times, to peel many a complex onion,
understand what is exactly going on, find out why, and then put together
and executed plans that have successfully addressed and resolved similar
problems and streamlined operations moving forward saving many a
business significant time, frustration, and money, and then ensured that
any  all operations moving forward were done from a pro-active,
knowing ahead of time manner, rather than fire-fighting problems on a
continual basis. If you want something done, why not do it right, once?
Stop beating your head against the onion wall! Work smarter!
Let the DPMonitor be your detailed, EKG-like instrument to cut to
the heart of your complex application server

RE: Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-16 Thread Steve Ferries
HI All,

Before doubling your swap space, check to see how much you are using at your busy
times. We have an 8 Gig system, and a 6 Gig pool:

Page Space   Physical Volume   Volume Group   Size     %Used   Active   Auto   Type
paging00     hdisk1            rootvg         6144MB       2   yes      yes    lv

Everyone into the pool!

Regards,

Steve Ferries
Vice President, Information Technologies
Total Credit Recovery Limited





-Original Message-
From: Anthony Youngman [mailto:[EMAIL PROTECTED]]
Sent: Friday, April 16, 2004 9:31 AM
To: U2 Users Discussion List
Subject: RE: Performance Degraded running u10.0.0 in Aix 5.2 ML 2



RE: UniVerse vs Progress Performance

2004-04-16 Thread Brutzman, Bill

I thought that Progress was more or less a traditional SQL database.

Please clarify... What is special about Progress?

--Bill

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Performance Degraded running u10.0.0 in Aix 5.2 ML 2

2004-04-16 Thread Sara Burns
We upgraded UniVerse 10.0.11 from AIX 4.3.3 to AIX 5.2 last October.  We
also run Oracle and Vantive on the same box.  One thing we have found is
that we need to have at least twice the amount of Paging Space as real
memory.  If the free Paging Space gets down to zero the machine dies a
horrible death.
 
As I understand it with AIX the data is first written to the Paging Space
then to memory - rather unusual.  So if your Paging Space is low then you
may be doing a lot of swapping out to somewhere else.  After the upgrade we
noticed that the batch processes were faster but we did notice a degradation
in the user interface for Oracle / Vantive although not noticeable with
UniVerse.
 
When you run topas check to see if there are any processes consuming large
amounts of cpu constantly.  One of our non-UniVerse applications suffers
from run-aways and we see them consuming one cpu each constantly.
Fortunately we have 4 cpus so can live with these run-aways (as we cannot
remove them without dropping multiple users in critical applications).  If
you get more run-aways AIX will drop their priority so this will become less
obvious - but it will affect all users of the machine.
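
A quick way to spot those run-aways from the shell, rather than leaving topas
running, is to sort the process list by CPU - standard options, nothing exotic:

   ps aux | head -1 ; ps aux | sort -rn -k 3 | head    # top processes by %CPU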
 
It is not common to have UniVerse run-aways but we have seen it
infrequently.  If you see a process consuming high cpu look at it in
PORT.STATUS to see what it is doing.  Check you are not using PORT.STATUS
for any utilities that run often.  This can have a major effect on the
performance of the machine.  Also topas has an effect as well.
 
We are not running any ML patches on our production machine as neither ML1
nor ML2 are compatible with one of our other applications - the one that
runs away.  
Generally we are very pleased with the performance on AIX - only management
wants us to move from this platform.
 
Our approx specs are p660, 4 cpu, 6Gb RAM, 12Gb Paging Space
Universe 10.0.11, 320 users
Oracle 9.2 - usually about 600 oracle processes
Vantive 9 - 100 processes each with up to 5 threads
 
Sara Burns
 
Sara Burns (SEB) 
Development Team Leader

Public Trust 
Phone: +64 (04) 474-3841 (DDI) 

Mobile: 027 457 5974
 mailto:[EMAIL PROTECTED]


 
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: UniVerse vs Progress Performance

2004-04-15 Thread Dawn M. Wolthuis
I'm curious if there is a follow up on this?  Is it a database tuning issue?
Indexing?  Memory?  ...

Thanks.  --dawn

Dawn M. Wolthuis
Tincat Group, Inc.
www.tincat-group.com

Take and give some delight today.


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of André Nel
Sent: Tuesday, March 23, 2004 3:07 AM
To: [EMAIL PROTECTED]
Subject: FW: UniVerse vs Progress Performance



Hi All

I visited a neighbouring company (same line of business as ours) running 430
users on a Compaq Proliant box with SCO OpenServer 5 and Progress version
9.1c as the database. The application is in-house. At the time of my visit the
CPU usage was constantly running at 80%, yet there were no problems and no
users complaining that the system is slow.

The server spec is as follows:

2x intel pentium III xeon 500Mhz processors
1.8GB RAM
Smart Array 3200 controller
Compaq Fast SCSI-2 controller
10x 18.2 GB Ultra SCSI-2 drives (8 drives are RAID 1, other 2 RAID 0) and 5
drives on Ultra 2 controller and 5 drives on Ultra 3 Controller
2x 10/100 Tx Ethernet controllers

We are running AIX v5.1 with Maintenance Level 3 and UniVerse 10.0.7 (190
users) on a p620 box with the following specs:

System Model: IBM,7025-6F1
Machine Serial Number: 6577ABA
Processor Type: PowerPC_RS64-III
Number Of Processors: 2
Processor Clock Speed: 602 MHz
CPU Type: 64-bit
Kernel Type: 32-bit
LPAR Info: -1 NULL
Memory Size: 4096 MB
Good Memory Size: 4096 MB
Paging 3072MB 
Firmware Version: IBM,M2P01208

Our box is struggling with the 190 users. File types are T30. All our lines
are minimum 64K diginet.

Comparing the two boxes and the number of users on each, is there any reason why
we are struggling with 190 users, when the transaction volumes of the company
running 430 users are considerably higher than ours?

Any comments please

Thanks

André


-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users

--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: UniVerse vs Progress Performance

2004-04-15 Thread Ross Ferris
Probably need to see Progress running on the IBM under AIX - or UV on an Intel chip with
the same OS to make a significant comparison; even neglecting just WHAT is going on under
the hood - it could have been 400+ users doing 'nothing'

Ross Ferris
Stamina Software
Visage - an Evolution in Software Development


-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Dawn M. Wolthuis
Sent: Friday, 16 April 2004 1:36 PM
To: 'U2 Users Discussion List'
Subject: RE: UniVerse vs Progress Performance

 
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Performance

2004-04-12 Thread Sara Burns
A trap we fell into when converting from UniData to UniVerse was to run that
excellent utility PORT.STATUS.  This is extremely debilitating to UniVerse
and should be restricted to only essential operations.
 
Currently we are looking at converting UniVerse from AIX to W2K3.  This is
making us look at how things are done in our applications.  We have noticed
a number of situations where we are making UNIX calls where there are built
in UniVerse methods.  This is a hangover from previous lives which I think
started at Reality to UniData to UniVerse.  There was also ALL as part of
the mix.
 
A recent small change to a program extended the running time greatly.
It turned out a call to SP-ASSIGN (our modified version) was being made every
time a record was read instead of only if it were to be printed.  This is
very slow and I believe involves OS system calls.  We have also noticed a
very high level of system calls, but that was also on our previous Sequent
hardware - so maybe it is our app not UniVerse or AIX.
 
What I am trying to say is that by converting from D3 you may have extensive
performance hits in your application code, which is not obvious.
 
We are intending doing a lot more research into what our application is
doing, especially as it interacts with the OS.  This may be our greatest
benefit from investigating the W2K3 route.  
 
Sara Burns (SEB) 
Development Team Leader

Public Trust 
Phone: +64 (04) 474-3841 (DDI) 

Mobile: 027 457 5974
 mailto:[EMAIL PROTECTED]


 
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Performance

2004-04-09 Thread FFT2001
In a message dated 4/8/2004 12:20:45 PM Pacific Daylight Time, 
[EMAIL PROTECTED] writes:


 1.)  Application files have all been analyzed and sized correctly.
 2.)  IBM U2 support analyzed Universe files, locking, swap space and all
 have been adjusted accordingly or were 'ok'.
 3.)  We are running RAID 5, with 8G allocated for Universe
 4.)  We are already running nmon, which is how we identified the paging
 faults and high disk I/O

Indexes created without suppressing NULLs
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-09 Thread Stevenson, Charles
Kevin,
When you finally get this solved,  let us know what the answer was.  I
am sure all responders would be interested.

re. /tmp: I've seen marginal but not incredible improvement moving
UVTEMP onto our EMC storage rather than leaving it on the system's local disk
(/tmp).

re. file sizing: since you are porting from D3, I assume you made
everything type 18, which is the standard Pick hashing algorithm?  That
ought to behave about the same as it did on D3.   How about Separation?
Does D3 have that concept?  I don't think Jeff mentioned it.  For most
files you want to set separation such that you get an integer number of
groups for each OS disk read.  If a single disk read grabs 8K, then
separation 4 (512*4 = 2K/group) means filescans will ask for a group, the
OS will read in 4 groups, and the next 3 times the process asks for the
next group, it's probably still sitting in memory.  So if the OS does
read 8K at a time, separations of 1, 2, 4, 8 or 16 make sense, depending on
the nature of the records.  4 is typical.
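
Purely as an illustration - the file name, type, modulo and separation below are
hypothetical, and you would check the current shape first - changing a static file
to separation 4 would look like:

   >ANALYZE.FILE CUSTOMERS
   >RESIZE CUSTOMERS 18 40009 4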

re. locks:  I notice the lock table is pretty small, and there are a
lot of 'CLR.OM.LOCK' processes.  Is this one of those Pick apps where
people developed their own record locking scheme because they didn't
trust Pick's record lock handling?  If so, maybe that is a source of
inefficiency.  It's not clear how that would manifest itself with
paging, though.

What about loading programs into shared memory?  Do you have an
absolutely huge program that many users use?  By default they each load
their own copy of the object file into their private memory.  But you
can change that so only one copy is loaded.  The same with small utility
routines that get called by everyone throughout the day.  Load them once
in shared memory,  then all users will run off that copy.   Again, we're
talking incremental, not incredible, performance improvements.

I'm grasping here.  I'm sure IBM's hardware, AIX, and U2 support has gone
through all this already.  You will post the answer once you know it,
won't you?
 
cds
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-09 Thread Kevin Vezertzis
Thanks to everyone for the performance suggestions...I will report to
the board as soon as we resolve it.

Kevin



Kevin D. Vezertzis
Project Manager
Cypress Business Solutions, LLC.
678.494.9353  ext. 6576  Fax  678.494.9354
 
[EMAIL PROTECTED]
Visit us at www.cypressesolutions.com
 
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Stevenson, Charles
Sent: Friday, April 09, 2004 4:24 PM
To: U2 Users Discussion List
Subject: RE: Performance

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-08 Thread Brian Leach
Duh duh duh

Please ignore my last post.

I scanned it and missed 'AIX'.

It's the end of the day here 


Brian
 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Kevin Vezertzis
Sent: 08 April 2004 17:08
To: [EMAIL PROTECTED]
Subject: Performance

We are looking for some insight from anyone who has experienced performance
degradation in UV as it relates to the OS.  We are running UV 10.0.14 on
AIX 5.1 and we are having terrible 'latency' within the application.  This is a
recent conversion from D3 to UV and our client is extremely disappointed
with the performance.  We've had IBM hardware support and UniVerse support
in on the box, but to no avail; we are seeing high paging faults and very
heavily utilized disk space.  Any thoughts or suggestions?
 
Thanks,
Kevin
 
 
 
Kevin D. Vezertzis
Project Manager
Cypress Business Solutions, LLC.
678.494.9353  ext. 6576  Fax  678.494.9354
 
[EMAIL PROTECTED]
Visit us at www.cypressesolutions.com
 
 
 
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-08 Thread Stevenson, Charles
Kevin,

We had the same problem when migrating UV from DEC in one data center to
HP managed in another data center, even though it was a honking big HP.

The sys admins configured memory similarly to how their 20 other
unix machines were set up.  The trick was to configure most memory for
shared data, so that the large UV files used most of the day pretty much
stay in memory all the time.

I have no personal experience managing unix memory, but these guys did,
for Informix, Oracle, etc, and they see UV as acting very differently
(well, weird was their exact word) from all other DBMSs they have
managed.

Once the memory allocation was resolved, the new system screamed as
expected.

Give it a try,
Chuck Stevenson


 -Original Message-
 From: [EMAIL PROTECTED] 
 [mailto:[EMAIL PROTECTED] On Behalf Of Kevin Vezertzis
 Sent: Thursday, April 08, 2004 9:08 AM
 To: [EMAIL PROTECTED]
 Subject: Performance
 
 
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-08 Thread Jeff Schasny
Screen savers?  Best performance to background processes? On AIX?

-Original Message-
From: Brian Leach [mailto:[EMAIL PROTECTED]
Sent: Thursday, April 08, 2004 9:24 AM
To: 'U2 Users Discussion List'
Subject: RE: Performance


First things,

1. turn off any screen savers
2. ensure your server is set to adjusted to give best performance to
background processes
3. turn off any virus checkers
4. turn off veritas backup

Brian Leach 

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Performance

2004-04-08 Thread Scott Richardson
Hello Kevin,

I have seen many good posts in reply to your situation already.
File-sizing, (and therefore disk IO) is a key/critical area.

What kind of file systems do you have?

How much memory and swap space do you have?
What are the Virtual Memory AIX tuning parameters set to?

IBM Hardware - AIX support and IBM U2 support - are all the 
same company, and they can't find it?

Please give us the system configuration information so we can 
all develop a more clear picture of what you're running there.
Is this system a recent OS upgrade from AIX 4.X?
Any new or different hardware added or subtracted?
Any other changes that may be noteworthy?

The way you discuss memory, page faulting, and very high 
disk IO, I would make sure they verify each of your uvconfig 
parameters, and kernel system tunable parameters, and make 
sure you have more than ample swap space, and a large /tmp 
mounted file system space with fast striped disk sub-system 
underneath. 

One tool that will help map out exactly what is going on, and 
therefore provide a road map on how to address/resolve these 
issues, and then prove that these issues are indeed resolved,
would be the DPMonitor. DPMonitor is available on the internet 
and has a free 10 day evaluation license available that will allow 
you to track system-wide parameters and performance metrics
that will provide a very clear picture as to what is happening.

Check it out at www.deltek.us.

This tool has been used on AIX 5.1, on small single processor
configurations, up through very large systems, 16 and 32 processor
systems.

Performance Agent runs on the AIX Application Server.
Extremely low overhead Agent.

Performance Explorer runs on a Windows Workstation.

Well worth the free 10 day evaluation license.

Regards,
Scott
Sr. Systems Engineer / Consultant
Marlborough, MA



- Original Message - 
From: Kevin Vezertzis [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, April 08, 2004 12:08 PM
Subject: Performance


-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-08 Thread Kevin Vezertzis

Thanks for all of the posts...here are some of our 'knowns'...

1.)  Application files have all been analyzed and sized correctly.
2.)  IBM U2 support analyzed Universe files, locking, swap space and all
have been adjusted accordingly or were 'ok'.
3.)  We are running RAID 5, with 8G allocated for Universe
4.)  We are already running nmon, which is how we identified the paging
faults and high disk I/O

5.)  Attached you will find the following:
smat -s
LIST.READU EVERY
PORT.STATUS
Uvconfig
Nmon (verbose and disk)
Vmtune

I know this is a lot of data, but it is a mix of what each of you have
suggested.  Thanks again for all of the help.

Kevin



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Kevin Vezertzis
Sent: Thursday, April 08, 2004 12:08 PM
To: [EMAIL PROTECTED]
Subject: Performance

We are looking for some insight from anyone that has experienced
performance degradation in UV, as it relates to the OS.  We are running
UV 10.0.14 on AIX 5.1.we are having terrible 'latency' within the
application.  This is a recent conversion from D3 to UV and our client
is extremely disappointed with the performance.  We've had IBM hardware
support and Universe support in on the box, but to no avail..we are
seeing high paging faults and very highly utilized disk space.  Any
thoughts or suggestions?
 
Thanks,
Kevin
File access        State   Netnode   Owner   Collisions   Retries
Semaphore #   1      0        0        0          0          0
Semaphore #   2      0        0        0          0          0
Semaphore #   3      0        0        0          0          0
Semaphore #   4      0        0        0          0          0
Semaphore #   5      0        0        0          0          0
Semaphore #   6      0        0        0          0          0
Semaphore #   7      0        0        0          0          0
Semaphore #   8      0        0        0          0          0
Semaphore #   9      0        0        0          0          0
Semaphore #  10      0        0        0          0          0
Semaphore #  11      0        0        0          0          0
Semaphore #  12      0        0        0          0          0
Semaphore #  13      0        0        0          0          0
Semaphore #  14      0        0        0          0          0
Semaphore #  15      0        0        0          0          0
Semaphore #  16      0        0        0          0          0
Semaphore #  17      0        0        0          0          0
Semaphore #  18      0        0        0          0          0
Semaphore #  19      0        0        0          0          0
Semaphore #  20      0        0        0          0          0
Semaphore #  21      0        0        0          0          0
Semaphore #  22      0        0        0          0          0
Semaphore #  23      0        0        0          0          0

Group access       State   Netnode   Owner   Collisions   Retries
Semaphore #   1      0        0        0         34         34
Semaphore #   2      0        0        0         13         13
Semaphore #   3      0        0        0          6          6
Semaphore #   4      0        0        0         21         21
Semaphore #   5      0        0        0         10         10
Semaphore #   6      0        0        0         12         12
Semaphore #   7      0        0        0         12         12
Semaphore #   8      0        0        0         43         43
Semaphore #   9      0        0        0          7          7
Semaphore #  10      0        0        0          9          9
Semaphore #  11      0        0        0         11         11
Semaphore #  12      0        0        0         10         10
Semaphore #  13      0        0        0         11         11
Semaphore #  14      0        0        0         16         16
Semaphore #  15      0        0        0         10         10
Semaphore #  16      0        0        0         11         11
Semaphore #  17      0        0        0         17         17
Semaphore #  18      0        0        0         12         12
Semaphore #  19      0        0        0         19         19
Semaphore #  20      0        0        0          5          5
Semaphore #  21      0        0        0         22         22
Semaphore #  22      0        0        0          8          8
Semaphore #  23      0        0        0         34         34
Semaphore #  24      0        0        0          5          5
Semaphore #  25      0        0        0         10         10
Semaphore #  26      0        0        0         11         11
Semaphore #  27      0        0        0         15         15
Semaphore #  28      0        0        0         21         21
Semaphore #  29      0        0        0         12         12
Semaphore #  30      0        0        0         41         41
Semaphore #  31      0        0        0          7          7
Semaphore #  32      0        0        0         49         49
Semaphore #  33      0        0        0          9          9
Semaphore #  34      0        0        0         25         25
Semaphore #  35      0        0        0         13         13
Semaphore #  36      0        0        0         10         10
Semaphore #  37      0        0        0          6          6
Semaphore #  38      0        0        0         11         11

RE: Performance

2004-04-08 Thread Bob Gerrish
I  saw /tmp mentioned in one of Scott Richardson's posts.  UniVerse uses /tmp
for building select lists and sorts.  If it is undersized, it can cause page
faults like you are seeing.  How big is /tmp?  It might pay to monitor its
usage.  It can be as critical as having adequate swap / paging space.
Double check Scott's recommendation on /tmp.
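
A trivial way to keep an eye on it through the day (this assumes UVTEMP is still
pointing at /tmp, and the interval is arbitrary):

   while true
   do
       date; df -k /tmp
       sleep 300
   done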

Bob Gerrish  -  [EMAIL PROTECTED]
Kingsgate Enterprises, Inc.
At 12:18 PM 4/8/2004, you wrote:

Thanks for all of the posts...here are some of our 'knowns'...

1.)  Application files have all been analyzed and sized correctly.
2.)  IBM U2 support analyzed Universe files, locking, swap space and all
have been adjusted accordingly or were 'ok'.
3.)  We are running RAID 5, with 8G allocated for Universe
4.)  We are already running nmon, which is how we identified the paging
faults and high disk I/O
5.)  Attached you will find the following:
smat -s
LIST.READU EVERY
PORT.STATUS
Uvconfig
Nmon (verbose and disk)
Vmtune
I know this is a lot of data, but it is a mix of what each of you have
suggested.  Thanks again for all of the help.
Kevin



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Kevin Vezertzis
Sent: Thursday, April 08, 2004 12:08 PM
To: [EMAIL PROTECTED]
Subject: Performance
We are looking for some insight from anyone that has experienced
performance degradation in UV, as it relates to the OS.  We are running
UV 10.0.14 on AIX 5.1 ... we are having terrible 'latency' within the
application.  This is a recent conversion from D3 to UV and our client
is extremely disappointed with the performance.  We've had IBM hardware
support and Universe support in on the box, but to no avail ... we are
seeing high paging faults and very highly utilized disk space.  Any
thoughts or suggestions?
Thanks,
Kevin
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Performance

2004-04-08 Thread John Jenkins
Kevin

As this is AIX (and only on AIX please - as it has its own memory flush) -
set UVSYNC=0 in uvconfig and run uvregen (UniVerse stopped, please).

This stops UniVerse doing its own sync. There is a small risk with this,
but not much. Please do *not* change SYNCALLOC.

FSEMNUM and GSEMNUM look fine (assuming this was under load).
UVTEMP could do with relocating off root onto another volume - ideally on a
SAN. I am sure you know that RAID 5 costs disk throughput (parity disk
writes) - but this is a business call - just be aware.

There are a lot of processes in CLR.OM.LOCK - what does this do and is it
intensive?

Looking at SELECTs in use - we don't see the file sizes information etc -
but check out the following:
SELECT UNPOSTED.MRE WITH TEMP.LOC EQ BS2 BY BIN.SORT BY PROD 
SSELECT CATALOG.DETAIL BY-DSND DATE.TO (seen 3x)
SSELECT INVENTORY WITH 67 = PETSAFE (seen 2x)
SSELECT MAIL.MSG BY-DSND CREATE.DATE BY-DSND CREATE.TIME
SSELECT ADJ.CODES
SSELECT SMTP WITH 6 EQ 
SSELECT IMPORT.ORDERS

Are these SELECTs performed a lot and are the files large? - if so look at
indexes and watch the use of NO.NULLS where appropriate. Also watch that you
do *not* build indexes on DICT items which translate to data out of the
primary file (things like DATE() or a file translate).
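
Something along these lines, as a rough sketch only (taking the first select
above as the example, and assuming TEMP.LOC is a field stored in UNPOSTED.MRE
itself rather than a translate):

   EXECUTE 'CREATE.INDEX UNPOSTED.MRE TEMP.LOC NO.NULLS'  ;* define the index, keep empty values out of it
   EXECUTE 'BUILD.INDEX UNPOSTED.MRE TEMP.LOC'            ;* populate it from the existing data

The WITH TEMP.LOC EQ ... part of that select can then be resolved from the
index rather than a full file scan.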

I have ignored a couple of SELECTS with SAMPLE statements.

Would be interested in sar output for file opens - hopefully you are caching
these file handles in COMMON (ideally named). Also check the run queue (how many
CPUs?) - other than very occasionally, the run queue should not exceed
2x the number of CPUs on the system (or in the LPAR if you use these).
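
The handle caching would be something like this in a shared insert, as a rough
sketch only (file and COMMON names are just examples):

   COM /APP.FILES/ F.MASTER, F.REFERENCE, FILES.OPENED
   IF UNASSIGNED(FILES.OPENED) THEN FILES.OPENED = 0
   IF FILES.OPENED ELSE
      OPEN 'MASTER' TO F.MASTER ELSE STOP 'cannot open MASTER'
      OPEN 'REFERENCE' TO F.REFERENCE ELSE STOP 'cannot open REFERENCE'
      FILES.OPENED = 1      ;* files are opened once per session, not on every call
   END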

By heavily utilized disk space do you mean you use lots of disk? If so
- and it seems disproportionate to your data volumes - are you using
lots of dynamic files, and if so have you tuned the file split/merge ratios?

Final points - I am sort-of-assuming that MFILES is appropriately
set..? (see PORT.STATUS .. MFILE.HIST). You might also
want to check out FILE.USAGE. Email me off-line if you want to go through
any reports you already have on this - let's see what was and what was not
covered (saves retreading old ground).


Regards

JayJay

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Kevin Vezertzis
Sent: 08 April 2004 20:18
To: 'U2 Users Discussion List'
Subject: RE: Performance


Thanks for all of the posts...here are some of our 'knowns'...

1.)  Application files have all been analyzed and sized correctly.
2.)  IBM U2 support analyzed Universe files, locking, swap space and all
have been adjusted accordingly or were 'ok'.
3.)  We are running RAID 5, with 8G allocated for Universe
4.)  We are already running nmon, which is how we identified the paging
faults and high disk I/O

5.)  Attached you will find the following:
smat -s
LIST.READU EVERY
PORT.STATUS
Uvconfig
Nmon (verbose and disk)
Vmtune

I know this is a lot of data, but it is a mix of what each of you have
suggested.  Thanks again for all of the help.

Kevin



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Kevin Vezertzis
Sent: Thursday, April 08, 2004 12:08 PM
To: [EMAIL PROTECTED]
Subject: Performance

We are looking for some insight from anyone that has experienced performance
degradation in UV, as it relates to the OS.  We are running UV 10.0.14 on
AIX 5.1 ... we are having terrible 'latency' within the application.  This is a
recent conversion from D3 to UV and our client is extremely disappointed
with the performance.  We've had IBM hardware support and Universe support
in on the box, but to no avail ... we are seeing high paging faults and very
highly utilized disk space.  Any thoughts or suggestions?
 
Thanks,
Kevin


-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


FW: UniVerse vs Progress Performance

2004-03-23 Thread André Nel


Hi All

Visited a  neighbouring company (same line of business as ours) running 430 users on a 
Compaq Proliant box with SCO Openserver 5 and Progress version 9.1c as database. 
Application is in-house. At the time of my visit the CPU usage was constantly running
at 80%, yet no problems were being experienced and no users were complaining that the system is slow.

The server spec is as follows:

2x intel pentium III xeon 500Mhz processors
1.8GB RAM
Smart Array 3200 controller
Compaq Fast SCSI-2 controller
10x 18.2 GB Ultra SCSI-2 drives (8 drives are RAID 1, other 2 RAID 0) and 5 drives on 
Ultra 2 controller and 5 drives on Ultra 3 Controller
2x 10/100 Tx Ethernet controllers

We are running AIX v5.1 with Maintenance Level 3 and UniVerse 10.0.7 (190 users) on a 
p620 box with the following specs:

System Model: IBM,7025-6F1
Machine Serial Number: 6577ABA
Processor Type: PowerPC_RS64-III
Number Of Processors: 2
Processor Clock Speed: 602 MHz
CPU Type: 64-bit
Kernel Type: 32-bit
LPAR Info: -1 NULL
Memory Size: 4096 MB
Good Memory Size: 4096 MB
Paging 3072MB 
Firmware Version: IBM,M2P01208

Our box is struggling with the 190 users. File types are T30. All our lines are 
minimum 64K diginet.

Comparing the two boxes and the number of users on each, is there any reason why we are
struggling with the 190 users? The transaction volumes of the company running 430
users are considerably higher than ours.

Any comments please

Thanks

André


--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: FW: UniVerse vs Progress Performance

2004-03-23 Thread Timothy Snyder





André Nel wrote on 03/23/2004 04:07:09 AM:

 Comparing the 2 boxes, the amount of users on each box, any reason
 why we are struggling with the 190 users? The transaction volumes of
 the company running 430 users are considerably higher than ours?

You haven't provided enough information to say for certain; evaluating
performance bottlenecks can be quite involved.  How many disks are being
used, and what type of RAID is employed?  What are you seeing as far as CPU
utilization?  You can use sar or topas to determine this.  Naturally, there
are many, MANY metrics to consider, but seeing the way user, system, and
I/O wait time are represented is a good place to start.


Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services

Office (717) 545-6403  (rolls to cell phone)
[EMAIL PROTECTED]
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: FW: UniVerse vs Progress Performance

2004-03-23 Thread Results
Tim,
   You raise some good points. I always start with file sizes because 
it is usually easy to diagnose and frequently a quick win to get some 
speed back. André needs to also look at the complexity of the 
application. The 430 might be doing little more than they could do on a 
spreadsheet and the 190 might be doing complex sales analysis, stock 
modeling, JIT manufacturing, and logistics. Just because they are in the 
same business does not mean the software has similar abilities.
   The fact is, they may have done something brilliant with their 
system and your 'mileage' might be completely typical while they are 
experiencing atypically good results. Just because we are mv doesn't 
mean no one else is working at exploiting the efficiencies of those 
other systems.

   - Charles Right-Sized Barouch

Timothy Snyder wrote:



André Nel wrote on 03/23/2004 04:07:09 AM:

 

Comparing the 2 boxes, the amount of users on each box, any reason
why we are struggling with the 190 users? The transaction volumes of
the company running 430 users are considerably higher than ours?
   

You haven't provided enough information to say for certain; evaluating
performance bottlenecks can be quite involved.  How many disks are being
used, and what type of RAID is employed?  What are you seeing as far as CPU
utilization?  You can use sar or topas to determine this.  Naturally, there
are many, MANY metrics to consider, but seeing the way user, system, and
I/O wait time are represented is a good place to start.
Tim Snyder
IBM Data Management Solutions
Consulting I/T Specialist , U2 Professional Services
Office (717) 545-6403  (rolls to cell phone)
[EMAIL PROTECTED]
 

--
Sincerely,
 Charles Barouch
 www.KeyAlly.com
 [EMAIL PROTECTED]
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Help Needed regarding performance improvement of delete query

2004-03-16 Thread Anthony Youngman
And replying to Scott's post to say thanks for the compliment, but
I've just had another idea ...

How many fields are you using for your select on the master file?
ESPECIALLY if it's just the date, trans that across to your secondary
file, and index it! If it's more than one field, try and work out a
usable trans that will pull all the fields across in one index that you
can run a select over.

Don't forget, declaring an index means that that data is stored in the
file, whether it was calculated, trans'd, or was in the file anyway.

So now you can do a select and purge on your secondary file without ever
having to go near the primary, and the database will take care of making
sure all the required information is available as it's needed ... :-)
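
As a very rough, untested sketch - file, field and dictionary names are
examples only (the date is assumed to sit in attribute 5 of the master
record, as in the code posted elsewhere in this thread), and it is worth
checking how the index behaves when the master record changes later:

   OPEN 'DICT','REFERENCE' TO D.REF ELSE STOP 'no DICT REFERENCE'
   REC = 'I'
   REC<2> = "TRANS('MASTER',@ID,5,'X')"    ;* pull the date across from the master file
   REC<3> = 'D2/'
   REC<4> = 'Purge Date'
   REC<5> = '10R'
   REC<6> = 'S'
   WRITE REC TO D.REF, 'PURGE.DATE'
   EXECUTE 'CD REFERENCE PURGE.DATE'       ;* compile the I-descriptor
   EXECUTE 'CREATE.INDEX REFERENCE PURGE.DATE NO.NULLS'
   EXECUTE 'BUILD.INDEX REFERENCE PURGE.DATE'

The purge select on the secondary file can then be something like
SELECT REFERENCE WITH PURGE.DATE LT "01/01/2003", driven entirely by the index.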

Cheers,
Wol

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of Scott Richardson
Sent: 15 March 2004 13:08
To: U2 Users Discussion List
Subject: Re: Help Needed regarding performance improvement of delete
query

Great points from Wol, as always.

What kind of /tmp disk space do you have on this system?
(Assuming that /tmp is where UV does some of its SELECT
scratch-pad intermediate writing when processing large queries;
consult your site's actual uvconfig for all of your actual values...).

If this /tmp is small, single physical disk, or heavily fragmented,
this would also contribute to poor query runtime performance.
Ditto on your system's swap space, which should be at least
2X physical memory.

Wol's approach of breaking down the query into selecting
smaller groups of data is a great one. Chip away at the stone,
methodically, consistently, and constantly.

What platform is this on?
What OS version?
What UV Version?
How much memory and disk space?
How much /tmp and swap space?

Are you running this query with other users on the system, who
may be also trying to access the files this query is working with?

Are you running this at night when it might conflict with a backup
operation?

More food for thought.

Regards,
Scott

- Original Message - 
From: Anthony Youngman [EMAIL PROTECTED]
To: U2 Users Discussion List [EMAIL PROTECTED]
Sent: Monday, March 15, 2004 4:02 AM
Subject: RE: Help Needed regarding performance improvement of delete
query


This might help speed things up a bit ...

Firstly, of course, is your file properly sized?

Secondly, (and in this case you will need to run the SELECT / DELETE
sequence several times) try putting a SAMPLE 1000 (or whatever number
makes sense) at the end of your select.

Basically, this will mean that the SELECT runs until it finds that
number of records and then stops. So each sequence won't load the system
so badly. Creating a huge select list will stress your ram badly ...
looping through this sequence won't stress the system so badly, though
you really do need to use indices to reduce the stress even more ...

Create an index on various fields that you're using as your select
criteria. If you're selecting old records, then you need to select on
date, and this really will make life both easy and fast. The more
closely you can guarantee that a select, acting on a single index, will
pick up only or mostly records that you are going to delete, the better.
That will SERIOUSLY reduce the time taken and the performance hit.
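
Something along these lines, as a very rough and untested sketch (file and
field names are examples only, and it relies on each pass deleting what it
selected, so the next SAMPLE picks up a fresh batch):

   OPEN 'MASTER' TO F.MASTER ELSE STOP 'cannot open MASTER'
   OPEN 'REFERENCE' TO F.REFERENCE ELSE STOP 'cannot open REFERENCE'
   LOOP
      EXECUTE 'SELECT MASTER WITH PURGE.DATE LT "01/01/2003" SAMPLE 1000' CAPTURING JUNK
      DONE = 1
      LOOP
         READNEXT ID ELSE EXIT
         DONE = 0
         DELETE F.MASTER, ID
         DELETE F.REFERENCE, ID
      REPEAT
   UNTIL DONE DO REPEAT

With PURGE.DATE indexed each pass becomes very cheap; even without the index,
SAMPLE stops the scan once the 1000 matches have been found.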

Cheers,
Wol

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of ashish ratna
Sent: 15 March 2004 08:25
To: [EMAIL PROTECTED]
Subject: Help Needed regarding performance improvement of delete query

Hi All,

We are working on purging old data from the database, but we are
facing performance problems with this.

We are using a select query which is created dynamically on the basis of
the number of records. We want to know if there is any limit on the size of
a query in UniVerse.

The UniVerse help PDF says there is no limit on
the length of a select query, but when we run the program on a file with
more than 0.5 million records it gives the error:

Pid 14433 received a SIGSEGV for stack growth failure. Possible causes:
insufficient memory or swap space, or stack size exceeded maxssiz.

Memory fault(coredump)

If there is no limitation on the size of the query, then please suggest some
other possible solution which can help us reduce the query time and
complete the process successfully without this error.

Thanks in advance.

Regards,

Ashish.







***

This transmission is intended for the named recipient only. It may
contain
private and confidential information. If this has come to you in error
you
must not act on anything disclosed in it, nor must you copy it, modify
it,
disseminate it in any way, or show it to anyone. Please e-mail the
sender to
inform us of the transmission error or telephone ECA International
immediately and delete the e-mail from your information system.

Telephone numbers for ECA International

Re: Help Needed regarding performance improvement of delete query

2004-03-16 Thread FFT2001
In a message dated 3/15/2004 9:54:01 AM Eastern Standard Time, [EMAIL PROTECTED] 
writes:

 10 READNEXT ID ELSE STOP
READV DTE FROM F.MASTER, ID, 5 ELSE GOTO 10
IF DTE GT DEL.DATE THEN GOTO 10
DELETE F.MASTER, ID
DELETE F.REFERENCE, ID
GOTO 10

I count three goto's
So three times through the spank machine.
Will
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Help Needed regarding performance improvement of delete query

2004-03-16 Thread FFT2001
Gives Mark the most improved player award.
Will

In a message dated 3/16/2004 5:07:09 PM Eastern Standard Time, [EMAIL PROTECTED] 
writes:

 LOOP WHILE READNEXT ID DO
READV DTE FROM F.MASTER, ID, 5 THEN
 IF DTE LE DEL.DATE THEN
  DELETE F.MASTER, ID
  DELETE F.REFERENCE, ID
 END
END
 REPEAT
 
 there you go (to).
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Help Needed regarding performance improvement of delete query

2004-03-16 Thread Grant.Boice
I personally like this approach...

   LOOP
  READNEXT ID ELSE EXIT
  READV DTE FROM F.MASTER, ID, 5 ELSE CONTINUE
  IF DTE LE DEL.DATE ELSE CONTINUE
  DELETE F.MASTER, ID
  DELETE F.REFERENCE, ID
   REPEAT

That's my $0.02 worth.  (If that!)

Grant

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tuesday, March 16, 2004 6:08 PM
To: U2 Users Discussion List
Subject: Re: Help Needed regarding performance improvement of delete
query


Gives Mark the most improved player award.
Will

In a message dated 3/16/2004 5:07:09 PM Eastern Standard Time, [EMAIL PROTECTED] 
writes:

 LOOP WHILE READNEXT ID DO
READV DTE FROM F.MASTER, ID, 5 THEN
 IF DTE LE DEL.DATE THEN
  DELETE F.MASTER, ID
  DELETE F.REFERENCE, ID
 END
END
 REPEAT
 
 there you go (to).
-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Help Needed regarding performance improvement of delete query

2004-03-15 Thread ashish ratna
Hi All,
 
We are working on purging old data from the database, but we are facing
performance problems with this.

We are using a select query which is created dynamically on the basis of the number of
records. We want to know if there is any limit on the size of a query in UniVerse.

The UniVerse help PDF says there is no limit on the length of a
select query, but when we run the program on a file with more than 0.5
million records it gives the error:
 
Pid 14433 received a SIGSEGV for stack growth failure. Possible causes: insufficient 
memory or swap space, or stack size exceeded maxssiz. 

Memory fault(coredump)

If there is no limitation on the size of the query, then please suggest some other possible
solution which can help us reduce the query time and complete the process
successfully without this error.

Thanks in advance.

Regards,

Ashish.

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


RE: Help Needed regarding performance improvement of delete query

2004-03-15 Thread ashish ratna
Hi Wol,

The scenario is that-

We have a master file with more than 3-4 million records and a corresponding
reference file which contains reference data for this master file.

Now we start our purge program, which selects records from the master file on the basis of
date. The corresponding data should then be deleted from the other file (the reference file).

For this requirement we have adopted the following approach: select the records from the master
file on the basis of date, save the list of these records, and then on the basis of this
list select the records from the reference file.

The issue is that this list contains more than 0.5 million ids and I want to take a few (say
10,000 at a time) record ids from this list for further processing.

Any pointers for this problem will be very helpful.

Thanks in advance.

Ashish.



-Original Message-
From: Anthony Youngman [mailto:[EMAIL PROTECTED]
Sent: Monday, March 15, 2004 4:50 PM
To: ashish ratna
Subject: RE: Help Needed regarding performance improvement of delete
query


Ahhh

I thought you were selecting records and deleting them. So the first
thousand would have disappeared, and you would obviously get a different
thousand next time round because the first lot would have gone :-)
Sounds like that's not the case.

In that case, do you have a field that is repeated across many records,
but where each individual value (or range of values) wouldn't return too
big a bunch of records? Or do you have a numeric id - you could declare
an index on that ...

Let's take that numeric id idea - and then you'll have to build on it
for yourself. Declare an i-descriptor as @ID[3]. That logically
partitions your file into a thousand pieces. Declare an index on this.
The first term in your select will then be WITH IDESC EQ
whatever-number-you-want. That'll reduce the load on the database for
each pass, you'll just need to wrap it in a loop where whatever-number
goes from 0 to 999

Actually, what I would probably do is declare my i-descriptor as
INT(@ID/30) and then run this purge daily with whatever-number as
today's day. Obviously, it'll do nothing on the 31sts, and in March
it'll do two month's work on the 29th and 30th, but you've reduced the
hit on the system considerably.
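
As a rough sketch of the thousand-piece idea (names are examples only, it
assumes numeric keys, and it uses MOD(@ID,1000) as an equivalent way of
splitting the keys into 1000 buckets):

   OPEN 'DICT','MASTER' TO D.MASTER ELSE STOP 'no DICT MASTER'
   REC = 'I'
   REC<2> = 'MOD(@ID,1000)'       ;* bucket number 0 - 999 derived from the key
   REC<3> = ''
   REC<4> = 'Purge Bucket'
   REC<5> = '4R'
   REC<6> = 'S'
   WRITE REC TO D.MASTER, 'PURGE.BUCKET'
   EXECUTE 'CD MASTER PURGE.BUCKET'          ;* compile the I-descriptor
   EXECUTE 'CREATE.INDEX MASTER PURGE.BUCKET'
   EXECUTE 'BUILD.INDEX MASTER PURGE.BUCKET'

The purge then loops through WITH PURGE.BUCKET EQ 0, 1, 2 ... 999, adding the
date criteria each time, so each pass only touches a small slice of the file.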

Without knowing what you're doing in more detail, it's difficult to give
you any proper advice, but certainly I'll try and think of any tips that
you can build upon, like this. But you need to work out what's right for
you :-)

NB Any reason for taking this off the mailing list? By all means cc it
to me, but if you keep it on the list there are other people who may be
able to help too - I'm very good at overviews, but I fall short on the
logic - I think there is a way to get the next thousand records, but I
haven't got a clue what the syntax is ...

Cheers,
Wol

-Original Message-
From: ashish ratna [mailto:[EMAIL PROTECTED] 
Sent: 15 March 2004 10:33
To: Anthony Youngman
Subject: RE: Help Needed regarding performance improvement of delete
query

Hi,

Thanks for the nice suggestions. 
I have another question: using SAMPLE, once I process 1000 records
(using SAMPLE 1000), how can I select the next 1000 records in the next run
(i.e. records 1001 to 2000, and so on)?

I was trying a few combinations but didn't succeed. Can you tell me the
syntax for that?

Thanks again.

Regards,
Ashish.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Anthony Youngman
Sent: Monday, March 15, 2004 2:32 PM
To: U2 Users Discussion List
Subject: RE: Help Needed regarding performance improvement of delete
query


This might help speed things up a bit ...

Firstly, of course, is your file properly sized?

Secondly, (and in this case you will need to run the SELECT / DELETE
sequence several times) try putting a SAMPLE 1000 (or whatever number
makes sense) at the end of your select.

Basically, this will mean that the SELECT runs until it finds that
number of records and then stops. So each sequence won't load the system
so badly. Creating a huge select list will stress your ram badly ...
looping through this sequence won't stress the system so badly, though
you really do need to use indices to reduce the stress even more ...

Create an index on various fields that you're using as your select
criteria. If you're selecting old records, then you need to select on
date, and this really will make life both easy and fast. The more
closely you can guarantee that a select, acting on a single index, will
pick up only or mostly records that you are going to delete, the better.
That will SERIOUSLY reduce the time taken and the performance hit.

Cheers,
Wol

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of ashish ratna
Sent: 15 March 2004 08:25
To: [EMAIL PROTECTED]
Subject: Help Needed regarding performance improvement of delete query

Hi All,
 
We are working for purging of old data from the database. But we are
facing performance problems in this.
 
We are using select

Re: Help Needed regarding performance improvement of delete query

2004-03-15 Thread Scott Richardson
Great points from Wol, as always.

What kind of /tmp disk space do you have on this system?
(Assuming that /tmp is where UV does some of its SELECT
scratch-pad intermediate writing when processing large queries;
consult your site's actual uvconfig for all of your actual values...).

If this /tmp is small, single physical disk, or heavily fragmented,
this would also contribute to poor query runtime performance.
Ditto on your system's swap space, which should be at least
2X physical memory.

Wol's approach of breaking down the query into selecting
smaller groups of data is a great one. Chip away at the stone,
methodically, consistently, and constantly.

What platform is this on?
What OS version?
What UV Version?
How much memory and disk space?
How much /tmp and swap space?

Are you running this query with other users on the system, who
may be also trying to access the files this query is working with?

Are you running this at night when it might conflict with a backup
operation?

More food for thought.

Regards,
Scott

- Original Message - 
From: Anthony Youngman [EMAIL PROTECTED]
To: U2 Users Discussion List [EMAIL PROTECTED]
Sent: Monday, March 15, 2004 4:02 AM
Subject: RE: Help Needed regarding performance improvement of delete query


This might help speed things up a bit ...

Firstly, of course, is your file properly sized?

Secondly, (and in this case you will need to run the SELECT / DELETE
sequence several times) try putting a SAMPLE 1000 (or whatever number
makes sense) at the end of your select.

Basically, this will mean that the SELECT runs until it finds that
number of records and then stops. So each sequence won't load the system
so badly. Creating a huge select list will stress your ram badly ...
looping through this sequence won't stress the system so badly, though
you really do need to use indices to reduce the stress even more ...

Create an index on various fields that you're using as your select
criteria. If you're selecting old records, then you need to select on
date, and this really will make life both easy and fast. The more
closely you can guarantee that a select, acting on a single index, will
pick up only or mostly records that you are going to delete, the better.
That will SERIOUSLY reduce the time taken and the performance hit.

Cheers,
Wol

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
On Behalf Of ashish ratna
Sent: 15 March 2004 08:25
To: [EMAIL PROTECTED]
Subject: Help Needed regarding performance improvement of delete query

Hi All,

We are working for purging of old data from the database. But we are
facing performance problems in this.

We are using select query which is created dynamically on the basis of
number of records. We want to know if there is any limit for size of
query in Universe.

Although in universe help pdf it is mentioned that there is no limit for
the length of select query. But when we run the program on the file with
records more than 0.5 million it gave the error-

Pid 14433 received a SIGSEGV for stack growth failure. Possible causes:
insufficient memory or swap space, or stack size exceeded maxssiz.

Memory fault(coredump)

If there is no limitation on the size of query then please suggest some
other possible solution which can help us reducing the time of query and
completing the process successfully without giving the error.

Thanks in advance.

Regards,

Ashish.






***

This transmission is intended for the named recipient only. It may contain
private and confidential information. If this has come to you in error you
must not act on anything disclosed in it, nor must you copy it, modify it,
disseminate it in any way, or show it to anyone. Please e-mail the sender to
inform us of the transmission error or telephone ECA International
immediately and delete the e-mail from your information system.

Telephone numbers for ECA International offices are: Sydney +61 (0)2 9911
7799, Hong Kong + 852 2121 2388, London +44 (0)20 7351 5000 and New York +1
212 582 2333.


***

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: Help Needed regarding performance improvement of delete query

2004-03-15 Thread Mike Masters
Folks,

It's a real nightmare when you have to purge +2gig files, eh? They can run for weeks 
and have your users complaining that entire week too. Nag.. nag..nag...

Simple solution:

Back up the entire data file to media. Select what you want to KEEP and move those records
to a temp file. CLEARFILE in a BASIC program (or delete the old data file). Then rename the temp file
or copy the records back to the original data file.
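
Something like this, as a rough untested sketch (file and field names are
examples, and it assumes exclusive access plus a verified backup before you
start):

   OPEN 'MASTER' TO F.MASTER ELSE STOP 'cannot open MASTER'
   OPEN 'MASTER.TEMP' TO F.TEMP ELSE STOP 'create MASTER.TEMP first, same type and sizing'
   EXECUTE 'SELECT MASTER WITH PURGE.DATE GE "01/01/2003"' CAPTURING JUNK
   LOOP
      READNEXT ID ELSE EXIT
      READ REC FROM F.MASTER, ID THEN
         WRITE REC TO F.TEMP, ID        ;* copy across only the records being kept
      END
   REPEAT
   CLEARFILE F.MASTER                    ;* the old data goes in one operation
   EXECUTE 'COPY FROM MASTER.TEMP TO MASTER ALL OVERWRITING' CAPTURING JUNK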

If you have an exclusive access window, this is a piece of cake; otherwise, you have to
become more creative and carve out your own customized exclusive access window on the
fly, and use a whole host of mirrors, hoping the phone doesn't ring with your
fingerprints all over a potential mess --- just in case you get distracted and
mis-shuffle the deck :p)

Bram
  - Original Message - 
  From: ashish ratna 
  To: Anthony Youngman 
  Cc: [EMAIL PROTECTED] 
  Sent: Monday, March 15, 2004 7:55 AM
  Subject: RE: Help Needed regarding performance improvement of delete query


  Hi Wol,

  The scenario is that-

  We have a master file having more than 3-4 million records and have corresponding 
reference file which contains reference data for this master file.

  Now we start our purge program which selects records from master file on the basis 
of date. Corresponding data should be deleted from the other file (reference file).

  For this requirement we have adopted the approach that- select the record from 
master file on the basis of date. Save the list of these records, then on the basis of 
this list select the records from reference file.

  Issue is that this list contains more than 0.5 million and I want to take few (say 
10,000 at a time) record ids from this list for further processing.

  Any pointers for this problem will be very helpful.

  Thanks in advance.

  Ashish.



  -Original Message-
  From: Anthony Youngman [mailto:[EMAIL PROTECTED]
  Sent: Monday, March 15, 2004 4:50 PM
  To: ashish ratna
  Subject: RE: Help Needed regarding performance improvement of delete
  query


  Ahhh

  I thought you were selecting records and deleting them. So the first
  thousand would have disappeared, and you would obviously get a different
  thousand next time round because the first lot would have gone :-)
  Sounds like that's not the case.

  In that case, do you have a field that is repeated across many records,
  but where each individual value (or range of values) wouldn't return too
  big a bunch of records? Or do you have a numeric id - you could declare
  an index on that ...

  Let's take that numeric id idea - and then you'll have to build on it
  for yourself. Declare an i-descriptor as @ID[3]. That logically
  partitions your file into a thousand pieces. Declare an index on this.
  The first term in your select will then be WITH IDESC EQ
  whatever-number-you-want. That'll reduce the load on the database for
  each pass, you'll just need to wrap it in a loop where whatever-number
  goes from 0 to 999

  Actually, what I would probably do is declare my i-descriptor as
  INT(@ID/30) and then run this purge daily with whatever-number as
  today's day. Obviously, it'll do nothing on the 31sts, and in March
  it'll do two month's work on the 29th and 30th, but you've reduced the
  hit on the system considerably.

  Without knowing what you're doing in more detail, it's difficult to give
  you any proper advice, but certainly I'll try and think of any tips that
  you can build upon, like this. But you need to work out what's right for
  you :-)

  NB Any reason for taking this off the mailing list? By all means cc it
  to me, but if you keep it on the list there are other people who may be
  able to help too - I'm very good at overviews, but I fall short on the
  logic - I think there is a way to get the next thousand records, but I
  haven't got a clue what the syntax is ...

  Cheers,
  Wol

  -Original Message-
  From: ashish ratna [mailto:[EMAIL PROTECTED] 
  Sent: 15 March 2004 10:33
  To: Anthony Youngman
  Subject: RE: Help Needed regarding performance improvement of delete
  query

  Hi,

  Thanks for the nice suggestions. 
  I have another question that, using SAMPLE once I process 1000 records
  (using SAMPLE 1000), how can I select next 1000 records in next run
  (i.e. 1001 to 2000 records and so on)?

  I was trying few combinations but didn't succeeded. Can you tell me the
  syntax for that?

  Thanks again.

  Regards,
  Ashish.



  -Original Message-
  From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
  Behalf Of Anthony Youngman
  Sent: Monday, March 15, 2004 2:32 PM
  To: U2 Users Discussion List
  Subject: RE: Help Needed regarding performance improvement of delete
  query


  This might help speed things up a bit ...

  Firstly, of course, is your file properly sized?

  Secondly, (and in this case you will need to run the SELECT / DELETE
  sequence several times) try putting a SAMPLE 1000 (or whatever number
  makes sense) at the end of your select

Re: Help Needed regarding performance improvement of delete query

2004-03-15 Thread Lost on Air Force One
Oh, by the way

Don't forget to use fuser in unix or the equivalent on Bill Gates' machines in order
to verify that you truly do have exclusive access rights; otherwise, no worky.
  - Original Message - 
  From: Mike Masters 
  To: U2 Users Discussion List 
  Sent: Monday, March 15, 2004 8:24 AM
  Subject: Re: Help Needed regarding performance improvement of delete query


  I forgot to mention why.

  U2 definitely prefers ADDing records instead of DELETing records ... any day.
- Original Message - 
From: ashish ratna 
To: Anthony Youngman 
Cc: [EMAIL PROTECTED] 
Sent: Monday, March 15, 2004 7:55 AM
Subject: RE: Help Needed regarding performance improvement of delete query


Hi Wol,

The scenario is that-

We have a master file having more than 3-4 million records and have corresponding 
reference file which contains reference data for this master file.

Now we start our purge program which selects records from master file on the basis 
of date. Corresponding data should be deleted from the other file (reference file).

For this requirement we have adopted the approach that- select the record from 
master file on the basis of date. Save the list of these records, then on the basis of 
this list select the records from reference file.

Issue is that this list contains more than 0.5 million and I want to take few (say 
10,000 at a time) record ids from this list for further processing.

Any pointers for this problem will be very helpful.

Thanks in advance.

Ashish.



-Original Message-
From: Anthony Youngman [mailto:[EMAIL PROTECTED]
Sent: Monday, March 15, 2004 4:50 PM
To: ashish ratna
Subject: RE: Help Needed regarding performance improvement of delete
query


Ahhh

I thought you were selecting records and deleting them. So the first
thousand would have disappeared, and you would obviously get a different
thousand next time round because the first lot would have gone :-)
Sounds like that's not the case.

In that case, do you have a field that is repeated across many records,
but where each individual value (or range of values) wouldn't return too
big a bunch of records? Or do you have a numeric id - you could declare
an index on that ...

Let's take that numeric id idea - and then you'll have to build on it
for yourself. Declare an i-descriptor as @ID[3]. That logically
partitions your file into a thousand pieces. Declare an index on this.
The first term in your select will then be WITH IDESC EQ
whatever-number-you-want. That'll reduce the load on the database for
each pass, you'll just need to wrap it in a loop where whatever-number
goes from 0 to 999

Actually, what I would probably do is declare my i-descriptor as
INT(@ID/30) and then run this purge daily with whatever-number as
today's day. Obviously, it'll do nothing on the 31sts, and in March
it'll do two month's work on the 29th and 30th, but you've reduced the
hit on the system considerably.

Without knowing what you're doing in more detail, it's difficult to give
you any proper advice, but certainly I'll try and think of any tips that
you can build upon, like this. But you need to work out what's right for
you :-)

NB Any reason for taking this off the mailing list? By all means cc it
to me, but if you keep it on the list there are other people who may be
able to help too - I'm very good at overviews, but I fall short on the
logic - I think there is a way to get the next thousand records, but I
haven't got a clue what the syntax is ...

Cheers,
Wol

-Original Message-
From: ashish ratna [mailto:[EMAIL PROTECTED] 
Sent: 15 March 2004 10:33
To: Anthony Youngman
Subject: RE: Help Needed regarding performance improvement of delete
query

Hi,

Thanks for the nice suggestions. 
I have another question that, using SAMPLE once I process 1000 records
(using SAMPLE 1000), how can I select next 1000 records in next run
(i.e. 1001 to 2000 records and so on)?

I was trying few combinations but didn't succeeded. Can you tell me the
syntax for that?

Thanks again.

Regards,
Ashish.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Anthony Youngman
Sent: Monday, March 15, 2004 2:32 PM
To: U2 Users Discussion List
Subject: RE: Help Needed regarding performance improvement of delete
query


This might help speed things up a bit ...

Firstly, of course, is your file properly sized?

Secondly, (and in this case you will need to run the SELECT / DELETE
sequence several times) try putting a SAMPLE 1000 (or whatever number
makes sense) at the end of your select.

Basically, this will mean that the SELECT runs until it finds that
number of records and then stops

Re: Help Needed regarding performance improvement of delete query

2004-03-15 Thread Mark Johnson
I have a client that needs roughly 250,000 items removed monthly from a file
containing 5-6 million records.

Since there is no need to use the keys (records) for any other purpose
except for deleting because they are old, a standard data/basic SELECT
statement is just about as fast as you can get. It isn't encumbered by the
number of selected keys as it's not retaining them for any purpose.

This simple program looks like this:

OPEN 'MASTER' TO F.MASTER ELSE STOP
OPEN 'REFERENCE' TO F.REFERENCE ELSE STOP
DEL.DATE=ICONV('01/02/2003','D')   ;* cut-off date, internal format
SELECT F.MASTER
10 READNEXT ID ELSE STOP
READV DTE FROM F.MASTER, ID, 5 ELSE GOTO 10   ;* date is held in field 5
IF DTE GT DEL.DATE THEN GOTO 10
DELETE F.MASTER, ID
DELETE F.REFERENCE, ID
GOTO 10
END

Of course if there's additional use for the deleted keys, then use another
approach.

my 1 cent.
- Original Message -
From: ashish ratna [EMAIL PROTECTED]
To: Anthony Youngman [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Monday, March 15, 2004 7:55 AM
Subject: RE: Help Needed regarding performance improvement of delete query


Hi Wol,

The scenario is that-

We have a master file having more than 3-4 million records and have
corresponding reference file which contains reference data for this master
file.

Now we start our purge program which selects records from master file on the
basis of date. Corresponding data should be deleted from the other file
(reference file).

For this requirement we have adopted the approach that- select the record
from master file on the basis of date. Save the list of these records, then
on the basis of this list select the records from reference file.

Issue is that this list contains more than 0.5 million and I want to take
few (say 10,000 at a time) record ids from this list for further processing.

Any pointers for this problem will be very helpful.

Thanks in advance.

Ashish.



-Original Message-
From: Anthony Youngman [mailto:[EMAIL PROTECTED]
Sent: Monday, March 15, 2004 4:50 PM
To: ashish ratna
Subject: RE: Help Needed regarding performance improvement of delete
query


Ahhh

I thought you were selecting records and deleting them. So the first
thousand would have disappeared, and you would obviously get a different
thousand next time round because the first lot would have gone :-)
Sounds like that's not the case.

In that case, do you have a field that is repeated across many records,
but where each individual value (or range of values) wouldn't return too
big a bunch of records? Or do you have a numeric id - you could declare
an index on that ...

Let's take that numeric id idea - and then you'll have to build on it
for yourself. Declare an i-descriptor as @ID[3]. That logically
partitions your file into a thousand pieces. Declare an index on this.
The first term in your select will then be WITH IDESC EQ
whatever-number-you-want. That'll reduce the load on the database for
each pass, you'll just need to wrap it in a loop where whatever-number
goes from 0 to 999

Actually, what I would probably do is declare my i-descriptor as
INT(@ID/30) and then run this purge daily with whatever-number as
today's day. Obviously, it'll do nothing on the 31sts, and in March
it'll do two month's work on the 29th and 30th, but you've reduced the
hit on the system considerably.

Without knowing what you're doing in more detail, it's difficult to give
you any proper advice, but certainly I'll try and think of any tips that
you can build upon, like this. But you need to work out what's right for
you :-)

NB Any reason for taking this off the mailing list? By all means cc it
to me, but if you keep it on the list there are other people who may be
able to help too - I'm very good at overviews, but I fall short on the
logic - I think there is a way to get the next thousand records, but I
haven't got a clue what the syntax is ...

Cheers,
Wol

-Original Message-
From: ashish ratna [mailto:[EMAIL PROTECTED]
Sent: 15 March 2004 10:33
To: Anthony Youngman
Subject: RE: Help Needed regarding performance improvement of delete
query

Hi,

Thanks for the nice suggestions.
I have another question that, using SAMPLE once I process 1000 records
(using SAMPLE 1000), how can I select next 1000 records in next run
(i.e. 1001 to 2000 records and so on)?

I was trying few combinations but didn't succeeded. Can you tell me the
syntax for that?

Thanks again.

Regards,
Ashish.



-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Behalf Of Anthony Youngman
Sent: Monday, March 15, 2004 2:32 PM
To: U2 Users Discussion List
Subject: RE: Help Needed regarding performance improvement of delete
query


This might help speed things up a bit ...

Firstly, of course, is your file properly sized?

Secondly, (and in this case you will need to run the SELECT / DELETE
sequence several times) try putting a SAMPLE 1000 (or whatever number
makes sense) at the end of your select.

Basically, this will mean that the SELECT runs until it finds

SV: SV: Performance Discussion - Unidata

2004-03-03 Thread Björn Eklund
Hi Martin,
excuse the late answer, we have a StorageTek D280 with 1GB cache and we see
good performance after tuning the cache.

Björn Eklund

-Original Message-
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 26 February 2004 13:21
To: U2 Users Discussion List
Subject: Re: SV: Performance Discussion - Unidata


Bjorn

If you didn't mind me asking, what hardware are you using in terms of 
SAN - is it EMC Clarion, Storagetek/Sun StoreEdge etc - and also how much 
cache have you got on those arrays? 1GB?

Do you see good performance over that?

Thanks

Björn Eklund wrote:

Hi Martin,
we have equipment that looks a lot like yours, same server but with double
the amount of CPU and RAM.
We also have an external SAN storage (FC disks, 15000 rpm) where all the
unidata files reside.
When we started the system for the first time everything we tried to do was
very slow. After tuning the storage cabinet's cache we got an acceptable
performance.

After some time we started looking for other ways of improving performance
and did a resize on all our files. The biggest change was from block size 2
to 4 on almost every file. This made an improvement of about 50-100% in
performance on our disk-intense batch programs.
I don't remember any figures on speed regarding reads and writes but I can
ask our unixadmin to dig them up if you want.

It's just a guess but I do believe that Unidata relies heavily on Solaris
buffers.

Regards
Björn

-Original Message-
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 25 February 2004 19:13
To: [EMAIL PROTECTED]
Subject: Performance Discussion - Unidata


Hi guys

Hope everybody is ok!

To get straight to the point, system as follows:

SunFire V880
2x1.2GHZ UltaSparc3cu Processors
4GB RAM
6x68GB 10krpm FC-AL disks
96GB backplane

Disks are grouped together to create volumes - as follows:

Disk 1   -   root, var, dev, ud60, xfer -   RAID 1  (Root Volume 
Primary Mirror)
Disk 2   -   root, var, dev, ud60, xfer -   RAID 1  (Root Volume 
Submirror)
Disk 3   -   /u-   RAID 10 
(Unidata Volume Primary Mirror - striped)
Disk 4   -   /u-   RAID 10 
(Unidata Volume Primary Mirror - striped)
Disk 5   -   /u-   RAID 10 
(Unidata Volume Submirror - striped)
Disk 6   -   /u-   RAID 10 
(Unidata Volume Submirror - striped)

UD60   -   Unidata Binary area
XFER   -   Data output area for Unidata accounts (csv files etc)
/U -   Primary Unidata account/database area.

If I perform tests via the system using both dd and mkfile, I see speeds 
of around 50MB/s for WRITES, 60MB/s for READS, however if a colleague 
loads a 100MB csv file using READSEQ into a Unidata file, not doing 
anything fancy, I see massive Average Service Times (asvc_t - using 
IOSTAT) and the device is usually always 100% busy, no real CPU overhead 
but with 15MB/s tops WRITE. There is only ONE person using this system 
(to test throughput).

This is confusing, drilling down I have set a 16384 block interlace size 
on each stripe and the following info for the mounted volume:

mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352

in /etc/system I have set the following parameters:

set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmmax=8388608
set shmsys:shminfo_shmseg=50
set msgsys:msginfo_msgmni=1615
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=985
set semsys:seminfo_semmnu=1218

set maxpgio=240
set maxphys=8388608

I have yet to change the throughput on the ssd drivers in order to break 
the 1MB barrier, however I still would have expected better performance. 
UDTCONFIG is as yet unchanged from default.

Does anybody have any comments?

Things to try in my opinion:

I think I have the RAID correct, the Unidata TEMP directory I have 
redirected to be on the /U RAID 10 partition rather than the RAID 1 ud60 
area.

1. Blocksizes should match average Unidata file size.

One question I have is does Unidata perform its own file caching? can I 
mount filesystems using FORCEDIRECTIO or does Unidata rely heavily on 
the Solaris based buffers?

Thanks for any information you can provide

  


-- 
Martin Thorpe
DATAFORCE GROUP LTD
DDI: 01604 673886
MOBILE: 07740598932
WEB: http://www.dataforce.co.uk
mailto: [EMAIL PROTECTED]

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


SV: Performance Discussion - Unidata

2004-02-26 Thread Björn Eklund
Hi Martin,
we have equipment that looks a lot like yours, same server but with double
the amount of CPU and RAM.
We also have an external SAN storage(FC disks 15000 rpm) where all the
unidata files resides. 
When we started the system for the first time everything we tried to do was
very slow. After tuning the storage kabinett's cache we got an acceptable
performance.

After some time we started looking for other ways of improving performance
and did a resize on all our files. The biggest change was from blocksizze 2
to 4 on almost every file. This made an improvement of about 50-100%
perfomance on our disk intense batchprograms.
I don't remeber any figures on speed regarding reads and writes but I can
ask our unixadmin to dig them up if you want.

It's just a guess but I do belive that Unidata rely heavily on Solaris
buffers.

Regards
Björn

-Original Message-
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 25 February 2004 19:13
To: [EMAIL PROTECTED]
Subject: Performance Discussion - Unidata


Hi guys

Hope everybody is ok!

To get straight to the point, system as follows:

SunFire V880
2x1.2GHZ UltaSparc3cu Processors
4GB RAM
6x68GB 10krpm FC-AL disks
96GB backplane

Disks are grouped together to create volumes - as follows:

Disk 1   -   root, var, dev, ud60, xfer -   RAID 1  (Root Volume 
Primary Mirror)
Disk 2   -   root, var, dev, ud60, xfer -   RAID 1  (Root Volume 
Submirror)
Disk 3   -   /u-   RAID 10 
(Unidata Volume Primary Mirror - striped)
Disk 4   -   /u-   RAID 10 
(Unidata Volume Primary Mirror - striped)
Disk 5   -   /u-   RAID 10 
(Unidata Volume Submirror - striped)
Disk 6   -   /u-   RAID 10 
(Unidata Volume Submirror - striped)

UD60   -   Unidata Binary area
XFER   -   Data output area for Unidata accounts (csv files etc)
/U -   Primary Unidata account/database area.

If I perform tests via the system using both dd and mkfile, I see speeds 
of around 50MB/s for WRITES, 60MB/s for READS, however if a colleague 
loads a 100MB csv file using READSEQ into a Unidata file, not doing 
anything fancy, I see massive Average Service Times (asvc_t - using 
IOSTAT) and the device is usually always 100% busy, no real CPU overhead 
but with 15MB/s tops WRITE. There is only ONE person using this system 
(to test throughput).

This is confusing, drilling down I have set a 16384 block interlace size 
on each stripe and the following info for the mounted volume:

mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352

in /etc/system I have set the following parameters:

set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmmax=8388608
set shmsys:shminfo_shmseg=50
set msgsys:msginfo_msgmni=1615
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=985
set semsys:seminfo_semmnu=1218

set maxpgio=240
set maxphys=8388608

I have yet to change the throughput on the ssd drivers in order to break 
the 1MB barrier, however I still would have expected better performance. 
UDTCONFIG is as yet unchanged from default.

Does anybody have any comments?

Things to try in my opinion:

I think I have the RAID correct, the Unidata TEMP directory I have 
redirected to be on the /U RAID 10 partition rather than the RAID 1 ud60 
area.

1. Blocksizes should match average Unidata file size.

One question I have is does Unidata perform its own file caching? can I 
mount filesystems using FORCEDIRECTIO or does Unidata rely heavily on 
the Solaris based buffers?

Thanks for any information you can provide

-- 
Martin Thorpe
DATAFORCE GROUP LTD
DDI: 01604 673886
MOBILE: 07740598932
WEB: http://www.dataforce.co.uk
mailto: [EMAIL PROTECTED]

-- 
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users
--
u2-users mailing list
[EMAIL PROTECTED]
http://www.oliver.com/mailman/listinfo/u2-users


Re: SV: Performance Discussion - Unidata

2004-02-26 Thread Martin Thorpe
Hi Bjorn

I agree with you regarding caching - I was unsure as to whether Unidata 
does its own file caching (as ORACLE does), in which case you could then 
mount the Unidata volume as DIRECTIO, but as it does not, that's not an 
option. The problem is that any Unidata operation involving major disk 
access seems to slaughter I/O; it could be the most efficient program, and 
if it is allowed to run freely (with no delays) you get massive average 
service times (usually up around a second, which is totally unacceptable 
for one person) and very poor write rates (under 20MB/s).

A couple of things I have thought about are playing around with the 
system file caching in terms of the UFS write throttle (sd_max_throttle 
- to limit the queue) and the UFS high/low water marks, to see if I can 
pull down the service times. The read/write speed is not really an issue 
to me as long as it's consistent and at an acceptable level, but the 
biggest thing to me is the average service times, as this causes 
headaches for everyone else.

With a 1 second delay every 100 records in the mentioned program (code 
attached) the average service times are normal and you don't notice any 
problems with server response times.
With a 1 second delay every 1000 records you start to notice a slight 
deterioration in the system.
Running freely, you notice a major problem: service time is around a second.
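
The shape of the loop (the real code is in the attachment) is roughly this -
paths, file names and field handling here are placeholders only:

   OPENSEQ '/xfer/load.csv' TO SEQ.IN ELSE STOP 'cannot open csv'
   OPEN 'LOAD.FILE' TO F.LOAD ELSE STOP 'cannot open LOAD.FILE'
   CNT = 0
   LOOP
      READSEQ LINE FROM SEQ.IN ELSE EXIT
      ID = FIELD(LINE, ',', 1)
      REC = LINE
      CONVERT ',' TO @AM IN REC          ;* turn the csv row into a dynamic array
      WRITE REC TO F.LOAD, ID
      CNT = CNT + 1
      IF MOD(CNT, 100) = 0 THEN SLEEP 1  ;* the throttle: pause to let the I/O queue drain
   REPEAT
   CLOSESEQ SEQ.IN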

It is as though the buffers overflow to the point that they are clogged, 
and any disk operations by other processes while this type of program is 
running suffer greatly due to the high service times.

I'm going to try researching this, but wondered if anybody had any 
further information for me? I will post my results if anyone is interested.

Björn Eklund wrote:

Hi Martin,
we have equipment that looks a lot like yours, same server but with double
the amount of CPU and RAM.
We also have an external SAN storage (FC disks, 15000 rpm) where all the
unidata files reside.
When we started the system for the first time everything we tried to do was
very slow. After tuning the storage cabinet's cache we got an acceptable
performance.

After some time we started looking for other ways of improving performance
and did a resize on all our files. The biggest change was from block size 2
to 4 on almost every file. This made an improvement of about 50-100% in
performance on our disk-intense batch programs.
I don't remember any figures on speed regarding reads and writes but I can
ask our unixadmin to dig them up if you want.
It's just a guess but I do believe that Unidata relies heavily on Solaris
buffers.
Regards
Björn
-Original Message-
From: Martin Thorpe [mailto:[EMAIL PROTECTED]
Sent: 25 February 2004 19:13
To: [EMAIL PROTECTED]
Subject: Performance Discussion - Unidata
Hi guys

Hope everybody is ok!

To get straight to the point, system as follows:

SunFire V880
2x1.2GHZ UltaSparc3cu Processors
4GB RAM
6x68GB 10krpm FC-AL disks
96GB backplane
Disks are grouped together to create volumes - as follows:

Disk 1   -   root, var, dev, ud60, xfer -   RAID 1  (Root Volume 
Primary Mirror)
Disk 2   -   root, var, dev, ud60, xfer -   RAID 1  (Root Volume 
Submirror)
Disk 3   -   /u-   RAID 10 
(Unidata Volume Primary Mirror - striped)
Disk 4   -   /u-   RAID 10 
(Unidata Volume Primary Mirror - striped)
Disk 5   -   /u-   RAID 10 
(Unidata Volume Submirror - striped)
Disk 6   -   /u-   RAID 10 
(Unidata Volume Submirror - striped)

UD60   -   Unidata Binary area
XFER   -   Data output area for Unidata accounts (csv files etc)
/U -   Primary Unidata account/database area.
If I perform tests via the system using both dd and mkfile, I see speeds 
of around 50MB/s for WRITES, 60MB/s for READS, however if a colleague 
loads a 100MB csv file using READSEQ into a Unidata file, not doing 
anything fancy, I see massive Average Service Times (asvc_t - using 
IOSTAT) and the device is usually always 100% busy, no real CPU overhead 
but with 15MB/s tops WRITE. There is only ONE person using this system 
(to test throughput).

This is confusing, drilling down I have set a 16384 block interlace size 
on each stripe and the following info for the mounted volume:

mkfs -F ufs -o nsect=424,ntrack=24,bsize=8192,fragsize=1024,cgsize=10,free=1,rps=167,nbpi=8275,opt=t,apc=0,gap=0,nrpos=8,maxcontig=16 /dev/md/dsk/d10 286220352

in /etc/system I have set the following parameters:

set shmsys:shminfo_shmmni=1024
set shmsys:shminfo_shmmax=8388608
set shmsys:shminfo_shmseg=50
set msgsys:msginfo_msgmni=1615
set semsys:seminfo_semmni=100
set semsys:seminfo_semmns=985
set semsys:seminfo_semmnu=1218
set maxpgio=240
set maxphys=8388608
I